Device Files

Device files are special files that represent devices in Solaris 9. Device files reside in the /dev directory and its subdirectories (such as /dev/dsk), while the /devices directory is a tree that completely characterizes the hardware layout of the system in the file system namespace. Although it may seem confusing at first that separate directories exist for devices and for system hardware, the difference between the two will become apparent in the discussion that follows.

Solaris refers to devices in three separate ways: physical device names, physical device files, and logical device names. Physical device names are easily identified because they are long strings that provide all details relevant to the physical installation of the device. Every physical device has a physical name. For example, an SBUS could have the name /sbus@1f,0, while a disk device might have the name /sbus@1f,0/SUNW,fas@2,8800000/sd@1,0. Physical device names are usually displayed at boot time and by selected applications that access hardware directly, such as format.

Physical device files, on the other hand, are located in the /devices directory and are identified by an instance name, an abbreviation for the physical device name that can be interpreted by the kernel. For example, the SBUS /sbus@1f,0 might be referred to as sbus, and the disk device /sbus@1f,0/SUNW,fas@2,8800000/sd@1,0 might be referred to as sd1. The mapping of instance names to physical devices is not hardwired: the /etc/path_to_inst file records these mappings and keeps them consistent between boots. For an Ultra 2, this file looks like this:

"/sbus@1f,0" 0 "sbus"
"/sbus@1f,0/sbusmem@2,0" 2 "sbusmem"
"/sbus@1f,0/sbusmem@3,0" 3 "sbusmem"
"/sbus@1f,0/sbusmem@0,0" 0 "sbusmem"
"/sbus@1f,0/sbusmem@1,0" 1 "sbusmem"
"/sbus@1f,0/SUNW,fas@2,8800000" 1 "fas"
"/sbus@1f,0/SUNW,fas@2,8800000/ses@f,0" 1 "ses"
"/sbus@1f,0/SUNW,fas@2,8800000/sd@1,0" 16 "sd"
"/sbus@1f,0/SUNW,fas@2,8800000/sd@0,0" 15 "sd"
"/sbus@1f,0/SUNW,fas@2,8800000/sd@3,0" 18 "sd"
"/sbus@1f,0/SUNW,fas@2,8800000/sd@2,0" 17 "sd"
"/sbus@1f,0/SUNW,fas@2,8800000/sd@5,0" 20 "sd"
"/sbus@1f,0/SUNW,fas@2,8800000/sd@4,0" 19 "sd"
"/sbus@1f,0/SUNW,fas@2,8800000/sd@6,0" 21 "sd"
"/sbus@1f,0/SUNW,fas@2,8800000/sd@9,0" 23 "sd"
"/sbus@1f,0/SUNW,fas@2,8800000/sd@8,0" 22 "sd"
"/sbus@1f,0/SUNW,fas@2,8800000/sd@a,0" 24 "sd"
"/sbus@1f,0/sbusmem@f,0" 15 "sbusmem"
"/sbus@1f,0/sbusmem@d,0" 13 "sbusmem"
"/sbus@1f,0/sbusmem@e,0" 14 "sbusmem"
"/sbus@1f,0/cgthree@1,0" 0 "cgthree"
"/sbus@1f,0/SUNW,hme@e,8c00000" 0 "hme"
"/sbus@1f,0/zs@f,1000000" 1 "zs"
"/sbus@1f,0/zs@f,1100000" 0 "zs"
"/sbus@1f,0/SUNW,bpp@e,c800000" 0 "bpp"
"/sbus@1f,0/lebuffer@0,40000" 0 "lebuffer"
"/sbus@1f,0/lebuffer@0,40000/le@0,60000" 0 "le"
"/sbus@1f,0/SUNW,hme@2,8c00000" 1 "hme"
"/sbus@1f,0/SUNW,fdtwo@f,1400000" 0 "fd"
"/options" 0 "options"
"/pseudo" 0 "pseudo"

/dev and /devices Directories

In addition to physical devices, Solaris also needs to refer to logical devices. For example, physical disks may be divided into many different slices, so the physical disk device will need to be referred to using a logical name. Logical device files in the /dev directory are symbolically linked to physical device names in the /devices directory. Most user applications will refer to logical device names. A typical listing of the /dev directory has numerous entries that look like the following:

arp         ptys0       ptyyb       rsd3a       sd3e        ttyu2
audio       ptys1       ptyyc       rsd3b       sd3f        ttyu3
audioctl    ptys2       ptyyd       rsd3c       sd3g        ttyu4
bd.off      ptys3       ptyye       rsd3d       sd3h        ttyu5
be          ptys4       ptyyf       rsd3e       skip_key    ttyu6
bpp0        ptys5       ptyz0       rsd3f       sound/      ttyu7
...

Many of these device filenames are self-explanatory:

  • /dev/console represents the console device. Error and status messages are usually written to the console by daemons and applications using the syslog service (described in Chapter 20). /dev/console typically corresponds to the monitor in text mode; however, the console is also represented logically in windowing systems, such as OpenWindows, where the command server% cmdtool -C brings up a console window.

  • /dev/hme is the network interface device file.

  • /dev/dsk contains device files for disk slices.

  • /dev/ttyn and /dev/ptyn are the nth terminal and nth pseudoterminal devices attached to the system.

  • /dev/null is the null device, to which many applications redirect output that is to be discarded.
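Each of these logical names is simply a symbolic link to a physical device path under /devices. Listing a logical disk device with ls -l makes the mapping visible; the device name and link target below are only illustrative (on a real system the target will match an entry in /etc/path_to_inst):

# ls -l /dev/dsk/c0t0d0s0
lrwxrwxrwx  1 root  root  46 Jan 17 12:04 /dev/dsk/c0t0d0s0 ->
../../devices/sbus@1f,0/SUNW,fas@2,8800000/sd@0,0:a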

The drvconfig command creates the /devices directory tree, which is a logical representation of the physical layout of devices attached to the system, as well as of pseudodevices. drvconfig is executed automatically after a reconfiguration boot. It reads file permission information for new nodes in the tree from /etc/minor_perm, which contains entries like this:

sd:* 0666 httpd staff

where sd is the node name for a disk device, 0666 is the default file permission, httpd is the owner, and staff is the group.
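If a device has been attached while the system is running and a full reconfiguration boot is not convenient, the device trees can also be rebuilt by hand. The following is a minimal sketch; on Solaris 9, the single devfsadm command performs the work of these older utilities:

# drvconfig
# disks
# tapes
# devlinks

Here drvconfig rebuilds the /devices tree, disks and tapes re-create the /dev/dsk, /dev/rdsk, and /dev/rmt entries, and devlinks re-creates the remaining /dev links from /etc/devlink.tab.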

Storage Devices

Solaris 9 supports many different kinds of mass-storage devices, including SCSI hard drives (and IDE drives on the x86 platform), standard and rewritable CD-ROMs, DVD-ROMs, Iomega Zip and Jaz drives, tape drives, and floppy disks. Hard drives are the most common kind of storage device found on a Solaris 9 system, ranging from individual drives used to create system and user file systems to highly redundant, server-based RAID systems. These RAID configurations can comprise a set of internal disks managed through software (such as DiskSuite), or high-speed external arrays such as the A1000, which include dedicated RAM for write caching. Because disk writes are among the slowest operations in any modern server system, such caching greatly improves overall throughput.

Hard drives have faced stiff competition in recent years, with media such as Iomega's Zip and Jaz drives providing removable storage for both random and sequential file access. This makes them ideal for archival backups, competing with traditional magnetic tape drives. Older tape formats have largely been replaced in modern systems by digital audio tape (DAT) drives, which offer high reliability and data throughput rates (especially under the DDS-3 standard).

In this section, we look at the issues surrounding the installation and configuration of storage devices for Solaris 9, providing practical advice for installing a wide range of hardware.

CD-ROMs

A popular format of read-only mass storage on many servers is the compact disc read-only memory (CD-ROM). Although earlier releases of Solaris worked best with Sun-branded CD-ROM drives, as of Solaris 2.6, Solaris fully supports all SCSI-2 CD-ROMs. For systems running older versions of Solaris, it may still be possible to use a third-party drive, but the drive must support 512-byte sectors (the Sun standard). A second Sun default to be aware of is that CD-ROMs have traditionally been expected to have a SCSI target ID of 6, although this restriction, too, has been removed in later releases of the kernel. However, a number of third-party applications with 'autodetect' functions may still expect to see the CD-ROM drive at SCSI ID 6.

A number of different CD formats are also supported by the mount command, which is used to attach CDs to the file system. It is common to use the mount point /cdrom for the primary CD-ROM device on Solaris 9 systems, although a different mount point can be specified as an argument to mount.
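For example, to mount a data CD manually on a nonstandard mount point, you can specify the hsfs (High Sierra/ISO 9660) file system type explicitly; the device name below assumes the drive is at SCSI target 6:

# mkdir -p /mnt/cdrom
# mount -F hsfs -o ro /dev/dsk/c0t6d0s0 /mnt/cdrom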

Zip and Jaz Drives

There are two ways to install Zip and Jaz drives: by treating the drive as a SCSI disk, in which case format data needs to be added to the system so that the drive is recognized, or by using Andy Polyakov's ziptool, which can format and manage the protection modes supported by Zip 100 and Jaz 1GB/2GB drives. Both of these techniques support only SCSI drives, not parallel port drives.

Treating the Zip 100 SCSI drive or the Jaz 1GB drive as a normal SCSI device is the easiest approach, because there is built-in Solaris 9 support for these SCSI devices. However, only standard, non-write-protected disks can be used.

Tape Drives

Solaris 9 supports a wide variety of magnetic tape drives using the 'remote magtape' (rmt) protocol. Tapes are generally used as backup devices rather than as interactive storage devices. What they lack in accessibility, they definitely make up for in storage capacity: many digital audio tape (DAT) drives have capacities of 24GB, making it easy to perform a complete backup of many server systems on a single tape. This removes the need for late-night monitoring by operations staff to insert new tapes when one becomes full (as many administrators will have experienced in the past).

Device files for tape drives are found in the /dev/rmt directory. They are numbered sequentially from 0, so default drives will generally be available as /dev/rmt/0.
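To confirm that a drive is responding and to check its status, point the mt command at the appropriate device file, for example:

# mt -f /dev/rmt/0 status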

To back up to a tape drive, use the ufsdump command, which is an incremental file system dumping program. For example, to create a full backup of the /dev/rdsk/c0t1d0s1 file system to the tape drive /dev/rmt/0, simply use the following command:

# ufsdump 0f /dev/rmt/0 /dev/rdsk/c0t1d0s1

This command specifies a level 0 (that is, complete) dump of the file system; the f option directs the dump to the tape device /dev/rmt/0, and the data source is /dev/rdsk/c0t1d0s1. Other tape devices, such as /dev/rmt/0c (compression) and /dev/rmt/0cb (compression with BSD behavior), may also be used.
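The companion ufsrestore command retrieves data from such a tape. As a brief sketch, the t option lists the contents of the dump and the i option starts an interactive session in which individual files can be marked for extraction:

# ufsrestore tf /dev/rmt/0
# ufsrestore if /dev/rmt/0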

Floppy Disks

Floppy disk drives (1.44MB capacity) are standard on both SPARC and Intel architecture systems. In addition, the Volume Manager makes detecting and mounting floppy disks straightforward. Insert the target disk into the drive, and use this command:

# volcheck

This will check all volumes that are managed by volume management and will mount any valid file system that is found; with vold running, the floppy is typically mounted under /floppy (for example, /floppy/floppy0), as determined by the volume manager configuration. Note that the fd entry that appears in /etc/vfstab,

fd   -   /dev/fd   fd   -   no   -

refers to the file descriptor file system mounted on /dev/fd, not to the floppy drive. Refer to the section on entering disk information into the virtual file system database for more details on configuring the /etc/vfstab file. A very useful feature of the volcheck command is its ability to check for new volumes automatically; for example,

# volcheck -i 60 -t 3600 /dev/diskette0 &

works in the background to check every minute if a floppy is in the drive. However, this polling takes place only for one hour unless renewed.
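When you have finished with a floppy, it can be unmounted and ejected through volume management by using the eject command with the drive's nickname:

# eject floppy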

CD-ROMs and DVD-ROMs

CD-ROMs are supported directly by the operating system in SPARC architectures and do not require any special configuration, other than the usual process of initializing the system for a reconfiguration reboot: powering down the system, attaching the CD-ROM device to the SCSI bus, and powering on the system. It is not necessary to use format or newfs to read the files on the CD-ROM, nor is it usually necessary to manually mount the file system, because the volume manager (vold) is usually enabled on server systems.

A common problem for Solaris x86 users is that there are few tested and supported CD-ROM brands for installing the operating system (although most fully compliant ATA/ATAPI CD-ROMs should work). The older Sound Blaster IDE interface for CD-ROMs does not appear to be suitable, although support may be included in a later release (the Alternate Status register is apparently not implemented on the main integrated circuit for the controller board). It is always best to check the current Hardware Compatibility List (HCL) on the Sun developer site.

Many recent SPARC and Intel systems come installed with a DVD-ROM drive. Although the drive cannot yet be used to play movies, it can be used effectively as a mass storage device, with a capacity equal to several individual CD-ROMs. Future releases of Solaris may include a DVD player and support for the newer DVD-RAM technology.

CD-Rs and CD-RWs

Solaris 9 supports both reading and writing CD-ROMs. In addition to the CD-R (CD-Recordable) format, Solaris 9 also supports CD-RW (CD-ReWritable), previously known as CD-Erasable, an optical disc specification created by the industry organization OSTA (www.osta.org). You can attach many different SCSI CD-R and CD-RW devices to a SPARC system on SCSI device ID 6, and they will function as normal CD-ROM drives.

Although the operating system can support almost any SCSI-based device, a potentially limiting factor for nonstandard hardware is finding software that adequately supports it. Luckily, many different open source and commercial editions of CD-recording software are available for the Solaris platform. For both Solaris 1.x and 2.x, the best-known application is cdrecord, by Jörg Schilling, which you can download from ftp://ftp.fokus.gmd.de/pub/unix/cdrecord/. It is freeware, and it makes use of the real-time scheduler in Solaris. It also compiles on the Solaris x86 platform and can create both music and data discs. It has a rather clunky command-line interface, but it has more features than some of the commercial systems, including the capability to simulate a recording for test purposes (the -dummy option); to use a single CD for multiple recording sessions (the -multi option); to manually fix the disc, if you want to view data from an open session on a normal CD-ROM (the -fix option); and to set the recording speed factor (the speed option).

If you prefer a commercial system, GEAR for UNIX is also available (http://www.gearcdr.com/html/products/gear/unix/index.html), as is Creative Digital Research's CDR Publisher (http://www.cdr1.com/), which is available through Sun's Catalyst program. For more general information about the CD recording process, see Andy McFadden's very comprehensive FAQ at http://www.fadden.com/cdrfaq/.
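As a rough sketch of a typical cdrecord session (the dev= target and the image name below are only examples and should be checked against your own configuration): first locate the recorder on the SCSI bus, optionally perform a simulated burn, and then write the image:

# cdrecord -scanbus
# cdrecord -v -dummy speed=4 dev=0,6,0 image.iso
# cdrecord -v speed=4 dev=0,6,0 image.iso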

Adding Devices

In many cases, adding new devices to a Solaris system is straightforward because most devices connect to the SCSI bus, which is a standard interface. The steps involved are usually: preparing the system for a reconfiguration boot, powering down the system, connecting the hardware device, noting the SCSI device number, powering on the system, and using the format command (if necessary) to create a file system. In this section, we examine the procedure for adding disks to both SPARC and Intel architecture machines and highlight potential problems that may occur.

Hard Drives

Hard disk installation and configuration on Solaris 9 is often more complicated than on other UNIX systems. However, this complexity is required to support the sophisticated hardware operations typically undertaken by Solaris systems. For example, Linux refers to hard disks using a simple scheme: /dev/hdn are the IDE hard disks on a system, and /dev/sdn are the SCSI hard disks on a system, where n identifies the individual disk. On Linux, a system with two IDE hard disks and two SCSI hard disks will therefore have the following device files configured:

/dev/hda
/dev/hdb
/dev/sda
/dev/sdb

Partitions created on each drive are also numbered sequentially: if /dev/hda is the boot disk, it may contain several partitions, reflecting the basic UNIX system directories:

/dev/hda1 (/ partition)
/dev/hda2 (/usr)
/dev/hda3 (/var)
/dev/hda4 (swap)

Instead of simply referring to the disk type, disk number, and partition number, the device filename for each partition ('slice') on a Solaris disk contains four identifiers: controller (c), target (t), disk (d), and slice (s). Thus, the device file,

/dev/dsk/c0t3d0s0

identifies slice 0 of disk 0 on controller 0 at SCSI target ID 3. To complicate matters further, disk device files exist in both the /dev/dsk and /dev/rdsk directories, which correspond to block device and raw device entries, respectively. Raw and block devices refer to the same physical partition but are used in different contexts: a raw (character) device transfers data directly, in small units, whereas a block device passes data through a kernel buffer, allowing larger and more efficient reads and writes. It is not always obvious whether to use the block or the raw interface, but low-level system commands (like the fsck command, which performs disk maintenance) typically use raw device interfaces, whereas commands that operate on mounted file systems (such as df, which reports disk usage) most often use block devices.
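The distinction is visible in the file type that ls reports for the two entries. Using the slice created later in this section as an example (the major and minor numbers shown are only illustrative), the leading b marks the buffered block device and the leading c marks the character (raw) device:

# ls -lL /dev/dsk/c0t3d0s5 /dev/rdsk/c0t3d0s5
brw-r-----   1 root     sys       32, 29 Jan 17 12:04 /dev/dsk/c0t3d0s5
crw-r-----   1 root     sys       32, 29 Jan 17 12:04 /dev/rdsk/c0t3d0s5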

To install a new hard drive on a Solaris system, just follow these steps:

  1. Prepare the system for a reconfiguration boot by issuing the following command:

    server# touch /reconfigure
  2. Synchronize disk data and power down the system using these commands:

    server# sync; sync; sync; shutdown -y -g0 -i0
    
  3. Switch off power to the system and attach the new hard disk to the external SCSI chain, or install it internally into an appropriate disk bay.

  4. Check that the SCSI device ID does not conflict with any existing SCSI devices. If a conflict exists, simply change the ID using the switch.

  5. Power on the system and use the boot command in this manner to load the kernel if the OpenBoot monitor appears:

    ok boot

The next step, assuming that you have decided which partitions you want to create on your drive, is to run the format program, using the information supplied earlier. In addition to creating slices, format also displays information about existing disks and slices and can be used to repair a faulty disk. When format is invoked without a command-line argument,

# format

it displays the current disks and asks the administrator to enter the number of the disk to format. Selecting a disk at this point is nondestructive, so even if you make a mistake, you can always exit the format program without damaging data. For example, on a SPARCstation 20 system with three 1.05GB SCSI disks, format opens with this screen:

Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t1d0 <SUN1.05 cyl 2036 alt 2 hd 14 sec 72>
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/
sd@1,0
1. c0t2d0 <SUN1.05 cyl 2036 alt 2 hd 14 sec 72>
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/
sd@2,0
2. c0t3d0 <SUN1.05 cyl 2036 alt 2 hd 14 sec 72>
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/
sd@3,0
Specify disk (enter its number):

It is also possible to pass a command-line argument to format, specifying the disk (or disks) to be formatted, for example:

# format /dev/rdsk/c0t2d0

After selecting the appropriate disk, the message

[disk formatted]

will appear if the disk has previously been formatted. This is an important message, because it is a common mistake to misidentify a target disk from the available selection of both formatted and unformatted disks. The menu looks like this:

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        show       - translate a disk address
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit

If the disk has not been formatted, the first step is to prepare the disk to contain slices and file systems by formatting it, using the format command from the menu:

format> format
Ready to format. Formatting cannot be interrupted
and takes 15 minutes (estimated). Continue? yes

The purpose of formatting is to identify defective blocks and mark them as bad, and generally to verify that the disk is operational from a hardware perspective. Once this has been completed, new slices can be created and sized by using the partition option at the main menu:

format> partition
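At the partition> prompt, each slice is selected by number and then assigned a tag, permission flags, a starting cylinder, and a size. The dialogue looks roughly like the following; the prompts vary slightly between releases, and the values entered here are only illustrative:

partition> 5
Part      Tag    Flag     Cylinders        Size            Blocks
  5 unassigned    wm       0               0         (0/0/0)          0

Enter partition id tag[unassigned]: home
Enter partition permission flags[wm]: wm
Enter new starting cyl[0]: 0
Enter partition size[0b, 0c, 0.00mb, 0.00gb]: 1.00gb
partition> label
Ready to label disk, continue? y
partition> quit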

In this case, we want to create a new slice 5 on disk 0 at target 3, which will be used to store user files when mounted as /export/home and which corresponds to the block device /dev/dsk/c0t3d0s5. After determining the maximum amount of space available, enter that size in gigabytes (in this case, 1.05GB) when the format program requests it for slice 5 (enter 0 for the other slices). If the disk is not labeled, you will also be prompted to write a label, which records the details of the disk's current slices (useful for recovering data). This is an important step, because the operating system will not be able to find any newly created slices unless the volume is labeled. To view the disk label, use the prtvtoc command. Here's the output from the primary drive in an x86 system:

# prtvtoc /dev/dsk/c0d0s2
* /dev/dsk/c0d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
*      63 sectors/track
*     255 tracks/cylinder
*   16065 sectors/cylinder
*    1020 cylinders
*    1018 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    00      48195    160650    208844   /
       1      7    00     208845     64260    273104   /var
       2      5    00          0  16354170  16354169
       3      3    01     273105    321300    594404
       6      4    00     594405   1317330   1911734   /usr
       7      8    00    1911735  14442435  16354169   /export/home
       8      1    01          0     16065     16064
       9      9    01      16065     32130     48194

The disk label contains a full partition table, which can be printed for each disk by using the print command:

format> print

For the 1.05GB disk, the partition table will look like this:

Part       Tag    Flag     Cylinders        Size            Blocks
  0       root     wm       0                0         (0/0/0)          0
  1       swap     wu       0                0         (0/0/0)          0
  2     backup     wm       0 - 3732                   (3732/0/0) 2089920
  3 unassigned     wm       0                0         (0/0/0)          0
  4 unassigned     wm       0                0         (0/0/0)          0
  5       home     wm       0 - 3732      1075MB       (3732/0/0) 2089920
  6        usr     wm       0                0         (0/0/0)          0
  7 unassigned     wm       0                0         (0/0/0)          0

After saving the changes to the disk's partition table by using the label command, exit the format program and create a new UFS file system on the target slice by using the newfs command:

# newfs /dev/rdsk/c0t3d0s5

After a new file system is constructed, it is ready to be mounted. First, a mount point is created

# mkdir /export/home

followed by the appropriate mount command:

# mount /dev/dsk/c0t3d0s5 /export/home

At this point, the disk is available to the system for the current session. However, if you want the disk to be available after a reboot, you need to create an entry in the virtual file system table, /etc/vfstab. An entry like this,

/dev/dsk/c0t3d0s5 /dev/rdsk/c0t3d0s5 /export/home ufs 2 yes -

contains details of the slice's block and raw devices, the mount point, the file system type, the fsck pass number, and, most importantly, a flag indicating that the file system should be mounted at boot.
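Once the vfstab entry is in place, the file system can be mounted by naming only its mount point, and the result can be confirmed with df:

# mount /export/home
# df -k /export/home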

For an x86 system, the output of format looks slightly different, given the differences in the way that devices are denoted:

AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 1018 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@7,1/ata@0/cmdk@0,0
Specify disk (enter its number):

The partition table is similar to that for the SPARC architecture systems:

partition> print
Current partition table (original):
Total disk cylinders available: 1018 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       3 -   12       78.44MB    (10/0/0)     160650
  1        var    wm      13 -   16       31.38MB    (4/0/0)       64260
  2     backup    wm       0 - 1017        7.80GB    (1018/0/0) 16354170
  3       swap    wu      17 -   36      156.88MB    (20/0/0)     321300
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6        usr    wm      37 -  118      643.23MB    (82/0/0)    1317330
  7       home    wm     119 - 1017        6.89GB    (899/0/0)  14442435
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 alternates    wu       1 -    2       15.69MB    (2/0/0)       32130

Installing a Zip/Jaz Drive

The steps for installation are similar for both the Zip and Jaz drives:

  1. Set the SCSI ID switch to any ID that is not reserved.

  2. Attach the Zip or Jaz drive to your SCSI adapter or chain and ensure that it has power.

  3. Create a device entry in /etc/format.dat by editing the file and inserting the following for a Zip drive:

    disk_type="Zip 100"\
                          :ctlr=SCSI\
                           :ncyl=2406:acyl=2:pcyl=2408:nhead=2\
                           :nsect=40:rpm=3600:bpt=20480
            partition="Zip 100"\
                           :disk="Zip 100":ctlr=SCSI\
                           :2=0,192480
                           :2=0,1159168

    For a Jaz drive, enter the following information in /etc/format.dat:

    disk_type="Jaz 1GB"\
                           :ctlr=SCSI\
                           :ncyl=1018:acyl=2:pcyl=1020:nhead=64\
                           :nsect=32:rpm=3600:bpt=16384
            partition="Jaz 1GB"\
                           :disk="Jaz 1GB":ctlr=SCSI\
                           :2=0,2084864
  4. Perform a reconfiguration boot by typing

    ok boot -r

    at the OpenBoot prompt, or by using these commands from a superuser shell:

    server# touch /reconfigure
    server# sync; sync; init 6

    The drive should now be visible to the system. To actually use the drive to mount a volume, insert a Zip or Jaz disk into the drive prior to booting the system. After booting, run the format program:

    # format
  5. Assuming that the drive is at SCSI target 3 (so that it appears to Solaris as c0t3d0), select it as the disk to be formatted. Create the appropriate partition using the partition option, then create an appropriate label for the volume and quit the format program.

    Next, create a new file system on the drive by using the newfs command, for example:

    # newfs -v /dev/rdsk/c0t3d0s2
  6. After creating the file system, you can mount it by typing

    # mount /dev/dsk/c0t3d0s2 /mount_point

where /mount_point is something self-documenting (such as /zip or /jaz). You need to create this directory before mounting by typing the following:

# mkdir /zip

or

# mkdir /jaz

An alternative and more flexible approach is to use the ziptool program, which is available at http://fy.chalmers.se/~appro/ziptool.html. Ziptool supports all Zip and Jaz drive protection modes, and it permits unconditional low-level formatting of protected disks, disk labeling, and volume management under Solaris 2.6 and later. The program has to be executed with root privileges, regardless of the access permissions set on the SCSI disk device driver's entries in /devices. Consequently, if you want to let all users run it, you must install it as set-user-ID root:

# /usr/ucb/install -m 04755 -o root ziptool /usr/local/bin

However, you should note that running setuid programs has security implications.

After downloading and unpacking the sources, you can compile the program by using this:

# gcc -o ziptool ziptool.c -lvolmgt

Of course, you will need to ensure that the path to libvolmgt.a is in your LD_LIBRARY_PATH (usually /lib). The general syntax for invoking ziptool is as follows:

ziptool device command

where device must be the full name of a raw SCSI disk file, such as /dev/rdsk/c0t5d0s2, and command is one or more of the following:

  • rw - Unlocks the Zip disk temporarily.

  • RW - Unlocks the Zip disk permanently.

  • ro - Puts the Zip disk into read-only mode.

  • RO - Puts the Zip disk into a password-protected read-only mode.

  • WR(*) - Protects the disk by restricting reading and writing unless a password is entered.

  • eject - Ejects the current Zip disk.

  • noeject - Stops the Zip disk from being ejected.
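For example, to eject the disk from a Zip drive at SCSI target 5 (the device path here is only illustrative), you would run:

# ziptool /dev/rdsk/c0t5d0s2 eject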

You can find further information on installing Jaz and Zip drives on the Iomega support web site:

http://www.iomega.com/support/documents/4019.html
http://www.iomega.com/support/documents/2019.html

Checking for Devices

Obtaining a listing of the devices attached to a Solaris system is the best way to begin examining its hardware configuration. In Solaris, you can easily obtain system configuration information, including device information, by using the print configuration command,

# prtconf

on any SPARC or Intel architecture system. On an Ultra 5 workstation, the system configuration looks like this:

SUNW,Ultra-5_10
    packages (driver not attached)
        terminal-emulator (driver not attached)
        deblocker (driver not attached)
        obp-tftp (driver not attached)
        disk-label (driver not attached)
        SUNW,builtin-drivers (driver not attached)
        sun-keyboard (driver not attached)
        ufs-file-system (driver not attached)
    chosen (driver not attached)
    openprom (driver not attached)
        client-services (driver not attached)
    options, instance #0
    aliases (driver not attached)
    memory (driver not attached)
    virtual-memory (driver not attached)
    pci, instance #0
        pci, instance #0
            ebus, instance #0
                auxio (driver not attached)
                power (driver not attached)
                SUNW,pll (driver not attached)
                se, instance #0
                su, instance #0
                su, instance #1
                ecpp (driver not attached)
                fdthree (driver not attached)
                eeprom (driver not attached)
                flashprom (driver not attached)
                SUNW,CS4231, instance #0
            network, instance #0
            SUNW,m64B, instance #0
            ide, instance #0
                disk (driver not attached)
                cdrom (driver not attached)
                dad, instance #0
                atapicd, instance #2
        pci, instance #1
            pci, instance #0
                pci108e,1000 (driver not attached)
                SUNW,hme, instance #1
                SUNW,isptwo, instance #0
                    sd (driver not attached)
                    st (driver not attached)
    SUNW,UltraSPARC-IIi (driver not attached)
    pseudo, instance #0

Do not panic if you see the message that a driver is 'not attached' to a particular device. Because device drivers are loaded only on demand in Solaris 9, only those devices that are actively being used will have their drivers loaded. When a device is no longer being used, its device driver is unloaded from memory. This is a very efficient memory management strategy that optimizes the use of physical RAM by deallocating memory for drivers when they are no longer required. In the case of the Ultra 5, we can see that devices such as the PCI bus and the IDE disk drives have attached device drivers, which means they were in use while prtconf was running.

For an x86 system, the devices found are quite different:

System Configuration:  Sun Microsystems  i86pc
Memory size: 128 Megabytes
System Peripherals (Software Nodes):
i86pc
    +boot (driver not attached)
        memory (driver not attached)
    aliases (driver not attached)
    chosen (driver not attached)
    i86pc-memory (driver not attached)
    i86pc-mmu (driver not attached)
    openprom (driver not attached)
    options, instance #0
    packages (driver not attached)
    delayed-writes (driver not attached)
    itu-props (driver not attached)
    isa, instance #0
        motherboard (driver not attached)
        asy, instance #0
        lp (driver not attached)
        asy, instance #1
        fdc, instance #0
            fd, instance #0
            fd, instance #1 (driver not attached)
        kd (driver not attached)
        bios (driver not attached)
        bios (driver not attached)
        pnpCTL,0041 (driver not attached)
        pnpCTL,7002 (driver not attached)
        kd, instance #0
        chanmux, instance #0
    pci, instance #0
        pci8086,1237 (driver not attached)
        pci8086,7000 (driver not attached)
        pci-ide, instance #0
            ata, instance #0
                cmdk, instance #0
                sd, instance #1
        pci10ec,8029 (driver not attached)
        pci5333,8901 (driver not attached)
    used-resources (driver not attached)
    objmgr, instance #0
    pseudo, instance #0

At Boot Time

The OpenBoot monitor has the ability to diagnose hardware errors on system devices before booting the kernel. This can be particularly useful for identifying bus connectivity issues, such as unterminated SCSI chains, but also for basic functional issues such as whether devices are responding. Issuing the command,

ok reset

will also force a self-test of the system.
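The OpenBoot monitor also provides commands for probing individual buses. A short sketch follows; on most systems it is safest to disable auto-boot before issuing probe commands:

ok setenv auto-boot? false
ok reset-all
ok probe-scsi-all
ok probe-ide

The probe-scsi-all command lists every device found on the SCSI buses, which is a quick way to confirm that a newly attached device is visible at the expected target ID.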

Just after booting, it is useful to review the system boot messages, which you can retrieve by using the dmesg command or by examining the /var/adm/messages file. This displays a list of all devices that were successfully attached at boot time, along with any error messages that were detected. Let's look at the dmesg output for a SPARC Ultra architecture system:

# dmesg
Jan 17 13:06
cpu0: SUNW,UltraSPARC-IIi (upaid 0 impl 0x12 ver 0x12 clock 270 MHz)
SunOS Release 5.9 Version Generic_103640-19
[UNIX(R) System V Release 4.0]
Copyright (c) 1983-2002, Sun Microsystems, Inc.
mem = 131072K (0x8000000)
avail mem = 127852544
Ethernet address = 8:0:20:90:b3:23
root nexus = Sun Ultra 5/10 UPA/PCI (UltraSPARC-IIi 270MHz)
pci0 at root: UPA 0x1f 0x0
PCI-device: pci@1,1, simba #0
PCI-device: pci@1, simba #1
dad0 at pci1095,6460 target 0 lun 0
dad0 is /pci@1f,0/pci@1,1/ide@3/dad@0,0
        <Seagate Medalist 34342A cyl 8892 alt 2 hd 15 sec 63>
root on /pci@1f,0/pci@1,1/ide@3/disk@0,0:a fstype ufs
su0 at ebus0: offset 14,3083f8
su0 is /pci@1f,0/pci@1,1/ebus@1/su@14,3083f8
su1 at ebus0: offset 14,3062f8
su1 is /pci@1f,0/pci@1,1/ebus@1/su@14,3062f8
keyboard is </pci@1f,0/pci@1,1/ebus@1/su@14,3083f8>
  major <37> minor <0>
mouse is </pci@1f,0/pci@1,1/ebus@1/su@14,3062f8>
  major <37> minor <1>
stdin is </pci@1f,0/pci@1,1/ebus@1/su@14,3083f8>
  major <37> minor <0>
SUNW,m64B0 is /pci@1f,0/pci@1,1/SUNW,m64B@2
m64#0: 1280x1024, 2M mappable, rev 4754.9a
stdout is </pci@1f,0/pci@1,1/SUNW,m64B@2> major <8> minor <0>
boot cpu (0) initialization complete - online
se0 at ebus0: offset 14,400000
se0 is /pci@1f,0/pci@1,1/ebus@1/se@14,400000
SUNW,hme0: CheerIO 2.0 (Rev Id = c1) Found
SUNW,hme0 is /pci@1f,0/pci@1,1/network@1,1
SUNW,hme1: Local Ethernet address = 8:0:20:93:b0:65
pci1011,240: SUNW,hme1
SUNW,hme1 is /pci@1f,0/pci@1/pci@1/SUNW,hme@0,1
dump on /dev/dsk/c0t0d0s1 size 131328K
SUNW,hme0: Using Internal Transceiver
SUNW,hme0: 10 Mbps half-duplex Link Up
pcmcia: no PCMCIA adapters found

Output from dmesg shows that the system first reports its CPU and memory configuration, sets the Ethernet address for the network interface, and then initializes the PCI bus. The Ethernet address is significant on SPARC systems because, by default, all Ethernet interfaces on the machine share the single MAC address stored in the system PROM. An IDE disk is then recognized and mapped into a physical device, and the appropriate partitions are activated. The standard input devices (keyboard and mouse) are then activated, and the boot sequence is largely complete. However, the output is slightly different for the x86 system:

Jan 17 08:32
SunOS Release 5.9 Version Generic [UNIX(R) System V Release 4.0]
Copyright (c) 1983-2002, Sun Microsystems, Inc.
mem = 130688K (0x7fa0000)
avail mem = 114434048
root nexus = i86pc
isa0 at root
pci0 at root: space 0 offset 0
        IDE device at targ 0, lun 0 lastlun 0x0
        model ST310230A, stat 50, err 0
                cfg 0xc5a, cyl 16383, hd 16, sec/trk 63
                mult1 0x8010, mult2 0x110, dwcap 0x0, cap 0x2f00
                piomode 0x200, dmamode 0x200, advpiomode 0x3
                minpio 240, minpioflow 120
                valid 0x7, dwdma 0x407, majver 0x1e
ata_set_feature: (0x66,0x0) failed
        ATAPI device at targ 1, lun 0 lastlun 0x0
        model CD-912E/ATK, stat 50, err 0
                cfg 0x85a0, cyl 0, hd 0, sec/trk 0
                mult1 0x0, mult2 0x0, dwcap 0x0, cap 0xb00
                piomode 0x200, dmamode 0x200, advpiomode 0x1
                minpio 209, minpioflow 180
                valid 0x2, dwdma 0x203, majver 0x0
PCI-device: ata@0, ata0
ata0 is /pci@0,0/pci-ide@7,1/ata@0
Disk0:  <Vendor 'Gen-ATA ' Product 'ST310230A       '>
cmdk0 at ata0 target 0 lun 0
cmdk0 is /pci@0,0/pci-ide@7,1/ata@0/cmdk@0,0
root on /pci@0,0/pci-ide@7,1/ide@0/cmdk@0,0:a fstype ufs
ISA-device: asy0
asy0 is /isa/asy@1,3f8
ISA-device: asy1
asy1 is /isa/asy@1,2f8
Number of console virtual screens = 13
cpu 0 initialization complete - online
dump on /dev/dsk/c0d0s3 size 156 MB

While the System Is Up

If you are working remotely on a server system, and you are unsure of the system architecture, the command

# arch -k

returns sun4u on an Ultra 5 system but sun4m on a SPARCstation 10 system. For a complete view of a system's device configuration, you may also want to try the sysdef command, which displays more detailed information concerning pseudodevices, kernel loadable modules, and parameters. Here's the sysdef output for an x86 server:

# sysdef
*
* Hostid
*
  0ae61183
*
* i86pc Configuration
*
*
* Devices
*
+boot (driver not attached)
        memory (driver not attached)
aliases (driver not attached)
chosen (driver not attached)
i86pc-memory (driver not attached)
i86pc-mmu (driver not attached)
openprom (driver not attached)
options, instance #0
packages (driver not attached)
delayed-writes (driver not attached)
itu-props (driver not attached)
...
*
* System Configuration
*
  swap files
swapfile             dev  swaplo blocks   free
/dev/dsk/c0d0s3     102,3       8 321288 321288

The key sections in the sysdef output are the details of all devices (such as the PCI bus), the pseudodevices for each loadable object path (including /kernel and /usr/kernel), the loadable objects themselves, and the swap and virtual memory settings. Although the output may seem verbose, the information provided for each device can prove very useful in tracking down hardware errors or missing loadable objects.
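To relate devices to the drivers actually bound to them, two further commands are useful: prtconf -D prints the name of the driver associated with each device node, and modinfo lists the kernel modules currently loaded. For example, to check whether the sd driver is currently loaded:

# prtconf -D
# modinfo | grep sd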


