Initialization

Upon booting from OpenBoot, Solaris has several different modes of operation, known as “run levels” or “init states”—so called because the init command is often used to change run levels, although init-wrapper scripts (such as shutdown) are also used. These init states can be single- or multiuser, each serves a distinct administrative purpose, and they are mutually exclusive (that is, a system can only ever be in one init state at a time). Typically, a Solaris system designed to “stay up” indefinitely will cycle through a predefined series of steps to start all the software daemons necessary for the provision of basic system services, primary user services, and optional application services. These services are often provided only when a Solaris system operates in a multiuser run state, with services being initialized by run control (rc) shell scripts. Usually, one run control script is created to start each system, user, or application service. Fortunately, many of these scripts are created automatically for administrators during the Solaris installation process. However, if you intend to install third-party software (such as a database server), it will be necessary to create your own run control scripts in the /etc/init.d directory to start up these services automatically at boot time. This process is fully described later in this chapter.
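
As a preview of what such a run control script looks like, the following sketch shows the standard case-statement pattern for a hypothetical service named “mydb” (the daemon and PID file paths are assumptions, not part of any real product). A real script in /etc/init.d would act on "$1" directly; the logic is wrapped in a function here only so the sketch can be exercised on its own.

```shell
#!/bin/sh
# Sketch of a run control script for a hypothetical service "mydb",
# as might be installed in /etc/init.d/mydb. Paths are illustrative.

rc_action()
{
        case "$1" in
        'start')
                echo "Starting mydb"
                # /opt/mydb/bin/mydbd &          (hypothetical daemon path)
                ;;
        'stop')
                echo "Stopping mydb"
                # kill `cat /var/run/mydbd.pid`  (hypothetical PID file)
                ;;
        *)
                echo "Usage: $0 { start | stop }"
                ;;
        esac
}

rc_action "$1"
```

Once installed in /etc/init.d, such a script would be symbolically linked into the appropriate run level directories, for example as /etc/rc2.d/S99mydb (start) and /etc/rc0.d/K01mydb (kill).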

If the system needs to be powered off for any reason (for example, a scheduled power outage), or switched into a special maintenance mode to perform diagnostic tests, there is also a cycle of iterating through a predefined series of run control scripts to kill services and preserve user data. It is essential that this sequence of events be preserved so that data integrity is maintained. For example, operating a database server typically involves communication between a server-side, data-writing process and a daemon listener process, which accepts new requests for storing information. If the daemon process is not stopped prior to the data-writing process, it could accept data from network clients and store it in a cache after the database has already been closed. This could leave the database shut down in an inconsistent state, potentially resulting in data corruption and/or record loss.

Caution 

It is essential that Solaris administrators apply their knowledge of shell scripting to manage system shutdowns, as well as startups, rigorously using run control scripts.

Run Levels

In terms of system startup, Solaris has some similarities to Microsoft Windows and Linux. Although it has no AUTOEXEC.BAT or CONFIG.SYS file, Solaris, like Linux, has a number of script files that are executed in a specific order to start services. These scripts are typically created in the /etc/init.d directory as Bourne shell scripts, and are then symbolically linked into the “run level” directories. Just as Microsoft Windows has “safe modes,” Solaris supports a number of different modes of operation, from restricted single-user modes to full multiuser run levels. The complete set of run levels, with their respective run control script directories, is displayed in Table 5-1.

Table 5-1: Solaris Run Levels and Their Functions

Run Level   Description                                                 User Status      Run Control Script Directory
0           Hardware maintenance mode                                   Console access   /etc/rc0.d
1           Administrative state; only root file system is available    Single user      /etc/rc1.d
2           First multiuser state; NFS resources unavailable            Multiuser        /etc/rc2.d
3           NFS resources available                                     Multiuser        /etc/rc3.d
4           User-defined state                                          Not specified    N/A
5           Power-down state                                            Console access   /etc/rc5.d
6           Operating system halted and rebooted                        Multiuser        /etc/rc6.d
S           Administrative tasks and repair of corrupted file systems   Console access   /etc/rcS.d

Each run level is associated with a run control script, as shown in Table 5-2. This script is responsible for the orderly execution of all run control scripts within the corresponding run level directory; the script name matches the run level and directory name.

Table 5-2: Solaris Run Level Scripts

Run Level   Run Control Script
0           /etc/rc0
1           /etc/rc1
2           /etc/rc2
3           /etc/rc3
4           N/A
5           /etc/rc5
6           /etc/rc6
S           /etc/rcS

When a Solaris system starts, the init process is spawned; it is responsible for managing processes and the transitions between run levels. You can switch between run levels manually by using the init command, as shown in the following example:

# init 3

Control Scripts and Directories

Every Solaris init state (such as init state 6) has its own run level script directory (for example, /etc/rc6.d). This contains a set of symbolic links (like shortcuts in Microsoft Windows) that point to the service startup files in the /etc/init.d directory. Each linked script name starts with the letter S (“start”) or the letter K (“kill”), and is used to start or kill processes, respectively. When a system is booted, processes are started; when a system is shut down, processes are killed. The start and kill links typically point to the same script file, which interprets two parameters: “start” and “stop.” The scripts are executed in numerical order, so a script like /etc/rc3.d/S20dhcp is executed before /etc/rc3.d/S21sshd. If you’re curious about which scripts are started or killed during startup and shutdown, Table 5-3 shows the startup scripts in /etc/rc2.d, while Table 5-4 shows the kill scripts found in /etc/rc0.d. It’s important to realize that these will vary from system to system.
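
The ordering behavior just described can be sketched in the shell: a run control script walks its directory, invoking K scripts (with the stop argument) before S scripts (with the start argument), each set in lexical, and therefore numerical, order. The script names below are placeholders created in a temporary directory, not a live /etc/rc2.d.

```shell
#!/bin/sh
# Demonstrates the execution order a run control script imposes:
# kill (K) scripts run before start (S) scripts, each set sorted
# numerically. Empty placeholder files stand in for real rc scripts.

RCDIR="${TMPDIR:-/tmp}/rc.demo.$$"
mkdir -p "$RCDIR"
for f in K20lp K07snmpdx S20sysetup S72inetsvc S05RMTMPFILES; do
        : > "$RCDIR/$f"
done

order=""
for f in "$RCDIR"/K*; do        # shell globs sort lexically, so K07 before K20
        order="$order `basename $f`(stop)"
done
for f in "$RCDIR"/S*; do        # likewise S05, then S20, then S72
        order="$order `basename $f`(start)"
done
echo "$order"
rm -rf "$RCDIR"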

Table 5-3: Typical Multiuser Startup Scripts Under Solaris 9

Script            Description
S05RMTMPFILES     Removes temporary files in the /tmp directory.
S20sysetup        Establishes system setup requirements, and checks /var/crash to determine whether the system is recovering from a crash.
S21perf           Enables system accounting using /usr/lib/sa/sadc and /var/adm/sa/sa.
S30sysid.net      Executes /usr/sbin/sysidnet, /usr/sbin/sysidconfig, and /sbin/ifconfig, which are responsible for configuring network services.
S69inet           Initiates the second phase of TCP/IP configuration, following on from the basic services established during single-user mode (rcS). Setting up IP routing (if /etc/defaultrouter exists), performing TCP/IP parameter tuning (using ndd), and setting the NIS domain name (if required) are all performed here.
S70uucp           Initializes the UNIX-to-UNIX copy program (UUCP) by removing locks and other unnecessary files.
S71sysid.sys      Executes /usr/sbin/sysidsys and /usr/sbin/sysidroot.
S72autoinstall    Executes a JumpStart installation, if appropriate.
S72inetsvc        Performs final network configuration using /usr/sbin/ifconfig after NIS/NIS+ have been initialized. Also initializes the Internet Domain Name Service (DNS), if appropriate.
S80PRESERVE       Preserves editor files by executing /usr/lib/expreserve.
S91leoconfig      Configuration for ZX graphics cards (if installed).
S92rtvc-config    Configuration for SunVideo cards (if installed).
S92volmgt         Starts volume management for removable media using /usr/sbin/vold.

Table 5-4: Typical Single-User Kill Scripts Under Solaris 9

Script            Description
K00ANNOUNCE       Announces that “System services are now being stopped.”
K10dtlogin        Shuts down tasks for the CDE (Common Desktop Environment), including killing the dtlogin process.
K20lp             Stops printing services using /usr/lib/lpshut.
K22acct           Terminates process accounting using /usr/lib/acct/shutacct.
K42audit          Kills the auditing daemon (/usr/sbin/audit).
K47asppp          Stops the asynchronous PPP daemon (/usr/sbin/aspppd).
K50utmpd          Kills the utmp daemon (/usr/lib/utmpd).
K55syslog         Terminates the system logging service (/usr/sbin/syslogd).
K57sendmail       Halts the sendmail mail service (/usr/lib/sendmail).
K66nfs.server     Kills all processes required for the NFS server (/usr/lib/nfs/nfsd).
K69autofs         Stops the automounter (/usr/sbin/automount).
K70cron           Terminates the cron daemon (/usr/bin/cron).
K75nfs.client     Disables client NFS.
K76nscd           Kills the name service cache daemon (/usr/sbin/nscd).
K85rpc            Disables remote procedure call (RPC) services (/usr/sbin/rpcbind).

Boot Sequence

Booting the kernel is a straightforward process once the operating system has been successfully installed. The Solaris kernel can be identified by the pathname /platform/PLATFORM_NAME/kernel/unix, where PLATFORM_NAME is the name of the current architecture. For example, sun4u systems boot with the kernel /platform/sun4u/kernel/unix.
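
The kernel path can be assembled from the platform name; on a running Solaris system this is reported by uname -m, but the value is hard-coded below so the sketch is self-contained.

```shell
#!/bin/sh
# Building the kernel path from the platform name.
# On a live Solaris system, use: PLATFORM_NAME=`uname -m`
PLATFORM_NAME=sun4u
KERNEL="/platform/$PLATFORM_NAME/kernel/unix"
echo "$KERNEL"
```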

Note 

Kernels can alternatively be booted from a CD-ROM drive or through a network connection (by using the boot cdrom and boot net commands, respectively, from the OpenBoot PROM monitor).

When a SPARC system is powered on, it executes a series of basic hardware tests before attempting to boot the kernel. These power-on self tests (POSTs) ensure that your system hardware is operating correctly. If any of these tests fail, you will not be able to boot the system.

Once the POSTs are complete, the system will attempt to boot the default kernel using the path specified in the firmware. Alternatively, if you wish to boot a different kernel, press STOP-A to return to the OpenBoot PROM monitor and enter boot kernel/name, which boots the kernel specified by “kernel/name.” For example, to boot a kernel called newunix, you would use the command boot kernel/newunix.

Systems either boot from a UFS file system (whether on the local hard disk or a local CD-ROM drive) or across the network. Two applications facilitate these different boot types: ufsboot is responsible for booting kernels from disk devices, while inetboot is responsible for booting kernels using a network device. While servers typically boot themselves using ufsboot, diskless clients must use inetboot.

The ufsboot application reads the bootblock on the active partition of the boot device, while inetboot performs a broadcast on the local subnet, searching for a trivial FTP (TFTP) server. Once one is located, a bootable image is downloaded from the TFTP server, the bootparam server supplies the location of the NFS mount point for the kernel, and the kernel is then downloaded using NFS and booted.



Part I: Solaris 9 Operating Environment, Exam I