System Logging

Syslog is a centralized logging facility that records different classes of system events to log files and provides an alerting service for certain events. Because syslogd is configurable by root, it is very flexible in its operation: a separate log file can exist for each daemon whose activity is being logged, or a single consolidated log file can be created. The syslog service is controlled by the configuration file /etc/syslog.conf, which is read at boot time or whenever the syslog daemon receives a HUP signal. This file pairs facilities (the system sources of logged messages) and priority levels with an action field that defines what is done when a particular class of event is encountered. These events can range from normal system usage, such as FTP connections and remote shells, to system crashes.

The source facilities defined by Solaris are the kernel (kern), authentication (auth), daemons (daemon), the mail system (mail), print spooling (lp), and user processes (user). Priority levels are classified as system emergencies (emerg), errors requiring immediate attention (alert), critical errors (crit), informational messages (info), debugging output (debug), and other errors (err). These priority levels are defined for individual systems and architectures in <sys/syslog.h>.
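A selector in /etc/syslog.conf pairs one of these facilities with a priority level, separated by a period. As a minimal sketch (using shell parameter expansion, with mail.crit as an arbitrary example selector), the two parts can be separated like this:

```shell
# a syslog.conf selector has the form "facility.priority";
# split a sample selector to show its two components
selector='mail.crit'
facility=${selector%.*}
priority=${selector#*.}
echo "$facility $priority"   # prints: mail crit
```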


It is easy to see how logging applications, such as TCP wrappers, can take advantage of the different error levels and source facilities provided by syslogd.

On the Solaris platform, the syslog daemon depends on the m4 macro processor being present. m4 is typically installed with the software developer packages and is usually located in /usr/ccs/bin/m4; it has been installed by default since Solaris 2.4. Users should note that the syslogd supplied by Sun has been error-prone in previous releases: with early Solaris 2.x versions, the syslog daemon left behind zombie processes when alerting logged-in users (for example, when notifying root of an emerg).


If syslogd does not work, check that m4 exists and is in the path for root, and/or run the syslogd program interactively by invoking it with a -d parameter.

Examining Log Files

Log files are fairly straightforward in their contents, and you can stipulate what events are recorded by instructions in the syslog.conf file. Records of mail messages can be useful for billing purposes and for detecting the bulk sending of unsolicited commercial e-mail (spam). The system log will record the details supplied by sendmail: a message ID, when a message is sent or received, a destination, and a delivery result, which is typically “delivered” or “deferred.” Connections are usually deferred when a connection to a site is down.


sendmail will usually try to redeliver deferred messages at 4-hour intervals.

When using TCP wrappers, connections to supported Internet daemons are also logged. For example, an FTP connection to a server will result in the connection time and date being recorded, along with the hostname of the client. A similar result is achieved for telnet connections.

A delivered mail message is recorded as

Feb 20 14:07:05 server sendmail[238]: AA00238: message-id=<<>>
Feb 20 14:07:05 server sendmail[238]: AA00238: from=<<>>,
size=1551, class=0, received from (
Feb 20 14:07:06 server sendmail[243]: AA00238: to=<<>>,
 delay=00:00:01, stat=Sent, mailer=local

whereas a deferred mail message is recorded differently:

Feb 21 07:11:10 server sendmail[855]: AA00855: message
Feb 21 07:11:10 server sendmail[855]: AA00855: from=<<>>,
 size=1290, class=0, received from (
Feb 21 07:12:25 server sendmail[857]: AA00855:,
 delay=00:01:16, stat=Deferred: Connection timed out during user open with, mailer=TCP
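The stat= field makes it easy to tally delivery results mechanically. The following sketch extracts the status from a log record with sed; the log line used here is a fabricated sample for illustration:

```shell
# pull the delivery status (stat= field) out of a sendmail log line;
# the line below is a fabricated sample
line='Feb 20 14:07:06 server sendmail[243]: AA00238: to=<user@host>, delay=00:00:01, stat=Sent, mailer=local'
echo "$line" | sed -n 's/.*stat=\([^,]*\).*/\1/p'   # prints: Sent
```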

An FTP connection is recorded in a single line,

Feb 20 14:35:00 server in.ftpd[277]: connect from

in the same way that a telnet connection is recorded:

Feb 20 14:35:31 server in.telnetd[279]: connect from
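Because each record names the daemon that accepted the connection, a short awk sketch can count connections per service. The log lines below are fabricated samples in the same format:

```shell
# count "connect from" records by daemon name
# (field 5 of each record is "daemon[pid]:")
awk '/connect from/ { split($5, a, "["); count[a[1]]++ }
     END { for (d in count) print d, count[d] }' <<'EOF' | sort
Feb 20 14:35:00 server in.ftpd[277]: connect from client1
Feb 20 14:35:31 server in.telnetd[279]: connect from client2
Feb 20 14:36:02 server in.ftpd[280]: connect from client1
EOF
```

This prints one line per daemon with its connection count (in.ftpd 2, in.telnetd 1).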

Logging Disk Usage

For auditing purposes, many sites generate a df report at midnight or during a change of administrator shifts, to record a snapshot of the system. In addition, if disk space is becoming an issue, and extra volumes need to be justified in a systems budget, it is useful to be able to estimate how rapidly disk space is being consumed by users. Using the cron utility, you can set up and schedule a script using crontab to check disk space at different time periods and to mail this information to the administrator (or even post it to a web site, if system administration is centrally managed).

A simple script to monitor disk space usage and mail the results to the system administrator (root@server) looks like this:

#!/bin/csh -f
df | mailx -s "Disk Space Usage" root@localhost

As an example, if this script were named /usr/local/bin/monitor_usage.csh, and executable permissions were set for the nobody user, you could create the following crontab entry for the nobody user to run at midnight every night of the week:

0 0 * * * /usr/local/bin/monitor_usage.csh
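The five leading fields of a crontab entry are minute, hour, day of month, month, and day of week, followed by the command. A small sketch splitting such an entry into its schedule fields and command (set -f keeps the * fields from being expanded by the shell):

```shell
# split a crontab entry into its five schedule fields and the command
set -f                      # disable globbing so the literal * fields survive
entry='0 0 * * * /usr/local/bin/monitor_usage.csh'
set -- $entry
echo "minute=$1 hour=$2 dom=$3 month=$4 dow=$5 command=$6"
set +f
```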

Or, you could make the script more general, so that users could specify another user who would be mailed:

#!/bin/csh -f
df | mailx -s "Disk Space Usage" $1

The crontab entry would then look like this:

0 0 * * * /usr/local/bin/monitor_usage.csh remote_user@client

The results of the disk usage report would now be sent to the user remote_user@client instead of root@localhost.

You can find further information on the cron utility and submitting cron jobs in Chapter 8.

Another way of obtaining disk space usage information with more directory-by-directory detail is by using the /usr/bin/du command. This command prints the sum of the sizes of every file in the current directory and performs the same task recursively for any subdirectories. The size is calculated by adding together all of the file sizes in the directory, where the size for each file is rounded up to the next 512-byte block. For example, taking a du of the /etc directory looks like this:

# du /etc

14      ./default
7       ./cron.d
6       ./dfs
8       ./dhcp
201     ./fs/hsfs
681     ./fs/nfs
1       ./fs/proc
209     ./fs/ufs
1093    ./fs

2429    .
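The rounding rule described above means a file's contribution to these totals can be computed directly: its size is rounded up to a whole number of 512-byte blocks. A sketch of the arithmetic (the 1551-byte size is an arbitrary example, matching the message size in the sendmail log shown earlier):

```shell
# a 1551-byte file occupies 4 blocks of 512 bytes (2048 bytes), because
# the last partial block still consumes a full block
size=1551
blocks=$(( (size + 511) / 512 ))
echo "$blocks"   # prints: 4
```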

Thus, /etc and all its subdirectories contain a total of 2,429 512-byte blocks of data (du -k reports sizes in kilobytes instead). Of course, this kind of output is fairly verbose and probably not much use in its current form. If you were only interested in recording the directory sizes, in order to collect data for auditing and usage analysis, you could write a short Perl script to collect the data, as follows:

# reads in directory sizes for the current directory
# and prints results to standard output
@du = `du`;
for (@du) {
    ($size, $directory) = split /\s+/, $_;
    print "$size\n";
}

If you saved this script in the /usr/local/bin directory and set the executable permissions, it would produce a list of directory sizes as output, like the following:

# cd /etc
# /usr/local/bin/


Because you are interested in usage management, you might want to modify the script to display the total amount of space occupied by a directory and its subdirectories, as well as the average amount of space occupied. The latter is very important when evaluating caching or investigating load-balancing issues:

# reads in directory sizes for the current directory
# and prints the sum and average disk space used to standard output
@du = `du -o`;
$sum = 0;
$count = 0;
for (@du) {
    ($size, $directory) = split /\s+/, $_;
    $sum += $size;
    $count++;
}
$average = int($sum / $count);
print "Total Space: $sum K\n";
print "Average Space: $average K\n";

Note that du -o was used as the command, so that the space occupied by subdirectories is not added to the total for the top-level directory. The output from the command for /etc now looks like this:

# cd /etc
# /usr/local/bin/
Total Space: 4832 K
Average Space: 70 K

Again, you could set up a cron job to mail this information to an administrator at midnight every night. To do this, first create a new shell script to call the Perl script, which is made more flexible by passing the directory to be measured, and the user to which the mail will be sent as arguments:

#!/bin/csh -f
cd $1
/usr/local/bin/ | mailx -s "Directory Space Usage" $2

If you save this script to /usr/local/bin/checkdirectoryusage.csh and set the executable permission, you could then schedule a disk space check of a cache file system. You could include a second command that sends a report for the /disks/junior_developers file system, which is remotely mounted from client, to the team leader on server:

0 0 * * * /usr/local/bin/checkdirectoryusage.csh /cache squid@server
1 0 * * * /usr/local/bin/checkdirectoryusage.csh /disks/junior_developers

Tools may already be available on Solaris to perform some of these tasks more directly. For example, the du –s command will return the sum of directory sizes automatically. However, the purpose of this section has been to demonstrate how to customize and develop your own scripts for file system management.
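For instance, du -s prints a single line containing the total block count followed by the directory name. A minimal sketch using a scratch directory (mktemp -d creates a temporary directory to keep the demonstration self-contained):

```shell
# du -s prints one "total<TAB>directory" line for the named directory
tmp=$(mktemp -d)
printf 'hello\n' > "$tmp/file"
du -s "$tmp" | awk '{print $2}'   # second field is the directory name
rm -rf "$tmp"
```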


You will be required to interpret scripts in the exam.

The syslog.conf File

The file /etc/syslog.conf contains information used by the system log daemon, syslogd, to forward a system message to appropriate log files and/or users. syslogd preprocesses this file through m4 to obtain the correct information for certain log files, defining LOGHOST if the address of “loghost” is the same as one of the addresses of the host that is running syslogd.

The default syslogd configuration is not optimal for all installations. Many configuration decisions depend on the degree to which the system administrator wishes to be alerted immediately should an alert or emergency occur, or whether it is sufficient for all auth notices to be logged and a cron job run every night to filter the results for a review in the morning. For noncommercial installations, the latter is probably a reasonable approach. A crontab entry like this,

0 1 * * * cat /var/adm/messages | grep auth | mail root

will send the root user a mail message at 1:00 A.M. every morning with all authentication messages.
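A sketch of the same filter applied to two fabricated messages-file records; only the line tagged with an auth facility survives the grep:

```shell
# keep only the records mentioning "auth" (both sample lines are fabricated)
grep auth <<'EOF'
Feb 20 14:35:00 server in.ftpd[277]: connect from client1
Feb 20 23:12:41 server login: [ID 144210 auth.crit] ROOT LOGIN /dev/console
EOF
```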

A basic syslog.conf should contain provision for sending emergency notices to all users, as well as alerting the root user and other nonprivileged administrator accounts. Errors, kernel notices, and authentication notices probably need to be displayed on the system console. It is generally sufficient to log daemon notices, alerts, and all other authentication information to the system log file, unless the administrator is watching for cracking attempts, as shown here:

*.alert                                           root,pwatters
*.emerg                                           *
*.err;kern.notice;auth.notice                     /dev/console
daemon.notice                                     /var/adm/messages
auth.none;kern.err;daemon.err;mail.crit;*.alert   /var/adm/messages
                                                  /var/adm/authlog

Part I: Solaris 9 Operating Environment, Exam I