Backup and restore software falls into three different categories:
Standard Solaris tools like tar, dd, cpio, ufsdump and ufsrestore. These tools are quite adequate for backing up single machines with one or more locally attached backup devices.
Centralized backup tools like AMANDA and Legato Networker, which are useful for backing up multiple machines through a single backup server.
Distributed backup tools like Veritas NetBackup, which are capable of remotely managing storage for multiple machines.
In this section, we will examine the standard Solaris backup and restore tools that are generally used for single machines with one or two backup devices. These tools are also useful to ordinary users for managing their own files on the server. For example, users can create 'tape archives' using the tar command, whose output can be written to a single disk file. This is a standard way of distributing source trees in the Solaris and broader UNIX community. Users can also make copies of disks and tapes using the dd command. It is also possible to back up database files in combination with standard Solaris tools. For example, Oracle server is supplied with an exp utility, which can be used to take a dump of the database while it is still running:
exp system/manager FULL=Y
where system is the username for an administrator with DBA privileges, and manager is the password. This will create a file called expdat.dmp. The export can then be scheduled to run every night using a cron job like the following:
0 3 * * * exp system/manager FULL=Y
Some sites prefer to take full dumps every night. This involves transferring an entire file to a backup medium, which imposes little overhead if the file is only a few megabytes. But for a database with a tablespace of 50 gigabytes, a nightly full dump would place a great strain on a backup server, especially if it were also used for other purposes. Thus, it might be more appropriate to take an incremental dump, which records only data that has changed. Incremental dumps are discussed in the section on ufsdump.
The tar command is used to create a "tape archive," or to extract the files contained in a tape archive. Although tar was originally conceived with a tape device in mind, in fact, any device can hold a tar file, including a normal disk file system. This is why users have adopted tar as their standard archiving utility, even though it does not perform compression like the Zip tools for PCs. Tape archives are easy to transport between systems using FTP or secure copy in binary transfer mode, and are the standard means of exchanging data between Solaris systems.
As an example, let's create a tar file of the /opt/totalnet package. Firstly, check the potential size of the tape archive by using the du command:
server% cd /opt/totalnet
server% du
4395   ./bin
367    ./lib/charset
744    ./lib/drv
434    ./lib/pcbin
777    ./lib/tds
5731   ./lib
5373   ./sbin
145    ./man/man1
135    ./man/man1m
281    ./man
53     ./docs/images
56     ./docs
15837  .
The estimated size of the archive is therefore 15837 blocks. To create a tape archive in the /tmp directory for the whole package, including subdirectories, execute the following command:
server# tar cvf /tmp/totalnet.tar *
a bin/ 0K
a bin/atattr 54K
a bin/atconvert 58K
a bin/atkprobe 27K
a bin/csr.tn 6K
a bin/ddpinfo 10K
a bin/desk 17K
a bin/ipxprobe 35K
a bin/m2u 4K
a bin/maccp 3K
a bin/macfsck 3K
a bin/macmd 3K
a bin/macmv 3K
a bin/macrd 3K
a bin/macrm 3K
a bin/nbmessage 141K
a bin/nbq 33K
a bin/nbucheck 8K
a bin/ncget 65K
a bin/ncprint 66K
a bin/ncput 65K
a bin/nctime 32K
a bin/nwmessage 239K
a bin/nwq 26K
a bin/pfinfo 70K
a bin/ruattr 122K
a bin/rucopy 129K
a bin/rudel 121K
a bin/rudir 121K
a bin/ruhelp 9K
a bin/u2m 4K
a bin/rumd 120K
a bin/rumessage 192K
a bin/ruprint 124K
a bin/rurd 120K
a bin/ruren 121K
To extract the tar file's contents to disk, execute the following command:
server# cd /tmp
server# tar xvf totalnet.tar
x bin, 0 bytes, 0 tape blocks
x bin/atattr, 54676 bytes, 107 tape blocks
x bin/atconvert, 58972 bytes, 116 tape blocks
x bin/atkprobe, 27524 bytes, 54 tape blocks
x bin/csr.tn, 5422 bytes, 11 tape blocks
x bin/ddpinfo, 9800 bytes, 20 tape blocks
x bin/desk, 16456 bytes, 33 tape blocks
x bin/ipxprobe, 35284 bytes, 69 tape blocks
x bin/m2u, 3125 bytes, 7 tape blocks
x bin/maccp, 2882 bytes, 6 tape blocks
x bin/macfsck, 2592 bytes, 6 tape blocks
x bin/macmd, 2255 bytes, 5 tape blocks
x bin/macmv, 2866 bytes, 6 tape blocks
x bin/macrd, 2633 bytes, 6 tape blocks
x bin/macrm, 2509 bytes, 5 tape blocks
x bin/nbmessage, 143796 bytes, 281 tape blocks
x bin/nbq, 33068 bytes, 65 tape blocks
x bin/nbucheck, 7572 bytes, 15 tape blocks
x bin/ncget, 66532 bytes, 130 tape blocks
x bin/ncprint, 67204 bytes, 132 tape blocks
x bin/ncput, 65868 bytes, 129 tape blocks
x bin/nctime, 32596 bytes, 64 tape blocks
x bin/nwmessage, 244076 bytes, 477 tape blocks
x bin/nwq, 26076 bytes, 51 tape blocks
x bin/pfinfo, 71192 bytes, 140 tape blocks
x bin/ruattr, 123988 bytes, 243 tape blocks
x bin/rucopy, 131636 bytes, 258 tape blocks
x bin/rudel, 122940 bytes, 241 tape blocks
x bin/rudir, 123220 bytes, 241 tape blocks
x bin/ruhelp, 8356 bytes, 17 tape blocks
x bin/u2m, 3140 bytes, 7 tape blocks
x bin/rumd, 122572 bytes, 240 tape blocks
x bin/rumessage, 195772 bytes, 383 tape blocks
x bin/ruprint, 126532 bytes, 248 tape blocks
x bin/rurd, 122572 bytes, 240 tape blocks
x bin/ruren, 123484 bytes, 242 tape blocks
Tape archives are not compressed by default in Solaris. This means that they should be compressed with normal Solaris compression:
server% compress file.tar
This will create a compressed file called file.tar.Z. Alternatively, the GNU gzip utility often achieves better compression ratios than the standard compress command, so it should be downloaded and installed. When executed, it creates a file called file.tar.gz:
server% gzip file.tar
Although Solaris does come with tar installed, it is advisable to download, compile, and install GNU tar, because of its additional functionality, notably built-in compression support. For example, to create a compressed tape archive file.tar.gz, use the z flag in addition to the normal cvf flags:
server% tar zcvf file.tar.gz *
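The complete round trip with GNU tar can be sketched as follows; this assumes GNU tar is installed as tar, and all paths under /tmp/tardemo are illustrative only:

```shell
# Create some sample files to archive (illustrative paths)
mkdir -p /tmp/tardemo/src
echo "hello" > /tmp/tardemo/src/a.txt
echo "world" > /tmp/tardemo/src/b.txt
cd /tmp/tardemo/src

# Create a gzip-compressed archive in one step with the z flag
tar zcf /tmp/tardemo/files.tar.gz .

# List the archive's contents without extracting (z t f)
tar ztf /tmp/tardemo/files.tar.gz

# Extract into a fresh directory (z x f)
mkdir -p /tmp/tardemo/out
cd /tmp/tardemo/out
tar zxf /tmp/tardemo/files.tar.gz
```

Note that the z flag must be supplied on extraction as well as creation, since standard Solaris tar cannot read gzip-compressed archives directly.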
cpio is used for copying file archives, and is much more flexible than tar, because a cpio archive can span multiple volumes. cpio can be used in three different modes:
Copy in mode, executed with cpio -i, extracts files from standard input, from a stream created by cat or a similar utility.
Copy out mode, denoted by cpio -o, obtains a list of files from standard input, and creates an archive from these files, including their path name.
Copy pass mode, performed by cpio -p, is equivalent to copy out mode, except that no archive is actually created.
The basic idea behind cpio for archiving is to generate a list of files to be archived, print it to standard output, and then pipe it through cpio in copy out mode. For example, to archive all of the text files in one's home directory and store them in an archive called myarchive in the /staff/pwatters directory, use this command:
server% find . -name '*.txt' -print | cpio -oc > /staff/pwatters/myarchive
When the command completes, the number of blocks required to store the files is reported.
The files themselves are stored in text format, with an identifying header, which we can examine with cat or head:
server% head myarchive
0707010009298a00008180000011fc0000005400000001380bb9b600001e9b000000550000000000000000000000000000001f00000003Directory/file.txtThe quick brown fox jumps over the lazy dog.
Recording headers in ASCII is portable, and is achieved by using the -c option. This means that files can be extracted from the archive by using the cat command:
server% cat myarchive | cpio -icd "*"
This extracts all files and directories as required (specified by using the -d option). It is just as easy to extract a single file: to extract Directory/file.txt, we use this command:
server% cat myarchive | cpio -ic "Directory/file.txt"
If you are copying files directly to tape, it is important to use the same blocking factor when you retrieve or copy files from the tape to the hard disk as you did when you copied files from the hard disk to the tape. If you use the defaults, there should be no problems, although you can specify a particular blocking factor by using the -B option.
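The point about matching blocking factors can be demonstrated on ordinary files as well as tapes. In this sketch (paths under /tmp/blkdemo are illustrative), -B selects a 5,120-byte block size, and the same flag is used on both the copy-out and copy-in sides:

```shell
# Create a file to archive (illustrative paths)
mkdir -p /tmp/blkdemo
cd /tmp/blkdemo
echo "blocked" > note.txt

# Copy out with -B: the archive is written in 5,120-byte blocks
echo note.txt | cpio -oB > note.cpio

# Copy in with the same -B flag so the blocking factors match
mkdir -p out
cd out
cpio -idB < ../note.cpio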
dd is a program that copies raw disk or tape slices block by block to other disk or tape slices: it is like cp for slices. It is often used for backing up disk slices to other disk slices and/or to a tape drive, and for copying tapes. To use dd, it is necessary to specify an input file (if=), an output file (of=), and, optionally, a block size (bs=). For example, to copy the root partition '/' on /dev/rdsk/c1t0d0s0 to /dev/rdsk/c1t4d0s0, you can use this command:
server# dd if=/dev/rdsk/c1t0d0s0 of=/dev/rdsk/c1t4d0s0 bs=128k
To actually make the new partition bootable, you will also need to use the installboot command after dd. Another use for dd is copying data from one tape to another tape. This is particularly useful for re-creating archival backup tapes that may be aging. For example, to copy from tape drive 0 (/dev/rmt/0) to tape drive 1 (/dev/rmt/1), use this command:
server# dd if=/dev/rmt/0h of=/dev/rmt/1h
It is also possible to copy the contents of a floppy disk, by redirecting the contents of the floppy device through dd:
server# dd < /floppy/floppy0 > /tmp/floppy.disk
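Because dd treats its input and output as streams of blocks, it works on ordinary files just as well as on raw devices, which makes it easy to experiment without a spare disk slice. A runnable sketch (all paths under /tmp/dddemo are illustrative):

```shell
# Create a 64 KB file of random data to stand in for a raw slice
mkdir -p /tmp/dddemo
cd /tmp/dddemo
dd if=/dev/urandom of=original.img bs=1024 count=64 2>/dev/null

# Copy it block by block; bs= sets how much dd reads and
# writes per operation, and need not match the original bs
dd if=original.img of=copy.img bs=8k 2>/dev/null

# Verify that the copy is byte-for-byte identical
cmp original.img copy.img && echo "copies match"
```

The same if=/of= pattern applies unchanged when the arguments are raw device names such as /dev/rdsk/c1t0d0s0 or /dev/rmt/0.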
ufsdump and ufsrestore are standard backup and restore applications for UNIX file systems. ufsdump is often set to run from cron jobs late at night to minimize the load on server systems. ufsrestore is normally run in single-user mode after a system crash (that is, when restoring a complete file system). ufsdump can be run on a mounted file system, but it may be wise to unmount it first, perform a file system check (using fsck), remount it, and then perform the backup.
The key concept in planning ufsdumps is the 'dump level' of any particular backup. The dump level determines whether ufsdump performs a full or an incremental dump. A full dump is represented by a dump level of zero, while the numbers 1-9 can be arbitrarily assigned to incremental dump levels. The only restriction on the assignment of dump-level numbers for incremental backups is their numerical relationship to each other: a high number should be used for normal daily incremental dumps, followed once a week by a lower number that signals that the cycle should restart. This approach uses the same set of tapes for all files, regardless of which day they were recorded on. For example, Monday through Saturday would have a dump level of 9, while Sunday would have a dump level of 1. After cycling through incremental backups during the weekdays and Saturday, the process starts again on Sunday.
Some organizations prefer to keep each day's work on a separate tape. This makes it easier to recover work from an incremental dump when speed is important, or when the backups from a particular day need to be retrieved. For example, someone may wish to retrieve a file that was edited on a Wednesday and again the following Thursday, but they want the version just prior to the latest (that is, Wednesday's). The Wednesday tape can then be used in conjunction with ufsrestore to retrieve the file. A weekly full dump is scheduled to occur on Sunday, when few people are using the system. Thus, Sunday would have a dump level of 0, followed by Monday, Tuesday, Wednesday, Thursday, and Friday with dump levels of 5, 6, 7, 8, and 9, respectively. To signal the end of a backup cycle, Saturday then has a lower dump level than Monday, which could be 1, 2, 3, or 4.
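The weekly cycle just described might be implemented with root crontab entries like the following sketch; the file system, tape device, and the 2 A.M. start time are assumptions for the example:

```shell
# Illustrative root crontab entries for the Sunday-full schedule.
# Sunday, 2 A.M.: full (level 0) dump
0 2 * * 0 ufsdump 0cu /dev/rmt/0 /dev/rdsk/c0t0d0s4
# Monday through Friday, 2 A.M.: incrementals at levels 5-9
0 2 * * 1 ufsdump 5cu /dev/rmt/0 /dev/rdsk/c0t0d0s4
0 2 * * 2 ufsdump 6cu /dev/rmt/0 /dev/rdsk/c0t0d0s4
0 2 * * 3 ufsdump 7cu /dev/rmt/0 /dev/rdsk/c0t0d0s4
0 2 * * 4 ufsdump 8cu /dev/rmt/0 /dev/rdsk/c0t0d0s4
0 2 * * 5 ufsdump 9cu /dev/rmt/0 /dev/rdsk/c0t0d0s4
# Saturday, 2 A.M.: a lower level (here 1) ends the weekly cycle
0 2 * * 6 ufsdump 1cu /dev/rmt/0 /dev/rdsk/c0t0d0s4
```

A separate tape would be loaded each day to keep the days' dumps distinct.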
Prior to beginning a ufsdump, it is often useful to estimate the size of a dump to determine how many tapes will be required. This estimate can be obtained by dividing the size of the partition by the capacity of the tape. For example, to determine how many tapes would be required to back up the /dev/rdsk/c0t0d0s4 file system use:
server# ufsdump S /dev/rdsk/c0t0d0s4
50765536
The approximately 49MB on the drive will therefore easily fit onto a QIC, DAT, or DLT tape. To perform a full dump of an x86 partition (/dev/rdsk/c0d0s0) at level 0, we can use the following approach:
# ufsdump 0cu /dev/rmt/0 /dev/rdsk/c0d0s0
DUMP: Writing 63 Kilobyte records
DUMP: Date of this level 0 dump: Mon Feb 03 13:26:33 1997
DUMP: Date of last level 0 dump: the epoch
DUMP: Dumping /dev/rdsk/c0d0s0 (solaris:/) to /dev/rmt/0.
DUMP: Mapping (Pass I) [regular files]
DUMP: Mapping (Pass II) [directories]
DUMP: Estimated 46998 blocks (22.95MB).
DUMP: Dumping (Pass III) [directories]
DUMP: Dumping (Pass IV) [regular files]
DUMP: 46996 blocks (22.95MB) on 1 volume at 1167 KB/sec
DUMP: DUMP IS DONE
DUMP: Level 0 dump on Mon Feb 03 13:26:33 1997
The parameters passed to ufsdump include 0 (dump level), c (cartridge: blocking factor 126), and u (updates the dump record /etc/dumpdates). The dump record is used by ufsdump and ufsrestore to track the last dump of each individual file system:
server# cat /etc/dumpdates
/dev/rdsk/c0t0d0s0   0 Wed Feb 2 20:23:31 2000
/dev/md/rdsk/d0      0 Tue Feb 1 20:23:31 2000
/dev/md/rdsk/d2      0 Tue Feb 1 22:19:19 2000
/dev/md/rdsk/d3      0 Wed Feb 2 22:55:16 2000
/dev/rdsk/c0t0d0s3   0 Wed Feb 2 20:29:21 2000
/dev/md/rdsk/d1      0 Wed Feb 2 21:20:04 2000
/dev/rdsk/c0t0d0s4   0 Wed Feb 2 20:24:56 2000
/dev/rdsk/c2t3d0s2   0 Wed Feb 2 20:57:34 2000
/dev/rdsk/c0t2d0s3   0 Wed Feb 2 20:32:00 2000
/dev/rdsk/c1t1d0s0   0 Wed Feb 2 21:46:23 2000
/dev/rdsk/c0t0d0s0   3 Fri Feb 4 01:10:03 2000
/dev/rdsk/c0t0d0s3   3 Fri Feb 4 01:10:12 2000
ufsdump is very flexible, because it can be used in conjunction with rsh (remote shell) and remote access authorization files (.rhosts and /etc/hosts.equiv) to remotely log in to another server and dump the files to one of the remote server's backup devices. However, the problem with this approach is that using .rhosts leaves the host system vulnerable to attack: if an intruder gains access to a client, he or she can then log in to the backup server without supplying a username and password. The severity of the issue is compounded by the fact that a backup server that serves many clients holds most of those clients' information in the form of tape archives.
A concerted attack on a single client, leading to an unchallenged remote login to a backup server, can greatly expose an organization's data.
A handy trick often used by administrators is to use ufsdump to move directories across file systems. A ufsdump is taken of a particular file system, which is then piped through ufsrestore to a different destination directory. For example, to move existing staff files to a larger file system, use these commands:
server# mkdir /newstaff
server# cd /staff
server# ufsdump 0f - /dev/rdsk/c0t0d0s2 | (cd /newstaff; ufsrestore xf -)
The larger file system can then be backed up in the usual way.
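The same dump-and-restore pipe pattern can be tried safely on ordinary directories using tar in place of ufsdump/ufsrestore, which require raw devices and root privileges. A runnable sketch, with all paths under /tmp/movedemo being illustrative:

```shell
# Build a stand-in for the /staff file system (illustrative paths)
mkdir -p /tmp/movedemo/staff /tmp/movedemo/newstaff
echo "report" > /tmp/movedemo/staff/report.txt
cd /tmp/movedemo/staff

# Write the archive to stdout (f -) and unpack it in a subshell
# that has already changed into the destination directory
tar cf - . | (cd /tmp/movedemo/newstaff; tar xf -)
```

Because the archive never touches the disk, this approach works even when there is no free space to hold an intermediate file.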
After backing up data using ufsdump, it's easy to restore the same data using the ufsrestore program. To extract data from a tape volume on /dev/rmt/0, use this command:
# ufsrestore xf /dev/rmt/0
You have not read any volumes yet.
Unless you know which volume your file(s) are on you should start
with the last volume and work towards the first.
Specify next volume #: 1
set owner/mode for '.'? [yn] y
ufsrestore then extracts all of the files on that volume. However, you can also list the table of contents of the volume to standard output, if you are not sure of the contents of a particular tape:
# ufsrestore tf /dev/rmt/0
1    ./openwin/devdata/profiles
2    ./openwin/devdata
3    ./openwin
9    ./lp/alerts
1    ./lp/classes
15   ./lp/fd
1    ./lp/forms
1    ./lp/interfaces
1    ./lp/printers
1    ./lp/pwheels
36   ./lp
2    ./dmi/ciagent
3    ./dmi/conf
6    ./dmi
42   ./snmp/conf
ufsrestore also supports an interactive mode, which has online help to assist you in finding the correct volume to restore from:
# ufsrestore i
ufsrestore > help
Available commands are:
        ls [arg] - list directory
        cd arg - change directory
        pwd - print current directory
        add [arg] - add `arg' to list of files to be extracted
        delete [arg] - delete `arg' from list of files to be extracted
        extract - extract requested files
        setmodes - set modes of requested directories
        quit - immediately exit program
        what - list dump header information
        verbose - toggle verbose flag (useful with ``ls'')
        help or `?' - print this list
If no `arg' is supplied, the current directory is used
ufsrestore >
Since Veritas NetBackup and Legato Networker are software packages in their own right, detailed coverage of them is beyond the scope of this volume.