In general terms, a cache is a place where important material can be stored for quick retrieval. The location of the cache may be quite different from the normal storage location of that material. For example, field commanders in the army may store ammunition in local caches so that their forces can access it quickly in wartime. This ammunition would normally be stored securely, well away from the battlefield, but it must be “highly available” when required. The state of the battlefield may make it difficult to reach outside sources of ammunition during live fire, so a sizable local cache of arms is always wise.
This analogy extends easily to client/server scenarios, where an unreliable or slow data link may give rise to performance issues. A cache, in this case, can be created to store commonly used files locally, rather than retrieving them from a server each time they are requested. The cache approach has the advantage of speeding up client access to data. However, it has the disadvantage of data desynchronization: a file may be modified on the server after a copy of it has been stored in the cache. Thus, if a stale local file retrieved from the cache is modified and then sent back to the server, any modifications made to the server’s copy in the interim would be overwritten.
Cached data may be out-of-date by the time it is retrieved by the local client, meaning that important decisions could be made based on inaccurate information.
Many Internet client/server systems involved in the exchange of data across an HTTP link use a cache to store data. This data is never modified and sent back to the server, so overwriting server-side data is never an issue. Small ISPs with limited bandwidth often use caches to store files that are commonly retrieved from a server. For example, if an ISP has 1,000 customers who routinely download the front page of the Sydney Morning Herald each morning, it makes sense to download the file once from the Sydney Morning Herald web site and store it locally for the other 999 users to retrieve. Since the front page changes only from day to day, the page will always be current as long as the cache purges the front-page file at the end of each day. The daily amount of data downloaded from the Sydney Morning Herald web site is reduced by 99.9 percent, which can significantly boost the ISP’s performance in downloading other noncached files from the Internet and reduce the overall cost of data throughput.
Solaris provides a cache file system (CacheFS) that is designed to improve NFS client/server performance across slow or unreliable networks. The principles underlying CacheFS are exactly the same as the two examples listed previously: locally stored files that are frequently requested can be retrieved by users on the client system without having to download them again from the server. This approach minimizes the number of connections required between an NFS client and server to retrieve the same amount of data, in a manner that is invisible to users on the client system. Users will notice that their files are retrieved more quickly than before the cache was introduced.
Improving speed of access and retrieval is critical for many users.
CacheFS seamlessly integrates with existing NFS installations, with only simple modifications to mount command parameters and /etc/vfstab entries required to make use of the facility. The first task in configuring a cache is to create a mount point and a cache on a client system. If a number of NFS servers are to be used with the cache, it makes sense to create individual caches underneath the same top-level mount point. Many sites use the mount point /cache to store individual caches. In this example, we’ll assume that a file system from the NFS server yorktown will be cached on the local client system midway, so the commands to create a cache on midway are
midway# mkdir /cache
midway# cfsadmin -c /cache/yorktown
Here, we’ve used the cfsadmin command to create the cache once the mount point /cache has been created. Now, let’s examine how we would force the cache to be used for all accesses from midway to yorktown for the remote file system /staff, which is also mounted locally on /staff:
midway# mount -F cachefs -o backfstype=nfs,cachedir=/cache/yorktown yorktown:/staff /staff
Once the yorktown:/staff file system has been mounted in this way, users on midway will not notice any difference in operation, except that file access to /staff will be much quicker.
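To make the cached mount persist across reboots, an entry can also be added to /etc/vfstab on midway, as mentioned earlier. The following line is a sketch of the usual cachefs entry layout (device to mount, device to fsck, mount point, file system type, fsck pass, mount-at-boot flag, and mount options), using the cache directory created previously; check the entry against your own vfstab conventions before relying on it:

yorktown:/staff  /cache/yorktown  /staff  cachefs  3  yes  backfstype=nfs,cachedir=/cache/yorktown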
It is possible to check the status of the cache by using the cachefsstat command. In order to verify that /cache/yorktown is operating correctly, the following command would be used:
midway# cachefsstat /cache/yorktown
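When run against a healthy cache, cachefsstat reports hit-rate and consistency statistics for the cache directory. The figures below are purely illustrative of the general output format, not actual measurements from midway:

/cache/yorktown
         cache hit rate:    95% (1234 hits, 64 misses)
     consistency checks:   234 (234 pass, 0 fail)
               modifies:     0
     garbage collection:     0

A consistently low hit rate may indicate that the cache is too small for the client’s working set, in which case its resource parameters can be adjusted with cfsadmin.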
Alternatively, the cfsadmin command can be used:
midway# cfsadmin -l /cache/yorktown
cfsadmin: list cache FS information
  maxblocks     80%
  minblocks      0%
  threshblocks  75%
  maxfiles      80%
  minfiles       0%
  threshfiles   75%
  maxfilesize   12MB
  yorktown:_staff:_staff
Note the last line, which is the current cache ID. You will need to remember the cache ID if you ever want to delete the cache. If a cache needs to be deleted, the cfsadmin command can be used with the -d option:
midway# umount /staff midway# cfsadmin -d yorktown:_staff:_staff /cache/yorktown
Here, we’ve unmounted the /staff volume on midway locally, before attempting to remove the cache by providing its ID along with its mount point.