10.9 Cache Array Routing Protocol

CARP is an algorithm that partitions URI-space among a group of caching proxies. In other words, each URI is assigned to one of the caches. CARP maximizes hit ratios and minimizes duplication of objects among the group of caches. The protocol consists of two major components: a Routing Function and a Proxy Array Membership Table. Unlike ICP, HTCP, and Cache Digests, CARP can't predict whether a particular request will be a cache hit. Thus, you can't use CARP with siblings, only parents.

The basic idea behind CARP is that you have a group, or array, of parent caches to handle all the load from users or child caches. A cache array is one way to handle ever-increasing loads. You can add more array members whenever you need more capacity. CARP is a deterministic algorithm. That is, the same request always goes to the same array member (as long as the array size doesn't change). Unlike ICP and HTCP, CARP doesn't use query messages.

Another interesting thing about CARP is that you have the choice to deploy it in a number of different places. For example, one approach is to make all user-agents execute the CARP algorithm. You could probably accomplish this with a Proxy Auto-Configuration (PAC) function, written in JavaScript (see Appendix F). However, you're likely to have certain web agents on your network that don't implement or support PAC files. Another option is to use a two-level cache hierarchy. The lower level (child caches) accept requests from all user-agents, and they execute the CARP algorithm to select the parent cache for each request. However, unless your network is very large, many caches can be more of a burden than a benefit. Finally, you can also implement CARP within the array itself. That is, user-agents connect to a random member of the cache array, but each member forwards cache misses to another member of the array based on the CARP algorithm.

CARP was designed to be better than a simple hashing algorithm, which typically works by applying a hash function, such as MD5, to URIs. The algorithm then calculates the modulus for the number of array members. It might be as simple as this pseudocode:

N = MD5(URI) % num_caches;
next_hop = Caches[N];

This technique uniformly spreads the URIs among all the caches. It also provides a consistent mapping (maximizing cache hits), as long as the number of caches remains constant. When caches are added or removed, however, this algorithm changes the mapping for most of the URIs.
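The remapping problem is easy to demonstrate. The following Python sketch (the URIs and array sizes here are arbitrary examples) routes URIs with the simple hash-modulo scheme, then adds a fifth cache and counts how many URIs end up assigned to a different cache:

```python
import hashlib

def simple_hash_route(uri, num_caches):
    """Map a URI to a cache index: MD5 of the URI, modulo the array size."""
    digest = hashlib.md5(uri.encode()).hexdigest()
    return int(digest, 16) % num_caches

uris = ["http://example.com/page%d" % i for i in range(1000)]

before = [simple_hash_route(u, 4) for u in uris]
after = [simple_hash_route(u, 5) for u in uris]   # one cache added

moved = sum(1 for b, a in zip(before, after) if b != a)
print("URIs reassigned: %d of %d" % (moved, len(uris)))
```

Roughly four out of five URIs move to a different cache, so most previously cached objects become misses after the change.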

CARP's Routing Function improves on this technique in two ways. First, it allows for unequal sharing of the load. For example, you can configure one parent to receive twice as many requests as another. Second, adding or removing array members minimizes the fraction of URIs that get reassigned.

The downside to CARP is that it is relatively CPU-intensive. For each request, Squid calculates a "score" for each parent. The request is sent to the parent cache with the highest score. The complexity of the algorithm is proportional to the number of parents. In other words, CPU load increases in proportion to the number of CARP parents. However, the calculations in CARP have been designed to be faster than cryptographic hash functions such as MD5.
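The score-and-select idea can be sketched as follows. This is not Squid's actual implementation: it substitutes MD5 for CARP's faster specialized hash, and it applies the load factor as a simple multiplier rather than CARP's exact weighting formula. The member names and URIs are made-up examples.

```python
import hashlib

def member_score(uri, member_name, load_factor):
    """Hash the member name together with the URI, weighted by load factor.
    (Sketch only: real CARP uses a cheaper hash and a different
    load-factor weighting derived from the relative loads.)"""
    h = hashlib.md5((member_name + uri).encode()).hexdigest()
    return load_factor * int(h, 16)

def select_parent(uri, members):
    """members is a list of (name, load_factor) pairs.
    Return the name of the highest-scoring array member."""
    return max(members, key=lambda m: member_score(uri, m[0], m[1]))[0]

array = [("neighbor1", 0.3), ("neighbor2", 0.3), ("neighbor3", 0.4)]
print(select_parent("http://example.com/index.html", array))
```

Because the score depends only on the URI and the member list, the same request always selects the same parent, and removing one member reassigns only the URIs that member previously won.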

In addition to the load-sharing algorithm, CARP also has a protocol component. The Membership Table has a well-defined structure and syntax so that all clients of a single array can have the same configuration. If some clients are configured differently, CARP becomes less useful because not all clients send the same request to the same parent. Note that Squid doesn't currently implement the Membership Table feature.

Squid's CARP implementation is lacking in another way. The protocol says that if a request can't be forwarded to the highest-scoring parent cache, it should be sent to the second-highest-scoring member. If that also fails, the application should give up. Squid currently uses only the highest-scoring parent cache.

CARP was originally documented as an Internet Draft in 1998, which is now expired. It was developed by Vinod Valloppillil of Microsoft and Keith W. Ross of the University of Pennsylvania. With a little searching, you can still find the old document out there on the Internet. You may even be able to find some documentation on the Microsoft sites. You can also find more information on CARP in my O'Reilly book Web Caching.

10.9.1 Configuring Squid for CARP

To use CARP in Squid, you must first run the ./configure script with the --enable-carp option. Next, you must add carp-load-factor options to the cache_peer lines for parents that are members of the array. The following is an example:

cache_peer neighbor1.host.name parent 3128 0 carp-load-factor=0.3
cache_peer neighbor2.host.name parent 3128 0 carp-load-factor=0.3
cache_peer neighbor3.host.name parent 3128 0 carp-load-factor=0.4

Note that all carp-load-factor values must add up to 1.0. Squid checks for this condition and complains if it finds a discrepancy. Additionally, the cache_peer lines must be listed in order of increasing load factor values. Only recent versions of Squid check that this condition is true.
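The two constraints can be expressed as a small check. This is a hypothetical helper for illustration, not code from Squid:

```python
def check_carp_load_factors(factors):
    """Mirror the two sanity checks described above (hypothetical helper):
    the carp-load-factor values must sum to 1.0, and the cache_peer
    lines must appear in increasing load-factor order."""
    if abs(sum(factors) - 1.0) > 1e-6:
        raise ValueError("carp-load-factor values must sum to 1.0")
    if factors != sorted(factors):
        raise ValueError("list cache_peer lines in increasing "
                         "load-factor order")

check_carp_load_factors([0.3, 0.3, 0.4])   # the example above is valid
```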

Remember that CARP is treated somewhat specially with regard to a neighbor's alive/dead state. Squid normally declares a neighbor dead (and ceases sending requests to it) after 10 failed connections. In the case of CARP, however, Squid skips a parent that has one or more failed connections. Once Squid is working with CARP, you can monitor it with the carp cache manager page. See Section for more information.
