Load Balancing

A load balancer is a system or device that redirects incoming traffic to one or more real servers. With only one destination, it amounts to a simple forwarding relay. Commercial solutions, such as the Cisco LocalDirector, offer advanced features and a choice of algorithms for selecting a suitable server, including surveillance of candidate availability via polling or agents/clients (see Figure 12-4). Because of the way they are implemented, these devices are often referred to as reverse-NAT engines or reverse proxies.

Figure 12-4. Integrated Load-Balancing Architecture

[View full size image]

Various approaches to load balancing/distribution exist:

  • Round-robin

  • Weighted round-robin

  • Static weight

  • Least load

  • Measured response time

  • (Weighted) Least connections/users/sessions

  • Least network traffic

  • NAT/PAT-based commercial approaches

  • Server agents (probes)

Most Layer 3/4 load balancers use reverse NAT/PAT in combination with dynamic algorithms that gauge load and probe candidates' operational status to make forwarding decisions. Load balancing directs traffic either to the most available server farm (with feedback control via agents or polling, through intelligent DNS or distributed traffic direction) or to the optimal individual server (through local traffic redirection). Availability systems monitor the health and responsiveness of websites or shared services and direct traffic accordingly.
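The selection policies listed above can be sketched as small Python classes. This is an illustrative sketch with hypothetical names, not the API of any particular product:

```python
from itertools import cycle

class RoundRobin:
    """Cycle through the servers in fixed order, ignoring load."""
    def __init__(self, servers):
        self._it = cycle(servers)

    def pick(self):
        return next(self._it)

class WeightedRoundRobin:
    """Repeat each server in the rotation proportionally to its static weight."""
    def __init__(self, weighted):  # weighted: [(server, weight), ...]
        expanded = [s for s, w in weighted for _ in range(w)]
        self._it = cycle(expanded)

    def pick(self):
        return next(self._it)

class LeastConnections:
    """Pick the server currently holding the fewest open connections."""
    def __init__(self, servers):
        self.conns = {s: 0 for s in servers}

    def pick(self):
        server = min(self.conns, key=self.conns.get)
        self.conns[server] += 1
        return server

    def release(self, server):
        # Called when a client connection to `server` closes.
        self.conns[server] -= 1
```

Measured response time and least network traffic follow the same pattern as `LeastConnections`, only with a different metric driving the `min()` selection.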

According to Cisco.com, server clustering and traffic load balancing offer the following major advantages:

  • Scalability

  • Nondisruptive growth

  • Load distribution

  • Continuous availability

Scalability and nondisruptive growth are somewhat related: both rely on the cluster property that a particular user's session remains bound to a particular cluster member. This allows new cluster members to be added, or existing ones taken down for maintenance, without service interruption. Load distribution refers to spreading load evenly so that physical cluster resources are neither underutilized nor overwhelmed. Finally, continuous availability is ensured by instant switchover to the remaining systems if a cluster member fails.

Firewall Load-Balancing Approaches

Linux iptables, BSD ipfilter, FreeBSD ipfw, and OpenBSD packet filter pf provide features for load balancing incoming requests (see Examples 12-5 and 12-6).

Example 12-5. OpenBSD pf Load Balancer

web_servers = "{,, }"

rdr on ne5_if proto tcp from any to any port 80 -> $web_servers round-robin

Example 12-6. FreeBSD ipfilter Load Balancer

### Single-Port Redirects (one_port-to-one_port) ###

rdr ne5 port 23 -> port 23 tcp

### NAT Load Balancer (round-robin) ###

rdr ne5 port 80 -> port 8000 tcp round-robin

rdr ne5 port 80 -> port 8000 tcp round-robin

rdr ne5 port 80 -> port 8000 tcp round-robin

Note that pf address pools and load balancing are explained at http://www.openbsd.org/faq/pf/pools.html. Also note that OpenBSD pf provides four methods for using an address pool:

  • Bitmask

  • Random

  • Source hash

  • Round-robin
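The difference between these pool methods can be sketched in a few lines of Python. This is an illustrative model of the selection behavior, not pf's implementation; the pool member names and the client address are hypothetical:

```python
import hashlib
import random
from itertools import cycle

POOL = ["web1", "web2", "web3", "web4"]  # hypothetical pool members

_rr = cycle(POOL)

def pick_round_robin():
    """Each new connection goes to the next pool member in turn."""
    return next(_rr)

def pick_random():
    """A pool member chosen at random for each connection."""
    return random.choice(POOL)

def pick_source_hash(client_ip):
    """Hash the client's source address so the same client always maps to
    the same pool member -- a stable mapping, as with pf's source-hash."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return POOL[int.from_bytes(digest[:4], "big") % len(POOL)]
```

The bitmask method is different in kind: it combines the network portion of the pool address with the host portion of the client address rather than scheduling among discrete members.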

HighUpTime Project loadd Daemon

loadd is the load-balancing daemon of the HighUpTime (HUT) Project (http://www.bsdshell.net/hut_loadd.html). It depends on the FreeBSD ipfw firewall and works via a generic divert socket (recompile your kernel with the IPDIVERT and IPFIREWALL options). You can easily configure a divert rule with ipfw to redirect packets into loadd (see Example 12-7).

The loadd daemon checks the destination service and performs reverse NAT on matching traffic according to a choice of two algorithms: round-robin and intelligent load sharing (balancing according to real-time load as reported by the lmd agent on each real server). The daemon communicates with these server agents (lmd), which must be compiled on the real server platforms.
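The agent-driven mode can be sketched as follows. This is a hypothetical model of the idea (agents push load figures, the balancer picks the least-loaded server), not loadd's actual code or protocol:

```python
from itertools import cycle

class AgentBalancer:
    """Forward to the least-loaded server according to agent reports;
    fall back to plain round-robin until every agent has reported."""
    def __init__(self, servers):
        self.servers = servers
        self.loads = {}            # server -> last reported load figure
        self._rr = cycle(servers)

    def report(self, server, load):
        """Called when a server's agent (lmd-style) sends a load update."""
        self.loads[server] = load

    def pick(self):
        if len(self.loads) == len(self.servers):
            return min(self.loads, key=self.loads.get)
        return next(self._rr)      # not all agents heard from yet
```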

According to the author of loadd, it currently includes the following capabilities:

  • Support of TCP (no UDP yet)

  • Support of multiple IP services (except FTP and SSL for the moment)

  • Choice of IP aliasing

  • Choice of port number to load balance

  • Support of multiple daemons on the same host without conflict

  • Support of lmd client modules (HTTP module testing service is provided with this package)

Example 12-7. FreeBSD ipfw+loadd Example

[root@castor:~#] ipfw add divert 8670 tcp from any to any

[root@castor:~#] /usr/local/libexec/loadd -h

LOADD - Load Balancing Daemon for FreeBSD and ipfw

Choose one of the following:

-f : specify a path to a configuration file

-p : specify a port number for loadd

-b : specify a method for load balancing (roundrobbin, loadsharing, intlloadsharing)

     !! Caution !! Due to active development load sharing may be

     broken at this time. The best load-balancing method is intlloadsharing.

-d : run loadd in daemon mode, this is not the default at this time

-v : run loadd in verbose mode

-t : transparent proxy support (default is on and off doesn't work yet)

-h : this screen ;)

[root@castor:~#] /usr/local/libexec/loadd -p 8670 -f /usr/local/etc/loadd.conf

[root@castor:~#] cat loadd.conf

servers =,,

ipaliasing =

ports = 80

verbosemode = yes

balancingmode = intlloadsharing

daemon = yes

transparent_proxy = yes

[root@castor:~#] cat lmd.conf

loaddservers =

# Just set loaddservers to the IP of your loadd server. You can specify multiple

# loadd servers with the ',' separator.

Pure Load Balancer

Pure Load Balancer (PLB, http://plb.sunsite.dk/) is a performance-optimized user-space plain round-robin load balancer for the HTTP and SMTP protocols only. It provides failover abilities while operating as a reverse HTTP proxy. When a back-end server goes down, it automatically removes it from the server pool and tries to bring it back to life later.

PLB has full IPv6 support and works on OpenBSD, NetBSD, FreeBSD, Mac OS X, and Linux. As an added performance benefit, PLB accepts client sessions and buffers their initial request; only after the full request has been received does it establish a connection to a back-end web server.
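This buffering behavior keeps slow clients from occupying back-end slots. A minimal sketch of the idea (not PLB's code; the function names are hypothetical, and a bodyless HTTP request is assumed):

```python
def request_complete(buf: bytes) -> bool:
    """An HTTP request without a body is complete once the blank line
    terminating the header block has arrived."""
    return b"\r\n\r\n" in buf

def buffer_request(chunks):
    """Accumulate client bytes (chunks: iterable of byte strings as read
    from the client socket) and return the full request; only then would
    the proxy open its back-end connection."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        if request_complete(buf):
            return buf      # safe to dial the back-end server now
    return None             # client hung up mid-request
```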

PLB can be started via /usr/local/bin/plb --daemonize --config /etc/plb.conf &. Example 12-8 shows a configuration derived from the one presented at http://plb.sunsite.dk/.

Example 12-8. Pure Load Balancer Example Configuration

# This is the IP address and port that the load balancer answers on.

# To listen to all interfaces, just use for the IP address.


listen_port                  80

# Bind family. 0 for IPv4, 1 for IPv6.

bind_ipv6                    0

# Protocol to balance : HTTP (SMTP is not implemented yet)

protocol                     HTTP

# IP addresses of the real web servers. Use space as a separator.

# IPv4 and IPv6 addresses are allowed.


servers_port                 80

# After binding ports, the load balancer chroots to an empty (recommended)

# directory and drops privileges.

user                         nobody

group                        nobody

chroot_dir                   /var/empty

# Timeouts to prevent clients from tying up unneeded slots on your servers

# with idle connections. Values are in seconds.

timeout_header_client_read   30

timeout_header_client_write  30

timeout_header_server_read   30

timeout_header_server_write  30

timeout_forward_client_read  30

timeout_forward_client_write 30

timeout_forward_server_read  30

timeout_forward_server_write 30

# When a server goes down, the load balancer will try to probe it at regular

# intervals to bring it back to life.

# This is the delay, in seconds, between probes.

timeout_cleanup              15

# Really mark a server down after this many consecutive failures.

server_retry                 5

# The total maximum number of clients to allow

max_clients                  1000

# The backlog. Try something like (max_clients / 10) for extreme cases.

backlog                      100

# The log file verbosity.

# 0 => everything, including debugging info (not recommended)

# 1 => all errors, all warnings, all common notifications

# 2 => all errors, all warnings

# 3 => quiet mode, fatal errors only

# Default is 2. Leave this commented if you want to override it with

# the -d command-line switch.

# log_level                    2

# log_file                     /var/log/plb.log
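The failure and revival policy expressed by server_retry and timeout_cleanup can be sketched as follows. This is an illustrative model of the configured behavior, not PLB's implementation:

```python
class BackendHealth:
    """Mark a server down only after `server_retry` consecutive failures;
    a downed server becomes due for a revival probe every
    `timeout_cleanup` seconds."""
    def __init__(self, server_retry=5, timeout_cleanup=15):
        self.server_retry = server_retry
        self.timeout_cleanup = timeout_cleanup
        self.failures = {}        # server -> consecutive failure count
        self.down = {}            # server -> time it was marked down

    def record(self, server, ok, now):
        """Record the outcome of a connection attempt at time `now`."""
        if ok:
            self.failures[server] = 0
            self.down.pop(server, None)     # back in the pool
        else:
            self.failures[server] = self.failures.get(server, 0) + 1
            if self.failures[server] >= self.server_retry:
                self.down.setdefault(server, now)

    def due_for_probe(self, server, now):
        """True once the probe interval has elapsed for a downed server."""
        return (server in self.down
                and now - self.down[server] >= self.timeout_cleanup)
```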

The PEN Load Balancer

PEN (http://siag.nu/pen/) is another load balancer for "simple" TCP-based protocols such as HTTP. It maintains connection integrity statefully: a client's session stays bound to the real server initially chosen for it behind the virtual service.
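Such client tracking can be sketched as a table on top of round-robin assignment. A minimal illustration of the idea, not PEN's actual code:

```python
from itertools import cycle

class StickyBalancer:
    """First connection from a client is assigned round-robin; later
    connections from the same client reuse that assignment."""
    def __init__(self, servers):
        self._rr = cycle(servers)
        self.table = {}            # client address -> chosen real server

    def pick(self, client):
        if client not in self.table:
            self.table[client] = next(self._rr)
        return self.table[client]
```

A production balancer would also expire idle table entries so that clients can be rebalanced over time.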

Super Sparrow

Super Sparrow (http://www.supersparrow.org/) handles geographically weighted load balancing and direction by means of accessing BGP routing information (for instance, from a route server), which is a major advantage over DNS-based approaches. This enables the tool to reliably and effectively identify the site closest to the requesting client. The concept is intriguing; I am not sure how actively maintained the project is, so form your own opinion if you want to deploy it. It's definitely worth looking at the architecture.

Cisco Gateway Load Balancing Protocol (GLBP)

GLBP works similarly to HSRP/VRRP approaches for servers. It protects a network from failing gateway nodes and circuits with the added capability of load sharing among a group of redundant routers. GLBP implements a weighted round-robin approach to load sharing for a group of gateways. This feature was introduced in Cisco IOS 12.2 (S/T) releases.

A requirement for GLBP is support for multiple MAC addresses on physical router interfaces. The notable difference from HSRP/VRRP is the protocol's capability to actively load share while presenting a single virtual first-hop IP router; hence, only one default gateway needs to be configured on the client workstations.
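The mechanism behind this can be sketched as follows: all hosts ARP for the one virtual IP address, and the active virtual gateway answers each ARP request with the virtual MAC of a forwarder chosen by weighted round-robin. An illustrative model only, with hypothetical MAC labels, not Cisco's implementation:

```python
class GlbpAvgSketch:
    """Model of a GLBP active virtual gateway (AVG) spreading hosts
    across forwarders by handing out virtual MACs in ARP replies."""
    def __init__(self, forwarders):  # forwarders: [(virtual_mac, weight), ...]
        # Weighted round-robin schedule: each MAC repeated by its weight.
        self.schedule = [m for m, w in forwarders for _ in range(w)]
        self.i = 0

    def arp_reply(self, host):
        """Return the virtual MAC handed to this host's ARP request
        for the shared virtual gateway IP."""
        mac = self.schedule[self.i % len(self.schedule)]
        self.i += 1
        return mac
```

Each host then sends its off-subnet traffic to the gateway owning the virtual MAC it received, so load sharing happens without per-host configuration.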


In many respects, the protocol has inherited proven approaches from HSRP/VRRP. For configuration details, consult the Cisco.com document "GLBP: Gateway Load Balancing Protocol."