In Windows Server 2003, there are two clustering solutions: Network Load Balancing (NLB) and the Cluster Service. Both were available in earlier versions of Windows, but both have been updated in Windows Server 2003. Although each is classed as a cluster solution, they work in different ways and have different advantages and potential uses:
Cluster Service can be used to provide machine-level backup to a system in the event of failure. Typically, it's used within datacenters and enterprise server configurations where you need 100% availability. Clusters can be configured in a number of different ways, but always with one goal in mind: for one machine to take over the responsibilities of another if it fails.
Network Load Balancing is a software-only solution for distributing requests across a number of servers within an NLB cluster. It provides basic failover support by redirecting requests only to currently active machines, and load balancing by spreading requests among the machines to make the best use of the overall horsepower.
A third technology, Component Load Balancing (CLB), can be added to Windows Server 2003 through Microsoft Application Center 2000. Unlike the other technologies, which support clusters irrespective of the specific applications you might be running, CLB works at the application level.
Using CLB, individual COM+ (Component Object Model) components reside on a number of separate servers within a COM+ cluster. This enables you to distribute the workload of a single business application across multiple servers. CLB automatically routes calls to individual COM+ components within the COM+ cluster. It can also be combined with NLB and the Cluster Service to provide an additional tier of load balancing within a large Web farm. Refer to the documentation on Microsoft's site for more information on Application Center 2000.
Table A.1 shows the cluster services supported by the different operating systems. Note that with Windows Server 2003, there is not a huge amount of disparity between the versions. If you need true clustering services, you need Enterprise or Datacenter Editions, whereas NLB is supported by all versions.
This is a marked change from Windows 2000 and Windows NT. Previously, you needed the Advanced (now Enterprise) Edition of Windows 2000 to get NLB. This change again shows Microsoft responding to market demand from lower-end installations and server farms, where NLB would be useful but the additional features of the Enterprise Edition would be wasted.
Not all services within Windows Server 2003 can be clustered, and in many cases it doesn't make sense for some elements to be supported by the cluster services. For example, remote access is not a critical service, so providing fail-over support is not required. For load distribution, generally the number of physical modems connected to a server will be the limiting factor.
Instead, cluster services concentrate on two main areas:
Internal services, such as distributed file systems, DHCP, and WINS
Public services, such as IIS and message queuing
In addition, the clustering types supported by each service are dependent on how the individual service is normally used. For example, with IIS it makes sense to support the cluster service, to provide resiliency, and to provide NLB for request distribution. File services, however, are only supported by the cluster service because there is no way to reliably exchange information about open files between two machines, even if the files are on a shared device.
For a full list of the services that can be clustered, see Table A.2.
Internet Information Services (IIS)
Distributed File System (DFS) roots
Distributed Transaction Coordinator
Print spooler (see the following note)
Volume Shadow Copy Service tasks
LOAD BALANCED PRINTERS
You can, technically, cluster print spools by having two servers that both print to the same network-attached print device. In practice, of course, you've still only got one device actually handling the printing, so the benefits are never fully realized.
Although IIS is supported in clustering, there are few instances when it makes sense to do so. It makes more sense to use multiple IIS servers and NLB: you'll get both redundancy and load balancing.
Network Load Balancing provides both failover and load balancing for IIS. The system works through a standard network connection on each machine. Each member of the cluster is configured to use one or more shared IP addresses, in addition to its personal IP address. This means that all members of the cluster receive the request from a client, but only one member responds.
The decision for which machine should respond is based on a set of internal rules and customizable affinity rules. All members of the cluster exchange system load information, which is used by the NLB system to choose the member to process the request.
Failover support is provided by NLB through this exchange of information. Any members that have not communicated their status are removed from the equation.
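The host-selection scheme just described can be sketched in a few lines of Python. This is an illustrative model only, not Microsoft's actual NLB algorithm: the `NLBMember` class, the MD5-based hash, and the five-second timeout are all assumptions chosen to show the key idea, namely that every member can independently compute the same answer from the client's address and the current list of live members.

```python
# Illustrative sketch (not Microsoft's actual algorithm): each NLB member
# applies the same deterministic hash to the client IP, so all members
# independently agree on which one answers, with no central dispatcher.
import hashlib
import time

class NLBMember:
    def __init__(self, host_id):
        self.host_id = host_id
        self.last_heartbeat = time.time()

def live_members(members, timeout=5.0, now=None):
    """Members that have communicated recently; silent ones drop out."""
    now = time.time() if now is None else now
    return [m for m in members if now - m.last_heartbeat <= timeout]

def choose_member(members, client_ip, timeout=5.0, now=None):
    """Pick the single member that should answer this client's request."""
    candidates = sorted(live_members(members, timeout, now),
                        key=lambda m: m.host_id)
    if not candidates:
        raise RuntimeError("no live members in the NLB cluster")
    digest = hashlib.md5(client_ip.encode()).digest()
    return candidates[int.from_bytes(digest[:4], "big") % len(candidates)]
```

Because every member sorts the same live list and hashes the same client address, they all agree on the responder without any central dispatcher; a member that stops sending heartbeats simply drops out of the candidate list.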
The primary improvement in Windows Server 2003 for Network Load Balancing is the move to a single administration application, called the NLB Manager. This greatly simplifies the setup of an NLB cluster. Unlike Windows 2000, you no longer have to set up each machine individually.
Instead, you create the NLB cluster on one machine. Individual members of the cluster can be added from within NLB Manager without the need to visit each machine.
The NLB Manager handles all aspects of the configuration for all members within the cluster, automatically propagating changes to each member. Furthermore, NLB Manager enables you to manage multiple clusters simply by connecting to an existing cluster.
You can see the NLB Manager in action, showing the status of a newly created cluster, in Figure A.1.
Two other features change the way NLB works compared to Windows 2000: Virtual Clustering and multiple NIC support.
Past versions of NLB spread requests across a cluster according to the IP address and port address range on a global basis. Although technically this made administration of the cluster easier, it also limited a cluster to a very specific range of Web sites. In particular,
Each member of the cluster was limited to supporting the set of traffic defined by the cluster.
All members of the cluster had to support traffic for all the Web sites or applications they hosted, even if you didn't want all Web sites to be load balanced.
You could only block all applications on a cluster member, not just specific applications.
To address these problems, Windows Server 2003 includes a new feature called Virtual Clusters. Virtual Clusters take into account the preceding problems and provide a number of solutions:
Cluster IP addresses can be configured with different port address ranges, allowing one cluster IP address to redirect to a particular application being hosted on a specific port on each member. For example, IP address 192.168.1.20 could refer to port 80 hosted Web sites on the cluster members, whereas 192.168.1.21 refers to port 8080 sites.
Traffic for a Web site or application can be filtered out on a per-member basis, allowing upgrades on a single member within the cluster to take place without shutting down all other applications on that member.
Cluster member level affinity allows you to assign different hosts within the same cluster to handle specific Web sites or applications. For example, AppOne could be hosted by members one, two, and four, whereas AppTwo is handled by members two, three, and five.
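The port-rule behavior in the list above can be modeled with a small Python sketch. The rule table and the `eligible_members` and `drain_member` helpers are hypothetical names invented for illustration, not part of any Microsoft API; the IP addresses and member sets mirror the examples given in the text.

```python
# Hypothetical port-rule table in the spirit of Virtual Clusters: each
# cluster IP address carries its own port range and its own subset of
# members, so different sites on the same hosts can be balanced separately.
PORT_RULES = [
    # (cluster IP, port range, members allowed to serve it)
    ("192.168.1.20", range(80, 81),     {1, 2, 4}),   # port 80 Web sites
    ("192.168.1.21", range(8080, 8081), {2, 3, 5}),   # port 8080 sites
]

def eligible_members(cluster_ip, port):
    """Members that may answer traffic for this IP/port combination."""
    for ip, ports, members in PORT_RULES:
        if ip == cluster_ip and port in ports:
            return members
    return set()  # no matching rule: traffic is not load balanced

def drain_member(host_id):
    """Filter one member out of every rule, e.g. before an upgrade."""
    global PORT_RULES
    PORT_RULES = [(ip, ports, members - {host_id})
                  for ip, ports, members in PORT_RULES]
```

Draining member 2 before an upgrade removes it from every rule while members 1 and 4 continue serving port 80, and members 3 and 5 continue serving port 8080; the other applications on the remaining members never stop.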
Within Windows 2000, the NLB service could not be bound to more than one network interface card (NIC). This limited clusters to handling a specific set of Web sites within a given IP address and hardware environment.
In Windows Server 2003, each NIC is attached individually to the cluster, allowing you either to connect multiple NICs to the same cluster or to configure multiple clusters using the same machines but different NICs.
Cluster service provides failover capability for two or more servers within a given cluster. It cannot be used to improve the performance or response times for an application.
Typically, the nodes in a cluster are connected to the same shared storage solution, such as a RAID device, which is used not only to store user data, but also to share the quorum, which contains information about the cluster and how it operates. Nodes are not attached to each other except through the network and the shared storage device.
Each node in the cluster communicates a heartbeat to the other nodes, which indicates the node's availability. The moment the heartbeat of the primary node dies, the next available node in the cluster takes over the services the primary node was handling.
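The heartbeat mechanism can be reduced to a short Python sketch. The five-second timeout and the fixed failover order are assumptions for illustration; the real Cluster service negotiates ownership through the quorum, but the basic idea, that the next available node takes over when the primary goes silent, looks like this:

```python
# Minimal sketch of heartbeat-based failover: nodes publish a timestamp,
# and when the primary goes silent the next available node takes over.
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before a node is presumed dead

def alive(heartbeats, node, now):
    """A node is alive if it has sent a heartbeat within the timeout."""
    return now - heartbeats[node] <= HEARTBEAT_TIMEOUT

def active_node(node_order, heartbeats, now=None):
    """First node in the failover order that still has a live heartbeat."""
    now = time.time() if now is None else now
    for node in node_order:
        if alive(heartbeats, node, now):
            return node
    return None  # the whole cluster is down
```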
Most OEMs will set up and provide clusters using approved hardware, preconfigured and tested to support the application you have in mind. Personally, I'd recommend this as the best route to a new cluster, because getting it wrong can lead to data corruption and unstable services.
The biggest change in the Cluster service is that both Enterprise Edition (previously Advanced Server) and Datacenter Edition support 8-node clusters. However, the Cluster service will only create clusters of nodes running the same OS edition: that is, all nodes in a cluster must be running either Enterprise or Datacenter Edition; you cannot mix and match the two.
Other new enhancements, briefly, are
64-bit memory support in both Enterprise and Datacenter Editions, allowing for up to 4TB of memory per node, particularly useful for SQL Server installations.
Terminal Server support, although active sessions cannot be migrated during a failover.
Majority Node Set (MNS) clusters enable clusters to be set up without using a shared storage device. Instead, Microsoft supplies the quorum resource. This allows for geographically dispersed clusters: for example, two database servers in different locations, cities, or even countries. You can use the same system in installations in which the ultimate storage requirement is not critical, for example, in a network-oriented system where data is ultimately transferred to or logged to another device. However, because there is no shared storage device, it's not possible to share user data across the cluster.
The Cluster service is also now installed by default (although not activated), so there is no separate installation step.
Remote Administration enables all aspects of the cluster to be configured remotely. Changes to drive letters and physical disks are also replicated to active terminal server client sessions.
The command line tool, cluster.exe, enables scripting and automation for cluster management.
Support for a larger quorum, 4MB instead of just 64KB, allows for more file or printer shares.
Active Directory (AD) integration. Clusters can now be registered in AD as a single computer object; clusters can be identified, published, administered, and accessed by their cluster name rather than their individual node names. Because the cluster appears as a single computer, Kerberos authentication can be enabled on the cluster.
Network status is taken into account when deciding which node to switch to in the event of failover. Previously, if a node lost network communication, it would retain control of the cluster even though other nodes in the cluster couldn't connect to it. Now, a node must have an active public network interface before gaining control of a cluster.
Rolling upgrades allow nodes to be taken offline and upgraded while the other nodes continue to provide failover support, meaning that there is less downtime.
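The voting rule behind Majority Node Set clusters, mentioned in the list above, is simple enough to state as code. This one-liner is a sketch of the general majority principle, not Microsoft's implementation: a partition may keep running the cluster only if it can reach more than half of the configured nodes, which is one reason MNS clusters are normally built with an odd number of nodes.

```python
# Sketch of the Majority Node Set rule: with no shared quorum disk, the
# cluster keeps running only while a strict majority of the configured
# nodes can communicate, so a split partition can never run in two places.
def has_quorum(total_nodes, reachable_nodes):
    """True if this partition may continue operating the cluster."""
    return reachable_nodes > total_nodes // 2
```

Note that under this rule a two-node cluster cannot survive the loss of either node, whereas a three-node cluster keeps running with two.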
WANT TO KNOW MORE ABOUT CLUSTERING?
If you're interested in learning more details about how clustering has changed in Windows Server 2003, check out Microsoft Windows Server 2003 Delta Guide (ISBN: 0-7897-2849-4).