As soon as there is more than one terminal server in a corporate environment, the issue of load balancing at the server level arises. How can the system be set up so that users no longer have to pick one of the available terminal servers and log on to it directly, without knowing how heavy its load is at the time?
In an environment with several terminal servers, these servers can be grouped into logical units known as server farms. First, these server farms create a logical connection between the individual servers, making it easier to manage them jointly. Second, load-balancing mechanisms can be established within server farms. To a client, a farm appears as a single logical unit with a unique name. The component responsible for balancing the load within a farm uses a suitable algorithm to redirect each incoming connection to the most suitable server in the load-balancing network.
There are several different technologies connected with load balancing on terminal servers. Each addresses different requirements relating to availability, scalability, and support for special functions.
Important
Most load-balancing mechanisms only function appropriately when the terminal servers in a farm are configured with identical software and hardware. If they are not identical, users experience inconsistent performance when using several differently configured terminal server connections. The installation of Windows Server 2003 and applications for terminal servers was described in detail in Chapter 2 and Chapter 5. In addition, for terminal servers to function in a load-balancing network, user data, user profiles, and home directories must be stored on dedicated file servers. None of the production terminal servers should assume additional tasks, such as those of Web servers, database servers, or print servers. Only then will the environment have the optimum configuration for a terminal server farm.
Naturally, the first port of call when looking for a solution for load balancing between terminal servers is Microsoft. After all, the producer of Terminal Services ought to have an intimate knowledge of the elements required for establishing a server farm. Regrettably, however, there is no explicit function to fulfill this purpose. Only when you take a more “creative” approach to the search do you come across the Microsoft Windows cluster technology, which, at first glance, does not seem to have anything to do with terminal servers. On closer inspection, though, this technology offers at least a minimum solution for a terminal server farm.
The Windows cluster technology is an integrated option for raising the availability and scalability of system services. Across the various Windows Server 2003 products and versions, three different cluster options are available. However, only the first of the options listed here is truly relevant for terminal servers:
Network Load Balancing Service (NLBS) Available in all versions of Windows Server 2003. The maximum number of servers is 32. It is commonly used to balance TCP and UDP traffic for terminal servers, Web servers, Internet Security and Acceleration (ISA) servers, Windows Media Servers, and Mobile Information Servers.
Component Load Balancing (CLB) Available in Microsoft Application Center 2000. The maximum number of servers is 12. This technology is used to establish a single configuration and administration point for Web server farms. However, it is irrelevant for terminal servers.
Server clusters Available in Windows Server 2003, Enterprise Edition and Datacenter Edition. The maximum number of servers is eight. Special hardware components can be used to connect Microsoft SQL Servers, Microsoft Exchange Servers, file servers, or print servers so that, should one server fail, another assumes all of the required processes. Processes continue in the same status as before the failure. The cluster service is not compatible with the Terminal Server service and can be used only on the periphery of terminal servers.
The concept of server farms is extremely important for terminal servers—even though the Network Load Balancing Service is more effective with Web servers. Still, terminal servers installed in an identical manner can be integrated into one server farm using NLB. An incoming client request is then distributed to one server in the farm. The appropriate load for the individual servers in the farm can be configured in relatively coarse terms. If the total operational load increases over time, additional servers can be added to the farm. This reduces the load on the individual servers and means easy scalability for the entire environment.
If a server fails or is shut down unexpectedly, the respective user sessions are often lost. But, if they log on again, users are immediately redirected to another server available on the load-balancing network, where they can carry on working. Of course, for this to function, the user data must not be stored on the terminal servers; it must be managed by specialized data servers that are attached to the system, for example, a file server or database server. Only then will all of the data entered before the last time the save function was used be available when the user logs back on. That is why it is always advisable when installing applications to make sure that the automatic save function is enabled for user data and set at relatively short intervals.
Note
In a traditional client/server model, a Network Load Balancing Service environment increases both the failover capabilities of the user interface components and the administration complexity of the application layer. Generally, terminal servers or Web servers are clustered with the Microsoft Network Load Balancing Service. The data layer servers, on the other hand, are usually configured as server clusters. These two cluster solutions, that is, load-balanced servers and clustered servers, must not be confused. See Chapter 1 for common client/server models.
The mechanisms of the Microsoft Network Load Balancing Service are relevant for terminal servers only at the time of user logon. The actual load on the individual servers at that time is not measured by the Network Load Balancing Service, so a more appropriate name for the procedure might be connection balancing. Once the user session has been established via the corresponding protocol, RDP or ICA, the communication between terminal server and Terminal Services client takes place without any further load balancing. The dedicated network connection usually remains the same for the whole duration of the session. The user session therefore stays on the same server for its entire life cycle, even if other servers in the farm are experiencing a lower load at certain times, which would make them more suitable for additional client connections. Users who log off and log back on might be connected to another server by the Network Load Balancing Service.
An exception to this rule occurs when the connection is intentionally or unintentionally interrupted. Depending on the preset configuration, the respective client session might remain open on its terminal server and be used again once the connection is re-established. Obviously, load balancing does not make sense here, because it is highly likely that a different server will be selected by the load-balancing service when the connection is re-established. That would result in a new user session being started on another terminal server while the interrupted user session remains active in the memory of the original server. In that case, documents still open in the first, now unused, session could not be opened from the second session, even with the appropriate permissions. To solve this problem, Microsoft developed the Session Directory with Windows Server 2003. (See later in this chapter for more details.)
In Windows Server 2003, the Network Load Balancing Service is implemented as a standard network driver. The latest version was designed to use Ethernet adapters at 10 megabits per second (Mbps), 100 Mbps, and 1 gigabit per second. It is not compatible with Asynchronous Transfer Mode (ATM) or Token Ring. For optimum performance and easier configuration, it is advisable to install a second network adapter in each server. The first network adapter processes the usual network traffic using the Network Load Balancing Service and the cluster IP address, or the corresponding logical name of the server farm. The second network adapter enables direct communication with the other terminal servers in the farm, the data layer servers, and administrators, using the server's separate physical IP address.
The Network Load Balancing Manager, found under the Start\Administrative Tools menu group, is used to establish and enable a Network Load Balancing Service (NLBS) cluster. The cluster parameters include the virtual IP address, the subnet mask, and the full Internet name of the cluster. The port rules that determine the port range, protocols, and filter mode are also configured with this tool. After these settings are properly configured, the connection is established with the servers that are to become part of the new NLBS cluster.
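Once a cluster has been created, its state can also be verified from the command line. The following is a minimal sketch, assuming the nlb.exe utility of Windows Server 2003 (the successor to the older wlbs.exe):

    rem Show the state of the cluster and its member hosts
    nlb query

    rem Display the full local configuration, including the port rules
    nlb display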
If this configuration work is carried out on a server that has only one network card (Unicast mode), the Network Load Balancing Manager cannot configure and manage other servers from this server. Adding a new server to an existing cluster must then be done locally on that server.
If the configuration is to be done without the Network Load Balancing Manager, the properties of the Internet protocol (TCP/IP) are modified manually for the network adapter of each terminal server under Start\Control Panel\Network connections\LAN connections. In the process, an additional IP address is added, which is the virtual address of the cluster. Additionally, the Network Load Balancing Service for the network adapter must be enabled and properly configured. This must include the cluster parameters described earlier (such as virtual IP address, subnet mask, and Internet name), host parameters, and port rules.
Note
The network adapter of a server that is to become a member of a Network Load Balancing Service cluster is not permitted to receive its IP address from a Dynamic Host Configuration Protocol (DHCP) server. The IP address must be statically assigned to guarantee that the connection will be made using the load-balancing mechanism.
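The static address and the additional virtual cluster address can also be assigned from the command line. A minimal sketch using netsh; the adapter name and all IP addresses are placeholder examples:

    rem Assign a static IP address, subnet mask, and default gateway to the adapter
    netsh interface ip set address "LAN connection" static 192.168.10.11 255.255.255.0 192.168.10.1 1

    rem Add the virtual IP address of the cluster as a second address
    netsh interface ip add address "LAN connection" 192.168.10.100 255.255.255.0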
The port rules in a Network Load Balancing Service cluster determine values for affinity and load weight within the filtering mode. Using the Single or Class C affinity options for multiple hosts, you can define a rule requiring that multiple requests from a particular client IP address always be redirected to the same server in the cluster. Obviously, this is not very useful for terminal servers. For terminal servers, the None affinity option should be used so that the server experiencing the lightest load is selected.
The load weight setting in the host properties determines the relative share of network traffic that each individual server in the cluster handles. The permissible values lie between 0 and 100. In this way, even servers with different performance levels can be included in an NLBS cluster. The actual share of data flow assigned to each server is calculated as the local load weight divided by the total of all load weights in the cluster.
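For example, if server A is assigned a load weight of 100 and server B a load weight of 50, server A receives 100 / (100 + 50), or about 67 percent, of the incoming connections, and server B the remaining 33 percent.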
Tip
To make sure that the latest data is displayed after any configuration modifications, the cluster needs to be refreshed in the Network Load Balancing Manager.
If you study the descriptions of the Network Load Balancing Service in more detail, you will find that it was developed not for use with terminal servers, but for balancing stateless Web server connections. For large terminal server farms, the Microsoft Network Load Balancing Service is therefore clearly not the best solution. This is partly because NLBS supports no more than 32 nodes. Another reason is that available hardware solutions or terminal server-specific software products for load balancing are more powerful than the general load-balancing functions of Windows Server 2003.
The Session Directory is a new function introduced in Windows Server 2003. It allows users of a load-balanced terminal server farm to reconnect to a disconnected session in a manner that is reproducible and secure. To this end, the Session Directory is compatible with the Network Load Balancing Service in Windows Server 2003. It also supports load-balancing technologies from third-party manufacturers, such as F5 Networks, Alteon, or Radware, which clearly hold greater potential than Microsoft’s Network Load Balancing Service.
Note
In principle, the Session Directory service can be used with all versions of Windows Server 2003. However, to be able to participate in a Session Directory, Windows Server 2003 Enterprise Edition or Datacenter Edition must be installed on the target platform. This is true for both the 32-bit and the 64-bit versions.
From a technical point of view, the Session Directory is a database. It manages a list of user names in correlation with the sessions in a terminal server farm. The database can be located on a terminal server in the farm or on a separate server on the network.
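A purely hypothetical excerpt illustrates the principle; the actual database contains additional fields, such as the times at which sessions were created and disconnected:

    User name   Terminal server   Session ID   State
    JSMITH      TERMSRV03         2            Disconnected
    AMILLER     TERMSRV01         5            Active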
Following user authentication in the terminal server farm, the Session Directory is searched for the relevant user’s logon name. If the database already contains a session for this user, the user will be redirected to the server that was holding the disconnected session. This remedies at least the most obvious weakness of a Network Load Balancing Service cluster in this situation. However, where this function becomes truly interesting is in connection with hardware solutions for network load balancing, where it is relatively easy to link the respective products from other manufacturers with the functions of a terminal server.
Two components are required to be able to use the Session Directory in a terminal server farm:
Session Directory server This is the server where the Session Directory service runs. It does not need to be a terminal server. The Session Directory service works with all editions of Windows Server 2003.
Client servers These are all terminal servers that request data from a Session Directory server. Client servers must be configured so that they link up with the Session Directory server. Only terminal servers running Windows Server 2003, Enterprise Edition or Datacenter Edition, can use the Session Directory.
Tip
If very high availability of the Session Directory service is required, it is advisable to set it up as a separate server cluster with two nodes. The Session Directory service is compatible with Microsoft cluster technology in this respect. In this way, the probability of service failure can be considerably reduced. Additional information is available in the white paper “Windows Server 2003: Session Directory and Load Balancing Using Terminal Server,” which can be found on the companion CD-ROM of this book.
Tssdis.exe, the initially disabled Terminal Services Session Directory service, is installed on Windows Server 2003 by default. All that is required to make the function permanently available is for an administrator to set the service's start type to Automatic on the selected Session Directory server. For the initial configuration, the service can be started manually, thus avoiding a restart of the whole server.
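A minimal sketch of these two steps, using the sc.exe service control utility (Tssdis is the service name corresponding to the executable mentioned above):

    rem Set the Session Directory service to start automatically at boot
    sc config Tssdis start= auto

    rem Start the service immediately, avoiding a restart of the server
    sc start Tssdis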
After the Session Directory service has been launched, both the Session Directory server and the client servers need to be configured. The first time the Session Directory service is launched, the empty local security group named Session Directory Computers is automatically created on the Session Directory server, if it does not already exist. All terminal servers that need to be able to access the Session Directory must be in this group. Consequently, you need to add each of the servers concerned to this group, using the Local Users and Groups node in the Computer Management tool.
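The computer accounts can also be added from the command line on the Session Directory server. A minimal sketch, with the domain and server names as placeholders:

    rem Computer accounts are referenced with a trailing $
    net localgroup "Session Directory Computers" CONTOSO\TERMSRV01$ /add
    net localgroup "Session Directory Computers" CONTOSO\TERMSRV02$ /add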
Tip
If the Session Directory is launched on a domain controller, the group created, that is, the Session Directory Computers security group, will be a local group valid across the entire domain. This results in the configuration being applied on all domain controllers, which is not recommended.
On the client servers, the configuration is carried out using the Terminal Services Configuration tool or Group Policies. The first option requires that the following properties be set under Server Settings\Session Directory:
Join Session Directory Activates the Session Directory for a terminal server. If this option is selected, the cluster name and the server name for the Session Directory must also be entered.
Cluster name Name of the Network Load Balancing Service cluster resolved through DNS.
Server name for Session Directory Name of the server on which the Session Directory service was launched.
Network adapter and IP address Session Directory should redirect users to Chooses the network adapter to which a user's request for a new connection should be redirected. This setting is required for terminal servers with more than one network adapter.
IP address redirection Provides the option to support load-balancing products from other manufacturers. Many of these products act as load balancer and router simultaneously. Where this is the case, it might no longer be possible to contact a terminal server through its direct IP address, and a routing token might be required for the redirection.
Figure 11-5: Session Directory Settings in Terminal Services Configuration.
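Because these settings end up in the registry of each client server, they can in principle be scripted for large farms. The following reg.exe sketch is hedged: the value names under the Terminal Server key are assumptions based on the usual Windows Server 2003 layout and should be verified before use, and the server and cluster names are placeholders:

    rem Value names are assumptions -- verify on your platform before use
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v SessionDirectoryActive /t REG_DWORD /d 1 /f
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v SessionDirectoryLocation /t REG_SZ /d SDSERVER01 /f
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v SessionDirectoryClusterName /t REG_SZ /d TSFARM /f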
All of the settings relating to the Session Directory can also be configured through Group Policies. These can be valid either within an organizational unit in the Active Directory or for the local server only. If the Session Directory is to be used in a large-scale environment, it is recommended that the Active Directory options be configured.
The four relevant entries are located along the following path: Computer Configuration\Administrative Templates\Windows Components\Terminal Services\Session Directory. The settings here correspond to the configuration options in Terminal Services Configuration:
Merge Session Directory Corresponds to Join Session Directory in Terminal Services Configuration.
Cluster name of the Session Directory Corresponds to Cluster name in Terminal Services Configuration.
Session Directory server Corresponds to Server name for Session Directory in Terminal Services Configuration.
Terminal server IP address redirection Corresponds to IP address redirection in Terminal Services Configuration.
Figure 11-6: Configuring the settings for the Session Directory using Group Policies.
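Whether the policy settings have actually arrived on a client server can be checked in the policies branch of its registry. A hedged sketch, assuming the usual location of Terminal Services machine policies:

    rem The path below is an assumption about where these policies are stored
    reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"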
The Microsoft Network Load Balancing Service and the new Session Directory enable smaller terminal server farms to be established. The combination of the Session Directory and hardware products for load balancing expands the possibilities still further. But with these solutions, the effort required for the apparently simple task of load balancing is still very high. So what alternatives are available?
With the MetaFrame XP Presentation Server, Citrix also offers a very powerful component for load balancing, which integrates seamlessly into the concept of published applications, desktops, and content. The Citrix load evaluators allow the load to be balanced among different servers in a farm. The corresponding rules are set, monitored, and adjusted through the associated list object in the Management Console for MetaFrame XP.
In the default configuration, the Management Console for MetaFrame XP contains two load evaluators, one named Standard and one named Advanced. Both contain predefined rules, and neither can be modified or removed. However, additional load evaluators can be defined, with specific sets drawn from a total of 12 rules based on conditions or performance counter objects. The rules observe the following parameters:
Number of users that access a certain published application
Context switches on a processor when it changes from one process to another
Utilization of the processor or the processors of a server
Volume of data input and output on the hard disks
Number of hard disk operations per second
Range of the IP addresses where an accessing ICA client is located
Number of available licenses on a server
Proportion of a server’s main memory utilization
Number of page faults occurring when a server accesses memory pages that have been moved to disk
Number of page swaps per second on a server when physical memory is moved to virtual memory on disk
Weekly days and hours when a server should be available to the load-balancing network
Number of users that access a server
These rules can be used in any combination to form a new load evaluator. Many rules allow you to define threshold values for the conditions of full load and no load. The number of users, processor utilization, page swaps, and the amount of memory used are the most important criteria for load balancing.
A load evaluator can be assigned either to a server or to a published application. To do this, go to Actions\Load Manager\Load Manage Server or Actions\Load Manager\Load Manage Application, provided that the server or the application has already been activated. In this way, different published applications or desktops can be given highly individual load-balancing behavior. For most environments, however, the two predefined load evaluators are sufficient.
The configuration of the load evaluators with their rules is stored in the MetaFrame server farm’s IMA data store, from where each server obtains the necessary data for the appropriate load-balancing settings when starting up. Taking the rules as the basis, the zone data collector selects a suitable server when a client requests a connection and returns the result to the client. The client is therefore able to connect to the server with the lightest load. If a session connection has already been established, each subsequent connection will be directed to the same server to allow applications to communicate.
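The load values calculated from these rules can also be inspected at the command line on any server in the farm. A minimal sketch, assuming the qfarm utility included with MetaFrame XP:

    rem List all servers in the farm together with their current load values
    qfarm /load

The output helps verify that a load evaluator behaves as intended before clients are directed to the farm.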
The load-balancing feature in a Citrix MetaFrame server farm is extremely powerful and can easily incorporate several hundred servers. This is why the solution is used in the majority of production terminal server farms.
Very large environments often require the use of several farms with different applications available on them. This might be for organizational or technical reasons. But problems can arise when a user works with several farms simultaneously and has a user profile that is stored on a central file server. Whenever the user closes the last published application or desktop in a farm, the user’s profile is written back to the profile store on a file server. If this happens for several farms, data from the first farm might be overwritten by contradictory settings on a different farm. As an example, the settings for the default printer that a user modified might no longer be in the user’s profile if the user closed the applications in the “wrong” order. An application on another farm would then have the old settings in the profile simply because it was closed there at a later time. At the next logon, the user will be surprised and annoyed to find that the modified printer settings have disappeared.
A common solution to this problem is to establish a separate profile folder for each server farm on the file server. These folders could be named Farm1, Farm2, and Farm3, for example, if there were three server farms in the load-balancing network. The servers in all farms would then require an environment variable (such as %FarmID%) containing details of which farm they belong to. The %FarmID% environment variable would contain the value Farm1 for the first farm, Farm2 for the second farm, and Farm3 for the third farm. Through the properties of the user account, each user would then be assigned a profile path containing the name of the variable, for example, \\Fileserver\Share\%FarmID%\Username. The drawback here is that three profiles need to be stored for each user. This solution takes up a large amount of hard disk space and often causes the system to behave in a way that confuses users.
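A minimal sketch of this approach, using reg.exe on every server of the first farm to create the machine-wide variable (names and values follow the example above):

    rem Create %FarmID% as a system-wide environment variable
    rem (takes effect for sessions started after the next restart)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v FarmID /t REG_SZ /d Farm1 /f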
Another alternative is the use of mandatory profiles. (See Chapter 4.) However, authorized system modifications made by the user then need to be handled separately so that they can be saved and loaded again at the next logon. Logon and logoff scripts are used for this purpose, some of which even require access to the registry. When a large number of different applications is involved and the user is to have the greatest possible freedom to adjust settings, it takes a lot of effort to manage all user-specific changes to the runtime system outside the standard concept of server-based profiles.
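As an illustration of the principle only, a logoff script could export selected user settings from the registry to the home directory, and a logon script could restore them; the registry key and file name below are arbitrary examples:

    rem Logoff script: save selected user settings outside the mandatory profile
    if exist "%HOMEDRIVE%%HOMEPATH%\desktop.reg" del "%HOMEDRIVE%%HOMEPATH%\desktop.reg"
    reg export "HKCU\Control Panel\Desktop" "%HOMEDRIVE%%HOMEPATH%\desktop.reg"

    rem Logon script: restore the settings saved at the last logoff
    if exist "%HOMEDRIVE%%HOMEPATH%\desktop.reg" reg import "%HOMEDRIVE%%HOMEPATH%\desktop.reg"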
For this reason, Citrix and certain system integrators working with terminal servers have, over time, developed a set of tools that support a mixture of mandatory profiles and user-specific system data. These tools are generally complicated, are hard to explain, and are usually too much to handle for inexperienced administrators. They should therefore be seen as integration aids rather than products.
Adjusting a terminal server environment to the concepts of these hybrid profiles, flex profiles, jumping profiles, or advanced profiles requires precise knowledge of all application accesses to the registry database and the system files during startup and shutdown. Obviously the same requirement holds for common settings that apply to all applications. The task of keeping the respective configuration data up-to-date should not be underestimated, but in many cases excellent maintenance does enable significantly better user logon times and consistent configuration across farm boundaries.
A further improvement in logon times in large-scale environments with several terminal server farms can be achieved by using local policies instead of Group Policies. However, it is important to bear in mind the one considerable drawback: it will no longer be possible to manage local policies centrally. In large-scale environments, this can be seen as a workable solution only if the policies can be distributed fully automatically to all terminal servers in a farm at the same time as the system and application installation.