To enable Windows Server 2003 to support multiple simultaneous users, the operating system must meet several basic prerequisites. These include the standard components of Windows Server 2003, such as memory management, as well as specific Windows components and services, such as Terminal Services. The following provides a detailed description of the system architecture with a view toward supporting multiple users.
The Windows Server 2003 architecture still follows the original Windows NT model. It combines the attributes of several operating system models, including a layer-oriented approach and a client/server concept. The internal system structure of Windows Server 2003 is divided into a series of components. Some parts of the operating system run in a highly privileged and protected mode (the Executive), whereas others run in an application mode. Looking closely at the individual modes provides a better understanding of terminal servers and how they support simultaneous users.
The structure of Windows Server 2003 is based on a kernel architecture that is divided into a privileged core mode and the less protected user mode. This characteristic feature of the Windows architecture is also evident in each mode's strictly separated address area and is based on the processor's ring structure. This design allows each program module to be categorized by privilege level.
Intel CPUs (from the 80386 generation onward) and compatible processors normally offer four privilege levels. The top level, that is, the highest level of protection with permission to execute privileged operations and to communicate directly with the hardware, is level zero. Windows Server 2003 uses only ring (level) zero for the core mode (kernel and system services) and ring three for the user mode (all other processes). Rings one and two are not used at all.
If we compare Windows Server 2003 with other operating systems, we encounter numerous terms that serve to characterize as well as differentiate. Let’s look briefly at the terminology and how it relates to Terminal Services to establish a common basis for the following chapters.
Cooperative multitasking This type of operating system allows the simultaneous execution of different applications. The applications must cooperate with each other by frequently relinquishing processor control to another application. This concept is the basis of older Windows systems, for example, Windows for Workgroups 3.11. On a modern terminal server under Windows Server 2003, we can now run programs developed for cooperative multitasking. However, many features of the historic run-time environment must be simulated and extended to enable the simultaneous execution of the corresponding applications. These applications often tend to consume vast resources on the host system because they were never designed for such a high degree of process control.
Preemptive multitasking This type of operating system provides strict allocation control of applications that run simultaneously. The time allowed for accessing the processor or processors can be determined exactly and does not depend on the current status of each application. The application status is saved when the application relinquishes processor control. The status is restored when the application resumes control. A terminal server is optimized to run programs developed for this kind of operating system.
Memory protection Windows Server 2003 includes mechanisms that successfully keep an application from accessing the memory of the operating system or another application. When using Terminal Services, it is also important to strictly separate processes of one user from those of another to avoid mutual interference.
Multiprocessing This is the ability of an operating system to use several processors (CPUs) within one computer at the same time. This feature is crucial for a terminal server because it directly influences the terminal server’s scalability in terms of the number of simultaneous users. Unlike many other server scenarios, a terminal server often needs to execute many simultaneous processes, which makes its ability to evenly distribute the load among several processors extremely important.
Multithreading This is the ability of an operating system to execute several program threads (subprocesses) within one application. On suitably equipped systems, these threads can run on different processors, contributing to the scalability of the system. However, this helps only if the highly resource-intensive processes of a server application (such as a database or a Web server) are divided into smaller units (threads) that can be distributed to several processors. This is usually not the case on terminal servers, which normally run relatively small user applications. Their performance is enhanced when all threads of an application instance are bound to one processor, because this reduces the thread-synchronization effort required for executing a common task.
The term hyperthreading has been introduced in connection with modern Intel processor generations. Hyperthreading refers to separating one physical processor into two logical processors. In certain scenarios, this technology improves the execution of parallel tasks, especially at the thread level.
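A minimal sketch of the thread concept, using Python's threading module for illustration (the worker function and counts are invented for the example): several threads run within one process and share its memory, which is why synchronization, shown by the lock below, is needed when they work on a common task.

```python
import threading

# All threads of one process share the same address space, so they can
# work on a common variable; the lock provides the synchronization the
# text mentions for threads executing a common task.
counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: four threads incremented the shared counter
```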
Processes running in core mode have the task of ensuring that the basic functions of the operating system work. Therefore, they need to be prioritized according to their importance and complexity. High-priority kernel processes control access to the hardware, manage the available memory, and provide resources to processes running in user mode.
The following describes the components that run in the privileged executive mode, in ascending order from the hardware abstraction layer upward.
The hardware abstraction layer is a component named Hal.dll. It must be modified when Windows Server 2003 is ported to a different processor, bus architecture, or any other computer architecture. The Terminal Services functions do not require a modified hardware abstraction layer.
The kernel is a highly protected component of the operating system and is considered the central monitoring instance of the operating system. The kernel processes interruptions, handles exceptions, determines the runtime of threads (subprocesses), allocates processor time, synchronizes processors, and makes objects and interfaces available.
The so-called “scheduler” is an important part of the kernel. It allocates processor time and determines the sequence in which threads are executed. The next thread in line receives a certain amount of processor time. When that time is up, the scheduler checks whether another thread with the same priority level needs to run. If so, all attributes of the current thread are saved, and the next thread with its attributes is restored and executed. Threads are grouped by priority level and handled accordingly by the scheduler.
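The scheduler behavior described above can be sketched as a simple priority-based round robin. This is a deliberately simplified illustration in Python (thread names, quantum counts, and data structures are invented for the example; the real scheduler also applies dynamic priority adjustments):

```python
from collections import deque

def schedule(threads, quanta):
    """Return the order in which threads receive their time slices.

    threads: list of (name, priority) pairs; higher priority runs first.
    quanta: number of time slices granted per priority level (simplified).
    """
    queues = {}                                  # threads grouped by priority
    for name, priority in threads:
        queues.setdefault(priority, deque()).append(name)
    order = []
    for priority in sorted(queues, reverse=True):    # highest level first
        queue = queues[priority]
        for _ in range(quanta):
            thread = queue.popleft()             # next thread gets its slice
            order.append(thread)
            queue.append(thread)                 # state saved, thread requeued
    return order

print(schedule([("A", 8), ("B", 8), ("C", 13)], 2))  # ['C', 'C', 'A', 'B']
```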
The object manager creates, manages, removes, and protects operating system objects. These objects usually include interfaces, memory, processes, threads, files, directories, and so on. All objects have their own properties, methods, access information and identification. Processes under Windows Server 2003 use object identification to manipulate the objects.
The object manager also monitors the Windows Server 2003 namespace. The namespace is responsible for identifying objects in the local computer environment in a hierarchical organization. Object names are saved to a location depending on the object type. Only selected object types are visible for user applications.
Because of Terminal Services, there are two namespaces for creating objects in a system running Windows Server 2003. The system-wide namespace is visible to all applications across the system, regardless of the session context in which an application was created. The user-specific namespace, however, manages only those objects that belong to applications originating in the same session.
To simplify the management of namespaces under Windows Server 2003, the system-wide namespace and the console-session namespace (mostly, but not necessarily, session ID 0) are linked. All objects created in the console session are automatically assigned to the system-wide namespace. In this way, Windows services or generally accessible applications do not need special handling for all users to access them. They only need to be installed or started within the console session.
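The two namespaces can be sketched conceptually. The `Global\` prefix below matches the convention Windows uses for named kernel objects in Terminal Services environments, but the class and its dictionaries are a purely illustrative model, not the actual object manager implementation:

```python
GLOBAL_PREFIX = "Global\\"

class ObjectNamespace:
    """Toy model of the system-wide and session-specific namespaces."""

    def __init__(self):
        self.system_wide = {}      # visible to applications in all sessions
        self.per_session = {}      # session ID -> private namespace

    def create(self, session_id, name, obj):
        if name.startswith(GLOBAL_PREFIX):
            self.system_wide[name[len(GLOBAL_PREFIX):]] = obj
        elif session_id == 0:
            # Objects created in the console session are automatically
            # assigned to the system-wide namespace.
            self.system_wide[name] = obj
        else:
            self.per_session.setdefault(session_id, {})[name] = obj

    def lookup(self, session_id, name):
        local = self.per_session.get(session_id, {})
        return local.get(name, self.system_wide.get(name))

ns = ObjectNamespace()
ns.create(0, "SpoolerEvent", "event-0")   # console session -> system-wide
ns.create(3, "AppMutex", "mutex-3")       # user session 3 -> private
print(ns.lookup(3, "SpoolerEvent"))       # event-0: found system-wide
print(ns.lookup(5, "AppMutex"))           # None: invisible to session 5
```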
The process manager manages process and thread objects. A process is defined as an address space, a set of resources, and a set of threads that run within the context of the process. A thread is the smallest time-controlled unit that can run within the system. The process manager provides all standard services for creating and using processes and their threads within the context of a subsystem.
Local Procedure Calls (LPCs) are used for inter-process communication on a local computer running Windows Server 2003. Remote Procedure Calls (RPCs) between processes on the same computer are built on LPCs. Processes can directly access each other's address spaces as long as the security settings on the target process object allow it. All processes running under a given user account can access each other's memory, and many applications rely on shared memory for inter-process communication.
The Virtual Memory Manager manages memory and swap files. Swapping is a common method of expanding physical memory: areas of physical memory not currently in use are written to reserved space on disk so that other programs can use the freed memory. Swapping thus allows the total memory required by all applications to exceed the actual physical memory.
The 32-bit variant of Virtual Memory Manager under Windows Server 2003 occupies a default virtual address space of 4 GB for each process started on a terminal server. The virtual addresses are then mapped to the physical pages of the main memory. Two GB each of virtual address space are reserved for user-specific and system-specific data. The user-specific part offers an individualized view of the memory area for the process. This allows a thread to access its own memory within a process, without allowing access to the memory of a different process. The system-specific part is available to all processes, allowing consistent access to all kernel services.
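The 2-GB split can be expressed as a small address-classification sketch. The boundary constant reflects the default 4-GB layout described above; the helper function itself is illustrative:

```python
# Default 32-bit layout: 4 GB of virtual addresses per process, split
# into a 2-GB user-specific part and a 2-GB system-specific part.
ADDRESS_SPACE = 0x1_0000_0000      # 4 GB
KERNEL_BASE = 0x8000_0000          # 2-GB boundary

def region(address):
    """Classify a virtual address as user-specific or system-specific."""
    assert 0 <= address < ADDRESS_SPACE
    return "user" if address < KERNEL_BASE else "system"

print(region(0x0040_0000))   # user: a typical process image base
print(region(0xC000_0000))   # system: shared by all processes
```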
The information on available memory focuses on the use of “normal” application programs and does not apply to special server applications, such as relational database systems. Standard applications for end users usually do not contain routines that require memory resources in this way. Therefore, the special techniques used to virtually expand the 32-bit address space and to modify the memory allocation between user-specific and system-specific data are not discussed here.
The common system-specific portion of memory is naturally problematic where multiple simultaneous user sessions are concerned. Each session needs its own subsystems and drivers. Windows Server 2003 provides an individual kernel address space for each user session, the so-called session space. This is where the image of each session’s Window Manager and the graphics and printer drivers are stored.
To manage this allocation, the Virtual Memory Manager assigns each new user session an identification, the session ID. Each process is then linked to its session space through its session ID. In this way, an application cannot tell a terminal server session from a session on a Windows XP computer. In a nutshell, a terminal server uses session space to create a virtual computer for each session.
The security reference monitor controls the security standards on the local computer. It provides its services for both the privileged system components and the subsystems running in user mode. Whenever a user or a process attempts to open an object, the security reference monitor checks for the required permissions. If the user or process ID indicates that the required permissions appear in the access control lists (ACLs), the object can be opened and used.
The security reference monitor also generates security-related administrative messages. These messages are stored in the event log.
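A minimal sketch of such an access check, assuming a simple model in which an ACL maps security IDs to sets of granted rights (the SIDs, rights, and the audit list below are invented for illustration):

```python
audit_log = []   # stands in for the security-related event log entries

def access_check(acl, sid, requested):
    """Grant access only if every requested right appears in the ACL."""
    granted = acl.get(sid, set())
    allowed = requested <= granted        # subset test over the rights
    audit_log.append((sid, sorted(requested), allowed))
    return allowed

file_acl = {
    "S-1-5-21-1001": {"read", "write"},
    "S-1-5-21-1002": {"read"},
}
print(access_check(file_acl, "S-1-5-21-1002", {"read"}))           # True
print(access_check(file_acl, "S-1-5-21-1002", {"read", "write"}))  # False
```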
The input/output system coordinates and manages the data streams received and sent by Windows Server 2003. Its main task is to link the different input and output drivers through standardized interfaces.
Generally, the input/output system consists of the following components:
Device driver Supports all peripheral devices such as printers, hard disks, mice, and scanners.
Network driver Connects network cards and protocols. Also provides a redirector mechanism enabling access to resources such as files or printer queues via the network.
File system driver Provides access to various file systems such as NTFS, FAT32, or FAT. Combined with the redirector, it allows connections to other file systems in the network, such as Novell NetWare.
Cache manager Handles intermediate storage of frequently used files in the main memory for the input/output system.
The Windows Server 2003 graphical output system is based on a window manager. The window manager represents the system components that display all graphical screen elements and manage the windows (filename: Win32k.sys). The Graphics Device Interface (GDI) provides the functions required to display graphical elements for unmanaged code (that is, traditional 32-bit Windows programs) on the monitor and to communicate with printers. For managed code, as deployed in .NET, the graphical interface used to create local, Windows-based applications is called GDI+. The operating system manages both GDI and GDI+ in parallel. The GDI DLL is located in the System32 subdirectory of the installation folder (such as C:\Windows). The components of GDI+ (filename: GdiPlus.dll) can be found in the WinSxS subdirectory of the installation folder. The .NET runtime environment is able to support several versions in parallel.
Under Windows NT or Windows 2000, the graphical elements of the user interface could be modified only slightly. A Windows Server 2003 with a Windows XP base, however, offers more options. For instance, by selecting alternative “themes,” the user can customize the basic appearance of windows and other graphical elements. This does not alter the function of the graphical elements, just their attributes, such as shape, color, and position.
When executed, all applications call Win32 or .NET standard functions for graphical display. These functions are forwarded as requirements to the window manager. The window manager responds by invoking the corresponding internal graphical functions. The graphical system then communicates with the corresponding operating system drivers without needing to know anything about the physical hardware. It is actually the drivers that modify the data stream, enabling the data to display on the output device. The GDI and GDI+ drivers form a layer between the applications in user mode and the graphics drivers in privileged mode.
OpenGL and Microsoft DirectX define two additional graphical interfaces for Windows Server 2003. They are independent of GDI/GDI+ and handle special tasks that do not usually play key roles on terminal servers. Conceptually, both interfaces can be used on terminal servers, but they would consume too many resources and significantly slow the output speed on remote clients. We only mention them to round out the list.
OpenGL Functional interface to create professional 3-D applications (CAD, Virtual Reality, etc.).
DirectX Direct and very quick addressing of multimedia input and output devices for applications with real-time character (particularly games).
A terminal server significantly increases the complexity of a graphical output system. Because multiple interactive users are supported, each corresponding output must be treated differently. Therefore, there is not just one output system; several (virtual) output channels to clients must be established.
In user mode, Windows Server 2003 provides several closed subsystems for executing applications. They are part of the operating system and communicate with the kernel components in the layer underneath. Their screen output is regulated via the Win32 graphical interface.
Win32 This subsystem (filename: Csrss.exe) controls the execution of 32-bit Windows processes and threads. It also includes the Windows-on-Windows (WoW) module, which represents a 16-bit Windows system that runs corresponding programs. Another module is the Virtual DOS Machine (VDM), which runs DOS programs. However, neither module is granted direct access to the hardware.
Security A subsystem to authenticate users and monitor the degree of security of the other subsystems (filename: Lsass.exe).
In addition to the subsystems, there are always a number of other system processes in user mode on a Windows Server 2003.
These are the most important additional system processes:
Windows administration This process controls the graphical interface presented to the user after logon (filename: Explorer.exe). It positions the individual application programs on the desktop. Along with the underlying software layers of the graphical system, this process determines how the user moves through the window system and the functions that open, change, move, and refresh windows and their contents.
Session manager This process manages sessions and is the first process in user mode created after system start. It takes care of several initialization activities related to, for example, the local procedure calls, environment variables, the window manager, subsystems, and the logon process (filename: Smss.exe). While the system runs, the session manager also handles the creation of new user sessions.
Logon process This process controls the interactive user logon and communicates with the security subsystem (filename: WinLogon.exe).
Service controller or service manager The administrative instance for background processes that run even if no user is logged on to the system (filename: Services.exe). Several Windows components are realized as background processes, such as printer spoolers, event protocols, remote procedure calls, and many network services.
To better understand the system’s behavior and Windows Server 2003’s multiuser capability, we need to take a detailed look at several key components and how they interact. On the one hand, there is the connection of keyboards, mice, and monitors on remote clients. On the other hand, a strictly separate session for each user on the terminal server needs to be managed.
As described in the introduction to the Virtual Memory Manager, each user session has its own address space within the system. This space is used to virtualize the required kernel components of the 32-bit subsystem (Win32k.sys) and the system drivers for each user. The operating system was optimized for terminal services so that several instances of adjusted kernel components can be started. All processes still need to be linked to a user session, which also affects the administration of the virtual memory. The central system resources (memory, CPU, and kernel objects) are allocated according to individual users.
In general, the 32-bit subsystem (Win32k.sys) is suitable for multiple-user operation, even if Windows Server 2003 has not been configured as a terminal server in application mode.
The Windows Server 2003 multiple-user function is primarily based on a special Windows service (Terminal Services) and on the corresponding device drivers. Terminal Services allows users to establish an interactive connection to a remote computer. Remote desktop, remote support, and terminal servers work only when supported by this Windows service.
If a client is connected via Terminal Services, it receives an individual virtualized user session. This session has its own Csrss and WinLogon processes in user mode and access to both the kernel and the display driver. The monitor, mouse, and keyboard respond via the network instead of locally. The following drivers are installed on the system to make this work:
Termdd.sys General terminal server driver
Rdpcdd.sys RDP Miniport
Rdpdr.sys RDP device redirection
Rdpwd.sys RDP stack driver for TCP/IP
The configuration of each user session on a remote client enables it to load its own drivers to connect the monitor, keyboard, and mouse. Mouse and keyboard drivers communicate with the network protocol via a multiple-use instance of the general terminal server driver and of the RDP drivers. All client sessions provided through the RDP protocol are either available (waiting) or interactive (bound). The waiting thread of a potential RDP session listens via TCP port 3389 for a connection request from the client side.
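The waiting/interactive listener pattern can be sketched with ordinary sockets. This is an illustrative model only: the real RDP stack lives in kernel drivers, and the greeting protocol below is invented. Port 0 is used so the sketch runs anywhere; an actual RDP listener would bind TCP port 3389:

```python
import socket
import threading

def handle_session(conn, session_id):
    # Stands in for binding the accepted connection to a new user session.
    with conn:
        conn.sendall(b"session %d ready" % session_id)

def listener(server, sessions=1):
    for session_id in range(1, sessions + 1):
        conn, _addr = server.accept()       # the waiting thread blocks here
        threading.Thread(target=handle_session,
                         args=(conn, session_id)).start()
        # ...and immediately resumes listening for the next request.

server = socket.create_server(("127.0.0.1", 0))   # port 0: any free port
threading.Thread(target=listener, args=(server,), daemon=True).start()

# A "client" connects and is handed its own session.
with socket.create_connection(server.getsockname()) as client:
    reply = client.recv(64).decode()

print(reply)
```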
The following processes are started for each user session:
Winlogon.exe Manages the users’ logon information. (The process runs within the system context.)
Csrss.exe Handles the individualized graphical output. (The process runs within the system context.)
Explorer.exe Manages the graphical output in user mode. (The process runs within the context of the interactive user.)
What makes the graphics system of a terminal server special is that the graphics requests of a user session are not forwarded to the console’s display driver (unless the user is working directly at the console). Instead, the requests are sent to the virtual graphics driver that can communicate with the client via the RDP protocol.
Together with Terminal Services, the session manager (Smss.exe) handles the individual connections between a terminal server and its clients. The two generate and dispose of session objects that are responsible for the individual copies of Csrss.exe and WinLogon.exe. This concept is completely independent of the communications protocol used.
Of particular interest are the priority levels of processes and threads related to system responses and started applications. Processes in user mode can have six different priority levels: low, below normal, normal, above normal, high, and real-time. Within these process classes, the individual threads can take on seven different levels: idle, low, below normal, normal, above normal, high, and real-time. With regard to the system, a thread has a basic priority of 0 through 31, which is a combination of process class and thread class priorities.
In most cases, the basic priority of a process is selected so that it runs in one of the standard priorities. These range from 24 (real-time) through 13 (high), 10 (above normal), 8 (normal), and 6 (below normal) down to 4 (low). Some system processes, however, start with a slightly higher priority to optimize overall system behavior. This also applies to the current foreground processes, that is, the applications that have the input focus; on a terminal server with multiple interactive users, there are always several such foreground applications. In the Windows Server 2003 default setting, foreground processes have priority level 8, as do background processes.
The Smss session manager establishes new user sessions. Therefore, its performance is key to the terminal server, thus its high priority level of 11. The Services.exe service manager that handles background processes (the Windows services) also has a slightly higher priority level (9).
The Csrss Win32 subsystem and the WinLogon process are created individually for each user session. Together with the Lsass security subsystem, they are the critical components in terms of a terminal server’s ability to respond. Therefore, these components have a high priority level (13).
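The combination of process class and thread level into a basic priority of 0 through 31 can be sketched as follows. The class base values come from the text above; the thread-level offsets are simplified assumptions for illustration, since the full Windows mapping is more involved:

```python
# Class base values as given in the text; the thread-level offsets are
# simplified assumptions (the real Windows mapping also covers saturated
# values for idle and time-critical threads).
CLASS_BASE = {
    "low": 4, "below normal": 6, "normal": 8,
    "above normal": 10, "high": 13, "real-time": 24,
}
THREAD_OFFSET = {
    "idle": -7, "low": -2, "below normal": -1, "normal": 0,
    "above normal": 1, "high": 2, "real-time": 7,
}

def base_priority(process_class, thread_level):
    """Combine process class and thread level into a 0-31 basic priority."""
    priority = CLASS_BASE[process_class] + THREAD_OFFSET[thread_level]
    return max(0, min(31, priority))       # clamp to the valid range

print(base_priority("normal", "normal"))       # 8
print(base_priority("high", "above normal"))   # 14
```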
Up to this point, we have considered Windows Server 2003 solely as the successor to Windows 2000 Server, with some additional functions and improvements. But some of the puzzle pieces are still missing, such as the link to the .NET Framework. What exactly is the basic idea behind the .NET concept for Windows Server 2003?
Looking at .NET from the operating system perspective of providing applications, there is a fundamental difference between it and its predecessors: .NET allows a different type of program execution. While previous 32-bit Windows-based applications communicate directly with the operating system, the new Framework applications require an intermediate layer, the .NET runtime environment (common language runtime). The runtime environment is the instance in which all .NET applications are executed. Only the runtime environment communicates with the operating system: it translates the .NET application's byte code and then controls its execution. To display a window-oriented application (Windows Forms), the .NET runtime environment requests essentially the same graphics information from the operating system as a 32-bit application does.
Any compiler that writes Microsoft Intermediate Language Code (MSIL code) can be used to create a Framework application. The .NET runtime environment executes the MSIL code on application start-up. Code that is executed within the .NET runtime environment is called managed code, while code that is executed beyond these limits is called unmanaged code.
What is the advantage of this type of construct? The .NET runtime environment serves as an abstraction layer for a virtual machine that represents the only relevant target environment for the developer. If the .NET runtime environment is available for different hardware platforms, a program created once does not need to be modified. The runtime environment provides the translated requests to the operating system. It also carries out other important tasks, such as monitoring security guidelines, isolating memory areas, managing memory resources, and handling exceptions. .NET programs can be developed in all languages that adhere to a standard schema for the definition of data types and whose compiler generates valid MSIL code.
How do Framework applications behave on terminal servers? They behave no differently than 32-bit programs optimized for running on terminal servers. The .NET runtime environment is perfectly able to handle multiple users who simultaneously use Framework applications on the server. Thus, Windows Server 2003 with Terminal Services activated and running in application server mode is able to execute both managed and unmanaged code for multiple users and to send the graphical output of the applications to the corresponding clients via RDP.
If Windows Server 2003 with Terminal Services activated is started in application server mode, all components need to interact seamlessly. Only then is multiple-user operation with all its functions possible. So what exactly happens between boot-up of the terminal server and connection to a remote user?
When the terminal server is booted, the individual system components are initialized in sequence:
1. The console session of the server is started. This includes connecting local resources, such as keyboard, mouse, and monitor, via the corresponding device drivers.
2. The Terminal Services Windows service is started. It manages future user sessions.
3. Terminal Services initiates the session manager, which handles all user sessions except the console session.
4. When the session manager's start-up phase is complete, monitoring threads are generated for each communications protocol and for every network card so configured.
5. The connection request of a client is received by the thread in charge and forwarded to the session manager. The thread immediately resumes listening for further connection requests.
6. Upon a connection request, the session manager and the Virtual Memory Manager generate a user session with a unique ID. The user session receives its own WinLogon.exe, Csrss.exe, and Explorer.exe, and the Terminal Services drivers redirect the input and output of keyboard, mouse, and monitor.
7. The user sees the Windows Server 2003 logon screen on the client. Logging on enables interactive use of the desktop during the session.
8. When a user starts an application, the process manager is always able to map it to the correct session based on the unique session ID.
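The session bookkeeping in the steps above can be sketched as a small process table. Session IDs, process names, and the helper functions are illustrative only:

```python
import itertools

_next_session_id = itertools.count(1)
sessions = {}                       # session ID -> processes of that session

def create_session():
    """Create a user session with its own copies of the standard processes."""
    session_id = next(_next_session_id)
    sessions[session_id] = ["WinLogon.exe", "Csrss.exe", "Explorer.exe"]
    return session_id

def start_process(session_id, image):
    # Every new process is mapped to its session via the unique ID.
    sessions[session_id].append(image)

first = create_session()
second = create_session()
start_process(first, "notepad.exe")
print(sessions[first])    # includes notepad.exe alongside the standard trio
print(sessions[second])   # only the standard trio
```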
This description, however, is greatly simplified. Some things are still missing, such as all the processes for negotiating protocol parameters, assigning session IDs, and licensing. All these details are discussed in the chapters that follow.