Time may be relative to the observer, but keeping accurate and consistent time provides a critical frame of reference for applications running on a local server and across the network. For example, imagine a database application that records bank balances in a temporary database for an Internet banking system. Whenever an account's balance changes, a new row is inserted into the table “accounts” containing the account_name, the new balance, and a timestamp; the composite primary key (account_name, timestamp) uniquely identifies each balance. All other transactions, such as withdrawals, require that the most recent balance be determined from the timestamp (for security reasons, no updates of existing rows are permitted). If dates and times are not maintained consistently on the system, a balance may be assigned an incorrect timestamp, and a query for the current balance may then select the wrong row.
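The insert-only design described above can be sketched as follows. This is an illustration only, using SQLite for convenience; the table and column names follow the text, while the date format and connection details are assumptions.

```python
import sqlite3

# Sketch of the insert-only accounts table described in the text.
# SQLite is used here purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        account_name TEXT NOT NULL,
        balance      REAL NOT NULL,
        timestamp    TEXT NOT NULL,          -- assumed ISO-8601 format
        PRIMARY KEY (account_name, timestamp)
    )
""")

# Each transaction inserts a new row; rows are never updated.
conn.execute("INSERT INTO accounts VALUES (?, ?, ?)",
             ("95639656", 18475.90, "2002-01-01 18:54:21"))
conn.execute("INSERT INTO accounts VALUES (?, ?, ?)",
             ("95639656", 17475.90, "2002-01-01 18:59:22"))

# The current balance is simply the row with the latest timestamp.
row = conn.execute("""
    SELECT balance FROM accounts
    WHERE account_name = ?
    ORDER BY timestamp DESC LIMIT 1
""", ("95639656",)).fetchone()
print(row[0])  # 17475.9
```

Note that the correctness of the `ORDER BY timestamp` query depends entirely on the timestamps being generated by an accurate clock, which is precisely the problem the rest of this chapter addresses.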
This disparity would clearly render the application useless. Figure 39-1 demonstrates this scenario in action: two balances have been inserted into the table accounts for account_name 95639656, with $18,475.90 being the balance at January 1st, 2002, 18:54:21, and $17,475.90 being the balance at January 1st, 2002, 18:59:22. This set of entries indicates that a withdrawal of $1,000 occurred just over five minutes after the first transaction. What if the system clock that generated these timestamps was running fast or slow? The incorrect balance of $18,475.90 might then be reported when future queries are run.
While most systems are capable of maintaining millisecond accuracy for time, a more complex situation arises when high availability and clustering become involved, and different systems in the cluster have different times and dates. For example, imagine that a single database server receives updates from six Java 2 Enterprise Edition (J2EE) application servers on six different machines. These servers process requests from clients in a round-robin fashion, and all update the same table on the database server, so that each application server always retrieves the most up-to-date information for each client. However, if each server kept a different date and time, the servers would write balances into the accounts table with inconsistent timestamps, again rendering the application unusable for serious work.
Figure 39-2 again demonstrates this scenario: two balances have been inserted into the table accounts for account_name 95639656, one by server1, with $18,475.90 being the balance at January 1st, 2002, 18:54:21, and one by server2, with $17,475.90 being the balance at January 1st, 2002, 18:59:21. This set of entries indicates that a withdrawal of $1,000 occurred five minutes after the first transaction. If the clocks of server1 and server2 were not closely synchronized, we would never know which balance ($18,475.90 or $17,475.90) was actually written last. What if a leap second was observed on one server and not another? Clearly, systems need to be able to regularly synchronize their clocks to ensure consistency in enterprise applications.
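The multi-server failure mode can be sketched as follows. This is a hypothetical illustration, not real application-server code: it assumes server2's clock runs six minutes slow, so its later write carries an earlier timestamp than server1's, and a "latest timestamp" query silently returns the stale balance.

```python
from datetime import datetime, timedelta

# Hypothetical clock skews for the two application servers.
server1_skew = timedelta(0)            # server1's clock is accurate
server2_skew = timedelta(minutes=-6)   # server2's clock runs six minutes slow

# Each tuple is (account_name, balance, timestamp-as-stamped-by-the-server).
rows = [
    # server1 writes the $18,475.90 balance at true time 18:54:21.
    ("95639656", 18475.90, datetime(2002, 1, 1, 18, 54, 21) + server1_skew),
    # server2 writes the post-withdrawal balance at true time 18:59:21,
    # but stamps it with its slow clock (18:53:21).
    ("95639656", 17475.90, datetime(2002, 1, 1, 18, 59, 21) + server2_skew),
]

# Selecting the "most recent" row by timestamp now picks the wrong one.
latest = max(rows, key=lambda r: r[2])
print(latest[1])  # reports the stale 18475.9 balance
```

Because the skewed timestamp (18:53:21) precedes server1's (18:54:21), the query ordering is inverted and the withdrawal is effectively lost.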
One solution that solves the accuracy problem for single systems and for networks is the Network Time Protocol (NTP). NTP version 3 (NTPv3), specified in RFC 1305, allows time to be synchronized between all systems on a network by using multicast, and also permits high-precision external hardware clock devices to be supported as authoritative time sources. These two approaches ensure that potential data consistency problems caused by timestamps do not hamper online transaction processing and other real-time data processing applications. By using a master-slave approach, one server on the network can be delegated the authority for timekeeping for all systems on that network.
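As a rough sketch, a master-slave arrangement might be expressed in ntpd configuration files along the following lines. The hostnames are hypothetical placeholders; 224.0.1.1 is the multicast group address assigned to NTP.

```
# --- On the designated master server (ntp.conf) ---
server ntp.example.gov        # external authoritative source (hypothetical host)
broadcast 224.0.1.1           # announce time to the local network via multicast

# --- On each client/slave (ntp.conf) ---
multicastclient 224.0.1.1     # listen for the master's multicast announcements
server master.example.com     # or poll the master directly (hypothetical host)
```

In this arrangement, only the master needs connectivity to the external time source; every other host on the network defers to the master.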
Using a master-slave approach ensures that multiple, potentially conflicting sources of authoritative time do not interfere with each other’s operation.
NTP v3 provides a number of enhancements over previous versions. It supports a method for servers to communicate with a set of peer servers, averaging their offsets to achieve a more accurate estimate of the current time; this method is similar to that used by national measurement laboratories and other timekeeping organizations. In addition, network bandwidth is conserved because the interval between client/server synchronizations has been substantially increased, an efficiency gain made possible by improvements in the accuracy of the local-clock algorithm. NTP uses UDP (port 123) to communicate synchronization data, minimizing network overhead. In order for clients to access server data, the IP address or hostname of the server must be known; NTP defines no mechanism for automatic discovery of a time server.
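The offset estimation at the heart of this exchange uses the four timestamps defined in RFC 1305: T1 (client sends request), T2 (server receives it), T3 (server sends reply), and T4 (client receives it). A minimal sketch of the calculation, with made-up example timestamps:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Clock offset and round-trip delay from the four NTP timestamps
    (RFC 1305): t1 = client send, t2 = server receive,
    t3 = server send, t4 = client receive; all in seconds."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

# Example (assumed values): the client's clock is 5 s slow and the
# one-way network delay is 0.1 s in each direction.
offset, delay = ntp_offset_delay(100.0, 105.1, 105.2, 100.3)
print(round(offset, 6), round(delay, 6))  # 5.0 0.2
```

Averaging such offsets across several peers, as NTPv3 does, smooths out asymmetric network delays that would distort any single measurement.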
While NTP has a simple client/server interface, individual servers can also act as secondary servers for external, authoritative time sources. For example, a network might have a designated time server from which all clients retrieve the correct time, and which in turn receives authoritative time from a national measurement laboratory. In addition, a local hardware clock can serve as a backup in case of network failure between the local network and the laboratory. When the connection is reestablished, the local server's time can simply be recalibrated against the authoritative time received from the laboratory.
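This fallback arrangement can be sketched in an ntp.conf fragment like the one below. The laboratory hostname is hypothetical; the 127.127.1.0 pseudo-address is ntpd's convention for the local (undisciplined) clock driver, and fudging it to stratum 10 ensures it is only used when no better source is reachable.

```
# Prefer the external laboratory server while the link is up (hypothetical host).
server ntp.national-lab.example prefer

# Fall back to the local hardware clock if the network connection fails.
server 127.127.1.0             # local clock driver pseudo-address
fudge  127.127.1.0 stratum 10  # advertise it as a low-quality source

# Remember the local clock's measured drift between restarts.
driftfile /etc/ntp.drift
```

When connectivity returns, ntpd automatically resumes following the preferred, lower-stratum laboratory source.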
In this chapter, we examine how to configure NTP servers and clients to synchronize their timekeeping, and explore strategies for maintaining accurate time on the server side.