The delivery of quality of service (QoS) in IP networks became a very hot topic in the second half of the 1990s, because of the phenomenal explosion of IP-based service delivery in the telecommunications industry that followed the introduction of the World Wide Web (the interface through which most people use the Internet, and which has therefore become synonymous with the Internet itself).
Differentiation of packet forwarding based on traffic classes (such as voice, residential Web browsing, mission-critical, and business), or on aggregates of traffic that share a common forwarding behavior within routers, has proved to be the most reasonable and scalable approach to offering end-to-end QoS in the Internet. After the Integrated Services (IntServ) initiative, which defined per-flow end-to-end services based on flow identification within each router traversed in the end-to-end path, proved not to scale in the high-speed IP core, the IETF opted for a less ambitious, but nonetheless reasonable and effective, per-traffic-class QoS differentiation. This approach, described in [RFC2475], later became known as Differentiated Services (DS), or DiffServ.
In this section, after a brief introduction to the DiffServ principles, we address the topic of providing QoS in VPNs based on IP tunnels and MPLS. We believe that this should be sufficient to get exposure to the kind of operational issues that a provider needs to face when running a business-grade, and possibly interdomain, VPN service.
DiffServ is based on packet classification at each hop (that is, at each node traversed in the end-to-end path), relying on a simple lookup of a particular field in the IP header, called the Differentiated Services Code Point (DSCP) field. After packets have been classified, a DiffServ-compliant node is expected to deliver a standard (or proprietary) per-hop behavior (PHB). Each node must be configurable to associate a PHB with a particular value of the DSCP field. So far, the IETF has defined a number of standard PHBs:
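As a rough illustration of this per-hop classification, the following sketch (the table contents and function names are our own, not from any standard implementation) extracts the 6-bit DSCP from the former IPv4 TOS octet and looks up a configured PHB:

```python
# Hypothetical per-hop classifier; the codepoint values themselves are the
# standard ones from RFC 2474 (Default), RFC 2597 (AF), and the EF PHB.
PHB_TABLE = {
    0b000000: "Default (Best Effort)",   # DSCP 0
    0b101110: "EF",                      # DSCP 46, Expedited Forwarding
    0b001010: "AF11",                    # DSCP 10, AF class 1, drop prec. 1
    0b011100: "AF32",                    # DSCP 28, AF class 3, drop prec. 2
}

def classify(tos_byte: int) -> str:
    """Extract the 6-bit DSCP from the (former) IPv4 TOS octet and
    look up the configured PHB, falling back to best effort."""
    dscp = tos_byte >> 2          # the DSCP occupies the 6 high-order bits
    return PHB_TABLE.get(dscp, "Default (Best Effort)")

print(classify(0xB8))  # TOS 0xB8 -> DSCP 46 -> EF
```

Unknown codepoints fall back to the Default PHB, which mirrors the behavior [RFC2474] requires of a DiffServ-compliant node.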
Default PHB, also known as Best Effort, provides the normal packet treatment the Internet offers; it is defined in [RFC2474].
Class Selector PHB defines up to eight configurable behaviors, including the Default PHB. This PHB defines a backward-compatibility scheme for the IP precedence mechanism of [RFC791], as described in [RFC2474].
Assured Forwarding PHB (AF PHB, [RFC2597]) defines four classes of service with three drop-precedence levels each, allowing differentiation among classes of traffic and modulation of the loss and delay performance figures in each class based on the buffer space, buffer management, scheduling policy, and bandwidth assigned to it. Traffic may be moved to a higher drop precedence if, upon metering, a particular AF class exceeds an agreed traffic profile. Note that an administrator may decide to mark traffic coming from different users with different drop precedences, even before the AF traffic is metered and checked against the negotiated traffic profile. For instance, it may be an administrative policy to mark ordinary employee traffic with a higher drop precedence than that of vice presidents and higher-level personnel. The traffic conditioning agreements for AF traffic may be defined so that for each traffic class, and for some drop precedences within a traffic class, there is an allowed traffic level, and rules are in place to move traffic from one drop precedence to another. [RFC2697] (Single Rate Three Color Marker) and [RFC2698] (Two Rate Three Color Marker) provide examples of how these could be defined.
Expedited Forwarding PHB (EF PHB) guarantees a bound on the delay variation at each hop, thus allowing for a service that is suitable for applications such as circuit emulation over IP.
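The metering that drives AF drop-precedence promotion can be sketched with a color-blind Single Rate Three Color Marker in the spirit of [RFC2697]. The class and attribute names below are our own illustration; real routers meter in hardware, not per-packet Python:

```python
import time

class SingleRateThreeColorMarker:
    """Color-blind srTCM sketch after RFC 2697: one rate (CIR) and two
    burst sizes (CBS, EBS). Green traffic is within the committed
    profile; yellow traffic gets a higher AF drop precedence; red
    traffic gets the highest drop precedence (or is dropped)."""

    def __init__(self, cir_bytes_per_s: float, cbs: int, ebs: int):
        self.cir = cir_bytes_per_s
        self.cbs, self.ebs = cbs, ebs
        self.tc, self.te = cbs, ebs          # both buckets start full
        self.last = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        tokens = (now - self.last) * self.cir
        self.last = now
        # New tokens fill the committed bucket first; the overflow
        # spills into the excess bucket, as RFC 2697 specifies.
        spill = max(0.0, self.tc + tokens - self.cbs)
        self.tc = min(self.cbs, self.tc + tokens)
        self.te = min(self.ebs, self.te + spill)

    def color(self, size: int) -> str:
        self._refill()
        if self.tc >= size:
            self.tc -= size
            return "green"
        if self.te >= size:
            self.te -= size
            return "yellow"
        return "red"
```

A downstream marker would then map green/yellow/red onto the three drop-precedence codepoints of the packet's AF class.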
The delivery of predictable service in a differentiated services domain is based on well-defined rules for traffic admission to a differentiated services domain. Traffic exchanged with other domains is policed, shaped, and marked according to traffic conditioning agreements that are defined bilaterally between the domains' administrative entities. If all networks comply with the traffic conditioning agreements, and if resources within the nodes of each traversed differentiated domain are provisioned adequately, then it is possible to obtain predictable end-to-end QoS.
When an IP packet is tunneled, it can traverse multiple DiffServ domains, stay within a single DiffServ domain, or even transit non-DiffServ domains. These boundary conditions need to be taken into account when designing a tunnel-based service that offers QoS based on differentiated services. A detailed discussion of this topic is provided in the informational [RFC2983].
In short, [RFC2983] describes two basic models:
In the "transparent wrapper" model, the tunnel is simply a transparent wrapper from a DiffServ point of view: the DSCP field of the inner packet is copied to the DSCP field of the outer packet header at the tunnel ingress, and the DSCP field of the outer IP header is copied back to the DSCP field of the inner IP packet header at the tunnel egress.
In the "pipe model" the tunnel is considered a bearer service with a given QoS profile, and the DSCP field of the header of IP packets sent over it is not copied to the outer IP header upon forwarding (and, likewise, the DSCP field of the outer IP header is not copied to the inner IP header DSCP field when the packet is received at the tunnel egress). The pipe model, therefore, can be regarded as a (virtual) circuit characterized by a service profile determined by the Differentiated Services class the outer IP packet header belongs to.
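The two models can be contrasted with a toy sketch, in which headers are plain dictionaries and the function names are our own invention rather than a real IP stack API:

```python
# Illustrative sketch of the two RFC 2983 tunnel models.

def encapsulate(inner: dict, model: str, pipe_dscp: int = 0) -> dict:
    """Build the outer header at the tunnel ingress."""
    if model == "wrapper":
        outer_dscp = inner["dscp"]   # transparent wrapper: copy inner DSCP out
    else:
        outer_dscp = pipe_dscp       # pipe: outer DSCP comes from the tunnel's
                                     # negotiated service class; inner untouched
    return {"dscp": outer_dscp, "payload": inner}

def decapsulate(outer: dict, model: str) -> dict:
    """Recover the inner packet at the tunnel egress."""
    inner = outer["payload"]
    if model == "wrapper":
        inner["dscp"] = outer["dscp"]  # any en-route remarking propagates inward
    return inner                       # pipe: inner DSCP preserved end to end
```

Note how, in the pipe model, a remarking of the outer header by an intermediate domain never disturbs the inner DSCP, which is exactly the property that preserves AF drop precedence across the tunnel.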
To satisfy the QoS requirements of all IP flows transported over a VPN based on pipe model tunnels, the Differentiated Services class negotiated with the service provider must meet the most stringent QoS requirements of all IP flows carried over the pipe. This may turn out to be quite an expensive setting if the bulk of the traffic is best effort and only a small fraction of the traffic transported over the tunnel requires a high level of QoS. In this case, it may be advisable to define a bundle of pipe model tunnels, one per service class, instead of using a single tunnel for site-to-site connectivity. You might consider this a solution close to the transparent wrapper model. Note, however, that even with a one-to-one mapping between the user DSCP field and the outer header DSCP field at the ingress, that mapping may change at the egress. Most importantly, the inner IP header information, such as the AF drop precedence marked at the ingress, is not changed at the intermediate nodes. The latter property is very important, for instance, when drop precedence information needs to be preserved from the ingress to the egress of the tunnel; it ensures that the most critical traffic always has the lowest likelihood of being dropped.
Figure 2.12 provides a synopsis of what we discussed in the preceding paragraphs. The pipe model applies especially to IPSec tunnels, where for many reasons, such as avoiding traffic-analysis attacks, it is desirable not to copy the inner IP header DSCP field. For instance, if an attacker knew that mission-critical traffic for a particular network was marked with a given value of the DSCP field, and that value were copied from the inner IP packet header to the outer header, the attacker could identify which packets to collect for further analysis of mission-critical transactions. The pipe model is also useful when information such as the AF drop precedence must not be lost (note that the AF drop precedence could be lost in certain situations, such as when an intermediate domain converts all the traffic to EF). In some cases the tunnel ingress point must apply some traffic conditioning before tunneling, for example when remarking is required before tunneling packets over a pipe model tunnel because the destination domain is not DiffServ-capable and accepts only packets marked according to the IP precedence model defined in [RFC791].
QoS differentiation in an MPLS-compliant network may be obtained in two ways. Using the Experimental (EXP) field in the label stack encapsulation header allows for the differentiation of up to eight traffic classes within a single LSP. In this E-LSP approach (EXP-inferred packet-scheduling class LSP), the pair of EXP value and label value of incoming packets at the input interface of the LSR determines the per-hop behavior. The other approach simply associates a PHB with an MPLS label value; this mode is known as L-LSP (Label-only-inferred packet-scheduling class LSP). There is no difference in the level of QoS that can be delivered using these two approaches. However, the ability to use the EXP field for packet classification and scheduling requires special node support and cannot be expected in plain-vanilla MPLS QoS-capable nodes.
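For concreteness, the following sketch (our own illustration, not a router implementation) splits a 32-bit MPLS label stack entry into its fields as laid out in [RFC3032] and shows how the PHB lookup key differs between the two modes:

```python
def parse_shim(entry: int):
    """Split a 32-bit MPLS label stack entry (RFC 3032) into its fields:
    20-bit label, 3-bit EXP, 1-bit bottom-of-stack flag, 8-bit TTL."""
    label = entry >> 12
    exp   = (entry >> 9) & 0x7    # 3 bits: up to 8 classes within one LSP
    s     = (entry >> 8) & 0x1
    ttl   = entry & 0xFF
    return label, exp, s, ttl

def phb_for(label: int, exp: int, mode: str, table: dict) -> str:
    """E-LSP: the PHB is inferred from the (label, EXP) pair.
    L-LSP: the PHB is inferred from the label alone."""
    key = (label, exp) if mode == "E-LSP" else label
    return table[key]
```

In the L-LSP case the `table` would be populated at LSP setup time, when the signaled label is bound to a scheduling class; in the E-LSP case a single label can fan out to up to eight behaviors.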
In many cases, such as ATM switches controlled via MPLS signaling, it is possible to reuse features such as the ATM Cell Loss Priority (CLP) bit to differentiate classes of traffic with different drop priorities within an MPLS LSP. In a sense, this can be regarded as a particular case of the E-LSP approach, although the terminology in this case would not be the best fit.