Methods for Generating the Core Traffic Matrix

As explained in the preceding section, the core traffic matrix is the first requirement for network-wide capacity planning. Note that the core traffic matrix is relevant not only for capacity planning but also for traffic engineering. Traffic engineering is the process of optimizing traffic for a fixed network design; before traffic can be optimized, it must be analyzed, resulting in the core traffic matrix. Before introducing the different solutions described in the following subsections, this section discusses the considerations when generating the core traffic matrix: granularity, internal versus external traffic matrix, and data retrieval.

What are the criteria for selecting a specific method for generating the core traffic matrix? The answers to the following questions naturally include or exclude some of the methods presented in this section.

First, you need to define the granularity of the core traffic matrix:

  • The entry points of the core traffic matrix can be a PoP, a router, a specific interface of a router, or even a source prefix.

  • The exit points can be a PoP, a BGP next hop or MPLS Forwarding Equivalent Class (FEC), or a destination prefix.

The lowest granularity is a classification according to the incoming PoP, resulting in high traffic aggregation. The highest granularity creates more entries in the core traffic matrix, which leads to better capacity planning analysis at the cost of generating more traffic and processing more data. In their paper "A Distributed Approach to Measure IP Traffic Matrices," Konstantina Papagiannaki, Nina Taft, and Anukool Lakhina calculated the number of flows for the different granularities per observation period. Even though the paper assumes specific network characteristics in terms of number of PoPs, routers per PoP, and customer interfaces per router, the comparison of the different scales is interesting:

  • PoP to PoP matrix: 351 flows

  • Router to router matrix: 729 flows

  • Link to link matrix: 6561 flows

  • Source prefix to destination prefix: 5.5 million flows

These numbers show that the number of flows grows dramatically as the granularity of the core traffic matrix increases.

If the core traffic matrix requires classification per Class of Service (CoS), all numbers are to be multiplied by the number of existing Differentiated Services Code Point (DSCP) values.

Second, do you want the capacity planning analysis to be combined for all traffic, or do you want to separate it per class of service?

Next, you should evaluate whether you need the external traffic matrix, which also contains information on where the traffic comes from when entering your network and where it goes when exiting your network. Typically, the external core traffic matrix requires the previous/next BGP AS in the path, or the source/destination BGP AS, or the source/destination IP address or prefix. "Hot potato routing" optimizes the routing decisions by identifying the ISP's nearest exit point.

Just as you pass a hot potato quickly to avoid burning your fingers, an ISP hands over the traffic to another ISP as quickly as possible, reducing the utilization of its own network. In Figure 14-4, the traffic from PoP1 to BGP AS3 takes Path1, exiting via the nearest BGP router, Router 1. To optimize all traffic and lower the link utilization, a possible alternative is to send the best-effort traffic from PoP1 to BGP AS3 via the suboptimal Path2, exiting via BGP Router 3, while the high-importance traffic bound to the SLA takes the optimal Path1. As this example shows, knowledge of the external traffic matrix allows more flexibility to optimize the traffic by tuning the BGP exit routers.

Figure 14-4. External Core Traffic Matrix


Is the core network a pure IP network or an MPLS network? Some monitoring features allow monitoring of only IP packets, and others are dedicated to MPLS packet monitoring. For example, because only the MPLS labels are relevant in the middle of an MPLS core, it is unnecessary to determine the source and destination prefix of the IP packets encapsulated in the MPLS packet.

Which mechanism is required to collect the core traffic matrix? The first solution is a push model that exports the accounting information from the network element. A typical push model example is NetFlow export, for which UDP and SCTP are possible transport protocols. The second solution is the pull model, in which the accounting information is retrieved from the network element, as done with SNMP. SNMP MIB polling offers the advantage that the polling interval can be configured, so the information is retrieved only when needed.
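
As a minimal sketch of the push model, the following traditional NetFlow export configuration sends version 9 records to a collector (the collector address and port are placeholders, and the optional sctp keyword, where supported by the IOS release, selects SCTP instead of UDP as the export transport):

! Export NetFlow version 9 records to the collector (placeholder address and port)
ip flow-export version 9
ip flow-export destination 192.0.2.10 9991
! Where reliable export is supported, SCTP can be selected instead of UDP:
! ip flow-export destination 192.0.2.10 9991 sctp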

The next sections describe both the push model with NetFlow and the pull model with SNMP, covering four different mechanisms: NetFlow BGP Next Hop type of service (ToS) aggregation, MPLS-Aware NetFlow, BGP passive peer on the NetFlow Collector, and BGP Policy Accounting. The answers to the preliminary questions in this section lead to a natural selection of one or maybe two features for a specific scenario.

NetFlow BGP Next Hop ToS Aggregation

Because the BGP next hop is the network's exit point, NetFlow BGP Next Hop ToS aggregation is a simple approach to export the core traffic matrix with NetFlow records.

Table 14-1 lists the NetFlow BGP Next Hop ToS key fields for the core traffic matrix. The inbound interface offers a high level of granularity (potentially identifying the previous router in the case of point-to-point links). The BGP next hop identifies the network's exit point. The source and destination BGP AS deliver the external traffic matrix. The ToS offers the classification per class of service. Finally, the outbound interface can be useful in the case of load balancing within the provider network, because two flow records are created, each with its respective outgoing interface.

Table 14-1. Key Fields and Nonkey Fields for the BGP Next Hop ToS Aggregation
Key Fields                 Nonkey Fields
Inbound interface          Number of flows
Outbound interface         Number of packets
BGP next hop               Number of bytes
Source BGP AS              Flow start sysUpTime
Destination BGP AS         Flow end sysUpTime
ToS


The NetFlow BGP Next Hop ToS aggregation is enabled at the BGP edge routers, on all interfaces where traffic enters the core. In Figure 14-1, it would be on all CE-facing interfaces of the PE routers. This mechanism, which observes the IP packet, is suited for both a pure IP network and an MPLS backbone.
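
As a rough illustration (the interface name and collector address are placeholders, and the exact syntax varies by IOS release; Chapter 7 contains complete configuration examples), enabling the aggregation scheme could look like this:

! BGP next-hop/ToS aggregation cache with NetFlow version 9 export
ip flow-aggregation cache bgp-nexthop-tos
   cache entries 10000
   export destination 192.0.2.10 9991
   export version 9
   enabled
!
! Meter ingress traffic on the CE-facing (edge) interface
interface GigabitEthernet0/1
   ip flow ingress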

Because the traffic is observed as it enters the router, the reported ToS values are related to ingress. In case of "recoloring" (changing the ToS value) at the router, the new ToS values are not reported.

The biggest drawback of this mechanism is that only the traffic toward prefixes present in the BGP table is usefully classified. Indeed, a route not known by the BGP routing protocol is reported with 0.0.0.0 as the BGP next hop.

For further details, such as configuration examples, refer to the "BGP Next-Hop Information Element" section in Chapter 7, "NetFlow."

Flexible NetFlow

BGP Next Hop ToS aggregation can be improved using Flexible NetFlow. Indeed, some of the key fields from Table 14-1 are not essential for the generation of the core traffic matrix:

  • The outbound interface is of interest only in the case of load balancing. Even then, interest is limited, because the capacity planning tool would simulate the IGP routing and deduce the load balancing.

  • The source BGP AS gives some more granularity to the core traffic matrix. However, tuning the BGP parameters to force the entry point for a specific BGP AS is not realistic in practice.

Removing the outbound interface and the source BGP AS from the key fields reduces the number of flows, the router CPU utilization, and the bandwidth requirements for exporting. In addition, it reduces the NetFlow Collector's workload.

The bandwidth requirements for exporting flow records could be reduced further by excluding some of the nonkey fields from Table 14-1:

  • The number of flows is not required by capacity planning.

  • The number of packets is of limited interest, because the number of bytes is the metric of choice for accounting applications.

After applying these simplifications to Table 14-1, the key fields and nonkey fields are reduced to the ones listed in Table 14-2. This is a concrete example of the flexibility provided by Flexible NetFlow.

Table 14-2. Key Fields and Nonkey Fields with Flexible NetFlow
Key Fields                 Nonkey Fields
Inbound interface          Number of bytes
BGP next hop               Flow start sysUpTime
Destination BGP AS         Flow end sysUpTime
ToS


Further reduction of the key fields is possible, but it depends on the network characteristics and the required granularity of the core traffic matrix. For example, if no QoS is implemented in the network, the ToS key field is superfluous. Similarly, the classification per inbound interface might not be required if the router exporting the flow records sufficiently identifies the entry point in the core traffic matrix.

The selection of key fields and nonkey fields in Table 14-2 results in the following configuration:

flow record traffic-matrix-record
   ! Key fields: network exit point, entry point, and class of service
   match routing destination as
   match interface input
   match ipv4 dscp
   match routing next-hop address ipv4 bgp
   ! Nonkey fields: byte count and flow timestamps
   collect counter bytes long
   collect timestamp sys-uptime first
   collect timestamp sys-uptime last

flow exporter capacity-planning-collector
   ! Placeholder collector address and port
   destination 192.0.2.10
   transport udp 9991
   export-protocol netflow-v9

flow monitor traffic-matrix-monitor
   record traffic-matrix-record
   cache entries 10000
   cache type normal
   exporter capacity-planning-collector

interface pos3/0
   ip flow monitor traffic-matrix-monitor input
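
To verify that the monitor populates entries as expected, the cache can be displayed on the router (a hedged example; the exact show syntax depends on the IOS release):

show flow monitor traffic-matrix-monitor cache format table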

MPLS-Aware NetFlow

In the case of MPLS backbones, the MPLS-Aware NetFlow feature is well suited for gathering the core traffic matrix. It monitors the MPLS traffic entering the P routers (that is, the routers in the core) and can offload busy PE routers. An alternative to the ingress collection at the P router is MPLS egress collection at the PE router.

MPLS-Aware NetFlow offers several monitoring flavors, such as monitoring MPLS packets and IP packets, monitoring the underlying IP fields of the MPLS packets, and monitoring multiple labels in the stack. The relevant feature for capacity planning is MPLS-Aware NetFlow Top Label Aggregation, which is chosen for the rest of this section. MPLS-Aware NetFlow Top Label Aggregation monitors only the top label (not any other labels) and does not monitor any fields from the underlying IP packet.

Table 14-3 illustrates the important key fields for the core traffic matrix: the inbound interface offers a high level of granularity (potentially identifying the PE router in the case of point-to-point links), the FEC field provides the IP address of the network exit point, and the top incoming "label" carries the class of service in its 3 EXP bits. The "label," as exported by MPLS-Aware NetFlow, consists of the 24 most significant bits of the 32-bit quantity referred to as the "label stack entry" in RFC 3032, MPLS Label Stack Encoding. It contains the 20 bits of the MPLS label, the 3 EXP bits for experimental use, and the S bit for the bottom of the stack. Note that the 20 bits of the MPLS label are not useful as such; it is the corresponding FEC that is the field of interest.
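
For reference, RFC 3032 lays out the 32-bit label stack entry as follows; the "label" reported by MPLS-Aware NetFlow corresponds to the first 24 of these bits (label, EXP, and S):

   Label (20 bits) | EXP (3 bits) | S (1 bit) | TTL (8 bits)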

Table 14-3. Key Fields and Nonkey Fields for the MPLS-Aware NetFlow Top Label Aggregation
Key Fields                    Nonkey Fields
Inbound interface             Forwarding Equivalent Class of the top label
The top incoming "label"      Number of flows
                              Number of packets
                              Number of bytes
                              Flow start sysUpTime
                              Flow end sysUpTime
                              Type of the top label (LDP, BGP, VPN, etc.)
                              Output interface


MPLS-Aware NetFlow is typically enabled on the P routers. In Figure 14-1, it would be on all PE-facing interfaces of the P routers. This mechanism is suitable only for MPLS backbones.
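
A minimal sketch for a P router might look like the following (the interface name is a placeholder, the no-ip-fields option depends on the IOS release, and the export configuration is omitted; see Chapter 7 for complete examples):

! Monitor only the top label of the MPLS stack; ignore the underlying IP fields
ip flow-cache mpls label-positions 1 no-ip-fields
!
! Meter ingress MPLS traffic on the PE-facing interface of the P router
interface pos2/0
   ip flow ingress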

The biggest advantage of this method is the low overhead, because only a few records are exported to a NetFlow Collector: one record per P router multiplied by the number of interfaces where NetFlow is enabled, multiplied by the number of FECs in the network.
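
For example, a hypothetical backbone with 20 P routers, 10 monitored interfaces per P router, and 500 FECs would export at most 20 x 10 x 500 = 100,000 aggregated records per observation period, far fewer than a per-prefix collection at the network edge would generate.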

However, this method cannot produce the external traffic matrix: the P routers do not know the final destination of the packets, which could be determined only by correlating with the exiting PE router.

For further details, such as configuration examples, refer to the "MPLS-Aware NetFlow" section in Chapter 7.

BGP Passive Peer on the NetFlow Collector

Augmenting the NetFlow records with many new information elements makes the NetFlow metering process more complex and increases the resource consumption on the router. The BGP passive peer feature, introduced in Cisco NetFlow Collector 5.0, moves part of this work to the collector: the NetFlow Collector establishes a BGP connection to one or several routers in the ISP network. This BGP peer listens to all the BGP routing updates without injecting any updates into its peers, hence the name "passive." After receiving the NetFlow records, the NetFlow Collector looks up the destination IP address in the retrieved BGP routing table, exactly as the NetFlow metering process would do locally on the router to determine the BGP route. Based on this lookup, it adds new BGP-related information elements to the flow records at the NetFlow Collector.

As shown in Table 14-4, the new nonkey fields added to the flow records consist of any information that can be retrieved from the BGP table by performing a lookup on the flow IP address or prefix.

Table 14-4. Key Fields and Nonkey Fields for the BGP Passive Peer in the NetFlow Collector
Key Fields:
  • Any key fields used by the NetFlow metering process in the network elements
  • Any BGP-related key field (BGP next hop, source BGP AS, destination BGP AS, full AS path, BGP community, etc.)

Nonkey Fields:
  • Any nonkey fields sent in the flow records
  • Any BGP-related nonkey field (BGP next hop, source BGP AS, destination BGP AS, full AS path, BGP community, etc.)


The most interesting nonkey fields for the core traffic matrix are as follows:

  • BGP next hop— Provides the network exit point.

  • BGP AS (configured as source, destination, or full path)— Offers the external traffic matrix.

  • BGP community— In a network where BGP communities are deployed, a BGP community provides the network exit point.

Note that those extra BGP-related fields can also be used as new key fields, because the Cisco NetFlow Collector allows further flow record aggregation based on any information element present in the flow records.

Imagine that in Figure 14-1 the BGP community is set as a unique value per PoP: each generated internal BGP routing update from this PoP contains a specific BGP community value. Because a specific value of the BGP community represents a specific destination PoP, a classification per BGP community implies a classification per destination PoP. The destination IP address or prefix, exported in the NetFlow record, is looked up in the BGP table on the Cisco NetFlow Collector. From this lookup, the BGP community is extracted. At the Cisco NetFlow Collector, the flow records are first augmented with the BGP community value. Then the flow records are aggregated per BGP community value, producing entries for the core traffic matrix.
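
On the router side, only a regular iBGP session toward the NetFlow Collector is needed, because the collector listens passively. The following is a sketch with placeholder addresses and AS number; the send-community option ensures that the collector learns the per-PoP community values:

router bgp 65001
   ! iBGP session toward the NetFlow Collector acting as a passive BGP peer
   neighbor 192.0.2.10 remote-as 65001
   neighbor 192.0.2.10 description NetFlow Collector, passive BGP peer
   neighbor 192.0.2.10 send-community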

BGP Policy Accounting

The full configuration, show commands, and MIB variables of the BGP Policy Accounting feature are covered in detail in Chapter 8, "BGP Policy Accounting." Therefore, this section mainly compares BGP Policy Accounting with the other methods.

In Figure 14-1, BGP Policy Accounting would be enabled on the CE or PE routers. The only two constraints are that the routers must run BGP and that only IP traffic is observed. Therefore, this feature does not apply to the monitoring of MPLS packets. The first constraint leads to the same limitation as the NetFlow BGP Next Hop ToS Aggregation solution, where only traffic toward destinations known via BGP contributes to the core traffic matrix. A disadvantage of this solution is that the core traffic matrix cannot be generated per CoS, because the routing table does not contain different entries per ToS. However, the BGP Policy Accounting method presents a real advantage: the data is retrieved via SNMP with the CISCO-BGP-POLICY-ACCOUNTING-MIB. This reduces the overhead considerably compared to NetFlow, because long polling intervals, such as one hour, several hours, or one day, are possible. The only constraint when selecting an extended polling interval is to verify that the MIB counters do not wrap twice between two polling cycles.
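
As a brief reminder of how the classification works (Chapter 8 covers the full details; the AS number, names, and interface below are placeholders), routes are tagged with a traffic index through a table map, and the per-index packet and byte counters are then polled via SNMP:

! Classify routes received from AS 65010 into traffic-index bucket 1
ip as-path access-list 1 permit _65010_
!
route-map SET-TRAFFIC-INDEX permit 10
   match as-path 1
   set traffic-index 1
route-map SET-TRAFFIC-INDEX permit 20
   set traffic-index 2
!
router bgp 65001
   table-map SET-TRAFFIC-INDEX
!
! Account ingress IP traffic per traffic index on the edge interface
interface GigabitEthernet0/0
   bgp-policy accounting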

Other Methods

Several research projects propose different approaches to deducing the core traffic matrix, based on various mechanisms: MIB interface counter polling, partial collection of NetFlow and interface counters, IGP metric changes, and the gravity model.

Polling the MIB interface counters permits an estimation of the core traffic matrix based on the total interface traffic. This method provides only a rough estimate, because the number of available counters is an order of magnitude smaller than the number of entries in the core traffic matrix, so the estimation problem is underdetermined. Adding a NetFlow collection at specific elements in the network can improve the accuracy of the core traffic matrix estimation.

Some research papers propose that the traffic flows in the network can be observed from the deltas of the interface counters when the IGP metrics are modified. However, network administrators are reluctant to change the routing configuration, because doing so may lead to suboptimal routing.

The gravity model states that the migration between two cities is proportional to the product of the two cities' populations and inversely proportional to the intervening distance. In other words, the larger the populations and the shorter the distance, the more people commute. Taking into account the number of Internet users and their locations, this theory can be applied to Internet traffic, deducing that the required bandwidth between two cities should be proportional to the product of the two cities' Internet users and inversely proportional to the intervening distance. However, the introduction of network virtualization, such as data centers, partly breaks this model: even though they are composed of only a few devices, data centers exchange high volumes of traffic.
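
Expressed as a formula (a schematic form, where U_i denotes the number of Internet users at site i, d(i, j) the distance between sites i and j, and k a proportionality constant):

   traffic(i, j) ≈ k × U_i × U_j / d(i, j)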

Another research paper proposes to measure the core traffic matrix for only a short period and then to infer its changes from the interface counters, assuming that the counters evolve proportionally to the entries of the core traffic matrix.

Although all these methods tend to ease the generation of the core traffic matrix, the authors think that the introduction of Flexible NetFlow is currently the most accurate solution. Flexible NetFlow offers the best trade-off: the minimum number of flow records for a required level of granularity.

For completeness, three extra methods should be mentioned: the switching counters in an MPLS Label Distribution Protocol (LDP) network, the statistics in the MPLS Label Switching Router MIB (RFC 3813), and the direct measurement of byte counters in a full mesh of MPLS traffic engineering label-switched paths.


