New Features Supported with NetFlow Version 9

The flexibility and extensibility of the NetFlow version 9 protocol offer new possibilities for the metering process. This section describes a series of features enabled on top of NetFlow version 9.

SCTP Export

In the following example, shown in Figure 7-9, the router exports flow records for two different applications:

  • Flow records from the main cache for security purposes. The flow records are exported with partial reliability to 10.10.10.10, with a backup to the host 11.11.11.11 in fail-over mode.

  • Flow records from the aggregation cache for billing purposes, which implies that flow records cannot be lost. They are exported with full reliability to 12.12.12.12, while the backup to 13.13.13.13 is configured in redundant mode.

Figure 7-9. Scenario: SCTP Export

The following CLI configures the scenario shown in Figure 7-9. The backup restore time for the billing scenario has been set to the minimum value. As a result, after a fail-over, export returns to the primary collector quickly, so fewer flow records are delivered only to the backup collector. This is less critical for the monitoring case.

Router(config)# ip flow-export destination 10.10.10.10 9999 sctp
Router(config-flow-export-sctp)# reliability partial buffer-limit 100
Router(config-flow-export-sctp)# backup destination 11.11.11.11 9999
Router(config-flow-export-sctp)# backup fail-over 1000
Router(config-flow-export-sctp)# backup mode fail-over

Router(config)# ip flow-aggregation cache destination-prefix
Router(config-flow-cache)# export destination 12.12.12.12 9999 sctp
Router(config-flow-export-sctp)# backup destination 13.13.13.13 9999
Router(config-flow-export-sctp)# backup mode redundant
Router(config-flow-export-sctp)# backup restore-time 1
Router(config-flow-export-sctp)# exit
Router(config-flow-cache)# enabled

In the following show command, you see that the backup association to 11.11.11.11 is not connected (backup mode failover), while the backup association to 13.13.13.13 is connected because the selected mode is redundant:

Router# show ip flow export sctp verbose
IPv4 main cache exporting to 10.10.10.10, port 9999, partial
status: connected
backup mode: fail-over
104 flows exported in 84 sctp messages.
0 packets dropped due to lack of SCTP resources
fail-over time: 1000 milli-seconds
restore time:   25 seconds
backup: 11.11.11.11, port 9999
   status: not connected
   fail-overs: 0
   0 flows exported in 0 sctp messages.
   0 packets dropped due to lack of SCTP resources
destination-prefix cache exporting to 12.12.12.12, port 9999, full
status: connected
backup mode: redundant
57 flows exported in 42 sctp messages.
0 packets dropped due to lack of SCTP resources
fail-over time: 25 milli-seconds
restore time:   1 seconds
backup: 13.13.13.13, port 9999
   status: connected
   fail-overs: 0
   0 flows exported in 0 sctp messages.
   0 packets dropped due to lack of SCTP resources

Sampled NetFlow

Due to increasing interface speeds and the higher density of ports on network elements, sampling has become a very relevant NetFlow feature. Without sampling, network elements gather so many flow records that flow processing consumes a significant part of the total CPU utilization. In addition, the bandwidth requirements for the export link to the collector increase, and the collector requires extensive resources to process all exported flow records.

Sampled NetFlow significantly decreases CPU utilization. On a Cisco 7500, for example, sampling 1 in 1000 packets reduces the CPU utilization by 82 percent on average, and sampling 1 in 100 packets reduces it by 75 percent.

Even with NetFlow implemented in hardware ASICs on Cisco platforms such as the Catalyst 4500 and 6500 and the Cisco 7600, 10000, and 12000 routers, sampled NetFlow offers advantages, because exporting flow records still has a major impact on CPU utilization, even on hardware-based NetFlow implementations.

Chapter 2 describes the different types of sampling in detail. Random packet sampling is statistically more accurate than deterministic packet sampling because it avoids any bias due to potential traffic repetitions and patterns.

Packet-Based Sampling on the Routers

NetFlow's ability to sample packets was first implemented as a feature called Sampled NetFlow. It uses deterministic sampling, which selects every n-th packet for NetFlow processing on a per-interface basis. For example, if you set the sampling rate to 1 out of 100 packets, Sampled NetFlow samples packets 1, 101, 201, 301, and so on. The Sampled NetFlow feature does not support random sampling and thus can result in inaccurate statistics when traffic arrives with fixed patterns.

Even though the Cisco 12000 router still offers Sampled NetFlow, the majority of the Cisco platforms offer Random Sampled NetFlow. Random Sampled NetFlow selects incoming packets based on a random selection algorithm so that on average one out of n sequential packets is selected for NetFlow processing. For example, if you set the sampling rate to 1 out of 100 packets, NetFlow might sample packets 5, 120, 199, 302, and so on. The sample configuration 1:100 provides NetFlow data on 1 percent of the total traffic. The n value is a configurable parameter, ranging from 1 to 65,535.

The Modular QoS Command-Line Interface (MQC) consists of three components:

  • The class map defines the traffic for inspection.

  • The policy map defines action on the classified traffic.

  • The service policy enables a policy at an interface.

The MQC components come into play later, in the section "NetFlow Input Filters." This section first offers a typical Random Sampled NetFlow configuration example on a router, with three configuration steps.

Step 1: Defining a NetFlow Sampler Map

A NetFlow sampler map defines a set of properties (such as the sampling rate and the NetFlow sampler name) for NetFlow sampling. Each NetFlow sampler map can be applied to one or many subinterfaces, as well as physical interfaces. For example, you can create a NetFlow sampler map named mysampler1 with the following properties: random sampling mode and a sampling rate of 1 out of 100 packets. This NetFlow sampler map can be applied to any number of subinterfaces, each of which would refer to mysampler1 to perform NetFlow sampling. In this case, traffic from these multiple subinterfaces is merged into flows, which introduces even more randomness than sampling per single subinterface does.

router(config)# flow-sampler-map mysampler1
router(config-sampler)# mode random one-out-of 100

Step 2: Applying a NetFlow Sampler Map to an Interface

The following example shows how to apply a NetFlow sampler map called mysampler1 to Ethernet interface 1:

router(config)# interface ethernet 1/0
router(config-if)# flow-sampler mysampler1

Enabling Random Sampled NetFlow on a physical interface does not automatically enable it on all its subinterfaces. In addition, disabling Random Sampled NetFlow on a physical interface or subinterface does not enable full NetFlow. This restriction prevents the unwanted transition from sampling to full NetFlow. Instead, full NetFlow must be configured explicitly.
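
A minimal sketch of such an explicit full NetFlow configuration looks as follows (using the ip flow ingress form; older IOS releases use the equivalent ip route-cache flow command):

router(config)# interface ethernet 1/0
router(config-if)# ip flow ingress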

The show flow-sampler command verifies the status and activity of the configured sampler:

Router# show flow-sampler
Sampler : mysampler1, id : 1, packets matched : 10, mode : random sampling mode
  sampling interval is : 100

Step 3: Checking the NetFlow Cache

The following example displays the NetFlow output of the show ip cache verbose flow command, in which the sampler, class-id, and general flags are set:

Router# show ip cache verbose flow
...
SrcIf          SrcIPaddress    DstIf          DstIPaddress     Pr TOS Flgs  Pkts
Port Msk AS                    Port Msk AS    NextHop               B/Pk  Active
BGP: BGP NextHop

Et1/0          8.8.8.8         Et0/0*         9.9.9.9          01 00  10       3
0000 /8  302                   0800 /8  300   3.3.3.3                100     0.1
BGP: 2.2.2.2         Sampler: 1  Class: 1  FFlags: 01

The ID of the class that matches a packet is stored in the flow. The class ID is exported with version 9. A mapping of the class ID to the class name is sent to the collector using the options templates in NetFlow data export version 9. The collector maintains a mapping table from the class ID to the class name, and the table associates a class name with a flow at the collector so that you can determine which flow is filtered by a specific class.

For reference, the NetFlow flags (FFlags) that might appear in the show ip cache verbose flow command output are as follows:

  • FFlags: 01 (#define FLOW_FLAGS_OUTPUT 0x0001)—Egress flow

  • FFlags: 02 (#define FLOW_FLAGS_DROP 0x0002)—Dropped flow (for example, dropped by an ACL)

  • FFlags: 04 (#define FLOW_FLAGS_MPLS 0x0004)—MPLS flow

  • FFlags: 08 (#define FLOW_FLAGS_IPV6 0x0008)—IPv6 flow

  • FFlags: 10 (#define FLOW_FLAGS_RSVD 0x0010)—Reserved

Sending the Flow-Sampler Information

When the collector receives sampled flow records, it must correlate the sampled traffic with the actual traffic that passed through the device. An approximation is to multiply the sampled packet and byte counts by the sampling rate. This implies that the collector knows the sampling rate for each sampler ID. NetFlow version 9 exports the sampling rate in an options record whose sampler ID matches the sampler ID of the exported flow records.

router(config)# ip flow-export version 9 options sampler

The previous CLI entry enables the export of an option containing random-sampler configuration, including the sampler ID, the sampling mode, and the sampling interval for each configured random sampler. In our example, the collector receives a flow record with the sampler ID equal to 1, along with an option template record containing sampler ID = 1, mode = random sampled NetFlow, and sampling rate = 100.
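
As a sketch of the collector-side arithmetic (the notation is mine, not from the NetFlow specification): if a flow record carries sampled counters of $p$ packets and $b$ bytes, and the options record for its sampler ID announces a sampling rate $R$, the estimated actual totals are

$\hat{P} = p \cdot R, \qquad \hat{B} = b \cdot R$

In the example above, a flow record reporting 10 packets with $R = 100$ therefore represents approximately 1000 packets.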

Flow-Based Sampled NetFlow on the Catalyst

On the Catalyst 6500/Cisco 7600, the NetFlow cache on the Policy Feature Card (PFC) captures statistics for flows routed in hardware. These platforms do not support packet sampling, because the collection process is implemented in ASICs and does not have any impact on the CPU utilization. Although the metering process is implemented in hardware, the flow export still requires software processing, which has a CPU impact. To reduce this, the PFC supports flow-based sampling, which decreases the CPU utilization, because only a subset of flow records is exported. As opposed to packet-based sampling, flow-based sampling is a post-processing feature, which means a sampling mechanism selects a subset of the existing flow entries for export to a collector.

With a Supervisor Engine 720, sampled NetFlow always uses the full-interface flow mask. With a Supervisor Engine 2, sampled NetFlow uses the full-interface or destination-source-interface flow mask. Sampled NetFlow per LAN port is supported with the full-interface flow mask or the destination-source-interface flow mask; this is a major enhancement over the Supervisor Engine 2 functions and the other flow masks, with which sampled NetFlow can be applied only at the device level instead of the interface level.

Two different flavors of flow-based sampling exist: time-based sampling and packet-based sampling.

Time-based sampling exports a snapshot of the NetFlow cache at certain intervals. The time-based sampling rate is specified in Table 7-5, which displays the corresponding sampling time and export interval for a given sampling rate.

Table 7-5. Time-Based Sampling Rate and Export Interval
Sampling Rate (R)    Sampling Interval P (ms)    Sampling Time ΔT (ms)    Idle Time I (ms)
64                   8192                        128                      8064
128                  8192                        64                       8128
256                  8192                        32                       8160
512                  8192                        16                       8176
1024                 8192                        8                        8184
2048                 8192                        4                        8188
4096                 16384                       4                        16380
8192                 32768                       4                        32764


The sampling interval (P) is the time between two purging events of the NetFlow cache, where ΔT is the length of the sampling window in which a snapshot of all flows traversing the device is taken. The idle time (I) is the sampling interval minus the active sampling time (I = P – ΔT). At time 0, the table is cleared, and flow entries are added. At time 0 + ΔT, all flows are exported, the table is flushed again, and the idle time starts. At time 0 + P the cache is cleared without exporting any data, and all previous steps are repeated cyclically.

For time-based sampled NetFlow, the export interval cannot be configured; the sampling time is calculated as ΔT = P/R. Note that the sampling rate is globally defined for the entire Catalyst chassis.

For example, if you configure a rate of 64, flow-based sampled NetFlow meters traffic for the first 128 ms of a total interval of 8192 ms. If the rate is 2048, sampled NetFlow accounts for traffic from the first 4 ms of an 8192-ms interval.
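
In LaTeX notation, the first example works out as follows (values as in Table 7-5):

$\Delta T = \frac{P}{R} = \frac{8192~\text{ms}}{64} = 128~\text{ms}, \qquad I = P - \Delta T = 8064~\text{ms}$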

The following configuration enables time-based sampling on the fastethernet 5/12 interface, with a sampling rate of 1 in 64:

Catalyst(config)# mls sampling time-based 64
Catalyst(config)# interface fastethernet 5/12
Catalyst(config-if)# mls netflow sampling

Packet-based sampling allows the post-processing of flow records based on the number of packets observed in the flow. At each sampling interval (configured in milliseconds), NetFlow exports the flow records for which the number of packets is greater than the configured packet-based sampling rate. The user-configurable parameters are the packet rate and the sampling interval. If no sampling interval is specified, 8192 is used as a default.

Packet-based sampling uses the following formula to sample a flow: the number of times a flow is sampled is approximately the number of packets in the flow divided by the sampling rate (packets_in_flow/sampling_rate). For example, if the flow is 32768 packets long and the sampling rate is 1024, the flow is sampled approximately 32 times (32768/1024).
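
In formula form, with $N$ the number of packets in the flow and $R$ the configured sampling rate:

$n \approx \frac{N}{R} = \frac{32768}{1024} = 32$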

Catalyst(config)# mls sampling packet-based rate [interval]

  • rate is the packet-based sampling rate. Valid values are 64, 128, 256, 512, 1024, 2048, 4096, and 8192.

  • interval (optional) is the sampling interval. Valid values are from 8000 to 16000 milliseconds.
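
For example, the following sketch (the parameter values are chosen from the ranges above, not from a specific deployment) enables packet-based sampling with a rate of 1024 and a sampling interval of 8192 ms, and then enables NetFlow sampling on a LAN port:

Catalyst(config)# mls sampling packet-based 1024 8192
Catalyst(config)# interface fastethernet 5/12
Catalyst(config-if)# mls netflow sampling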

NetFlow Input Filters

NetFlow Input Filters provides traffic metering on a specific subset of traffic by means of input filters. For example, you can select traffic from a specific group of hosts. NetFlow Input Filters is another NetFlow preprocessing feature, similar to packet-based Random Sampled NetFlow.

For the NetFlow Input Filters feature, classification of packets can be based on any of the following: IP source and destination addresses, Layer 4 protocol and port numbers, incoming interface, MAC address, IP Precedence, DSCP value, Layer 2 information (such as Frame Relay DE bits or Ethernet 802.1p bits), and Network-Based Application Recognition (NBAR). First, the packets are classified (filtered) on these criteria, and then they are grouped into NetFlow flows.

The filtering mechanism uses the MQC to classify flows. You can create multiple filters with matching samplers per subinterface. You can also configure different sampling rates by defining higher sampling rates for high-priority traffic classes and lower sampling rates for low-priority traffic. Figure 7-10 shows a typical example. You probably have a tight SLA linked with the VoIP traffic; therefore, full packet monitoring is executed on this traffic class, while a sampling rate of 1:100 checks the SLA on the VPN traffic. Finally, for monitoring purposes, sampled NetFlow classifies the best-effort traffic with a 1:1000 sampling rate.

Figure 7-10. NetFlow Input Filters Example

MQC offers multiple policy actions, such as limiting bandwidth rates and queuing management. These policies are applied only if a packet matches a criterion in a class map that is applied to the subinterface. A class map contains a set of match clauses and instructions on how to evaluate the clauses, and it acts as a filter for the policies. The NetFlow Input Filters feature combines NetFlow accounting with the MQC infrastructure. This implies that flow accounting is done on a packet only if it satisfies the match clauses.

NetFlow Input Filters requires no additional memory. Compared with native NetFlow, NetFlow Input Filters results in a smaller number of NetFlow cache entries, because filtering can significantly reduce the number of flows. Accounting only classified traffic saves router resources by reducing the number of flows being processed and exported.

NetFlow Input Filters is supported in versions 5 and 9. The following four steps describe a configuration example combined with different sampling rates for different classifications:

Step 1.
Creating a class map for a policy map:

Referring to Figure 7-10, the VoIP traffic is classified with access list 101 and a precedence of 5 (DSCP value of 40), and the VPN traffic is classified with access list 102:

router(config)# class-map my_high_importance_class
router(config-cmap)# match access-group 101
router(config-cmap)# match dscp cs5
router(config)# class-map my_medium_importance_class
router(config-cmap)# match access-group 102

Step 2.
Creating a sampler map for a policy map:

In the following example, three sampler maps called my_high_sampling, my_medium_sampling, and my_low_sampling are created for use with a policy map:

router(config)# flow-sampler-map my_high_sampling
router(config-sampler)# mode random one-out-of 1
router(config)# flow-sampler-map my_medium_sampling
router(config-sampler)# mode random one-out-of 100
router(config)# flow-sampler-map my_low_sampling
router(config-sampler)# mode random one-out-of 1000

Step 3.
Creating a policy containing NetFlow sampling actions:

The following example shows how to create a class-based policy containing three NetFlow sampling actions. In this example, a sampling action named my_high_sampling is applied to a class named my_high_importance_class, a sampling action named my_medium_sampling is applied to a class named my_medium_importance_class, and a sampling action named my_low_sampling is applied to the default class:

router(config)# policy-map mypolicymap
router(config-pmap)# class my_high_importance_class
router(config-pmap-c)# netflow-sampler my_high_sampling
router(config-pmap)# class my_medium_importance_class
router(config-pmap-c)# netflow-sampler my_medium_sampling
router(config-pmap)# class class-default
router(config-pmap-c)# netflow-sampler my_low_sampling

Step 4.
Applying a policy to an interface:

The following example shows how to apply a policy containing NetFlow sampling actions to an interface. In this example, a policy named mypolicymap is attached to interface POS1/0:

router(config)# interface POS1/0
router(config-if)# service-policy input mypolicymap

MPLS-Aware NetFlow

NetFlow can be used effectively in an MPLS network for VPN accounting and capacity planning. Ingress NetFlow can be used to account for traffic entering an MPLS VPN network from the customer site. The customer name can be linked to the VRF associated with the particular customer site by correlating it with the ifIndex value. MPLS-Aware NetFlow is a feature dedicated to the monitoring of MPLS flows, because it aggregates traffic per MPLS label within the MPLS core. This feature meters how much traffic is destined for a specific Provider Edge (PE) router in the network, allowing an operator to calculate a traffic matrix between PE routers in the MPLS network.

An MPLS flow contains up to three incoming MPLS labels of interest, with experimental bits (EXP) and the end-of-stack (S) bit in the same positions in the packet label stack. MPLS-Aware NetFlow captures MPLS traffic that contains Layer 3 IP and Layer 2 non-IP packets and uses the NetFlow version 9 export format.

When MPLS traffic is observed, MPLS-Aware NetFlow captures and reports up to three labels of interest, plus the label type and associated IP address of the top label, along with the normal NetFlow version 5 data record. Unlike NetFlow, MPLS-Aware NetFlow reports a value of 0 for the IP next hop, the source and destination BGP Autonomous System numbers, and the source and destination prefix masks of MPLS packets, because a Label Switch Router (LSR) does not have routing information about the IP addresses in the payload of MPLS packets. Other fields, such as source IP address, destination IP address, transport layer protocol, source application port number, destination application port number, IP type of service (ToS), TCP flags, input interface, and output interface, may be used as key-fields.

When you configure the MPLS-Aware NetFlow feature, you can select MPLS label positions in the incoming label stack that are of interest. You can capture up to three labels from positions 1 to 6 in the MPLS label stack. Label positions are counted from the top of the stack. For example, the position of the top label is 1, the position of the next label is 2, and so on. You enter the stack location value as an argument to the following command:

router(config)# ip flow-cache mpls label-positions [label-position-1
  [label-position-2 [label-position-3]]] [no-ip-fields] [mpls-length]

The label-position-n argument represents the position of the label on the incoming label stack. For example, the ip flow-cache mpls label-positions 1 3 4 command configures MPLS-Aware NetFlow to capture and export the first (top), third, and fourth labels. mpls-length reports the length of the MPLS packets, as opposed to the included IP packet length. If the no-ip-fields option is specified, the IP-related fields are reported with null values. With the introduction of Flexible NetFlow supporting MPLS fields, a more convenient solution would be the definition of a new template without those fields containing null values.

In Figure 7-11, some VPN traffic comes from the Customer Edge router 1 (CE1), enters the MPLS core via the Provider Edge router 1 (PE1), is metered with MPLS-Aware NetFlow at Provider router 1 (P1), and exits on PE2, PE3, or PE4.

Figure 7-11. MPLS-Aware NetFlow Example


The following configuration enables MPLS-Aware NetFlow on the PE1-facing interface of P1. It uses the top MPLS label as the key-field (along with the 3 EXP bits and the bottom of the stack bit [S bit]). The bytes reported by NetFlow include the full MPLS packet lengths.

Router(config)# ip flow-cache mpls label-positions 1 no-ip-fields mpls-length
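
Note that, in addition to this global command, NetFlow itself must be enabled on the interfaces to be metered. A minimal sketch for the PE1-facing interface of P1 (the interface name is assumed from the show output that follows):

Router(config)# interface POS2/0
Router(config-if)# ip flow ingress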

The following show command illustrates that the NetFlow cache contains this flow, with a top label of 486, a 3-bit EXP value of 4, a bottom-of-the-stack S bit value of 0, and, most importantly, a Forwarding Equivalence Class (FEC) value of 10.10.10.3. The FEC value points to the loopback of the exit PE router, which is the exit point of the core network.

P1# show ip cache verbose flow
...
SrcIf  SrcIPaddress DstIf  DstIPaddress  Pr  TOS  Flgs  Pkts
Port  Msk  AS  Port  Msk  AS  NextHop  B/Pk  Active
PO2/0  0.0.0.0      PO3/0  0.0.0.0       00  00   10    1729
0000  /0   0   0000  /0   0   0.0.0.0   792  14.6
Pos:Lbl-Exp-S 1:486-4-0 (LDP/10.10.10.3)

Enabling MPLS-Aware NetFlow on all PE-facing interfaces of the P routers produces flow records that provide input to capacity planning tools to draw the core traffic matrix.

BGP Next-Hop Information Element

The BGP next hop is an important key-field in network-wide capacity planning where the core traffic is required, because the BGP next hop characterizes the network's exit point.

NetFlow adds the BGP next-hop information to the flow records in the main cache and, if aggregation is enabled, to the BGP Next-Hop ToS aggregation cache. The router performs a lookup on the destination IP address in the BGP table and adds the BGP next-hop information to each NetFlow flow record. This action adds an extra 16 bytes to the flow entry, resulting in a total of 80 bytes per entry.

Router(config)# ip flow-export version 9 [origin-as | peer-as] bgp-nexthop

This command enables the export of origin or peer AS information as well as BGP next-hop information from the NetFlow main cache. The origin-as option exports the flow's source and destination BGP AS, and peer-as exports the adjacent source and destination BGP AS.
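
For example, the following entry (choosing origin-as; the origin-as and peer-as options are mutually exclusive) enables the export of the originating AS numbers together with the BGP next hop:

Router(config)# ip flow-export version 9 origin-as bgp-nexthop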

Router# show ip cache verbose flow
...
SrcIf          SrcIPaddress    DstIf          DstIPaddress    Pr TOS Flgs Pkts
Port Msk AS                    Port Msk AS    NextHop              B/Pk   Active
BGP:BGP_NextHop
Et0/0/2        10.10.10.100    Et0/0/4        10.20.10.10     01 00  10   20
0000 /8  10                    0800 /8  20    10.10.10.6            100   0.0
BGP:10.10.10.1

The following command enables the BGP Next-Hop ToS aggregation. It reports the origin and destination BGP Autonomous System (AS) values, the input and output interfaces, the DSCP, and the BGP next hop. In addition, the following information elements are present in all aggregations: the number of flows, the number of packets, the number of bytes, and the sysUpTime of the flow start and end times. Note that for ingress NetFlow, the DSCP information element value is the one contained in the observed packet, before any QoS operations are applied to the packet.

Router(config)# ip flow-aggregation cache bgp-nexthop-tos
Router(config-flow-cache)# enabled

Router# show ip cache flow aggregation bgp-nexthop-tos
...
Src If         Src AS  Dst If         Dst AS  TOS Flows   Pkts  B/Pk Active
BGP NextHop
Et0/0/2        10      Et0/0/4          20     00    9     36     40 8.2
BGP:10.10.10.1

Enabling the BGP Next-Hop ToS feature on all the CE-facing interfaces of the PE routers produces flow records that, when combined, yield the core traffic matrix as input to any capacity planning tool. The principles are similar to MPLS-Aware NetFlow. The differences are that NetFlow is enabled at different routers, that MPLS-Aware NetFlow might produce fewer flow records due to the MPLS top-label aggregation, and that the BGP Next-Hop ToS aggregation monitors only IP traffic.

NetFlow Multicast

NetFlow Multicast provides usage information about network traffic for a complete multicast traffic monitoring solution, because it allows metering multicast-specific data for multicast flows, along with the traditional NetFlow data records.

NetFlow Multicast offers three configuration choices:

  • Multicast ingress accounting— Multicast packets are counted as unicast packets with two additional fields: the number of replicated packets and the byte count. With multicast ingress accounting, the destination interface field and the IP next-hop field are set to 0 for multicast flows.

    The number of replicated packets (egress) divided by the number of observed packets (ingress) delivers the multicast replication factor (see the formula after this list). The total number of flows remains limited with multicast ingress accounting, because a new flow entry is generated only for the senders.

  • Multicast egress accounting— All outgoing multicast streams are counted as separate flows. Note that this option generates a higher number of flow entries, because a separate flow is created for each IP Multicast receiver.

  • Multicast ingress and multicast egress accounting— Care should be taken not to duplicate the flow records by enabling ingress and egress accounting at the same device.
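
Expressed as a formula, the replication factor mentioned for ingress accounting is

$f = \frac{\text{replicated (egress) packets}}{\text{observed (ingress) packets}}$

The cache example later in this section, with 200 output packets for 100 observed packets, therefore yields $f = 2$.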

The following example shows how to configure multicast egress NetFlow accounting on the Ethernet 0/0 interface:

Router(config)# interface ethernet 0/0
Router(config-if)# ip multicast netflow egress

The following example shows how to configure multicast ingress NetFlow accounting, in this case on the serial 2/1/1.16 subinterface:

Router(config)# interface serial 2/1/1.16
Router(config-if)# ip multicast netflow ingress
Router# show ip cache verbose flow
...
SrcIf          SrcIPaddress    DstIf          DstIPaddress     Pr TOS Flgs  Pkts
Port Msk AS                    Port Msk AS    NextHop               B/Pk  Active
IPM:OPkts    OBytes
Et1/1/1        11.0.0.1        Null           227.1.1.1        01 55  10     100
0000 /8  0                     0000 /0  0     0.0.0.0                 28     0.0
IPM:  200    5600
Et1/1/1        11.0.0.1        Se2/1/1.16     227.1.1.1        01 55  10     100
0000 /8  0                     0000 /0  0     0.0.0.0                 28     0.0

The IPM:OPkts column displays the number of IP multicast output packets, the IPM:OBytes column displays the number of IPM output bytes, and the DstIPaddress column displays the destination IP address for the IP Multicast output packets.

In this example, the first flow is monitored with Multicast NetFlow ingress. The replication factor is 2, as you can deduce from the 200 multicast output packets and the 100 observed packets. Note that the destination interface reports Null, because in this case, NetFlow does not have an information element suitable for reporting a set of multiple interfaces. The second flow in the cache reports the same multicast flow metered by Multicast NetFlow egress: the destination interface is correct, and the number of packets is 100, as observed on the outgoing interface. This flow does not report the IPM:OPkts and IPM:OBytes because unicast and multicast flows cannot be distinguished at the egress interface.

Figure 7-12 illustrates a typical multicast flow, coming from the source 10.0.0.2 and multicast to 224.10.10.100.

Figure 7-12. NetFlow Multicast Configuration Scenarios, Part 1


Figure 7-13 displays relevant IP Multicast information elements of the flow records, with the three possible configuration scenarios.

Figure 7-13. NetFlow Multicast Configuration Scenarios, Part 2

Another interesting feature of NetFlow Multicast is the accounting for multicast packets that fail the Reverse Path Forwarding (RPF) check:

Router(config)# ip multicast netflow rpf-failure

In multicast, the primary purpose of the RPF check is to prevent loops, which could lead to multicast storms. NetFlow Multicast accounts the dropped packets and supplies relevant information for multicast routing debugging.

NetFlow Layer 2 and Security Monitoring Exports

The NetFlow Layer 2 and Security Monitoring Exports feature adds the ability for NetFlow to capture the values from several extra fields in Layer 3 IP traffic and Layer 2 LAN traffic to identify network attacks and their origin.

The following fields are reported:

  • Time-to-Live (TTL) field, extracted from the IP packet header. The TTL field is used to prevent the indefinite forwarding of IP datagrams. It contains a counter (1 to 255) set by the source host. Each router that processes this datagram decreases the TTL value by 1. When the TTL value reaches 0, the datagram is discarded.

  • Identification (ID) field, extracted from the IP packet header. All fragments of an IP datagram have the same value in the ID field. Subsequent IP datagrams from the same sender have different values in the ID field.

  • Packet length field, extracted from the IP packet header.

  • ICMP type and code, extracted from ICMP data.

  • Source MAC address field from received frames.

  • Destination MAC address field from transmitted frames.

  • VLAN ID field from received frames.

  • VLAN ID field from transmitted frames.

NetFlow reports the minimum and maximum values of TTL and packet length in a flow; note that these two attributes are non-key-fields. Reporting only the value of a specific packet in the flow (such as the first or the last) would not offer relevant information for security analysis.

To enable NetFlow version 9 to report the described functions, the following additional configuration command is required:

Router(config)# ip flow-capture {icmp | ip-id | mac-addresses | packet-length |
  ttl | vlan-id}
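
The command captures one set of fields per invocation, so it is repeated for each field of interest. For example, to capture the TTL and packet-length values shown in the cache output that follows:

Router(config)# ip flow-capture ttl
Router(config)# ip flow-capture packet-length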

A typical flow in the NetFlow cache looks like this:

Router# show ip cache verbose flow
...
SrcIf          SrcIPaddress    DstIf          DstIPaddress     Pr TOS Flgs  Pkts
Port Msk AS                    Port Msk AS    NextHop               B/Pk  Active
Et0/0.1        10.251.138.218  Et1/0.1        172.16.10.2     06 80   00      65
0015 /0  0                     0015 /0  0     0.0.0.0                840    10.8
MAC: (VLAN id)  aaaa.bbbb.cc03  (005)          aaaa.bbbb.cc06   (006)
Min plen:      840                            Max plen:        840
Min TTL:       59                             Max TTL:           59
IP id:            0

Top Talkers

The NetFlow Top Talkers feature can be useful for analyzing network traffic in any of the following ways:

  • Security— List the top talkers to see if traffic patterns consistent with a denial of service (DoS) attack are present in your network.

  • Load balancing— Identify the heavily used network elements and paths in the network, and move network traffic to less-used routes in the network.

  • Traffic analysis— Consult the data retrieved from the NetFlow MIB and Top Talkers feature to assist in general traffic study and planning for your network. Because the top flow records can be retrieved via SNMP, this feature offers new possibilities in network baselining.

The Top Talkers feature allows the top flows to be sorted for easier monitoring, troubleshooting, and retrieval at the CLI level. Top talkers can be sorted by the following criteria:

  • Total number of packets in each top talker

  • Total number of bytes in each top talker

In addition to sorting top talkers, you can restrict the output to flows that match specific criteria. The match command, which acts as a filter, specifies those criteria, such as source IP address, destination IP address, application port, and many more.
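
As a minimal configuration sketch (the prefix 10.10.10.0/24 is a hypothetical filter value), the following tracks the top ten talkers, sorted by byte count and restricted to a single source prefix; the resulting list can then be displayed with the show ip flow top-talkers command:

Router(config)# ip flow-top-talkers
Router(config-flow-top-talkers)# top 10
Router(config-flow-top-talkers)# sort-by bytes
Router(config-flow-top-talkers)# match source address 10.10.10.0/24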

The match command has the following syntax:

match {byte-range [max-byte-number min-byte-number | max max-byte-number |
  min min-byte-number] | class-map map-name | destination [address ip-address
  [mask | /nn] | as as-number | port [max-port-number min-port-number |
  max max-port-number | min min-port-number]] | direction [ingress | egress] |
  flow-sampler flow-sampler-name | input-interface interface-type
  interface-number | nexthop-address ip-address [mask | /nn] |
  output-interface interface-type interface-number | packet-range
  [max-packets min-packets | max max-packets | min min-packets] |
  protocol [protocol-number | udp | tcp] | source [address ip-address
  [mask | /nn] | as as-number | port [max-port-number min-port-number |
  max max-port-number | min min-port-number]] | tos [tos-byte | dscp dscp |
  precedence precedence]}

Table 7-6 lists the different match statement options for the NetFlow Top Talkers feature.

Table 7-6. match Statement Options for the NetFlow Top Talkers Feature
Option                               Description
source address                       Specifies that the match criterion is based on the source IP address.
destination address                  Specifies that the match criterion is based on the destination IP address.
nexthop address                      Specifies that the match criterion is based on the next-hop IP address.
ip-address                           IP address of the source, destination, or next-hop address to be matched.
mask                                 Address mask in dotted-decimal format.
/nn                                  Address mask as entered in Classless Interdomain Routing (CIDR) format. An address mask of 255.255.255.0 is equivalent to a /24 mask in CIDR format.
source port                          Specifies that the match criterion is based on the source port.
destination port                     Specifies that the match criterion is based on the destination port.
port-number                          Specifies that the match criterion is based on the port number.
min port                             The minimum port number to be matched. Any port number equal to or greater than this number constitutes a match. Range: 0 to 65535.
max port                             The maximum port number to be matched. Any port number equal to or less than this number constitutes a match. Range: 0 to 65535.
min port max port                    A range of port numbers to be matched. Range: 0 to 65535.
source as                            Specifies that the match criterion is based on the source autonomous system.
destination as                       Specifies that the match criterion is based on the destination autonomous system.
as-number                            The autonomous system number to be matched.
input-interface                      Specifies that the match criterion is based on the input interface.
output-interface                     Specifies that the match criterion is based on the output interface.
interface                            The interface to be matched.
tos                                  Specifies that the match criterion is based on the type of service (ToS).
tos-value                            The ToS to be matched.
dscp dscp-value                      Differentiated Services Code Point (DSCP) value to be matched.
precedence precedence-value          Precedence value to be matched.
protocol                             Specifies that the match criterion is based on the protocol.
protocol-number                      The protocol number to be matched. Range: 0 to 255.
tcp                                  The protocol number to be matched as TCP.
udp                                  The protocol number to be matched as UDP.
flow-sampler                         Specifies that the match criterion is based on top talker sampling.
flow-sampler-name                    Name of the top talker sampler to be matched.
class-map                            Specifies that the match criterion is based on a class map.
class                                Name of the class map to be matched.
packet-range                         Specifies that the match criterion is based on the number of IP datagrams in the flows.
byte-range                           Specifies that the match criterion is based on the size in bytes of the IP datagrams in the flows.
min-range-number max-range-number    Range of bytes or packets to be matched. Range: 1 to 4,294,967,295.
min minimum-range                    Minimum number of bytes or packets to be matched. Range: 1 to 4,294,967,295.