Link Capacity Planning

The capacity planning case study is divided into two parts: link capacity planning and network-wide capacity planning. Each part uses different accounting features, as described in Part II. Starting simply, the link capacity planning scenario is addressed first. The second part, network-wide capacity planning, is the more extensive: it covers the requirements and the relationships with network performance monitoring, peering agreements, and traffic engineering. Some aspects and examples of capacity planning were explained in earlier chapters:

  • Several sections of Chapter 1, "Understanding the Need for Accounting and Performance Management," are related to capacity planning:

    - "Capacity Planning" justifies the need for capacity planning and explains the interaction between the core traffic matrix and capacity planning.

    - "Traffic Profiling and Engineering" is the companion section on optimizing the performance of traffic handling in operational networks, where the goal of the optimization is to minimize overutilization of capacity.

    - "Peering and Transit Agreement" is dedicated to the monitoring and capacity planning of BGP peering.

    - "Network Performance Monitoring" deals with the measurement of Service Level Agreement (SLA) data.

  • Chapter 4, "SNMP and MIBs," offers an example of link capacity planning with the EVENT and EXPRESSION MIBs.

Note

As an introduction to this chapter, read the sections "Capacity Planning" and "Traffic Profiling and Engineering" in Chapter 1.

Link capacity planning examines the trend of link utilization to determine when a bandwidth upgrade will be required.

As explained in the "Device and Link Performance" section of Chapter 13, "Monitoring Scenarios," polling the ifInOctets and ifOutOctets MIB variables allows the computation of the link utilization, assuming that the bandwidth is correctly configured for each interface. The configured bandwidth is reported by the ifSpeed MIB variable. ifSpeed contains a default value per interface type, so it should be verified; if necessary, you can change the reported value with the IOS bandwidth interface command.
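As a rough illustration (not the book's tooling), the utilization computation can be sketched in Python. The poll values, function names, and 300-second interval below are assumptions for the example; only the counter arithmetic reflects the text:

```python
# Sketch: link utilization from two successive polls of the IF-MIB
# counters ifInOctets/ifOutOctets and the ifSpeed variable.
# The poll values here are illustrative, not from a live device.

COUNTER32_MAX = 2**32  # ifInOctets/ifOutOctets are 32-bit counters

def counter_delta(old, new, modulus=COUNTER32_MAX):
    """Octet delta between two polls, allowing for one counter wrap."""
    return (new - old) % modulus

def link_utilization(in_old, in_new, out_old, out_new, if_speed_bps, interval_s):
    """Percent utilization over the polling interval (full duplex:
    take the busier direction; a half-duplex link would sum both)."""
    in_bps = counter_delta(in_old, in_new) * 8 / interval_s
    out_bps = counter_delta(out_old, out_new) * 8 / interval_s
    return max(in_bps, out_bps) / if_speed_bps * 100

# Example: 10-Mbps link, 300-second polling interval
util = link_utilization(1_000_000, 76_000_000, 2_000_000, 39_500_000,
                        10_000_000, 300)
print(round(util, 1))  # busier direction carried 75 MB in 300 s -> 20.0
```

Note that high-speed interfaces would use the 64-bit ifHCInOctets/ifHCOutOctets counters instead, which avoid frequent wraps.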

Although constant SNMP polling provides the link utilization trend, proactive trending analysis is recommended to help you find the right time to upgrade the link. The EVENT MIB and EXPRESSION MIB provide a proactive mechanism based on SNMP notifications generated at the network element. Indeed, as investigated in the "How to Create New MIB Objects" section in Chapter 4, the EXPRESSION MIB lets you create new MIB variables based on existing MIB variables. In this case, a new MIB variable is defined from the ifInOctets, ifOutOctets, and ifSpeed variables of the IF-MIB, allowing direct monitoring of the interface utilization with a single MIB variable. In conjunction with the EVENT MIB, this new variable can be monitored for threshold crossing, in which case an SNMP notification is generated. Here's a typical example: if the link utilization stays above 70 percent (or another specified threshold) for one hour, a notification is sent to the network management station, indicating that it might be time to upgrade the link capacity, or at least to closely monitor link utilization, including the protocols and applications involved. For further information, refer to Chapter 4, which explains this example in greater detail.
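The behavior of such a trigger can be sketched in Python. The 300-second sampling interval, 70 percent threshold, and one-hour sustain window mirror the example above; the function and variable names are purely illustrative, and the actual evaluation happens on the network element, not in a script:

```python
# Sketch of the threshold-crossing logic applied to the derived
# utilization variable: raise one notification when utilization stays
# above the threshold for a full sustain window, then re-arm only
# after it falls back below. Sample data and names are illustrative.

def threshold_events(samples, threshold=70.0, interval_s=300, sustain_s=3600):
    """samples: utilization percentages, one per polling interval.
    Returns the sample indexes at which a notification would fire."""
    events, above_for, armed = [], 0, True
    for i, util in enumerate(samples):
        if util > threshold:
            above_for += interval_s
            if armed and above_for >= sustain_s:
                events.append(i)   # the SNMP notification would be sent here
                armed = False      # one event per sustained episode
        else:
            above_for, armed = 0, True
    return events

# Twelve consecutive 300-second samples above 70 percent = one hour
print(threshold_events([72.0] * 12))  # -> [11]
```

Re-arming only after the utilization drops back below the threshold avoids flooding the management station with duplicate notifications for the same episode.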

A mechanism based on a threshold-crossing alarm is adequate for short-term planning, but longer-term planning requires an extrapolation of the utilization based on current and future link usage. Input from the service management systems is important for determining the capacity that will be required in the future.
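A minimal sketch of such an extrapolation, assuming weekly utilization averages and a simple linear trend (a real forecast would also fold in seasonality and the planned growth reported by the service management systems; all names and values here are illustrative):

```python
# Sketch: extrapolate a utilization trend to estimate when a link will
# reach a planning limit (for example, 80 percent), using an ordinary
# least-squares line over weekly utilization averages.

def fit_line(xs, ys):
    """Least-squares slope and intercept of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def weeks_until(limit, weeks, utils):
    """Weeks from the last sample until the fitted trend crosses `limit`."""
    a, b = fit_line(weeks, utils)
    if a <= 0:
        return None  # flat or shrinking traffic: no upgrade forecast
    return (limit - b) / a - weeks[-1]

# Example: weekly average utilization growing about 2 points per week
utils = [40, 42, 44, 46, 48, 50]
print(weeks_until(80, list(range(6)), utils))  # -> 15.0
```

A forecast like this gives the operations team the lead time needed to schedule the upgrade within a maintenance window.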

Note that when a link upgrade is required, the operations team should decide the specific time of the upgrade so that, for example, maintenance windows can be taken into account.

Although capacity planning monitors link utilization, which is one of the key performance indicators, upgrading the link bandwidth is not always the solution for a better user experience. Indeed, protocols such as HTTP, FTP, and SMTP might consume the additionally available bandwidth very quickly and therefore compete with business-critical applications. The user experience is linked to the specific services used, which have different SLA requirements, with characteristic metrics such as packet loss, delay, and delay variation. A proposed solution is to identify the active applications on the link, along with their respective bandwidth utilization, priorities (such as business-critical, real-time, and best-effort), and SLA parameters.

The next step is classifying traffic based on quality of service (QoS) parameters, where specific applications are individually prioritized and potentially rate-limited or blocked. If QoS is deployed, link capacity planning should be decomposed per Class of Service (CoS). In this case, the CISCO-CLASS-BASED-QOS-MIB is the MIB of choice, because it provides the necessary statistics, including summary counters and rates by traffic class before and after the enforcement of QoS policies. For a detailed explanation of MIB capabilities, refer to the Chapter 4 section "Technology-Specific MIBs for Accounting and Performance." Two monitoring solutions exist:

  • Immediate polling of the CISCO-CLASS-BASED-QOS-MIB from the network management system via SNMP

  • Proactive fault management at the network element, by setting thresholds on variables in the CISCO-CLASS-BASED-QOS-MIB in conjunction with the RMON events and alarms or the EVENT MIB
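Whichever collection method is used, the per-class counters lend themselves to a simple capacity report. The sketch below assumes already-polled values shaped like the pre-policy and post-policy byte counters of the CISCO-CLASS-BASED-QOS-MIB; the dictionary layout, function name, and sample figures are illustrative, not the MIB's actual object names:

```python
# Sketch: per-class capacity figures from counters in the style of the
# CISCO-CLASS-BASED-QOS-MIB (bytes before and after QoS enforcement).
# The input dictionaries stand in for polled values.

def class_report(classes, interval_s, link_bps):
    """For each traffic class: offered/transmitted rate as a percent of
    link capacity, and the percentage of offered bytes dropped by QoS."""
    report = {}
    for name, c in classes.items():
        offered_bps = c["pre_policy_bytes"] * 8 / interval_s
        sent_bps = c["post_policy_bytes"] * 8 / interval_s
        dropped = c["pre_policy_bytes"] - c["post_policy_bytes"]
        report[name] = {
            "offered_pct": round(offered_bps / link_bps * 100, 1),
            "sent_pct": round(sent_bps / link_bps * 100, 1),
            "drop_pct": round(dropped / c["pre_policy_bytes"] * 100, 1)
                        if c["pre_policy_bytes"] else 0.0,
        }
    return report

# Example: 10-Mbps link, 300-second interval, two traffic classes
classes = {
    "realtime":    {"pre_policy_bytes": 37_500_000,
                    "post_policy_bytes": 37_500_000},
    "best-effort": {"pre_policy_bytes": 112_500_000,
                    "post_policy_bytes": 75_000_000},
}
print(class_report(classes, 300, 10_000_000))
```

A sustained gap between the offered and transmitted rates of a class is exactly the per-CoS signal that link capacity planning should act on, rather than the aggregate interface counters alone.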
