The MIBs discussed in the following sections are a collection of general-purpose information sources for accounting and performance management. They are not related to a specific technology, which means that they can be used in any type of network. There is a close relationship between performance and fault management, which the health-related MIBs in particular illustrate clearly.
These MIBs do not provide direct performance counters. They indirectly monitor device and network performance by sending proactive notifications for potential fault sources, such as temperature indicators or failed system fans. For more details on fault and performance management, refer to Performance and Fault Management (Cisco Press, 2000). The following sections describe relevant accounting and performance management MIB details.
RFC 1213 provides the latest MIB-II definition, after multiple RFC iterations (the initial definition was RFC 1066, MIB-I). The concept is simple and effective: each interface is represented by an ifEntry, a row in the ifTable that contains a sequence of objects. Some of these are listed here:
ifIndex is a unique value for each interface.
ifSpeed is an estimate of the interface's current bandwidth in bits per second.
ifInOctets is the total number of octets received on the interface, including framing characters.
ifInUcastPkts is the number of (subnet) unicast packets delivered to a higher-layer protocol.
ifInNUcastPkts is the number of nonunicast (subnet broadcast or subnet multicast) packets.
ifInErrors is the number of inbound packets that contained errors preventing them from being delivered to a higher-layer protocol.
ifInDiscards is the number of inbound packets that were discarded, even though no errors were detected, to prevent their being delivered to a higher-layer protocol. One possible reason for discarding such a packet could be that the existing buffer was occupied.
ifOutOctets, ifOutUcastPkts, ifOutNUcastPkts, ifOutErrors, and ifOutDiscards are the outbound counterparts of the inbound objects just described. However, the reasons for errors and discards are probably different.
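These counters are cumulative, so rates and utilization must be derived from successive polls. A minimal sketch (function name and poll values are illustrative, not part of the MIB) that also handles a single 32-bit counter wrap between two polls:

```python
# Sketch: inbound utilization from two successive polls of ifInOctets,
# given ifSpeed. Handles at most one Counter32 wrap between the polls.

COUNTER32_MAX = 2**32

def in_utilization(octets_t1, octets_t2, interval_s, if_speed_bps):
    """Percent inbound utilization between two polls interval_s apart."""
    delta = octets_t2 - octets_t1
    if delta < 0:                      # the 32-bit counter wrapped once
        delta += COUNTER32_MAX
    bits = delta * 8
    return 100.0 * bits / (interval_s * if_speed_bps)

# Example: the counter wrapped between two 60-second polls
# on a 10-Mbps interface.
print(in_utilization(4294967000, 704, 60, 10_000_000))  # ~0.0013 percent
```

Note that if more than one wrap can occur within the polling interval, the computed delta is ambiguous, which is exactly the problem the high-capacity counters described later are meant to solve.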
The IF-MIB (RFC 2863) extends the distinction between unicast and nonunicast by introducing two new objects and deprecating the former nonunicast objects (ifInNUcastPkts, ifOutNUcastPkts):
ifInMulticastPkts is the number of packets, delivered by this sublayer to a higher (sub)layer, that were addressed to a multicast address at this sublayer.
ifInBroadcastPkts is the number of packets, delivered by this sublayer to a higher (sub)layer, that were addressed to a broadcast address at this sublayer.
In addition, RFC 2863, based on SMIv2, introduces the concept of 64-bit high-capacity counters to avoid quick counter wrap. Indeed, as the speed of network media increases, the minimum time in which a 32-bit counter wraps decreases. For example, a 10-Mbps stream of back-to-back, full-size packets causes ifInOctets to wrap in just over 57 minutes. At 100 Mbps, the minimum wrap time is 5.7 minutes, and at 1 Gbps, the minimum is 34 seconds. Requiring interfaces to be polled frequently to avoid missing a counter wrap becomes increasingly problematic.
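The quoted wrap times follow directly from the counter width; a quick check (the helper name is illustrative):

```python
# Sketch: minimum time for a 32-bit octet counter such as ifInOctets to
# wrap on a fully saturated link, reproducing the figures quoted above.

def wrap_time_seconds(speed_bps):
    """Seconds until a Counter32 of octets wraps on a saturated link."""
    return (2**32 * 8) / speed_bps   # 2^32 octets = 2^35 bits

for label, bps in [("10 Mbps", 10e6), ("100 Mbps", 100e6), ("1 Gbps", 1e9)]:
    print(f"{label}: {wrap_time_seconds(bps):.0f} s")
# 10 Mbps -> about 3436 s (just over 57 minutes), 1 Gbps -> about 34 s
```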
ifTable is augmented with new counters, reflected in an extended naming syntax that includes HC (high capacity), as in ifHCInOctets and ifHCOutOctets.
The Cisco private CISCO-IF-EXTENSION-MIB extends the IF-MIB even further by providing additional objects that are essential for identifying abnormal packets and conditions:
cieIfInRuntsErrs is the number of packets input on a particular physical interface that were dropped because they were smaller than the minimum allowable physical media limit.
cieIfInGiantsErrs is the number of input packets on a particular physical interface that were dropped because they were larger than the ifMtu (largest permitted size of a packet that can be sent/received on an interface).
cieIfInFramingErrs is the number of input packets on a physical interface that were misaligned or had framing errors.
cieIfInputQueueDrops, cieIfOutputQueueDrops indicate the number of packets dropped by the interface, even though no error was detected.
The CISCO-PING-MIB enables a Cisco network element to ping remote devices. This can be useful in distributed environments to reduce the overhead of central polling. A possible scenario would be a provider edge router (PE) in a PoP that polls unmanaged customer edge (CE) routers for availability in their respective VRFs. After completing the configured number of ping operations, the PE can optionally generate an event toward a central fault management application, which draws conclusions from the results of the ping operation.
The ciscoPingTable table offers the following read-write parameters (among others) for each entry:
ciscoPingAddress is the address of the device to be pinged.
ciscoPingPacketCount specifies the number of ping packets to send to the target in this sequence.
ciscoPingPacketTimeout specifies how long to wait for a response to a transmitted packet before declaring the packet dropped.
ciscoPingDelay specifies the minimum amount of time to wait before sending the next packet in a sequence after receiving a response or declaring a timeout for a previous packet.
ciscoPingTrapOnCompletion specifies whether a trap (ciscoPingCompleted) should be issued on completion of the sequence of pings.
If a management station wants to create an entry in the ciscoPingTable table, it should first generate a pseudo-random serial number to be used as the index to this sparse table.
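Picking the index can be sketched as follows; new_ping_serial() is a hypothetical helper, the index range is an illustrative positive Integer32, and the SET sequence in the comments assumes the usual RowStatus-style row creation:

```python
# Sketch: choosing a pseudo-random row index for ciscoPingTable, as the
# MIB recommends, to avoid colliding with entries created by other
# management stations.

import random

def new_ping_serial():
    """Return a pseudo-random serial usable as a sparse-table index."""
    return random.randint(1, 2**31 - 1)   # positive Integer32 (assumed range)

serial = new_ping_serial()
# The row would then be created with SET requests along these lines:
#   ciscoPingAddress.<serial>       = <target address>
#   ciscoPingPacketCount.<serial>   = 5
#   ciscoPingPacketTimeout.<serial> = 2000   (milliseconds)
#   plus a row-activation SET to start the ping sequence
```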
The ciscoPingTable table offers the following read-only parameters (among others) for each entry:
ciscoPingMinRtt is the minimum round-trip time (RTT) of all the packets that have been sent in this sequence.
ciscoPingAvgRtt is the average RTT of all the packets that have been sent in this sequence.
ciscoPingMaxRtt is the maximum RTT of all the packets that have been sent in this sequence.
ciscoPingCompleted is set to true when all the packets in this sequence either have been responded to or have timed out.
A trap ciscoPingCompleted can be generated after a completed operation. It contains the following details: ciscoPingCompleted, ciscoPingSentPackets, ciscoPingReceivedPackets, ciscoPingMinRtt, ciscoPingAvgRtt, ciscoPingMaxRtt, and ciscoPingVrfName. The latter object has a valid value only if the ping was sent to a VPN address.
If enabled, the ciscoPingCompleted trap is sent after the operation completes, regardless of the result (success or failure). There is no option to issue a trap only if the operation failed!
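The read-only RTT objects summarize the per-packet results. Conceptually, the aggregation looks like this (the helper name and sample data are illustrative; timed-out packets contribute no RTT sample):

```python
# Sketch: the kind of aggregation behind ciscoPingMinRtt, ciscoPingAvgRtt,
# and ciscoPingMaxRtt, over per-packet round-trip times in milliseconds.

def rtt_summary(rtts_ms):
    """Summarize RTT samples; returns None if every packet timed out."""
    if not rtts_ms:
        return None
    return {
        "min": min(rtts_ms),
        "avg": sum(rtts_ms) // len(rtts_ms),   # integer milliseconds
        "max": max(rtts_ms),
    }

print(rtt_summary([12, 15, 11, 30]))   # {'min': 11, 'avg': 17, 'max': 30}
```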
The CISCO-PROCESS-MIB collects statistics on the network element's CPU utilization, both globally for the device and additionally per process. The MIB provides a great level of detail for each process, such as allocated memory, the number of times the process has been invoked, and so on. From a device performance perspective, the following global device parameters are applicable. This MIB has no read-write objects. The relevant read-only MIB objects are as follows:
cpmCPUTotal5secRev is the overall CPU busy percentage in the last 5-second period.
cpmCPUTotal1minRev is the overall CPU busy percentage in the last 1-minute period.
cpmCPUTotal5minRev is the overall CPU busy percentage in the last 5-minute period.
cpmProcessPID contains the process ID.
cpmProcessName is the name associated with this process.
cpmProcessTimeCreated is the time when the process was created.
cpmProcExtRuntimeRev is the amount of CPU time the process has used, in microseconds.
cpmProcExtInvokedRev is the number of times that the process has been invoked since cpmProcessTimeCreated.
cpmProcExtUtil5SecRev provides a general idea of how busy the process kept the processor over the last 5-second period. Similar objects exist for process utilization over the last 1 and 5 minutes.
cpmProcExtPriorityRev is the priority level at which the process is running (critical, high, normal, low, not assigned).
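Because these objects carry no built-in thresholds, an NMS typically compares the polled values against its own limits. A minimal sketch (the threshold value and helper name are assumptions; only the object names come from the MIB):

```python
# Sketch: evaluating polled CPU-busy percentages from CISCO-PROCESS-MIB
# against an operator-chosen threshold.

CPU_THRESHOLD_PCT = 80   # illustrative operator policy

def cpu_alerts(samples):
    """samples: mapping of MIB object name -> busy percentage from one poll."""
    return [name for name, pct in samples.items() if pct > CPU_THRESHOLD_PCT]

poll = {"cpmCPUTotal5secRev": 95,
        "cpmCPUTotal1minRev": 70,
        "cpmCPUTotal5minRev": 55}
print(cpu_alerts(poll))   # ['cpmCPUTotal5secRev']
```

A short 5-second spike is often harmless; alerting only when the 1- or 5-minute averages cross the limit is a common way to suppress noise.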
The same results that the MIB offers can be retrieved at the CLI using the show process cpu command, as shown in Table 4-4.
PID | Runtime (ms) | Invoked | uSecs | 5 Seconds | 1 Minute | 5 Minute | TTY | Process
Note that because of the indexing mechanism defined in the ENTITY-MIB (cpmCPUTotalPhysicalIndex specified in the CISCO-PROCESS-MIB points to entPhysicalEntry in the ENTITY-MIB), the CISCO-PROCESS-MIB can monitor the CPU of processes running on distributed systems. A typical example is CPU monitoring on line cards or VIP cards.
The CISCO-ENVMON-MIB collects environmental details, such as temperature, voltage conditions, and fan status. It contains various predefined thresholds and notifications to warn an operator about potential issues. Because these objects are not directly relevant for accounting and performance management, no further MIB objects are described here. However, it is advisable to enable the notifications and point them toward a fault management system. An automatic shutdown of a network element, such as one caused by overheating, undervoltage, or malfunctioning fans, certainly has an impact on the network's overall performance.
The CISCO-HEALTH-MONITOR-MIB is similar to the CISCO-ENVMON-MIB; however, the health status is represented by a metric that consists of a set of predefined rules. An advantage of the CISCO-HEALTH-MONITOR-MIB is the configurable thresholds that can be adjusted by the operator in units of 0.01 percent.
The CISCO-MEMORY-POOL-MIB collects statistics on the memory utilization, such as used memory, amount of free memory, largest free blocks, and memory pool utilization over the last 1, 5, and 10 minutes. Similar to the health monitoring, memory monitoring should be enabled for proactive identification of critical situations. Because this MIB does not provide accounting and performance managed objects, no further details are presented here.
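For reference, pool utilization can be derived from the used and free byte counts (ciscoMemoryPoolUsed and ciscoMemoryPoolFree); the helper below is illustrative:

```python
# Sketch: deriving memory pool utilization from the used/free byte
# counts exposed by CISCO-MEMORY-POOL-MIB.

def pool_utilization_pct(used_bytes, free_bytes):
    """Percent of the pool currently in use."""
    return 100.0 * used_bytes / (used_bytes + free_bytes)

print(round(pool_utilization_pct(60_000_000, 40_000_000)))  # 60
```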
The CISCO-DATA-COLLECTION-MIB is a new approach that addresses the drawbacks of SNMP polling for performance and accounting management. A Network Management System (NMS) needs to poll the network elements frequently, sometimes every 30 seconds, to get up-to-date and (close to) real-time information and to avoid counter wrapping.
This causes additional network utilization as well as CPU resource consumption at the NMS. During network connectivity issues, the NMS might not retrieve any information from the network element. As an alternative to regular MIB object polling from the NMS, the network element can collect its own MIB data and store it in a local file. This is the concept of the CISCO-DATA-COLLECTION-MIB. Locally storing bulk statistics also helps minimize data loss during temporary network outages, because network elements can retain the collected data until connectivity is restored.
Another aspect is the CPU consumption at the network element concerning local versus remote polling. Having devices poll their own (locally) managed variables might reduce the CPU impact, because internal communication could be implemented more efficiently than handling SNMP requests from the NMS server.
Introduced in Cisco IOS 12.0(24)S and 12.3(2)T, this MIB module allows a management application to select a set of MIB object instances whose values need to be collected periodically. The configuration tasks consist of the following:
Specifying a set of instances (of the MIB objects in a data group) whose values need to be collected—that is, the grouping of MIB objects into data groups. All the objects in an object list have to share the same MIB index. However, the objects do not need to be in the same MIB and do not need to belong to the same MIB table. For example, it is possible to group ifInOctets and an Ethernet MIB object in the same schema, because the containing tables for both objects are indexed by the ifIndex.
Data selection for the Periodic MIB Data Collection and Transfer Mechanism requires the definition of a schema with the following information: name of an object list, instance (specific or wildcarded) that needs to be retrieved for objects in the object list, and how often the specified instances need to be sampled (polling interval). Wildcarding is an important benefit, because all indexed entries in a table can be retrieved by selecting the main object identifier (OID) with the wildcard function.
Collecting the required object values into local virtual files, either periodically or on demand. The collection period is configurable.
Reporting statistics and errors during the collection interval by generating an SNMP trap (cdcVFileCollectionError).
After the collection period ends (the default is 30 minutes), the virtual file is frozen, and a new virtual file is created for storing data. The frozen virtual file is then transferred to a specified destination. The network management application can choose to retain such frozen virtual files on the network element for a certain period, called the retention period.
Transferring the virtual files to specified locations in the network (such as to the Network Management System) by using FTP, TFTP, or RCP. The transfer status can be reported to the NMS by generating an SNMP trap (cdcFileXferComplete, which is either true or false) and logging a syslog error message locally at the network element. You can also configure a secondary destination for the file to be used if the file cannot be transferred to the primary destination.
Deleting virtual files periodically or on demand. The default setting is to delete the file after a successful transfer.
CISCO-DATA-COLLECTION-MIB introduces a number of new terms:
Base objects are MIB objects whose values an application wants to collect.
Data group is a group of base objects that can be of two types—object or table. An object type data group can consist of only one fully instantiated base object. A table type data group can consist of more than one base object, where each is a columnar object in a conceptual table. In addition, a table type data group can specify the instances of the base objects whose values need to be collected into the VFiles.
Virtual file (VFile) is a file-like entity used to collect data. A network element can implement a VFile as a simple buffer in the main memory, or it might use a file in the local file system. The file is called virtual because a network management application does not need to know the location of the VFile, because the MIB provides mechanisms to transfer the VFile to a location specified by the NMS application.
Current VFile points to the VFile into which MIB object data is currently being collected.
Frozen VFile refers to a VFile that is no longer used to collect data. Only frozen VFiles can be transferred to specified destinations.
Collection interval is associated with a VFile and specifies the interval at which the VFile is used to collect data. However, there are conditions under which a collection interval can be shorter than the specified time. For example, a collection interval is prematurely terminated when the maximum size of a VFile is exceeded or when an error condition occurs.
Polling period is associated with a data group. It determines the frequency at which the base MIB objects of a data group should be fetched and stored in a VFile.
A produced VFile looks like this:
Schema-def ATM2/0-IFMIB "%u, %s, %u, %u, %u, %u"
  epochtime ifDescr instanceoid ifInOctets ifOutOctets ifInUcastPkts ifInDiscards
ATM2/0-IFMIB: 954417080, ATM2/0, 2, 95678, 23456, 234, 345
ATM2/0-IFMIB: 954417080, ATM2/0.1, 8, 95458, 54356, 245, 454
ATM2/0-IFMIB: 954417080, ATM2/0.2, 9, 45678, 8756, 934, 36756
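Rows in such a file can be parsed mechanically by the NMS after transfer. A minimal sketch (the parser itself is illustrative; the field order is taken from the Schema-def line above, and all values are kept as strings):

```python
# Sketch: parsing one data row of a frozen VFile produced by the
# CISCO-DATA-COLLECTION-MIB, per the schema shown above.

FIELDS = ["epochtime", "ifDescr", "instanceoid",
          "ifInOctets", "ifOutOctets", "ifInUcastPkts", "ifInDiscards"]

def parse_vfile_row(line):
    """Map one 'schema: v1, v2, ...' row to a dict keyed by field name."""
    schema, _, rest = line.partition(": ")
    values = [v.strip() for v in rest.split(",")]
    record = dict(zip(FIELDS, values))
    record["schema"] = schema
    return record

row = parse_vfile_row("ATM2/0-IFMIB: 954417080, ATM2/0, 2, 95678, 23456, 234, 345")
print(row["ifDescr"], row["ifInOctets"])   # ATM2/0 95678
```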
The CISCO-DATA-COLLECTION-MIB is mainly targeted for medium to high-end platforms that have sufficient local storage (volatile or permanent) to store the VFiles.
A legacy alternative to the CISCO-DATA-COLLECTION-MIB is the CISCO-BULK-FILE-MIB. It provides similar functionality, but it offers only a manual mode and has no timers for scheduled operations. Because the CISCO-DATA-COLLECTION-MIB has multiple advantages over the CISCO-BULK-FILE-MIB (such as more powerful and flexible data selection features and grouping of MIB objects from different tables into data groups), it will replace the CISCO-BULK-FILE-MIB in the future. It is suggested that you use the CISCO-DATA-COLLECTION-MIB instead of the CISCO-BULK-FILE-MIB if applicable.