Buffering Data

In shared bus architectures, frames must wait their turn with the central arbiter before being transmitted. Frames can also be delayed when congestion occurs in a crossbar switch fabric. As a result, frames must be buffered until they can be transmitted. Without an effective buffering scheme, frames are more likely to be dropped whenever oversubscription or congestion occurs.

Buffers are used when more traffic is forwarded to a port than that port can transmit. Common causes include the following:

  • Speed mismatch between ingress and egress ports

  • Multiple input ports feeding a single output port

  • Half-duplex collisions on an output port

  • A combination of any of the above
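As a rough illustration of the first two causes, the sketch below estimates how quickly a buffer fills under sustained oversubscription. The rates and the function name are hypothetical, not tied to any particular Catalyst platform:

```python
# Hypothetical illustration: when ingress traffic arrives faster than the
# egress port can transmit, the surplus accumulates in the port's buffer.

def buffered_bytes(ingress_bps, egress_bps, seconds):
    """Bytes queued after `seconds` of sustained oversubscription."""
    surplus_bps = max(ingress_bps - egress_bps, 0)
    return surplus_bps // 8 * seconds  # convert bits to bytes

# A 1 Gbps source feeding a 100 Mbps port for one second queues
# roughly 112 MB -- far more than any per-port buffer holds,
# which is why sustained oversubscription leads to drops.
print(buffered_bytes(1_000_000_000, 100_000_000, 1))  # 112500000
```

In practice the surplus would be dropped once the real buffer is exhausted; the point of the sketch is only to show how fast even a brief speed mismatch consumes buffer memory.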

To prevent frames from being dropped, two common types of memory management are used with Catalyst switches:

  • Port buffered memory

  • Shared memory

Port Buffered Memory

Switches utilizing port buffered memory, such as the Catalyst 5000, provide each Ethernet port with a certain amount of high-speed memory to buffer frames until transmitted. A disadvantage of port buffered memory is that frames are dropped when a port runs out of buffers. One method of maximizing the benefits of buffers is the use of flexible buffer sizes. Catalyst 5000 Ethernet line card port buffer memory is flexible and can create frame buffers for any frame size, making the most of the available buffer memory. Catalyst 5000 Ethernet cards that use the SAINT ASIC contain 192 KB of buffer memory per port: 24 KB for receive or input buffers and 168 KB for transmit or output buffers.

Using the 168 KB of transmit buffers, each port can create as many as 2500 64-byte buffers. With most of the buffers in use as an output queue, the Catalyst 5000 family has eliminated head-of-line blocking issues. (You learn more about head-of-line blocking later in this chapter in the section "Congestion and Head-of-Line Blocking.") In normal operation, the input queue is never used for more than one frame, because the switching bus runs at a high speed.
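The buffer arithmetic above can be checked directly. This sketch assumes 1 KB = 1024 bytes; the text's figure of "as many as 2500" buffers is an approximation below the raw slot count, presumably leaving room for per-buffer overhead:

```python
# Per-port buffer memory on SAINT-based Catalyst 5000 Ethernet cards:
# 192 KB total, split 24 KB receive / 168 KB transmit.
RX_KB, TX_KB = 24, 168
BUFFER_SIZE = 64  # bytes in a minimum-size frame buffer

assert RX_KB + TX_KB == 192  # the split accounts for all per-port memory

# Raw number of 64-byte transmit buffers that fit in 168 KB.
tx_buffers = TX_KB * 1024 // BUFFER_SIZE
print(tx_buffers)  # 2688
```

The raw count of 2688 slots is consistent with the stated ceiling of roughly 2500 usable 64-byte buffers once bookkeeping overhead is subtracted.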

Figure 2-5 illustrates port buffered memory.

Figure 2-5. Port Buffered Memory



Shared Memory

Some of the earliest Cisco switches use a shared memory design for port buffering. Switches using a shared memory architecture provide all ports access to that memory at the same time in the form of shared frame or packet buffers. All ingress frames are stored in a shared memory "pool" until the egress ports are ready to transmit. Switches dynamically allocate the shared memory in the form of buffers, accommodating ports with high amounts of ingress traffic without allocating unnecessary buffers for idle ports.

The Catalyst 1200 series switch is an early example of a shared memory switch. The Catalyst 1200 supports both Ethernet and FDDI and has 4 MB of shared packet dynamic random-access memory (DRAM). Packets are handled first in, first out (FIFO).

More recent examples of switches using shared memory architectures are the Catalyst 4000 and 4500 series switches. The Catalyst 4000 with a Supervisor I utilizes 8 MB of Static RAM (SRAM) as dynamic frame buffers. All frames are switched using a central processor or ASIC and are stored in packet buffers until switched. The Catalyst 4000 Supervisor I can create approximately 4000 shared packet buffers. The Catalyst 4500 Supervisor IV, for example, utilizes 16 MB of SRAM for packet buffers. Shared memory buffer sizes may vary depending on the platform, but are most often allocated in increments ranging from 64 to 256 bytes. Figure 2-6 illustrates how incoming frames are stored in 64-byte increments in shared memory until switched by the switching engine.
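The 64-byte increments described above can be sketched as a simple cell calculation. This is an illustrative model only (the function name and the assumption that a frame occupies whole cells are mine, not from any Catalyst documentation):

```python
import math

CELL = 64  # bytes per shared-memory buffer increment

def cells_needed(frame_bytes):
    """Number of 64-byte shared-buffer cells a frame consumes,
    assuming frames occupy whole cells."""
    return math.ceil(frame_bytes / CELL)

# A minimum-size (64-byte) Ethernet frame fits in a single cell,
# while a maximum-size (1518-byte) frame spans 24 cells.
print(cells_needed(64))    # 1
print(cells_needed(1518))  # 24
```

Allocating in small fixed-size cells is what lets the pool adapt: a busy port consumes many cells while an idle port consumes none, rather than each port tying up a fixed private reservation.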

Figure 2-6. Shared Memory Architecture
