Ethernet has become the technology of choice for local area networks (LANs). Originally designed to transmit at 3 Mbps, a base Ethernet network interface can now transmit data at 10 Mbps, and the latest Ethernet technology supports data transmission at 10 Gbps! Supported media for Ethernet include thick and thin coaxial, fiber-optic, and twisted-pair cables.
The major reason for the success of Ethernet in industry was the adoption of the Ethernet standard, IEEE 802.3 (Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications), which allows for interoperability between different vendors' products. This specification has enabled many different vendors to produce network interfaces and media that support Ethernet.
Ethernet is a very flexible system because interfaces operating at different transmission rates can be connected to the same LAN.
Ethernet comprises three elements:
Physical media segments, which are used to interconnect systems
The Media Access Control (MAC) rules that implement access to Ethernet channels
A frame that organizes data to be transmitted in a standard way
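The frame element can be made concrete with a short sketch. The following Python fragment is illustrative only: the preamble and start-frame delimiter are omitted, and the frame check sequence uses Python's zlib CRC-32 as a stand-in for the calculation that real interfaces perform in hardware.

```python
import struct
import zlib

def build_frame(dest_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble a basic Ethernet II frame: destination and source
    addresses (6 bytes each), a 2-byte EtherType, the payload padded
    to the 46-byte minimum, and a 4-byte frame check sequence."""
    if len(payload) < 46:                    # pad short payloads to the minimum
        payload = payload + b"\x00" * (46 - len(payload))
    header = dest_mac + src_mac + struct.pack("!H", ethertype)
    fcs = struct.pack("<I", zlib.crc32(header + payload))  # CRC-32 over header + data
    return header + payload + fcs

frame = build_frame(b"\xff" * 6,             # broadcast destination
                    b"\x00\x11\x22\x33\x44\x55",
                    0x0800,                  # IPv4
                    b"hello")
print(len(frame))   # 64: the minimum Ethernet frame size in bytes
```

Because every station receives every frame on the channel, the destination address at the start of the frame is what lets each interface decide whether a frame is addressed to it.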
Systems connected to the Ethernet are technically known as stations. Every station on the network is independent; access is not centrally controlled, because the medium allows signaling to be received and interpreted by all stations. Data is transmitted across the Ethernet serially, one bit at a time.
When transmitting data, a station must wait for the channel to be free of data before sending a packet formatted as a frame.
Because each station must wait for the channel to be free before sending its own packets, you can appreciate the potential for traffic congestion, and even a "broadcast storm," if one station has a lot of data to send. However, after transmitting one packet, each station must compete with all others for the right to transmit each subsequent frame. The MAC access control system prevents traffic congestion from occurring. It is quite normal, for example, for collision rates of 50 percent to exist without any noticeable impact on performance.
A more insidious problem occurs with so-called late collisions. These are collisions between two hosts that go undetected, because the transmission latency between the two hosts exceeds the maximum time allowed to detect a collision. If late collisions occur more than 1 percent of the time, serious problems may emerge in terms of data throughput and potential data corruption.
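The window for detecting a collision can be worked out from the minimum frame size. The short calculation below shows why a collision domain that is physically too large produces late collisions; the 2 × 10⁸ m/s propagation speed is a rough assumption for copper cable, and real 802.3 distance budgets are tighter because they also account for repeater and interface delays.

```python
BIT_RATE = 10e6          # 10 Mbps Ethernet
MIN_FRAME_BITS = 64 * 8  # minimum frame size: 64 bytes = 512 bits
PROP_SPEED = 2e8         # rough signal speed in copper, m/s (assumption)

# A collision is only detectable while the sender is still transmitting,
# so the round-trip delay across the network must fit within the time
# taken to send one minimum-sized frame (the "slot time").
slot_time = MIN_FRAME_BITS / BIT_RATE        # seconds
max_one_way_m = slot_time * PROP_SPEED / 2   # beyond this, collisions arrive late

print(round(slot_time * 1e6, 1), "us slot;",
      round(max_one_way_m), "m theoretical one-way limit")
```

Any collision whose echo arrives after the sender has finished transmitting its 512th bit is, by definition, a late collision.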
The mechanism for preventing packet collisions is the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) method specified by the IEEE standard. Before transmitting data, a station must enter Carrier Sense (CS) mode. If no data is detected on the channel, all stations have an equal opportunity to transmit a frame, a condition known as Multiple Access (MA). If two or more stations begin transmitting frames and detect that they are transmitting at the same time, a state known as Collision Detection (CD), the stations halt transmission, re-enter CS mode, and wait for the next MA opportunity. Collisions can occur because there is a time difference between when two stations might detect MA, depending on their "distance" from each other in the network. When a collision occurs, the frames must be re-sent by their respective stations. The process flow for CSMA/CD is shown in Figure 33-1.
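The process flow can be sketched in Python. The `nic` object and its `carrier_sense()`, `transmit()`, and `collision_detected()` hooks are hypothetical stand-ins for the network interface hardware, and the backoff here is a simple randomized delay rather than the standard's full schedule:

```python
import random
import time

SLOT_TIME = 51.2e-6   # one slot = 512 bit times on 10 Mbps Ethernet

def csma_cd_send(frame, nic):
    """One frame's trip through the CSMA/CD decision flow."""
    for attempt in range(1, 17):             # up to 16 delivery attempts
        while nic.carrier_sense():           # CS: defer while the channel is busy
            time.sleep(SLOT_TIME)
        nic.transmit(frame)                  # MA: the channel looked free, so send
        if not nic.collision_detected():     # CD: no simultaneous transmission
            return True
        # Collision: wait a randomized delay so the colliding stations
        # are unlikely to retry in lockstep (simplified backoff)
        time.sleep(random.random() * SLOT_TIME * 10)
    return False                             # delivery failed after 16 attempts

class ToyNIC:
    """Fake interface that reports collisions on the first two attempts."""
    def __init__(self):
        self.attempts = 0
    def carrier_sense(self):
        return False                         # the channel is always idle here
    def transmit(self, frame):
        self.attempts += 1
    def collision_detected(self):
        return self.attempts <= 2            # collide twice, then succeed

nic = ToyNIC()
print(csma_cd_send(b"data", nic), nic.attempts)   # True 3
```

The toy interface collides twice before succeeding, so the frame goes out on the third attempt, well within the 16-attempt limit.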
When systematic problems emerge in a LAN, demonstrated by much lower than theoretical transmission rates, a design flaw in the network layout could be causing a large number of collisions. You might be wondering how, if a CD event occurs, two stations can avoid retransmitting at the same time in the future, thereby repeating their previous collision. The answer is that the delay before retransmission is randomized for each network interface. This prevents repetitive locking, and delivery of a packet is always attempted 16 times before a failure is declared. When more stations are added to a single LAN, the number of collisions also increases. With high-speed networks, the delay caused by retransmission of a packet is usually on the order of microseconds rather than milliseconds. If the number of retransmissions escalates, then there is a planned, exponential reduction in network traffic, affecting all stations, until stable operation is restored.
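The randomized, escalating delay described above is known as truncated binary exponential backoff. A minimal sketch, assuming the 51.2-microsecond slot time of 10 Mbps Ethernet:

```python
import random

SLOT_TIME_US = 51.2   # one slot = 512 bit times at 10 Mbps

def backoff_delay_us(collision_count):
    """Truncated binary exponential backoff: after the nth successive
    collision, wait a random number of slot times drawn from
    [0, 2^k - 1], where k = min(n, 10). After 16 failed attempts
    the frame is dropped and a failure is reported."""
    if collision_count > 16:
        raise RuntimeError("excessive collisions: delivery failed")
    k = min(collision_count, 10)
    return random.randrange(2 ** k) * SLOT_TIME_US

# After the first collision a station waits 0 or 1 slots; the range
# doubles with each successive collision, thinning traffic exponentially.
print([backoff_delay_us(n) for n in (1, 2, 3, 10)])
```

Doubling the backoff range on each collision is what produces the planned, exponential reduction in traffic: the busier the channel, the more thinly retransmissions are spread in time.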
One of the important things to note about Ethernet, with respect to quality of service, is that Ethernet is not a guaranteed delivery system, unlike some other networking technologies. This is because Ethernet operates on the principle of best effort, given the available resources. Ethernet is susceptible to electrical artifacts, interference, and a number of other problems that may interfere with data transmission. However, for most practical purposes, Ethernet performs very well. If assured delivery is required, higher-level protocols (based on message queuing, for example) must be implemented on top of Ethernet.
Transport layer protocols like the Transmission Control Protocol (TCP) label each packet with a sequence number to ensure that all packets are received and reassembled in the correct order.
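A toy illustration of sequence-number reassembly, using (sequence number, data) pairs as simplified stand-ins for real TCP segments:

```python
def reassemble(packets):
    """Reorder received segments by sequence number, the way a transport
    protocol such as TCP presents bytes to the application in order even
    when the network delivers segments out of order."""
    return b"".join(data for _, data in sorted(packets))

received = [(2, b"world"), (1, b" "), (0, b"hello")]   # arrived out of order
print(reassemble(received))   # b'hello world'
```

A missing sequence number also tells the receiver that a segment was lost, which is what allows TCP to request retransmission and so provide reliable delivery over Ethernet's best-effort service.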
Ethernet has a logical topology, or tree-like structure, that is distinct from the set of physical interfaces that are interconnected using networking cable. One of the implications of this tree-like structure is that individual branches can be segmented in order to logically isolate structural groups. This structure also allows a large number of unrelated networks to be connected to each other, forming the basis of the Internet as we know it. Individual network branches can be linked together by using a repeater of some kind, such as a hub or a switch. In either case, the Ethernet channel can be extended beyond the local boundaries imposed by a single branch. A hub only connects interfaces on a single segment, while a switch can interconnect multiple LANs.