With the ever-increasing demand for greater data transfer speeds (bandwidth) and the higher quality of service requirements being placed on our communications and networking infrastructure, particularly the additional burden created by the rapid adoption of converged services such as Voice over Internet Protocol (VoIP), it is now more crucial than ever that our data transport systems operate at maximal efficiency while exhibiting the lowest possible latency.
The major factor here is proving to be the latency associated with data throughput processing. Because of the way communications and networking protocols (TCP/IP included) work, the fewer addressing-related processing and analysis operations an infrastructure device such as a router or switch needs to perform, the more efficient the system and the lower the impact of administrative overhead on actual user data bandwidth and throughput.
Where routers relying on IP addressing need to strip the Layer 2 header and trailer and then examine the Layer 3 header for addressing information, switches don’t. Instead, they make their switching decisions based on Layer 2 header addressing information (MAC Addresses). It doesn’t take Einstein to work out that the fewer processing steps required to make a correct switching decision and send the frame on its way, the better the overall system data throughput will be.
So it is that because a switch performs less processing per frame than a router to do its job, it is capable of higher effective throughput per processing cycle.
Another critical traditional difference between switches and routers is that switches perform their Layer 2 switching functions in hardware (dedicated integrated circuits) that repeat the same simple process over and over, and today’s switches do this at mind-numbing speeds. Routers, by contrast, implement their routing functions in software (an operating system). Yes, your little el cheapo switch performs the same basic function as a high-end managed switch using pretty much the same type of hardware; the managed switch simply offers more user-definable features and additional capabilities.
Internal Switching Methods
The three main methods of internal switching used in production environment switches today, in order of increasing latency, are:
Cut-Through – Also referred to as Fast Forward by some, including Cisco®. In Cut-Through mode the switch waits only for the destination Media Access Control (MAC) Address (also referred to as the hardware address) to be received. Once the MAC Address is known, the switch refers to its MAC filter table to determine which interface (port) the frame should next be placed upon, and immediately begins forwarding the frame through that port, even before the entire frame has arrived.
Bearing in mind that the destination MAC Address is contained in the first bits of the frame, there is very little delay between the switch learning the destination MAC Address, making its switching (forwarding) decision, and starting to forward the frame. As there is generally little other processing involved, it is easy to see why this method is also called Fast Forward: it has the lowest latency, and is therefore the fastest, of the three methods discussed here.
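As a rough illustration (not vendor code), the cut-through decision can be sketched in Python. The MAC table entries and port numbers here are invented for the example; the point is that a forwarding decision is possible as soon as the first six bytes (the destination MAC Address) have arrived:

```python
# Hypothetical MAC filter table: destination MAC Address -> egress port.
mac_table = {
    "aa:bb:cc:00:00:01": 1,
    "aa:bb:cc:00:00:02": 2,
}

def cut_through(received_so_far):
    """Return the egress port as soon as the destination MAC is readable.

    `received_so_far` is the bytes of the frame received up to this moment.
    Returns None while fewer than 6 bytes have arrived (decision not yet
    possible) or when the destination is unknown.
    """
    if len(received_so_far) < 6:
        return None  # destination MAC not fully received yet: keep waiting
    dest_mac = ":".join(f"{b:02x}" for b in received_so_far[:6])
    # Forwarding can begin immediately; the rest of the frame simply
    # streams through behind this decision.
    return mac_table.get(dest_mac)
```

Note that nothing after byte six is ever examined, which is exactly why cut-through cannot detect a damaged or truncated frame: it is already forwarding by the time the damage arrives.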
Fragment Free – Also referred to as Modified Cut-Through; this is the default mode used by Cisco® Catalyst® 1900 series switches. The switching method is the same as Cut-Through except that in Fragment Free mode the switch checks the first 64 bytes of every received frame for fragmentation.
The reasoning behind this is that, statistically speaking, the vast bulk of errors occur within the first 64 bytes of a frame. These errors, or frame fragments, are known as runts, and on Ethernet networks they are generally created as the result of collisions. When a runt is detected, the switch automatically drops the frame.
If all is well and there is no frame fragmentation, the switch then looks up the destination MAC Address in its MAC filter table. Once again, as with the Cut-Through method, the switch begins to transmit the frame as soon as it knows which port to forward it through. Waiting to check the first 64 bytes of each frame adds latency to the Fragment Free method compared with Cut-Through, which does not perform this additional processing on every frame received.
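Continuing the same invented sketch, Fragment Free differs only in that it refuses to forward anything shorter than 64 bytes (the minimum legal Ethernet frame size), so collision runts are dropped rather than propagated:

```python
# Hypothetical MAC filter table, as in the cut-through sketch above.
mac_table = {
    "aa:bb:cc:00:00:01": 1,
    "aa:bb:cc:00:00:02": 2,
}

MIN_VALID_FRAME = 64  # frames shorter than this are runts (collision debris)

def fragment_free(frame):
    """Forward a complete received frame only if it is not a runt.

    Returns the egress port, or None when the frame is dropped
    (runt or unknown destination).
    """
    if len(frame) < MIN_VALID_FRAME:
        return None  # runt: almost certainly a collision fragment, drop it
    dest_mac = ":".join(f"{b:02x}" for b in frame[:6])
    return mac_table.get(dest_mac)
```

The extra latency of this method is simply the time taken for bytes 7 through 64 to arrive before the (otherwise identical) forwarding decision is acted on.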
In a fully switched Local Area Network (LAN) there are only two devices per segment (the switch and a client device), so no collisions can occur; most administrators will therefore change to the Cut-Through switching method, as its lower latency adds considerably to a network’s effective bandwidth.
Store-and-Forward – The third basic switching method is known as Store-and-Forward because in this mode the switch always stores the incoming data frame in its internal buffer. Once the complete frame has been received and buffered, the switch runs a Cyclic Redundancy Check (CRC) against it. If the CRC passes, the switch looks up the destination MAC Address in its MAC filter table to learn which port the frame should be forwarded through, and then does so.
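A final sketch, under the same invented MAC table, shows why Store-and-Forward has the highest latency of the three: nothing happens until the whole frame, including its trailing 4-byte Frame Check Sequence, has been buffered and verified. The CRC-32 here (Python's `zlib.crc32`, with the FCS taken as little-endian bytes) is an assumption standing in for the Ethernet FCS calculation:

```python
import zlib

# Hypothetical MAC filter table, as in the earlier sketches.
mac_table = {
    "aa:bb:cc:00:00:01": 1,
    "aa:bb:cc:00:00:02": 2,
}

def store_and_forward(frame):
    """Buffer the entire frame, verify its trailing CRC, then forward.

    Returns the egress port, or None when the frame fails its CRC
    or the destination is unknown.
    """
    if len(frame) < 10:  # too short to even hold a MAC plus a 4-byte FCS
        return None
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "little") != fcs:
        return None  # checksum mismatch: corrupted frame, drop it
    # Only now, with the frame fully received and verified, is the
    # forwarding decision made.
    dest_mac = ":".join(f"{b:02x}" for b in payload[:6])
    return mac_table.get(dest_mac)
```

The trade-off is clear from the structure: Store-and-Forward guarantees that no corrupted frame is ever propagated, at the cost of a per-frame delay proportional to the frame's length.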