Connection Oriented Protocols


When looking at connection-oriented protocols from the big-picture perspective, what matters is achieving the correct end result regardless of the mechanics of how it is done. As far as connection-oriented protocols are concerned, the end result is paramount. This is why they are referred to as “reliable delivery systems”.

Data Integrity – It is pointless to transfer X megabytes of data if the delivery is tainted by corruption. Ensuring the integrity of the data is one of the fundamental tenets of information security, and without it the whole kit and caboodle comes crashing down on top of us.
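As a minimal sketch of the idea (not tied to any particular protocol), a receiver can verify that a payload arrived uncorrupted by recomputing a cryptographic digest and comparing it to the one the sender computed. Python's standard hashlib module is assumed here purely for illustration:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a payload."""
    return hashlib.sha256(data).hexdigest()

# Sender computes a digest before transmission.
payload = b"X megabytes of mission-critical data"
sent_digest = sha256_digest(payload)

# Receiver recomputes the digest on what arrived; any mismatch
# means the data was corrupted somewhere in transit.
received = payload  # pretend this came over the network
assert sha256_digest(received) == sent_digest
```

Real transport protocols use much lighter-weight checksums per segment (TCP's 16-bit checksum, for example), but the verify-on-receipt principle is the same.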

Data Format and Presentation – We may also need to view the information as its creator intended. This is, after all, the very reason Adobe developed the Portable Document Format (PDF): so that end systems, regardless of their local software environments, can view the same document in the same way as every other end system.

Internet Protocol Evolution

From the very earliest days of the forerunners of the Internet (e.g. ARPANET) it was realized that reliable delivery mechanisms were crucial to guaranteed delivery of the message, regardless of the path it takes on its journey from point A to wherever.

This is one aspect in which the military are particularly interested. In short, the message must get through in a pristine state regardless of the physical network’s status (degradation due to the actions of an “enemy”, etc.).

Since it was the US Department of Defense (DOD) that was funding this early internetworking project, these ideals were built into the system in the form of the protocol processing stack we now know as TCP/IP.

Protocol Layered Model

Right from the get-go it was decided that a layered model was the best approach for developing and building the desired communications/networking processing stack. The reasons for this were many, but high on the list is the fact that it allowed each layer’s functionality to be defined and developed independently, as an entity apart from the others.

Another benefit was that the layers could be developed in parallel, meaning that different teams could concentrate on one aspect or layer while other teams worked on others. This dramatically reduces the overall time required to develop a new technology, especially one as complicated as a protocol processing stack.

When all of the constituent components (the layers) are recombined, the resultant protocol processing stack functions as a single, unified, well-oiled machine. Changes to one layer can then be made independently of the other layers without fear of adversely affecting their functionality (upgrades on the fly, if you will).

Transmission and Routing Separation

Separating the transmission mechanisms from the traffic-routing mechanisms, or as we now refer to them, the Transport Layer from the Network Layer, has proven to be a very important design and development decision. The Network Layer is responsible for getting from point A to point B, while the Transport Layer really couldn’t care less how this happens, since it is essentially concerned with end-point-to-end-point data transfer.

From the Transport Layer’s perspective, as long as end-point A can talk to end-point B, the path(s)/route(s) taken in so doing are irrelevant. After all, it is the Network Layer’s job to manage routing and addressing issues at the network level. If, for whatever reason, some intermediate point becomes unavailable, the Network Layer simply finds an alternate path (route) and the conversation proceeds.

Reliable Delivery

End-point-to-end-point delivery, and how it is handled, is where the difference lies between a reliable, connection-oriented Transport Layer protocol such as the Transmission Control Protocol (TCP) and an unreliable Transport Layer protocol like the User Datagram Protocol (UDP). The choice is that either the message MUST get through intact (TCP), or “near enough is good enough” (UDP).
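This choice is visible directly in the sockets API. As a minimal sketch using Python's standard socket module (the constants shown are the real BSD-socket ones; the discard-port destination is just an illustrative assumption), selecting the socket type is what selects TCP or UDP:

```python
import socket

# Connection-oriented: SOCK_STREAM selects TCP. The kernel handles the
# handshake, acknowledgements, retransmission and in-order delivery.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Connectionless: SOCK_DGRAM selects UDP. Each datagram is fired off
# independently with no delivery guarantee ("near enough is good enough").
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# UDP can transmit immediately, with no connection established; TCP would
# first have to connect() (the three-way handshake) before any data flows.
udp_sock.sendto(b"best-effort datagram", ("127.0.0.1", 9))  # 9 = discard port

tcp_sock.close()
udp_sock.close()
```

Everything above the Transport Layer then inherits that choice: an application written over SOCK_STREAM never sees loss or reordering, while one written over SOCK_DGRAM must tolerate both.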

From a military standpoint, the “message MUST get through reliably” option weighs the heaviest. Failure to deliver the message could have catastrophic results, something explored by Stanley Kubrick in the movie “Dr. Strangelove”. To illustrate this point, consider the importance of being able to abort a mission after it has been launched.

Recalling the nuclear-armed bombers of the Strategic Air Command (SAC), or disarming a missile’s nuclear warhead after launch, were during the “Cold War”, and still are today, critical elements of a “nuclear deterrent”. We must be able to pull back from the brink of Armageddon. Coordinating troop movements is another complex task that is inherently dependent upon effective, reliable communications.

Today we find that the Internet makes it possible for the individual packets comprising the same conversation to travel different paths (routes). The end result is still the same: transfer of information from point A to point B; the message gets through.

Logical End-To-End Connectivity

Given that there is a viable and accessible transmission medium in place, the first thing a connection-oriented protocol must do is gain access to that medium. Once this is done and confirmed, the connection-oriented protocol proceeds to establish a logical end-to-end connection between sender and recipient. Remember that the Network and Data Link Layers are responsible for establishing network-to-network (routing) and end-node connectivity respectively.

The logical end-to-end connection established between the sending (local) host and the destination (remote) host usually takes the form of either a Virtual Circuit (VC) or a Permanent Virtual Circuit (PVC). It is the responsibility of the connection-oriented protocol to:

  • Establish the virtual end-to-end connection
  • Provide connection identification of the virtual end-to-end connection
  • Provide ongoing maintenance of the virtual end-to-end connection
  • Provide ongoing support services to the virtual end-to-end connection such as control information
  • Provide ongoing virtual end-to-end connection services such as data transfer and flow control, together with support services such as error detection, sequencing, segmentation, reassembly, acknowledgements and synchronization
  • Implement graceful teardown of the virtual end-to-end connection when required
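The full lifecycle in the list above can be sketched over loopback with Python's standard socket and threading modules. This is an illustrative assumption, not the protocol's internals: the TCP stack itself performs the establishment, sequencing, acknowledgement and teardown; the application merely drives them via connect(), sendall(), shutdown() and close():

```python
import socket
import threading

def server(ports, ready, results):
    # Listen on an ephemeral loopback port and serve one connection.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
    srv.listen(1)
    ports.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()              # connection establishment completes here
    chunks = []
    while True:
        data = conn.recv(1024)          # ongoing data transfer
        if not data:                    # b"" signals the peer's FIN: graceful teardown
            break
        chunks.append(data)
    results.append(b"".join(chunks))
    conn.close()
    srv.close()

ports, ready, results = [], threading.Event(), []
t = threading.Thread(target=server, args=(ports, ready, results))
t.start()
ready.wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", ports[0]))    # establish: the three-way handshake
cli.sendall(b"segment one ")            # transfer: TCP sequences and acknowledges
cli.sendall(b"segment two")
cli.shutdown(socket.SHUT_WR)            # graceful teardown: send FIN, stop writing
cli.close()
t.join()
```

Note that the receiver reassembles the two sends into one intact byte stream (`results[0]` holds `b"segment one segment two"`): segmentation and reassembly are the connection's job, not the application's.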
