IP provides an unreliable service (i.e., best-effort delivery). The network makes no guarantees about a packet, so none, some, or all of the following may happen:
- data may arrive corrupted
- packets may arrive out of order (packet A may be sent before packet B, yet B arrives before A)
- a packet may arrive more than once (duplication)
- a packet may be lost, dropped, or discarded
In terms of reliability, the only thing IP does is ensure that the packet's header is error-free, by means of a checksum. A side effect is that packets with bad headers are discarded on the spot, with no required notification to either end (though an ICMP message may be sent).
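To make the checksum concrete, here is a minimal Python sketch of the Internet checksum used over the IPv4 header: a one's-complement sum of the header's 16-bit words, with the complement stored in the checksum field. The sample header bytes below are purely illustrative.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum of the header's 16-bit words (RFC 791 style)."""
    if len(header) % 2:
        header += b"\x00"  # pad odd-length input
    total = 0
    for (word,) in struct.iter_unpack("!H", header):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

# An illustrative 20-byte IPv4 header with its checksum field (bytes 10-11)
# zeroed out, as a sender would before computing the checksum.
header = bytes.fromhex("4500003c1c4640004006" "0000" "ac100a63ac100a0c")
csum = ipv4_checksum(header)  # -> 0xB1E6 for these bytes

# A receiver verifies by checksumming the header with the field filled in;
# by construction of one's-complement arithmetic the result is zero.
filled = header[:10] + struct.pack("!H", csum) + header[12:]
assert ipv4_checksum(filled) == 0
```

Note that only the header is covered: corruption in the payload passes through IP undetected, which is exactly why transport-layer checksums exist.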
Any of these reliability issues must be addressed by an upper-layer protocol. For example, to ensure in-order delivery, the upper layer may have to cache data until it can be passed up in order.
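The caching idea can be sketched in a few lines of Python. This is not any real protocol's implementation, just an illustration of holding out-of-order segments (keyed by a hypothetical sequence number) until the gap before them is filled:

```python
# A minimal sketch of an upper layer buffering out-of-order segments
# until they can be delivered in sequence; duplicates are dropped.
class ReorderBuffer:
    def __init__(self):
        self.next_seq = 0   # next sequence number we can deliver
        self.pending = {}   # seq -> payload, held until its turn

    def receive(self, seq, payload):
        """Accept a segment; return every payload now deliverable in order."""
        if seq < self.next_seq or seq in self.pending:
            return []       # duplicate: already delivered or already cached
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

buf = ReorderBuffer()
buf.receive(1, "B")   # arrives early: cached, nothing delivered yet
buf.receive(0, "A")   # fills the gap: delivers ["A", "B"]
```

This is essentially what a TCP receiver does with its receive window, and it also shows why out-of-order arrival costs memory at the endpoints rather than in the routers.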
If the upper-layer protocol does not police its own segment size by first checking the Layer 2 Maximum Transmission Unit (MTU) and hands the IP layer too much data, IP is forced to fragment the original datagram into smaller fragments for transmission. IP does provide re-ordering of fragments that arrive out of order, using the fragmentation flags and the fragment offset field. Transmission Control Protocol (TCP) is a good example of a protocol that adjusts its segment size to be smaller than the MTU. User Datagram Protocol (UDP) and Internet Control Message Protocol (ICMP) disregard MTU size, thereby forcing IP to fragment oversized datagrams.
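A simplified sketch of the fragmentation arithmetic may help. Each fragment's size must be a multiple of 8 bytes because the 13-bit fragment offset field counts 8-byte units; the "more fragments" (MF) flag is set on every fragment except the last. Real IP fragmentation also copies and adjusts header fields, which is omitted here:

```python
# Split a payload into (more_fragments_flag, offset_in_8_byte_units, data)
# tuples that fit within the given MTU after a fixed-size header.
def fragment(payload: bytes, mtu: int, header_len: int = 20):
    max_data = (mtu - header_len) // 8 * 8  # round down to 8-byte units
    frags = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = (offset + len(chunk)) < len(payload)  # MF flag
        frags.append((more, offset // 8, chunk))
        offset += len(chunk)
    return frags

# e.g. a 4000-byte payload over a 1500-byte MTU yields three fragments
# of 1480, 1480, and 1040 bytes at offsets 0, 185, and 370 (8-byte units),
# with MF flags True, True, False.
frags = fragment(b"x" * 4000, 1500)
```

The offsets and flags are exactly what the destination host uses to reassemble the original datagram, regardless of the order in which fragments arrive.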
The primary reason for this lack of reliability is to reduce the complexity of routers. While this gives routers carte blanche to do as they please with packets, anything less than best effort yields a poorer experience for the user. So, even though no guarantees are made, the better the effort made by the network, the better the experience for the user. Most protocols are built around the idea that error checking is best done at each end of the communication line; see the End-to-end principle.