With each passing year, even ordinary internet users encounter the concept of the Internet Protocol (IP) and internet protocols in general more and more often. Let’s explore the fundamental principles and definitions associated with them.
Firstly, protocols as such existed long before the World Wide Web, or even electronics. A protocol can be defined as any agreement on the conditions that all participants in a joint process must adhere to. A simple example of a protocol is the seemingly primitive task of taking a bus. To reach the desired stop, a passenger must wait for a bus on the correct route, take a seat, purchase a ticket, and so on. The other participants in this process, such as the conductor and the driver, also have specific tasks. The set of all actions by the passenger, the driver, and the conductor necessary for the passenger to travel from point A to point B is what we call a protocol.
In the case of the internet, protocols are the standards that define how data is represented, the procedures for transmitting and interpreting messages, and the rules by which the diverse equipment on the network operates in a coordinated way.
Internet protocols can be viewed as a layered hierarchy whose foundation is IP (Internet Protocol) and, directly above it, TCP (Transmission Control Protocol). IP defines how data packets are delivered to recipients, while TCP provides mechanisms for acknowledging and controlling what the recipient has actually received. Higher-level protocols implement the various internet services on top of TCP and IP.
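As a rough illustration of this layering, the toy model below (a sketch in Python, not the real binary formats) shows how data from a higher-level protocol is wrapped by TCP and then by IP, each layer adding its own header:

```python
# A toy model of protocol layering: each layer wraps the data of the layer
# above it with its own header. Real headers are binary structures; the
# strings here only illustrate the nesting.
application_data = "GET /index.html"                  # higher-level protocol (e.g. HTTP)
tcp_segment      = "[TCP header]" + application_data  # transport layer (TCP)
ip_packet        = "[IP header]"  + tcp_segment       # network layer (IP)
print(ip_packet)  # [IP header][TCP header]GET /index.html
```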
IP’s underlying method of data transmission is packet switching: the information being transmitted is broken into blocks of a certain length, formatted, and then sent to the recipient. Each data block, which in the current fourth version of the protocol (IPv4) may be up to 65,535 bytes (64 KB) long, carries service information: the version of the protocol used, an identifier (used by the recipient to “assemble” the data from the received packets), the allowed number of packet hops through routers, the sender’s and recipient’s addresses, a header checksum, and other fields. The structure resulting from this formatting (encapsulation) is called an IP packet, and it is what is actually transmitted across the network.
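A rough sketch of this service information is given below: it packs a few of the IPv4 header fields into their 20-byte binary layout using Python's struct module. The field values (addresses, identifier, payload) are arbitrary examples, and several details, such as options, fragmentation flags, and the actual checksum computation, are left out:

```python
import socket
import struct

payload = b"example payload"              # the data block being sent

version_ihl    = (4 << 4) | 5             # IPv4, header length = 5 * 32-bit words
tos            = 0                        # type of service
total_len      = 20 + len(payload)        # header + data, at most 65,535 bytes
identification = 0x1C46                   # used by the recipient to reassemble data
flags_frag     = 0                        # fragmentation flags and offset (omitted here)
ttl            = 64                       # allowed number of hops through routers
proto          = socket.IPPROTO_TCP       # which protocol the payload belongs to
checksum       = 0                        # a real stack computes this over the header
src = socket.inet_aton("192.0.2.1")       # sender address (example)
dst = socket.inet_aton("198.51.100.7")    # recipient address (example)

header = struct.pack("!BBHHHBBH4s4s",
                     version_ihl, tos, total_len, identification,
                     flags_frag, ttl, proto, checksum, src, dst)

ip_packet = header + payload              # encapsulation: header followed by data
print(len(header), len(ip_packet))        # 20 35
```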
However, a notable feature of the IP protocol is that it does not guarantee data delivery between nodes. There is no centralized control over packet transmission or over the state of the network, which simplifies the network’s structure. As a consequence, various errors are possible: packets may be damaged, arrive in the wrong order, be duplicated, or be lost entirely. Packets received this way must therefore be checked, arranged in the correct order, and stripped of their service information before the data is passed on, for example, to an application. These functions are handled by higher-level protocols.
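The sketch below, a simplified illustration rather than any real protocol’s logic, shows the kind of work this leaves to a higher level: packets tagged with sequence numbers arrive out of order and with a duplicate, and the receiver discards the duplicate, restores the order, and extracts the data:

```python
# Packets as (sequence_number, data) pairs, arriving out of order and with
# one duplicate, as can happen on a plain IP network.
arrived = [(2, b"lo, "), (1, b"Hel"), (3, b"world"), (2, b"lo, ")]

buffer = {}
for seq, data in arrived:
    if seq in buffer:          # duplicate packet: ignore it
        continue
    buffer[seq] = data

# Restore the original order by sequence number and extract the payload.
message = b"".join(buffer[seq] for seq in sorted(buffer))
print(message.decode())        # Hello, world

# A real protocol would also detect gaps (missing sequence numbers) and
# either request retransmission or report the loss.
```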
To ensure reliable delivery, TCP (the Transmission Control Protocol) is used to manage the transmission. With TCP, a connection between the sender and the receiver is established before any data is transmitted. If part of the data is lost, the missing packets are requested again, and duplicates can be detected as well. TCP therefore guarantees that data is received without loss and in the correct sequence.
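The minimal sketch below demonstrates these guarantees with a local sender and receiver (the port number is an arbitrary choice for the demo): the connection is established first, the sender transmits the data in several pieces, and the receiver gets them back complete and in the original order:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007            # arbitrary local port for this demo

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)

def serve():
    conn, _ = srv.accept()                 # wait for the client to connect
    with conn:
        for i in range(5):
            # Send the data in several pieces; TCP is free to split or merge
            # them into segments as it transmits them.
            conn.sendall(f"chunk-{i};".encode())

threading.Thread(target=serve, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))              # the connection is established before data flows
    received = b""
    while True:
        part = cli.recv(1024)
        if not part:                       # empty read: the sender closed the stream
            break
        received += part

srv.close()
# TCP guarantees the bytes arrive complete and in their original order.
print(received.decode())                   # chunk-0;chunk-1;chunk-2;chunk-3;chunk-4;
```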
Consider, for instance, a typical transfer of an HTML file from a web server to a remote computer. Following the rules of the TCP protocol, the file being sent is divided into segments so that it can be transmitted efficiently across the network. A TCP segment is a block of transmitted data plus service information, formed on a principle similar to an IP packet. The server’s TCP segments are then encapsulated into IP packets by the IP-level software, which adds a header containing, among other things, the recipient’s IP address. Even though these packets are all intended for a single recipient, they may travel through the network along different routes. The packets reach the recipient’s computer in whatever order the network delivers them; there the client’s TCP-level software “assembles” the received segments, puts them in the correct order, and passes the resulting file to the appropriate application.
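As a sketch of the client side of such a transfer (the host name here is just an example, and a real client would use an HTTP library and handle errors), the snippet below requests a page over a TCP socket; the application simply reads bytes from the stream, while segmentation into TCP segments and IP packets, and their reassembly, happen below it in the operating system:

```python
import socket

HOST = "example.com"                       # example web server

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((HOST, 80))
    # An old-style HTTP/1.0 request keeps the example short; the reply is
    # the HTML file, delivered as a single ordered byte stream.
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    html = b""
    while True:
        part = s.recv(4096)                # how the data was split into segments is invisible here
        if not part:
            break
        html += part

print(html[:80])                           # beginning of the response: headers, then the HTML
```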
In general, TCP, being a reliable protocol, is successfully used for a wide range of tasks. Its main disadvantage is the need to re-request lost or damaged packets, which is poorly suited to real-time transmission, where most of the data must arrive within a short period of time and receiving every packet, in the correct order, is a secondary concern. This limitation restricts the use of TCP in internet radio, real-time multiplayer games, and internet television.
Nevertheless, despite some drawbacks and a relatively complex implementation, the IP and TCP protocols are the foundation of data transmission on the internet, and the principles embedded in them will remain fundamental for a long time to come. Undoubtedly, much will change as the protocols develop, such as the sizes of packets and segments and the way they are organized, but the basic principles of packet-based data transmission and routing will retain their relevance.