IMTS PGDCA (Computer Communication & Networks)

Institute of Management & Technical Studies
COMPUTER COMMUNICATION & NETWORKS

PGDCA

www.imtsinstitute.com


IMTS (ISO 9001-2008 Internationally Certified)

COMPUTER COMMUNICATION & NETWORKS

FOR MORE DETAILS VISIT US ON WWW.IMTSINSTITUTE.COM OR CALL ON +91-9999554621


COMPUTER COMMUNICATION AND NETWORKS

UNIT-01 (01-29): Communication model - Data communications networking - Data transmission concepts and terminology
UNIT-02 (30-56): Transmission media - Data encoding - Data link control
UNIT-03 (57-80): Protocol architecture - Protocols - OSI - TCP/IP
UNIT-04 (81-111): LAN architecture - Topologies - MAC - Ethernet - Fast Ethernet - Token ring - FDDI
UNIT-05 (112-133): Wireless LANs - Bridges
UNIT-06 (134-164): Network layer - Switching concepts - Circuit switching networks - Packet switching - Routing - Congestion control - X.25 - Internetworking concepts and X.25 architectural models - IP - Unreliable connectionless delivery
UNIT-07 (165-190): Datagrams - Routing IP datagrams - ICMP
UNIT-08 (191-219): Transport layer - Reliable delivery service - Congestion control - Connection establishment - Flow control
UNIT-09 (220-244): Transmission Control Protocol - User Datagram Protocol



UNIT-10 (245-270): Applications layer - Sessions and presentation aspects - DNS
UNIT-11 (271-289): Telnet - Rlogin - FTP - SMTP
UNIT-12 (290-309): WWW Security - SNMP
UNIT QUESTIONS (310-312)


UNIT – 1 INTRODUCTION TO DATA COMMUNICATION

Structure
1.0 Introduction
1.1 Objectives
1.2 Definition
1.3 A Data Communication Model
1.3.1 The Communication Channel
1.3.2 Channel Characteristics
1.3.3 Communication Protocols
1.4 Data Communication and Networking
1.4.1 Performance
1.4.2 Consistency
1.4.3 Reliability
1.4.4 Recovery
1.4.5 Security
1.4.6 Types of communication networks
1.4.7 Types of communication interaction
1.4.8 Communication modes
1.4.9 Performance metrics
1.5 Data Transmission Concepts and Terminology
1.5.1 Transmission terminology
1.5.2 Communication channels
1.5.3 Data compression
1.5.4 Data Encryption
1.5.5 Data Transmission
1.5.6 Transmission Techniques
1.6 Summary
1.7 Keywords
1.8 Exercise and Questions
1.9 Check Your Progress
1.10 Further Reading


1.0 Introduction

This lesson introduces communication between computers. It explains the communication model, the communication channel and its characteristics. We then look at the major criteria that a data communication network must meet, the types of communication networks and the types of communication interaction. This is followed by data transmission concepts and terminology, and a discussion of the different types of transmission media.

1.1 Objectives

After reading this lesson you should be able to:

- Understand the basics of communication
- Discuss the communication model
- Explain data communication and networking
- Describe data transmission concepts and terminology

1.2 Definition

The distance over which data moves within a computer may vary from a few thousandths of an inch, as is the case within a single IC chip, to as much as several feet along the backplane of the main circuit board. Over such small distances, digital data may be transmitted as direct, two-level electrical signals over simple copper conductors. Except for the fastest computers, circuit designers are not very concerned about the shape of the conductor or the analog characteristics of signal transmission.

1.3 A Communication Model

Communication is the conveyance of a message from one entity, called the source or transmitter, to another, called the destination or receiver, via a channel of some sort. A simple example of such a communication system is conversation; people commonly exchange verbal messages, with the channel consisting of waves of compressed air molecules at frequencies which are audible to the human ear. This is depicted in the figure.

The conveyance of a message could be followed by a reciprocal response message from the original destination (now a source) to the original source (now a destination) to complete one cycle in a dialogue between corresponding entities. Depending on the application or need for the information exchange, either atomic one-way transactions or a two-way dialogue could be applied.
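As a toy illustration of this model, the following sketch wires a source, a channel and a destination together and completes one dialogue cycle with a reciprocal response. All names are invented for illustration; a real channel is a physical medium, not a Python object.

```python
# Sketch of the source -> channel -> destination model. The channel
# here is perfect: it delivers every message unchanged.

class Channel:
    def convey(self, message):
        return message

def dialogue_cycle(channel, request):
    # The source transmits a message over the channel...
    received = channel.convey(request)
    # ...and the destination, now acting as a source, replies,
    # completing one cycle of the dialogue.
    response = channel.convey(f"ack: {received}")
    return received, response

print(dialogue_cycle(Channel(), "hello"))  # ('hello', 'ack: hello')
```

A noisy channel would be modeled by letting `convey` occasionally corrupt or drop messages; compensating for that is precisely the job of communication protocols.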


Variations on a Theme

The conveyance of a message could be direct between the corresponding entities or it could be indirect, with one or more intermediaries participating in the message transport. The presence or absence of an intermediary depends on the definition of the source and destination entities and the channel used to communicate; data communication between entities at one level might be considered direct and at another level indirect. Whether the communicating entities are regarded as communicating directly or indirectly simply depends on the relevance of any intermediaries to the discussion. Consider, for example, two people who speak different languages conversing through a translator: the translator is important but should not really be a factor in the communication between the source and destination. Perhaps in a twist on the old parents' saying, a good translator is heard but not seen.

Communication can be from a source to a single destination, known as point-to-point or unicast, or to multiple destinations, known as point-to-multipoint or multicast. A special case of multicast is the conveyance of a message from a source to every possible destination, which is referred to as broadcast; the broadcast can be local or global in scope. The primary difference between multicast and broadcast is that multicast communication is targeted at specific destinations, regardless of location, while broadcast communication is targeted at all possible destinations within the range (location) of the source. Multicast and broadcast communications are typically one-way "best efforts" modes of communication which are unacknowledged.

Communication can also be described in terms of the relative timeframes of the corresponding entities. Depending on the definitions of source, destination and channel, the communication could be asynchronous, synchronous or isochronous. In asynchronous communication, there is a minimal assumed timing relationship between the source and destination. In such a typically byte-oriented system, each character or byte is transmitted and received individually as a message. Asynchronous protocols were predominant in the early days of data communications because of limited processing capability and low-quality transmission infrastructure. In synchronous communication, the relative bit timing of the source and destination is similar, allowing transmission and reception of relatively large groups of bits in a single message; the source and destination must be "in sync." This bit-oriented mode of communication can be much more efficient than asynchronous communication, but places requirements on the source (processing), channel (quality) and destination (more processing). Synchronous data communications are predominant today.
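The character-at-a-time nature of asynchronous communication can be illustrated by framing a single byte with start and stop bits, roughly as a classic asynchronous (UART-style) line would. This is a simplified sketch, not a complete line discipline: no parity bit and no baud-rate timing are modeled, and the function names are invented for illustration.

```python
# Asynchronous, byte-oriented framing: each character travels as its
# own little message, delimited by a start bit (0) and a stop bit (1).

def frame_byte(byte):
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]                     # start + data + stop

def unframe(bits):
    assert bits[0] == 0 and bits[-1] == 1, "framing error"
    value = 0
    for i, bit in enumerate(bits[1:9]):
        value |= bit << i
    return value

frame = frame_byte(ord("A"))
print(frame)                # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(chr(unframe(frame)))  # A
```

Because every byte carries two extra bits of framing, at least 20% of the line capacity is overhead, one reason bit-oriented synchronous schemes can be more efficient.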


1.3.1 The Communications Channel

A communication channel can be simplex, in which only one party can transmit; full-duplex, in which both correspondents can transmit and receive simultaneously; or half-duplex, in which the correspondents alternate between transmitting and receiving states (such as politely conversing adults). Even though the channel might be capable of supporting full-duplex communication, if the corresponding entities are not capable of transmitting and receiving simultaneously, the communications system will be half-duplex.

Communication between two entities can be considered either in-band or out-of-band, depending on context. In-band communication occurs via the primary channel between the communicating entities. Out-of-band communication occurs via an alternative channel, which is not considered to be the primary channel between the entities. Which channel is primary and which is an alternate depends on context and the existence of an alternative channel. In the case of a conversation between two people, the primary channel could consist of verbal communication while the alternate channel consists of visual body language.

1.3.2 Channel Characteristics

A communications channel may be described in terms of its characteristic properties. These channel characteristics include bandwidth (how much information can be conveyed across the channel in a unit of time, commonly expressed in bits per second or bps), quality (how reliably the information can be correctly conveyed across the channel, commonly expressed in terms of bit error rate or BER) and whether the channel is dedicated (to a single source) or shared (by multiple sources).

A higher bandwidth is usually a good thing in a channel because it allows more information to be conveyed per unit of time. High bandwidths mean that more users can share the channel, depending on their means of accessing it. High bandwidths also allow more demanding applications (such as graphics) to be supported for each user of the channel. The capability of a channel to be shared depends, of course, on the medium used. A shared channel could be likened to a school classroom, where multiple students might attempt to simultaneously catch the teacher's attention by raising their hand; the teacher must then arbitrate between these conflicting requests, allowing only one student to speak at a time.

Reliability of communication is obviously important. A low-quality channel is prone to distorting the messages it conveys; a high-quality channel preserves the integrity of the messages it conveys. Depending on the quality of the channel in use between communicating entities, the probability of the destination correctly receiving the message from the source might be either very high or very low. If a message is received incorrectly, it needs to be retransmitted. If the probability of receiving a message correctly across a channel is too low, the system (source, channel, message, destination) must include mechanisms which overcome the errors introduced by the low-quality channel; otherwise no useful communication is possible over that channel. These mechanisms are embodied in the communication protocols employed by the corresponding entities.

1.3.3 Communication Protocols

Protocols specify the rules for communicating over a channel, much as one person politely waits for another to finish before speaking. Protocols, coupled with channel characteristics, determine the net efficiency of communications over the channel. Protocols can improve the effective channel quality. An example is an ARQ (automatic repeat request) protocol, in which a source automatically retransmits a message if it fails to receive an acknowledgment from the destination within some predefined time period following the original transmission of the


message. The destination knows whether to acknowledge the message based on some error detection capability, which is typically based on redundant information added to the message, such as a parity code or cyclic redundancy check (CRC). The following Figure depicts a message ("Pick-up at 1:30 PM.") being transmitted from a source to a destination via "packets" which contain four characters at a time. Additional redundant information in each packet allows the destination to know whether or not it has received that packet correctly. Once the destination is satisfied that it has correctly received a packet, it sends an acknowledgment ("ack") message to the original packet source. When the source receives the acknowledgment, it may transmit the next packet in sequence. In an ARQ arrangement, failure to receive an acknowledgment within a specific time period causes the source to retransmit the packet which was not acknowledged. Only a single transmitted packet remains outstanding (i.e., unacknowledged) at a time.
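The stop-and-wait behaviour just described can be sketched in a few lines. In this illustrative simulation all names are invented, and a simple byte-sum checksum stands in for a real parity code or CRC; channel corruption is simulated with a seeded random generator rather than a real link.

```python
import random

# Stop-and-wait ARQ: the message travels in 4-character packets, each
# protected by redundant check information. The source keeps
# retransmitting a packet until the destination's check succeeds and
# an acknowledgment comes back; only one packet is outstanding at a time.

def checksum(payload):
    return sum(payload.encode()) % 256

def send_stop_and_wait(message, corrupt_prob=0.3, seed=42):
    rng = random.Random(seed)
    received = []
    for i in range(0, len(message), 4):
        packet = message[i:i + 4]
        while True:  # only one packet outstanding at a time
            payload, check = packet, checksum(packet)
            if rng.random() < corrupt_prob:
                payload = "?" * len(payload)   # corrupted in transit
            if checksum(payload) == check:
                received.append(payload)       # destination sends "ack"
                break                          # source moves to the next packet
            # no ack arrives -> the source times out and retransmits
    return "".join(received)

print(send_stop_and_wait("Pick-up at 1:30 PM."))  # Pick-up at 1:30 PM.
```

Despite the corrupted transmissions, every packet eventually gets through, which is exactly the effective quality improvement ARQ buys at the cost of retransmission delay.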

Error detection and recovery mechanisms can be much more sophisticated than this simple ARQ scheme. One way to enhance the effective channel performance is to allow multiple packets to be outstanding at a time. Individual packets are assigned a sequence number reflecting their order in a sequence of packets flowing from a specific source to a specific destination, which allows them to be separately acknowledged or retransmitted in the event of a failure. This type of windowing scheme is commonly used when significant delays are involved in the end-to-end data transmission or when the channel has a relatively high quality. When an individual packet is not received correctly, the destination could request retransmission of either the individual bad packet, called selective packet rejection, or that packet plus all succeeding packets. Which of these modes is employed depends on the nature of the communication and the medium used. This Figure depicts such a windowing scheme applied to the packet-based transmission example from above, but with each packet individually numbered in sequence. In


this example, up to three packets may be outstanding at a time and the destination must notify the host of the next expected packet number, implicitly acknowledging all preceding packets. Unless a time-out occurs at the source, it will continue to transmit packets until the window-size of three outstanding unacknowledged packets is reached. The destination periodically acknowledges all received packets.
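The window-of-three behaviour can be sketched as follows. For brevity the channel is assumed loss-free, so the time-out and retransmission paths are omitted; all names are invented for illustration.

```python
WINDOW = 3  # at most three packets outstanding (unacknowledged)

def sliding_window_send(packets):
    base = 0      # sequence number of the oldest unacknowledged packet
    next_seq = 0  # sequence number of the next packet to transmit
    log = []
    while base < len(packets):
        # The source transmits until the window of outstanding packets
        # is full (or the message is exhausted).
        while next_seq < len(packets) and next_seq - base < WINDOW:
            log.append(f"send #{next_seq}: {packets[next_seq]!r}")
            next_seq += 1
        # The destination acknowledges cumulatively by announcing the
        # next expected sequence number, implicitly acknowledging all
        # preceding packets and freeing the whole window.
        log.append(f"ack: next expected #{next_seq}")
        base = next_seq
    return log

for line in sliding_window_send(["Pick", "-up ", "at 1", ":30 ", "PM."]):
    print(line)
```

In a real protocol a timer would run per window, and a missing acknowledgment would trigger retransmission of either the one bad packet (selective reject) or that packet plus all its successors.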

If sufficient redundant information is added to a message, it could enable the receiver of the message not only to detect an error but also to correct it. Although this requires some additional processing by the destination, it could obviate the need for retransmission. This error correction capability is generally desirable in channels which are expensive, prone to distortion, or suffer from long latency in the dialogue cycle.

Connection-Oriented and Connectionless Protocols

Protocols can be either connection-oriented or connectionless in nature. In connection-oriented protocols, corresponding entities maintain state information about the dialogue they are engaged in. This connection state information supports error, sequence and flow control between the corresponding entities. The windowing scheme presented earlier is an example of a connection-oriented protocol. Error control refers to a combination of error detection (and correction) and acknowledgment sufficient to compensate for any unreliability inherent to the channel. Sequence control refers to the ability of each entity to reconstruct a received series of messages in the proper order in which they were intended to be received; this is essential to being able to transmit large files across dynamically-routed mesh networks. Flow control refers to the ability of both parties in a dialogue to avoid overrunning their peer with too many messages. Connection-oriented protocols


operate in three phases. The first phase is the connection setup phase, during which the corresponding entities establish the connection and negotiate the parameters defining the connection. The second phase is the data transfer phase, during which the corresponding entities exchange messages under the auspices of the connection. Finally, the connection release phase is when the correspondents "tear down" the connection because it is no longer needed.

Networks may be divided into different types and categories according to four different criteria:

1. Geographic spread of nodes and hosts. When the physical distance between the hosts is within a few kilometers, the network is said to be a Local Area Network (LAN). LANs are typically used to connect a set of hosts within the same building or a set of closely-located buildings. For larger distances, the network is said to be a Metropolitan Area Network (MAN) or a Wide Area Network (WAN). MANs cover distances of up to a few hundred kilometers and are used for interconnecting hosts spread across a city. WANs are used to connect hosts spread across a country, a continent, or the globe. LANs, MANs and WANs usually coexist: closely-located hosts are connected by LANs, which can access hosts in other remote LANs via MANs and WANs.

2. Access restrictions. Most networks are for the private use of the organizations to which they belong; these are called private networks. Networks maintained by banks, insurance companies, airlines, hospitals and most other businesses are of this nature. Public networks, on the other hand, are generally accessible to the average user, but may require registration and payment of connection fees. The Internet is the most widely known example of a public network. Technically, both private and public networks may be of LAN, MAN or WAN type, although public networks, by their size and nature, tend to be WANs.

3. Communication model employed by the nodes. The communication between the nodes is based either on a point-to-point model or on a broadcast model. In the point-to-point model, a message follows a specific route across the network in order to get from one node to another. In the broadcast model, on the other hand, all nodes share the same communication medium and, as a result, a message transmitted by any node can be received by all other nodes. A part of the message (an address) indicates for which node the message is intended. All nodes look at this address and ignore the message if it does not match their own address.

4. Switching model employed by the nodes. In the point-to-point model, nodes employ either circuit switching or packet switching. Suppose that a host A wishes to communicate with another host B. In circuit switching, a dedicated communication path is allocated between A and B, via a set of intermediate nodes. The data is sent along the path as a continuous stream of bits. This path is maintained for the duration of communication between A and B, and is then released. In packet switching, data is divided into packets (chunks of specific length and characteristics) which are sent from A to B via intermediate nodes. Each intermediate node temporarily stores the packet and waits for the receiving node to become available to receive it. Because data is sent in packets, it is not necessary to reserve a path across the network for the duration of communication between A and B. Different packets can be routed differently in order to spread the load between the nodes and improve performance. However, this requires packets to carry additional addressing information.

1.4 Data Communication and Networking

The major criteria that a data communication network must meet are:
i. Performance
ii. Consistency
iii. Reliability
iv. Recovery
v. Security
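The trade-off between circuit switching and store-and-forward packet switching (criterion 4 above) can be made concrete with a back-of-the-envelope timing model. The numbers are purely illustrative, and propagation delay and packet-header overhead are ignored to keep the sketch simple.

```python
def circuit_time(setup_s, message_bits, rate_bps):
    # After the dedicated path is set up, the data flows end-to-end
    # as one continuous stream of bits.
    return setup_s + message_bits / rate_bps

def packet_time(message_bits, packet_bits, rate_bps, hops):
    # Each intermediate node stores a whole packet before forwarding
    # it, but successive packets are pipelined across the hops.
    n_packets = message_bits // packet_bits
    return (n_packets + hops - 1) * (packet_bits / rate_bps)

# A 1 Mbit message over 1 Mbps links and 4 hops, with 1 s circuit
# set-up time and 10 kbit packets:
print(circuit_time(1.0, 1_000_000, 1_000_000))                 # 2.0
print(round(packet_time(1_000_000, 10_000, 1_000_000, 4), 2))  # 1.03
```

For a short, bursty transfer the circuit set-up time dominates, which is one reason packet switching suits data traffic while circuit switching suits long-lived constant streams such as telephony.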

1.4.1 Performance

Performance is defined as the rate of transferring error-free data. It is measured by the response time: the elapsed time between the end of an inquiry and the beginning of a response (for example, between requesting a file transfer and the start of the file transfer). Factors that affect response time are:
a. Number of users: the more users on a network, the slower the network will run.
b. Transmission speed: the speed at which data is transmitted, measured in bits per second (bps).
c. Media type: the type of physical connection used to connect nodes together.
d. Hardware type: slow computers such as XTs or fast ones such as Pentiums.
e. Software: how well the network operating system (NOS) is written.

1.4.2 Consistency

Consistency is the predictability of response time and accuracy of data.
a. Users prefer consistent response times; they develop a feel for normal operating conditions. For example, if the "normal" response time for printing to a network printer is 3 seconds and a response time of over 30 seconds occurs, we know that there is a problem in the system.
b. Accuracy of data determines whether the network is reliable. If a system loses data, the users will not have confidence in the information and will often not use the system.

1.4.3 Reliability

Reliability is the measure of how often a network is usable. MTBF (Mean Time Between Failures) is a measure of the average time a component is expected to operate between failures, and is normally provided by the manufacturer. A network failure can be in the hardware, the data-carrying medium or the network operating system.

1.4.4 Recovery

Recovery is the network's ability to return to a prescribed level of operation after a network failure, a level at which the amount of lost data is nonexistent or at a minimum. Recovery is based on having back-up files.

1.4.5 Security

Security is the protection of hardware, software and data from unauthorized access. Restricted physical access to computers, password protection, limiting user privileges and data encryption are common security methods. Anti-virus monitoring programs to defend against computer viruses are another security measure.

Applications of a data communication network

The following lists general applications of a data communication network:

i. Electronic mail (e-mail) replaces snail mail. E-mail is the forwarding of electronic files to an electronic post office for the recipient to pick up.

ii. Scheduling programs allow people across the network to schedule appointments directly by calling up a fellow worker's schedule and selecting a time.
iii. Videotext is the capability of two-way transmission of picture and sound: games like Doom and Hearts, distance-education lectures, etc.
iv. Groupware is the latest network application; it allows user groups to share documents, schedules, databases, etc. (e.g. Lotus Notes).
v. Teleconferencing allows people in different regions to "attend" meetings using telephone lines.
vi. Telecommuting allows employees to perform office work at home by "remote access" to the network.
vii. Automated banking machines allow banking transactions to be performed everywhere: at grocery stores, drive-in machines, etc.
viii. Information service providers provide connections to the Internet and other information services. Examples are CompuServe, Genie, Prodigy, America On-Line (AOL), etc.
ix. Electronic bulletin boards (BBS - bulletin board services) are dial-up connections (using a modem and phone lines) that offer a range of services for a fee.
x. Value-added networks are common carriers such as AGT, Bell Canada, etc. (private or public companies) who provide additional leased-line connections to their customers. These can be Frame Relay, ATM (Asynchronous Transfer Mode), X.25, etc. The leased line is the value-added network.
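To make the transmission-speed factor from section 1.4.1 concrete, here is a rough transfer-time calculation. The figures are illustrative, and real transfers add protocol overhead and propagation delay, so this is a lower bound.

```python
# Lower-bound estimate of file-transfer time from link speed alone.
# Note the unit mismatch to watch for: link speeds are quoted in bits
# per second, while file sizes are usually given in bytes.

def transfer_time_s(file_bytes, link_bps):
    return file_bytes * 8 / link_bps

# A 5 MB file over a 10 Mbps link:
print(transfer_time_s(5_000_000, 10_000_000))  # 4.0
```

The same file over a 56 kbps dial-up line would take over 700 seconds, which shows why transmission speed dominates the response-time factors for large transfers.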

Check Your Progress: 1. Define Communication. 2. What is consistency in networks?

1.4.6 Types of communication networks

Connection-oriented and connectionless forwarding

In connection-oriented (CO) forwarding, all packets of a same flow follow the same path. That path is also called a connection or a circuit from source to destination. The telephone network is the classical example of a CO network. The advantage of a CO implementation is that the capacity, or the amount of resources, and the service can be guaranteed. There is hardly any quality variation due to interference with other users in the CO network. The disadvantage is cost: circuit costs are fixed, independent of whether the assigned capacity is actually used or not. Thus, in the telephone system, we pay per unit time, irrespective of whether we talk or are silent. Lastly, CO networking incurs a higher blocking probability for users than connectionless networking, due to the confinement to connections and due to the set-up time of a connection. Before a user is allowed to transmit data, the connection must be set up; only if a connection is available can the communication start.

Moore's Law

Gordon Moore of Intel observed that the capacity (bits processed per unit time) and the number of memory cells in silicon technology have increased exponentially with time from 1970 on (a doubling every year and a half; the fit corresponds to a doubling every two years) due to the down-scaling of transistors. Intel believes that the number of transistors per chip will continue to follow Moore's law for the next 10 years: in 1999 (Mobile Pentium II), above 27M transistors/chip; in 2001, 100M transistors/chip; with predictions for 2011 of 1G transistors/chip. So far in the history of the industrialized world, Moore's law is a unique exponential scaling law of technology, which has been verified to hold for over 30 years.

Circuit-switched and packet-switched networks

Circuit-switched networks set up dedicated end-to-end connections. The telephony network is the typical example of a circuit-switched network. In circuit-switched networks, the connection is set up by the signaling function, which is the most important network function, and the routing function


is invoked during the process of connection set-up by the signaling function. Circuit-switched networks transfer information in a connection-oriented way. Packet-switched networks establish end-to-end communication by chopping the information flow into – generally variable-length – packets and by transferring these packets from source to destination. Routing in packet-switched networks is an important network functionality, while signaling can be absent if no resources are allocated in the network. Usually, packet-switched networks are associated with connectionless forwarding, although this is not a requirement. The Transmission Control Protocol (TCP) establishes a connection in an IP network, which is a packet-switched network that operates in CL-mode. An ATM network, by contrast, is a packet-switched network that operates in CO-mode. The forwarding of packets in TCP is still CL: the packets of a same TCP connection can follow different paths through the network, in contrast to an ATM connection. The CO-nature of TCP is a logical, not a physical, connection between source and destination on OSI layer 4, which does not see the specifics of forwarding.

[Figure: Three levels of hierarchy in networking. Each level is separated by aggregators. In the access, there are many, but relatively small, flows; each flow can still be controlled (via scheduling and admission control). Towards the core network, these flows are aggregated into fewer, but larger, traffic volumes. In the core network, the traffic streams are large, resulting in mainly topology-driven control (routing and load-balancing); the individual flows are aggregated into traffic and destination classes and are sometimes no longer identifiable.]

Network hierarchy

A general world-wide communication network consists of three parts: an access network that offers connectivity to residential users, an edge network that combines several access networks (and possibly corporate networks) and a core network (or the backbone).
Important access networks are the residential telephony network, the cable TV network, the ADSL network and mobile networks such as GSM and wireless LANs. As core networks, we mention the international telephony trunks and the Internet backbone(s). Edge networks lie in between and are not clearly defined, but only relative to the access and the core network. The large differences in throughput and other quantities demonstrate the need for different network management and underlying physical transmission technologies.

1.4.7 Types of communication interaction

In networking, various types of interactions between communicating parties exist.

Centralized versus distributed control

In a centralized approach, a master is appointed among the systems in a network. The master controls each interaction in the network of these systems. The alternative, distributed control, consists of using a policy or protocol of communication (e.g. start speaking as soon as someone else stops, but back off immediately if a third one starts). It is the rule of the protocol that controls distributed interaction.

Finite state machine interaction

A first type of interaction between the set of systems is that driven by a finite state machine. A finite state machine follows and executes rules or actions depending on the state of the process. For example: system 2 has just spoken, thus we move to and poll system 3, and so on. The operation of a finite state machine can be visualized and described by a graph that relates the processes in the different states. A well-known example of such a graph is a Petri net.

Client-server interaction: "ask-when-needed" or event driven

Another mode of interaction only operates when an inner state or a process in a system requires information from other connected systems. This mode is event driven and is called a client-server interaction.
For example, when clicking on a link on a webpage, there is a short communication with a server that returns the IP address of the machine on which the content of the intended page is stored. A client-server interaction is thus a relation between processes in which each


process can take the initiative to communicate with another process. A particular process is not necessarily always a client or a server; rather, a process is a client or a server with respect to another process, and their roles can change over time. A process A can be a client in the relation with process B, but the server with respect to process C. The communication in distributed systems needs to be designed to avoid a deadlock, the situation in which processes wait infinitely long for each other. The communication relations between all processes in a distributed system can be represented by a graph. Deadlocks can be avoided if that graph is acyclic, i.e. the graph does not contain cycles. In an acyclic graph or tree, there is only one path between each pair of processes. Since a client-server interaction asks a question and waits for the reply (using a single path in the process relation graph), the client-server concept allows one to build a deadlock-free architecture of a distributed system. Large and complex distributed systems can be built based on the relatively simple client-server principle.

Summary of interaction models

Client-server interaction forms the basis of most network communications and is fundamental because it helps us understand the foundation on which distributed algorithms are built. Usually, there are more clients than servers. Typically, a finite state machine is used when there is little interaction or the interaction is simple, such that the number of possible states in the communication protocol is limited. Client-server interaction is more flexible and suited for intense interaction. Although we may argue that distributed networking is preferable in terms of scalability and robustness, the other side of the coin is that distributed networking is more difficult to control and design. For example, distributed routing seems very robust and scalable. In nature, ants find their way individually by using a small set of rules.
It seems interesting to transfer rules deduced from the behavior of ants to communication networking. Apart from finding the minimal set of rules, demonstrating that the ensemble of rules operates correctly for the whole colony of packets in the network is difficult. Further, proving optimality, or how close distributed networking lies to a global optimum, is generally difficult.

1.4.8 Communication modes

In general, four different communication modes can be distinguished: unicast, multicast, broadcast and anycast. Unicast is a communication between two parties (one-to-one); a typical example is a telephone call. Multicast consists of the modes one-to-many and many-to-many, with a video conference as an example. Broadcast, defined as a communication from one user to all users in a network, is an extreme case of multicast; the typical example is the broadcasting of information for television and radio. Finally, anycast is a communication from one to any member of a group. For example, when information is replicated over many servers, a user wants to download the information from an arbitrary server of that group. Most often, the anycast mode will point or route the user's request to the server nearest to the user. The user does not need to know the location or the individual addresses of the servers, only the anycast address of the group of servers.

1.4.9 Performance metrics

In the design of communications networks, the preference for algorithm or implementation A of a network functionality over algorithm B depends on various factors. Besides the monetary cost, the most common technical factors, called performance metrics, are the computation complexity, the throughput, the blocking, the reliability, the security, the memory consumption, and the manageability. In general, the performance metrics for a particular algorithm or implementation are not always easy to compute. The precise definitions, the analysis, and the computation of performance metrics belong to the domain of performance analysis, for which we refer to Van Mieghem (2006). Apart from the precise evaluation of the performance of a particular algorithm or implementation, some of the performance metrics are not yet universally and accurately defined. For example, the reliability or the robustness of a network topology can be evaluated in many different ways. Another frequently appearing design metric is scalability.

Scalability

The term "scalability" expresses the increase in the complexity of operating, controlling or managing a network as relevant network parameters, such as the size or the number of nodes/systems in the network, the traffic load, the interaction rate, etc., increase. Whether a property of a network is scalable or not strongly depends on that property itself.

1.5 Data Transmission Concepts and Terminology

1.5.1 Transmission terminology

The distance over which data moves within a computer may vary from a few thousandths of an inch, as is the case within a single IC chip, to as much as several feet along the backplane of the main circuit board. Over such small distances, digital data may be transmitted as direct, two-level electrical signals over simple copper conductors. Except for the fastest computers, circuit designers are not very concerned about the shape of the conductor or the analog characteristics of signal transmission. Frequently, however, data must be sent beyond the local circuitry that constitutes a computer. In many cases, the distances involved may be enormous. Unfortunately, as the distance between the source of a message and its destination increases, accurate transmission becomes increasingly difficult. This results from the electrical distortion of signals traveling through long conductors, and from noise added to the signal as it propagates through a transmission medium. Although some precautions must be taken for data exchange within a computer, the biggest problems occur when data is transferred to devices outside the computer's circuitry. In this case, distortion and noise can become so severe that information is lost. Data communications concerns the transmission of digital messages to devices external to the message source. "External" devices are generally thought of as independently powered circuitry that exists beyond the chassis of a computer or other digital message source. As a rule, the maximum permissible transmission rate of a message is directly proportional to signal power, and inversely proportional to channel noise. It is the aim of any communications system to provide the highest possible transmission rate at the lowest possible power and with the least possible noise.

1.5.2 Communication Channels

A communications channel is a pathway over which information can be conveyed. It may be defined by a physical wire that connects communicating devices, or by a radio, laser, or other radiated energy source that has no obvious physical presence. Information sent through a communications channel has a source from which the information originates, and a destination to which the information is delivered. Although information originates from a single source, there may be more than one destination, depending upon how many receive stations are linked to the channel and how much energy the transmitted signal possesses. In a digital communications channel, the information is represented by individual data bits, which may be encapsulated into multi-bit message units. A byte, which consists of eight bits, is an example of a message unit that may be conveyed through a digital communications channel. A collection of bytes may itself be grouped into a frame or other higher-level message unit. Such multiple levels of encapsulation facilitate the handling of messages in a complex data communications network. Any communications channel has a direction associated with it:


The message source is the transmitter, and the destination is the receiver. A channel whose direction of transmission is unchanging is referred to as a simplex channel. For example, a radio station is a simplex channel because it always transmits the signal to its listeners and never allows them to transmit back. A half-duplex channel is a single physical channel in which the direction may be reversed. Messages may flow in two directions, but never at the same time, in a half-duplex system. In a telephone call, one party speaks while the other listens. After a pause, the other party speaks and the first party listens. Speaking simultaneously results in garbled sound that cannot be understood. A full-duplex channel allows simultaneous message exchange in both directions. It really consists of two simplex channels, a forward channel and a reverse channel, linking the same points. The transmission rate of the reverse channel may be slower if it is used only for flow control of the forward channel.

Serial Communications

Most digital messages are vastly longer than just a few bits. Because it is neither practical nor economical to transfer all bits of a long message simultaneously, the message is broken into smaller parts and transmitted sequentially. Bit-serial transmission conveys a message one bit at a time through a channel. Each bit represents a part of the message. The individual bits are then reassembled at the destination to compose the message. In general, one channel will pass only one bit at a time. Thus, bit-serial transmission is necessary in data communications if only a single channel is available. Bit-serial transmission is normally just called serial transmission and is the chosen communications method in many computer peripherals. Byte-serial transmission conveys eight bits at a time through eight parallel channels. Although the raw transfer rate is eight times faster than in bit-serial transmission, eight channels are needed, and the cost may be as much as eight times higher to transmit the message. When distances are short, it may nonetheless be both feasible and economical to use parallel channels in return for high data rates. The popular Centronics printer interface is a case where byte-serial transmission is used. As another example, it is common practice to use a 16-bit-wide data bus to transfer data between a microprocessor and memory chips; this provides the equivalent of 16 parallel channels. On the other hand, when communicating with a timesharing system over a modem, only a single channel is available, and bit-serial transmission is required. This figure illustrates these ideas:
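The contrast between the two methods can be sketched in code (an illustrative model only, not any particular hardware interface; bits are taken LSB first):

```python
def bit_serial(data):
    """Send one bit at a time over a single channel, LSB first."""
    for byte in data:
        for i in range(8):
            yield (byte >> i) & 1          # one bit per channel tick

def byte_serial(data):
    """Send eight bits at once over eight parallel channels."""
    for byte in data:
        yield tuple((byte >> i) & 1 for i in range(8))

# 'A' is 0x41 = 0b01000001.
print(list(bit_serial(b"A")))    # [1, 0, 0, 0, 0, 0, 1, 0]  (8 ticks)
print(list(byte_serial(b"A")))   # [(1, 0, 0, 0, 0, 0, 1, 0)]  (1 tick)
```

The byte-serial transfer finishes in one eighth of the ticks, at the cost of eight parallel channels.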


The baud rate refers to the signaling rate at which data is sent through a channel and is measured in electrical transitions per second. In the EIA232 serial interface standard, at most one signal transition occurs per bit, so the baud rate and bit rate are identical. In this case, a rate of 9600 baud corresponds to a transfer of 9,600 data bits per second with a bit period of 104 microseconds (1/9600 sec.). If two electrical transitions were required for each bit, as is the case in Manchester coding, then at a rate of 9600 baud, only 4800 bits per second could be conveyed. The channel efficiency is the fraction of transmitted bits that carry useful information. It does not count the framing, formatting, and error-detecting bits that may be added to the information bits before a message is transmitted, and it is always less than one.
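The arithmetic in this paragraph can be reproduced directly:

```python
baud = 9600                                # signaling rate: transitions per second

# EIA232: at most one transition per bit, so bit rate equals baud rate.
bit_rate = baud // 1
bit_period_us = 1_000_000 / bit_rate       # microseconds per bit
print(bit_rate, round(bit_period_us))      # 9600 bits/s, ~104 us per bit

# With two transitions per bit (as in Manchester coding), the same
# 9600-baud channel carries only half the data.
print(baud // 2)                           # 4800 bits/s
```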

The data rate of a channel is often specified by its bit rate (often thought, erroneously, to be the same as the baud rate). An equivalent measure of channel capacity is bandwidth. In general, the maximum data rate a channel can support is directly proportional to the channel's bandwidth and inversely proportional to the channel's noise level. A communications protocol is an agreed-upon convention that defines the order and meaning of bits in a serial transmission. It may also specify a procedure for exchanging messages. A protocol will define how many data bits compose a message unit, the framing and formatting bits, any error-detecting bits that may be added, and other information that governs control of the communications hardware. Channel efficiency is determined by the protocol design rather than by digital hardware considerations. Note that there is a tradeoff between channel efficiency and reliability: protocols that provide greater immunity to noise by adding error-detecting and error-correcting codes must necessarily become less efficient.

Asynchronous vs. Synchronous Transmission

Serialized data is not generally sent at a uniform rate through a channel. Instead, there is usually a burst of regularly spaced binary data bits followed by a pause, after which the data flow resumes. Packets of binary data are sent in this manner, possibly with variable-length pauses between packets, until the message has been fully transmitted. In order for the receiving end to know the proper moment to read individual binary bits from the channel, it must know exactly when a packet begins and how much time elapses between bits. When this timing information is known, the receiver is said to be synchronized with the transmitter, and accurate data transfer becomes possible. Failure to remain synchronized throughout a transmission will cause data to be corrupted or lost. Two basic techniques are employed to ensure correct synchronization. In synchronous systems, separate channels are used to transmit data and timing information. The timing channel transmits clock pulses to the receiver. Upon receipt of a clock pulse, the receiver reads the data channel and latches the bit value found on the channel at that moment. The data channel is not read again until the next clock pulse arrives. Because the transmitter originates both the data and the timing pulses, the receiver will read the data channel only when told to do so by the transmitter (via the clock pulse), and synchronization is guaranteed. Techniques exist to merge the timing signal with the data so that only a single channel is required; this is especially useful when synchronous transmissions are to be sent through a modem. Two methods in which a data signal is self-timed are non-return-to-zero and biphase Manchester coding, both of which encode a data stream into an electrical waveform for transmission. In asynchronous systems, a separate timing channel is not used. The transmitter and receiver must be preset in advance to an agreed-upon baud rate.
A very accurate local oscillator within the receiver will then generate an internal clock signal that is equal to the transmitter's within a fraction of a percent. For the most common serial protocol, data is sent in small packets of 10 or 11 bits, eight of which constitute message information. When the channel is idle, the signal voltage corresponds to a continuous logic '1'. A data packet always begins with a logic '0' (the start bit) to signal the receiver that a transmission is starting. The start bit triggers an internal timer in the receiver that generates the needed clock pulses. Following the start bit, eight bits of message data are sent bit by bit at the agreed-upon baud rate. The packet is concluded with a parity bit and a stop bit. One complete packet is illustrated below:
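The packet layout just described can also be sketched in code. The helper name and the LSB-first bit ordering are assumptions for the example (even parity is used, as described later in this unit):

```python
def frame_byte(byte):
    """Build an 11-bit asynchronous frame: start bit, 8 data bits
    (LSB first), even parity bit, stop bit. Idle line level is logic 1."""
    data_bits = [(byte >> i) & 1 for i in range(8)]
    parity = sum(data_bits) % 2                  # makes the total count of 1s even
    return [0] + data_bits + [parity] + [1]      # start + data + parity + stop

frame = frame_byte(ord("A"))                     # 'A' = 0x41 = 0b01000001
print(frame)   # [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]
```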


The packet length is short in asynchronous systems to minimize the risk that the local oscillators in the receiver and transmitter will drift apart. When high-quality crystal oscillators are used, synchronization can be guaranteed over an 11-bit period. Every time a new packet is sent, the start bit resets the synchronization, so the pause between packets can be arbitrarily long. Note that the EIA232 standard defines electrical, timing, and mechanical characteristics of a serial interface. However, it does not include the asynchronous serial protocol shown in the previous figure, or the ASCII alphabet described next.

The ASCII Character Set

Characters sent through a serial interface generally follow the ASCII (American Standard Code for Information Interchange) character standard:

This standard relates binary codes to printable characters and control codes. Fully 25 percent of the ASCII character set represents nonprintable control codes, such as carriage return (CR) and line feed (LF). Most modern character-oriented peripheral equipment abides by the ASCII standard, and thus may be used interchangeably with different computers.

Parity and Checksums

Noise and momentary electrical disturbances may cause data to be changed as it passes through a communications channel. If the receiver fails to detect this, the received message will be incorrect, resulting in possibly serious consequences. As a first line of defense against data errors, they must be detected. If an error can be flagged, it might be possible to request that the faulty packet be resent, or at least to prevent the flawed data from being taken as correct. If sufficient redundant information is sent, one- or two-bit errors may be corrected by hardware within the receiver before the corrupted data ever reaches its destination. A parity bit is added to a data packet for the purpose of error detection. In the even-parity convention, the value of the parity bit is chosen so that the total number of '1' digits in the combined data-plus-parity packet is an even number. Upon receipt of the packet, the parity needed for the data is recomputed by local hardware and compared to the parity bit received with the data. If any bit has changed state, the parity will not match, and an error will have been detected. In fact, if an odd number of bits (not just one) have been altered, the parity will not match. If an even number of bits has been reversed, the parity will match even though an error has occurred. However, a statistical analysis of data communication errors has shown that a single-bit error is much more probable than a multi-bit error in the presence of random noise. Thus, parity is a reliable method of error detection.
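The detection properties described above (an odd number of flipped bits is caught; an even number slips through) are easy to verify in a few lines:

```python
def even_parity(bits):
    """Parity bit value that makes the total count of 1s even."""
    return sum(bits) % 2

data = [1, 0, 1, 1, 0, 0, 1, 0]
packet = data + [even_parity(data)]        # data bits plus parity bit

def check(packet):
    """A packet is consistent iff its total number of 1s is even."""
    return sum(packet) % 2 == 0

print(check(packet))                       # True: no error

corrupted = packet.copy()
corrupted[3] ^= 1                          # flip one bit in transit
print(check(corrupted))                    # False: error detected

corrupted[5] ^= 1                          # flip a second bit
print(check(corrupted))                    # True: two errors go undetected
```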


Another approach to error detection involves the computation of a checksum. In this case, the packets that constitute a message are added arithmetically. A checksum number is appended to the packet sequence so that the sum of data plus checksum is zero. When received, the packet sequence may be added, along with the checksum, by a local microprocessor. If the sum is nonzero, an error has occurred.
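A sketch of this zero-sum checksum scheme, assuming modulo-256 (single-byte) arithmetic:

```python
def append_checksum(packets):
    """Append a byte so the modulo-256 sum of all bytes is zero."""
    checksum = (-sum(packets)) % 256
    return packets + [checksum]

def verify(packets_with_checksum):
    """The receiver re-adds everything; a nonzero sum flags an error."""
    return sum(packets_with_checksum) % 256 == 0

msg = [0x12, 0x34, 0x56]
framed = append_checksum(msg)
print(framed, verify(framed))          # checksum byte 0x64; True

framed[1] ^= 0x08                      # corrupt one byte in transit
print(verify(framed))                  # False: error detected
```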

Errors may not only be detected, but also corrected if additional code is added to a packet sequence. If the error probability is high or if it is not possible to request retransmission, this may be worth doing. However, including error-correcting code in a transmission lowers channel efficiency, and results in a noticeable drop in channel throughput.

1.5.3 Data Compression

If a typical message were statistically analyzed, it would be found that certain characters are used much more frequently than others. By analyzing a message before it is transmitted, short binary codes may be assigned to frequently used characters and longer codes to rarely used characters. In doing so, it is possible to reduce the total number of characters sent without altering the information in the message. Appropriate decoding at the receiver will restore the message to its original form. This procedure, known as data compression, may result in a 50 percent or greater savings in the amount of data transmitted. Even though time is necessary to analyze the message before it is transmitted, the savings may be great enough so that the total time for compression, transmission, and decompression will still be lower than it would be when sending an uncompressed message. Some kinds of data will compress much more than others. Data that represents images, for example, will usually compress significantly, perhaps by as much as 80 percent over its original size. Data representing a computer program, on the other hand, may be reduced only by 15 or 20 percent. A compression method called Huffman coding is frequently used in data communications, and particularly in fax transmission. Clearly, most of the image data for a typical business letter represents white paper, and only about 5 percent of the surface represents black ink. It is possible to send a single code that, for example, represents a consecutive string of 1000 white pixels rather than a separate code for each white pixel. Consequently, data compression will significantly reduce the total message length for a faxed business letter. Were the letter made up of randomly distributed black ink covering 50 percent of the white paper surface, data compression would hold no advantages.
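The fax example is essentially run-length coding: one code per run of identical pixels rather than one code per pixel. (Huffman coding proper assigns short codes to frequent symbols; the sketch below shows only the simpler run-length idea.)

```python
from itertools import groupby

def run_length_encode(pixels):
    """Replace each run of identical pixels with a (value, count) pair."""
    return [(value, len(list(run))) for value, run in groupby(pixels)]

# A mostly white scan line: 1000 white pixels, 5 black, 995 white.
line = "w" * 1000 + "b" * 5 + "w" * 995
encoded = run_length_encode(line)
print(encoded)                                 # [('w', 1000), ('b', 5), ('w', 995)]
print(len(line), "pixels ->", len(encoded), "runs")   # 2000 pixels -> 3 runs
```

Randomly distributed black pixels would produce roughly as many runs as pixels, which is why compression holds no advantage in that case.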

1.5.4 Data Encryption

Privacy is a great concern in data communications. Faxed business letters can be intercepted at will through tapped phone lines or intercepted microwave transmissions, without the knowledge of the sender or receiver. To increase the security of this and other data communications, including digitized telephone conversations, the binary codes representing data may be scrambled in such a way that unauthorized interception will produce an indecipherable sequence of characters. Authorized receive stations will be equipped with a decoder that enables the message to be restored. The process of scrambling, transmitting, and descrambling is known as encryption. Custom integrated circuits have been designed to perform this task and are available at low cost. In some cases, they will be incorporated into the main circuitry of a data communications device and function without operator knowledge. In other cases, an external circuit is used so that the device, and its encrypting/decrypting technique, may be transported easily.

Data Storage Technology

Normally, we think of communications science as dealing with the contemporaneous exchange of information between distant parties. However, many of the same techniques employed in data communications are also applied to data storage to ensure that the retrieval of information from a storage medium is accurate. We find, for example, that similar kinds of error-correcting codes used to protect digital telephone transmissions from noise are also used to guarantee correct readback of digital data from compact audio disks, CD-ROMs, and tape backup systems.

Data Transfer in Digital Circuits

Data is typically grouped into packets that are either 8, 16, or 32 bits long, and passed between temporary holding units called registers. Data within a register is available in parallel because each bit exits the register on a separate conductor.
To transfer data from one register to another, the output conductors of one register are switched onto a channel of parallel wires referred to as a bus. The input conductors of another register, which is also connected to the bus, capture the information:

Following a data transaction, the content of the source register is reproduced in the destination register. It is important to note that after any digital data transfer, the source and destination registers are equal; the source register is not erased when the data is sent. The transmit and receive switches shown above are electronic and operate in response to commands from a central control unit. It is possible that two or more destination registers will be switched on to receive data from a single source. However, only one source may transmit data onto the bus at any time. If multiple sources were to attempt transmission simultaneously, an electrical conflict would occur when bits of opposite value are driven onto a single bus conductor. Such a condition is referred to as bus contention. Not only will a bus contention result in the loss of information, but it may also damage the electronic circuitry. As long as all registers in a system are linked to one central control unit, bus contentions should never occur if the circuit has been designed properly. Note that the data buses within a typical microprocessor are fundamentally half-duplex channels.

Transmission over Short Distances

When the source and destination registers are part of an integrated circuit (within a microprocessor chip, for example), they are extremely close (thousandths of an inch). Consequently, the bus signals are at very low power levels, may traverse a distance in very little time, and are not very susceptible to external noise and distortion. This is the ideal environment for digital communications. However, it is not yet possible to integrate all the necessary circuitry for a computer (i.e., CPU, memory, disk control, video and display drivers, etc.) on a single chip. When data is sent off-chip to another integrated circuit, the bus signals must be amplified and conductors extended out of the chip through external pins. Amplifiers may be added to the source register:

Bus signals that exit microprocessor chips and other VLSI circuitry are electrically capable of traversing about one foot of conductor on a printed circuit board, or less if many devices are connected to it. Special buffer circuits may be added to boost the bus signals sufficiently for transmission over several additional feet of conductor length, or for distribution to many other chips (such as memory chips).

Noise and Electrical Distortion

Because of the very high switching rate and relatively low signal strength found on data, address, and other buses within a computer, direct extension of the buses beyond the confines of the main circuit board or plug-in boards would pose serious problems. First, long runs of electrical conductors, either on printed circuit boards or through cables, act like receiving antennas for electrical noise radiated by motors, switches, and electronic circuits:


Such noise becomes progressively worse as the length increases, and may eventually impose an unacceptable error rate on the bus signals. Just a single bit error in transferring an instruction code from memory to a microprocessor chip may cause an invalid instruction to be introduced into the instruction stream, in turn causing the computer to totally cease operation. A second problem involves the distortion of electrical signals as they pass through metallic conductors. Signals that start at the source as clean, rectangular pulses may be received as rounded pulses with ringing at the rising and falling edges:

These effects are properties of transmission through metallic conductors, and become more pronounced as the conductor length increases. To compensate for distortion, signal power must be increased or the transmission rate decreased. Special amplifier circuits are designed for transmitting direct digital signals through cables. For the relatively short distances between components on a printed circuit board or along a computer backplane, the amplifiers are simple IC chips that operate from standard +5 V power. The normal output voltage from the amplifier for logic '1' is slightly higher than the minimum needed to pass the logic '1' threshold. Correspondingly, for logic '0', it is slightly lower. The difference between the actual output voltage and the threshold value is referred to as the noise margin, and represents the amount of noise voltage that can be added to the signal without creating an error:
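The noise-margin calculation can be illustrated with typical 5 V TTL threshold values (standard datasheet figures, not taken from the text):

```python
# Typical 5 V TTL logic levels, in volts (illustrative values).
V_OH = 2.4   # minimum guaranteed driver output for logic '1'
V_IH = 2.0   # minimum input voltage a receiver accepts as logic '1'
V_OL = 0.4   # maximum driver output for logic '0'
V_IL = 0.8   # maximum input voltage a receiver accepts as logic '0'

# Noise margin: noise voltage tolerated before the threshold is crossed.
high_margin = round(V_OH - V_IH, 2)
low_margin  = round(V_IL - V_OL, 2)
print(high_margin, low_margin)   # 0.4 0.4
```

With these levels, up to 0.4 V of noise can be added to either logic level without causing an error.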


Transmission over Medium Distances

Computer peripherals such as a printer or scanner generally include mechanisms that cannot be situated within the computer itself. Our first thought might be simply to extend the computer's internal buses with a cable of sufficient length to reach the peripheral. Doing so, however, would expose all bus transactions to external noise and distortion, even though only a very small percentage of these transactions concern the distant peripheral to which the bus is connected. If a peripheral can be located within 20 feet of the computer, however, relatively simple electronics may be added to make data transfer through a cable efficient and reliable. To accomplish this, a bus interface circuit is installed in the computer:

It consists of a holding register for peripheral data, timing and formatting circuitry for external data transmission, and signal amplifiers to boost the signal sufficiently for transmission through a cable. When communication with the peripheral is necessary, data is first deposited in the holding register by the microprocessor. This data will then be reformatted, sent with error-detecting codes, and transmitted at a relatively slow rate by digital hardware in the bus interface circuit. In addition, the signal power is greatly boosted before transmission through the cable. These steps ensure that the data will not be corrupted by noise or distortion during its passage through the cable. In addition, because only data destined for the peripheral is sent, the party-line transactions taking place on the computer's buses are not unnecessarily exposed to noise. Data sent in this manner may be transmitted in byte-serial format if the cable has eight parallel channels (at least 10 conductors for half-duplex operation), or in bit-serial format if only a single channel is available.

Transmission over Long Distances

When relatively long distances are involved in reaching a peripheral device, driver circuits must be inserted after the bus interface unit to compensate for the electrical effects of long cables:

This is the only change needed if a single peripheral is used. However, if many peripherals are connected, or if other computer stations are to be linked, a local area network (LAN) is required, and it becomes necessary to drastically change both the electrical drivers and the protocol to send messages through the cable. Because multiconductor cable is expensive, bit-serial transmission is almost always used when the distance exceeds 20 feet. In either a simple extension cable or a LAN, a balanced electrical system is used for transmitting digital data through the channel. This type of system involves at least two wires per channel, neither of which is a ground. Note that a common ground return cannot be shared by multiple channels in the same cable, as would be possible in an unbalanced system. The basic idea behind a balanced circuit is that a digital signal is sent on two wires simultaneously, one wire expressing a positive voltage image of the signal and the other a negative voltage image. When both wires reach the destination, the signals are subtracted by a summing amplifier, producing a signal swing of twice the value found on either incoming line. If the cable is exposed to radiated electrical noise, a small voltage of the same polarity is added to both wires in the cable. When the signals are subtracted by the summing amplifier, the noise cancels and the signal emerges from the cable without noise:
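The cancellation can be shown numerically. The sample values are arbitrary (voltages scaled to integers for clarity); the point is that common-mode noise subtracts out while the signal doubles:

```python
signal = [5, -5, 5, 5, -5]        # logic levels on the wire, scaled units
noise  = [3, -2, 5, 1, -4]        # the same noise couples onto both wires

positive_wire = [ s + n for s, n in zip(signal, noise)]   # +image plus noise
negative_wire = [-s + n for s, n in zip(signal, noise)]   # -image plus noise

# The summing amplifier subtracts the two wires:
# (s + n) - (-s + n) = 2s, so the noise term n cancels exactly.
received = [p - m for p, m in zip(positive_wire, negative_wire)]
print(received)                   # [10, -10, 10, 10, -10]
```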

A great deal of technology has been developed for LAN systems to minimize the amount of cable required and maximize the throughput. The costs of a LAN have been concentrated in the electrical interface card that would be installed in PCs or peripherals to drive the cable, and in the communications software, not in the cable itself (whose cost has been minimized). Thus, the cost and complexity of a LAN are not particularly affected by the distance between stations.

Transmission over Very Long Distances

Data communications through the telephone network can reach any point in the world. The volume of overseas fax transmissions is increasing constantly, and computer networks that link thousands of businesses, governments, and universities are pervasive. Transmissions over such distances are not generally accomplished with a direct-wire digital link, but rather with digitally modulated analog carrier signals. This technique makes it possible to use existing analog telephone voice channels for digital data, although at considerably reduced data rates compared to a direct digital link. Transmission of data from your personal computer to a timesharing service over phone lines requires that data signals be converted to audible tones by a modem. An audio sine wave carrier is used, and, depending on the baud rate and protocol, will encode data by varying the frequency, phase, or amplitude of the carrier. The receiver's modem accepts the modulated sine wave and extracts the digital data from it. Several modulation techniques typically used in encoding digital data for analog transmission are shown below:
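As one concrete sketch of frequency modulation, frequency-shift keying (FSK) picks one of two tones per bit. The tone frequencies and sampling parameters below are illustrative assumptions, not taken from the text:

```python
import math

def fsk_modulate(bits, f0=1200, f1=2200, baud=1200, sample_rate=9600):
    """Frequency-shift keying: emit one sine tone per bit.
    f0 carries a 0 bit, f1 carries a 1 bit (illustrative frequencies)."""
    samples_per_bit = sample_rate // baud
    samples, phase = [], 0.0
    for bit in bits:
        freq = f1 if bit else f0
        for _ in range(samples_per_bit):
            # Accumulate phase so the waveform stays continuous between bits.
            phase += 2 * math.pi * freq / sample_rate
            samples.append(math.sin(phase))
    return samples

wave = fsk_modulate([1, 0, 1, 1])
print(len(wave))    # 4 bits x 8 samples per bit = 32 samples
```

A receiving modem would recover the bits by estimating which of the two tone frequencies dominates each bit interval.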


Similar techniques may be used in digital storage devices such as hard disk drives to encode data for storage on an analog medium. Transmission from transmitter to receiver takes place over a transmission medium using electromagnetic waves.

Guided media. Waves are guided along a physical path, e.g. twisted pair, optical fibre, coaxial cable.
Unguided media. Waves are not guided, e.g. air waves, radio.
Direct link. The signal goes from transmitter to receiver with no intermediate devices other than amplifiers and repeaters.
Point-to-point link. Guided media with a direct link between two devices, those two devices being the only ones sharing the medium.
Multipoint configuration. More than two devices can share the same medium.

Transmission may be simplex, half duplex, or full duplex.

Frequency, spectrum, and bandwidth. A signal is generated by a transmitter and transmitted over a medium; it can be viewed as a function of time or of frequency (components of different frequencies).

Time-domain concepts:
Continuous signal. Signal intensity varies in a smooth fashion over time; there are no breaks or discontinuities in the signal.
Discrete signal. Signal intensity maintains one of a set of prescribed values for some period of time, then changes abruptly to another.
Periodic signal. The same signal pattern repeats over time.

1.5.5 Data Transmission

Fundamental frequency. The base frequency such that the frequencies of all components can be expressed as its integer multiples; the period of the aggregate signal is the same as the period of the fundamental frequency. Using Fourier analysis, we can show that every signal can be decomposed into a set of sinusoids:
- The time-domain function s(t) specifies a signal in terms of its amplitude at each instant of time
- The frequency-domain function S(f) specifies the signal in terms of the peak amplitude of its constituent frequencies
Spectrum. The range of frequencies contained in a signal.
Absolute bandwidth. The width of the spectrum.
Effective bandwidth. The narrow band of frequencies containing most of the energy of the signal.
DC component. A component of zero frequency; it changes the average amplitude of the signal to a nonzero value.

Relationship between data rate and bandwidth:

Any transmitter/receiver system can accommodate only a limited range of frequencies; the range for FM radio transmission, for example, is 88-108 MHz. This limits the data rate that can be carried over the transmission medium. Consider a sine wave of frequency f, taking the positive pulse to be binary 1 and the negative pulse to be binary 0, and add to it sine waves of frequencies 3f, 5f, and 7f.

The resultant waveform starts to approximate a square wave
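This harmonic build-up can be checked numerically. The sketch below (illustrative helper names, standard-library Python only) sums the odd harmonics f, 3f, 5f, ... with amplitudes proportional to 1/k, and evaluates the partial sums at the centre of the positive half-cycle, where an ideal unit square wave equals 1:

```python
import math

def square_wave_approx(t, f, n_components):
    """Partial Fourier sum of a unit square wave:
    s(t) = (4/pi) * sum over odd k of sin(2*pi*k*f*t)/k."""
    total, k = 0.0, 1
    for _ in range(n_components):
        total += math.sin(2 * math.pi * k * f * t) / k
        k += 2
    return (4 / math.pi) * total

f = 1.0   # fundamental frequency in Hz; period is 1 s
t = 0.25  # centre of the positive half-cycle
for n in (1, 2, 4, 100):
    # the approximation creeps towards 1.0 as harmonics are added
    print(n, round(square_wave_approx(t, f, n), 3))
```

With one component the value overshoots to 4/pi (about 1.27); with the first four components (f, 3f, 5f, 7f) it is already within roughly 8% of the ideal value, which is why limiting the bandwidth to the first few harmonics still yields a recognisable square wave.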

This waveform has an infinite number of frequency components and hence infinite bandwidth. The peak amplitude of the kth frequency component is 1/k, so most of the energy is concentrated in the first few frequencies; limiting the bandwidth to only those frequencies gives a shape that is still reasonably close to a square wave. A digital transmission system might, for example, be capable of transmitting signals with a bandwidth of 4 MHz. A given bandwidth can support various data rates, depending on the ability of the receiver to discern the difference between 0 and 1 in the presence of noise and other impairments. Any digital waveform has infinite bandwidth; the transmission system limits the bandwidth of the waveform as a signal over the medium. For any given medium, cost is directly proportional to the bandwidth transmitted, so a signal of limited bandwidth is preferable to reduce cost; limiting the bandwidth, however, creates distortions that make it more difficult to interpret the received signal.

Center frequency. The point on which the bandwidth of a signal is centered. The higher the center frequency, the higher the potential bandwidth.

Analog and digital data transmission
- Analog vs digital (continuous vs discrete)
- Data: entities that convey information
- Signaling: physical propagation of a signal along a suitable medium
- Transmission: communication of data by propagation and processing of signals

Transmission Impairments
Attenuation is the reduction of the amplitude of an electrical signal, ideally with little or no distortion. It is logarithmic in nature for guided media and is expressed as a constant number of decibels per unit distance; for unguided media it is a more complex function of distance and atmospheric conditions. There are three considerations for the transmission engineer:
1. The received signal must have sufficient strength to enable detection
2. The signal must maintain a level sufficiently higher than noise to be received without error
3.
Attenuation is an increasing function of frequency.
In addition, the signal must not be so strong that it overloads the circuitry of the transmitter or receiver, which would cause distortion.

Beyond a certain distance, attenuation becomes large enough to require repeaters or amplifiers to boost the signal.
- Attenuation distorts the received signal, reducing intelligibility
- Attenuation can be equalized over a band of frequencies, e.g. by using amplifiers that amplify high frequencies more than low frequencies

Delay distortion
Delay distortion is peculiar to guided transmission media. It is caused by the fact that the velocity of signal propagation through a guided medium varies with frequency: in a band-limited signal, the velocity tends to be highest near the center frequency and falls towards the two edges of the band. The various frequency components therefore arrive at the receiver at different times, resulting in phase shifts between the frequencies. In digital data transmission, some signal components of one bit position spill over into other bit positions, causing intersymbol interference. Delay distortion may be reduced by using equalization techniques.

Noise
Noise consists of undesired signals that are inserted into the real signal during transmission. There are four types of noise:
1. Thermal noise (also called white noise)
- Occurs due to thermal agitation of electrons
- A function of temperature, and present in all electronic devices
- Uniformly distributed across the frequency spectrum
- Cannot be eliminated, and places an upper bound on system performance
- Thermal noise in a bandwidth of 1 Hz in any device or conductor is N0 = kT W/Hz, where N0 is the noise power density in watts per 1 Hz of bandwidth, k is Boltzmann's constant (1.3803 x 10^-23 J/K), and T is the temperature in kelvin


Noise is assumed to be independent of frequency. Thermal noise in a bandwidth of B Hz can be expressed as N = kTB or, in decibel-watts, N = 10 log k + 10 log T + 10 log B. Given a receiver with an effective noise temperature of 100 K and a 10 MHz bandwidth, the thermal noise level at the output is N = -228.6 + 10 log 10^2 + 10 log 10^7 = -138.6 dBW.
2. Intermodulation noise
- Occurs when signals at different frequencies share the same transmission medium
- May result in signals that are the sum, difference, or multiples of the original frequencies
- Arises when there is some nonlinearity in the transmitter, receiver, or intervening transmission system
- Nonlinearity may be caused by component malfunction or excessive signal strength
3. Crosstalk
- Unwanted coupling between signal paths
- Occurs due to electrical coupling between nearby twisted pairs, multiple signals on a coaxial cable, or unwanted signals picked up by microwave antennas
- Typically of the same order of magnitude as, or less than, thermal noise
4. Impulse noise
- Non-continuous noise, consisting of irregular pulses or noise spikes of short duration and high amplitude
- May be caused by lightning, or by flaws in the communications system
- Not a major problem for analog data, but can be significant for digital data
- A spike of 0.01 s will not destroy any voice data, but will destroy 560 bits being transmitted at 56 kbps

Channel capacity
The maximum rate at which data can be transmitted over a communication path or channel depends on four factors:
1. Data rate, in bps
2. Bandwidth, constrained by the transmitter and the nature of the transmission medium, expressed in cycles per second, or Hz
3. Noise: the average noise level over the channel
4. Error rate: the rate at which bits are received in error
Since bandwidth is proportional to cost, for digital data we would like to get as high a data rate as possible within a given limit of error rate for a given bandwidth.
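The thermal-noise example above is easy to reproduce. The helper below is an illustrative sketch (names are not from the text) that evaluates N = kTB in decibel-watts:

```python
import math

BOLTZMANN = 1.3803e-23  # Boltzmann's constant, J/K

def thermal_noise_dbw(temp_k, bandwidth_hz):
    """Thermal noise power N = kTB, expressed in decibel-watts:
    10 log k + 10 log T + 10 log B."""
    return 10 * math.log10(BOLTZMANN * temp_k * bandwidth_hz)

# Worked example from the text: effective noise temperature 100 K,
# bandwidth 10 MHz -> -228.6 + 20 + 70 = -138.6 dBW.
print(round(thermal_noise_dbw(100, 10e6), 1))  # -138.6
```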
Nyquist bandwidth
- Limitation on the data rate for a noise-free channel, set by the channel bandwidth
- If the rate of signal transmission is 2B, then a signal with frequencies no greater than B is sufficient to carry that signal rate
- Given a bandwidth B, the highest possible signal rate is 2B
- The above is true for signals with two voltage levels; with multilevel signaling, the Nyquist formulation becomes C = 2B log2 M, where M is the number of discrete signal levels
- For a given bandwidth, the data rate can thus be increased by increasing the number of different signal elements; in practice the value of M is limited by noise and other impairments on the transmission line

Shannon capacity formula
- The Nyquist formula gives the relationship between bandwidth and data rate; noise, however, can corrupt bits of data
- Shorter bits mean that more bits are corrupted by a given noise pattern, so a higher data rate brings a higher error rate
- Higher signal strength allows better discrimination of the signal in the presence of noise
- The signal-to-noise ratio (SNR) is the ratio of the power in the signal to the power in the noise present at a particular point in the transmission
- It is typically measured at the receiver, which must process the signal and eliminate the unwanted noise
- It is often measured in decibels, as 10 log10 of the ratio of


signal power to noise power: (SNR)dB = 10 log10 (S/N).
- SNR expresses the amount by which the intended signal exceeds the noise level
- A high SNR implies a high-quality signal, while a low SNR indicates the need for repeaters
- SNR sets the upper bound on the achievable data rate
- The maximum channel capacity C, in bps, is given by C = B log2(1 + SNR), where B is the bandwidth of the channel in Hz and SNR is the ratio itself, not the decibel value
The Shannon formula gives the maximum possible capacity assuming only white noise; it does not take into account impulse noise, delay distortion, or attenuation.

Digital Data Communication Techniques
For two devices linked by a transmission medium to exchange data, a high degree of cooperation is required. Typically data is transmitted one bit at a time, and the timing (rate, duration, spacing) of these bits must be the same for transmitter and receiver. There are two options for the transmission of bits:
1. Parallel. All bits of a byte are transferred simultaneously on separate parallel wires. Synchronization between the multiple bits is required, which becomes difficult over large distances. Parallel transfer gives large bandwidth but is expensive, and is practical only for devices close to each other.
2. Serial. Bits are transferred serially, one after the other. Serial transfer gives less bandwidth but is cheaper, and is suitable for transmission over long distances.

1.5.6 Transmission Techniques
Asynchronous: Small blocks of bits (generally bytes) are sent at a time, with no timing relation between consecutive bytes. When no transmission occurs, a default state corresponding to bit 1 is maintained. Because of the arbitrary delay between consecutive bytes, the clock pulses at the receiving end need to be resynchronized for each byte. This is achieved by providing two extra bits, start and stop.
Start bit: Prefixed to each byte and equal to 0, it ensures a transition from 1 to 0 at the onset of transmission of the byte. The leading edge of the start bit is used as a reference for generating clock pulses at the required sampling instants.
Thus each onset of a byte results in resynchronization of the receiver clock.
Stop bit: To ensure that the transition from 1 to 0 is always present at the beginning of a byte, the default state must be 1. But one byte may immediately follow another, and if the last bit of the first byte is 0, the transition from 1 to 0 will not occur. Therefore a stop bit, equal to 1, is suffixed to each byte; its duration is usually 1, 1.5, or 2 bit times.
Asynchronous transmission is simple and cheap, but it requires an overhead of 3 bits per byte: for a 7-bit code, 2 framing bits (start and stop) plus 1 parity bit imply a 30% overhead. The percentage can be reduced by sending larger blocks of data, but then timing errors between receiver and sender cannot be tolerated beyond [50 / number of bits in block] % (assuming sampling is done at the middle of each bit interval). A larger error will not only result in incorrect sampling but also in a misaligned bit count, i.e. a data bit can be mistaken for a stop bit if the receiver's clock is faster.
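The 30% overhead figure can be verified with a one-line calculation (the helper name is illustrative):

```python
def async_overhead(data_bits, start_bits=1, stop_bits=1, parity_bits=1):
    """Fraction of each transmitted character spent on framing and parity."""
    total = data_bits + start_bits + stop_bits + parity_bits
    return (total - data_bits) / total

# 7-bit code + start + stop + parity: 3 of every 10 bits are overhead.
print(async_overhead(7))  # 0.3
```

Note that a 1.5- or 2-bit-time stop bit, as mentioned above, would raise the overhead further.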


Synchronous: Larger blocks of bits are transmitted successively. Blocks of data are treated as sequences of bits or bytes. To prevent timing drift, the clocks at the two ends need to be synchronized. This can be done in two ways:
1. Provide a separate clock line between receiver and transmitter, or
2. Embed the clocking information in the data signal, e.g. biphase coding for digital signals.
Still another level of synchronization is required so that the receiver can determine the beginning and end of a block of data. Hence each block begins with a start code and ends with a stop code. These are in general the same, and are known as a flag: a unique sequence of a fixed number of bits. In addition, some control characters encompass the data within these flags; data plus control information is called a frame. Since any arbitrary bit pattern can be transmitted, there is no assurance that the bit pattern for the flag will not appear inside the frame, destroying frame-level synchronization. To avoid this we use bit stuffing.
Bit stuffing: Suppose our flag is 01111110 (six 1s). The transmitter then always inserts an extra 0 bit after each occurrence of five 1s (except in flags). After detecting a starting flag, the receiver monitors the bit stream; if a pattern of five 1s appears, the sixth bit is examined, and if it is 0 it is deleted, while if it is 1 and the next bit is 0 the combination is accepted as a flag. Similarly, byte stuffing is used for byte-oriented transmission: an escape sequence is prefixed to any byte that matches the flag, and two escape sequences are sent if the byte is itself an escape sequence.
Check Your Progress
3. Define Parallel.
4. What is called a start bit?
1.6 Summary
This lesson gave a brief introduction to networks and their protocols via communication between computers. You can now define communication and trace the types of communication. You have also learnt about the various types of transmission media.
The exchange of information, called communication, requires both a medium and the content of the information itself. Sometimes policing, shaping and spacing of the packet flows are necessary; this functionality is called traffic management. A communication channel can be simplex, in which only one party can transmit; full-duplex, in which both correspondents can transmit and receive simultaneously; or half-duplex, in which the correspondents alternate between transmitting and receiving states (such as a polite conversation in which the parties take turns).
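Before moving on, the bit stuffing rule from this unit can be made concrete with a short sketch. The function names are illustrative, and the receiver side assumes the flags have already been stripped from the stream:

```python
FLAG = "01111110"  # six consecutive 1s

def stuff(bits):
    """Transmitter: insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                out.append("0")  # stuffed bit
                run = 0
        else:
            run = 0
    return "".join(out)

def unstuff(bits):
    """Receiver: delete the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                i += 1  # skip the stuffed 0
                run = 0
        else:
            run = 0
        i += 1
    return "".join(out)

data = "01111110"  # payload that happens to look like the flag
sent = stuff(data)
print(sent)           # 011111010 -- can no longer be mistaken for a flag
print(unstuff(sent))  # 01111110 -- original payload restored
```

Because the transmitter never lets five 1s be followed by a 1 inside a frame, the flag pattern can only ever appear at frame boundaries.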


1.7 Keywords
Groupware: The latest network application; it allows user groups to share documents, schedules, databases, etc., e.g. Lotus Notes.
Bandwidth: Constrained by the transmitter and the nature of the transmission medium; expressed in cycles per second, or Hz.
Bit Stuffing: With a flag of 01111110 (six 1s), the transmitter always inserts an extra 0 bit after each occurrence of five 1s (except in flags); after detecting a starting flag, the receiver monitors the bit stream and removes the stuffed bits.
Ionospheric Propagation: Bounces off the Earth's ionospheric layer in the upper atmosphere; sometimes called double-hop propagation. It operates in the frequency range of 30 - 85 MHz.
1.9 Check Your Progress
Note: Use the space provided below for your answers. Compare your answers with those given at the end.
1. Define Communication.
…………………………………………………………………………………………………………………
2. What is consistency in networks?
…………………………………………………………………………………………………………………
3. Define Parallel.
…………………………………………………………………………………………………………………
4. What is called a start bit?
…………………………………………………………………………………………………………………
Answers to Check Your Progress
1. The exchange of information, called communication, requires both a medium and the content of the information itself.
2. Consistency is the predictability of response time and accuracy of data.

Users prefer consistent response times; they develop a feel for normal operating conditions, for example when the "normal" response time is 3 seconds. Accuracy of data determines whether the network is reliable: if a system loses data, users will not have confidence in the information and will often not use the system.


3. In parallel transmission, all bits of a byte are transferred simultaneously on separate parallel wires. Synchronization between the multiple bits is required, which becomes difficult over large distances. Parallel transfer gives large bandwidth but is expensive, and is practical only for devices close to each other.
4. Start bit: Prefixed to each byte and equal to 0, it ensures a transition from 1 to 0 at the onset of transmission of the byte. The leading edge of the start bit is used as a reference for generating clock pulses at the required sampling instants.


UNIT 2 TRANSMISSION MEDIA Structure 2.0 Introduction 2.1 Objectives 2.2 Definition 2.3 Transmission Media 2.3.1 Transmission Media Guided 2.3.2 Transmission Media Unguided 2.4 Data Encoding 2.4.1 Digital data to analog signal 2.4.2 Digital data to digital signals 2.4.3 Encoding Techniques 2.4.4 Analog data to digital signals 2.5 Data Link Control 2.5.1 General Overview 2.5.2 Configurations 2.5.3 Data Link Protocols 2.6 Summary 2.7 Keywords 2.8 Exercise and Questions 2.9 Check Your Progress 2.10 Further Reading




2.0 Introduction
This lesson describes the different types of transmission media. We will also look at data encoding, encoding techniques, and an overview of data link control and its protocols. Exchanging information, in the broad sense, is a strong human desire and even a need. Today, we almost take for granted how fast, plentifully and easily information can be exchanged, mainly due to the classical telephony system, the Internet (email, www) and mobile telephony (GSM).

2.1 Objectives
After reading this lesson you should be able to
- Understand the transmission media
- Explain how data are encoded
- Explain briefly the data link control

2.2 Definition
Transmission media are the means by which data signals travel from transmitter to receiver. Unguided media provide a means for the data signals to travel but nothing to guide them along a specific path; such signals are not bound to a cabling medium and are often called unbound media.

2.3 Transmission Media
There are 2 basic categories of transmission media:
- Guided Transmission Media
- Unguided Transmission Media

Guided transmission media use a "cabling" system that guides, or bounds, the data signals along a specific path; the data signals are bound by the cabling system. Guided media are also known as bound media. "Cabling" is meant here in a generic sense and should not be interpreted as copper wire cabling only. Unguided transmission media consist of a means for the data signals to travel but nothing to guide them along a specific path; the data signals are not bound to a cabling medium and as such are often called unbound media.

2.3.1 Transmission Media - Guided
There are 4 basic types of guided media:
- Open Wire
- Twisted Pair
- Coaxial Cable
- Optical Fibre


Media versus Bandwidth
The following table compares the usable bandwidth of the different guided transmission media:

Cable Type       Bandwidth
Open Wire        0 - 5 MHz
Twisted Pair     0 - 100 MHz
Coaxial Cable    0 - 600 MHz
Optical Fibre    0 - 10 GHz
Open Wire
Open Wire is traditionally used to describe electrical wire strung along telephone poles: a single wire strung between poles, with no shielding or protection from noise interference. We extend the traditional definition of open wire to include any data signal path without shielding or protection from noise interference; this can include multiconductor cables or single wires. This medium is susceptible to a large degree of noise and interference, and is consequently not acceptable for data transmission except for short distances (under 20 ft).

Twisted Pair
The wires in twisted pair cabling are twisted together in pairs. Each pair consists of a wire used for the +ve data signal and a wire used for the -ve data signal. Any noise that appears on one wire of the pair will also appear on the other wire. Because the wires carry opposite polarities, the signals are 180 degrees out of phase (180 degrees being the phasor definition of opposite polarity). When the noise appears on both wires, it cancels, or nulls, itself out at the receiving end. Twisted pair cables are most effectively used in systems that use a balanced-line method of transmission: polar line coding (Manchester encoding) as opposed to unipolar line coding (TTL logic).

Unshielded Twisted Pair The degree of reduction in noise interference is determined specifically by the number of turns per foot. Increasing the number of turns per foot reduces the noise interference. To further improve noise rejection, a foil or wire braid shield is woven around the twisted pairs. This "shield" can be woven around individual pairs or around a multi-pair conductor (several pairs).


Shielded Twisted Pair
Cables with a shield are called Shielded Twisted Pair, commonly abbreviated STP; cables without a shield are called Unshielded Twisted Pair, or UTP. Twisting the wires together results in a characteristic impedance for the cable; a typical impedance for UTP is 100 ohms, as used for Ethernet 10BaseT cable. UTP cable is used on Ethernet 10BaseT and can also be used with Token Ring; it uses the RJ line of connectors (RJ45, RJ11, etc.). STP is used with traditional Token Ring cabling or the IBM Cabling System (ICS); it requires a custom connector. IBM STP has a characteristic impedance of 150 ohms.
Coaxial Cable
Coaxial cable consists of 2 conductors. The inner conductor is held inside an insulator, with the other conductor woven around it to provide a shield. An insulating protective coating called a jacket covers the outer conductor.

Coax Cable

The outer shield protects the inner conductor from outside electrical signals. The distance between the outer conductor (shield) and the inner conductor, together with the type of material used to insulate the inner conductor, determines the cable properties, or impedance. Typical impedances for coaxial cables are 75 ohms for cable TV and 50 ohms for Ethernet Thinnet and Thicknet. The excellent control of the impedance characteristics of the cable allows higher data rates to be transferred than with twisted pair cable.

Noise Immunity: Unbalanced Line versus Balanced Line
When sending a signal down a guided medium, two wires are required: one wire carries the information and the other carries the reference. There are two methods used to transmit the signal on the wire:
a. Unbalanced lines
b. Balanced lines
Unbalanced Lines
Unbalanced lines consist of two wires: one carries the signal and the other is the reference line, called the common. The common wire is usually at ground potential, and it is often also used as a shield for noise immunity.


Unbalanced line

Unbalanced line using common as shield

Unbalanced lines have difficulty with noise immunity, as any EMI noise appears directly on the signal lead; because of this inherent problem, unbalanced lines are used only for short distances.
Balanced lines
Balanced lines do not have a common. The signal information is carried on both wires: one wire carries the signal, called the positive (+ve) signal, and the other carries a signal 180 degrees out of phase, called the negative (-ve) signal.

Balanced line - usually a twisted pair


Noise appears on both wires

Often the wires are twisted together in order to couple them tightly electrically. The goal is to have any noise that appears on one wire appear on the other wire as well. Because the signals are 180 degrees out of phase, the noise will cancel; here's the math:

1 volt noise spike (in red) appears on both wires

During normal operation, the receiver subtracts the two transmitted signals. At the point where the noise spikes appear in the preceding drawing, the +ve signal would normally have an amplitude of +2V and the -ve signal an amplitude of -2V. The receiver subtracts these two voltages electronically to obtain a +4V signal: Received voltage = (+ve Signal) - (-ve Signal) = +2V - (-2V) = +4V. But suppose a voltage spike with an amplitude of +1V appears on both wires (as indicated in red). This would be disastrous for an unbalanced line, since there the information is carried on one wire only; let's see what happens on the balanced line. The noise spike adds to the +ve signal for a total instantaneous voltage of +3V, and likewise adds to the -ve signal for a total instantaneous voltage of -1V. When subtracted, the resulting voltage is still +4V: Received voltage = (+ve Signal) - (-ve Signal) = +3V - (-1V) = +4V. The noise is effectively cancelled, which is why balanced lines are used for long distances. Unshielded twisted pair (UTP), used in Cat 5 and Cat 6 cable, utilizes this noise cancellation principle. Further noise immunity is obtained by adding a grounded outer shield, as in shielded twisted pair (STP), and by winding the twists closer together for tighter coupling of the wires. Optical Fibre Optical fibre consists of thin glass fibres that can carry information at frequencies in the visible light spectrum and beyond. The typical optical fibre consists of a very narrow strand of glass called the core. Around the core is a concentric layer of glass called the cladding. A typical core diameter is 62.5 microns (1 micron = 10^-6 meters); the cladding typically has a diameter of 125 microns. Coating the cladding is a protective layer of plastic called the jacket.
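Returning to the balanced-line arithmetic above, it can be checked with a short sketch (the names are illustrative; the 1 V spike is the common-mode noise from the drawing):

```python
def receive(pos, neg):
    """Differential receiver: output = (+ve signal) - (-ve signal)."""
    return pos - neg

signal = 2.0  # +ve wire at +2 V, -ve wire at -2 V
spike = 1.0   # common-mode noise couples onto BOTH wires equally

clean = receive(+signal, -signal)                  # +2 - (-2) = +4 V
noisy = receive(+signal + spike, -signal + spike)  # +3 - (-1) = +4 V
print(clean, noisy)  # 4.0 4.0 -- the common-mode spike cancels
```

Only noise that appears identically on both wires cancels this way, which is why tight coupling of the pair matters.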


An important characteristic of fibre optics is refraction: the characteristic of a material to either pass or reflect light. When light passes through a medium, it "bends" as it passes from one medium to the other. An example of this is when we look into a pond of water.

If the angle of incidence is small, the light rays are reflected and do not pass into the water. If the angle of incidence is great, light passes into the water but is bent, or refracted.

Optical fibres work on the principle that the core refracts the light and the cladding reflects the light. The core refracts the light and guides it along its path; the cladding reflects any light back into the core and stops light from escaping through it - it bounds the medium!

Optical Transmission Modes
There are 3 primary types of transmission modes using optical fibre:
a. Step Index
b. Graded Index
c. Single Mode
Step Index fibre has a large core, so the light rays tend to bounce around inside the core, reflecting off the cladding. This causes some rays to take a longer or shorter path through the core: some take the direct path with hardly any reflections, while others bounce back and forth taking a longer path. As a result, the light rays arrive at the receiver at different times, and the received signal is more spread out than the original. LED light sources are used. Typical core: 62.5 microns.

Step Index Mode

Graded Index fibre has a gradual change in the core's refractive index, which bends the light rays gradually back into the core path; this is represented by a curved reflective path in the attached drawing. The result is a better received signal than with step index. LED light sources are used. Typical core: 62.5 microns.

Graded Index Mode

Single Mode fibre has separate, distinct refractive indexes for the cladding and the core. The light ray passes through the core with relatively few reflections off the cladding. Single mode is used for a single source of light (one colour); it requires a laser, and the core is very small: 9 microns.

Single Mode


Comparison of Optical Fibres

The wavelength of the light sources is measured in nanometers (1 nanometer = one billionth of a meter). Optical links are usually characterized by wavelength rather than frequency.

Indoor cable specifications:
- LED (Light Emitting Diode) light source
- 3.5 dB/km attenuation (loses 3.5 dB of signal per kilometer)
- 850 nm light source wavelength
- Typically 62.5/125 (core diameter / cladding diameter, in microns)
- Multimode: can run many light sources

Outdoor cable specifications:
- Laser light source
- 1 dB/km attenuation (loses 1 dB of signal per kilometer)
- 1170 nm light source wavelength
- Mono mode (single mode)

Advantages of optical fibre:
- Noise immunity: immune to RFI (Radio Frequency Interference) and EMI (Electromagnetic Interference)
- Security: cannot tap into the cable
- Large capacity due to high bandwidth
- No corrosion
- Longer distances than copper wire
- Smaller and lighter than copper wire
- Faster transmission rate


Disadvantages of optical fibre:
- Physical vibration will show up as signal noise!
- Limited physical arc of cable: bend it too much and it will break!
- Difficult to splice

The cost of optical fibre is a trade-off between capacity and cost: at higher transmission capacities it is cheaper than copper, while at lower capacities it is more expensive.

2.3.2 Transmission Media - Unguided
Unguided transmission media are data signals that flow through the air; they are not guided along, or bound to, a channel. They are classified by the type of wave propagation.

RF Propagation
There are 3 types of RF (Radio Frequency) propagation:
1. Ground Wave,
2. Ionospheric, and
3. Line of Sight (LOS) propagation.
1. Ground wave propagation follows the curvature of the Earth. Ground waves have carrier frequencies up to 2 MHz; AM radio is an example of ground wave propagation.

2. Ionospheric propagation bounces off the Earth's ionospheric layer in the upper atmosphere; it is sometimes called double-hop propagation. It operates in the frequency range of 30 - 85 MHz. Because it depends on the Earth's ionosphere, it changes with the weather and the time of day. The signal bounces off the ionosphere and back to Earth. Ham radios operate in this range.


3. Line of sight propagation transmits exactly along the line of sight: the receiving station must be in view of the transmitting station. It is sometimes called space wave or tropospheric propagation. It is limited by the curvature of the Earth for ground-based stations (about 100 km, horizon to horizon). Reflected waves can cause problems. Examples of line of sight propagation are FM radio, microwave and satellite.

Radio Frequencies
The frequency spectrum operates from 0 Hz (DC) to gamma rays (10^19 Hz).

Name                               Frequency (Hertz)       Examples
Gamma Rays                         10^19 +
X-Rays                             10^17
Ultra-Violet Light                 7.5 x 10^15
Visible Light                      4.3 x 10^14
Infrared Light                     3 x 10^11
EHF - Extremely High Frequencies   30 GHz (Giga = 10^9)    Radar
SHF - Super High Frequencies       3 GHz                   Satellite and Microwaves
UHF - Ultra High Frequencies       300 MHz (Mega = 10^6)   UHF TV (Ch. 14-83)
VHF - Very High Frequencies        30 MHz                  FM / TV (Ch. 2-13)
HF - High Frequencies              3 MHz                   Short Wave Radio
MF - Medium Frequencies            300 kHz (kilo = 10^3)   AM Radio
LF - Low Frequencies               30 kHz                  Navigation
VLF - Very Low Frequencies         3 kHz                   Submarine Communications
VF - Voice Frequencies             300 Hz                  Audio
ELF - Extremely Low Frequencies    30 Hz                   Power Transmission

Radio frequencies are in the range of 300 kHz to 10 GHz. An emerging technology is the wireless LAN: some wireless LANs use radio frequencies to connect the workstations together, while others use infrared technology.
Microwave
Microwave transmission is line of sight transmission: the transmitting station must be in visible contact with the receiving station. This sets a limit on the distance between stations, depending on the local geography; typically the line of sight due to the Earth's curvature is only about 50 km to the horizon. Repeater stations must be placed so the data signal can hop, skip and jump across the country.
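The line-of-sight limit above follows from simple geometry. The sketch below uses the standard geometric horizon formula d = sqrt(2Rh), ignoring atmospheric refraction; the tower heights are illustrative, not from the text:

```python
import math

EARTH_RADIUS_M = 6.371e6  # mean Earth radius, metres

def horizon_km(antenna_height_m):
    """Geometric distance to the radio horizon for an antenna of the
    given height: d = sqrt(2 * R * h), ignoring refraction."""
    return math.sqrt(2 * EARTH_RADIUS_M * antenna_height_m) / 1000

def max_tower_spacing_km(h1_m, h2_m):
    """Two towers can 'see' each other up to the sum of their horizons."""
    return horizon_km(h1_m) + horizon_km(h2_m)

print(round(horizon_km(100), 1))                 # 35.7 km for a 100 m tower
print(round(max_tower_spacing_km(100, 100), 1))  # 71.4 km tower to tower
```

In practice atmospheric refraction bends the beam slightly around the curvature, extending these distances somewhat, which is consistent with the roughly 50 km figure quoted above for typical tower sites.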

Microwaves operate at high frequencies of 3 to 10 GHz. This allows them to carry large quantities of data due to the large bandwidth.

Advantages:
a. They require no right-of-way acquisition between towers.
b. They can carry high quantities of information due to their high operating frequencies.
c. Low-cost land purchase: each tower occupies only a small area.
d. High-frequency/short-wavelength signals require small antennas.

Disadvantages:
a. Attenuation by solid objects: birds, rain, snow and fog.
b. Reflection from flat surfaces like water and metal.
c. Diffraction (splitting) around solid objects.
d. Refraction by the atmosphere, causing the beam to be projected away from the receiver.

Satellite

Satellites are transponders set in a geostationary orbit directly over the equator. A transponder is a unit that receives on one frequency and retransmits on another. The geostationary orbit is 36,000 km above the Earth's surface. At this altitude, the gravitational pull of the Earth and the centrifugal force of the Earth's rotation are balanced and cancel each other out. Centrifugal force is the rotational force on the satellite that tends to fling it out into space.
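The 36,000 km figure can be derived from exactly the balance described: setting gravitational acceleration GM/r^2 equal to the centripetal requirement (2*pi/T)^2 * r for one revolution per sidereal day gives r = (GM T^2 / 4 pi^2)^(1/3). A quick numerical check, using standard values for the constants:

```python
import math

def geostationary_altitude_km() -> float:
    """Altitude where gravity supplies exactly the centripetal force
    for one revolution per sidereal day: GM/r^2 = (2*pi/T)^2 * r."""
    GM = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
    T = 86_164.1           # sidereal day, seconds
    R_earth = 6_378_000.0  # equatorial radius, metres
    r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)  # orbital radius
    return (r - R_earth) / 1000.0

if __name__ == "__main__":
    print(f"geostationary altitude ~ {geostationary_altitude_km():,.0f} km")
```

This yields an orbital radius of about 42,164 km, i.e. roughly 35,800 km above the surface, matching the ~36,000 km quoted above.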


The uplink is the transmitter of data to the satellite; the downlink is the receiver of data. Uplinks and downlinks are also called Earth stations because they are located on the Earth. The footprint is the "shadow" that the satellite can transmit to, the shadow being the area that can receive the satellite's transmitted signal.

Iridium Telecom System

The Iridium telecom system was an ambitious satellite project that was to be the largest private aerospace project. It was planned to be a mobile telecom system to compete with cellular phones. It relies on satellites in Low Earth Orbit (LEO). The satellites were to orbit at an altitude of 900 - 10,000 km in a polar, non-geostationary orbit. The plan was to use 77 satellites to provide 100% coverage of the Earth at any moment; 77 is the atomic number of iridium. The user's handset was to require less power and be cheaper than cellular phones. Unfortunately, it took so long to design and launch the satellites that cell phone technology surpassed the Iridium project in both size and power requirements. The Iridium phones ended up big and bulky, and were very expensive compared to cell phones.


They launched 66 satellites during 1998 and were hoping to have 1.5 million subscribers by the end of the decade. Unfortunately, they found that the cell phone market had captured the majority of the world, and they were left with expensive and large mobile phone systems that were only practical in areas without cell phone coverage. The Iridium project became financially unstable and went bankrupt in 1999. In 2001 there was talk of crashing the satellites back to Earth, because it was costing on the order of $1 million per day to keep them up. The original company was purchased by a consortium of private investors under Iridium Satellite LLC, and the service was re-established in 2001.

Check Your Progress:
1. List out the types of transmission media.
2. What are private and public networks?
3. Define performance.
4. What is meant by a communication channel?

2.4 Data Encoding

2.4.1 Digital data to analog signals

A modem (modulator-demodulator) converts digital data to an analog signal. There are 3 ways to modulate digital data onto an analog carrier signal.

1. Amplitude shift keying (ASK): a form of modulation which represents digital data as variations in the amplitude of a carrier wave. Two different amplitudes of the carrier frequency represent '0' and '1'.


2. Frequency shift keying (FSK): In frequency shift keying, the change in frequency defines different digits. Two different frequencies near the carrier frequency represent '0' and '1'.

3. Phase shift keying (PSK): The phase of the carrier is discretely varied relative either to a reference phase or to the phase of the immediately preceding signal element, in accordance with the data being transmitted. The phase of the carrier signal is shifted to represent '0' and '1'.
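The three keying schemes can be sketched numerically by generating a few carrier samples per bit. The carrier frequency, amplitudes and sample counts below are illustrative assumptions, not values from the text, and the FSK sketch ignores phase continuity for simplicity:

```python
import math

def modulate(bits, scheme, fc=4.0, samples_per_bit=32):
    """Return carrier samples for a bit string using ASK, FSK or PSK.

    fc is the carrier frequency in cycles per bit interval; all numeric
    values here are illustrative.
    """
    out = []
    for i, bit in enumerate(bits):
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / samples_per_bit  # time in bit units
            if scheme == "ASK":    # two amplitudes represent 0 and 1
                amp = 1.0 if bit == "1" else 0.3
                out.append(amp * math.sin(2 * math.pi * fc * t))
            elif scheme == "FSK":  # two frequencies near the carrier
                f = fc + (1.0 if bit == "1" else -1.0)
                out.append(math.sin(2 * math.pi * f * t))
            elif scheme == "PSK":  # phase shifted by 180 degrees for a 1
                phase = math.pi if bit == "1" else 0.0
                out.append(math.sin(2 * math.pi * fc * t + phase))
    return out

signal = modulate("1011", "PSK")
```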

2.4.2 Digital data to digital signals

A digital signal is a sequence of discrete, discontinuous voltage pulses. Each pulse is a signal element. The encoding scheme is an important factor in how successfully the receiver interprets the incoming signal.

2.4.3 Encoding Techniques

Following are several ways to map data bits to signal elements.

Non return to zero (NRZ): NRZ codes share the property that the voltage level is constant during a bit interval: high level voltage = bit 1 and low level voltage = bit 0. A problem arises when there is a long sequence of 0s or 1s and the voltage level is maintained at the same value for a long time. This creates a problem at the receiving end because clock synchronization is lost due to the lack of transitions, and hence it is difficult to determine the exact number of 0s or 1s in the sequence.


The two variations are as follows:

1. NRZ-Level: In NRZ-L encoding, the polarity of the signal changes only when the incoming signal changes from a 1 to a 0 or from a 0 to a 1. NRZ-L looks just like NRZ except for the first input data bit, because NRZ does not consider the first data bit to be a polarity change, whereas NRZ-L does.

2. NRZ-Inverted: A transition at the beginning of the bit interval = bit 1 and no transition at the beginning of the bit interval = bit 0 (or vice versa). This technique is known as differential encoding. NRZ-I has an advantage over NRZ-L. Consider the situation when two data wires are wrongly connected in each other's place. In NRZ-L all bit sequences will be reversed (because the voltage levels get swapped), whereas in NRZ-I, since bits are recognized by transitions, the bits will be correctly interpreted. A disadvantage of NRZ codes is that a string of 0s or 1s will prevent synchronization of the transmitter clock with the receiver clock, so a separate clock line needs to be provided.
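A minimal sketch of both variations, mapping bits to +1/-1 signal levels (the particular level values and function names are our own choices for illustration):

```python
def nrz_l(bits):
    """NRZ-L: the level itself encodes the bit (+1 for 1, -1 for 0)."""
    return [+1 if b == "1" else -1 for b in bits]

def nrz_i(bits, start=-1):
    """NRZ-I (differential encoding): a transition at the start of the
    interval encodes a 1; no transition encodes a 0."""
    level, out = start, []
    for b in bits:
        if b == "1":
            level = -level  # transition marks a 1
        out.append(level)
    return out

def nrz_i_decode(levels, start=-1):
    """Recover bits from NRZ-I levels by looking for transitions only."""
    out, prev = [], start
    for lv in levels:
        out.append("1" if lv != prev else "0")
        prev = lv
    return "".join(out)

# Swapping the two wires negates every level: NRZ-L bits would all flip,
# but NRZ-I still decodes correctly because only transitions carry data.
```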

Biphase encoding: It has the following characteristics:
1. The modulation rate is twice that of NRZ and the bandwidth is correspondingly greater. (Modulation rate is the rate at which the signal level is changed.)
2. Because there is a predictable transition during each bit time, the receiver can synchronize on that transition, i.e. the clock is extracted from the signal itself.
3. Since there can be a transition at the beginning as well as in the middle of the bit interval, the clock operates at twice the data transfer rate.

Types of encoding:

- Biphase-Manchester: Transition from high to low in the middle of the interval = 1, and transition from low to high in the middle of the interval = 0.
- Differential Manchester: Always a transition in the middle of the interval. No transition at the beginning of the interval = 1, and a transition at the beginning of the interval = 0.
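The two variants above can be sketched by emitting two half-bit levels per data bit (level values and function names are our own, following the conventions stated in the text):

```python
def manchester(bits):
    """Manchester, as defined above: high-to-low mid-bit = 1,
    low-to-high mid-bit = 0. Each bit becomes two half-intervals."""
    out = []
    for b in bits:
        out += [+1, -1] if b == "1" else [-1, +1]
    return out

def differential_manchester(bits, level=+1):
    """Differential Manchester: always a mid-bit transition (for
    clocking); a transition at the START of the interval encodes 0."""
    out = []
    for b in bits:
        if b == "0":
            level = -level      # start-of-interval transition marks a 0
        out += [level, -level]  # guaranteed mid-bit transition
        level = -level          # carry the second-half level forward
    return out
```

Note that Manchester polarity conventions vary between references; the code follows the convention given in this text.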


4B/5B Encoding: In the Manchester encoding scheme, there is a transition after every bit. This means we need clocks with double the speed to send the same amount of data as in NRZ encodings; in other words, only 50% of the capacity carries data. This performance factor can be significantly improved if we use a better encoding scheme, one that has a transition after a fixed number of bits instead of after every other bit. If we have a transition after every four bits, then we will be sending 80% data of actual capacity, a significant improvement in performance.

This scheme is known as 4B/5B. Here we convert 4 bits to 5 bits, ensuring at least one transition in them. The basic idea is that each 5-bit code selected must have:

- at most one leading 0
- no more than two trailing 0s

Thus it is ensured that we can never have more than three consecutive 0s. These 5-bit codes are then transmitted using NRZI coding, so the problem of consecutive 1s is solved. The exact transformation is as follows:

4-bit Data   5-bit Code      4-bit Data   5-bit Code
0000         11110           1000         10010
0001         01001           1001         10011
0010         10100           1010         10110
0011         10101           1011         10111
0100         01010           1100         11010
0101         01011           1101         11011
0110         01110           1110         11100
0111         01111           1111         11101

Of the remaining 16 codes, 7 are invalid and the others are used to send control information like line idle (11111), line dead (00000), halt (00100), etc. There are other variants of this scheme, viz. 5B/6B, 8B/10B, etc., which have self-suggesting names.
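The transformation table can be implemented as a straightforward lookup, and the zero-run property claimed above checked mechanically (a sketch; the names are our own):

```python
# 4B/5B code table as given above. Every code has at most one leading
# zero and at most two trailing zeros, so no concatenation of codes can
# contain more than three consecutive zeros.
FOUR_B_FIVE_B = {
    "0000": "11110", "0001": "01001", "0010": "10100", "0011": "10101",
    "0100": "01010", "0101": "01011", "0110": "01110", "0111": "01111",
    "1000": "10010", "1001": "10011", "1010": "10110", "1011": "10111",
    "1100": "11010", "1101": "11011", "1110": "11100", "1111": "11101",
}

def encode_4b5b(bits):
    """Encode a bit string (length divisible by 4) into 5-bit code groups."""
    assert len(bits) % 4 == 0
    return "".join(FOUR_B_FIVE_B[bits[i:i + 4]] for i in range(0, len(bits), 4))

# Verify the property that guarantees transitions under NRZI:
for code in FOUR_B_FIVE_B.values():
    assert not code.startswith("00") and not code.endswith("000")
```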

8B/6T Encoding: In the above schemes, we used two or three voltage levels per signal element. But we may use three voltage levels and group several signal elements together so that more than one bit can be sent per signal element. If 8 bits are sent as six three-level (ternary) signal elements, the scheme is called 8B/6T. Clearly here we have 729 (3^6) combinations for the signal and 256 (2^8) combinations for the bits.

Bipolar AMI: Here we have 3 voltage levels: middle, upper, lower.
o Representation 1: middle level = 0; upper and lower levels = 1, such that successive 1s are represented alternately on the upper and lower levels.
o Representation 2 (pseudoternary): middle level = 1; upper and lower levels = 0.
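Representation 1 (bipolar AMI) can be sketched in a few lines; the +1/0/-1 level values are our own illustrative choice for upper/middle/lower:

```python
def bipolar_ami(bits):
    """Bipolar AMI: 0 -> middle (zero) level; successive 1s alternate
    between the upper (+1) and lower (-1) levels."""
    last, out = -1, []
    for b in bits:
        if b == "1":
            last = -last  # alternate the mark level on each 1
            out.append(last)
        else:
            out.append(0)
    return out

# Pseudoternary (Representation 2) is the same idea with the roles of
# 0 and 1 swapped.
```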

2.4.4 Analog data to digital signal: This process is called digitization. The sampling frequency must be at least twice the highest frequency present in the signal so that it may be fairly regenerated. Quantization: the maximum and minimum values of amplitude in the sample are noted. Depending on the number of bits (say n) we use, we divide the interval (min, max) into 2^n levels. The amplitude is then approximated to the nearest level by an n-bit integer. The digital signal thus consists of blocks of n bits. On


reception the process is reversed to produce the analog signal. A lot of data can be lost, however, if too few bits are used or the sampling frequency is not high enough.

- Pulse code modulation (PCM): Here the intervals are equally spaced. 8-bit PCM uses 256 different levels of amplitude. In non-linear encoding, the levels may be unequally spaced.
- Delta modulation (DM): Since successive samples do not differ very much, we send the differences between the previous and present sample. It requires fewer bits than PCM.
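Uniform (linear) PCM as described above can be sketched as follows; the tone frequency, sampling rate and bit depth are illustrative assumptions:

```python
import math

def pcm_encode(samples, n_bits, lo=-1.0, hi=1.0):
    """Uniform PCM: map each sample to the nearest of 2**n_bits equally
    spaced levels between lo and hi."""
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    codes = []
    for s in samples:
        code = int((s - lo) / step)
        codes.append(min(max(code, 0), levels - 1))  # clamp to valid range
    return codes

def pcm_decode(codes, n_bits, lo=-1.0, hi=1.0):
    """Reconstruct each code as the midpoint of its quantization interval."""
    step = (hi - lo) / 2 ** n_bits
    return [lo + (c + 0.5) * step for c in codes]

# Sample a 1 kHz tone at 8 kHz (well above the 2 kHz Nyquist minimum)
samples = [math.sin(2 * math.pi * 1000 * n / 8000) for n in range(16)]
codes = pcm_encode(samples, 8)   # 8-bit PCM: 256 levels
restored = pcm_decode(codes, 8)
```

The reconstruction error per sample is bounded by one quantization step, which shrinks as more bits are used.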

2.5 Data Link Control

2.5.1 General Overview

Data link protocols assume that the connected nodes have some way to transfer frames of data from one to the other, and that the receiver can detect errors when they occur in a frame. These topics were considered in earlier pages. Here we consider the problem of managing the shared link between two or more nodes. First we consider possible configurations, then we consider protocols used to assure delivery of error-free frames in the correct order.

2.5.2 Configurations

1. Topology

A link may be:
- point-to-point, or
- multipoint.

Point-to-point links have exactly two nodes attached to them, and so do not require addressing (who is sending and who is receiving is implicit in the link used). Multipoint links require addressing: if there is a distinguished node (a master or primary station), and all communication that takes place has it as either the source or the destination, then only the address of the undistinguished (slave or secondary) node need be included. Otherwise, both the source and the destination address must be present. Usually, address recognition at this level is done in hardware. The interface card may not even capture the frame if the address does not match. If the address does match, the interface buffers the frame and sends an interrupt to (or is polled by) the main CPU on the node, which then sets up a DMA into main memory where the frame is processed further.

2. Duplexity

The directionality and temporal constraints on use of the medium are captured by the notion of duplexity:

1. Simplex - strictly one-way communication.
2. Half-duplex - communication in one direction at a time.
3. Full-duplex - simultaneous communication in both directions.

Simplex links are often used as parts of a full-duplex connection, and may be used for true simplex communication in sensor networks, for monitoring, and in broadcast radio and television. There is also satellite-distributed data broadcast, for distribution of price lists, weather information, etc., which is simplex. Half-duplex links usually require that echo suppressors, directional


amplifiers and other electronics change state when the direction of transmission changes on the link. This involves some transition time, which is to be avoided if possible.

3. Line Discipline

Which node is eligible to transmit, and when, is the subject of line discipline. This is our main topic at the link and medium access layer, since this is where the protocols come into play.

Peer (balanced, symmetric) protocols: There is no distinguished station to control access to the shared medium, so stations must function in a distributed manner.

1. Contention-based - two or more stations may try to transmit at the same time, causing neither to succeed.
2. Collision-avoidance - still contention-based, but there is an attempt to limit the likelihood of collisions.
3. Reservation methods - fixed, dynamic, and token-passing schemes may be used to prevent collisions.

4. Master-Slave (primary-secondary, unbalanced, asymmetric) protocols: A primary station directs the secondary station(s), and usually all communication between secondaries must go through the primary. The primary uses polling to determine whether a secondary has data to send, and selection to determine whether a secondary is ready to receive data.

(i) Basic Select: The primary sends a SEL to the secondary station to which it wants to send data. The selected station either ACKs if it can receive the data or NAKs if it cannot. If it can, the primary sends it the data, and the secondary ACKs the frame if it was good.

(ii) Fast Select: The primary sends a SEL and the data frame without waiting for the secondary to ACK the SEL first. If the secondary was ready, it ACKs as usual if the frame was good. Otherwise, it NAKs the frame (if it was not ready or if the frame was damaged).

(iii) Basic Poll: The primary polls a secondary station by sending it a POLL frame. If the secondary has data, it responds with a data frame; otherwise it NAKs the POLL. The primary ACKs the data frame if it was received correctly, or NAKs it otherwise.
(iv) Roll-call Poll: The primary polls each station individually, waiting for a NAK or data from each one before polling the next. The POLL for the next station can be pipelined following the ACK for the current one on multidrop lines.


(v) Hub Poll: The primary polls the farthest station first. Each secondary station, when it receives a poll, sends data if it has any, then polls the next nearest station. The nearest station polls the primary, which informs it that all the stations have been polled. The primary then ACKs all the data frames it received, and starts the next poll. This is most useful in half-duplex environments, since the cost of changing the direction of the line is paid only twice per cycle (the minimum possible) rather than twice per station. The line must be multidrop, of course. Pipelining accounts for the gain here.

5. Hybrid: A hybrid scheme may use a primary station to set up communication resources for a pair (or a set) of secondaries, using one method to allow a secondary to request a reservation and then using reservation techniques for the secondaries to carry out their conversation. These are used in some LANs and especially in satellite networks.

2.5.3 Data Link Protocols

1. General Model

The data link protocol must provide for:

- Link management - a communicating pair must be able to
  1. establish a link,
  2. transfer data, and
  3. terminate the link.
- Pacing - the receiver must not be sent frames faster than it can handle them.
- Error handling - the receiver must be able to detect damaged frames and obtain corrected ones.
- Sequencing - the receiver must be able to detect missing and duplicate frames.

2. Data Link Protocols - Description, Assumptions & Requirements

i. Utopia - a hopelessly optimistic protocol: The sender sends frames one after the other, without waiting for any ACKs. There is no way to recover from lost or damaged frames.

1. Simplex channel
2. Error-free channel
3. Arbitrarily fast receiver (or arbitrarily many buffers)

ii. Stop & Wait: The sender sends a frame, then waits for the receiver to ACK or NAK. The protocol hangs if a data frame, an ACK or a NAK is lost.

- Half-duplex needed (or full duplex)
- Error-free channel
- Pacing


iii. PAR (Positive ACK & Retransmit) - no sequence numbers for ACKs: The sender transmits numbered frames to the receiver, puts each frame on the retransmission queue, starts a timer, and waits for an ACK or a NAK. If none is received before the timer expires, then the frame on the retransmission queue is retransmitted. The receiver can distinguish between new frames and retransmitted duplicates by the sequence number. Without numbered ACKs, however, the sender can misinterpret an ACK and fail to retransmit a lost or damaged packet.

- Full-duplex needed
- Errors tolerated (somewhat)
- Pacing
- Timer required for retransmission
- Sequence numbers needed to distinguish duplicate frames

iv. Sliding Window Protocols in general: Sliding window alone allows for multiple outstanding unacknowledged packets, so that the protocol performance can be improved. With automatic repeat request (ARQ), it handles errors well. The receiver may be able to adjust the sender's window size for flow control.

The sender has a maximum send window size, N, that is used to define the greatest difference (modulo max_seq_#) between the lowest-numbered unacknowledged frame (min) and the highest-numbered frame that can be sent (max). The sender keeps track of these two quantities, plus the next sequence number to use for a new data frame, i. If i > max then the sender has to wait until max is incremented to send frame i. The sender increments both max and min (modulo max_seq_#) when an ACK for min is received.

The receiver also has a window, the receive window, that defines the frames it will accept. At first, rmin = i, and rmax - rmin is the receive window size. Whenever a frame is received, the receiver acknowledges it (whether or not it is accepted). If the sequence number of the received frame j is such that rmin <= j <= rmax, and it has not already been received, then the receiver buffers it. If j = rmin, then the receiver increments both rmax and rmin (modulo max_seq_#) and passes the frame to the next higher layer. If frame j+1 was already received and buffered, then it too is passed on and the receive window shifted. This continues until frame rmin is not in a receive buffer.

- Full-duplex needed
- Errors tolerated
- Pacing provided by window control
- Timers required for retransmission
- Sequence numbers needed to distinguish duplicate frames
- Sequence numbers needed for ACKs as well
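The send-side bookkeeping described above can be sketched as follows. MAX_SEQ and N are illustrative values, and timers, retransmission and the receive window are omitted for brevity:

```python
MAX_SEQ = 8  # sequence numbers 0..7 (illustrative)
N = 4        # send window size (illustrative)

class Sender:
    """Minimal send-window bookkeeping: base is the lowest
    unacknowledged frame (min), next_seq the next new frame (i)."""
    def __init__(self):
        self.base = 0
        self.next_seq = 0

    def can_send(self):
        # window occupancy, computed modulo MAX_SEQ
        return (self.next_seq - self.base) % MAX_SEQ < N

    def send(self):
        assert self.can_send()
        seq, self.next_seq = self.next_seq, (self.next_seq + 1) % MAX_SEQ
        return seq

    def on_ack(self, ack):
        # cumulative ACK: receiver names the next sequence number expected
        while self.base != ack:
            self.base = (self.base + 1) % MAX_SEQ

s = Sender()
sent = [s.send() for _ in range(N)]  # fill the window: frames 0..3
assert not s.can_send()              # window full, sender must wait
s.on_ack(2)                          # frames 0 and 1 acknowledged
assert s.can_send()                  # window slides, sending resumes
```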

v. Stop & Wait ARQ (one-bit SW): A one-bit sequence number is used, with the ACK naming the next sequence number it expects.

vi. Go-Back-N ARQ (ACK max, no NAK): The receiver typically uses cumulative ACKs, naming the next sequence number it expects. NAKs only expedite retransmission. The receiver need only have one buffer, provided it can always transfer frames to the host.


vii. Selective-repeat ARQ (ACK or NAK each frame): The SR (selective reject/report/retransmit) ARQ requires the receiver to have as many buffers as the sender, and to buffer frames received out of order, provided they are in the receive window. Cumulative ACKs will not inform the sender of frames correctly received out of order until the missing frame is received, so explicit ACKs are often used. NAKs are very useful with this protocol.

3. Data Link Protocols - Analysis of Efficiency and Correctness

An important parameter to consider is the number of frames that can be in transit from the sender to the receiver at one time, a:

a = (prop delay)/(frame Tx time) = (D/V)/(L/R) = DR/VL,

where D = distance (meters), V = propagation velocity (m/s), L = frame size (bits), and R = data rate (bps). This normalizes the propagation delay to the frame transmission time.

Performance has three main components (L = total frame length, H = control and framing bits, N = number of data frames sent in a cycle, T_d is the time spent in a cycle sending data frames, T_c is the total cycle time, P = frame error rate, N_r = average number of retransmissions required per data frame, so N_r + 1 = average number of frames sent per successful data frame):

- a factor due to framing and control bits added around the data in a data frame (overhead bits): U_f = (L-H)/L,
- a factor due to the protocol itself (how much time the sender has to spend in order to send a data frame): U_p = N(T_d)/(T_c), and
- a factor due to errors (how many data frames have to be sent on the average in order to deliver one without errors): U_e = 1/(N_r+1). Usually, U_e = 1-P.

The overall efficiency of the system is the product of these component efficiencies: U = U_f U_p U_e.
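The product of the three factors can be evaluated directly; the frame sizes, error rate and protocol factor in this example are illustrative assumptions, and the final assertion numerically checks that the expected transmission count Sum_{i>=1} i P^(i-1) (1-P) does equal 1/(1-P):

```python
def u_framing(L, H):
    """U_f = (L - H)/L: fraction of each frame that is data."""
    return (L - H) / L

def u_errors(P):
    """U_e = 1/(N_r + 1); with independent errors N_r + 1 = 1/(1 - P)."""
    return 1 - P

def overall(u_f, u_p, u_e):
    """U = U_f * U_p * U_e."""
    return u_f * u_p * u_e

# 1000-bit frames with 100 overhead bits, a protocol factor of 0.5,
# and a 1% frame error rate (all values are illustrative)
U = overall(u_framing(1000, 100), 0.5, u_errors(0.01))

# Numeric check of the expected-transmissions identity for P = 0.1:
P = 0.1
expected = sum(i * P**(i - 1) * (1 - P) for i in range(1, 200))
assert abs(expected - 1 / (1 - P)) < 1e-9
```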

i. Utopia - maximum utilization: U_p = 1.

ii. Stop & Wait - pacing: U_p = 1/(1 + 2a)

iii. PAR (Positive ACK & Retransmit) - no sequence numbers for ACKs

Note: ln(z) = Sum (n=1 to inf) [ (-1)^(n-1) (z-1)^n / n ], |z-1| < 1, so

ln(1-x) = Sum (n=1 to inf) [ (-1)^(n-1) (-x)^n / n ], |x| < 1
        = Sum (n=1 to inf) [ (-1)^(2n-1) x^n / n ]
        = Sum (n=1 to inf) [ -x^n / n ]
        = - Sum (n=1 to inf) [ x^n / n ], |x| < 1,

which for 0 < x << 1 is approximately -x.
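The parameter a and the resulting protocol factors can be computed directly from the definitions; the link parameters in the example are illustrative assumptions (the sliding-window form U_p = min{1, N/(1+2a)} is the one analyzed below for case iv):

```python
def a_parameter(D, R, L, V=2e8):
    """a = (propagation delay)/(frame transmission time) = (D/V)/(L/R)."""
    return (D / V) / (L / R)

def u_stop_and_wait(a):
    """Stop & Wait protocol factor: U_p = 1/(1 + 2a)."""
    return 1 / (1 + 2 * a)

def u_sliding_window(N, a):
    """Sliding window protocol factor: U_p = min{1, N/(1 + 2a)}."""
    return min(1.0, N / (1 + 2 * a))

# Illustrative link: 10 km of cable, 10 Mbps, 1000-bit frames, V = 2e8 m/s.
a = a_parameter(D=10_000, R=10_000_000, L=1_000)
# a = (5e-5 s)/(1e-4 s) = 0.5, so Stop & Wait gives U_p = 1/2,
# while a window of N >= 2 already achieves U_p = 1 on this link.
```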


iv. Sliding Window Protocols in general: U_p = min{1, N/(1+2a)}, where N = send window size.

1. Effect of window size: As N increases, so does U_p, up to a point (when N = 1+2a).
2. Limits on window size: There is little point in having a receive window larger than the send window, and the sum of the maximum send and receive windows must not exceed the total number of sequence numbers available.

v. Stop & Wait ARQ (one-bit SW): Like PAR, but it has sequence numbers on the ACKs.

vi. Go-Back-N (GBN) ARQ (ACK max, no NAK)

vii. Selective-repeat (or selective-reject) (SR) ARQ (ACK or NAK each frame)

viii. Effect of errors: To estimate the effect of errors, the decrease in utilization due to retransmission of damaged frames must be considered. If P = frame error rate = Prob(frame is damaged), then the expected number of transmissions per success, assuming that an error only causes one frame to be retransmitted, is:

N_r + 1 = Sum (i=1 to inf) [ i P^(i-1) (1-P) ] = 1/(1-P).

This is a weighted infinite sum, with i representing the number of transmissions needed for success, and P^(i-1) (1-P) being the probability that exactly i attempts are required (assuming that the errors are independent): P^(i-1) is the probability that the first i-1 frames are damaged, and (1-P) is the probability that the i-th one is not. As long as one damaged frame does not cause other frames to be retransmitted (as it does in GBN ARQ), this expected number of transmissions per successful transmission appears as a multiplicative factor in the utilization equation.

ix. Optimization: The error rate, and hence the expected number of retransmissions, increases as the frame length increases, which causes U_e to decrease. On the other hand, U_f improves as the frame length increases, since more data is packed into a frame with a fixed amount of overhead. Likewise, increasing L will decrease a, so the protocol utilization rate may also improve.

Balancing these conflicting trends to obtain the best net overall utilization is an optimization problem. A parameterized formula for the overall utilization taking all these factors into account may be differentiated with respect to L, and when this derivative is set to zero and the equation solved, a


local extremum will be found. By checking the utilization at this value of L, as well as the limit as L -> infinity, the globally optimum value of L may be found, as a function of bit error rate, framing overhead, and protocol.

4. Data Link Protocols - Comments on piggybacking, NAKs and ACKs

Piggybacking: For piggybacking, the frame format includes a field for an acknowledgement in each data frame. This avoids the high overhead of sending a control frame for no purpose but to acknowledge a received data frame, and also allows for some redundancy in acknowledgements. If the traffic is not about equal between the stations, then some form of cumulative ACK is needed; otherwise explicit ACK frames will have to be sent anyway. The receiver may not have a data frame on which to piggyback an ACK, and so may have to wait for a data frame to arrive in order to acknowledge the frame it has received. It can't wait too long, or else the sender's retransmission timer will expire and the frame will be resent, wasting resources unnecessarily. Thus the receiver should have a piggyback timer that it sets when a frame is received and that it turns off when a piggybacked ACK is sent. If the piggyback timer expires before a return data frame arrives that can piggyback the ACK, then an ACK control frame is sent.

NAKs: All that NAKs do is allow the receiver to signal the sender that a frame has been damaged before the sender's retransmission timer goes off. In this sense, a NAK is just an optimization for early retransmission.

Cumulative ACKs: Most common, and especially useful for GBN ARQ, are ACKs that indicate that all frames up to some sequence number have been received correctly. Usually these cumulative ACKs indicate the next sequence number that the receiver expects, which means that all the sequence numbers before that one have been received in good shape. Since the GBN receiver does not buffer out-of-sequence frames, this is the only number of interest to the sender anyway.
Explicit ACKs: If an ACK means only that the sequence number mentioned has been received, then this is an explicit ACK. When using Selective Repeat ARQ, this allows the sender to determine that a frame has been received out of order, and to retransmit the missing frame. Unfortunately, it also means that each frame must have its own ACK, so piggybacking does not work as well if there is asymmetric traffic. It also means that there is little or no redundancy, so a damaged ACK will cause the sender to retransmit a frame that has been received in good condition. Explicit ACKs serve no purpose in GBN ARQ.

Block ACKs: On links with very large delay x data rate products (large frame storage capacity) and high error rates, block ACKs are sometimes used. The block ACK has two fields: one that indicates the prefix of the sequence numbers in the block, and the other a bitmap of the sequence numbers in the block. Each position in the bitmap corresponds to an ACK or a NAK for a sequence number. For example,

|SEQ #|0<--- BITMAP -->F|
| 20A |1011111101110000 |

indicates that sequence numbers 20A1, 20A8, 20AC, 20AD, 20AE, and 20AF are missing from the range 20A0-20AF. Block ACKs combine the redundancy of cumulative ACKs with the specificity of explicit ACKs and NAKs, at a cost of extra overhead in the ACK field (for the example, assuming that the digits are all in hexadecimal, there are 12 bits of prefix and 16 bits of bitmap for a 16-bit sequence number, using 12 more bits than either standard ACK type would use).
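Decoding a block ACK is mechanical; this sketch (the function name is our own) reproduces the 20A example above:

```python
def block_ack_missing(prefix, bitmap):
    """List the sequence numbers NAKed by a block ACK.

    prefix holds the high bits common to the block; bitmap position i
    is '1' for ACK and '0' for NAK of sequence number prefix*16 + i.
    """
    base = prefix << 4  # 16 sequence numbers per block
    return [base + i for i, bit in enumerate(bitmap) if bit == "0"]

missing = block_ack_missing(0x20A, "1011111101110000")
# reproduces the example: 20A1, 20A8, 20AC, 20AD, 20AE, 20AF
assert missing == [0x20A1, 0x20A8, 0x20AC, 0x20AD, 0x20AE, 0x20AF]
```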


Check Your Progress:
5. List out the three considerations for the transmission engineer.
6. Write the four basic types of Guided Media.
7. Write short notes on satellites.

2.6 Summary

From this lesson we understood the different types of transmission media. Data encoding and data link control were also discussed. Data link protocols assume that the connected nodes have some way to transfer frames of data from one to the other, and that the receiver can detect errors when they occur in a frame.

2.7 Keywords

The Iridium telecom system: an ambitious satellite project that was to be the largest private aerospace project. It was planned to be a mobile telecom system to compete with cellular phones.

Duplexity: the directionality and temporal constraints on use of the medium are captured by the notion of duplexity.

Piggybacking: for piggybacking, the frame format includes a field for an acknowledgement in each data frame.

Coaxial cable: consists of 2 conductors. The inner conductor is held inside an insulator with the other conductor woven around it, providing a shield.

2.9 Check Your Progress

Note: Use the space provided below for your answers. Compare your answers with those given at the end.

1. List out the types of transmission media.
…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

2. What are private and public networks?
…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

3. Define performance.
…………………………………………………………………………………………………………………


4. What is meant by a communication channel?
…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

5. List out the three considerations for the transmission engineer.
…………………………………………………………………………………………………………………

6. Write the four basic types of Guided Media.
…………………………………………………………………………………………………………………

7. Write short notes on satellites.
…………………………………………………………………………………………………………………

Answers to Check Your Progress

1. There are 2 basic categories of Transmission Media:
   o Guided Transmission Media
   o Unguided Transmission Media

2. Most networks are for the private use of the organizations to which they belong; these are called private networks. Networks maintained by banks, insurance companies, airlines, hospitals, and most other businesses are of this nature. Public networks, on the other hand, are generally accessible to the average user, but may require registration and payment of connection fees. The Internet is the most widely known example of a public network.

3. Performance is defined as the rate of transferring error-free data. It is measured by the response time. Response time is the elapsed time between the end of an inquiry and the beginning of a response.

4. A communication channel is a pathway over which information can be conveyed. It may be defined by a physical wire that connects communicating devices, or by a radio, laser, or other radiated energy source that has no obvious physical presence.

5. Three considerations for the transmission engineer:
   1. The received signal must have sufficient strength to enable detection.
   2. The signal must maintain a level sufficiently higher than noise to be received without error.
   3. Attenuation is an increasing function of frequency.

6. There are 4 basic types of Guided Media:
   - Open Wire
   - Twisted Pair
   - Coaxial Cable
   - Optical Fibre

7. Satellites are transponders that are set in a geostationary orbit directly over the equator. A transponder is a unit that receives on one frequency and retransmits on another. The geostationary orbit is 36,000 km from the Earth's surface.


UNIT – 3 PROTOCOL ARCHITECTURE

Structure
3.0 Introduction
3.1 Objectives
3.2 Definition
3.3 Protocol Architecture
3.3.1 Network Interface Layer
3.3.2 Internet Layer
3.3.3 Transport Layer
3.3.4 Application Layer
3.4 Protocols
3.4.1 Service Primitives
3.4.2 Standards
3.5 OSI
3.5.1 Layer 7 - Application Layer
3.5.2 Layer 6 - Presentation Layer
3.5.3 Layer 5 - Session Layer
3.5.4 Layer 4 - Transport Layer
3.5.5 Layer 3 - Network Layer
3.5.6 Layer 2 - Data Link Layer
3.5.7 Layer 1 - Physical Layer
3.6 TCP/IP
3.6.1 Network of Lowest Bidders
3.6.2 Addresses
3.6.3 Subnets
3.6.4 TCP/IP Model
3.6.5 Hardware and Software Implementation
3.7 Summary
3.8 Keywords
3.9 Exercise and Questions
3.10 Check Your Progress
3.11 Further Reading




3.0 Introduction

This lesson describes the fundamentals of the network layer. The network layer handles the routing of data packets across the network and defines the interface between a host and a network node. We will then describe the types of topologies, the protocol architecture, and the Transmission Control Protocol/Internet Protocol (TCP/IP).

3.1 Objectives

After reading this lesson you should be able to:

- Describe the general characteristics of a computer network.
- Understand the role of the major components of a computer network.
- Distinguish between different network types and understand their properties.
- Appreciate the relevance and importance of standards in general, and the OSI model in particular.

3.2 Definition

A computer network is the infrastructure that allows two or more computers (called hosts) to communicate with each other. The network achieves this by providing a set of rules for communication, called protocols, which should be observed by all participating hosts. The need for a protocol should be obvious: it allows different computers from different vendors and with different operating characteristics to 'speak the same language'.

3.3 Protocol Architecture

TCP/IP protocols map to a four-layer conceptual model known as the DARPA model, named after the U.S. government agency that initially developed TCP/IP. The four layers of the DARPA model are: Application, Transport, Internet, and Network Interface. Each layer in the DARPA model corresponds to one or more layers of the seven-layer Open Systems Interconnection (OSI) model.


[Figure: TCP/IP Protocol Architecture]

3.3.1 Network Interface Layer

The Network Interface layer (also called the Network Access layer) is responsible for placing TCP/IP packets on the network medium and receiving TCP/IP packets off the network medium. TCP/IP was designed to be independent of the network access method, frame format, and medium. In this way, TCP/IP can be used to connect differing network types. These include LAN technologies such as Ethernet and Token Ring and WAN technologies such as X.25 and Frame Relay. Independence from any specific network technology gives TCP/IP the ability to be adapted to new technologies such as Asynchronous Transfer Mode (ATM).

3.3.2 Internet Layer

The Internet layer is responsible for addressing, packaging, and routing functions. The core protocols of the Internet layer are IP, ARP, ICMP, and IGMP.

- The Internet Protocol (IP) is a routable protocol responsible for IP addressing, routing, and the fragmentation and reassembly of packets.
- The Address Resolution Protocol (ARP) is responsible for the resolution of the Internet layer address to the Network Interface layer address, such as a hardware address.
- The Internet Control Message Protocol (ICMP) is responsible for providing diagnostic functions and reporting errors due to the unsuccessful delivery of IP packets.
- The Internet Group Management Protocol (IGMP) is responsible for the management of IP multicast groups.

The Internet layer is analogous to the Network layer of the OSI model.

3.3.3 Transport Layer

The Transport layer (also known as the Host-to-Host Transport layer) is responsible for providing the Application layer with session and datagram communication services. The core protocols of the Transport layer are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).

TCP provides a one-to-one, connection-oriented, reliable communications service. TCP is responsible for the establishment of a TCP connection, the sequencing and acknowledgment of packets sent, and the recovery of packets lost during transmission.

UDP provides a one-to-one or one-to-many, connectionless, unreliable communications service. UDP is used when the amount of data to be transferred is small (such as data that would fit into a single packet), when the overhead of establishing a TCP connection is not desired, or when the applications or upper-layer protocols provide reliable delivery.

The Transport layer encompasses the responsibilities of the OSI Transport layer and some of the responsibilities of the OSI Session layer.

3.3.4 Application Layer

The Application layer provides applications the ability to access the services of the other layers and defines the protocols that applications use to exchange data. There are many Application layer protocols, and new protocols are always being developed.

The most widely known Application layer protocols are those used for the exchange of user information:


- The Hypertext Transfer Protocol (HTTP) is used to transfer the files that make up the Web pages of the World Wide Web.
- The File Transfer Protocol (FTP) is used for interactive file transfer.
- The Simple Mail Transfer Protocol (SMTP) is used for the transfer of mail messages and attachments.
- Telnet, a terminal emulation protocol, is used for logging on remotely to network hosts.

Additionally, the following Application layer protocols help facilitate the use and management of TCP/IP networks:

- The Domain Name System (DNS) is used to resolve a host name to an IP address.
- The Routing Information Protocol (RIP) is a routing protocol that routers use to exchange routing information on an IP internetwork.
- The Simple Network Management Protocol (SNMP) is used between a network management console and network devices (routers, bridges, intelligent hubs) to collect and exchange network management information.

Examples of Application layer interfaces for TCP/IP applications are Windows Sockets and NetBIOS. Windows Sockets provides a standard application programming interface (API) under Windows 2000. NetBIOS is an industry-standard interface for accessing protocol services such as sessions, datagrams, and name resolution.

3.4 Protocols

OSI network protocols are specified in a variety of notations. This section describes two popular notations, sequence diagrams and state transition diagrams, which are extensively used in standards and the literature. Both rely on the notion of a service primitive, which is described first.

3.4.1 Service Primitives

A service primitive is an abstract representation of the interaction between a service provider and a service user. Service primitives are concerned with what interactions take place rather than how such interactions are implemented. Service primitives may be of one of the following four types:

1. Request Primitive. This is issued by a service user to the service provider to request the invocation of a procedure.
2. Indication Primitive. This is issued by the service provider to a peer service user (usually in response to a request primitive) to indicate that a procedure has been requested.
3. Response Primitive. This is issued by a peer service user to the service provider (usually in response to an indication primitive) to indicate that the requested procedure has been invoked.
4. Confirm Primitive. This is issued by the service provider to a service user to indicate that an earlier request for the invocation of a procedure has been completed.

3.4.2 Standards

The importance of standards in the field of communication cannot be overstressed. Standards enable equipment from different vendors and with different operating characteristics to become components of the same network. Standards are developed by national and international organizations established for this exact purpose.
During the course of this book we will discuss a number of important standards developed by various organizations, including the following:

The International Standards Organization (ISO) has already been mentioned. This is a voluntary organization with representations from national standards organizations of member countries (e.g., ANSI), major vendors, and end-users. ISO is active in many areas of science and technology, including information technology.

The Consultative Committee for International Telegraph and Telephone (CCITT) is a standards organization devoted to data and telecommunication, with representations from governments, major vendors, telecommunication carriers, and the scientific community. CCITT standards are published as Recommendations, which are revised and republished every four years. CCITT standards are very influential in the field of telecommunications and are adhered to by most vendors and carriers.

The Institute of Electrical and Electronic Engineers (IEEE) is a US standards organization with members throughout the world. IEEE is active in many electrical and electronics-related areas. The IEEE standards for local area networks are widely adopted.

The Electronic Industries Association (EIA) is a US trade association best known for its EIA-232 standard.

The European Computer Manufacturers Association (ECMA) is a standards organization involved in the area of computer engineering and related technologies. ECMA cooperates directly with ISO and CCITT.

In addition to these organizations, and because of their global market influence, large vendors occasionally succeed in establishing their products as de facto standards.

Protocols specify the rules for communicating over a channel, much as one person politely waits for another to finish before speaking. Protocols coupled with channel characteristics determine the net efficiency of communications over the channel. Protocols can improve the effective channel quality. An example is an ARQ (automatic repeat request) protocol, in which a source automatically retransmits a message if it fails to receive an acknowledgment from the destination within some predefined time period following the original transmission of the message.
The destination knows whether to acknowledge the message based on some error detection capability, which is typically based on redundant information added to the message, such as a parity code or cyclic redundancy check (CRC).
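As a sketch of the ARQ idea described above, the following simplified Python example uses a one-byte XOR parity check in place of a real CRC, and keeps retransmitting until the receiver's check passes (the function and parameter names are illustrative, not from any real protocol stack):

```python
def parity_byte(data: bytes) -> int:
    """XOR of all bytes: a toy stand-in for a real parity code or CRC."""
    check = 0
    for b in data:
        check ^= b
    return check

def send_with_arq(message: bytes, channel, max_tries: int = 3):
    """Stop-and-wait ARQ sketch: append redundant check information, then
    retransmit until the receiver can verify the frame (its 'acknowledgment')."""
    frame = message + bytes([parity_byte(message)])
    for attempt in range(1, max_tries + 1):
        received = channel(frame)             # the channel may corrupt the frame
        payload, check = received[:-1], received[-1]
        if parity_byte(payload) == check:     # receiver-side error detection
            return payload, attempt           # acknowledged on this attempt
    raise TimeoutError("no acknowledgment after %d tries" % max_tries)
```

Over a clean channel (`lambda frame: frame`) the message is delivered on the first attempt; a channel that flips a bit forces a retransmission on the next try.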

Protocol and Service Data Units

Protocol data units or PDUs are the messages passed between entities at a given layer. Layer 2 PDUs are called LPDUs or frames; Layer 3 PDUs are called NPDUs or packets; Layer 4 PDUs are called TPDUs or segments. In general, a PDU, regardless of the protocol layer, consists of header and data fields. The header field contains the information necessary to get the PDU to the peer entity, and typically includes the source and destination addresses appropriate for that layer as well as error, sequence, and flow control information. The data field contains the information carried by the Layer (N) protocol in support of Layer (N+1); it is formally referred to as the Layer (N) service data unit or SDU.
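The header-wrapping just described can be sketched in a few lines of Python (a hypothetical illustration; real headers are structured fields, not fixed byte strings):

```python
def encapsulate(sdu: bytes, headers) -> bytes:
    """Pass a data unit down the stack: each layer treats the unit from the
    layer above as its SDU and prepends its own header, so the lowest layer's
    header ends up outermost. `headers` lists the top layer first."""
    pdu = sdu
    for header in headers:
        pdu = header + pdu
    return pdu

def decapsulate(pdu: bytes, header_lengths) -> bytes:
    """Pass a data unit up the stack: strip each layer's header, outermost
    (lowest layer) first, recovering the SDU handed to the layer above."""
    for length in reversed(header_lengths):
        pdu = pdu[length:]
    return pdu
```

For example, `encapsulate(b"DATA", [b"TH", b"NH", b"LH"])` yields `b"LHNHTHDATA"`: the transport header is applied first and the link header ends up outermost, exactly as in the layered model.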

Similarly, when a Layer (N-1) SDU is passed up to Layer (N), the Layer (N-1) header is removed from the Layer (N-1) PDU. Likewise, the Layer (N) PDU header is stripped to provide the Layer (N) SDU for Layer (N+1). This process continues as data units are passed up from layer to layer in the OSI reference model.

Mobile Data Communications Entities

At each layer of the OSI reference model there are protocol entities communicating with each other. They are the sources and destinations of PDUs at that layer. Because this book is about mobility in WANs, the entities of greatest interest are Layer 3 entities, commonly called hosts, nodes or end systems. Layer 3 PDUs, commonly called packets, are exchanged between hosts via Layer 3 entities commonly called routers.


Adding the capability of mobility to a wide area data network creates a need for defining additional entities.

A mobile host or mobile is a host which can receive network services regardless of its location. The extent to which this host enjoys transparent location-independence is a key concern. Different systems use different terminology for the mobile; CDPD adopts ISO terminology by calling it a Mobile End System (M-ES). The mobile is an occasionally-connected entity, which means it may or may not be connected at any given moment to a subnet somewhere in the mobile WAN.

The second role necessary in any communications is the opposite side of the correspondence. In this book we refer to this as the correspondent host or correspondent. The correspondent is the location of the opposite side of a mobile's application association; it could be the ultimate source of data destined for the mobile, or another entity such as a store-and-forward device. The correspondent could itself be mobile or fixed in location, but this is generally not material to our analysis. CDPD refers to the correspondent as the Fixed End System (F-ES) when it is fixed in location. In circuit-switched systems, there will be a maximum of one correspondent per mobile host. However, in the packet-switched systems of greatest interest to us, there can be multiple correspondents per mobile host.

Associated with the mobile communications is the assisting entity or assistant. The assistant is an enabler of mobility. It could be a network store-and-forward device or a mobility-supporting intermediate system (router). Most likely, it consists of multiple entities in a mobile network infrastructure which collectively support host mobility. In CDPD this role is largely filled by a combination of the Mobile Serving Function and the Mobile Home Function; the Mobile IP Task Force calls this combination the foreign agent and the mobile router or home agent.

Check Your Progress

1. Define computer network.
2. Write a short note on the Transport layer in the OSI reference model.
3.5 OSI

The ISO (International Standards Organization) has created a layered model called the OSI (Open Systems Interconnect) model to describe defined layers in a network operating system. The purpose of the layers is to provide clearly defined functions to improve internetwork connectivity between "computer" manufacturing companies. Each layer has a standard defined input and a standard defined output.

Understanding the function of each layer is instrumental in understanding data communication within networks, whether Local, Metropolitan or Wide Area. This is a top-down explanation of the OSI Model, starting with the user's PC and what happens to the user's file as it passes through the different OSI Model layers. The top-down approach was selected specifically (as opposed to starting at the Physical Layer and working up to the Application Layer) for ease of understanding of how the user's files are transformed through the layers into a bit stream for transmission on the network.

There are 7 layers in the OSI model, and they are always presented in this manner, starting with layer 7:

7. Application Layer (Top Layer)
6. Presentation Layer
5. Session Layer
4. Transport Layer
3. Network Layer
2. Data Link Layer
1. Physical Layer (Bottom Layer)

3.5.1 Layer 7 - Application Layer

[Figure: Basic PC Logical Flowchart]

A basic PC logical flowchart is shown in the figure above. The Keyboard and Application are shown as inputs to the CPU that would request access to the hard drive. The Keyboard requests access to the hard drive through user enquiries such as "DIR" commands, and the Application through file openings and saves. The CPU, through the Disk Operating System, sends and receives data from the local hard drive.


[Figure: Simple Network Redirection]

A PC set up as a network workstation has a software "Network Redirector" (the actual name depends on the network operating system). The Network Redirector is a TSR (Terminate and Stay Resident) program which presents the network hard drive as another local hard drive ("G:" in this example) to the CPU. Any CPU requests are intercepted by the Network Redirector. The Network Redirector checks to see if a local drive or a network drive is requested. If a local drive is requested, the request is passed on to DOS. If a network drive is requested, the request is passed on to the network operating system (NOS).

Electronic mail (E-Mail), client-server databases, games played over the network, print and file servers, remote logons and network management programs, or any "network aware" application, are aware of the network redirector and can communicate directly with other "network applications" on the network. The "Network Aware Applications" and the "Network Redirector" make up Layer 7, the Application layer of the OSI Model. The Application layer identifies the network aware service by assigning a unique number to it. The actual number and identifier will depend on the Network Operating System (NOS) used.

3.5.2 Layer 6 - Presentation Layer

The Network Redirector directs CPU operating system native code to the network operating system. The coding and format of the data may not be recognizable by the network operating system. The Presentation layer deals with translation of file formats, encryption of data and compression of data. The data consists of file transfers and network calls by network aware programs.


[Figure: Presentation Layer]

Similarly, the Presentation layer strips the pertinent file from the workstation operating system's file envelope. The control characters, screen formatting and workstation operating system envelope are stripped from or added to the file, depending on whether the workstation is receiving or transmitting data to the network. This could also include translating ASCII file characters from a PC world to EBCDIC in an IBM Mainframe world.

3.5.3 Layer 5 - Session Layer

The Session layer manages the communications between the workstation and the network. The Session layer directs the information to the correct destination and identifies the source to the destination. The Session layer identifies the type of information as data or control. The Session layer manages the initial start-up of a session and the orderly closing of a session. The Session layer also manages logon procedures, password recognition and permissions.

[Figure: Session Layer]

The Session layer is concerned with managing the communications. Does the source have rights and permissions to access the destination? Is the destination alive and present? The Session layer will periodically check to see if both the source and destination are still operating, and will time out if no communication has been seen for a while.

3.5.4 Layer 4 - Transport Layer

The Transport layer's main job is to provide error-free end-to-end delivery of data. The file is broken up into smaller data units called segments. The segments are numbered and sent off to the destination. The destination acknowledges receipt of each segment by replying with an acknowledgement.
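A minimal sketch of the receiving side of this scheme, under the simplifying assumption that each segment carries an explicit sequence number starting at 0 (the function name is illustrative):

```python
def reassemble(segments):
    """Put numbered segments back in order. Returns (data, missing): if any
    sequence number up to the highest seen is absent, data is None and
    `missing` lists the numbers the receiver would ask to be resent."""
    by_seq = dict(segments)                  # (sequence_number, payload) pairs
    highest = max(by_seq)
    missing = [n for n in range(highest + 1) if n not in by_seq]
    if missing:
        return None, missing                 # request retransmission
    data = b"".join(by_seq[n] for n in range(highest + 1))
    return data, []
```

Segments that arrive out of order are simply reordered; a gap in the numbering reveals a lost segment that must be retransmitted.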


Networks are dynamic, meaning that the path to the destination may change while the information is being sent. This can cause the segments to arrive out of order. Since there is no guarantee that the segments will arrive in the correct order, the Transport layer will correct out-of-order segments and put them in the correct order. If a segment that has been sent has not been acknowledged by the destination within a certain time period, the Transport layer at the source will time out and resend the segment. If the destination does not receive, or misses, a segment in a sequence within a certain period of time, it will request that the missing segment be resent. The Transport layer also provides error checking to make sure that data received hasn't been corrupted during transmission. The Transport layer guarantees an error-free host-to-host connection; it is not concerned with the path between machines.

3.5.5 Layer 3 - Network Layer

The Network layer is concerned with finding the shortest path to the destination. This usually means finding the fastest route through multiple networks to the destination. Routing algorithms are used to determine the "shortest" path. Shortest does not mean the physically shortest distance, but the fastest route. The Network layer converts the segments into smaller protocol data units (PDUs) called packets that the network can handle. The Network layer is connectionless in that it does not guarantee that the packet will reach its destination. It is often referred to as "send and pray": the packet is sent out on the wire and we pray that it arrives. Networks are identified by network addresses that are separate from node addresses. Since the Network layer is concerned with finding networks, it adds the source and destination network addresses to the packet.

[Figure: Network Layer]

3.5.6 Layer 2 - Data Link Layer

The Data Link layer has three primary jobs:

1. Bus arbitration: the Data Link layer controls whose turn it is to talk on the medium.
2. Framing of the bits: it puts the bits in the proper order and fields.
3. Error detection and correction at the bit level.

The Data Link layer is in charge of whose turn it is to talk on the wire. It will have a method for determining how it is going to control the communication.


The Data Link layer has the job of putting the bits in the correct order for sending out on the wire, and of organizing the data into a frame. The frame consists of sections called fields. Each field has a specific function, such as the Destination Address, which identifies the host to which the data is being sent. Other fields are used for synchronizing source and destination clocks, and others for error checking.

The Data Link layer resides in the firmware layer of the network interface card. Firmware is software that is burnt into a read-only memory. The node will have a unique address that identifies it from all other nodes. This address is called a hardware address because it is burnt into the firmware. An Ethernet network interface card's unique address is typically called a MAC address, after a Data Link sub-layer called the Media Access Control (MAC) layer.

The Data Link layer takes the packets and puts them into frames of bits (1s and 0s) for transmission, and assembles received frames into packets. The Data Link layer works at the bit level and is concerned about bit sequence. Error checking is at the bit level, and frames with errors are discarded and a request for re-transmission is sent out.
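To make the field idea concrete, here is a much-simplified, hypothetical frame builder in Python: 6-byte destination and source addresses, the payload, and a 1-byte XOR checksum standing in for the real 32-bit Ethernet CRC:

```python
def xor_checksum(data: bytes) -> int:
    """Toy 1-byte frame check; real Ethernet uses a 32-bit CRC instead."""
    check = 0
    for b in data:
        check ^= b
    return check

def build_frame(dest: bytes, src: bytes, payload: bytes) -> bytes:
    """Assemble fields in order: destination address, source address,
    payload, then the frame check sequence at the end."""
    assert len(dest) == 6 and len(src) == 6, "hardware addresses are 6 bytes"
    body = dest + src + payload
    return body + bytes([xor_checksum(body)])

def frame_ok(frame: bytes) -> bool:
    """Receiver side: recompute the check; frames that fail are discarded."""
    return xor_checksum(frame[:-1]) == frame[-1]
```

A frame that arrives intact passes `frame_ok`; flipping even one bit anywhere in the body makes the recomputed check disagree, and the frame would be discarded and retransmission requested.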

[Figure: Data Link Layer]

3.5.7 Layer 1 - Physical Layer

The Physical layer concerns itself with the transmission of bits and the network card's hardware interface to the network. The hardware interface involves the type of cabling (coax, twisted pair, etc.), frequency of operation (1 Mbps, 10 Mbps, etc.), voltage levels, cable terminations, and topology (the physical shape of the network: star, bus, ring, etc.). Examples of Physical layer protocols are 100BaseT, 1000BaseT and Token Ring.


[Figure: Physical Layer]

Layer-Specific Communication

Each layer may add a Header and a Trailer to its Data. The combination of the header, data and trailer is called a Protocol Data Unit (PDU). A PDU is a generic term applied to the data structure at each of the layers.

The header is information that precedes the data, and the trailer is information that follows the data. Usually the header will contain source and destination addresses and some control information used to manage the communication. The trailer usually contains information such as error checking, or a field to indicate the end of the PDU. In the figure above, the Application Header is indicated by the letters AH and the Application Trailer by the letters AT. Each layer's data field consists of the upper layer's PDU. For example, the Network layer's data field consists of the Transport layer's PDU.

Check Your Progress

3. List out the four types of service primitives of the OSI network protocol.
4. What does CCITT stand for?

3.6 TCP/IP

TCP and IP were developed by a Department of Defense (DOD) research project to connect a number of different networks designed by different vendors into a network of networks (the "Internet"). It was initially successful because it delivered a few basic services that everyone needs (file transfer, electronic mail, remote logon) across a very large number of client and server systems. Several computers in a small department can use TCP/IP (along with other protocols) on a single LAN. The IP component provides routing from the department to the enterprise network, then to regional networks, and finally to the global Internet. On the battlefield a communications network will sustain damage, so the DOD designed TCP/IP to be robust and to recover automatically from any node or phone line failure. This design allows the construction of very large networks with less central management. However, because of the automatic recovery, network problems can go undiagnosed and uncorrected for long periods of time.

As with all other communications protocols, TCP/IP is composed of layers:
- IP is responsible for moving packets of data from node to node. IP forwards each packet based on a four-byte destination address (the IP number). The Internet authorities assign ranges of numbers to different organizations. The organizations assign groups of their numbers to departments. IP operates on gateway machines that move data from department to organization to region and then around the world.
- TCP is responsible for verifying the correct delivery of data from client to server. Data can be lost in the intermediate network. TCP adds support to detect errors or lost data and to trigger retransmission until the data is correctly and completely received.
- Sockets is a name given to the package of subroutines that provide access to TCP/IP on most systems.
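As an illustration of the sockets interface (using only Python's standard library; the one-shot echo server here is purely hypothetical), a client opens a TCP connection on the loopback interface, sends a message, and reads the reply:

```python
import socket
import threading

def tcp_echo(message: bytes) -> bytes:
    """Send one message over TCP to a tiny loopback echo server and
    return what comes back."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def serve_once():
        conn, _ = server.accept()        # accept a single connection
        conn.sendall(conn.recv(1024))    # echo the received bytes back
        conn.close()

    worker = threading.Thread(target=serve_once)
    worker.start()

    client = socket.create_connection(("127.0.0.1", port))
    client.sendall(message)              # TCP delivers the bytes reliably
    reply = client.recv(1024)
    client.close()
    worker.join()
    server.close()
    return reply
```

The application code never sees IP packets or retransmissions; the sockets library hands the whole TCP/IP machinery to the program as a few subroutine calls.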

3.6.1 Network of Lowest Bidders

The Army puts out a bid on a computer and DEC wins the bid. The Air Force puts out a bid and IBM wins. The Navy bid is won by Unisys. Then the President decides to invade Grenada, and the armed forces discover that their computers cannot talk to each other. The DOD must build a "network" out of systems each of which, by law, was delivered by the lowest bidder on a single contract.

The Internet Protocol was developed to create a Network of Networks (the "Internet"). Individual machines are first connected to a LAN (Ethernet or Token Ring). TCP/IP shares the LAN with other uses (a Novell file server, Windows for Workgroups peer systems). One device provides the TCP/IP connection between the LAN and the rest of the world. To ensure that all types of systems from all vendors can communicate, TCP/IP is absolutely standardized on the LAN. However, larger networks based on long distances and phone lines are more volatile. In the US, many large corporations wish to reuse large internal networks based on IBM's SNA. In Europe, the national phone companies traditionally standardize on X.25. However, the sudden explosion of high-speed microprocessors, fiber optics, and digital phone systems has created a burst of new options: ISDN, frame relay, FDDI, Asynchronous Transfer Mode (ATM). New technologies arise and become obsolete within a few years. With cable TV and phone companies competing to build the National Information Superhighway, no single standard can govern citywide, nationwide, or worldwide communications.

The original design of TCP/IP as a Network of Networks fits nicely within the current technological uncertainty. TCP/IP data can be sent across a LAN, or it can be carried within an internal corporate SNA network, or it can piggyback on the cable TV service. Furthermore, machines connected to any of these networks can communicate with any other network through gateways supplied by the network vendor.

3.6.2 Addresses

Each technology has its own convention for transmitting messages between two machines within the same network. On a LAN, messages are sent between machines by supplying the six-byte unique identifier (the "MAC" address). In an SNA network, every machine has Logical Units with their own network address. DECNET, Appletalk, and Novell IPX all have a scheme for assigning numbers to each local network and to each workstation attached to the network.

The organization then connects to the Internet through one of a dozen regional or specialized network suppliers. The network vendor is given the subscriber network number and adds it to the routing configuration in its own machines and those of the other major network suppliers.

New Haven is in a border state, split 50-50 between the Yankees and the Red Sox. In this spirit, Yale recently switched its connection from the Middle Atlantic regional network to the New England carrier. When the switch occurred, tables in the other regional areas and in the national spine had to be updated, so that traffic for 130.132 was routed through Boston instead of New Jersey. The large network carriers handle the paperwork and can perform such a switch given sufficient notice. During a conversion period, the university was connected to both networks so that messages could arrive through either path.
3.6.3 Subnets Although the individual subscribers do not need to tabulate network numbers or provide explicit routing, it is convenient for most Class B networks to be internally managed as a much smaller and simpler version of the larger network organizations. It is common to subdivide the two bytes availabl02e for internal assignment into a one byte department number and a one byte

.


workstation ID. The enterprise network is built using commercially available TCP/IP router boxes. Each router has small tables with 255 entries to translate the one byte department number into selection of a destination Ethernet connected to one of the routers. Messages to the PC Lube and Tune server (130.132.59.234) are sent through the national and New England regional networks based on the 130.132 part of the number. Arriving at Yale, the 59 department ID selects an Ethernet connector in the C&IS building. The 234 selects a particular workstation on that LAN. The Yale network must be updated as new Ethernets and departments are added, but it is not affected by changes outside the university or the movement of machines within the department.

An Uncertain Path

Every time a message arrives at an IP router, it makes an individual decision about where to send it next. There is no concept of a session with a preselected path for all traffic. Consider a company with facilities in New York, Los Angeles, Chicago and Atlanta. It could build a network from four phone lines forming a loop (NY to Chicago to LA to Atlanta to NY). A message arriving at the NY router could go to LA via either Chicago or Atlanta. The reply could come back the other way.

If one phone line in this network breaks down, traffic can still reach its destination through a roundabout path. After losing the NY to Chicago line, data can be sent NY to Atlanta to LA to Chicago. This provides continued service, though with degraded performance. This kind of recovery is the primary design feature of IP. The loss of the line is immediately detected by the routers in NY and Chicago, but somehow this information must be sent to the other nodes. Otherwise, LA could continue to send NY messages through Chicago, where they arrive at a "dead end." Each network adopts some Router Protocol which periodically updates the routing tables throughout the network with information about changes in route status.
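The department-number routing described earlier (130.132.59.234: network 130.132, department 59, station 234) amounts to a one-byte table lookup inside each campus router. The following sketch illustrates the idea; the interface names and table contents are hypothetical, purely for illustration:

```python
# Sketch of the campus routing described above: within network 130.132,
# the third byte of the address selects a departmental Ethernet.
# Interface names here are hypothetical.

def route(ip_address, department_table):
    """Return the interface for a destination inside the campus network."""
    octets = [int(o) for o in ip_address.split(".")]
    department = octets[2]            # the one-byte department number
    return department_table.get(department, "default-uplink")

# A router's (at most 255-entry) department table.
table = {59: "eth-cis-building", 12: "eth-library"}

print(route("130.132.59.234", table))   # workstation 234 in department 59
```

Note that nothing outside the campus ever sees this table: external carriers route on 130.132 alone, which is why internal reorganization does not disturb the rest of the Internet.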
If the size of the network grows, then the complexity of the routing updates will increase, as will the cost of transmitting them. Building a single network that covers the entire US would be unreasonably complicated. Fortunately, the Internet is designed as a Network of Networks. This means that loops and redundancy are built into each regional carrier. The regional network handles its own problems and reroutes messages internally. Its Router Protocol updates the tables in its own routers, but no routing updates need to propagate from a regional carrier to the NSF spine or to the other regions (unless, of course, a subscriber switches permanently from one region to another).

Undiagnosed Problems

IBM designs its SNA networks to be centrally managed. If any error occurs, it is reported to the network authorities. By design, any error is a problem that should be corrected or repaired. IP networks, however, were designed to be robust. In battlefield conditions, the loss of a node or line is a normal circumstance. Casualties can be sorted out later on, but the network must stay up. So IP networks are robust. They automatically (and silently) reconfigure themselves when something goes wrong. If there is enough redundancy built into the system, then communication is maintained. In 1975 when SNA was designed, such redundancy would have been prohibitively expensive, or it might have been argued that only the Defense Department could afford it. Today, however, simple routers cost no more than a PC. However, the TCP/IP design principle that "errors are normal and can be largely ignored" produces problems of its own. Data traffic is frequently organized around "hubs," much like airline traffic. One could imagine an IP router in Atlanta routing messages for smaller cities throughout the Southeast. The problem is


that data arrives without a reservation. Airline companies experience the problem around major events, like the Super Bowl. Just before the game, everyone wants to fly into the city. After the game, everyone wants to fly out. Imbalance occurs on the network when something new gets advertised. Adam Curry announced the server at "mtv.com" and his regional carrier was swamped with traffic the next day. The problem is that messages come in from the entire world over high speed lines, but they go out to mtv.com over what was then a slow speed phone line.

TCP was designed to recover from node or line failures where the network propagates routing table changes to all router nodes. Since the update takes some time, TCP is slow to initiate recovery. The TCP algorithms are not tuned to optimally handle packet loss due to traffic congestion. Instead, the traditional Internet response to traffic problems has been to increase the speed of lines and equipment in order to stay ahead of growth in demand.

TCP treats the data as a stream of bytes. It logically assigns a sequence number to each byte. The TCP packet has a header that says, in effect, "This packet starts with byte 379642 and contains 200 bytes of data." The receiver can detect missing or incorrectly sequenced packets. TCP acknowledges data that has been received and retransmits data that has been lost. The TCP design means that error recovery is done end-to-end between the Client and Server machine. There is no formal standard for tracking problems in the middle of the network, though each network has adopted some ad hoc tools.

Need to Know

There are three levels of TCP/IP knowledge. Those who administer a regional or national network must design a system of long distance phone lines, dedicated routing devices, and very large configuration files. They must know the IP numbers and physical locations of thousands of subscriber networks.
They must also have a formal network monitoring strategy to detect problems and respond quickly. Each large company or university that subscribes to the Internet must have an intermediate level of network organization and expertise. A half dozen routers might be configured to connect several dozen departmental LANs in several buildings. All traffic outside the organization would typically be routed to a single connection to a regional network provider. However, the end user can install TCP/IP on a personal computer without any knowledge of either the corporate or regional network. Three pieces of information are required:

 The IP address assigned to this personal computer
 The part of the IP address (the subnet mask) that distinguishes other machines on the same LAN (messages can be sent to them directly) from machines in other departments or elsewhere in the world (which are sent to a router machine)
 The IP address of the router machine that connects this LAN to the rest of the world.

In the case of the PCLT server, the IP address is 130.132.59.234. Since the first three bytes designate this department, a "subnet mask" is defined as 255.255.255.0 (255 is the largest byte value and represents the number with all bits turned on). It is a Yale convention (which we recommend to everyone) that the router for each department have station number 1 within the department network. Thus the PCLT router is 130.132.59.1, and the PCLT server is configured with the values:

 My IP address: 130.132.59.234
 Subnet mask: 255.255.255.0
 Default router: 130.132.59.1
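A host applies the subnet mask mechanically: it ANDs both its own address and the destination address with the mask, and if the results match, the destination is on the same LAN and can be reached directly; otherwise the packet goes to the default router. A minimal sketch using the PCLT values above:

```python
# Decide direct delivery vs. default router using the subnet mask,
# as a host with the PCLT configuration above would.

def to_int(dotted):
    """Convert a dotted-decimal address to a 32-bit integer."""
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def next_hop(my_ip, mask, router, dest):
    """Return where to physically send a packet addressed to dest."""
    if to_int(my_ip) & to_int(mask) == to_int(dest) & to_int(mask):
        return dest          # same LAN: deliver directly
    return router            # elsewhere: hand to the default router

print(next_hop("130.132.59.234", "255.255.255.0",
               "130.132.59.1", "130.132.59.10"))   # same department: direct
print(next_hop("130.132.59.234", "255.255.255.0",
               "130.132.59.1", "18.72.0.3"))       # outside: via the router
```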


3.6.4 TCP/IP model

The TCP/IP model is a specification for computer network protocols created in the 1970s by DARPA, an agency of the United States Department of Defense. It laid the foundations for ARPANET, which was the world's first wide area network and a predecessor of the Internet. The TCP/IP Model is sometimes called the Internet Reference Model, the DoD Model or the ARPANET Reference Model. The TCP/IP Suite defines a set of rules to enable computers to communicate over a network. TCP/IP provides end-to-end connectivity, specifying how data should be formatted, addressed, shipped, routed and delivered to the right destination. The specification defines protocols for different types of communication between computers and provides a framework for more detailed standards. TCP/IP is generally described as having four abstraction layers (RFC 1122). This layer view is often compared with the seven-layer OSI Reference Model, written after the TCP/IP specifications. The TCP/IP model and related protocols are currently maintained by the Internet Engineering Task Force (IETF).

Key architectural principles

An early architectural document, RFC 1122, emphasizes architectural principles over layering.

1. End-to-End Principle: This principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle.
2. Robustness Principle: "Be liberal in what you accept, and conservative in what you send. Software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features."
Even when the layers are examined, the assorted architectural documents (there is no single architectural model such as ISO 7498, the OSI reference model) have fewer, less rigidly defined layers than the commonly referenced OSI model, and thus provide an easier fit for real-world protocols. In point of fact, one frequently referenced document does not contain a stack of layers at all: it refers only to the existence of an "internetworking layer" and generally to "upper layers". That document was intended as a 1996 "snapshot" of the architecture: "The Internet and its architecture have grown in evolutionary fashion from modest beginnings, rather than from a Grand Plan. While this process of evolution is one of the main reasons for the technology's success, it nevertheless seems useful to record a snapshot of the current principles of the Internet architecture." The lack of emphasis on layering is a strong difference between the IETF and OSI approaches. RFC 1122 on Host Requirements is structured in paragraphs referring to layers, but refers to many other architectural principles not emphasizing layering. It loosely defines a four-layer version, with the layers having names, not numbers, as follows:

 Process Layer or Application Layer: this is where the "higher level" protocols such as SMTP, FTP, SSH, HTTP, etc. operate.


 Host-To-Host (Transport) Layer: this is where flow-control and connection protocols exist, such as TCP. This layer deals with opening and maintaining connections, ensuring that packets are in fact received.
 Internet or Internetworking Layer: this layer defines IP addresses, with many routing schemes for navigating packets from one IP address to another.
 Network Access Layer: this layer describes both the protocols (i.e., the OSI Data Link Layer) used to mediate access to shared media, and the physical protocols and technologies necessary for communications from individual hosts to a medium.
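The four layers can be pictured as successive encapsulation: each layer prepends its own header to whatever the layer above hands it. The following schematic sketch illustrates this; the header strings are simplified placeholders, not real wire formats:

```python
# Schematic encapsulation down the four TCP/IP layers.
# Header contents are simplified placeholders, not real wire formats.

def encapsulate(app_data, src_port, dst_port, src_ip, dst_ip, mac):
    segment = f"[TCP {src_port}->{dst_port}]" + app_data      # transport layer
    packet  = f"[IP {src_ip}->{dst_ip}]" + segment            # internet layer
    frame   = f"[MAC {mac}]" + packet                         # network access layer
    return frame

frame = encapsulate("GET / HTTP/1.0", 51000, 80,
                    "130.132.59.234", "18.72.0.3", "00:A0:C9:14:C8:29")
print(frame)
```

At the receiving host the process runs in reverse: each layer strips its own header and passes the remainder up.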

Layers in the TCP/IP model

The layers near the top are logically closer to the user application (as opposed to the human user), while those near the bottom are logically closer to the physical transmission of the data. Viewing layers as providing or consuming a service is a method of abstraction to isolate upper layer protocols from the nitty-gritty detail of transmitting bits over, say, Ethernet and collision detection, while the lower layers avoid having to know the details of each and every application and its protocol. This abstraction also allows upper layers to provide services that the lower layers cannot, or choose not to, provide. Again, the original OSI Reference Model was extended to include connectionless services (OSIRM CL). For example, IP is not designed to be reliable and is a best effort delivery protocol. This means that all transport layers must choose whether or not to provide reliability and to what degree. UDP provides data integrity (via a checksum) but does not guarantee delivery; TCP provides both data integrity and delivery guarantee (by retransmitting until the receiver receives the packet).
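The choice TCP makes — retransmitting until the receiver acknowledges — can be sketched as a stop-and-wait loop over a deliberately lossy channel. The channel here is a toy function, not a real network, and the loss rate is an arbitrary illustrative value:

```python
import random

random.seed(1)   # deterministic toy run

def lossy_send(packet, loss_rate=0.5):
    """Toy best-effort channel: returns an ACK, or None when the packet is lost."""
    if random.random() < loss_rate:
        return None                  # packet (or its ACK) never arrived
    return ("ACK", packet["seq"])

def reliable_send(packet, max_tries=20):
    """Retransmit until acknowledged, as a reliable transport must over best-effort IP."""
    for attempt in range(1, max_tries + 1):
        if lossy_send(packet) is not None:
            return attempt           # number of transmissions that were needed
    raise RuntimeError("gave up after max_tries transmissions")

tries = reliable_send({"seq": 379642, "data": b"x" * 200})
print(tries)
```

UDP, by contrast, would simply call `lossy_send` once and move on, leaving any recovery to the application.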


This model lacks the formalism of the OSI reference model and associated documents, but the IETF does not use a formal model and does not consider this a limitation, as in the comment by David D. Clark, "We reject: kings, presidents and voting. We believe in: rough consensus and running code." Criticisms of this model, which have been made with respect to the OSI Reference Model, often do not consider ISO's later extensions to that model.

For multiaccess links with their own addressing systems (e.g. Ethernet) an address mapping protocol is needed. Such protocols can be considered to be below IP but above the existing link system. While the IETF does not use the terminology, this is a subnetwork dependent convergence facility according to an extension to the OSI model, the Internal Organization of the Network Layer (IONL). ICMP and IGMP operate on top of IP but do not transport data like UDP or TCP. Again, this functionality exists as layer management extensions to the OSI model, in its Management Framework (OSIRM MF). The SSL/TLS library operates above the transport layer (it utilizes TCP) but below application protocols. Again, there was no intention, on the part of the designers of these protocols, to comply with OSI architecture.

The link is treated like a black box here. This is fine for discussing IP (since the whole point of IP is that it will run over virtually anything). The IETF explicitly does not intend to discuss transmission systems, which is a less academic but practical alternative to the OSI Reference Model. The following is a description of each layer in the TCP/IP networking model, starting from the lowest level.

Link layer

The link layer is used to move packets between the internet layers of two different hosts. The processes of transmitting packets on a given link layer and receiving packets from a given link layer can be controlled both in the software device driver for the network card and in firmware or specialist chipsets.
These will perform data link functions such as adding a packet header to prepare it for transmission, then actually transmit the frame over a physical medium.

Internet layer

As originally defined, the internet layer (or network layer) solves the problem of getting packets across a single network. Examples of such protocols are X.25 and the ARPANET's Host/IMP Protocol. With the advent of the concept of internetworking, additional functionality was added to this layer, namely getting data from the source network to the destination network. This generally involves routing the packet across a network of networks, known as an internetwork or internet (lower case). In the Internet protocol suite, IP performs the basic task of getting packets of data from source to destination. IP can carry data for a number of different upper layer protocols; these protocols are each identified by a unique protocol number: ICMP and IGMP are protocols 1 and 2, respectively. Some of the protocols carried by IP, such as ICMP (used to transmit diagnostic information about IP transmission) and IGMP (used to manage IP Multicast data), are layered on top of IP but perform internetwork layer functions, illustrating an incompatibility between the Internet/IP stack and the OSI model. Some routing protocols, such as OSPF, are also part of the network layer.

Transport layer

The transport layer's responsibilities include end-to-end message transfer capabilities independent of the underlying network, along with error control, fragmentation and flow control.


End-to-end message transmission or connecting applications at the transport layer can be categorized as either:

 connection-oriented, e.g. TCP
 connectionless, e.g. UDP
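These two categories correspond directly to the two socket types in the standard sockets API. A minimal sketch (no traffic is actually sent; the sockets are merely created and closed):

```python
import socket

# Connection-oriented transport: a TCP (stream) socket.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Connectionless transport: a UDP (datagram) socket.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp_sock.type == socket.SOCK_STREAM)   # True
print(udp_sock.type == socket.SOCK_DGRAM)    # True

tcp_sock.close()
udp_sock.close()
```

A TCP socket must `connect()` before data flows, while a UDP socket can `sendto()` any destination at any time, reflecting the connection-oriented/connectionless distinction above.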

The transport layer can be thought of as a transport mechanism, e.g. a vehicle whose responsibility is to make sure that its contents (passengers/goods) reach their destination safely and soundly, unless a higher or lower layer is responsible for safe delivery. The transport layer provides this service of connecting applications together through the use of ports. Since IP provides only a best effort delivery, the transport layer is the first layer of the TCP/IP stack to offer reliability. Note that IP can run over a reliable data link protocol such as the High-Level Data Link Control (HDLC). Protocols above transport, such as RPC, also can provide reliability. For example, TCP is a connection-oriented protocol that addresses numerous reliability issues to provide a reliable byte stream:

 data arrives in-order
 data has minimal error (i.e. correctness)
 duplicate data is discarded
 lost/discarded packets are resent
 includes traffic congestion control
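A receiver can obtain the first three of these properties from nothing more than per-byte sequence numbers, as in the "byte 379642" example earlier: buffer out-of-order segments, discard duplicates, and deliver bytes only in order. A simplified sketch (real TCP also handles partially overlapping segments, which this omits):

```python
# Simplified TCP-style receiver: each segment carries the sequence
# number of its first byte; deliver in order, buffer gaps, drop duplicates.

def reassemble(segments, initial_seq=0):
    expected = initial_seq
    pending = {}                      # seq -> data, held until the gap fills
    delivered = b""
    for seq, data in segments:
        if seq < expected:
            continue                  # duplicate of already-delivered data
        pending[seq] = data
        while expected in pending:    # deliver every now-contiguous segment
            chunk = pending.pop(expected)
            delivered += chunk
            expected += len(chunk)
    return delivered

# Segments arrive out of order, with one duplicate.
segs = [(0, b"abc"), (6, b"ghi"), (3, b"def"), (0, b"abc")]
print(reassemble(segs))   # b'abcdefghi'
```

Whatever has not yet been delivered in order (the gap before a buffered segment) is exactly what the receiver's acknowledgements tell the sender to retransmit.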

The newer SCTP is also a "reliable", connection-oriented transport mechanism. It is message-stream-oriented (not byte-stream-oriented like TCP) and provides multiple streams multiplexed over a single connection. It also provides multi-homing support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP), but can also be used for other applications.

UDP is a connectionless datagram protocol. Like IP, it is a best effort or "unreliable" protocol. Reliability is addressed through error detection using a weak checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP, etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. RTP is a datagram protocol that is designed for real-time data such as streaming audio and video.

Application layer

The application layer refers to the higher-level protocols used by most applications for network communication. Examples of application layer protocols include the File Transfer Protocol (FTP) and the Simple Mail Transfer Protocol (SMTP). Data coded according to application layer protocols are then encapsulated into one or (occasionally) more transport layer protocols (such as the Transmission Control Protocol (TCP) or User Datagram Protocol (UDP)), which in turn use lower layer protocols to effect actual data transfer. Since the IP stack defines no layers between the application and transport layers, the application layer must include any protocols that act like the OSI's presentation and session layer protocols. This is usually done through libraries.
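The "weak checksum" UDP relies on is the 16-bit ones'-complement Internet checksum (specified in RFC 1071, and shared with IP and TCP headers). A sketch of the computation:

```python
# Ones'-complement 16-bit Internet checksum (RFC 1071), the
# error-detection mechanism the UDP discussion above refers to.

def internet_checksum(data):
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return (~total) & 0xFFFF

ck = internet_checksum(b"\x45\x00\x00\x1c")
print(hex(ck))   # → 0xbae3
```

It is "weak" because it is easy to compute but cannot detect, for example, two 16-bit words being swapped; it catches most transmission errors while remaining cheap enough to compute in software for every datagram.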
Application layer protocols generally treat the transport layer (and lower) protocols as "black boxes" which provide a stable network connection across which to communicate, although the applications are usually aware of key qualities of the transport layer connection, such as the end point IP addresses and port numbers. As noted above, layers are not necessarily clearly defined in the Internet protocol suite. Application layer protocols are most often associated with client-server applications, and the more common servers


have specific ports assigned to them by the IANA: HTTP has port 80; Telnet has port 23; etc. Clients, on the other hand, tend to use ephemeral ports, i.e. port numbers assigned at random from a range set aside for the purpose.

3.6.5 Hardware and software implementation

Normally the application programmers are in charge of layer 5 protocols (the application layer), while the layer 3 and 4 protocols are services provided by the TCP/IP stack in the operating system. Microcontroller firmware in the network adapter typically handles layer 2 issues, supported by driver software in the operating system. Non-programmable analog and digital electronics are normally in charge of the physical layer, typically using an application-specific integrated circuit (ASIC) chipset for each radio interface or other physical standard. However, hardware or software implementation is not stated in the protocols or the layered reference model. High-performance routers are to a large extent based on fast non-programmable digital electronics, carrying out layer 3 switching. In modern modems and wireless equipment, the physical layer may be implemented in software on the CPU, making it possible to emulate some modem standards.

OSI and TCP/IP layering differences

The three top layers in the OSI model (the application layer, the presentation layer and the session layer) are not distinguished separately in the TCP/IP model, where it is just the application layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack needs to impose monolithic architecture above the transport layer. For example, the Network File System (NFS) application protocol runs over the eXternal Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol with session layer functionality, Remote Procedure Call (RPC).
RPC provides reliable record transmission, so it can run safely over the best-effort User Datagram Protocol (UDP) transport. The session layer roughly corresponds to the Telnet virtual terminal functionality, which is part of text based protocols such as the HTTP and SMTP TCP/IP model application layer protocols. It also corresponds to TCP and UDP port numbering, which is considered as part of the transport layer in the TCP/IP model. The presentation layer has similarities to the MIME standard, which also is used in HTTP and SMTP. Since the IETF protocol development effort is not concerned with strict layering, some of its protocols may not appear to fit cleanly into the OSI model. These conflicts, however, are more frequent when one only looks at the original OSI model, ISO 7498, without looking at the annexes to this model (e.g., ISO 7498/4 Management Framework), or the ISO 8648 Internal Organization of the Network Layer (IONL). When the IONL and Management Framework documents are considered, ICMP and IGMP are neatly defined as layer management protocols for the network layer. In like manner, the IONL provides a structure for "subnetwork dependent convergence facilities" such as ARP and RARP.

Check Your Progress

5. Write the primary job of the Data Link Layer.
6. Write a short note on TCP/IP.


3.7 Summary

This lesson introduced you to the fundamentals of networking. Now, you can define a network and trace the evolution of networks. You have also learnt how the various network and communication protocols were developed alongside the underlying technology. A computer network is the infrastructure that allows two or more computers (called hosts) to communicate with each other.

3.8 Keywords

HTTP: The Hypertext Transfer Protocol (HTTP) is used to transfer the files that make up the Web pages of the World Wide Web.
FTP: The File Transfer Protocol (FTP) is used for interactive file transfer.
SMTP: The Simple Mail Transfer Protocol (SMTP) is used for the transfer of mail messages and attachments.

3.10 Check Your Progress

Note: Use the space provided below for your answers. Compare your answers with those given at the end.

1. Define computer network.
…………………………………………………………………………………………………………………
2. Write a short note on the Transport Layer in the OSI reference model.
…………………………………………………………………………………………………………………
3. List the four types of service primitives of the OSI network protocol.
…………………………………………………………………………………………………………………
4. What does CCITT stand for?
…………………………………………………………………………………………………………………

5. Write the primary job of the Data Link Layer. ………………………………………………………………………………………………………………… …………………………………………………………………………..


6. Write a short note on TCP/IP.
…………………………………………………………………………………………………………………

Answers To Check Your Progress

1. A computer network is the infrastructure that allows two or more computers (called hosts) to communicate with each other.
2. The Transport layer (also known as the Host-to-Host Transport layer) is responsible for providing the Application layer with session and datagram communication services.
3. (i) Request Primitive (ii) Indication Primitive (iii) Response Primitive (iv) Confirm Primitive.
4. The CCITT is a standards organization devoted to data and telecommunication, with representation from governments, major vendors, telecommunication carriers, and the scientific community. CCITT standards are published as Recommendations.
5. The Data Link layer has three primary jobs:

 Bus arbitration: the Data Link layer controls "whose turn it is to talk" on the medium.
 Framing of the bits: it puts the bits in proper order and fields.
 Error detection and correction at the bit level.

6. TCP and IP were developed by a Department of Defense (DOD) research project to connect a number of different networks designed by different vendors into a network of networks (the "Internet").

3.11 Further Reading

1. William Stallings, “Data and Computer Communications”, 5th Edition, Pearson Education, 1997.
2. Andrew S. Tanenbaum, “Computer Networks”.


UNIT-4 LAN ARCHITECTURE

Structure
4.0 Introduction
4.1 Objectives
4.2 Definition
4.3 LAN Architecture
4.3.1 Basic Concepts
4.3.2 Topologies and access protocol
4.3.3 Architecture
4.3.4 Transmission
4.3.5 IEEE 802 Standards
4.4 Topologies
4.4.1 Basic types of Topologies
4.4.2 Classification of network topologies
4.4.3 Hybrid network topologies
4.4.4 Classification of logical topologies
4.5 MAC
4.5.1 Notational Conventions
4.6 Ethernet
4.6.1 History
4.6.2 General Description
4.6.3 Dealing with multiple clients
4.6.4 Varieties of Ethernet
4.7 Fast Ethernet
4.8 Summary
4.9 Keywords
4.10 Exercise and Questions
4.11 Check Your Progress
4.12 Further Reading




4.0 Introduction

This lesson describes the fundamentals of the network layer. From this unit we will understand the basic concepts of LAN architecture, the different types of network topologies, and the notational conventions of MAC addresses. Further, it gives a general description of the varieties of Ethernet and a brief introduction to Fast Ethernet.

4.1 Objectives

After reading this lesson you should be able to:

 Understand the basic concepts of LAN architecture
 Explain the different types of network topologies
 Write a brief note on the conventions of MAC addresses
 Explain the general description of Ethernet
 Discuss the types of Ethernet
 Give a brief introduction to Fast Ethernet

4.2 Definition

Local Area Networks (LANs) have become an important part of most computer installations. Personal computers have been the main driving force behind the LAN proliferation.

4.3 LAN Architecture

Local Area Networks (LANs) have become an important part of most computer installations. Personal computers have been the main driving force behind the LAN proliferation. As personal computers became more widely used in office environments, it became desirable to interconnect them to achieve two aims: to enable them to exchange information (e.g., e-mail), and to enable them to share scarce and expensive resources (e.g., printers). LANs have been so successful in realizing these aims that their cost is well justified even when there are only a handful of participating computers. Current LANs are used for interconnecting almost any type of computing device imaginable, including mainframes, workstations, personal computers, file servers, and numerous types of peripheral devices. Many LANs are further connected to other LANs or WANs via bridges and gateways, hence increasing the reach of their users. In this lesson we will first look at some basic LAN concepts, and then discuss a number of widely adopted LAN standards. As before, our aim will be to concentrate on general principles and protocols of importance rather than to get involved in the details of vendor-specific products.

4.3.1 Basic Concepts

A LAN consists of four general types of components:

1. User station. This provides the user with access to the LAN. The most common example is a personal computer. The user station runs special network software (usually in the form of a driver) for accessing the LAN.
2. LAN protocol stack. This implements the LAN protocol layers. This usually takes the form of a hardware card inside the user station, containing a microprocessor and firmware which implements the non-physical protocols.
3. Physical Interface Unit (PIU).
This directly interfaces the user station-based LAN hardware to the LAN physical medium. The exact form of the PIU is highly dependent on the LAN physical medium. Coaxial cable connectors and cable TV taps are common examples.


4. Physical Medium. This provides a physical path for signals to travel between stations. Coaxial cable, optical fiber, and infrared light are examples.

4.3.2 Topologies and Access Protocols

There are two general categories of LAN topologies: bus and ring. The bus topology uses a broadcast technique; hence only one station at a time can send messages, and all other stations listen to the message. A listening station examines the recipient address of the message and, if it matches its own address, copies the message; otherwise, it ignores the message.

The ring topology uses a closed, point-to-point-connected loop of stations. Data flows in one direction only, from one station to the next. As with the bus topology, transmission is restricted to one user at a time. When a station gains control and sends a message, the message is sent to the next station in the ring. Each receiving station in the ring examines the recipient address of the message and, if it matches its own address, copies the message. The message is passed around the ring until it reaches the originator, which removes the message by not sending it to the next station.

Given that access to the bus or ring is restricted to one station at a time, some form of arbitration is needed to ensure equitable access by all stations. Arbitration is imposed by access protocols. A number of such protocols have been devised:

Carrier Sense. This protocol is applicable to a bus topology. Before a station can transmit, it listens to the channel to see if any other station is already transmitting. If the station finds the channel idle, it attempts to transmit; otherwise, it waits for the channel to become idle. Because of an unavoidable delay in a station's transmission reaching other stations, it is possible that two or more stations find the channel idle and simultaneously attempt to transmit. This is called a collision. Two schemes exist for handling collisions:

Collision Detection. In this scheme a transmitting station is required to also listen to the channel, so that it can detect a collision by observing discrepancies in the transmission voltage levels.
Upon detecting a collision, it suspends transmission and re-attempts after a random period of time. Use of a random wait period reduces the chance of the collision recurring. Collision Free. This scheme avoids collisions occurring in the first place. Each station has a predetermined time slot assigned to it which indicates when it can transmit without a collision occurring. The distribution of time slots between stations also makes it possible to assign priorities. Token Ring. This protocol is applicable to a ring topology. Channel access is regulated by a special message, called a token, which is passed around the ring from one station to the next. The state of the ring is encoded in the token (i.e., idle or busy). Each station wishing to transmit needs to get hold of the idle token first. When a station gets hold of the idle token, it marks it as busy, appends to it the message it wishes to transmit, and sends the whole thing to the next station. The message goes round the ring until it reaches the intended recipient which copies the message and passes it on. When the message returns to the originator, it detaches the message, marks the token as idle and passes it on. To ensure fair access, the token should go round the ring, unused, at least once before it can be used by the same station again. Token Bus. This protocol is applicable to a bus topology but makes it behave as a ring. Each station on the bus has two other stations designated as its logical predecessor and its logical successor, in a way that results in a logical ring arrangement. A special message is provided which plays the role of a token. Each station receives the token from its predecessor,
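The random wait used after a collision can be sketched in code. Classic Ethernet uses truncated binary exponential backoff, where the range of slot times doubles after each successive collision; the helper below is an illustrative sketch of that rule (the function name and the cap of 10 doublings are assumptions, not taken from the text).

```python
import random

def backoff_slots(attempt, cap=10):
    """Number of slot times to wait after the attempt-th collision
    (1-based), using truncated binary exponential backoff: a random
    integer drawn from [0, 2**min(attempt, cap) - 1]."""
    return random.randint(0, 2 ** min(attempt, cap) - 1)

# After the first collision a station waits 0 or 1 slot times; after
# the second, 0..3; the range keeps doubling until the cap is reached.
```

Because each colliding station draws its wait independently, the chance that both pick the same slot (and collide again) halves with every doubling of the range.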


readdresses it to its successor, and retransmits it on the bus. The rest of the protocol is as in a token ring.

4.3.3 Architecture

The role of the physical layer is the same as in the OSI model. It includes the connectors used for connecting the PIU to the LAN and the signaling circuitry provided by the PIU. The OSI data link layer is broken into two sublayers. The Media Access Control (MAC) layer is responsible for implementing a specific LAN access protocol, like the ones described earlier. This layer is therefore highly dependent on the type of the LAN. Its aim is to hide hardware and access protocol dependencies from the next layer. As we will see shortly, a number of MAC standards have been devised, one for each popular type of access protocol. The Logical Link Control (LLC) layer provides data link services independent of the specific MAC protocol involved. LLC is a subset of HDLC and is largely compatible with the data link layer of OSI-compatible WANs. LLC is only concerned with providing Link Service Access Points (LSAPs). All other normal data link functions (i.e., link management, frame management, and error handling) are handled by the MAC layer.

4.3.4 Transmission

LAN transmission techniques are divided into two categories: baseband and broadband. In the baseband technique, the digital signal from a transmitting device is directly introduced into the transmission medium (possibly after some conditioning). In the broadband technique, a modem is used to transform the digital signal from a transmitting device into a high frequency analog signal. This signal is typically frequency multiplexed to provide multiple FDM channels over the same transmission medium. Baseband is a simple and inexpensive digital technique. By comparison, broadband has additional costs: each device requires its own modem; also, because transmission is possible in one direction only, two channels typically need to be provided, one for either direction. Broadband, however, has the advantage of offering a higher channel capacity which can be used for multiplexing data from a variety of sources (e.g., video, voice, fax), not just digital data. It is also capable of covering longer distances, typically tens of kilometers compared to up to a kilometer for baseband.

4.3.5 IEEE 802 Standards

The IEEE 802 series of recommendations provides a widely-accepted set of LAN standards. These recommendations are formulated by nine subcommittees.

Logical Link Control

LLC is specified by the IEEE 802.2 and ISO 8802.2 standards. It provides link services to LAN users, independent of the MAC protocol involved. LLC offers three types of service:

1. Unacknowledged connectionless service. This service must be provided by all 802.2 implementations. It is based on data being transferred in independent data units, the delivery of which is neither guaranteed nor acknowledged. Furthermore, there are no provisions for ordered delivery of data units or for flow control. Obviously, a higher-level protocol is needed to make this service reliable.


2. Connection-oriented service. This service is based on the use of logical connections. Data is transferred using ordered, acknowledged, and flow-controlled data units. Transmission errors are detected and reported.

3. Acknowledged connectionless service. Same as the unacknowledged connectionless service, except that the delivery of each data unit is acknowledged before the next data unit is sent.

Token Bus

The token bus protocol is specified by the IEEE 802.4 and ISO 8802.4 standards. The logical ring is determined by the descending numeric order of the station addresses. When a station takes possession of the token, it is given exclusive network access for a limited period of time. The station either transmits during this time window or hands over the token to its successor. The token holder can also poll other stations in order to learn about their status. The protocol provides for token-related fault handling. When a station passes the token to its successor, it listens for transmissions on the bus for a period of time. A transmission would be a sign that the successor has successfully received the token and is either passing it on or transmitting data. If the token sender establishes that the token has not been received after two attempts, it attempts to bypass the presumably faulty station: it polls other stations to find out which is the next logical successor, and attempts to pass the token to that station.
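The token bus logical ring described above, ordered by descending station address, can be sketched with a few lines of code. The function name is an illustrative assumption; the ordering rule is the one stated in the text.

```python
def token_bus_ring(addresses):
    """IEEE 802.4 logical ring: stations are ordered by descending
    numeric address, and each station passes the token to the next
    lower address, wrapping from the lowest back to the highest."""
    order = sorted(addresses, reverse=True)
    return {station: order[(i + 1) % len(order)]
            for i, station in enumerate(order)}

# For stations 30, 85, 10, 60 the token travels 85 -> 60 -> 30 -> 10 -> 85 ...
successors = token_bus_ring([30, 85, 10, 60])
```

Note that the physical positions of the stations on the bus are irrelevant; only the addresses determine the logical ring.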

Token Ring

The token ring protocol is specified by the IEEE 802.5 and ISO 8802.5 standards. The most well-known realization of this protocol is the IBM token ring product. A time limit is imposed on each station for holding the token and transmitting on the ring. The protocol includes a scheme for handling priority traffic, and for a station to assume a monitoring role to keep an eye on the status of the network. It is also possible to bypass an inactive or faulty station without disrupting ring traffic. The shaded fields appear in data frames but not in a token frame. As before, the start and end of the frame are delimited by two special octets. The Addresses, Data, and FCS fields are as before. The Access Control field is an octet divided into four components:

1. A token bit flag to indicate if this is a token frame.
2. A monitor bit flag set by the monitor station during recovery.
3. A three-bit priority field which can indicate up to eight token priority levels. Only a station which has a frame of equal or higher priority can gain control of the ring.
4. A three-bit reservation field used for implementing a reservation scheme for handling priority traffic. When a frame (i.e., busy token) passes through a station which has a frame waiting to be transmitted, the station can raise the reservation field value to the priority of its waiting frame. When the station finally gets hold of the token and transmits its frame, it restores the previous reservation value. This scheme ensures that higher-priority frames have a better chance of being transmitted first, without totally denying lower-priority frames an opportunity to be transmitted.

ANSI FDDI Standard

Fiber Distributed Data Interface (FDDI) is a high-speed LAN protocol designed by ANSI for use with optical fiber transmission media. It is capable of achieving data rates on the order of 100 Mbps, and a network size on the order of 1000 stations. Furthermore, with the high reliability of optical fiber, station distances on the order of kilometers and a geographic spread of the LAN on the order of hundreds of kilometers are feasible.

Topology

FDDI utilizes a ring topology that involves two counter-rotating rings and two classes of stations. Class A stations are connected to both rings. Each class A station also has a bypass switch (i.e., short circuit) which, when activated, causes the station to be excluded from the ring without affecting the ring continuity. Class B stations are only connected to one of the rings, via a concentrator. The concentrator also provides a bypass switch. Furthermore, because each of the B stations is independently connected to the concentrator, it can be switched off without affecting the other stations. Typically class A represents the more important stations (e.g., servers) and class B the less significant stations (e.g., infrequently-used PCs and terminals).
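The four-component Access Control octet of the IEEE 802.5 frame, described earlier, can be packed and unpacked with simple bit operations. The sketch below assumes the common layout of priority in the top three bits, then the token and monitor bits, then reservation in the low three bits; the function names are illustrative.

```python
def pack_access_control(priority, token, monitor, reservation):
    """Pack the four components of the 802.5 Access Control octet:
    3-bit priority, token bit, monitor bit, 3-bit reservation
    (most significant bits first)."""
    assert 0 <= priority <= 7 and 0 <= reservation <= 7
    return (priority << 5) | ((token & 1) << 4) | ((monitor & 1) << 3) | reservation

def unpack_access_control(octet):
    """Recover the four components from the octet."""
    return {"priority": octet >> 5,
            "token": (octet >> 4) & 1,
            "monitor": (octet >> 3) & 1,
            "reservation": octet & 0b111}

# A busy token at priority 5 with a pending reservation at priority 2:
ac = pack_access_control(priority=5, token=1, monitor=0, reservation=2)
```

The three-bit fields give the eight priority levels mentioned in the text; raising a reservation is just rewriting the low three bits of this octet as the frame passes by.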

Token Ring Protocol

The FDDI protocol is specified by the ANSI X3T9 standard. It is a token ring protocol similar to IEEE 802.5 but with an important difference: stations can transmit even if the token they receive is busy. When a station receives a frame, it examines the address of the frame to see if it matches its own address, in which case it copies the data. In either case, if the station has nothing to transmit, it passes the frame to the next station. If it does have a frame to transmit, it absorbs the token, appends its frame(s) to any existing frames, and then appends a new token to the result. The whole thing is then sent to the next station in the ring. As with earlier token ring protocols, only the originating station is responsible for removing a frame. The FDDI frame structure is almost identical to the IEEE 802.5 frame structure, except that the fields differ in physical size, a Preamble is included, and the Access Control field is excluded. The FDDI protocol requires that each station maintain three timers for regulating the operation of the ring:

1. The token holding timer determines how long a transmitting station can keep the token. When this timer expires, the station must cease transmission and release the token.
2. The token rotation timer facilitates the normal scheduling of token rotation between stations.
3. The valid transmission timer facilitates recovery from transmission errors.

Check Your Progress
1. What is LAN architecture?

4.4 Topologies


[Diagram of different network topologies.]

Network topology is the study of the arrangement or mapping of the elements (links, nodes, etc.) of a network, especially the physical (real) and logical (virtual) interconnections between nodes. A local area network (LAN) is one example of a network that exhibits both a physical topology and a logical topology. Any given node in the LAN will have one or more links to one or more other nodes in the network, and the mapping of these links and nodes onto a graph results in a geometrical shape that determines the physical topology of the network. Likewise, the mapping of the flow of data between the nodes in the network determines the logical topology of the network. It is important to note that the physical and logical topologies might be identical in any particular network, but they may also be different.

4.4.1 Basic types of topologies

The arrangement or mapping of the elements of a network gives rise to certain basic topologies which may then be combined to form more complex topologies (hybrid topologies). The most common of these basic types of topologies are:

- Bus (Linear, Linear Bus)
- Star
- Ring
- Mesh
  - partially connected mesh (or simply 'mesh')
  - fully connected mesh
- Tree
- Hybrid
- Point to Point

4.4.2 Classification of network topologies

There are also three basic categories of network topologies:

- physical topologies
- signal topologies
- logical topologies

The terms signal topology and logical topology are often used interchangeably, even though there is a subtle difference between the two and the distinction is not often made.

Physical topologies

The mapping of the nodes of a network and the physical connections between them, i.e., the layout of wiring, cables, the locations of nodes, and the interconnections between the nodes and the cabling or wiring system.[1][3]

Classification of physical topologies

Point-to-point

The simplest topology is a permanent link between two endpoints. Switched point-to-point topologies are the basic model of conventional telephony. The value of a permanent point-to-point network is the value of guaranteed, or nearly so, communications between the two endpoints. The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers, as expressed by Metcalfe's Law.

Permanent (dedicated). Easiest to understand, of the variations of point-to-point topology, is a point-to-point communications channel that appears, to the user, to be permanently associated with the two endpoints. Children's "tin-can telephone" is one example; a microphone wired to a single public address speaker is another. These are examples of physical dedicated channels.

Switched. Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up dynamically and dropped when no longer needed. This is the basic mode of conventional telephony.

Bus

Linear bus. The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has exactly two endpoints (this is the 'bus', which is also commonly referred to as the backbone, or trunk). All data that is transmitted between nodes in the network is transmitted over this common transmission medium and is able to be received by all nodes in the network virtually simultaneously (disregarding propagation delays).

Distributed bus. The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has more than two endpoints, created by adding branches to the main section of the transmission medium. The physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology (i.e., all nodes share a common transmission medium).
Star

The type of network topology in which each of the nodes of the network is connected to a central node with a point-to-point link in a 'hub' and 'spoke' fashion, the central node being the 'hub' and the nodes attached to it the 'spokes' (e.g., a collection of point-to-point links from the peripheral nodes that converge at a central node). All data transmitted between nodes in the network is transmitted to this central node, which is usually some type of device that then retransmits the data to some or all of the other nodes in the network, although the central node may also be a simple common connection point (such as a 'punch-down' block) without any active device to repeat the signals.

Extended star. A type of network topology in which a network based upon the physical star topology has one or more repeaters between the central node (the 'hub' of the star) and the peripheral or 'spoke' nodes. The repeaters are used to extend the maximum transmission distance of the point-to-point links between the central node and the peripheral nodes beyond that which is supported by the transmitter power of the central node, or beyond that which is supported by the standard upon which the physical layer of the physical star network is based.


Distributed star. A type of network topology composed of individual networks based upon the physical star topology, connected together in a linear fashion, i.e., 'daisy-chained', with no central or top-level connection point (e.g., two or more 'stacked' hubs, along with their associated star-connected nodes or 'spokes').

Ring

The type of network topology in which each of the nodes of the network is connected to two other nodes, with the first and last nodes connected to each other, forming a ring. All data transmitted between nodes travels from one node to the next in a circular manner, and the data generally flows in a single direction only.

Dual-ring. The type of network topology in which each node is connected to two other nodes with two connections each, the first and last nodes likewise being connected to each other with two connections, forming a double ring. The data flows in opposite directions around the two rings, although, generally, only one of the rings carries data during normal operation. The two rings are independent unless there is a failure or break in one of them, at which time the two rings are joined (by the stations on either side of the fault) so that the flow of data can continue, using a segment of the second ring to bypass the fault in the primary ring.

Mesh

The value of fully meshed networks is proportional to the exponent of the number of subscribers, assuming that communicating groups of any two endpoints, up to and including all the endpoints, is approximated by Reed's Law.

Fully connected. The type of network topology in which each of the nodes of the network is connected to each of the other nodes with a point-to-point link. This makes it possible for data to be simultaneously transmitted from any single node to all of the other nodes.
Partially connected. The type of network topology in which some of the nodes of the network are connected to more than one other node with a point-to-point link. This makes it possible to take advantage of some of the redundancy provided by a physical fully connected mesh topology, without the expense and complexity required for a connection between every node in the network.

Tree (also known as hierarchical)

The type of network topology in which a central 'root' node (the top level of the hierarchy) is connected to one or more other nodes one level lower in the hierarchy (i.e., the second level), with a point-to-point link between each second-level node and the root. Each second-level node in turn has one or more nodes one level lower (i.e., the third level) connected to it, also with point-to-point links; the root is the only node that has no other node above it in the hierarchy. The hierarchy of the tree is symmetrical: each node in the network has a specific, fixed number, f, of nodes connected to it at the next lower level in the hierarchy, the number f being referred to as the 'branching factor' of the hierarchical tree.

4.4.3 Hybrid network topologies

The hybrid topology is a type of network topology composed of one or more interconnections of two or more networks based upon different physical topologies, or of two or more networks based upon the same physical topology where the resulting network does not meet the definition of the original physical topology of the interconnected networks. For example, interconnecting two or more networks based upon the physical star topology might create a hybrid topology resembling a mixture of the physical star and physical bus topologies, or of the physical star and physical tree topologies, depending upon how the individual networks are interconnected; by contrast, interconnecting two or more networks based upon the physical distributed bus topology retains the topology of a physical distributed bus network.
Star-bus. A type of network topology in which the central nodes of one or more individual star networks are connected together using a common 'bus' network whose physical topology is based upon the physical linear bus topology, the endpoints of the common 'bus' being terminated with the characteristic impedance of the transmission medium where required. For example, two or more hubs connected to a common backbone with drop cables, through a port on each hub provided for that purpose (e.g., a properly configured 'uplink' port), would comprise the physical bus portion of the star-bus topology, while each of the individual hubs, combined with the nodes connected to them, would comprise the physical star portion.

Hierarchical star (star-of-stars). A type of network topology composed of an interconnection of individual star networks connected together in a hierarchical fashion to form a more complex network. For example, a top-level central node is the 'hub' of the top-level star topology, and other second-level central nodes are attached to it as 'spoke' nodes; each of these, in turn, may become the central node of a third-level star topology.
Star-wired ring. A type of hybrid physical network topology combining the physical star and physical ring topologies. The star portion consists of a network in which each node is connected to a central node with a point-to-point link in a 'hub' and 'spoke' fashion, identically to the physical star topology, while the ring portion consists of circuitry within the central node which routes the signals on the network to each of the connected nodes sequentially, in a circular fashion.

Hybrid mesh. A type of hybrid physical network topology combining the physical partially connected mesh topology with one or more other physical topologies, the mesh portion consisting of redundant or alternate connections between some of the nodes in the network. The physical hybrid mesh topology is commonly used in networks which require a high degree of availability.


Signal topology

The mapping of the actual connections between the nodes of a network, as evidenced by the path that the signals take when propagating between the nodes.

Logical topology

The mapping of the apparent connections between the nodes of a network, as evidenced by the path that data appears to take when traveling between the nodes.

4.4.4 Classification of logical topologies

The logical classification of network topologies generally follows the same classifications as the physical classification, the path that the data takes between nodes being used to determine the topology, as opposed to the actual physical connections.

Daisy chains

Except for star-based networks, the easiest way to add more computers into a network is by daisy-chaining, or connecting each computer in series to the next. If a message is intended for a computer partway down the line, each system bounces it along in sequence until it reaches the destination. A daisy-chained network can take two basic forms: linear and ring.

- A linear topology puts a two-way link between one computer and the next. However, this was expensive in the early days of computing, since each computer (except for the ones at each end) required two receivers and two transmitters.
- By connecting the computers at each end, a ring topology can be formed. An advantage of the ring is that the number of transmitters and receivers can be cut in half, since a message will eventually loop all of the way around. When a node sends a message, the message is processed by each computer in the ring. If a computer is not the destination node, it passes the message to the next node, until the message arrives at its destination. If the message is not accepted by any node on the network, it will travel around the entire ring and return to the sender. This potentially results in a doubling of travel time for data, but since it is traveling at a fairly insignificant multiple of the speed of light, the loss is usually negligible.
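The ring form of daisy-chaining can be sketched as a short simulation. The function below is a hypothetical helper that counts the hops a message takes around a unidirectional ring until it reaches its destination, or returns None if the message travels the whole ring back to the sender undelivered.

```python
def ring_deliver(stations, sender, dest):
    """Forward a message around a unidirectional ring, one station at
    a time. Returns the hop count on delivery, or None if the message
    comes back to the sender without being accepted."""
    n = len(stations)
    i = stations.index(sender)
    for hop in range(1, n + 1):
        station = stations[(i + hop) % n]
        if station == dest:
            return hop                 # destination accepted the message
        if station == sender:
            return None                # went all the way around: undeliverable
    return None

hops = ring_deliver(["A", "B", "C", "D"], "A", "C")  # passes B, then reaches C: 2 hops
```

In the worst case the destination is the sender's upstream neighbour, and the message traverses nearly the entire ring, which is the doubling of travel time mentioned above.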

Centralization

The star topology reduces the probability of a network failure by connecting all of the peripheral nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical bus network such as Ethernet, this central node (traditionally a hub) rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be unaffected. However, the disadvantage is that the failure of the central node will cause the failure of all of the peripheral nodes as well.

A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in a hierarchy. This tree has individual peripheral nodes (e.g., leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the functionality of the central node may be distributed. As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. If a link connecting a leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from the rest.

Decentralization

In a mesh topology (i.e., a partially connected mesh topology), there are at least two nodes with two or more paths between them, providing redundant paths to be used in case the link providing one of the paths fails. This decentralization is often used to compensate for the single-point-failure disadvantage present when a single device is used as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful. This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multi-dimensional ring, for instance, has a toroidal topology.

A fully connected network, complete topology, or full mesh topology is a network topology in which there is a direct link between all pairs of nodes. In a fully connected network with n nodes, there are n(n-1)/2 direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple paths for data provided by the large number of redundant links between nodes. This topology is mostly seen in military applications. However, it can also be seen in the file-sharing protocol BitTorrent, in which users connect to other users in the "swarm" by allowing each user sharing the file to connect to other users also involved.
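The n(n-1)/2 link count for a full mesh, and the reason such networks become expensive quickly, can be checked with a one-line sketch (the function name is illustrative):

```python
def full_mesh_links(n):
    """Direct point-to-point links needed to fully connect n nodes:
    one link per unordered pair, i.e. n*(n-1)/2."""
    return n * (n - 1) // 2

# 4 nodes need only 6 links, but 10 nodes already need 45,
# which is why full meshes are rarely built at scale.
```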
Often in actual usage of BitTorrent any given individual node is rarely connected to every single other node, as in a true fully connected network, but the protocol does allow for the possibility of any one node connecting to any other node when sharing files.

Hybrids

Hybrid networks use a combination of any two or more topologies in such a way that the resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, a tree network connected to a tree network is still a tree network, but two star networks connected together exhibit a hybrid network topology. A hybrid topology is always produced when two different basic network topologies are connected. Two common examples of hybrid networks are the star-ring network and the star-bus network:

- A star-ring network consists of two or more star topologies connected using a multistation access unit (MAU) as a centralized hub.
- A star-bus network consists of two or more star topologies connected using a bus trunk (the bus trunk serves as the network's backbone).

Checck Your Progress 2. Define Bus and Ring Topology. 3. Differentiate baseband and broadband technique. 4. Define Network Topology. Write its types. 4.5 MAC In computer networking a Media Access Control address (MAC address) or Ethernet Hardware Address (EHA) or hardware address or adapter address or physical address is a quasi-unique identifier attached to most network adapters (NIC or Network Interface Card). It is a

FOR MORE DETAILS VISIT US ON WWW.IMTSINSTITUTE.COM OR CALL ON +91-9999554621


COMPUTER COMMUNICATION & NETWORKS

93

number that serves as an identifier for a particular network adapter. Thus network cards (or built-in network adapters) in two different computers will have different MAC addresses, as would an Ethernet adapter and a wireless adapter in the same computer, and as would multiple network cards in a router. However, it is possible to change the MAC address on most of today's hardware, a practice often referred to as MAC spoofing.

Most layer 2 network protocols use one of three numbering spaces managed by the Institute of Electrical and Electronics Engineers (IEEE): MAC-48, EUI-48, and EUI-64, which are designed to be globally unique. Not all communications protocols use MAC addresses, and not all protocols require globally unique identifiers. The IEEE claims trademarks on the names "EUI-48" and "EUI-64" ("EUI" stands for Extended Unique Identifier).

MAC addresses, unlike IP addresses and IPX addresses, are not divided into "host" and "network" portions. Therefore, a host cannot determine from the MAC address of another host whether that host is on the same layer 2 network segment as the sending host or on a network segment bridged to that segment. ARP is commonly used to convert from addresses in a layer 3 protocol such as the Internet Protocol (IP) to the layer 2 MAC address.

On broadcast networks, such as Ethernet, the MAC address allows each host to be uniquely identified and allows frames to be marked for specific hosts. It thus forms the basis of most of the layer 2 networking upon which higher OSI layer protocols are built to produce complex, functioning networks.

4.5.1 Notational conventions

The standard (IEEE 802) format for printing MAC-48 addresses in human-readable media is six groups of two hexadecimal digits, separated by hyphens (-) in transmission order, e.g. 01-23-45-67-89-ab. This form is also commonly used for EUI-64. Other conventions include six groups of two digits separated by colons (:), e.g. 01:23:45:67:89:ab; or three groups of four hexadecimal digits separated by dots (.), e.g. 0123.4567.89ab; again in transmission order.
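The three printed notations just described carry the same twelve hexadecimal digits, so converting between them is purely a formatting exercise. A minimal sketch (the helper names are illustrative, not from any standard library):

```python
import re

def normalize_mac(mac: str) -> str:
    """Strip hyphen, colon, or dot separators and lowercase the digits."""
    digits = re.sub(r"[-:.]", "", mac).lower()
    if len(digits) != 12 or not all(c in "0123456789abcdef" for c in digits):
        raise ValueError(f"not a valid MAC-48 address: {mac!r}")
    return digits

def to_hyphen(mac: str) -> str:
    """Six groups of two digits separated by hyphens (IEEE 802 style)."""
    d = normalize_mac(mac)
    return "-".join(d[i:i + 2] for i in range(0, 12, 2))

def to_colon(mac: str) -> str:
    """Six groups of two digits separated by colons."""
    d = normalize_mac(mac)
    return ":".join(d[i:i + 2] for i in range(0, 12, 2))

def to_dot(mac: str) -> str:
    """Three groups of four digits separated by dots."""
    d = normalize_mac(mac)
    return ".".join(d[i:i + 4] for i in range(0, 12, 4))
```

For example, `to_dot("01-23-45-67-89-ab")` yields `"0123.4567.89ab"`.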


Address details

The original IEEE 802 MAC address comes from the original Xerox Ethernet addressing scheme. This 48-bit address space contains potentially 2^48, or 281,474,976,710,656, possible MAC addresses. All three numbering systems use the same format and differ only in the length of the identifier.

Addresses can be either "universally administered addresses" or "locally administered addresses." A universally administered address is uniquely assigned to a device by its manufacturer; these are sometimes called "burned-in addresses" (BIA). The first three octets (in transmission order) identify the organization that issued the identifier and are known as the Organizationally Unique Identifier (OUI). The following three (MAC-48 and EUI-48) or five (EUI-64) octets are assigned by that organization in nearly any manner it pleases, subject to the constraint of uniqueness. The IEEE expects the MAC-48 space to be exhausted no sooner than the year 2100; EUI-64s are not expected to run out in the foreseeable future.

A locally administered address is assigned to a device by a network administrator, overriding the burned-in address. Locally administered addresses do not contain OUIs.

Universally administered and locally administered addresses are distinguished by the second least significant bit of the most significant byte of the address. If the bit is 0, the address is universally administered; if it is 1, the address is locally administered. The bit is 0 in all OUIs. For example, consider 02-00-00-00-00-01: the most significant byte is 02h, whose binary value is 00000010, and the second least significant bit is 1, so this is a locally administered address.

If the least significant bit of the most significant byte is set to 0, the packet is meant to reach only one receiving NIC. This is called unicast. If the least significant bit of the most significant byte


is set to 1, the packet is meant to be sent only once but still reach several NICs. This is called multicast.

MAC-48 and EUI-48 addresses are usually shown in hexadecimal format, with each octet separated by a dash or colon. An example of a MAC-48 address would be "00-08-74-4C-7F-1D". If you cross-reference the first three octets with the IEEE's OUI assignments,[3] you can see that this MAC address came from Dell Computer Corp. The last three octets represent the serial number assigned to the adapter by the manufacturer.

The following technologies use the MAC-48 identifier format:

- Ethernet
- 802.11 wireless networks
- Bluetooth
- IEEE 802.5 token ring
- most other IEEE 802 networks
- FDDI
- ATM (switched virtual connections only, as part of an NSAP address)
- Fibre Channel and Serial Attached SCSI (as part of a World Wide Name)
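The OUI lookup and the U/L bit test described above can be sketched in a few lines (the Dell address is the example from the text; the helper names are illustrative):

```python
def _hex_digits(mac: str) -> str:
    """Strip hyphen/colon/dot separators, returning 12 hex digits."""
    return mac.replace("-", "").replace(":", "").replace(".", "").lower()

def oui(mac: str) -> str:
    """The first three octets: the Organizationally Unique Identifier."""
    d = _hex_digits(mac)
    return "-".join(d[i:i + 2] for i in range(0, 6, 2)).upper()

def is_locally_administered(mac: str) -> bool:
    """Test the U/L bit: the second least significant bit of the first octet.
    1 means locally administered; 0 means universally administered."""
    return bool(int(_hex_digits(mac)[:2], 16) & 0b10)
```

Applied to the examples in the text, `oui("00-08-74-4C-7F-1D")` gives the Dell OUI `"00-08-74"`, and `is_locally_administered("02-00-00-00-00-01")` is true because 02h has the U/L bit set.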

The distinction between EUI-48 and MAC-48 identifiers is purely semantic: MAC-48 is used for network hardware; EUI-48 is used to identify other devices and software. (Thus, by definition, an EUI-48 is not in fact a "MAC address", although it is syntactically indistinguishable from one and assigned from the same numbering space.) The IEEE now considers the label MAC-48 to be an obsolete term, previously used to refer to a specific type of EUI-48 identifier used to address hardware interfaces within existing 802-based networking applications, and says it should not be used in the future; instead, the term EUI-48 should be used for this purpose.

EUI-64 identifiers are used in:

- FireWire
- IPv6 (as the low-order 64 bits of a unicast network address when temporary addresses are not being used)
- ZigBee / 802.15.4 wireless personal-area networks

The IEEE has built in several special address types to allow more than one Network Interface Card to be addressed at one time:

- Packets sent to the broadcast address, all one bits, are received by all stations on a local area network. In hexadecimal the broadcast address would be "FF:FF:FF:FF:FF:FF".
- Packets sent to a multicast address are received by all stations on a LAN that have been configured to receive packets sent to that address.
- Functional addresses identify one or more Token Ring NICs that provide a particular service, defined in IEEE 802.5.

These are "group addresses", as opposed to "individual addresses"; the least significant bit of the first octet of a MAC address distinguishes individual addresses from group addresses. That bit is set to 0 in individual addresses and 1 in group addresses. Group addresses, like individual addresses, can be universally administered or locally administered.
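The individual/group distinction hangs on that single bit, so it can be tested directly. A minimal sketch (the multicast example address below is illustrative, not taken from the text):

```python
def first_octet(mac: str) -> int:
    """Numeric value of the first octet of a MAC address."""
    clean = mac.replace("-", "").replace(":", "").replace(".", "")
    return int(clean[:2], 16)

def is_group_address(mac: str) -> bool:
    """I/G bit: the least significant bit of the first octet; 1 marks a
    group (broadcast, multicast, or functional) address, 0 an individual one."""
    return bool(first_octet(mac) & 0x01)

def is_broadcast(mac: str) -> bool:
    """The broadcast address is all one bits: FF-FF-FF-FF-FF-FF."""
    clean = mac.replace("-", "").replace(":", "").replace(".", "").lower()
    return clean == "f" * 12
```

Every broadcast address is therefore also a group address, but not the reverse.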


Individual address block

An Individual Address Block (IAB) comprises a 24-bit OUI managed by the IEEE Registration Authority, followed by 12 IEEE-provided bits (identifying the organization), leaving 12 bits for the owner to assign to individual devices. An IAB is ideal for organizations requiring no more than 4096 (2^12) unique 48-bit numbers (EUI-48).

4.6 Ethernet

A standard Ethernet cord.

Ethernet is a family of frame-based computer networking technologies for local area networks (LANs). The name comes from the physical concept of the ether. It defines a number of wiring and signaling standards for the physical layer, a means of network access at the Media Access Control (MAC)/data link layer, and a common addressing format. Ethernet is standardized as IEEE 802.3.

The combination of the twisted pair versions of Ethernet for connecting end systems to the network, along with the fiber optic versions for site backbones,[1] is the most widespread wired LAN technology. It has been in use from around 1980 to the present, largely replacing competing LAN standards such as token ring, FDDI, and ARCNET.

4.6.1 History

Ethernet was originally developed at Xerox PARC in 1973–1975.[2] In 1975, Xerox filed a patent application listing Metcalfe and Boggs, plus Chuck Thacker and Butler Lampson, as inventors (U.S. Patent 4,063,220: Multipoint data communication system with collision detection). In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper.

The experimental Ethernet described in that paper ran at 3 Mbit/s and had 8-bit destination and source address fields, so Ethernet addresses were not the global addresses they are today. By software convention, the 16 bits after the destination and source address fields were a packet type field, but, as the paper says, "different protocols use disjoint sets of packet types", so those were packet types within a given protocol, rather than the packet type in current Ethernet, which specifies the protocol being used.

4.6.2 General description


A 1990s Ethernet network interface card. This is a combination card that supports both coaxial-based 10BASE2 (BNC connector, left) and twisted pair-based 10BASE-T, using an RJ45 (8P8C modular) connector (right).

Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are fundamental differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than in a radio broadcast. The common cable providing the communication channel was likened to the ether, and it was from this reference that the name "Ethernet" was derived.

From this early and comparatively simple concept, Ethernet evolved into the complex networking technology that today underlies most LANs. The coaxial cable was replaced with point-to-point links connected by Ethernet hubs and/or switches to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. StarLAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair network. The advent of twisted-pair wiring dramatically lowered installation costs relative to competing technologies, including the older Ethernet technologies.

Above the physical layer, Ethernet stations communicate by sending each other data packets, blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used both to specify the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced, or to use locally administered addresses.

Despite the significant changes in Ethernet from a thick coaxial cable bus running at 10 Mbit/s to point-to-point links running at 1 Gbit/s and beyond, all generations of Ethernet (excluding early experimental versions) share the same frame formats (and hence the same interface for higher layers), and can be readily interconnected. Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to support it, and the reduced panel space needed by twisted pair Ethernet, most manufacturers now build the functionality of an Ethernet card directly into PC motherboards, obviating the need for installation of a separate network card.

4.6.3 Dealing with multiple clients

CSMA/CD shared medium Ethernet

Ethernet originally used a shared coaxial cable (the shared medium) winding around a building or campus to every attached machine. A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than the competing token ring or token bus technologies. When a computer wanted to send some information, it used the following algorithm:

Main procedure

1. Frame ready for transmission.
2. Is medium idle? If not, wait until it becomes ready, then wait the interframe gap period (9.6 µs in 10 Mbit/s Ethernet).
3. Start transmitting.
4. Did a collision occur? If so, go to collision detected procedure.

5. Reset retransmission counters and end frame transmission.

Collision detected procedure

1. Continue transmission until minimum packet time is reached (jam signal) to ensure that all receivers detect the collision.
2. Increment retransmission counter.
3. Was the maximum number of transmission attempts reached? If so, abort transmission.
4. Calculate and wait a random backoff period based on the number of collisions.
5. Re-enter main procedure at stage 1.
This can be likened to what happens at a dinner party, where all the guests talk to each other through a common medium (the air). Before speaking, each guest politely waits for the current speaker to finish. If two guests start speaking at the same time, both stop and wait for short, random periods of time (in Ethernet, this time is generally measured in microseconds). The hope is that by each choosing a random period of time, both guests will not choose the same time to try to speak again, thus avoiding another collision. Exponentially increasing backoff times (determined using the truncated binary exponential backoff algorithm) are used when there is more than one failed attempt to transmit.

Computers were connected to an Attachment Unit Interface (AUI) transceiver, which was in turn connected to the cable (later, with thin Ethernet, the transceiver was integrated into the network adapter). While a simple passive wire was highly reliable for small Ethernets, it was not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, could make the whole Ethernet segment unusable. Multipoint systems are also prone to very strange failure modes when an electrical discontinuity reflects the signal in such a manner that some nodes work properly while others work slowly because of excessive retries, or not at all (see standing wave for an explanation of why); these could be much more painful to diagnose than a complete failure of the segment. Debugging such failures often involved several people crawling around wiggling connectors while others watched the displays of computers running a ping command and shouted out reports as performance changed.

Ethernet repeaters and hubs

For signal degradation and timing reasons, coaxial Ethernet segments had a restricted size which depended on the medium used. For example, 10BASE5 coax cables had a maximum length of 500 meters (1,640 ft).
Also, as was the case with most other high-speed buses, Ethernet segments had to be terminated with a resistor at each end. For coaxial-cable-based Ethernet, each end of the cable had a 50-ohm resistor attached. Typically this resistor was built into a male BNC or N connector and attached to the last device on the bus, or, if vampire taps were in use, to the end of the cable just past the last device. If termination was not done, or if there was a break in the cable, the AC signal on the bus was reflected, rather than dissipated, when it reached the end. This reflected signal was indistinguishable from a collision, and so no communication would be able to take place.

A greater length could be obtained by an Ethernet repeater, which took the signal from one Ethernet cable and repeated it onto another cable. If a collision was detected, the repeater transmitted a jam signal onto all ports to ensure collision detection. Repeaters could be used to connect segments such that there were up to five Ethernet segments between any two hosts, three of which could have attached devices. Repeaters could detect an improperly terminated link from the continuous collisions and stop forwarding data from it. Hence they alleviated the problem of cable breakages: when an Ethernet coax segment broke, while all devices on that segment were unable to communicate, repeaters allowed the other segments to continue working, although, depending on which segment was broken and the layout of the network, the partitioning

that resulted may have made other segments unable to reach important servers and thus effectively useless.

A twisted pair CAT-3 or CAT-5 cable is used to connect 10BASE-T Ethernet.

Ethernet on unshielded twisted-pair cables (UTP), beginning with StarLAN and continuing with 10BASE-T, was designed for point-to-point links only, and all termination was built into the device. This changed hubs from a specialist device used at the center of large networks to a device that every twisted pair-based network with more than two machines had to use. The tree structure that resulted made Ethernet networks more reliable by preventing faults with (but not deliberate misbehavior of) one peer or its associated cable from affecting other devices on the network, although a failure of a hub or an inter-hub link could still affect many users. Also, since twisted pair Ethernet is point-to-point and terminated inside the hardware, the total empty panel space required around a port is much reduced, making it easier to design hubs with many ports and to integrate Ethernet onto computer motherboards.

Despite the physical star topology, hubbed Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the hub, primarily the Collision Enforcement signal, in dealing with packet collisions. Every packet is sent to every port on the hub, so bandwidth and security problems aren't addressed. The total throughput of the hub is limited to that of a single link, and all links must operate at the same speed.

Collisions reduce throughput by their very nature. In the worst case, when there are many hosts with long cables that attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 summarized the results of having 20 fast nodes attempting to transmit packets of various sizes as quickly as possible on the same Ethernet segment.[4] The results showed that, even for the smallest Ethernet frames (64 B), 90% throughput on the LAN was the norm.
This is in comparison with token passing LANs (token ring, token bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits.

Bridging and switching

While repeaters could isolate some aspects of Ethernet segments, such as cable breakages, they still forwarded all traffic to all Ethernet devices. This created a practical limit on how many machines could communicate on an Ethernet network. Also, as the entire network was one collision domain and all hosts had to be able to detect collisions anywhere on the network, the number of repeaters between the farthest nodes was limited. Finally, segments joined by repeaters had to all operate at the same speed, making phased-in upgrades impossible.

To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. Bridges learn where devices are by watching MAC addresses, and do not forward packets across segments when they know the destination address is not located in that direction.
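This learn-and-filter behavior can be sketched as a toy forwarding table (class and method names here are illustrative, not from any standard API):

```python
class LearningBridge:
    """Sketch of transparent-bridge behavior: learn source addresses,
    forward known unicasts out one port, flood unknowns and broadcasts."""
    BROADCAST = "ff-ff-ff-ff-ff-ff"

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}          # MAC address -> port it was last seen on

    def handle_frame(self, src, dst, in_port):
        """Return the list of ports the frame should be sent out of."""
        self.table[src] = in_port                    # learning step
        if dst != self.BROADCAST and dst in self.table:
            out = self.table[dst]
            return [] if out == in_port else [out]   # filter or forward
        return sorted(self.ports - {in_port})        # flood
```

Until a destination has been learned, frames are flooded everywhere except the arrival port, which is exactly why a fresh bridge initially behaves like a hub.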

Prior to discovery of network devices on the different segments, Ethernet bridges and switches work somewhat like Ethernet hubs, passing all traffic between segments. However, as the switch discovers the addresses associated with each port, it forwards network traffic only to the necessary segments, improving overall performance. Broadcast traffic is still forwarded to all network segments. Bridges also overcame the limits on total segments between two hosts and allowed the mixing of speeds, both of which became very important with the introduction of Fast Ethernet.

Early bridges examined each packet one by one using software on a CPU, and some of them were significantly slower than hubs (multi-port repeaters) at forwarding traffic, especially when handling many ports at the same time. In 1989 the networking company Kalpana introduced their EtherSwitch, the first Ethernet switch. An Ethernet switch does bridging in hardware, allowing it to forward packets at full wire speed. It is important to remember that the term switch was invented by device manufacturers and does not appear in the 802.3 standard. Functionally, the two terms are interchangeable.

Since packets are typically delivered only to the port they are intended for, traffic on a switched Ethernet is slightly less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding. The bandwidth advantages, the slightly better isolation of devices from each other, the ability to easily mix different speeds of devices, and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology.

Dual speed hubs

In the early days of Fast Ethernet, Ethernet switches were relatively expensive devices.
However, hubs suffered from the problem that if there were any 10BASE-T devices connected then the whole system would have to run at 10 Mbit/s. Therefore, a compromise between a hub and a switch appeared, known as a dual speed hub. These devices consisted of an internal two-port switch dividing the 10BASE-T (10 Mbit/s) and 100BASE-T (100 Mbit/s) segments. The device would typically consist of more than two physical ports. When a network device becomes active on any of the physical ports, the device attaches it to either the 10BASE-T segment or the 100BASE-T segment, as appropriate. This avoided the need for an all-or-nothing migration from 10BASE-T to 100BASE-T networks. These devices are considered hubs because the traffic between devices connected at the same speed is not switched.

More advanced networks

Simple switched Ethernet networks, while an improvement over hub-based Ethernet, suffer from a number of issues:

- They suffer from single points of failure. If any link fails, some devices will be unable to communicate with other devices, and if the link that fails is in a central location, many users can be cut off from the resources they require.
- It is possible to trick switches or hosts into sending data to your machine even if it's not intended for it (see switch vulnerabilities).
- Large amounts of broadcast traffic, whether malicious, accidental, or simply a side effect of network size, can flood slower links and/or systems.
- It is possible for any host to flood the network with broadcast traffic, forming a denial of service attack against any hosts that run at the same or lower speed as the attacking device.
- As the network grows, normal broadcast traffic takes up an ever greater amount of bandwidth.

- If switches are not multicast aware, multicast traffic will end up treated like broadcast traffic, due to being directed at a MAC with no associated port.
- If switches discover more MAC addresses than they can store (either through network size or through an attack), some addresses must inevitably be dropped, and traffic to those addresses will be treated the same way as traffic to unknown addresses, that is, essentially the same as broadcast traffic (this issue is known as fail-open).
- They suffer from bandwidth choke points where a lot of traffic is forced down a single link.

Some switches offer a variety of tools to combat these issues, including:

- Spanning-tree protocol to maintain the active links of the network as a tree while allowing physical loops for redundancy.
- Various port protection features, as it is far more likely an attacker will be on an end system port than on a switch-switch link.
- VLANs to keep different classes of users separate while using the same physical infrastructure.
- Fast routing at higher levels to route between those VLANs.

- Link aggregation to add bandwidth to overloaded links and to provide some measure of redundancy, although the links won't protect against switch failure because they connect the same pair of switches.

Autonegotiation and duplex mismatch

Many different modes of operation (10BASE-T half duplex, 10BASE-T full duplex, 100BASE-TX half duplex, ...) exist for Ethernet over twisted pair cable using 8P8C modular connectors (not to be confused with the FCC's RJ45), and most devices are capable of several modes of operation. In 1995, a standard was released allowing two network interfaces connected to each other to autonegotiate the best possible shared mode of operation. This works well when every device is set to autonegotiate. The autonegotiation standard contained a mechanism for detecting the speed, but not the duplex setting, of Ethernet peers that did not use autonegotiation; it therefore cannot detect the duplex setting if the other device is not also set to autonegotiate. When two interfaces are connected and set to different "duplex" modes, the effect of the duplex mismatch is a network that works, but much more slowly than at its nominal speed. The primary rule for avoiding this is not to set one end of a connection to a forced full duplex setting and the other end to autonegotiation.

Physical layer

The first Ethernet networks, 10BASE5, used thick yellow cable with vampire taps as a shared medium (using CSMA/CD). Later, 10BASE2 Ethernet used thinner coaxial cable (with BNC connectors) as the shared CSMA/CD medium. The later StarLAN 1BASE5 and 10BASE-T used twisted pair connected to Ethernet hubs with 8P8C modular connectors (not to be confused with the FCC's RJ45).

Currently Ethernet has many varieties that vary both in speed and physical medium used. Perhaps the most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T.
All three utilize twisted pair cables and 8P8C modular connectors (often called RJ45). They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. However, each version has become steadily more selective about the cable it runs on, and some installers have avoided 1000BASE-T for everything except short connections to servers.

Fiber optic variants of Ethernet are commonly used in structured cabling applications. These variants have also seen substantial penetration in enterprise datacenter applications, but are rarely seen connected to end user systems for cost/convenience reasons. Their advantages lie in performance, electrical isolation, and distance, up to tens of kilometers with some versions. Fiber versions of a new higher speed almost invariably come out before copper. 10 gigabit Ethernet is becoming more popular in both enterprise and carrier networks, with development starting on 40 Gbit/s and 100 Gbit/s Ethernet.[6][7] Metcalfe now believes commercial applications using terabit Ethernet may occur by 2015, though he says existing Ethernet standards may have to be overthrown to reach terabit Ethernet.[8]

A data packet on the wire is called a frame. A frame viewed on the actual physical wire would show the Preamble and Start Frame Delimiter, in addition to the other data. These are required by all physical hardware. They are not displayed by packet sniffing software, because these bits are removed by the Ethernet adapter before being passed on to the host (in contrast, it is often the device driver which removes the CRC32 (FCS) from the packets seen by the user). The table below shows the complete Ethernet frame, as transmitted. Note that the bit patterns in the preamble and start of frame delimiter are written as bit strings, with the first bit transmitted on the left (not as byte values, which in Ethernet are transmitted least significant bit first). This notation matches the one used in the IEEE 802.3 standard.

Preamble:                 7 octets of 10101010
Start-of-Frame Delimiter: 1 octet of 10101011
MAC destination:          6 octets
MAC source:               6 octets
Ethertype/Length:         2 octets
Payload:                  46-1500 octets
CRC32:                    4 octets
Interframe gap:           12 octets

The fields from MAC destination through CRC32 total 64-1518 octets; including the preamble and start-of-frame delimiter, 72-1526 octets are transmitted on the wire.

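The sizes above, and the duration of the interframe gap at different bit rates, can be checked with a short sketch (the constant names are illustrative):

```python
# Field sizes in octets, from the Ethernet frame layout above.
PREAMBLE, SFD, DST, SRC, TYPE, CRC, IFG = 7, 1, 6, 6, 2, 4, 12

def frame_octets(payload_len: int) -> int:
    """Octets from destination address through CRC32 (the 64-1518 range)."""
    if not 46 <= payload_len <= 1500:
        raise ValueError("payload must be 46-1500 octets (short data is padded)")
    return DST + SRC + TYPE + payload_len + CRC

def interframe_gap_ns(bit_rate: int) -> float:
    """Duration in nanoseconds of the 12-octet (96-bit) interframe gap."""
    return IFG * 8 * 1e9 / bit_rate
```

A minimum payload gives `frame_octets(46) == 64`, a maximum one gives 1518, and the 96-bit gap takes 9600 ns at 10 Mbit/s, 960 ns at 100 Mbit/s, and 96 ns at 1 Gbit/s.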
After a frame has been sent, transmitters are required to transmit 12 octets of idle characters before transmitting the next frame. At 10 Mbit/s this takes 9600 ns, at 100 Mbit/s 960 ns, and at 1000 Mbit/s 96 ns.

10/100 Mbit/s transceiver chips (MII PHY) work with 4 bits (a nibble) at a time. Therefore the preamble will be 7 instances of 0101 + 0101, and the Start Frame Delimiter will be 0101 + 1101. 8-bit values are sent low nibble first, then high nibble. 1000 Mbit/s transceiver chips (GMII) work with 8 bits at a time, and the 10 Gbit/s (XGMII) PHY works with 32 bits at a time. Some implementations use larger jumbo frames.

Ethernet frame types and the EtherType field

There are several types of Ethernet frames:

- The Ethernet Version 2 or Ethernet II frame, the so-called DIX frame (named after DEC, Intel, and Xerox); this is the most common today, as it is often used directly by the Internet Protocol.
- Novell's non-standard variation of IEEE 802.3 ("raw 802.3 frame") without an IEEE 802.2 LLC header.
- IEEE 802.2 LLC frame
- IEEE 802.2 LLC/SNAP frame

In addition, Ethernet frames may optionally contain an IEEE 802.1Q tag to identify the VLAN they belong to and their IEEE 802.1p priority (quality of service). This encapsulation is defined in the IEEE 802.3ac specification and increases the maximum frame size by 4 bytes to 1522 bytes. The different frame types have different formats and MTU values, but can coexist on the same physical medium.
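The two-byte Tag Control Information of an 802.1Q tag can be unpacked as a sketch like the following (note: the single bit between the priority and the VLAN ID is the CFI/drop-eligible bit, which this text does not discuss, so its label here is an assumption from the 802.1Q layout):

```python
def parse_tci(tci: int) -> dict:
    """Split the 16-bit Tag Control Information of an 802.1Q tag:
    3-bit 802.1p priority, 1-bit CFI, 12-bit VLAN ID."""
    return {
        "priority": (tci >> 13) & 0x7,
        "cfi":      (tci >> 12) & 0x1,
        "vlan_id":  tci & 0x0FFF,
    }

def is_tagged(type_field: int) -> bool:
    """A value of 0x8100 in the EtherType/Length position marks a Q-tag."""
    return type_field == 0x8100
```

For instance, a TCI of 0x6064 decodes to priority 3 on VLAN 100.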

The most common Ethernet frame format, type II.

Versions 1.0 and 2.0 of the Digital/Intel/Xerox (DIX) Ethernet specification have a 16-bit sub-protocol label field called the EtherType. The original IEEE 802.3 Ethernet specification replaced that with a 16-bit length field, with the MAC header followed by an IEEE 802.2 logical link control (LLC) header; the maximum length of a packet was 1500 bytes. The two formats were eventually unified by the convention that values of that field between 0 and 1500 indicated the use of the original 802.3 Ethernet format with a length field, while values of 1536 decimal (0600 hexadecimal) and greater indicated the use of the DIX frame format with an EtherType sub-protocol identifier.[9] This convention allows software to determine whether a frame is an Ethernet II frame or an IEEE 802.3 frame, allowing the coexistence of both standards on the same physical medium.

By examining the 802.2 LLC header, it is possible to determine whether it is followed by a SNAP (subnetwork access protocol) header. Some protocols, particularly those designed for the OSI networking stack, operate directly on top of 802.2 LLC, which provides both datagram and connection-oriented network services. The LLC header includes two additional eight-bit address fields, called service access points or SAPs in OSI terminology; when both source and destination SAP are set to the value 0xAA, the SNAP service is requested. The SNAP header allows EtherType values to be used with all IEEE 802 protocols, as well as supporting private protocol ID spaces. In IEEE 802.3x-1997, the IEEE Ethernet standard was changed to explicitly allow the use of the 16-bit field after the MAC addresses as either a length field or a type field.

There exists an Internet standard for encapsulating IP version 4 traffic in IEEE 802.2 frames with LLC/SNAP headers.[10] It is almost never implemented on Ethernet (although it is used on FDDI and on token ring, IEEE 802.11, and other IEEE 802 networks). IP traffic cannot be encapsulated in IEEE 802.2 LLC frames without SNAP because, although there is an LLC protocol type for IP, there is no LLC protocol type for ARP. IP version 6 can also be transmitted over Ethernet using IEEE 802.2 with LLC/SNAP, but, again, that is almost never used (although LLC/SNAP encapsulation of IPv6 is used on IEEE 802 networks).

The IEEE 802.1Q tag, if present, is placed between the Source Address and the EtherType or Length fields. The first two bytes of the tag are the Tag Protocol Identifier (TPID) value of 0x8100. This is located in the same place as the EtherType/Length field in untagged frames, so an EtherType value of 0x8100 means the frame is tagged, and the true EtherType/Length is located after the Q-tag. The TPID is followed by two bytes containing the Tag Control Information (TCI): the IEEE 802.1p priority (quality of service) and the VLAN ID. The Q-tag is followed by the rest of the frame, using one of the types described above.

Runt frames

A runt frame is an Ethernet frame that is less than the IEEE 802.3 minimum length of 64 bytes. Possible causes are collisions, underruns, a bad network card, or software faults.[11][12]

4.6.4 Varieties of Ethernet

Some early varieties


10BASE5 -- the original standard uses a single coaxial cable into which you literally tapped a connection by drilling into the cable to connect to the core and screen. Largely obsolete, though due to its widespread deployment in the early days, some systems may still be in use.

10BROAD36 -- obsolete. An early standard supporting Ethernet over longer distances. It utilized broadband modulation techniques, similar to those employed in cable modem systems, and operated over coaxial cable.

1BASE5 -- an early attempt to standardize a low-cost LAN solution. It operates at 1 Mbit/s and was a commercial failure.

10 Mbit/s Ethernet

10BASE2 (also called ThinNet or Cheapernet) -- 50-ohm coaxial cable connects machines together, each machine using a T-adaptor to connect to its NIC. Requires terminators at each end. For many years this was the dominant 10 Mbit/s Ethernet standard.

10BASE-T -- runs over four wires (two twisted pairs) on a Cat-3 or Cat-5 cable. A hub or switch sits in the middle and has a port for each node. This is also the configuration used for 100BASE-T and gigabit Ethernet. 10 Mbit/s.

FOIRL -- fiber-optic inter-repeater link. The original standard for Ethernet over fibre.

10BASE-F -- a generic term for the family of 10 Mbit/s Ethernet standards over fibre: 10BASE-FL, 10BASE-FB and 10BASE-FP. Of these, only 10BASE-FL is in widespread use.

10BASE-FL -- an updated version of the FOIRL standard.

10BASE-FB -- intended for backbones connecting a number of hubs or switches; it is now obsolete.

10BASE-FP -- a passive star network that required no repeater; it was never implemented.

Fast Ethernet

100BASE-T -- a term for any of the three standards for 100 Mbit/s Ethernet over twisted pair cable: 100BASE-TX, 100BASE-T4 and 100BASE-T2.

100BASE-TX -- uses two pairs, but requires Cat-5 cable. Similar star-shaped configuration to 10BASE-T. 100 Mbit/s.


100BASE-T4 -- 100 Mbit/s Ethernet over Category 3 cabling (as used for 10BASE-T installations). Uses all four pairs in the cable. Now obsolete, as Category 5 cabling is the norm. Limited to half-duplex.

100BASE-T2 -- no products exist. 100 Mbit/s Ethernet over Category 3 cabling. Supports full-duplex, and uses only two pairs. It is functionally equivalent to 100BASE-TX, but supports old cable.

100BASE-FX -- 100 Mbit/s Ethernet over fibre.

Gigabit Ethernet

1000BASE-T -- 1 Gbit/s over Cat-5 copper cabling.

1000BASE-SX -- 1 Gbit/s over fiber.

1000BASE-LX -- 1 Gbit/s over fiber. Optimized for longer distances over single-mode fiber.

1000BASE-CX -- a short-haul solution (up to 25 m) for running 1 Gbit/s Ethernet over special copper cable. Predates 1000BASE-T, and is now obsolete.

10 gigabit Ethernet

The 10 gigabit Ethernet family of standards encompasses media types for single-mode fiber (long haul), multi-mode fiber (up to 300 m), copper backplane (up to 1 m) and copper twisted pair (up to 100 m). It was first standardized as IEEE Std 802.3ae-2002, but is now included in IEEE Std 802.3-2005.

10GBASE-SR -- designed to support short distances over deployed multi-mode fiber cabling, it has a range of between 26 m and 82 m depending on cable type. It also supports 300 m operation over a new 2000 MHz·km multi-mode fiber.

10GBASE-LX4 -- uses wavelength division multiplexing to support ranges of between 240 m and 300 m over deployed multi-mode cabling. Also supports 10 km over single-mode fiber.

10GBASE-LR and 10GBASE-ER -- these standards support 10 km and 40 km respectively over single-mode fiber.

10GBASE-SW, 10GBASE-LW and 10GBASE-EW -- these varieties use the WAN PHY, designed to interoperate with OC-192 / STM-64 SONET/SDH equipment. They correspond at the physical layer to 10GBASE-SR, 10GBASE-LR and 10GBASE-ER respectively, and hence use the same types of fiber and support the same distances. (There is no WAN PHY standard corresponding to 10GBASE-LX4.)

10 gigabit Ethernet is still an emerging technology, and it remains to be seen which of the standards will gain commercial acceptance.

Related standards

Networking standards that are not part of the IEEE 802.3 Ethernet standard, but support the Ethernet frame format, and are capable of interoperating with it:

LattisNet -- a SynOptics pre-standard twisted-pair 10 Mbit/s variant.

100BaseVG -- an early contender for 100 Mbit/s Ethernet. It runs over Category 3 cabling. Uses four pairs. Commercial failure.

TIA 100BASE-SX -- promoted by the Telecommunications Industry Association. 100BASE-SX is an alternative implementation of 100 Mbit/s Ethernet over fiber; it is incompatible with the official 100BASE-FX standard. Its main feature is interoperability with 10BASE-FL, supporting autonegotiation between 10 Mbit/s and 100 Mbit/s operation -- a feature lacking in the official


standards due to the use of differing LED wavelengths. It is targeted at the installed base of 10 Mbit/s fiber network installations.

TIA 1000BASE-TX -- promoted by the Telecommunications Industry Association, it was a commercial failure, and no products exist. 1000BASE-TX uses a simpler protocol than the official 1000BASE-T standard, so the electronics can be cheaper, but it requires Category 6 cabling.

Networking standards that do not use the Ethernet frame format but can still be connected to Ethernet using MAC-based bridging:

802.11 -- a standard for wireless networking, often paired with an Ethernet backbone.

10BaseS -- Ethernet over VDSL.

Long Reach Ethernet

Avionics Full-Duplex Switched Ethernet

Metro Ethernet

It has been observed that Ethernet traffic has self-similar properties, with important consequences for traffic engineering.

4.7 Fast Ethernet

In computer networking, Fast Ethernet is a collective term for a number of Ethernet standards that carry traffic at the nominal rate of 100 Mbit/s, against the original Ethernet speed of 10 Mbit/s. Of the 100 megabit Ethernet standards, 100BASE-TX (the "T" denoting twisted-pair copper) is by far the most common and is supported by the vast majority of Ethernet hardware currently produced. Full-duplex Fast Ethernet is sometimes referred to as "200 Mbit/s", though this is somewhat misleading, as that level of improvement will only be achieved if traffic patterns are symmetrical. Fast Ethernet was introduced in 1995 and remained the fastest version of Ethernet for three years before being superseded by gigabit Ethernet.

A Fast Ethernet adapter can be logically divided into a media access controller (MAC), which deals with the higher-level issues of medium availability, and a physical layer interface (PHY). The MAC may be linked to the PHY by a 4-bit 25 MHz synchronous parallel interface known as the MII. Repeaters (hubs) are also allowed and connect to multiple PHYs for their different interfaces. The MII interface may (rarely) be an external connection, but is usually a connection between ICs in a network adapter or even within a single IC. The specs are written based on the assumption that the interface between MAC and PHY will be MII, but they do not require it.

The MII interface fixes the theoretical maximum data bit rate for all versions of Fast Ethernet to 100 Mbit/s. The data signaling rate actually observed on real networks is less than the theoretical maximum, due to the necessary header and trailer (addressing and error-detection bits) on every frame, the occasional "lost frame" due to noise, and time spent waiting after each sent frame for other devices on the network to finish transmitting.

Copper


[Figure: 3Com 3c905-TX 100BASE-TX PCI network interface card]

100BASE-T is any of several Fast Ethernet standards for twisted pair cables, including: 100BASE-TX (100 Mbit/s over two-pair Cat5 or better cable), 100BASE-T4 (100 Mbit/s over four-pair Cat3 or better cable, defunct), and 100BASE-T2 (100 Mbit/s over two-pair Cat3 or better cable, also defunct). The segment length for a 100BASE-T cable is limited to 100 metres (328 ft), as with 10BASE-T and gigabit Ethernet. All are or were standards under IEEE 802.3 (approved 1995).

In the early days of Fast Ethernet, much vendor advertising centered on claims by competing standards that "ours will work better with existing cables than theirs." In practice, it was quickly discovered that few existing networks actually met the assumed standards, because 10-megabit Ethernet was very tolerant of minor deviations from specified electrical characteristics, and few installers ever bothered to make exact measurements of cable and connection quality; if Ethernet worked over a cable, it was deemed acceptable. Thus most networks had to be rewired for 100-megabit speed whether or not there had supposedly been Cat3 or Cat5 cable runs. The vast majority of common implementations or installations of 100BASE-T are done with 100BASE-TX.

100BASE-TX RJ-45 wiring (TIA/EIA-568-B T568B):

Pin  Pair  Wire  Color
1    2     1     white/orange
2    2     2     orange
3    3     1     white/green
4    1     2     blue
5    1     1     white/blue
6    3     2     green
7    4     1     white/brown
8    4     2     brown
100BASE-TX is the predominant form of Fast Ethernet, and runs over two pairs of Category 5 or better cable (a typical Category 5 cable contains four pairs and can therefore support two 100BASE-TX links). Like 10BASE-T, the proper pairs are the orange and green pairs (canonical second and third pairs) in TIA/EIA-568-B's termination standards, T568A or T568B. These pairs use pins 1, 2, 3 and 6.
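As an illustration of the wiring description above, the sketch below lists the four pins 100BASE-TX actually uses under the T568B termination. The TX/RX role labels are the usual station-side assignment and are our assumption, not something stated in the text:

```python
# Illustrative sketch: T568B colors (from the table above) and the four
# pins 100BASE-TX uses. The TX+/TX-/RX+/RX- roles are an assumed, typical
# station-side (MDI) assignment.
T568B_COLORS = {1: "white/orange", 2: "orange", 3: "white/green", 4: "blue",
                5: "white/blue", 6: "green", 7: "white/brown", 8: "brown"}
USED_PINS = {1: "TX+", 2: "TX-", 3: "RX+", 6: "RX-"}   # orange and green pairs

for pin in sorted(USED_PINS):
    print(pin, USED_PINS[pin], T568B_COLORS[pin])
```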


In T568A and T568B, wires are in the order 1, 2, 3, 6, 4, 5, 7, 8 on the modular jack at each end. The color order would be green/white, green, orange/white, blue, blue/white, orange, brown/white, brown for T568A, and orange/white, orange, green/white, blue, blue/white, green, brown/white, brown for T568B. Each network segment can have a maximum distance of 100 metres (330 ft). In its typical configuration, 100BASE-TX uses one pair of twisted wires in each direction, providing 100 Mbit/s of throughput in each direction (full-duplex). See IEEE 802.3 for more details.

With 100BASE-TX hardware, the raw bits (4 bits wide, clocked at 25 MHz at the MII) go through 4B5B binary encoding to generate a series of 0 and 1 symbols clocked at a 125 MHz symbol rate. The 4B5B encoding provides DC equalization and spectrum shaping (see the standard for details). Just as in the 100BASE-FX case, the bits are then transferred to the physical medium attachment layer using NRZI encoding. However, 100BASE-TX introduces an additional, medium-dependent sublayer, which employs MLT-3 as a final encoding of the data stream before transmission, resulting in a maximum "fundamental frequency" of 31.25 MHz. The procedure is borrowed from the ANSI X3.263 FDDI specifications, with minor discrepancies.

100BASE-T4

100BASE-T4 was an early implementation of Fast Ethernet. It requires four twisted copper pairs, but those pairs were only required to be Category 3 rather than the Category 5 required by TX. One pair is reserved for transmit, one for receive, and the remaining two will switch direction as negotiated. A very unusual 8B6T code is used to convert 8 data bits into 6 base-3 digits.

100BASE-T2

Symbol     Line signal level
000        0
001        +1
010        -1
011        -2
100 (ESC)  +2

In 100BASE-T2, the data is transmitted over two copper pairs, 4 bits per symbol. First, a 4-bit symbol is expanded into two 3-bit symbols through a non-trivial scrambling procedure based on a linear feedback shift register; see the standard for details. This is needed to flatten the bandwidth and emission spectrum of the signal, as well as to match transmission line properties. The mapping of the original bits to the symbol codes is not constant in time and has a fairly large period (appearing as a pseudo-random sequence). The final mapping from symbols to PAM-5 line modulation levels obeys the table above.


Fiber

100BASE-FX

100BASE-FX is a version of Fast Ethernet over optical fiber. It uses a 1300 nm near-infrared (NIR) light wavelength transmitted via two strands of optical fiber, one for receive (RX) and the other for transmit (TX). Maximum length is 400 meters (1,310 ft) for half-duplex connections (to ensure collisions are detected), or 2 kilometers (6,600 ft) for full-duplex over multimode optical fiber. Longer distances are possible when using single-mode optical fiber. 100BASE-FX uses the same 4B5B encoding and NRZI line code that 100BASE-TX does. 100BASE-FX should use SC, ST, or MIC connectors, with SC being the preferred option. 100BASE-FX is not compatible with 10BASE-FL, the 10 Mbit/s version over optical fibre.

100BASE-SX

100BASE-SX is a version of Fast Ethernet over optical fiber. It uses two strands of multi-mode optical fiber for receive and transmit. It is a lower-cost alternative to 100BASE-FX, because it uses short-wavelength optics, which are significantly less expensive than the long-wavelength optics used in 100BASE-FX. 100BASE-SX can operate at distances up to 300 meters (980 ft). 100BASE-SX uses the same wavelength as 10BASE-FL, the 10 Mbit/s version over optical fiber. Unlike 100BASE-FX, this allows 100BASE-SX to be backwards-compatible with 10BASE-FL.

100BASE-BX

100BASE-BX is a version of Fast Ethernet over a single strand of optical fiber (unlike 100BASE-FX, which uses a pair of fibers). Single-mode fiber is used, along with a special multiplexer which splits the signal into transmit and receive wavelengths.

Check Your Progress

5. What does MAC stand for?
6. What is Fast Ethernet?

4.8 Summary

This lesson described the basic concepts of LAN architecture. From this lesson we understood the different types of network topologies and the addressing conventions of MAC. Further, it explained the general description and varieties of Ethernet, and gave a brief introduction to Fast Ethernet.

4.9 Keywords

DNS: The Domain Name System (DNS) is used to resolve a host name to an IP address.

RIP: The Routing Information Protocol (RIP) is a routing protocol that routers use to exchange routing information on an IP internetwork.

SNMP: The Simple Network Management Protocol (SNMP) is used between a network management console and network devices (routers, bridges, intelligent hubs) to collect and exchange network management information.


4.11 Check Your Progress

1. What is LAN architecture?
…………………………………………………………………………………………

2. Define bus and ring topology.
…………………………………………………………………………………………

3. Differentiate baseband and broadband techniques.
…………………………………………………………………………………………

4. Define network topology. Write its types.
…………………………………………………………………………………………

5. What does MAC stand for?
…………………………………………………………………………………………

6. What is Fast Ethernet?
…………………………………………………………………………………………

Answers to Check Your Progress

1. Local Area Networks (LANs) have become an important part of most computer installations. Personal computers have been the main driving force behind the LAN proliferation.

2. The bus topology uses a broadcast technique, hence only one station at a time can send messages and all other stations listen to the message. The ring topology uses a closed, point-to-point-connected loop of stations. Data flows in one direction only, from one station to the next. As with the bus topology, transmission is restricted to one user at a time.

3. In the baseband technique, the digital signal from a transmitting device is directly introduced into the transmission medium (possibly after some conditioning). In the broadband technique, a modem is used to transform the digital signal from a transmitting device into a high-frequency analog signal. This signal is typically frequency multiplexed to provide multiple FDM channels over the same transmission medium.


4. Network topology is the study of the arrangement or mapping of the elements (links, nodes, etc.) of a network, especially the physical (real) and logical (virtual) interconnections between nodes. The most common of these basic types of topologies are:

• Bus (linear, linear bus)
• Star
• Ring
• Mesh
  • partially connected mesh (or simply 'mesh')
  • fully connected mesh
• Tree
• Hybrid
• Point to point

5. In computer networking, a Media Access Control address (MAC address), also called an Ethernet Hardware Address (EHA), hardware address, adapter address or physical address, is a quasi-unique identifier attached to most network adapters (NICs, or Network Interface Cards).

6. In computer networking, Fast Ethernet is a collective term for a number of Ethernet standards that carry traffic at the nominal rate of 100 Mbit/s, against the original Ethernet speed of 10 Mbit/s.


UNIT-5 WIRELESS LANS

Structure
5.0 Introduction
5.1 Objectives
5.2 Definition
5.3 Token Ring
5.3.1 Modes of operation
5.3.2 Token format
5.3.3 Frame format
5.3.4 Ring Maintenance
5.3.5 Other ring networks
5.4 FDDI
5.5 Wireless LANs
5.5.1 Benefits
5.5.2 Disadvantages
5.5.3 Architecture
5.5.4 Types of wireless LANs
5.6 Bridges
5.6.1 Advantages of network bridges
5.6.2 Disadvantages of network bridges
5.6.3 Bridging versus routing
5.7 Summary
5.8 Keywords
5.9 Exercise and Questions
5.10 Check Your Progress
5.11 Further Reading


5.0 Introduction

This lesson describes the format and operation of the token ring. We will also look at FDDI, ring maintenance and other types of ring networks. Further, it explains the benefits, advantages and disadvantages of wireless LANs, and the types of wireless LANs. Finally, we will see the advantages and disadvantages of bridges in a network.

5.1 Objectives

After reading this lesson you should be able to:
• Describe the token ring format
• Explain briefly the concept and usage of FDDI protocols
• Differentiate the advantages and disadvantages of wireless LANs
• List out the types of wireless LANs
• Discuss in detail the advantages and disadvantages of bridges

5.2 Definition

A wireless LAN or WLAN is a wireless local area network: the linking of two or more computers or devices without using wires. A WLAN uses spread-spectrum or OFDM modulation technology based on radio waves to enable communication between devices in a limited area, also known as the basic service set.

5.3 Token Ring

• A token ring is formed by nodes connected in a ring format, as shown in the diagram below. The principle used in the token ring network is that a token circulates in the ring, and whichever node grabs that token has the right to transmit data.
• Whenever a station wants to transmit a frame, it inverts a single bit of the 3-byte token, which instantaneously changes it into a normal data packet. Because there is only one token, there can be at most one transmission at a time.
• Since the token rotates in the ring, it is guaranteed that every node gets the token within some specified time. So there is an upper bound on the time spent waiting to grab the token, and starvation is avoided.
• There is also an upper limit of 250 on the number of nodes in the network.
• To distinguish normal data packets from the token (a control packet), a special sequence is assigned to the token packet. When any node gets the token, it first sends the data it wants to send, then re-circulates the token.
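The round-robin principle in the list above can be illustrated with a toy simulation. The station names and the dictionary-based interface are purely our own, for illustration:

```python
# Toy sketch of the token ring principle: a single token visits stations
# in ring order, and only the station holding it may transmit.
def circulate(stations, frames_to_send, rounds=2):
    """stations: list of names in ring order;
    frames_to_send: dict mapping a name to a pending frame (or nothing)."""
    log = []
    for _ in range(rounds):
        for s in stations:                 # token passes around the ring
            frame = frames_to_send.get(s)
            if frame is not None:
                log.append((s, frame))     # holder transmits its frame...
                frames_to_send[s] = None   # ...then releases the token
    return log

print(circulate(["A", "B", "C"], {"B": "hello", "C": "world"}))
# [('B', 'hello'), ('C', 'world')]
```

Note how every waiting station is served within one full rotation, which is the property that bounds the waiting time and rules out starvation.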


If a node transmits the token and nobody wants to send data, the token comes back to the sender. If the first bit of the token reaches the sender before the transmission of the last bit, an error situation arises. So to avoid this we should have:

propagation delay + n-bit station delay (a 1-bit delay at each node) > token transmission time

A station may hold the token for the token-holding time, which is 10 ms unless the installation sets a different value. If there is enough time left after the first frame has been transmitted to send more frames, then these frames may be sent as well. After all pending frames have been transmitted, or when transmitting another frame would exceed the token-holding time, the station regenerates the 3-byte token frame and puts it back on the ring.

5.3.1 Modes of Operation

1. Listen Mode: In this mode the node listens to the data and transmits the data to the next node. There is a one-bit delay associated with the transmission in this mode.

2. Transmit Mode: In this mode the node drains any incoming data from the ring and puts its own data onto the network.


3. Bypass Mode: This mode is reached when the node is down. Any data is simply bypassed, and there is no one-bit delay in this mode.
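The timing condition stated earlier (the ring's total latency must exceed the token transmission time, so the 24-bit token fits on the ring) can be checked numerically. A sketch with illustrative figures only; the 4 Mbit/s bit rate and the propagation speed are assumptions, not values from the text:

```python
# Sketch of the timing condition: ring latency (propagation delay plus the
# 1-bit delay per station) must exceed the time to transmit the 24-bit token.
# All numeric defaults here are illustrative assumptions.
def ring_holds_token(ring_length_m, stations, bit_rate=4_000_000,
                     propagation_mps=200_000_000, token_bits=24):
    ring_latency = ring_length_m / propagation_mps + stations / bit_rate
    token_time = token_bits / bit_rate
    return ring_latency > token_time

print(ring_holds_token(1000, 30))  # 30 station delays alone exceed 24 bit times
print(ring_holds_token(10, 5))     # too short: the token does not fit
```

When the condition fails, extra delay must be added somewhere, which is exactly the role of the monitor's 24-bit buffer discussed later under phase jitter compensation.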

Token Ring Using Ring Concentrator

One problem with a ring network is that if the cable breaks somewhere, the ring dies. This problem is elegantly addressed by using a ring concentrator. A token ring concentrator simply changes the topology from a physical ring to a star-wired ring, but the network still remains a ring logically. Physically, each station is connected to the ring concentrator (wire center) by a cable containing at least two twisted pairs, one for data to the station and one for data from the station.

The token still circulates around the network and is still controlled in the same manner; however, using a hub or a switch greatly improves reliability, because the hub can automatically bypass any ports that are disconnected or have a cabling fault. This is done by having bypass relays inside the concentrator that are energized by current from the stations. If the ring breaks or a station goes down, loss of the drive current will release the relay and bypass the station. The ring can then continue operation with the bad segment bypassed.

Who should remove the packet from the ring? There are three possibilities:

1. The source itself removes the packet after one full round in the ring.

2. The destination removes it after accepting it. This has two potential problems: firstly, the solution won't work for broadcast or multicast, and secondly, there would be no way to acknowledge the sender about the receipt of the packet.

3. Have a specialized node only to discard packets. This is a bad solution, as the specialized node would know that the packet has been received by the destination only


when it receives the packet the second time, and by that time the packet may have actually made about one and a half (or almost two, in the worst case) rounds in the ring.

Thus the first solution is adopted, with the source itself removing the packet from the ring after one full round. With this scheme, broadcasting and multicasting can be handled, and the destination can acknowledge the source about the receipt of the packet (or can tell the source about some error).

5.3.2 Token Format

The token is the shortest frame transmitted (24 bits). The MSB (most significant bit) is always transmitted first, as opposed to Ethernet.

SD AC ED

SD = Starting Delimiter (1 octet)
AC = Access Control (1 octet)
ED = Ending Delimiter (1 octet)

Starting Delimiter format:

J K 0 J K 0 0 0

J = code violation
K = code violation

Access Control format:

P P P T M R R R

T = Token. T = 0 for a token, T = 1 for a frame. When a station with a frame to transmit detects a token which has a priority equal to or less than that of the frame to be transmitted, it may change the token to a start-of-frame sequence and transmit the frame.

P = Priority. The priority bits indicate the token's priority and, therefore, which stations are allowed to use it. A station can transmit if its priority is at least as high as that of the token.

M = Monitor. The monitor bit is used to prevent a token whose priority is greater than 0, or any frame, from continuously circulating on the ring. If the active monitor detects a frame or a high-priority token with the monitor bit equal to 1, the frame or token is aborted. This bit shall be transmitted as 0 in all frames and tokens. The active monitor inspects and modifies this bit; all other stations shall repeat this bit as received.

R = Reserved bits. The reserved bits allow stations with high-priority frames to request that the next token be issued at the requested priority.


5.3.3 Frame Format

The MSB (most significant bit) is always transmitted first, as opposed to Ethernet.

SD AC FC DA SA DATA CRC ED FS

SD = Starting Delimiter (1 octet)
AC = Access Control (1 octet)
FC = Frame Control (1 octet)
DA = Destination Address (2 or 6 octets)
SA = Source Address (2 or 6 octets)
DATA = Information (0 or more octets, up to 4027)
CRC = Checksum (4 octets)
ED = Ending Delimiter (1 octet)
FS = Frame Status

Starting Delimiter format:

J K 0 J K 0 0 0

J = code violation
K = code violation

Access Control format:

P P P T M R R R

T = Token. When a station with a frame to transmit detects a token which has a priority equal to or less than that of the frame to be transmitted, it may change the token to a start-of-frame sequence and transmit the frame.

P = Priority bits. The priority bits indicate the token's priority and, therefore, which stations are allowed to use it. A station can transmit if its priority is at least as high as that of the token.

M = Monitor. The monitor bit is used to prevent a token whose priority is greater than 0, or any frame, from continuously circulating on the ring. If the active monitor detects a frame or a high-priority token with the monitor bit equal to 1, the frame or token is aborted. This bit shall be transmitted as 0 in all frames and tokens. The active monitor inspects and modifies this bit; all other stations shall repeat this bit as received.

R = Reserved bits. The reserved bits allow stations with high-priority frames to request that the next token be issued at the requested priority.

Universal (global) address format:

I/G (1 bit) | L/U (1 bit) | RING ADDRESS (14 bits) | NODE ADDRESS (32 bits)


The first bit specifies individual or group address. The second bit specifies local or global (universal) address.

Local group addresses (16 bits):

I/G (1 bit) | T/B (1 bit) | GROUP ADDRESS (14 bits)

The first bit specifies an individual or group address, as above. The second bit specifies a traditional or bit-signature group address.

Traditional group address: 2^14 groups can be defined.

Bit-signature group address: 14 groups are defined, and a host can be a member of none or any number of them. For multicasting, the bits of the groups the packet should go to are set; for broadcasting, all 14 bits are set. A host receives a packet only if it is a member of a group whose corresponding bit is set to 1.

Data format: There is no upper limit on the amount of data as such, but it is limited by the token-holding time.

Checksum: The source computes and sets this value. The destination also calculates this value; if the two differ, it indicates an error, otherwise the data may be correct.

Frame Status: It contains the A and C bits.

A bit set to 1: destination recognized the packet.
C bit set to 1: destination accepted the packet.

This arrangement provides an automatic acknowledgement for each frame. The A and C bits are present twice in the Frame Status to increase reliability, inasmuch as they are not covered by the checksum.

Ending Delimiter format:

J K 1 J K 1 I E

J = code violation
K = code violation
I = Intermediate Frame bit. If this bit is set to 1, it indicates that this packet is an intermediate part of a bigger packet; the last packet would have this bit set to 0.
E = Error Detected bit. This bit is set if any interface detects an error.
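The Access Control octet layout (P P P T M R R R) described above can be unpacked mechanically. A minimal sketch; the field names are of our own choosing:

```python
# Sketch: unpacking the Access Control octet (P P P T M R R R), MSB first,
# per the field descriptions above.
def parse_access_control(ac: int):
    return {
        "priority":    (ac >> 5) & 0b111,  # P bits
        "token":       (ac >> 4) & 1,      # T: 0 = token, 1 = frame
        "monitor":     (ac >> 3) & 1,      # M: set by the active monitor
        "reservation": ac & 0b111,         # R bits
    }

print(parse_access_control(0b01010011))
# {'priority': 2, 'token': 1, 'monitor': 0, 'reservation': 3}
```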


Phase Jitter Compensation

In a token ring, the source starts discarding all its previously transmitted bits as soon as they circumnavigate the ring and reach the source. Hence, it is not desirable that, while a token is being sent, some bits of the token which have already been sent become available at the incoming end of the source. This behavior, though, is desirable in the case of data packets, which ought to be drained from the ring once they have gone around it. To achieve this behavior with respect to tokens, we would like the ring to hold at least 24 bits at a time. How do we ensure this?

Each node in a ring introduces a 1-bit delay, so one approach might be to set the minimum number of nodes in a ring to 24. But this is not a viable option. The actual solution is as follows: one node in the ring is designated as the "monitor". The monitor maintains a 24-bit buffer, with the help of which it introduces a 24-bit delay. The catch here is: what if the clocks of the nodes following the source are faster than the source? In this case the 24-bit delay of the monitor would be less than the 24-bit delay desired by the host. To avoid this situation, the monitor maintains 3 extra bits to compensate for the faster bits. The 3 extra bits suffice even if the bits are 10% faster. This compensation is called phase jitter compensation.

Handling multiple priority frames

Each node or packet has a priority level. We do not concern ourselves with how this priority is decided. The first 3 bits of the Access Control byte in the token are for priority and the last 3 are for reservation:

P P P T M R R R

Initially the reservation bits are set to 000. When a node wants to transmit a priority-n frame, it must wait until it can capture a token whose priority is less than or equal to n. Furthermore, when a data frame goes by, a station can try to reserve the next token by writing the priority of the frame it wants to send into the frame's reservation bits.
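The reservation rule just described can be sketched as a small helper. We assume a strict "higher priority wins" comparison, matching the statement that a station cannot overwrite a reservation of higher priority:

```python
# Sketch of the reservation rule: a station may raise the 3-bit reservation
# field in a passing frame only if its own frame's priority is higher than
# the reservation already present.
def try_reserve(current_reservation, my_priority):
    """Return (new reservation value, whether our reservation was made)."""
    if my_priority > current_reservation:
        return my_priority, True
    return current_reservation, False   # a higher reservation stands

print(try_reserve(2, 5))  # (5, True)  - reservation raised
print(try_reserve(6, 4))  # (6, False) - higher priority already reserved
```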
However, if a higher priority has already been reserved there, the station cannot make a reservation. When the current frame is finished, the next token is generated at the priority that has been reserved.

A slight problem with the above reservation procedure is that the reservation priority keeps on increasing. To solve this problem, the station raising the priority remembers the reservation priority that it replaces, and when it is done it reduces the priority to the previous priority. Note that in a token ring, low-priority frames may starve.

5.3.4 Ring Maintenance

Each token ring has a monitor that oversees the ring. Among the monitor's responsibilities are seeing that the token is not lost, taking action when the ring breaks, cleaning the ring when garbled frames appear, and watching out for orphan frames. An orphan frame occurs when a station transmits a short frame in its entirety onto a long ring and then crashes or is powered down before the frame can be removed. If nothing is done, the frame circulates indefinitely.



Detection of orphan frames: The monitor detects orphan frames by setting the monitor bit in the Access Control byte of every frame that passes through. If an incoming frame already has this bit set, something is wrong, since the same frame has passed the monitor twice. Evidently it was not removed by the source, so the monitor drains it.

Lost tokens: The monitor has a timer that is set to the longest possible tokenless interval: the case in which every node transmits for the full token-holding time. If this timer goes off, the monitor drains the ring and issues a fresh token.
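As a rough illustration of how that timer might be dimensioned: the worst case is every station holding the token for its full holding time, plus one trip around the ring. The function name and the numbers below are made up for the example.

```python
def lost_token_timeout_ms(num_stations, token_holding_time_ms, ring_latency_ms):
    # Longest possible tokenless interval: every station transmits for
    # its full token-holding time, plus one full trip around the ring.
    return num_stations * token_holding_time_ms + ring_latency_ms

# e.g. 20 stations, 10 ms holding time each, 1 ms ring latency
assert lost_token_timeout_ms(20, 10, 1) == 201
```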

FOR MORE DETAILS VISIT US ON WWW.IMTSINSTITUTE.COM OR CALL ON +91-9999554621


COMPUTER COMMUNICATION & NETWORKS 


Garbled frames: The monitor can detect such frames by their invalid format or checksum; it then drains the ring and issues a fresh token.

The token ring control frames for maintenance are:

Control field   Name                       Meaning
00000000        Duplicate address test     Test if two stations have the same address
00000010        Beacon                     Used to locate breaks in the ring
00000011        Claim token                Attempt to become monitor
00000100        Purge                      Reinitialize the ring
00000101        Active monitor present     Issued periodically by the monitor
00000110        Standby monitor present    Announces the presence of potential monitors

The monitor periodically issues an "Active Monitor Present" message informing all nodes of its presence. When this message is not received for a specific time interval, the nodes detect a monitor failure. Each node that believes it can function as a monitor broadcasts a "Standby Monitor Present" message at regular intervals, indicating that it is ready to take on the monitor's job. Any node that detects failure of the monitor issues a "Claim" token. There are three possible outcomes:

1. If the issuing node gets back its own claim token, it becomes the monitor.
2. If a packet different from a claim token is received, apparently the guess of monitor failure was wrong. In this case, on receipt of our own claim token we discard it. Note that our claim token may have been removed by some other node which detected this error.
3. If some other node has also issued a claim token, the node with the larger address becomes the monitor.

To resolve duplicate-address errors, whenever a node comes up it sends a "Duplicate Address Detection" message (with destination = source) across the network. If the address-recognized bit has been set on receipt of the message, the issuing node realizes there is a duplicate address and goes into standby mode.

A node informs other nodes of the removal of a packet from the ring through a "Purge" message. One maintenance function that the monitor cannot handle is locating breaks in the ring. If no activity is detected in the ring (e.g. the monitor fails to issue the Active Monitor Present token), the usual procedure of sending a claim token is followed. If neither the claim token nor packets of any other kind are received, the node sends "Beacons" at regular intervals until a message is received indicating that the broken ring has been repaired.

5.3.5 Other Ring Networks

The problem with the token ring system is that large rings cause large delays.
It must be made possible for multiple packets to be on the ring simultaneously. The following ring networks resolve this problem to some extent.

Slotted Ring:


In this system, the ring is slotted into a number of fixed-size frames that continuously circulate around it. This requires a large enough ring (enough nodes) to ensure that all the bits of these frames can be on the ring at the same time. The frame header contains information about whether each slot is empty or full. The usual overhead and wastage associated with fixed-size frames apply.

Register Insertion Rings: This is an improvement over the slotted-ring architecture. The network interface consists of two registers: a shift register and an output buffer. At startup, the input pointer points to the rightmost bit position in the input shift register. When a bit arrives, it goes into the rightmost empty position (the one indicated by the input pointer). After the node has detected that the frame is not addressed to it, the bits are transmitted one at a time (by shifting). As new bits come in, they are inserted at the position indicated by the pointer and the contents are then shifted; thus the pointer does not move. Once the shift register has pushed out the last bit of a frame, it checks whether it has an output frame waiting. If so, it checks that the number of empty slots in the shift register is at least equal to the number of bits in the output frame. The output connection is then switched to this second register and, after the register has emptied its contents, the output line is switched back to the shift register. Thus no single node can hog the bandwidth: in a loaded system, a node can transmit a k-bit frame only if it has saved up k bits of interframe gaps.
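The transmit rule above, that a frame may go out only once the shift register has accumulated enough free room, can be sketched as follows. The `RegisterInsertionInterface` class is an invented toy model, not real hardware behaviour.

```python
class RegisterInsertionInterface:
    """Toy model: ring bits accumulate in a shift register, and a waiting
    output frame may be sent only once the register has enough free room
    to absorb incoming bits for the whole frame time."""
    def __init__(self, register_size):
        self.register_size = register_size
        self.buffered = 0        # bits currently held in the shift register
        self.output_frame = 0    # length (bits) of the frame waiting to send

    def bit_in(self):
        self.buffered += 1       # a bit arrives from the ring

    def bit_out(self):
        if self.buffered:
            self.buffered -= 1   # a bit is shifted back onto the ring

    def try_send(self):
        # The saved-up empty space must cover the whole output frame.
        free = self.register_size - self.buffered
        if self.output_frame and free >= self.output_frame:
            sent, self.output_frame = self.output_frame, 0
            return sent
        return None

iface = RegisterInsertionInterface(register_size=64)
iface.output_frame = 48
for _ in range(32):
    iface.bit_in()               # 32 bits buffered -> only 32 bits free
assert iface.try_send() is None  # not enough room yet
for _ in range(20):
    iface.bit_out()              # drain to 12 buffered -> 52 bits free
assert iface.try_send() == 48    # now the 48-bit frame can go out
```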


Two major disadvantages of this topology are complicated hardware and difficulty in detecting the start and end of packets.

Contention Ring

The token ring has primarily two problems:

	Under light load, a large overhead is incurred for token passing.
	Nodes with low-priority data may starve if there is always a node with high-priority data.

A contention ring attempts to address these problems. In a contention ring, if there has been no communication on the ring for a while, a sender node will send its data immediately, followed by a token. If the token comes back to the sender without any data packet in between, the sender removes it from the ring. Under heavy load, however, the behaviour is that of a normal token ring. In case of a collision, each of the sending nodes will remove the other's data packet from the ring, back off for a random period of time and then resend its data.

IEEE 802.4: Token Bus Network

In this system, the nodes are physically connected as a bus but logically form a ring, with tokens passed around to determine the turns for sending. It has the robustness of the 802.3 broadcast cable and the known worst-case behaviour of a ring. The structure of a token bus network is as follows:

Frame Structure

An 802.4 frame has the following fields:

	Preamble: Used to synchronize the receiver's clock.
	Starting Delimiter (SD) and Ending Delimiter (ED): Used to mark frame boundaries. Both contain analog encodings of symbols other than 1 or 0, so that they cannot occur accidentally in the user data; hence no length field is needed.


	Frame Control (FC): Distinguishes data frames from control frames. For data frames it carries the frame's priority, as well as a bit which the destination can set as an acknowledgement. For control frames it specifies the frame type; the allowed types include token passing and various ring-maintenance frames.
	Destination and Source Address: May be 2 bytes (for a local address) or 6 bytes (for a global address).
	Data: Carries the actual data; it may be up to 8182 bytes when 2-byte addresses are used and up to 8174 bytes for 6-byte addresses.
	Checksum: A 4-byte checksum calculated over the data, used in error detection.
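The field layout can be illustrated with a small byte-packing sketch. Note the assumptions here: the `<SD>`/`<ED>` placeholders stand in for the analog delimiter symbols (which have no byte representation), and CRC-32 is used only as a stand-in 4-byte checksum; the function name is invented.

```python
import struct
import zlib

def build_token_bus_frame(dest, src, payload):
    # Addresses must both be local (2-byte) or both global (6-byte).
    assert len(dest) == len(src) and len(dest) in (2, 6)
    fc = b'\x00'                                      # frame control: data frame
    body = fc + dest + src + payload
    checksum = struct.pack('>I', zlib.crc32(body))    # 4-byte checksum over body
    # '<SD>' and '<ED>' stand in for the analog start/end delimiters.
    return b'\xaa' + b'<SD>' + body + checksum + b'<ED>'

frame = build_token_bus_frame(b'\x00\x01', b'\x00\x02', b'hello')
assert frame.startswith(b'\xaa<SD>') and frame.endswith(b'<ED>')
# preamble(1) + SD(4) + FC(1) + 2 addresses of 2 bytes + 5 data + CRC(4) + ED(4)
assert len(frame) == 1 + 4 + 1 + 2 + 2 + 5 + 4 + 4
```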

5.4 FDDI

Fiber Distributed Data Interface (FDDI) provides a standard for data transmission in a local area network that can extend in range up to 200 kilometers (124 miles). Although the FDDI protocol is a token ring network, it does not use the IEEE 802.5 token ring protocol as its basis; instead, its protocol is derived from the IEEE 802.4 token bus timed-token protocol. In addition to covering large geographical areas, FDDI local area networks can support thousands of users. As its standard underlying medium it uses optical fiber (though it can use copper cable, in which case one refers to CDDI). FDDI uses a dual-attached, counter-rotating token ring topology.

FDDI, as a product of the American National Standards Institute committee X3T9.5 (now X3T12), conforms to the Open Systems Interconnection (OSI) model of functional layering of LANs using other protocols. FDDI-II, a version of FDDI, adds the capability to carry circuit-switched service on the network so that it can also handle voice and video signals. Work has started on connecting FDDI networks to the developing Synchronous Optical Network (SONET).

An FDDI network contains two token rings, one for possible backup in case the primary ring fails. The primary ring offers up to 100 Mbit/s capacity. When a network has no requirement for the secondary ring to act as backup, it can also carry data, extending capacity to 200 Mbit/s. The single ring can extend the maximum distance; a dual ring can extend 100 km (62 miles). FDDI has a larger maximum frame size than standard 100 Mbit/s Ethernet, allowing better throughput.

Designers normally construct FDDI rings in the form of a "dual ring of trees" (see network topology). A small number of devices (typically infrastructure devices such as routers and concentrators rather than host computers) connect to both rings, hence the term "dual-attached". Host computers then connect as single-attached devices to the routers or concentrators.
The dual ring in its most degenerate form simply collapses into a single device. Typically, the whole dual ring is contained in a computer room, although some implementations have deployed FDDI as a metropolitan area network. FDDI requires this network topology because the dual ring actually passes through each connected device and requires each such device to remain continuously operational (the standard actually allows for optical bypasses, but network engineers consider these unreliable and error-prone). Devices such as workstations and minicomputers that may not come under the control of the network managers are not suitable for connection to the dual ring.

As an alternative to a dual-attached connection, a workstation can obtain the same degree of resilience through a dual-homed connection made simultaneously to two separate devices in the same FDDI ring. One of the connections becomes active while the other one is automatically blocked. If the first connection fails, the backup link takes over with no perceptible delay.

Due to their speed, cost and ubiquity, Fast Ethernet and (since 1998) Gigabit Ethernet have largely made FDDI redundant.


FDDI standards include:

	ANSI X3.166-1989, Physical Medium Dependent (PMD), also ISO 9314-3
	ANSI X3.148-1988, Physical Layer Protocol (PHY), also ISO 9314-1
	ANSI X3.139-1987, Media Access Control (MAC), also ISO 9314-2
	ANSI X3.229-1994, Station Management (SMT), also ISO 9314-6
	ANSI X3.184-1993, Single Mode Fiber Physical Medium Dependent (SMF-PMD)

Check Your Progress

1. What is the use of FDDI?

5.5 Wireless LANs

The notebook is connected to the wireless access point using a PC card wireless card.

54 MBit/s WLAN PCI Card (802.11g)

An embedded RouterBoard 112 with U.FL-RSMA pigtail and R52 mini PCI Wi-Fi card, widely used by wireless Internet service providers (WISPs) in the Czech Republic.

A wireless LAN, or WLAN, is a wireless local area network: the linking of two or more computers or devices without using wires. A WLAN uses spread-spectrum or OFDM modulation technology based on radio waves to enable communication between devices in a limited area, also known as the basic service set. This gives users the mobility to move around within a broad coverage area and still be connected to the network.

For the home user, wireless has become popular due to ease of installation and location freedom, with the gaining popularity of laptops. Public businesses such as coffee shops and malls have begun to offer wireless access to their customers, sometimes as a free service. Large wireless network projects are being put up in many major cities. Google is even providing a


free service to Mountain View, California[1] and has entered a bid to do the same for San Francisco.[2] New York City has also begun a pilot program to cover all five boroughs of the city with wireless Internet access.

History

In 1970, the University of Hawaii, under the leadership of Norman Abramson, developed the world's first computer communication network using low-cost ham-like radios, named ALOHAnet. The bidirectional star topology of the system included seven computers deployed over four islands to communicate with the central computer on Oahu Island without using phone lines.[3]

"In 1979, F.R. Gfeller and U. Bapst published a paper in the IEEE Proceedings reporting an experimental wireless local area network using diffused infrared communications. Shortly thereafter, in 1980, P. Ferrert reported on an experimental application of a single-code spread spectrum radio for wireless terminal communications at the IEEE National Telecommunications Conference. In 1984, a comparison between infrared and CDMA spread spectrum communications for wireless office information networks was published by Kaveh Pahlavan at the IEEE Computer Networking Symposium, and appeared later in the IEEE Communications Society Magazine. In May 1985, the efforts of Marcus led the FCC to announce experimental ISM bands for commercial application of spread spectrum technology. Later on, M. Kavehrad reported on an experimental wireless PBX system using code division multiple access. These efforts prompted significant industrial activity in the development of a new generation of wireless local area networks, and updated several old discussions in the portable and mobile radio industry.

The first generation of wireless data modems was developed in the early 1980s by amateur radio operators. They added a voice-band data communication modem, with data rates below 9600 bit/s, to an existing short-distance radio system, typically in the two-meter amateur band.
The second generation of wireless modems was developed immediately after the FCC announcement of the experimental bands for non-military use of spread spectrum technology. These modems provided data rates on the order of hundreds of kbit/s. The third generation of wireless modems [then] aimed at compatibility with the existing LANs, with data rates on the order of Mbit/s. Several companies [developed] third-generation products with data rates above 1 Mbit/s, and a couple of products [had] already been announced [by the time of the first IEEE Workshop on Wireless LANs]."

5.5.1 Benefits

The popularity of wireless LANs is a testament primarily to their convenience, cost efficiency, and ease of integration with other networks and network components. The majority of computers sold to consumers today come pre-equipped with all necessary wireless LAN technology. The benefits of wireless LANs are:

	Convenience: The wireless nature of such networks allows users to access network resources from nearly any convenient location within their primary networking environment (home or office). With the increasing saturation of laptop-style computers, this is particularly relevant.
	Mobility: With the emergence of public wireless networks, users can access the internet even outside their normal work environment. Most chain coffee shops, for example, offer their customers a wireless connection to the internet at little or no cost.
	Productivity: Users connected to a wireless network can maintain a nearly constant affiliation with their desired network as they move from place to place. For a business, this implies that an employee can potentially be more productive, as his or her work can be accomplished from any convenient location.


	Deployment: Initial setup of an infrastructure-based wireless network requires little more than a single access point. Wired networks, on the other hand, have the additional cost and complexity of actual physical cables being run to numerous locations (which can even be impossible for hard-to-reach locations within a building).
	Expandability: Wireless networks can serve a suddenly increased number of clients with the existing equipment. In a wired network, additional clients would require additional wiring.
	Cost: Wireless networking hardware is at worst a modest increase in cost over wired counterparts. This potentially increased cost is almost always more than outweighed by the savings in cost and labor associated with running physical cables.

5.5.2 Disadvantages

Wireless LAN technology, while replete with the conveniences and advantages described above, has its share of drawbacks. For a given networking situation, wireless LANs may not be desirable for a number of reasons, most of which have to do with the inherent limitations of the technology.

	Security: Wireless LAN transceivers are designed to serve computers throughout a structure with uninterrupted service using radio frequencies. Because of space and cost, the antennas typically present on wireless networking cards in the end computers are generally relatively poor. In order to properly receive signals using such limited antennas throughout even a modest area, the wireless LAN transceiver utilizes a fairly considerable amount of power. This means that not only can wireless packets be intercepted by a nearby adversary's poorly equipped computer, but, more importantly, a user willing to spend a small amount of money on a good-quality antenna can pick up packets at a remarkable distance, perhaps hundreds of times the radius of the typical user. In fact, there are even computer users dedicated to locating and sometimes even cracking into wireless networks, known as wardrivers. On a wired network, any adversary would first have to overcome the physical limitation of tapping into the actual wires, but this is not an issue with wireless packets. To combat this, wireless network users usually choose to utilize various encryption technologies such as Wi-Fi Protected Access (WPA). Some of the older encryption methods, such as WEP, are known to have weaknesses that a dedicated adversary can compromise. (See main article: Wireless security.)
	Range: The typical range of a common 802.11g network with standard equipment is on the order of tens of metres. While sufficient for a typical home, it will be insufficient in a larger structure. To obtain additional range, repeaters or additional access points will have to be purchased, and costs for these items can add up quickly. Other technologies featuring increased range are in the development phase, however, hoping to render this disadvantage irrelevant. (See WiMAX.)
	Reliability: Like any radio-frequency transmission, wireless networking signals are subject to a wide variety of interference, as well as complex propagation effects (such as multipath, or especially in this case Rician fading) that are beyond the control of the network administrator. One of the most insidious problems that can affect the stability and reliability of a wireless LAN is the microwave oven.[7] In the case of typical networks, modulation is achieved by complicated forms of phase-shift keying (PSK) or quadrature amplitude modulation (QAM), making interference and propagation effects all the more disturbing. As a result, important network resources such as servers are rarely connected wirelessly.
	Speed: The speed on most wireless networks (typically 1-108 Mbit/s) is reasonably slow compared to the slowest common wired networks (100 Mbit/s up to several Gbit/s). There are also performance issues caused by TCP and its built-in congestion avoidance. For most users, however, this observation is irrelevant, since the speed bottleneck is not in the wireless routing but rather in the outside network connectivity itself. For example, the maximum ADSL throughput (usually 8 Mbit/s or less) offered by telecommunications


companies to general-purpose customers is already far slower than the slowest wireless network to which it is typically connected. That is to say, in most environments a wireless network running at its slowest speed is still faster than the internet connection serving it in the first place. However, in specialized environments higher throughput through a wired network might be necessary. Newer standards such as 802.11n are addressing this limitation and will support peak throughputs in the range of 100-200 Mbit/s.

Wireless LANs present a host of issues for network managers. Unauthorized access points, broadcast SSIDs, unknown stations, and spoofed MAC addresses are just a few of the problems addressed in WLAN troubleshooting. Most network analysis vendors, such as Network Instruments, Network General, and Fluke, offer WLAN troubleshooting tools or functionalities as part of their product lines.

5.5.3 Architecture

Stations

All components that can connect to a wireless medium in a network are referred to as stations. All stations are equipped with wireless network interface cards (WNICs). Wireless stations fall into one of two categories: access points and clients. Access points (APs) are base stations for the wireless network; they transmit and receive radio frequencies for wireless-enabled devices to communicate with. Wireless clients can be mobile devices such as laptops, personal digital assistants and IP phones, or fixed devices such as desktops and workstations that are equipped with a wireless network interface.

Basic service set

The basic service set (BSS) is the set of all stations that can communicate with each other. There are two types of BSS: independent BSS (also referred to as IBSS) and infrastructure BSS. Every BSS has an identification (ID) called the BSSID, which is the MAC address of the access point servicing the BSS.
An independent BSS (IBSS) is an ad-hoc network that contains no access points, which means its stations cannot connect to any other basic service set. An infrastructure BSS can communicate with stations outside its own basic service set by communicating through access points.

Extended service set

An extended service set (ESS) is a set of connected BSSes. Access points in an ESS are connected by a distribution system. Each ESS has an ID called the SSID, which is a 32-byte (maximum) character string. For example, "linksys" is the default SSID for Linksys routers.
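The SSID length rule can be checked with a small sketch; the `valid_ssid` function name is invented for illustration.

```python
def valid_ssid(ssid):
    # An SSID is a character string of at most 32 bytes.
    encoded = ssid.encode('utf-8')
    return 0 < len(encoded) <= 32

assert valid_ssid("linksys")      # the Linksys default fits easily
assert valid_ssid("x" * 32)       # exactly at the 32-byte maximum
assert not valid_ssid("x" * 33)   # one byte too long
```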


Distribution system

A distribution system (DS) connects access points in an extended service set. A DS can increase network coverage through roaming between cells.

5.5.4 Types of wireless LANs

Peer-to-peer

Peer-to-peer or ad-hoc wireless LAN: an ad-hoc network is a network where stations communicate only peer to peer (P2P). There is no base station, and no one gives permission to talk. This is accomplished using the Independent Basic Service Set (IBSS).

A peer-to-peer (P2P) network allows wireless devices to communicate directly with each other. Wireless devices within range of each other can discover and communicate directly without involving central access points. This method is typically used by two computers so that they can connect to each other to form a network. If a signal-strength meter is used in this situation, it may not read the strength accurately and can be misleading, because it registers the strength of the strongest signal, which may be that of the closest computer.

The 802.11 specs define the physical layer (PHY) and MAC (Media Access Control) layers. However, unlike most other IEEE specs, 802.11 includes three alternative PHY standards: diffuse infrared operating at 1 Mbit/s; frequency-hopping spread spectrum operating at 1 Mbit/s or 2 Mbit/s; and direct-sequence spread spectrum operating at 1 Mbit/s or 2 Mbit/s. The single 802.11 MAC standard is based on CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance). The 802.11 specification includes provisions designed to minimize collisions, because two mobile units may both be in range of a common access point but not in range of each other.

802.11 has two basic modes of operation. Ad-hoc mode enables peer-to-peer transmission between mobile units. Infrastructure mode, in which mobile units communicate through an access point that serves as a bridge to a wired network infrastructure, is the more common wireless LAN application and the one covered here.
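Collision avoidance in CSMA/CA relies on each station waiting a random backoff before transmitting. A simplified binary-exponential-backoff sketch follows; the window sizes are illustrative 802.11-style values, not taken from this text.

```python
import random

def backoff_slots(failed_attempts, cw_min=15, cw_max=1023):
    # The contention window roughly doubles after each failed attempt,
    # and the station waits a random number of idle slots within it.
    cw = min(cw_max, (cw_min + 1) * (2 ** failed_attempts) - 1)
    return random.randint(0, cw)

random.seed(1)
assert 0 <= backoff_slots(0) <= 15      # first attempt: small window
assert all(0 <= backoff_slots(3) <= 127 for _ in range(100))
```

Because each contending station draws an independent random wait, two stations that sense the medium idle at the same moment are unlikely to start transmitting simultaneously.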
Since wireless communication uses a more open medium than wired LANs, the 802.11 designers also included shared-key encryption mechanisms, Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA, WPA2), to secure wireless computer networks.

Bridge

A bridge can be used to connect networks, typically of different types. A wireless Ethernet bridge allows the connection of devices on a wired Ethernet network to a wireless network; the bridge acts as the connection point to the wireless LAN.


Wireless distribution system

When it is difficult to connect all of the access points in a network by wires, access points can also be put up as repeaters.

Roaming

There are two kinds of roaming in a WLAN:

	Internal roaming: The mobile station (MS) moves from one access point (AP) to another AP within its home network because the signal strength is too weak. An authentication server (RADIUS) handles the re-authentication of the MS via 802.1x (e.g. with PEAP). The billing of QoS is in the home network.
	External roaming: The MS (client) moves into the WLAN of another wireless service provider (WSP) and uses its services (hotspot). Independently of its home network, the user can use a foreign network if it is open to visitors; this requires special authentication and billing systems for mobile services in the foreign network.

Roaming between wireless local area networks

5.6 Bridges

A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model, and the term layer-2 switch is often used interchangeably with bridge. Bridges are similar to repeaters or network hubs, devices that connect network segments at the physical layer; however, a bridge works by bridging, in which traffic from one network is managed rather than simply rebroadcast to adjacent network segments. In Ethernet networks, the term "bridge" formally means a device that behaves according to the IEEE 802.1D standard; this is most often referred to as a network switch in marketing literature.

Since bridging takes place at the data link layer of the OSI model, a bridge processes the information from each frame of data it receives. In an Ethernet frame, this provides the MAC address of the frame's source and destination. Bridges use two methods to resolve the network segment that a MAC address belongs to.

	Transparent bridging: This method uses a forwarding database to send frames across network segments. The forwarding database is initially empty, and entries are built as the bridge receives frames. If an address entry is not found in the forwarding database, the frame is rebroadcast to all ports of the bridge, forwarding it to all segments except the one from which it was received. By means of these broadcast frames, the destination network will respond and a route will be created. Along with recording the network segment to which a particular frame is to be sent, bridges may also record a bandwidth metric to avoid looping when multiple paths are available. Devices with this transparent bridging functionality are also known as adaptive bridges.
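The learn-and-flood behaviour just described can be sketched as a minimal learning bridge; the class name and port numbering are invented for illustration.

```python
class LearningBridge:
    """Minimal transparent-bridge sketch: learn source addresses, then
    forward to the known port or flood everywhere except the arrival port."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.fdb = {}                       # forwarding database: MAC -> port

    def handle(self, in_port, src, dst):
        self.fdb[src] = in_port             # learn which port src lives on
        if dst in self.fdb:
            out = self.fdb[dst]
            return [] if out == in_port else [out]   # filter or forward
        return sorted(self.ports - {in_port})        # unknown dst: flood

b = LearningBridge([1, 2, 3])
assert b.handle(1, "A", "B") == [2, 3]   # B unknown: flood, and learn A
assert b.handle(2, "B", "A") == [1]      # A was learned on port 1: forward
assert b.handle(1, "C", "A") == []       # A is on the arrival port: filter
```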


	Source route bridging: With source route bridging, two frame types are used in order to find the route to the destination network segment. Single-Route (SR) frames comprise most of the network traffic and have set destinations, while All-Route (AR) frames are used to find routes. Bridges send AR frames by broadcasting on all network branches; each step of the followed route is registered by the bridge performing it. Each frame has a maximum hop count, which is chosen to be greater than the diameter of the network graph and is decremented by each bridge; frames are dropped when this hop count reaches zero, to avoid indefinite looping of AR frames. The first AR frame that reaches its destination is considered to have followed the best route, and that route can be used for subsequent SR frames; the other AR frames are discarded. This method of locating a destination network can allow for indirect load balancing among multiple bridges connecting two networks: the more loaded a bridge is, the less likely it is to take part in the route-finding process for a new destination, as it will be slow to forward packets, and a new AR packet will find a different route over a less busy path if one exists. This method is very different from transparent bridging, where redundant bridges are inactivated; however, more overhead is introduced to find routes, and space is wasted storing them in frames. A switch with a faster backplane can be just as good for performance, if not for fault tolerance.
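The hop-count handling for all-route frames might be sketched as follows; the dictionary-based frame representation and function name are invented for the example.

```python
def forward_all_routes_frame(frame, bridge_id):
    """Sketch of AR-frame handling at one bridge: record this bridge in
    the route and decrement the hop count; drop the frame at zero."""
    hops = frame["hop_count"] - 1
    if hops <= 0:
        return None                         # dropped: hop limit exhausted
    return {"hop_count": hops, "route": frame["route"] + [bridge_id]}

f = {"hop_count": 3, "route": []}
f = forward_all_routes_frame(f, "B1")       # first bridge on the path
f = forward_all_routes_frame(f, "B2")       # second bridge on the path
assert f["route"] == ["B1", "B2"] and f["hop_count"] == 1
assert forward_all_routes_frame(f, "B3") is None   # third hop exceeds limit
```

Choosing the initial hop count larger than the network diameter guarantees that any loop-free path can be recorded before the frame is dropped.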

5.6.1 Advantages of network bridges

	Self-configuring
	Primitive bridges are often inexpensive
	Reduce the size of the collision domain by microsegmentation in non-switched networks
	Transparent to protocols above the MAC layer
	Allow the introduction of management: performance information and access control

LANs interconnected by bridges remain separate, so physical constraints such as the number of stations, repeaters and segment length do not apply across the bridge.

5.6.2 Disadvantages of network bridges

	Do not limit the scope of broadcasts
	Do not scale to extremely large networks
	Buffering introduces store-and-forward delays; on average, traffic destined for the bridge grows with the number of stations on the rest of the LAN
	Bridging of different MAC protocols introduces errors
	Because bridges do more than repeaters by examining MAC addresses, the extra processing makes them slower than repeaters
	Bridges are more expensive than repeaters

5.6.3 Bridging versus routing

Bridging and routing are both ways of performing data control, but they work through different methods. Bridging takes place at OSI layer 2 (the data link layer), while routing takes place at OSI layer 3 (the network layer). This difference means that a bridge directs frames according to hardware-assigned MAC addresses, while a router makes its decisions according to arbitrarily assigned IP addresses. As a result, bridges are not concerned with, and are unable to distinguish between, networks, while routers can.

When designing a network, you can choose to put multiple segments into one bridged network or to divide it into different networks interconnected by routers. If a host is physically moved from one network area to another in a routed network, it has to get a new IP address; if the same host is moved within a bridged network, it does not have to reconfigure anything.


Specific uses of the term "bridge"

Documentation on Linux bridging can be found in the Linux networking wiki; Linux bridging allows filtering and routing. Certain versions of Windows (including XP and Vista) allow creating a Network Bridge: a network component that aggregates two or more network connections and establishes a bridging environment between them. Windows does not support creating more than one network bridge per system.

Filtering database

To translate between two segment types, a bridge reads a frame's destination MAC address and decides to either forward or filter. If the bridge determines that the destination node is on another segment of the network, it forwards (retransmits) the frame to that segment. If the destination address belongs to the same segment as the source address, the bridge filters (discards) the frame. As nodes transmit data through the bridge, the bridge builds a filtering database (also known as a forwarding table) of known MAC addresses and their locations on the network. The bridge uses its filtering database to determine whether a frame should be forwarded or filtered.

Bridging is a forwarding technique used in packet-switched computer networks. Unlike routing, bridging makes no assumptions about where in a network a particular address is located. Instead, it depends on broadcasting to locate unknown devices. Once a device has been located, its location is recorded in a table where the MAC address is stored alongside its IP address (the ARP table), so as to preclude the need for further broadcasting. The utility of bridging is limited by its dependence on broadcasting, and it is thus only used in local area networks. Currently, two different bridging technologies are in widespread use: transparent bridging predominates in Ethernet networks, while source routing is used in token ring networks.
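The forward-or-filter decision driven by the filtering database can be sketched as follows. The class, port numbers and MAC strings are illustrative assumptions, not part of any standard.

```python
# Minimal sketch of a transparent bridge's filtering database (forwarding
# table). The class, port numbers and MAC strings are illustrative.

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}                    # MAC address -> port last seen on

    def handle_frame(self, src, dst, in_port):
        """Return the list of ports the frame goes out on."""
        self.table[src] = in_port          # learn the source's location
        out = self.table.get(dst)
        if out is None:                    # unknown destination: flood
            return [p for p in self.ports if p != in_port]
        if out == in_port:                 # same segment: filter (discard)
            return []
        return [out]                       # known, other segment: forward

bridge = LearningBridge(ports=[1, 2])
bridge.handle_frame("aa:aa", "bb:bb", in_port=1)   # bb:bb unknown: flood to [2]
bridge.handle_frame("bb:bb", "aa:aa", in_port=2)   # aa:aa learned: forward to [1]
```

Note how the table is populated purely by observing source addresses; the bridge never has to be configured with station locations.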
Thus, bridging allows you to connect two different networks seamlessly at the data link layer, e.g. a wireless access point with a wired network switch, using MAC addresses as the addressing system. A bridge and a switch are very much alike.

Transparent bridging

Transparent bridging refers to a form of bridging that is "transparent" to the end systems using it, in the sense that the end systems operate as if the bridge were not there in any way that matters: bridges segment broadcasts between networks and only allow specific addresses to pass through to the other network. Transparent bridging is used primarily in Ethernet networks, where it has been standardized as IEEE 802.1D. The bridging functions are confined to the network bridges which interconnect the network segments. The active parts of the network must form a tree. This can be achieved either by physically building the network as a tree or by using bridges that run the spanning tree protocol, which builds a loop-free network topology by selectively disabling redundant links.

Source route bridging

Source route bridging is used primarily on token ring networks. The spanning tree protocol is not used, the operation of the network bridges is simpler, and much of the bridging function is performed by the end systems, particularly the sources, giving rise to the name.
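The per-bridge forwarding decision in source-route bridging can be sketched as follows: the frame carries its full route (in real token ring frames, the routing information field), and a bridge forwards only when it is listed next after the segment the frame arrived on. The list layout and names here are simplified illustrations.

```python
# Sketch of the per-bridge forwarding decision in source-route bridging.
# The route list alternates segment and bridge identifiers; a bridge
# forwards only when it appears next after the arrival segment.
# Layout and names are simplified illustrations.

def should_forward(route, bridge_id, arrived_segment):
    """Return the next segment if this bridge must forward, else None."""
    # route alternates: [seg0, bridge0, seg1, bridge1, seg2, ...]
    for i in range(1, len(route) - 1, 2):
        if route[i] == bridge_id and route[i - 1] == arrived_segment:
            return route[i + 1]            # forward onto the next listed segment
    return None                            # not this bridge's turn: ignore

route = ["seg1", "B1", "seg2", "B2", "seg3"]
assert should_forward(route, "B1", "seg1") == "seg2"
assert should_forward(route, "B2", "seg1") is None   # B2 is not next here
```

This is why the end systems carry most of the burden in source-route bridging: the bridges themselves keep no forwarding state.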


A field in the token ring header, the routing information field (RIF), is used to support source-route bridging. Upon sending a packet, a host attaches a RIF to the packet indicating the series of bridges and network segments to be used for delivering the packet to its destination. The bridges merely follow the list given in the RIF: if a given bridge is next in the list, it forwards the packet; otherwise it ignores it. When a host wishes to send a packet to a destination for the first time, it needs to determine an appropriate RIF. A special type of broadcast packet is used, which instructs the network bridges to append their bridge number and network segment number to each packet as it is forwarded. Loops are avoided by requiring each bridge to ignore packets which already contain its bridge number in the RIF field. At the destination, these broadcast packets are modified into standard unicast packets and returned to the source along the reverse path listed in the RIF. Thus, for each route discovery packet broadcast, the source receives back a set of packets, one for each possible path through the network to the destination. It is then up to the source to choose one of these paths (probably the shortest one) for further communication with the destination.

Source routing transparent bridging

Source routing transparent (SRT) bridging is a hybrid of source routing and transparent bridging, standardized in Section 9 of the IEEE 802.2 standard. It allows source routing and transparent bridging to coexist on the same bridged network by using source routing with hosts that support it and transparent bridging otherwise.

Check Your Progress

2. Write the use of the Wireless LAN architecture.
3. What is meant by a network bridge?

5.7 Summary

From this lesson we understood the types of network protocols, bridges, FDDI, the OSI reference model, and the uses and history of TCP/IP in networks.
TCP/IP was designed to be independent of the network access method, frame format, and medium. In this way, TCP/IP can be used to connect differing network types, including LAN technologies such as Ethernet and Token Ring and WAN technologies such as X.25 and Frame Relay. OSI network protocols are specified in a variety of notations; two popular notations, sequence diagrams and state transition diagrams, are extensively used in standards and the literature. Both rely on the notion of a service primitive.

5.8 Keywords

IEEE: The Institute of Electrical and Electronics Engineers (IEEE) is a US standards organization with members throughout the world.

ECMA: The European Computer Manufacturers Association (ECMA) is a standards organization involved in the area of computer engineering and related technologies.

5.10 Check Your Progress

1. What is the use of FDDI?
…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………


2. Write the use of the Wireless LAN architecture.
…………………………………………………………………………………………………………………
……………………………………………………………………………………………..

3. What is meant by a network bridge?
…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

Answers to Check Your Progress

1. Fiber Distributed Data Interface (FDDI) provides a standard for data transmission in a local area network that can extend in range up to 200 kilometers (124 miles). Although FDDI is a token ring protocol, it does not use the IEEE 802.5 token ring protocol as its basis; instead, its protocol is derived from the IEEE 802.4 token bus timed token protocol.

2. A wireless LAN (WLAN) is a wireless local area network: the linking of two or more computers or devices without using wires. A WLAN uses spread-spectrum or OFDM modulation technology based on radio waves to enable communication between devices in a limited area, also known as the basic service set.

3. A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model; the term "layer 2 switch" is often used interchangeably with "bridge".

5.11 Further Reading

1. William Stallings, “Data and Computer Communications”, 5th Edition, Pearson Education, 1997.
2. Andrew S. Tanenbaum, “Computer Networks”.


UNIT – 6 NETWORK LAYER

Structure
6.0 Introduction
6.1 Objectives
6.2 Definition
6.3 Network Layer
6.3.1 Addressing Scheme
6.4 Switching Concepts
6.5 Circuit Switching Networks
6.5.1 High-Speed Circuit-Switched Data
6.6 Packet Switching
6.6.1 Packet
6.7 Routing
6.7.1 Topology Distribution
6.7.2 Path Vector Protocol
6.7.3 Path Selection
6.7.4 Routing Protocol
6.8 Congestion Control
6.8.1 Error Handling
6.9 X.25
6.9.1 History
6.9.2 Architecture
6.9.3 Relation to the OSI Reference Model
6.9.4 X.25 Details
6.10 Internetworking Concepts and X.25 Architectural Models
6.10.1 Network Sublayers
6.10.2 Network Layer Standards
6.10.3 Internetworking with X.75
6.11 Summary
6.12 Keywords
6.13 Exercise and Questions
6.14 Check Your Progress
6.15 Further Reading


6.0 Introduction

This lesson describes the network layer of the OSI model. The network layer handles the routing of data packets across the network and defines the interface between a host and a network node. We will first discuss the use of network primitives for defining network services. Then we will look at two switching methods and their use for routing.

6.1 Objectives

After completing this lesson you should be able to:

 Understand the nature of network services and use network primitives to describe network service scenarios.
 Describe how circuit switching works and appreciate its strengths and weaknesses.
 Describe how packet switching works and distinguish between the virtual circuit and datagram methods and their packet formats.
 Explain the concept, advantages and disadvantages of X.25.

6.2 Definition

The network layer is concerned with finding the shortest path to the destination. This usually means finding the fastest route through multiple networks to the destination. Routing algorithms are used to determine the "shortest" path: shortest does not mean the physically shortest distance but the fastest route.

6.3 Network Layer

At the outset, the network layer provides a set of services to the transport layer. These services characterize the interface between the two layers and isolate the network details (lower three layers) from the network users (upper four layers). Network service content is therefore of considerable importance. Network services are defined in terms of network service primitives. The following figure summarizes the primitives together with their possible types and parameters. Please note that these primitives only serve as a modeling tool and do not imply anything about how network services are implemented.

Network service primitives.


Sample scenario of network services.

The above figure illustrates the use of network services in a sample scenario. Network service user A first requests a connection, which is indicated to network service user B by the service provider. B responds to the request and the network confirms with A. Then A expedites some data, which is indicated to B by the network. A normal data transfer from A to B follows, which includes confirmation. Then the network encounters an error and simultaneously sends reset indications to A and B, which both respond to. Finally, B requests a disconnection, which is indicated to A, and terminates the connection.

The network layer is concerned with getting packets from the source all the way to the destination. The packets may need to make many hops at intermediate routers while reaching the destination. This is the lowest layer that deals with end-to-end transmission. In order to achieve its goals, the network layer must know about the topology of the communication network. It must also take care to choose routes so as to avoid overloading some of the communication lines while leaving others idle. The network layer-transport layer interface frequently is the interface between the carrier and the customer, that is, the boundary of the subnet. The functions of this layer include:

1. Routing - The process of transferring packets received from the Data Link Layer of the source network to the Data Link Layer of the correct destination network is called routing. It involves decision making at each intermediate node on where to send the packet next so that it eventually reaches its destination. The node which makes this choice is called a router. For routing we require some mode of addressing which is recognized by the Network Layer; this addressing is different from MAC layer addressing.

2. Inter-networking - The network layer is the same across all physical networks (such as Token Ring and Ethernet).
Thus, if two physically different networks have to communicate, the packets that arrive at the Data Link Layer of the node which connects these two physically different networks are stripped of their headers and passed to the Network Layer, which then passes the data to the Data Link Layer of the other physical network.

3. Congestion Control - If the incoming rate of packets arriving at any router is more than the outgoing rate, congestion is said to occur. Congestion may be caused by many factors. If suddenly packets begin arriving on many input lines and all need the same output line, a queue will build up. If there is insufficient memory to hold all of them, packets will be lost. But even if routers have an infinite amount of memory, congestion gets worse, because by the time packets reach the front of the queue, they have already timed out (repeatedly) and duplicates have been sent. All these packets are dutifully forwarded to the next router, increasing the load all the way to the destination. Another cause of congestion is slow processors: if a router's CPU is slow at performing the bookkeeping tasks required of it, queues can build up even though there is excess line capacity. Similarly, low-bandwidth lines can also cause congestion.

6.3.1 Addressing Scheme

IP addresses are 4 bytes and consist of:
i) the network address, followed by
ii) the host address.

The first part identifies a network on which the host resides and the second part identifies the particular host on the given network. Nodes which have more than one interface to a network must be assigned a separate internet address for each interface. This multi-layer addressing makes it easier to find and deliver data to the destination. A fixed size for each of these would lead to wastage or under-usage: either there would be too many network addresses and few hosts in each (which causes problems for routers who route based on the network address), or there would be very few network addresses and lots of hosts (a waste for small network requirements). Thus, we do away with any notion of fixed sizes for the network and host addresses. We classify networks as follows:

1. Large Networks: 8-bit network address and 24-bit host address. There are approximately 16 million hosts per network, and a maximum of 126 (2^7 - 2) Class A networks can be defined.
The calculation requires that 2 be subtracted because 0.0.0.0 is reserved for use as the default route and 127.0.0.0 is reserved for the loopback function. Moreover, each Class A network can support a maximum of 16,777,214 (2^24 - 2) hosts per network. The host calculation requires that 2 be subtracted because all 0s are reserved to identify the network itself and all 1s are reserved for broadcast addresses. The reserved numbers may not be assigned to individual hosts.

2. Medium Networks: 16-bit network address and 16-bit host address. There are approximately 65,000 hosts per network, and a maximum of 16,384 (2^14) Class B networks can be defined, with up to (2^16 - 2) hosts per network.

3. Small Networks: 24-bit network address and 8-bit host address. There are approximately 250 hosts per network.

You might think that Large and Medium networks are sort of a waste, as few corporations/organizations are large enough to have 65,000 different hosts. (By the way, there are very few corporations in the world with even close to 65,000 employees, and even in these corporations it is highly unlikely that each employee has his/her own computer connected to the network.) Well, if you think so, you're right. This decision seems to have been a mistake.

Address Classes

The IP specifications divide addresses into the following classes:

Class A - For large networks: leading bit 0, 7 bits of network address, 24 bits of host address.

Class B - For medium networks: leading bits 10, 14 bits of network address, 16 bits of host address.

Class C - For small networks: leading bits 110, 21 bits of network address, 8 bits of host address.

Class D - For multicast messages (multicast to a "group" of networks): leading bits 1110, 28 bits of group address.

Class E - Currently unused, reserved for potential uses in the future: leading bits 1111, 28 bits.
6.4 Switching Concepts

Switching is the generic method for establishing a path for point-to-point communication in a network. It involves the nodes in the network utilizing their direct communication lines to other nodes so that a path is established in a piecewise fashion. Each node has the capability to ‘switch’ to a neighboring node (i.e., a node to which it is directly connected) to further stretch the path until it is completed.

A ‘switched’ path.

One of the most important functions of the network layer is to employ the switching capability of the nodes in order to route messages across the network. There are two basic methods of switching:

 circuit switching, and
 packet switching.

6.5 Circuit Switching Networks

In circuit switching, two communicating stations are connected by a dedicated communication path which consists of intermediate nodes in the network and the links that connect these nodes. What is significant about circuit switching is that the communication path remains intact for the duration of the connection, engaging the nodes and the links involved in the path for that period. Each cross point appears as a circle: a hollow circle means that the cross point is off (i.e., the two crossing wires are not connected), while a solid circle means that the cross point is on (i.e., the crossing wires are connected). The switch can support up to three simultaneous but independent connections.

A simple circuit switch.

When the two hosts shown in the figure initiate a connection, the network determines a path through the intermediate switches and establishes a circuit which is maintained for the duration of the connection. When the hosts disconnect, the network releases the circuit. Circuit switching.

Circuit switching relies on dedicated equipment especially built for the purpose, and is the dominant form of switching in telephone networks. Its main advantage lies in its predictable behavior: because it uses a dedicated circuit, it can offer a constant throughput with no noticeable delay in the transfer of data. This property is important in telephone networks, where even a short delay in voice traffic can have disruptive effects.

Circuit switching's main weakness is its inflexibility in dealing with computer-oriented data. A circuit uses a fixed amount of bandwidth, regardless of whether it is used or not. In the case of voice traffic, the bandwidth is usually well used because most of the time one of the two parties in a telephone conversation is speaking. However, computers behave differently: they tend to go through long silent periods followed by a sudden burst of data transfer. This leads to significant underutilization of circuit bandwidth. Another disadvantage of circuit switching is that the network is only capable of supporting a limited number of simultaneous circuits. When this limit is reached, the network blocks further attempts at connection until some of the existing circuits are released.

In telecommunications, a circuit switching network is one that establishes a fixed-bandwidth circuit (or channel) between nodes and terminals before the users may communicate, as if the nodes were physically connected with an electrical circuit. The bit delay is constant during the connection, as opposed to packet switching, where packet queues may cause varying delay. There is a common misunderstanding that circuit switching is used only for connecting voice circuits (analog or digital). The concept of circuit switching can be extended to other forms of digital data: a dedicated path still remains between the two communicating parties and the rest of the procedure remains the same as for voice circuits, except that the data is transferred continuously, not in the form of packets, and without any overhead bits. Although possible, circuit switching is rarely used for transferring digital data (except voice circuits), and this scheme is not employed in networks where digital data needs to be transferred.


Each circuit cannot be used by other callers until the circuit is released and a new connection is set up. Even if no actual communication is taking place in a dedicated circuit, that channel remains unavailable to other users. Channels that are available for new calls to be set up are said to be idle. Virtual circuit switching is a packet switching technology that may emulate circuit switching, in the sense that the connection is established before any packets are transferred and packets are delivered in order.

For call setup and control (and other administrative purposes), it is possible to use a separate dedicated signaling channel from the end node to the network. ISDN is one such service that uses a separate signaling channel, while Plain Old Telephone Service (POTS) does not. The method of establishing the connection and monitoring its progress and termination through the network may also utilize a separate control channel, as in the case of links between telephone exchanges which use the CCS7 packet-switched signaling protocol to communicate the call setup and control information and use TDM to transport the actual circuit data.

Early telephone exchanges are a suitable example of circuit switching. The subscriber would ask the operator to connect to another subscriber, whether on the same exchange or via an inter-exchange link and another operator. In any case, the end result was a physical electrical connection between the two subscribers' telephones for the duration of the call. The copper wire used for the connection could not be used to carry other calls at the same time, even if the subscribers were in fact not talking and the line was silent.

Compared to datagram packet switching

Since the first days of the telegraph it has been possible to multiplex multiple connections over the same physical conductor, but nonetheless each channel on the multiplexed link was either dedicated to one call at a time or idle between calls.

With circuit switching, and virtual circuit switching, a route is reserved from source to destination. The entire message is sent in order so that it does not have to be reassembled at the destination. Circuit switching can be relatively inefficient because capacity is wasted on connections which are set up but not in continuous use (however momentarily). On the other hand, the connection is immediately available and capacity is guaranteed until the call is disconnected.

Circuit switching contrasts with packet switching, which splits traffic data (for instance, a digital representation of sound, or computer data) into chunks, called packets, that are routed over a shared network. Packet switching is the process of segmenting a message to be transmitted into several smaller packets. Each packet is labeled with its destination and the number of the packet, precluding the need for a dedicated path to help the packet find its way to its destination. Each packet is dispatched, and many may go via different routes. At the destination, the original message is reassembled in the correct order, based on the packet numbers. Datagram packet switching networks do not require a circuit to be established and allow many pairs of nodes to communicate almost simultaneously over the same channel.

6.5.1 High-Speed Circuit-Switched Data

High-Speed Circuit-Switched Data (HSCSD) is an enhancement to Circuit Switched Data (CSD), the original data transmission mechanism of the GSM mobile phone system, up to four times faster than GSM, with data rates up to 38.4 kbit/s. As with CSD, channel allocation is done in circuit switched mode. The difference comes from the ability to use different coding methods and/or multiple time slots to increase data throughput.
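The HSCSD data rates come from multiplying the per-slot coding rate by the number of reserved time slots. A trivial sketch, with rates in bit/s to keep the arithmetic exact (the function name is ours):

```python
# HSCSD throughput is the per-slot coding rate times the number of
# reserved GSM time slots. Rates in bit/s so the arithmetic stays exact.

def hscsd_rate(bits_per_slot, slots):
    return bits_per_slot * slots

assert hscsd_rate(9_600, 4) == 38_400     # heavy error correction, 4 slots
assert hscsd_rate(14_400, 4) == 57_600    # best-case coding, 4 slots
assert hscsd_rate(14_400, 8) == 115_200   # 8 slots: the ~115 kbit/s figure
```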


One innovation in HSCSD is to allow different error correction methods to be used for data transfer. The original error correction used in GSM was designed to work at the limits of coverage and in the worst case that GSM will handle, which means that a large part of the GSM transmission capacity is taken up with error correction codes. HSCSD provides different levels of error correction which can be used according to the quality of the radio link. This means that in the best conditions 14.4 kbit/s can be put through a single time slot that under CSD would only carry 9.6 kbit/s, for a 50% improvement in throughput.

The other innovation in HSCSD is the ability to use multiple time slots at the same time. Using the maximum of four time slots, this can provide an increase in maximum transfer rate of up to 57.6 kbit/s (4 times 14.4 kbit/s) and, even in bad radio conditions where a higher level of error correction needs to be used, can still provide a four-times speed increase over CSD (38.4 kbit/s versus 9.6 kbit/s). By combining up to 8 GSM time slots the capacity can be increased to 115 kbit/s.

HSCSD requires the time slots being used to be fully reserved to a single user. It is possible that either at the beginning of the call, or at some point during a call, the user's full request cannot be satisfied, since the network is often configured so that normal voice calls take precedence over additional time slots for HSCSD users.

X.21

X.21 (sometimes referred to as X21) is an interface specification for differential communications introduced in the mid-1970s by the ITU-T. X.21 was first introduced as a means to provide a digital signaling interface for telecommunications between carriers and customers' equipment. This includes specifications for DTE/DCE physical interface elements, alignment of call control characters and error checking, elements of the call control phase for circuit switching services, and test loops.
When X.21 is used with V.11, it provides synchronous data transmission at rates from 100 kbit/s to 10 Mbit/s. There is also a variant of X.21, "circuit switched X.21", which is only used in select legacy applications. X.21 is normally found on a 15-pin D-sub connector and is capable of running full-duplex data transmissions. The signal element timing, or clock, is provided by the carrier (your telephone company) and is responsible for correct clocking of the data. X.21 was primarily used in Europe and Japan, for example in the Scandinavian DATEX and German DATEX-L circuit switched networks during the 1980s.

6.6 Packet Switching

Packet switching was designed to address the shortcomings of circuit switching in dealing with data communication. Unlike circuit switching, where communication is continuous along a dedicated circuit, in packet switching communication is discrete, in the form of packets. Each packet is of a limited size and can hold up to a certain number of octets of user data. Larger messages are broken into smaller chunks so that they can be fitted into packets. In addition to user data, each packet carries additional information (in the form of a header) to enable the network to route it to its final destination.

A packet is handed over from node to node across the network. Each receiving node temporarily stores the packet until the next node is ready to receive it, and then passes it on to the next node. This technique is called store-and-forward and overcomes one of the limitations of circuit switching. A packet-switched network has a much higher capacity for accepting further connections. Additional connections are usually not blocked but simply slow down existing connections, because they increase the overall number of packets in the network and hence increase the delivery time of each packet.

Each channel has an associated buffer which it uses to store packets in transit. The operation of the switch is controlled by a microprocessor.
A packet received on any of the channels can be passed onto any of the other channels by the microprocessor moving it to the corresponding buffer.

A simple packet switch.
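Store-and-forward means each node must receive a packet in full before passing it on. A back-of-envelope comparison of sending a message whole versus splitting it into packets, under the idealization that headers, propagation, queueing and processing delays are ignored (all numbers here are made up for illustration):

```python
# Back-of-envelope store-and-forward delay: each node must receive a
# packet fully before forwarding it. This idealization ignores headers,
# propagation, queueing and processing; all numbers are made up.

def delay(message_bits, packet_bits, rate_bps, hops):
    """Delivery time over `hops` store-and-forward links when the message
    is split into equal packets that pipeline behind one another."""
    n_packets = -(-message_bits // packet_bits)    # ceiling division
    t_packet = packet_bits / rate_bps
    # The first packet crosses every hop; the rest follow one slot behind.
    return hops * t_packet + (n_packets - 1) * t_packet

# An 8 Mbit message over three 1 Mbit/s links:
whole = delay(8_000_000, 8_000_000, 1_000_000, 3)  # sent as one big unit
split = delay(8_000_000, 8_000, 1_000_000, 3)      # split into 8 kbit packets
assert whole == 24.0
assert split < whole    # packetizing lets successive links work in parallel
```

The gain comes from pipelining: while one link forwards packet n, the previous link is already receiving packet n+1.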

Two variations of packet switching exist: virtual circuit and datagram. The virtual circuit method (also known as connection-oriented) is closer to circuit switching. Here a complete route is worked out prior to sending data packets. The route is established by sending a connection request packet along the route to the intended destination. This packet informs the intermediate nodes about the connection and the established route so that they will know how to route subsequent packets. The result is a circuit somewhat similar to those in circuit switching, except that it uses packets as its basic unit of communication. Hence it is called a virtual circuit. Each packet carries a virtual circuit identifier which enables a node to determine to which virtual circuit it belongs and hence how it should be handled. (The virtual circuit identifier is essential because multiple virtual circuits may pass through the same node at the same time.) Because the route is fixed for the duration of the call, the nodes spend no effort in determining how to route packets. When the two hosts initiate a connection, the network layer establishes a virtual circuit (denoted by shaded switches) which is maintained for the duration of the connection. When the hosts disconnect, the network layer releases the circuit. The packets in transit are displayed as dark boxes within the buffers. These packets travel only along the designated virtual circuit. Packet switching with virtual circuits.

The datagram method (also known as connectionless) does not rely on a pre-established route; instead, each packet is treated independently. Therefore, it is possible for different packets to travel along different routes in the network to reach the same final destination. As a result, packets may arrive out of order, or even never arrive (due to node failure). It is up to the network user to deal with lost packets, and to rearrange packets into their original order. Because of the absence of a pre-established circuit, each packet must carry enough information in its header to enable the nodes to route it correctly. The following figure illustrates the datagram method. Note how the packets exercise different routes.

Packet switching with datagrams.

The advantage of the datagram approach is that, because there is no circuit, congestion and faulty nodes can be avoided by choosing a different route. Also, connections can be established more quickly because of reduced overheads. This makes datagrams better suited than virtual circuits for brief connections. For example, database transactions in banking systems are of this nature, where each transaction involves only a few packets. The advantage of the virtual circuit approach is that, because no separate routing is required for each packet, packets are likely to reach their destination more quickly, leading to improved throughput. Furthermore, packets always arrive in order. Virtual circuits are better suited to long connections that involve the transfer of large amounts of data (e.g., transfer of large files).

Packet switching is a communications method in which packets (discrete blocks of data) are routed between nodes over data links shared with other traffic. In each network node, packets are queued or buffered, resulting in variable delay. This contrasts with the other principal paradigm, circuit switching, which sets up a limited number of constant-bit-rate and constant-delay connections between nodes for their exclusive use for the duration of the communication.

Packet mode or packet-oriented communication may be utilized with or without a packet switch, in the latter case directly between two hosts. Examples are point-to-point data links, digital video and audio broadcasting, or a shared physical medium such as a bus network, ring network, or hub network. Packet mode communication is a statistical multiplexing technique, also known as a dynamic bandwidth allocation method, where a physical communication channel is effectively divided into an arbitrary number of logical variable-bit-rate channels or data streams.
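Rearranging out-of-order datagrams and detecting losses, as the network user must do in the datagram approach, can be sketched as follows (sequence numbers and payloads are illustrative):

```python
# Sketch of datagram reassembly at the receiver: reorder by sequence
# number and report gaps (lost packets). Packet format is illustrative.

def reassemble(packets, expected):
    """packets: list of (seq, payload) pairs in arrival order."""
    by_seq = dict(packets)
    missing = [s for s in range(expected) if s not in by_seq]
    if missing:
        return None, missing          # caller must handle lost packets
    return "".join(by_seq[s] for s in range(expected)), []

# Packets arrive out of order but reassemble correctly:
msg, lost = reassemble([(1, "lo "), (0, "hel"), (2, "world")], expected=3)
assert msg == "hello world" and lost == []

# A lost packet is detected by its missing sequence number:
msg, lost = reassemble([(0, "hel"), (2, "world")], expected=3)
assert msg is None and lost == [1]
```

With virtual circuits this work is unnecessary, since packets always arrive in order; that is exactly the trade-off the text describes.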
Each logical stream consists of a sequence of packets, which are normally forwarded by a network node asynchronously in a first-come first-served fashion. Alternatively, the packets may be forwarded according to some scheduling discipline for fair queuing or for differentiated and/or guaranteed quality of service. In the case of a shared physical medium, the packets may be delivered according to some packet-mode multiple-access scheme. The service actually provided to the user by networks that use packet switching internally can be datagrams (connectionless messages) and/or virtual circuits (also known as connection-oriented service). Some connectionless protocols are Ethernet, IP, and UDP;

FOR MORE DETAILS VISIT US ON WWW.IMTSINSTITUTE.COM OR CALL ON +91-9999554621


COMPUTER COMMUNICATION & NETWORKS

144

connection-oriented protocols include X.25, Frame Relay, Asynchronous Transfer Mode (ATM), Multiprotocol Label Switching (MPLS), and TCP. It may also be necessary to weigh various routing metrics against each other: reducing the hop count, for example, could increase the latency beyond an acceptable limit, so some balance would need to be found, and weighing several parameters at once calls for some form of multi-objective optimization.

Packet switching in networks

Packet switching is used to optimize the use of the channel capacity available in digital telecommunication networks such as computer networks, to minimize the transmission latency (i.e., the time it takes for data to pass across the network), and to increase the robustness of communication. The best-known uses of packet switching are the Internet and local area networks. The Internet uses the Internet protocol suite over a variety of data link layer protocols; Ethernet and Frame Relay, for example, are very common. Newer mobile phone technologies (e.g., GPRS, i-mode) also use packet switching. X.25 is a notable use of packet switching in that, despite being based on packet switching methods, it provided virtual circuits to the user. These virtual circuits carry variable-length packets. In 1978, X.25 was used to provide the first international commercial packet-switching network, the International Packet Switched Service (IPSS). Asynchronous Transfer Mode (ATM) is also a virtual circuit technology, using connection-oriented, fixed-length cell relay packet switching. Datagram packet switching is also called connectionless networking because no connections are established. Technologies such as Multiprotocol Label Switching (MPLS) and the Resource Reservation Protocol (RSVP) create virtual circuits on top of datagram networks. Virtual circuits are especially useful in building robust failover mechanisms and in allocating bandwidth for delay-sensitive applications.
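The queuing behaviour described above, where packets from several logical streams share one outgoing link and experience variable delay depending on the queue they find, can be sketched as a minimal first-come first-served buffer. This is an illustrative sketch only; the class and stream names are invented.

```python
from collections import deque

class Node:
    """A packet-switching node with one shared FIFO output buffer."""

    def __init__(self):
        self.buffer = deque()          # shared buffer for all logical streams

    def receive(self, packet):
        self.buffer.append(packet)     # queue the packet for the output link

    def forward(self):
        """Transmit the oldest buffered packet, if any."""
        return self.buffer.popleft() if self.buffer else None

node = Node()
for pkt in [("stream-A", 1), ("stream-B", 1), ("stream-A", 2)]:
    node.receive(pkt)

# Packets leave in arrival order, statistically interleaving the two streams.
order = [node.forward() for _ in range(3)]
print(order)   # [('stream-A', 1), ('stream-B', 1), ('stream-A', 2)]
```

Because the buffer is shared, a burst from one stream delays packets of the other; this is the variable delay that distinguishes packet switching from the fixed delay of circuit switching.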
History of packet switching

The concept of packet switching was first explored by Paul Baran in the early 1960s, and then independently a few years later by Donald Davies (Abbate, 2000). Leonard Kleinrock conducted early research in queuing theory which would prove important to packet switching, and published a book in the related field of digital message switching (without the packets) in 1961; he also later played a leading role in building and managing the world's first packet-switched network, the ARPANET. Baran developed the concept of packet switching during his research at the RAND Corporation for the US Air Force into survivable communications networks. It was first presented to the Air Force in the summer of 1961 as briefing B-265, then published as RAND Paper P-2626 in 1962, and then included and expanded somewhat in a series of eleven papers titled On Distributed Communications in 1964. Baran's P-2626 paper described a general architecture for a large-scale, distributed, survivable communications network. The paper focuses on three key ideas: first, the use of a decentralized network with multiple paths between any two points; second, the division of complete user messages into what he called message blocks (later called packets); and third, the delivery of these messages by store-and-forward switching.

Check Your Progress
1. Define routing.
2. What is the circuit switching concept?
3. What is meant by a virtual circuit?

6.6.1 Packet (information technology)

In information technology, a packet is a formatted block of data carried by a packet-mode computer network. Computer communications links that do not support packets, such as traditional point-to-point telecommunications links, simply transmit data as a series of bytes, characters, or bits. When data is formatted into packets, the bit rate of the communication medium can be shared among users better than if the network were circuit switched. With packet-switched networking, however, it is harder to guarantee a minimum bit rate.

General virtual circuit packet structure.

The following Figure shows the general structure of a datagram packet. Because packet headers may vary in length, the Header Length field is needed to indicate where User Data starts. Each packet is assigned a limited lifetime, denoted by the Lifetime field. This is an integer quantity whose value is decreased by the nodes that handle the packet; when it reaches zero, the packet is discarded. This is intended as a measure against congestion caused by packets that circulate the network aimlessly.

General datagram packet structure.
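The lifetime mechanism just described can be sketched in a few lines: each node that handles a packet decrements its Lifetime field and discards the packet once the value reaches zero. The field and function names below are illustrative, not taken from any real protocol.

```python
class Datagram:
    """Minimal sketch of a datagram with a hop-limited lifetime."""

    def __init__(self, dest, user_data, lifetime):
        self.dest = dest
        self.lifetime = lifetime       # hop budget set by the sender
        self.user_data = user_data

def handle(packet):
    """Called by each node that handles the packet; False means discarded."""
    packet.lifetime -= 1
    if packet.lifetime <= 0:
        return False                   # congestion guard: drop the stale packet
    return True

pkt = Datagram(dest="q", user_data=b"hello", lifetime=3)
hops = 0
while handle(pkt):
    hops += 1
print(hops)  # 2: the packet survives two hops and is discarded on the third
```

This is the same idea as the Time To Live field in IP, where a packet that has looped too long is removed rather than allowed to congest the network indefinitely.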

Packet framing

A packet consists of two kinds of data: control information and user data (also known as payload). The control information provides the data the network needs to deliver the user data, for example source and destination addresses, error detection codes such as checksums, and sequencing information. Typically, control information is found in packet headers and trailers, with the user data in between. Different communications protocols use different conventions for distinguishing between the elements and for formatting the data. In Binary Synchronous Transmission, the packet is formatted in 8-bit bytes, and special characters are used to delimit the different elements. Other

protocols, like Ethernet, establish the start of the header and data elements by their location relative to the start of the packet. Some protocols format the information at a bit level instead of a byte level. A good analogy is to consider a packet to be like a letter: the header is like the envelope, and the data area is whatever the person puts inside the envelope. A difference, however, is that some networks can break a larger packet into smaller packets when necessary (these smaller data elements are still formatted as packets). A network design can achieve two major results by using packets: error detection and multiple host addressing.

Error detection

It is more efficient and reliable to calculate a checksum or cyclic redundancy check over the contents of a packet than to check for errors using character-by-character parity bits. The packet trailer often contains error-checking data to detect errors that occur during transmission.

Host addressing

Modern networks usually connect three or more host computers; in such cases the packet header generally contains addressing information so that the packet is received by the correct host. In complex networks built of multiple routing and switching nodes, like the ARPANET and the modern Internet, a series of packets sent from one host computer to another may follow different routes to reach the same destination. This technology is called packet switching.

Packet vs. datagram

In general, the term packet applies to any message formatted as a packet, while the term datagram is generally reserved for packets of an "unreliable" service. A "reliable" service is one that notifies the user if delivery fails, while an "unreliable" one does not. For example, IP provides an unreliable service; TCP uses IP to provide a reliable service, whereas UDP uses IP to provide an unreliable one.
All these protocols use packets, but UDP packets are generally called datagrams. When the ARPANET pioneered packet switching, it provided a reliable packet delivery procedure to its connected hosts via its 1822 interface. A host computer simply arranged the data in the correct packet format, inserted the address of the destination host computer, and sent the message across the interface to its connected IMP. Once the message was delivered to the destination host, an acknowledgement was delivered to the sending host. If the network could not deliver the message, it would send an error message back to the sending host. Meanwhile, the developers of CYCLADES and of ALOHAnet demonstrated that it was possible to build an effective computer network without providing reliable packet transmission. This lesson was later embraced by the designers of Ethernet. If a network does not guarantee packet delivery, it becomes the host's responsibility to provide reliability by detecting and retransmitting lost packets. Subsequent experience on the ARPANET indicated that the network itself could not reliably detect all packet delivery failures, and this pushed responsibility for error detection onto the sending host in any case. This led to the development of the end-to-end principle, which is one of the Internet's fundamental design assumptions.

6.7 Routing

Routing is the task of selecting a path for the transport of packets across the network, and is one of the most important functions of the network layer. Routing is generally viewed as an optimization problem with the objective of choosing an optimal path according to certain criteria:
- Transmission cost (measured in terms of tied-up network resources).

- Transmission delay (measured as the delay involved in delivering each packet).
- Throughput (measured as the total number of packets delivered per unit of time).

The overall cost depends on all three of these, and an optimal route is one that minimizes the overall cost. This can be represented by a weighted network, where an abstract cost figure is associated with each link, as illustrated in the following Figure. The cost of a route is the sum of the costs of its links.

A weighted network.

There are three classes of routing algorithms:
- Flooding
- Static routing
- Dynamic routing

Of these, only dynamic routing makes any serious optimization attempt. In flooding, every possible path between the source and the destination station is exercised: each node, upon receiving a packet, forwards copies of it to all its neighboring nodes (except the one from which it received the packet). Flooding is a highly robust technique, since it offers the best chance of at least one copy of the packet reaching the destination. Its major disadvantage, however, is that it quickly congests the network. To prevent packets being copied indefinitely, each packet is assigned a limited lifetime, on whose expiry it is destroyed. Because of its limitations, the use of flooding is confined to specialized applications that require very high levels of robustness (e.g., military networks). Flooding is suited only to the datagram approach.

In static routing, a fixed routing directory is used to guide the selection of a route, which remains unchanged for the duration of the connection. The directory consists of a table which, for each node pair (p,q) in the network, suggests a partial path by nominating the first intermediate node, r, along the path. This should be interpreted as 'to get from p to q, first go to r'. The path can then be continued by looking at the entry for the pair (r,q), and so on, until q is reached. A sample route directory is shown in Figure 4.46. The route directory is tied to the network topology and remains unchanged unless the topology changes. It may be stored in a central location or distributed amongst the nodes (each node requires only its corresponding row), and it should remain accessible to network administrators for manual updates.

Static route directory for the network in the above Figure.
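The 'to get from p to q, first go to r' traversal of a static route directory can be sketched as repeated table lookups until the destination is reached. The directory below is invented for illustration and is not the one shown in the Figure.

```python
# Static route directory: (current node, destination) -> next intermediate node.
directory = {
    ("p", "q"): "r",
    ("r", "q"): "s",
    ("s", "q"): "q",
}

def route(src, dst):
    """Build the full path by following next-hop entries in the directory."""
    path = [src]
    while path[-1] != dst:
        # 'to get from p to q, first go to r', then continue from r, and so on
        path.append(directory[(path[-1], dst)])
    return path

print(route("p", "q"))   # ['p', 'r', 's', 'q']
```

Note that the directory never changes during the connection; this is exactly the inflexibility in the face of traffic variations that dynamic routing addresses.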

The advantages of static routing are its simplicity and ease of implementation; its disadvantage is its lack of flexibility in the face of possible variations within the network. Static routing can be used with both the datagram and the virtual circuit approach. Although static routing takes some form of cost into account, these cost measures are fixed and predetermined. Variations in traffic load have no impact on route selection, which makes this method unsuitable for large networks with significant traffic fluctuations. For small networks with relatively constant traffic, however, static routing can be very effective.

Dynamic routing attempts to overcome the limitations of static routing by taking network variations into account when selecting a route. Each node maintains a route directory which describes the cost associated with reaching any other node via a neighboring node.

Dynamic route directory for node 'd' in the above Figure.

The nodes periodically calculate estimates of the costs of the links to their neighboring nodes according to the statistical data they have collected (queue lengths, packet delays, traffic load, etc.), and share these estimates with the other nodes. This enables the nodes to update their route directories objectively so that they reflect the current state of the network. The advantage of dynamic routing is its potential to improve performance and reduce congestion by choosing better routes; its disadvantage is its inherent complexity. Nevertheless, dynamic routing algorithms enjoy widespread use because they have proved generally more effective than other algorithms. Like static routing, dynamic routing can be used with both the datagram and the virtual circuit approach.

Routing (or routeing in UK English) is the process of selecting paths in a network along which to send network traffic. Routing is performed for many kinds of networks, including the telephone network, electronic data networks (such as the Internet), and transportation networks. This section is concerned primarily with routing in electronic data networks using packet switching technology. In packet-switching networks, routing directs forwarding, the transit of logically addressed packets from their source toward their ultimate destination through intermediate nodes, typically hardware devices called routers, bridges, gateways, firewalls, or switches. Ordinary computers with multiple network cards can also forward packets and perform routing, though they are not specialized hardware and may suffer from limited performance. The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Constructing the routing tables, which are held in the routers' memory, is therefore very important for efficient routing.
Most routing algorithms use only one network path at a time, but multipath routing techniques enable the use of multiple alternative paths. Routing, in a narrower sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Because structured addresses allow a single routing-table entry to represent the route to a group of devices, structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging) in large networks and has become the dominant form of addressing on the Internet, though bridging is still widely used within localized environments.

Delivery semantics

Routing schemes differ in their delivery semantics:
- unicast delivers a message to a single specified node;
- broadcast delivers a message to all nodes in the network;
- multicast delivers a message to a group of nodes that have expressed interest in receiving the message;
- anycast delivers a message to any one out of a group of nodes, typically the one nearest to the source.

Unicast is the dominant form of message delivery on the Internet, and this section focuses on unicast routing algorithms.

6.7.1 Topology distribution

Small networks may use manually configured routing tables, but larger networks involve complex topologies that may change rapidly, making the manual construction of routing tables unfeasible. Nevertheless, most of the public switched telephone network (PSTN) uses precomputed routing tables, with fallback routes if the most direct route becomes blocked (see routing in the PSTN). Adaptive routing attempts to solve this problem by constructing routing tables automatically, based on information carried by routing protocols, allowing the network to act nearly autonomously in avoiding network failures and blockages. Dynamic routing dominates the Internet. However, the configuration of routing protocols often requires a skilled touch; one should not suppose that networking technology has developed to the point of completely automating routing.

Distance vector algorithms

Distance vector algorithms use the Bellman-Ford algorithm. This approach assigns a number, the cost, to each of the links between the nodes in the network. Nodes send information from point A to point B via the path that results in the lowest total cost (i.e., the sum of the costs of the links between the nodes used). The algorithm operates in a very simple manner. When a node first starts, it knows only its immediate neighbors and the direct cost involved in reaching them. (This information, namely the list of destinations, the total cost to each, and the next hop to send data to get there, makes up the routing table, or distance table.) Each node, on a regular basis, sends to each neighbor its own current idea of the total cost to get to all the destinations it knows of.
The neighboring nodes examine this information and compare it with what they already know; anything that represents an improvement on what they already have, they insert into their own routing tables. Over time, all the nodes in the network discover the best next hop and the best total cost for all destinations. When one of the nodes involved goes down, the nodes that used it as their next hop for certain destinations discard those entries and create new routing-table information. They then pass this information to all adjacent nodes, which repeat the process. Eventually all the nodes in the network receive the updated information and discover new paths to all the destinations that they can still reach.
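One round of the exchange described above can be sketched as a table merge: a node adds the link cost to each cost its neighbour advertises and keeps any entry that improves on what it already knows. The node names and costs below are invented for illustration.

```python
def merge(table, neighbour, link_cost, advert):
    """Merge a neighbour's advertised costs into this node's distance table.

    Both table and advert map destination -> (total cost, next hop).
    """
    for dest, (cost, _) in advert.items():
        candidate = link_cost + cost
        if dest not in table or candidate < table[dest][0]:
            table[dest] = (candidate, neighbour)   # found an improvement
    return table

# Node a initially knows only its direct neighbour b, at cost 1.
table_a = {"b": (1, "b")}
# b advertises what it can reach: itself at 0, c at 2, d at 5.
advert_b = {"b": (0, "b"), "c": (2, "c"), "d": (5, "d")}

merge(table_a, "b", 1, advert_b)
print(table_a)   # {'b': (1, 'b'), 'c': (3, 'b'), 'd': (6, 'b')}
```

Repeating this merge whenever an advertisement arrives is exactly the distributed Bellman-Ford computation; after enough rounds every node holds the least total cost and best next hop for every destination.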

Link-state algorithms

When applying link-state algorithms, each node uses as its fundamental data a map of the network in the form of a graph. To produce this, each node floods the entire network with information about the other nodes it can connect to, and each node independently assembles this information into a map. Using this map, each router then independently determines the least-cost path from itself to every other node using a standard shortest-path algorithm such as Dijkstra's algorithm. The result is a tree rooted at the current node such that the path through the tree from the root to any other node is the least-cost path to that node. This tree then serves to construct the routing table, which specifies the best next hop to get from the current node to any other node.

6.7.2 Path vector protocol

Distance vector and link-state routing are both intra-domain routing protocols: they are used inside an autonomous system, but not between autonomous systems. Both become intractable in large networks and cannot be used in inter-domain routing. Distance vector routing is subject to instability if there are more than a few hops in the domain, while link-state routing needs a huge amount of resources to calculate the routing tables and creates heavy traffic because of flooding. Path vector routing is used for inter-domain routing. It is similar to distance vector routing. In path vector routing we assume there is one node (there can be many) in each autonomous system that acts on behalf of the entire autonomous system. This node is called the speaker node. The speaker node creates a routing table and advertises it to the speaker nodes in the neighboring autonomous systems. The idea is the same as in distance vector routing, except that only the speaker nodes in each autonomous system can communicate with each other. The speaker node advertises the path, not the metrics of the nodes, in its autonomous system or other autonomous systems.
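The link-state computation described above can be sketched with Dijkstra's algorithm: given the assembled map, a node computes least-cost paths to every destination and reads off the best next hop from the resulting tree. The weighted graph below is an invented example.

```python
import heapq

graph = {                       # node -> {neighbour: link cost}
    "a": {"b": 1, "c": 4},
    "b": {"a": 1, "c": 2, "d": 5},
    "c": {"a": 4, "b": 2, "d": 1},
    "d": {"b": 5, "c": 1},
}

def shortest_paths(src):
    """Dijkstra's algorithm: least cost and predecessor for every node."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, already improved
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    return dist, prev

def next_hop(src, dst, prev):
    """Walk the shortest-path tree back from dst to find src's first hop."""
    node = dst
    while prev[node] != src:
        node = prev[node]
    return node

dist, prev = shortest_paths("a")
print(dist["d"], next_hop("a", "d", prev))   # 4 b  (a-b-c-d costs 1+2+1)
```

Each router runs this computation independently on the same flooded map, so all routers agree on consistent least-cost paths without exchanging their routing tables.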
Path vector routing is discussed in RFC 1322; the path vector routing algorithm is somewhat similar to the distance vector algorithm in the sense that each border router advertises the destinations it can reach to its neighboring routers.

Comparison of routing algorithms

Distance-vector routing protocols are simple and efficient in small networks and require little, if any, management. However, naïve distance-vector algorithms do not scale well and have poor convergence properties. This has led to the development of more complex but more scalable algorithms for use in large networks. Interior routing mostly uses link-state routing protocols such as OSPF and IS-IS. A more recent development is that of loop-free distance-vector protocols (e.g., EIGRP). Loop-free distance-vector protocols are as robust and manageable as distance-vector protocols, while avoiding counting to infinity and hence having good worst-case convergence times.

6.7.3 Path selection

A routing metric is a value used by a routing algorithm to determine whether one route should perform better than another. Metrics can cover such information as bandwidth, network delay, hop count, path cost, load, MTU, reliability, and communication cost. The routing table stores only the best possible routes, while link-state or topological databases may store all other information as well. Because a routing metric is specific to a given routing protocol, multi-protocol routers must use some external heuristic to select between routes learned from different routing protocols.

Cisco routers, for example, attribute a value known as the administrative distance to each route, where a smaller administrative distance indicates a route learned from a supposedly more reliable protocol. A local network administrator can, in special cases, set up host-specific routes to a particular machine; this provides more control over network usage, permits testing, and improves overall security, and can come in handy when debugging network connections or routing tables.

Multiple agents

In some networks, routing is complicated by the fact that no single entity is responsible for selecting paths: instead, multiple entities are involved in selecting paths, or even parts of a single path. Complications or inefficiency can result if these entities choose paths that selfishly optimize their own objectives, which may conflict with the objectives of other participants. A classic example involves traffic in a road system, in which each driver selfishly picks a path that minimizes her own travel time. With such selfish routing, the equilibrium routes can be longer than optimal for all drivers. In particular, Braess' paradox shows that adding a new road can lengthen travel times for all drivers. The Internet is partitioned into autonomous systems (ASs), such as internet service providers (ISPs), each of which has control over routes involving its network, at multiple levels. First, AS-level paths are selected via the BGP protocol, which produces a sequence of ASs through which packets will flow. Each AS may have multiple paths, offered by neighboring ASs, from which to choose.[2] Its decision often involves business relationships with these neighboring ASs, which may be unrelated to path quality or latency. Second, once an AS-level path has been selected, there are often multiple corresponding router-level paths, in part because two ISPs may be connected in multiple locations.
In choosing the single router-level path, it is common practice for each ISP to employ hot-potato routing: sending traffic along the path that minimizes the distance through the ISP's own network, even if that path lengthens the total distance to the destination. Consider two ISPs, A and B, which each have a presence in New York, connected by a fast link with latency 5 ms, and which each have a presence in London, also connected by a 5 ms link. Suppose both ISPs have trans-Atlantic links connecting their two networks, but A's link has latency 100 ms and B's has latency 120 ms. When routing a message from a source in A's London network to a destination in B's New York network, A may choose to immediately send the message to B in London. This saves A the work of carrying it along its expensive trans-Atlantic link, but causes the message to experience a latency of 125 ms, when the other route would have been 20 ms faster.

6.7.4 Routing protocol

A routing protocol is a protocol that specifies how routers communicate with each other to disseminate the information that allows them to select routes between any two nodes on a network. Typically, each router has prior knowledge only of its immediate neighbors; a routing protocol shares this information so that routers gain knowledge of the network topology at large. The term routing protocol may refer more specifically to a protocol operating at Layer 3 of the OSI model which similarly disseminates topology information between routers. Many routing protocols used in the public Internet are defined in documents called RFCs.
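The latency figures in the hot-potato example above can be checked step by step, using the link latencies given in the text.

```python
# Link latencies from the example, in milliseconds.
london_link = 5        # A-B peering link in London
new_york_link = 5      # A-B peering link in New York
a_transatlantic = 100  # A's own trans-Atlantic link
b_transatlantic = 120  # B's trans-Atlantic link

# Hot-potato route: A hands the message off to B in London,
# so B's slower trans-Atlantic link carries it.
hot_potato = london_link + b_transatlantic        # 5 + 120

# Alternative: A carries the message across its own faster link
# and hands it to B in New York.
cold_potato = a_transatlantic + new_york_link     # 100 + 5

print(hot_potato, cold_potato, hot_potato - cold_potato)   # 125 105 20
```

The hot-potato route is cheaper for A (it minimizes the distance through A's own network) but 20 ms slower end to end, which is exactly the conflict of objectives the multiple-agents discussion describes.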

There are three major types of routing protocols, some with variants: link-state routing protocols, path vector protocols, and distance vector routing protocols. The specific characteristics of routing protocols include the manner in which they either prevent routing loops from forming or break routing loops if they do form, and the manner in which they determine preferred routes from a sequence of hop costs and other preference factors.

Routed versus routing protocols

Confusion often arises between routing protocols and routed protocols. While routing protocols help the router decide on which paths to send traffic, routed protocols are responsible for the actual transfer of traffic between Layer 3 devices. Specifically, a routed protocol is any network protocol that provides enough information in its network-layer address to allow a packet to be forwarded from one host to another based on the addressing scheme, without knowing the entire path from source to destination. Routed protocols define the format and use of the fields within a packet. Packets are generally conveyed from end system to end system. Almost all Layer 3 protocols, and those layered over them, are routable, with IP being an example. Layer 2 protocols such as Ethernet are necessarily non-routable, since they contain only a link-layer address, which is insufficient for routing; some higher-level protocols based directly on these without the addition of a network-layer address, such as NetBIOS, are also non-routable. In some cases, routing protocols can themselves run over routed protocols: for example, BGP runs over TCP; care is taken in the implementation of such systems not to create a circular dependency between the routing and routed protocols. That a routing protocol runs over a particular transport mechanism does not mean that the routing protocol is of layer (N+1) if the transport mechanism is of layer (N).
Routing protocols, according to the OSI routing framework, are layer management protocols for the network layer, regardless of their transport mechanism:
- IS-IS runs over the data link layer.
- OSPF, IGRP, and EIGRP run directly over IP; OSPF and EIGRP have their own reliable transmission mechanism, while IGRP assumed an unreliable transport.
- RIP runs over UDP.
- BGP runs over TCP.

Check Your Progress
4. Define datagram.
5. What is packet switching?

6.8 Congestion Control

A network has a certain carrying capacity, denoted by the maximum number of packets that it can hold at any point in time. When this limit is approached, considerable delays are experienced in packet delivery, and the network is said to be congested. Congestion can occur in all types of networks, and uncontrolled congestion can lead to outright network failure. At the node level, congestion manifests itself as packet buffers that have approached their full capacity. This happens because the node is receiving more packets than it is passing on to the next nodes, which in turn are presumably unable to receive more packets because their buffers are full. When this situation occurs, the chances are that it will progressively get worse. The best way to deal with congestion is to avoid it. This is facilitated by putting in place measures that prevent buffer overflow. These measures include the following:

1. Reducing the load on a node by disposing of packets. As mentioned in earlier sections, packet disposal can be guided by a lifetime indicator which is eroded by the nodes that handle the packet. More blatant ways of disposing of packets may also be employed; for example, a node that receives a packet for which it has almost no buffer space may destroy it immediately.
2. Reducing the traffic destined for a heavily utilized link. Nodes can monitor the traffic on their outgoing links and ask the source host to reduce the transmission rate when they sense that a link is approaching its capacity. The request can be put to the source host using a special packet.
3. Imposing a limit on the total number of packets in the network. This approach requires some means of keeping a count of the packets in the network, and the nodes have to communicate to keep the count up to date. Although this approach ensures that the network cannot be overloaded with too many packets, it does not prevent an individual node from being overloaded.

6.8.1 Error Handling

The extent to which the network layer attempts to deal with errors is largely dependent on the type of service being offered. The datagram service offers little in this respect: if packets do not arrive because they were corrupted or lost, or if they arrive out of order, it is the responsibility of the network user to deal with these problems. The virtual circuit service, however, handles these problems transparently. The usual approach to dealing with corrupt or lost packets is to request retransmission; this issue was covered in earlier discussions. It is not uncommon for network protocols to have additional measures built in to enable them to deal better with errors. For example, some protocols also use a CRC check on the packet header. To communicate the cause of detected errors, some protocols use diagnostic packets. The subject area remains largely protocol dependent.
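The first congestion-avoidance measure above, disposing of packets when buffer space runs out, can be sketched as a drop-tail buffer: a node with a bounded buffer simply discards new arrivals once the buffer is full, shedding load instead of overflowing. This is a simplified illustration, not the behaviour of any specific protocol.

```python
class BoundedBuffer:
    """A node's packet buffer with a hard capacity and tail-drop policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = []
        self.dropped = 0

    def receive(self, packet):
        """Accept the packet if there is room; otherwise dispose of it."""
        if len(self.packets) >= self.capacity:
            self.dropped += 1           # buffer full: destroy the packet
            return False
        self.packets.append(packet)
        return True

buf = BoundedBuffer(capacity=3)
for i in range(5):                      # 5 arrivals, but room for only 3
    buf.receive(i)

print(len(buf.packets), buf.dropped)   # 3 2
```

Dropped packets are then recovered, if at all, by the retransmission mechanisms discussed under error handling, which is why disposal is an acceptable way to relieve pressure on a congested node.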
6.9 X.25
X.25 is an ITU-T standard network layer protocol for packet-switched wide area network (WAN) communication. An X.25 WAN consists of packet-switching exchange (PSE) nodes as the networking hardware, and leased lines, phone or ISDN connections as physical links. X.25 is part of the OSI protocol suite, a family of protocols that was used especially during the 1980s by telecommunication operators and in financial systems such as automated teller machines. X.25 is today to a large extent replaced by less complex and less secure protocols, especially the Internet Protocol (IP), although some telephone operators still offer X.25-based communication via the signalling (D) channel of ISDN lines.

6.9.1 History
X.25 is one of the oldest packet-switched services available. It was developed before the OSI Reference Model, but after the equivalent Network Access Layer of the DoD protocol model. Its three layers correspond closely to the lower three layers of the OSI model, and its functionality maps directly to the network layer of the OSI model, although it also supports functionality not found in the OSI network layer. X.25 was developed in the ITU-T (formerly CCITT) Study Group VII based upon a number of emerging data network projects. Various updates and additions were worked into the standard, eventually recorded in the ITU series of technical books describing the telecom systems. These books were published every fourth year with different colored covers. The X.25 specification is only part of the larger set of X-series specifications on public data networks.

FOR MORE DETAILS VISIT US ON WWW.IMTSINSTITUTE.COM OR CALL ON +91-9999554621



The Public Data Network was the common name given to the international collection of X.25 providers, typically the various national telephone companies. Their combined network had large global coverage during the 1980s and into the 1990s. X.25 remains in use for certain applications and for some marginal transmission media performance conditions. Its major application is in transaction processing, for credit card authorization and for automatic teller machines.

6.9.2 Architecture
The general concept of X.25 was to create a universal and global packet-switched network. Much of the X.25 system is a description of the rigorous error correction needed to achieve this, as well as more efficient sharing of capital-intensive physical resources. The X.25 specification defines only the interface between a subscriber (DTE) and an X.25 network (DCE). X.75, a very similar protocol to X.25, defines the interface between two X.25 networks to make connections traversing two or even more networks possible. X.25 does not specify how the network operates internally. Many X.25 network implementations used something very similar to X.25 or X.75 internally, but others used quite different protocols; this did not matter to users, since both ends presented the standard X.25 and X.75 interfaces. Interoperability problems only occurred because not all networks supported the latest set of rules from the CCITT or ITU-T. The ISO equivalent protocol to X.25, ISO 8208, is the same as X.25, but additionally includes provision for two X.25 DTEs to be directly connected to each other with no network in between. The X.25 model was based on the traditional telephony concept of establishing reliable circuits through a shared network, but using software to create "virtual calls" through the network. These calls interconnect "data terminal equipment" (DTE), providing endpoints to users which look like point-to-point connections. Each endpoint can establish many separate virtual calls to different endpoints.
For a brief period, the specification also included a connectionless datagram service, but this was dropped in the next revision. The "fast select facility" is intermediate between full call establishment and connectionless communication. Fast select is widely used in query-response transaction applications such as credit card authorization; the credit request is carried in an extended field of the call request packet, and the acceptance or declining of the charge in an extended field of the call clearing packet. Closely related to the X.25 protocol are the protocols to connect asynchronous devices (PCs or dumb terminals) to an X.25 network: X.3, X.28 and X.29. This functionality was performed using a Packet Assembler/Disassembler, or PAD (also known as a Triple-X device, referring to the three protocols used).

6.9.3 Relation to the OSI Reference Model
Although X.25 predates the OSI Reference Model (OSIRM), the physical layer of the model corresponds to the X.25 physical level; the link layer, to the X.25 link level; and the network layer, to the X.25 packet level. The X.25 link layer, LAPB, provides a reliable data path across a data link (or multiple parallel data links, multilink) which may not be reliable itself. The X.25 packet layer provides the virtual call mechanisms, running over X.25 LAPB. As long as the link layer does provide reliable data transmission, the packet layer will provide error-free virtual calls. However, the packet layer also includes mechanisms to maintain virtual calls and to signal data errors in the event that the link layer does not provide reliable data transmission. All but the earliest versions of X.25 include facilities which provide for OSI network layer addressing (NSAP




addressing, see below), in addition to PDN addressing (X.121), Telex addressing (F.69), PSTN addressing (E.163), ISDN addressing (E.164), Internet Protocol addresses (IANA ICP), and local IEEE 802.2 MAC addresses.

User device support

A TeleVideo terminal (model 925) made around 1982.

X.25 was developed in the era of dumb terminals connecting to host computers, although it also can be used for communications between computers. Instead of dialing directly "into" the host computer (which would require the host to have its own pool of modems and phone lines, and require non-local callers to make long-distance calls), the host could have an X.25 connection to a network service provider. Now dumb-terminal users could dial into the network's local "PAD" (Packet Assembly/Disassembly facility), a gateway device connecting modems and serial lines to the X.25 link as defined by the ITU-T X.29 and X.3 standards. Having connected to the PAD, the dumb-terminal user tells the PAD which host to connect to, by giving a phone-number-like address in the X.121 address format (or by giving a host name, if the service provider allows for names that map to X.121 addresses). The PAD then places an X.25 call to the host, establishing a virtual circuit. Note that X.25 provides for virtual circuits, so it appears to be a circuit-switched network, even though in fact the data itself is packet switched internally, similar to the way TCP provides virtual circuits even though the underlying data is packet switched. Two X.25 hosts could, of course, call one another directly; no PAD is involved in this case. In theory, it doesn't matter whether the X.25 caller and X.25 destination are both connected to the same carrier, but in practice it was not always possible to make calls from one carrier to another.

For the purpose of flow control, a sliding window protocol is used with a default window size of 2. The acknowledgements may have either local or end-to-end significance. A D bit (Data Delivery bit) in each data packet indicates whether the sender requires end-to-end acknowledgement.
When D=1, the acknowledgement has end-to-end significance and must take place only after the remote DTE has acknowledged receipt of the data. When D=0, the network is permitted (but not required) to acknowledge before the remote DTE has acknowledged or even received the data. While the PAD function defined by X.28 and X.29 specifically supported asynchronous character terminals, PAD equivalents were developed to support a wide range of proprietary intelligent communications devices, such as those for IBM Systems Network Architecture (SNA).

Error control

Error recovery procedures at the packet level assume that the frame level is responsible for retransmitting data received in error. Packet-level error handling focuses on re-synchronizing the information flow in calls, as well as clearing calls that have gone into unrecoverable states:

 Level 3 Reset packets, which re-initialize the flow on a virtual circuit (but do not break the virtual circuit)




 Restart packets, which clear down all switched virtual circuits on the data link and reset all permanent virtual circuits on the data link
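The sender side of the sliding-window flow control described above (default window size 2) can be sketched as follows. This is an illustrative model under assumed simplifications (cumulative acknowledgements, no sequence-number wrap-around), not an X.25 implementation:

```python
class SlidingWindowSender:
    """Toy sender: at most window_size unacknowledged packets outstanding."""

    def __init__(self, window_size: int = 2):  # X.25 default window size is 2
        self.window_size = window_size
        self.next_seq = 0   # sequence number of the next packet to send
        self.ack_base = 0   # oldest unacknowledged sequence number

    def can_send(self) -> bool:
        return self.next_seq - self.ack_base < self.window_size

    def send(self) -> int:
        if not self.can_send():
            raise RuntimeError("window full: must wait for an acknowledgement")
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def acknowledge(self, seq: int) -> None:
        # cumulative acknowledgement up to and including seq advances the window
        if seq >= self.ack_base:
            self.ack_base = seq + 1
```

With the default window of 2, the sender stalls after two packets until an acknowledgement (local or, when D=1, end-to-end) arrives.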

Addressing and virtual circuits

An X.25 modem once used to connect to the German Datex-P network.

The X.121 address consists of a three-digit Data Country Code (DCC) plus a network digit, together forming the four-digit Data Network Identification Code (DNIC), followed by the National Terminal Number (NTN) of at most ten digits. Note the use of a single network digit, seemingly allowing for only 10 network carriers per country; some countries are assigned more than one DCC to avoid this limitation. NSAP addressing was added in the X.25(1984) revision of the specification, and this enabled X.25 to better meet the requirements of OSI layer 3. Public X.25 networks didn't make use of NSAP addressing, but some carried it transparently.

For much of its history X.25 was used for permanent virtual circuits (PVCs) to connect two host computers in a dedicated link. This was common for applications such as banking, where distant branch offices could be connected to central hosts for a cost considerably lower than a permanent long-distance telephone call. X.25 was typically billed as a flat monthly service fee depending on link speed, with a price-per-packet on top of this. Link speeds varied, typically from 2400 bit/s up to 2 Mbit/s, although speeds above 64 kbit/s were uncommon in the public networks. Publicly accessible X.25 networks (CompuServe, Tymnet, Euronet, PSS, and Telenet) were set up in most countries during the 1970s and 80s to lower the cost of accessing various online services, in which the user would first interact with the network interface to set up the connection. Known as switched virtual circuits (SVCs) or "virtual calls" in public data networks (PDNs), this use of X.25 disappeared from most places fairly rapidly as long-distance charges fell in the 1990s and today's Internet started to emerge.

Obsolescence

With the widespread introduction of "perfect" quality digital phone services and error correction in modems, the overhead of X.25 was no longer worthwhile.
The result was called Frame Relay: essentially the X.25 protocol with the error correction systems removed, and somewhat better throughput as a result. The concept of virtual circuits is still used within ATM to allow for traffic engineering and network multiplexing.

X.25 today

X.25 networks are still in use throughout the world, although in dramatic decline, being largely supplanted by newer layer 2 technologies such as Frame Relay, ISDN, ATM, ADSL, and POS, and the ubiquitous layer 3 Internet Protocol. X.25, however, remains one of the only available reliable links in many portions of the developing world, where access to a PDN may be the most reliable and low-cost way to access the Internet. A variant called AX.25 is also used widely by amateur packet radio, though there has been some movement in recent years to replace it with TCP/IP. Racal Paknet, now known as Widanet, is still in operation in many regions of the world, running on an X.25 protocol base. Used as a secure wireless low-rate data transfer platform, Widanet is commonly used for GPS tracking and point-of-sale solutions. In some countries, like the Netherlands or Germany, it is possible to use a stripped version of X.25 via the D-channel of an




ISDN-2 (or ISDN BRI) connection for low volume applications such as point-of-sale terminals. But the future of this service in the Netherlands is uncertain.

X.25 packet types

Service                  DCE -> DTE                  DTE -> DCE                  VC   PVC
Call setup and clearing  Incoming Call               Call Request                X
                         Call Connected              Call Accepted               X
                         Clear Indication            Clear Request               X
                         Clear Confirmation          Clear Confirmation          X
Data and interrupt       Data                        Data                        X    X
                         Interrupt                   Interrupt                   X    X
                         Interrupt Confirmation      Interrupt Confirmation      X    X
Flow control and reset   RR                          RR                          X    X
                         RNR                         RNR                         X    X
                         REJ                         REJ                         X    X
                         Reset Indication            Reset Request               X    X
                         Reset Confirmation          Reset Confirmation          X    X
Restart                  Restart Indication          Restart Request             X    X
                         Restart Confirmation        Restart Confirmation        X    X
Diagnostic               Diagnostic                                              X    X
Registration             Registration Confirmation   Registration Request        X    X
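The X.121 address structure described under "Addressing and virtual circuits" above (a four-digit DNIC formed from the three-digit DCC plus one network digit, followed by an NTN of at most ten digits) can be unpacked with a short helper. The function name and dictionary keys are illustrative, not part of any standard API:

```python
def parse_x121(address: str) -> dict:
    """Split an X.121 address string into its component fields."""
    if not address.isdigit() or not 4 <= len(address) <= 14:
        raise ValueError("X.121 address must be 4 to 14 digits")
    return {
        "dcc": address[:3],           # three-digit Data Country Code
        "network_digit": address[3],  # single network digit
        "dnic": address[:4],          # Data Network Identification Code
        "ntn": address[4:],           # National Terminal Number (up to 10 digits)
    }
```

For example, in a hypothetical address 23421234567, the DNIC is 2342 and the NTN is 1234567.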

6.9.4 X.25 details
The minimum data field length the network must support is 128 octets per packet. However, the network may allow the selection of the maximum length, in the range 16 to 4096 octets (2^n values only), per virtual circuit by negotiation as part of the call setup procedure. The maximum length may be different at the two ends of the virtual circuit.
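Since the negotiable maximum data field length is restricted to powers of two between 16 and 4096 octets, a validity check can be sketched as follows (names are illustrative):

```python
# 2^4 .. 2^12 octets: 16, 32, 64, 128, 256, 512, 1024, 2048, 4096
VALID_X25_PACKET_SIZES = [2 ** n for n in range(4, 13)]

def is_valid_packet_size(octets: int) -> bool:
    """True if the proposed maximum data field length is negotiable."""
    return octets in VALID_X25_PACKET_SIZES
```

The mandatory default of 128 octets is, of course, always valid.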

 Data terminal equipment constructs control packets which are encapsulated into data packets. The packets are sent to the data circuit-terminating equipment, using the LAPB protocol.

 Data circuit-terminating equipment strips the layer-2 headers in order to encapsulate the packets into the internal network protocol.

6.10 Internetworking Concepts and X.25 Architectural Models
It is an inevitable aspect of communication networks that, because of business, organizational, political, and historical reasons, there exist many different types of networks throughout the world. More and more, it is becoming desirable, if not necessary, to interconnect these networks. Take, for example, the LANs operated by many universities around the world. These usually differ in many respects, but there has been a strong need to interconnect them so that the international academic and research community can freely exchange information. The problem of interconnecting a set of independent networks is called internetworking. Each of the participating networks is referred to as a subnetwork (or subnet). The role of the Interworking Units (IWUs) is to carry out protocol conversion between the subnets. IWUs are used for interconnecting networks that use the same architecture but employ different protocols. Because of this, they operate at the network layer and are commonly referred to as routers. Another type of protocol converter is a gateway. Unlike a router, it is used to interconnect networks of different architectures. Gateways operate at layers above the network layer and usually encompass all the layers.

Internetworking.




6.10.1 Network Sublayers
To better address the internetworking problem, the network layer is further subdivided into three sublayers. This subdivision is intended to isolate the functions of internetworking from the functions of intranetworking. The latter is handled by the bottom sublayer, and the former by the upper two sublayers. IWUs typically need to implement all three sublayers.

Network sublayers.

Depending on the packet switching mode employed by the subnets, three situations are possible:

1. All subnets use virtual circuits. In this case a network-level virtual circuit protocol is used for internetworking. The IWUs perform the protocol conversion between the subnets' network-level protocols and the internetworking protocol.
2. All subnets use datagrams. In this case a network-level datagram protocol is used for internetworking. The IWUs perform the protocol conversion between the subnets' network-level protocols and the internetworking protocol.
3. Some subnets use virtual circuits, some datagrams. This situation has no easy solution. A practical way of addressing it is to use either a network-level virtual circuit protocol or a network-level datagram protocol for internetworking and require all participating subnets and IWUs that do not support that mode to implement both modes.

Given that the subnets may use different network architectures and different protocols, many incompatibilities need to be overcome by the IWUs and gateways, including the following:
 Different types of service and network user interfaces
 Different message formats
 Different addressing schemes
 Different packet switching modes
 Different routing methods
 Different error handling methods




 Different security measures
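As a toy illustration of the protocol conversion an IWU performs, consider re-encapsulating a packet from one subnet's header format into another's. Both formats and all field names here are invented for illustration; a real IWU must also reconcile addressing schemes, switching modes, and error handling, as listed above:

```python
def convert_packet(subnet_a_packet: dict) -> dict:
    """Translate a hypothetical subnet-A header into a hypothetical subnet-B header."""
    return {
        "dst": subnet_a_packet["destination_address"],  # subnet B uses shorter field names
        "src": subnet_a_packet["source_address"],
        "len": len(subnet_a_packet["data"]),            # subnet B carries an explicit length field
        "data": subnet_a_packet["data"],                # user data passes through unchanged
    }
```

The user data is untouched; only the surrounding header is rebuilt in the destination subnet's format.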

6.10.2 Network Layer Standards
Network services (discussed in Section 4.1) are defined by the ISO 8348 and CCITT X.213 standards. ISO 8880-2 and ISO 8880-3, respectively, provide standards for the provision and support of network services for the virtual circuit and datagram models. Internetworking and the internal breakdown of the network layer into sublayers is covered by the ISO 8648 standard. CCITT X.121 provides a numbering plan for data network addressing on an international basis. There are many other standards pertaining to the network layer. Below we will look at three influential networking and internetworking standards: X.25, X.75, and ISO 8473.

CCITT X.25

X.25 is probably the most widely-known network layer standard. It has been in existence since 1974 and has been revised a number of times. X.25 enjoys widespread use in the industry for connecting DTEs to packet networks, and has found its way into many network products. X.25 encompasses the bottom three layers of the OSI model. For its bottom two layers, however, it simply refers to other standards. It recommends X.21 and X.21 bis for its physical layer, and LAP-B for its data link layer. Many vendors have also used other standards instead of these in their products. Strictly speaking, X.25 provides no routing or switching functions. It simply provides an interface between DTEs and network DCEs. As far as X.25 is concerned, the network is an abstract entity, capable of providing end-to-end connections. The following figure illustrates the role of X.25 in a typical end-to-end connection. The two logical channels between DTEs and DCEs at either end are joined through the packet network to complete an external virtual circuit. X.25 makes no assumptions about the internal mode of the packet network. This may be virtual circuit or datagram.

X.25 scope and layers.

X.25 provides three types of external virtual circuits which, although similar in many respects, serve different applications:

Virtual Call (VC). Virtual calls consist of three sequential phases: (i) call setup, which involves the caller and the callee exchanging call request and call accept packets; (ii) data transfer, which involves the two ends exchanging data packets; and (iii) call clear, which involves the exchanging of call clear packets.

Permanent Virtual Circuit (PVC). The virtual circuit is permanently assigned by the network, hence the DTEs are guaranteed that a connection will always be available. There is no need for call setup or call clearing, and the data transfer is as in VC.

Fast Select Call. Call setup and data transfer are combined using the fast select facility described earlier: the user data is carried in an extended field of the call request packet, and the response in an extended field of the call accept or call clearing packet.
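The three sequential phases of a virtual call (call setup, data transfer, call clear) can be sketched as a small state machine. This is illustrative only; the actual packet exchange and error cases are omitted:

```python
class VirtualCall:
    """Toy model of the phases of an X.25 virtual call."""

    def __init__(self):
        self.state = "IDLE"

    def call_request(self):
        assert self.state == "IDLE"
        self.state = "SETUP"           # call request sent, awaiting call accept

    def call_accepted(self):
        assert self.state == "SETUP"
        self.state = "DATA_TRANSFER"   # data packets may now be exchanged

    def clear_request(self):
        assert self.state == "DATA_TRANSFER"
        self.state = "IDLE"            # call clear packets exchanged, circuit released
```

A PVC, by contrast, would start and remain in the DATA_TRANSFER state, since no setup or clearing takes place.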




X.25 packet format categories.

X.25 packets support all the network services described above, as well as three other services: diagnostic, registration, and restart. Diagnostic packets are used to communicate additional error-related information between X.25 stations. Registration packets enable X.25 subscribers to optimize the network services assigned to them by conveying appropriate requests to the network. Restart packets reset all the logical channels and all the virtual circuits; they are useful for dealing with serious network failures. Other protocols often associated with X.25 are X.3, X.28, and X.29 (collectively referred to as triple-X). These define the functions and the interfaces for a Packet Assembler/Disassembler (PAD). The role of the PAD is to allow conventional character-based terminals to be connected to X.25. It assembles the characters from a terminal into a packet ready for transmission and disassembles an incoming packet into a character sequence for terminal consumption.

The triple-X protocols.

CCITT X.75

X.25 is a subnet standard; it operates within a single network. A complementary standard, X.75, has been developed to support the internetworking of X.25 subnets, although it can also be used to interconnect subnets based on other protocols. Like X.25, X.75 consists of three layers: physical, data link, and network. For its physical and data link layers, it may use standards similar to those used by X.25 (e.g., X.21 and LAP-B). At the network level, X.75 accounts for the upper two sublayers, sitting on top of X.25 which accounts for the lower sublayer. Operationally, X.75 provides a virtual circuit end-to-end connection between two DTEs on separate subnets, by interconnecting the latter and any intermediate subnets (see Figure 4.53).




The interconnections are facilitated by Signaling Terminals (STEs) which act as partial IWUs or routers, each implementing the X.75 protocol stack. The two DTEs are connected by a virtual circuit which is composed of a number of 'smaller' virtual circuits within and in between the subnets.

6.10.3 Internetworking with X.75.

X.75 packets implement the international packet-switched services for internetworking defined by the X.75 standard. In many respects, they are similar to X.25 packets, except that they are used for virtual circuits between STEs as opposed to virtual circuits between DTEs and DCEs.

Check Your Progress
6. List out the three classes of routing algorithms.
7. Differentiate static and dynamic routing.
8. What is meant by the distance vector algorithm?

6.11 Summary
This lesson introduced you to the network layer of the OSI reference model. Now you can define a network and understand the evolution and history of networks. You have also learnt the various switching concepts used for establishing communication between computers. The generic method for establishing a path for point-to-point communication in a network is called switching. There are two general switching methods: circuit switching and packet switching. In circuit switching, two communicating stations are connected by a dedicated communication path. In packet switching, communication is discrete, in the form of packets.

6.12 Keywords
NSAP : Addresses refer to Network Service Access Points (NSAPs); these denote entities at the network layer that act as the interface to service users.
QOS : Quality of Service (QOS) denotes a set of parameters (such as error rate, delays, cost, failure likelihood, throughput) which collectively describe the quality of the network service.
HSCSD : High-Speed Circuit-Switched Data (HSCSD) is an enhancement to Circuit Switched Data, the original data transmission mechanism of the GSM mobile phone system, four times faster than GSM, with data rates up to 38.4 kbit/s.
X.21 : Sometimes referred to as X21, this interface is a specification for differential communications introduced in the mid-1970s by the ITU-T.




6.14 Check Your Progress
Note: Use the space provided below for your answers. Compare your answers with those given at the end.
1. Define Routing.
…………………………………………………………………………………………
2. What is called the circuit switching concept?
…………………………………………………………………………………………
3. What is meant by Virtual Circuit?
…………………………………………………………………………………………
4. Define Datagram.
…………………………………………………………………………………………
5. What is Packet switching?
…………………………………………………………………………………………
6. List out the three classes of routing algorithms.
…………………………………………………………………………………………
7. Differentiate static and dynamic routing.
…………………………………………………………………………………………
8. What is meant by the distance vector algorithm?
…………………………………………………………………………………………

Answers to Check Your Progress
1. Routing: The process of transferring packets received from the Data Link Layer of the source network to the Data Link Layer of the correct destination network is called routing.
2. A circuit switching network is one that establishes a fixed-bandwidth circuit (or channel) between nodes and terminals before the users may communicate, as if the nodes were physically connected with an electrical circuit.
3. The virtual circuit method (also known as connection-oriented) is closer to circuit switching. Here a complete route is worked out prior to sending data packets.
4. The datagram method (also known as connectionless) does not rely on a pre-established route; instead each packet is treated independently.
5. Packet switching is a communications method in which packets (discrete blocks of data) are routed between nodes over data links shared with other traffic.
6. There are three classes of routing algorithms:
 Flooding
 Static Routing
 Dynamic Routing




7. In static routing, a fixed routing directory is used to guide the selection of a route which remains unchanged for the duration of the connection. Dynamic routing attempts to overcome the limitations of static routing by taking network variations into account when selecting a route.
8. Distance vector algorithms use the Bellman-Ford algorithm. This approach assigns a number, the cost, to each of the links between the nodes in the network.

6.15 Further Reading
1. Leon-Garcia and Widjaja, "Communication Networks – Fundamental Concepts and Key Architectures".
2. Larry L. Peterson & Bruce S. Davie, "Computer Networks – A Systems Approach", 2nd Edition, Harcourt Asia / Morgan Kaufmann, 2000.




UNIT-7
DATAGRAM
Structure
7.0 Introduction
7.1 Objectives
7.2 Definition
7.3 IP - Unreliable Connectionless Delivery
7.3.1 ISO 8473
7.3.2 Subnetting
7.3.3 Packet Structure
7.4 Datagram
7.4.1 Example: IP Packets
7.4.2 Example: The NASA Deep Space Network
7.5 Routing IP Datagrams
7.5.1 Classification of Routing Algorithms
7.5.2 Routing Algorithms
7.6 ICMP
7.6.1 Technical Details
7.6.2 ICMP Segment Structure
7.6.3 Internet Control Message Protocol (ICMP)
7.6.4 Congestion and Datagram Flow Control
7.7 Summary
7.8 Keywords
7.9 Exercise and Questions
7.10 Check Your Progress
7.11 Further Reading




7.0 Introduction
This lesson describes the packet structure of IP unreliable connectionless delivery and gives examples of datagrams. This is followed by a description of data packets and their handling by the network layer. We will then turn our attention to the problem of interconnecting two networks, and discuss the protocol sublayering provided for this purpose. Finally, we will look at four widely-accepted standards for networking as well as internetworking.

7.1 Objectives
 Understand the packet structure of IP unreliable connectionless delivery
 Appreciate the importance of congestion control
 Appreciate the need for internetworking and the sublayers provided to support it
 Have a basic knowledge of the network layer standards X.25, X.75, IP, and ISO 8473

7.2 Definition
The Network layer converts the segments into smaller protocol data units (PDUs) called packets that the network can handle. The Network layer is connectionless in that it does not guarantee that the packet will reach its destination. It is often referred to as "send and pray": the packet is sent out on the wire and we pray that it arrives.

7.3 IP – Unreliable Connectionless Delivery
The Internet Protocol (IP) is a connectionless datagram protocol developed by the US Department of Defense Advanced Research Projects Agency (DARPA). It currently enjoys widespread use around the world. An IP network consists of a set of subnets and a set of intermediate stations which act as gateways. The subnets may have totally different characteristics. The gateways perform routing functions as well as handling protocol conversion between the subnets. IP datagrams are referred to as Internet Protocol Data Units, or IPDUs. A datagram is transported as user data within the packets of the subnets. When a packet is about to cross a subnet boundary, the intervening gateway extracts the datagram and encapsulates it into a new packet for use on the next subnet. Gateways and subnet stations maintain a (static or dynamic) routing table which depicts the next gateway the IPDU should be forwarded to. The interface for IP service users is quite simple, and consists of two primitives: send and deliver. An IP service user transmits data by issuing a send command which contains various IPDU fields as parameters. When the IPDU is delivered to its final destination, the receiving station is issued with a deliver command which contains the original data. IP provides status reporting, diagnostics, and flow control capabilities through its Internet Control Message PDUs (ICMPDUs). These are special PDUs used for exchanging status and diagnostics information between stations.




IPDU structure.
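The routing table lookup described in 7.3, where each station knows only the next gateway an IPDU should be forwarded to, can be sketched as follows. The table contents, subnet names, and gateway names are invented for illustration:

```python
# Hypothetical static routing table: destination subnet -> next gateway
ROUTING_TABLE = {
    "net-A": "gateway-1",   # IPDUs destined for subnet A go via gateway-1
    "net-B": "gateway-2",
}

def next_hop(destination_subnet: str) -> str:
    """Return the next gateway an IPDU should be forwarded to."""
    if destination_subnet not in ROUTING_TABLE:
        raise ValueError("no route to " + destination_subnet)
    return ROUTING_TABLE[destination_subnet]
```

Each station makes only a local decision; the datagram is re-encapsulated hop by hop until it reaches the destination subnet, at which point the deliver primitive hands the data to the service user.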

7.3.1 ISO 8473
ISO 8473 is a standardization and simplification of IP. Despite its many similarities to IP, the two protocols remain functionally incompatible. ISO 8473 is also referred to as the ISO Internet Protocol (IP) and the Connectionless Network Protocol (CLNP). The CLNP network service is embodied by a single service primitive, N-UNITDATA, of which there are two types (request and indication). CLNP makes no assumptions about the underlying subnetwork architecture, except that it should be able to support point-to-point transfer of data units of at least 512 octets each. For packets larger than supported by the subnet, CLNP provides the necessary segmentation and re-assembly functions. Routing can be guided by the transmitter by including a routing option in the packet, but is otherwise determined by the internetworking nodes.

CLNP service primitive.

The general structure of CLNP IPDUs is shown in Figure 4.56. These are used to represent user Network Service Data Units (NSDUs). It follows that, depending on its size, an NSDU may be represented by a single IPDU or segmented into multiple IPDUs.
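The segmentation just described can be sketched as follows: an NSDU larger than the subnet's supported data unit size is split into segments, each carrying its offset within the original NSDU, its own length, and the total length of the original NSDU (the Segment Offset, Segment Length, and Total Length fields of the IPDU). The function and dictionary keys are illustrative, not CLNP encodings:

```python
def segment_nsdu(nsdu: bytes, max_segment: int = 512) -> list:
    """Split an NSDU into IPDU-like segments of at most max_segment octets."""
    segments = []
    for offset in range(0, len(nsdu), max_segment):
        chunk = nsdu[offset:offset + max_segment]
        segments.append({
            "segment_offset": offset,     # position within the original NSDU
            "segment_length": len(chunk), # length of this segment
            "total_length": len(nsdu),    # length of the original NSDU
            "data": chunk,
        })
    return segments
```

The receiver can reassemble the NSDU from any arrival order, since each segment records its own offset and the expected total length.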




ISO 8473 IPDU structure.

The Protocol ID field uniquely identifies the IPDU from other network packets. Segmentation is facilitated by the Segment Length and Segment Offset fields. The former is set to the length of the segment for this IPDU; the latter denotes the relative position of this segment from the beginning of the original NSDU. The Total Length field denotes the total length of the original NSDU. Corrupt and expired IPDUs are discarded immediately. This causes an error report to be generated by the respective station, which is then returned to the originating station. ISO 8473 can be used in a variety of arrangements. It can be used on top of X.25, or even directly on top of LAP-B.

Special Addresses
There are some special IP addresses:
1. Broadcast Addresses. They are of two types:
(i) Limited Broadcast: It consists of all 1's, i.e., the address is 255.255.255.255. It is used only on the LAN, and not for any external network.
(ii) Directed Broadcast: It consists of the network number plus all other bits as 1's. It reaches the router corresponding to the network number, and from there it broadcasts to all the nodes in the network. This method is a major security problem and is not used anymore. So now if we find that all the bits are 1 in the host number field, the packet is simply dropped. Therefore, we can now only broadcast in our own network using Limited Broadcast.
2. Network ID = 0: It means we are referring to this network, and for local broadcast we make the host ID zero.
3. Host ID = 0: This is used to refer to the entire network in the routing table.
4. Loop-back Address: Here we have addresses of the type 127.x.y.z. A packet sent to such an address goes down to the IP layer and comes back up to the application layer on the same host. This is used to test network applications before they are used commercially.

7.3.2 Subnetting
Subnetting means organizing hierarchies within the network by dividing the host ID as per our network.
For example consider the network ID : 150.29.x.y We could organize the remaining 16 bits in any way, like : 4 bits - department

FOR MORE DETAILS VISIT US ON WWW.IMTSINSTITUTE.COM OR CALL ON +91-9999554621



4 bits - LAN
8 bits - host

This gives some structure to the host IDs. The division is not visible to the outside world; outsiders still see just the network number and the host number as a whole. The network has an internal routing table which stores information about which router to send an address to.

Now consider the case where we have 8 bits for the subnet number and 8 bits for the host number. Each router on the network must then know about all subnet numbers. This is captured by the subnet mask: we set the network number and subnet number bits to 1 and the host bits to 0.

Supernetting

This is a move towards class-less addressing. Instead of saying the network number is 24 bits, we could say that it is 21 bits (covering 8 class C networks). For example, a.b.c.d / 21 means: look only at the first 21 bits as the network address.

Addressing on the IITK Network

If we do not have a direct connection with the outside world, we can use private IP addresses (172.31.x.y) which are not published or routed to the outside world. Switches make sure that they do not broadcast packets with such addresses to the outside world. The basic reason for implementing subnetting was to limit broadcast traffic, so we may keep some subnets for security and other reasons, although if the switches could do the routing properly we would not need subnets. In the IITK network there are three subnets: CC and the CSE building are two subnets, and the rest of the campus is the third.

7.3.3 Packet Structure

The IPv4 header is laid out in 32-bit words as follows:

Version (4 bits) | Header Length (4 bits) | Type of Service (8 bits) | Total Length (16 bits)
ID (16 bits) | Flags (3 bits) | Fragment Offset (13 bits)
Time To Live (8 bits) | Protocol (8 bits) | Header Checksum (16 bits)
Source Address (32 bits)
Destination Address (32 bits)
Options
1. Header Length: Headers can vary in size, so this field is needed. The header is always a multiple of 4 bytes, and the maximum value of the field is 15, so the maximum header size is 60 bytes (20 bytes are mandatory).

2. Type Of Service (ToS): This helps the router take the right routing decisions. The structure is:

First three bits: They specify the precedence, i.e. the priority of the packet.

Next three bits:
- D bit: D stands for delay. If the D bit is set to 1, the application is delay sensitive, so we should try to route the packet with minimum delay.
- T bit: T stands for throughput. This tells us that this particular operation is throughput sensitive.
- R bit: R stands for reliability. This tells us that we should route this packet through a more reliable network.


Last two bits: The last two bits are never used. Unfortunately, no router looks at these bits, and so no application sets them nowadays.

The second word of the header handles fragmentation. If a link cannot transmit large packets, the packet is fragmented, and sufficient information is put in the header for reassembly at the destination.

3. ID Field: The source address and the ID field together identify the fragments of a unique packet: each packet has a different ID, and all fragments of one packet carry the same ID.

4. Offset: A 13-bit field giving where in the original packet the current fragment starts, measured in units of 8 bytes. With 13 bits the offset can address 2^13 x 8 = 64 KB, which matches the maximum packet size. Every fragment except the last must therefore have a size in bytes that is a multiple of 8 in order to comply with this structure. The position of a fragment is given as an offset instead of simply numbering the fragments, because re-fragmentation may occur somewhere on the path to the destination.

Fragmentation, though supported by IPv4, is not encouraged, because if even one fragment is lost the entire packet must be discarded. A quantity called the MTU (Maximum Transmission Unit) is defined for each link on the route: the size of the largest packet that can be handled by the link. The Path MTU is the size of the largest packet that can be handled by the whole path, i.e. the smallest of the MTUs along the path. Given the path MTU, we can send packets smaller than it and thus prevent fragmentation. This does not prevent it completely, because routing tables may change, leading to a change in the path.

5. Flags: It has three bits:
- M bit: If M is 1, there are more fragments on the way; if M is 0, this is the last fragment.
- DF bit: If this bit is set to 1, the packet should not be fragmented.
- Reserved bit: This bit is not used.
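As an illustration of the offset and MF mechanics described above, here is a small Python sketch (the function name and the 20-byte default header length are illustrative, not part of the protocol):

```python
def fragment(payload_len, mtu, header_len=20):
    """Split an IP payload into fragments for a link with the given MTU.

    Returns (offset_in_8_byte_units, fragment_size, more_fragments) tuples.
    Every fragment except the last must carry a multiple of 8 data bytes,
    because the 13-bit Fragment Offset field counts 8-byte units.
    """
    max_data = (mtu - header_len) // 8 * 8   # round down to a multiple of 8
    fragments, offset = [], 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        more = offset + size < payload_len   # MF flag: 1 if more fragments follow
        fragments.append((offset // 8, size, more))
        offset += size
    return fragments

# A 4000-byte payload over a link with MTU 1500 (20-byte header):
# each fragment carries at most 1480 data bytes (a multiple of 8).
print(fragment(4000, 1500))
# [(0, 1480, True), (185, 1480, True), (370, 1040, False)]
```

Note how the last fragment (offset 370 x 8 = 2960 bytes) is the only one whose size need not be a multiple of 8 and the only one with MF = 0.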

Reassembly can be done only at the destination, not at any intermediate node. This is because we are considering a datagram service, so it is not guaranteed that all the fragments of the packet will pass through any one node at which we might wish to do reassembly.

6. Total Length: It includes the IP header and everything that comes after it.

7. Time To Live (TTL): Using this field we can bound the time within which the packet should be delivered or else destroyed. In practice it is treated strictly as a number of hops: the packet should reach the destination within this number of hops. Every router decreases the value as the packet passes through it, and if the value reaches zero at a router, the packet is destroyed.

8. Protocol: This specifies the module to which the packet should be handed over (e.g. UDP or TCP); it identifies the next encapsulated protocol.

Value  Protocol
0      IPv6 Hop-by-Hop Option
1      ICMP, Internet Control Message Protocol
2      IGMP, Internet Group Management Protocol / RGMP, Router-port Group Management Protocol
3      GGP, Gateway to Gateway Protocol
4      IP in IP encapsulation
5      ST, Internet Stream Protocol
6      TCP, Transmission Control Protocol
7      UCL, CBT
8      EGP, Exterior Gateway Protocol
9      IGRP
10     BBN RCC Monitoring
11     NVP, Network Voice Protocol
12     PUP
13     ARGUS
14     EMCON, Emission Control Protocol
15     XNET, Cross Net Debugger
16     Chaos
17     UDP, User Datagram Protocol
18     TMux, Transport Multiplexing Protocol
19     DCN Measurement Subsystems
255    Reserved

9. Header Checksum: This is the usual checksum field used to detect errors. Since the TTL field changes at every router, the header checksum (covering the header up to and including the options field) is checked and recalculated at every router.

10. Source: The IP address of the source node.

11. Destination: The IP address of the destination node.

12. IP Options: The options field was created in order to allow features to be added to IP as time passes and requirements change. Currently 5 options are specified, although not all routers support them:

(i) Security: It tells how secret the information is. In theory a military router might use this field to decide not to route through certain routers. In practice no routers support this field.

(ii) Source Routing: It is used when we want the source to dictate how the packet traverses the network. It is of two types:
- Loose Source Record Routing (LSRR): It requires that the packet traverse the listed routers in the order specified, but the packet may pass through other routers as well.
- Strict Source Record Routing (SSRR): It requires that the packet traverse only the specified routers and nothing else. If this is not possible, the packet is dropped and an error message is sent to the host.

(The SSRR option uses option code 137; for LSRR the code is 131.)

(iii) Record Routing:


In this option the intermediate routers put their IP addresses in the header, so that the destination learns the entire path of the packet. Space for storing the IP addresses is reserved by the source itself. The pointer field points to the position where the next IP address is to be written, and the length field gives the number of bytes reserved by the source for the addresses. If the reserved space runs out, subsequent routers simply do not write their IP addresses.

(iv) Time Stamp Routing: This is similar to the record route option, except that nodes also add their timestamps to the packet. The new fields in this option are:
- Flags
- Overflow: It stores the number of nodes that were unable to add their timestamps to the packet. The maximum value is 15.

Format of the type/code field:

Copy Bit (1 bit) | Type of option (2 bits) | Option Number (5 bits)

- Copy bit: It says whether the option is to be copied into every fragment or not: a value of 1 stands for copying, 0 for not copying.
- Type: It is a 2-bit field. Currently specified values are 0 and 2: 0 means the option is a control option, while 2 means the option is for measurement.
- Option Number: It is a 5-bit field which specifies the option number.


For all options a length field is included so that a router not familiar with an option knows how many bytes to skip. Thus every option is of the form TLV: Type/Length/Value. This format is followed not only in IP but in nearly all major protocols.

The network layer is concerned with getting packets from the source all the way to the destination. The packets may have to make many hops at intermediate routers while reaching the destination. This is the lowest layer that deals with end-to-end transmission. In order to achieve its goals, the network layer must know about the topology of the communication network. It must also take care to choose routes that avoid overloading some of the communication lines while leaving others idle. The main functions performed by the network layer are:
- Routing
- Congestion control
- Internetworking
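The TLV walk described above can be sketched in Python. The function name is illustrative; the option types used in the example (0 = End of Option List, 1 = No-Operation, 7 = Record Route) are the standard IPv4 single-byte and TLV option codes:

```python
def parse_tlv_options(data: bytes):
    """Walk a block of Type/Length/Value encoded options.

    Each multi-byte option starts with a 1-byte type and a 1-byte total
    length (covering type, length and value), so a router that does not
    recognise a type can still skip to the next option.
    """
    options, i = [], 0
    while i < len(data):
        opt_type = data[i]
        if opt_type == 0:          # End of Option List: stop parsing
            break
        if opt_type == 1:          # No-Operation: a lone padding byte
            i += 1
            continue
        length = data[i + 1]       # total length including type and length bytes
        value = data[i + 2:i + length]
        options.append((opt_type, value))
        i += length
    return options

# Hypothetical option block: a Record Route option (type 7, length 7,
# five value bytes), followed by a NOP and End of Option List.
blob = bytes([7, 7, 0, 0, 0, 0, 0, 1, 0])
print(parse_tlv_options(blob))   # [(7, b'\x00\x00\x00\x00\x00')]
```

Because the length byte is always present, unknown option types are skipped without the router having to understand their contents.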

7.4 Datagram

Connectionless data packets are commonly referred to as datagrams, and the service provided by connectionless Layer 3 protocols is referred to as datagram service. Stateless datagram service is simpler for Layer 3 entities than connection-oriented network layer services. Because there is no state information to maintain, dynamic routing protocols can be used: if a router fails during the dialogue between two communicating hosts, neighboring routers will discover this via the routing protocols and find alternate routes which bypass the failed router.

There is a fair amount of overlap between the network layer and the LLC sublayer. Both can provide connection-oriented or connectionless services to higher layers, and to a large extent, if Layer 3 is explicitly implemented, there is no need for an LLC sublayer. The primary difference is in scope: LLC addresses and protocols are oriented toward a more local environment, whereas network layer addresses and protocols are global in scope.

7.4.1 Example: IP packets

IP packets are composed of a header and a payload. The IPv4 packet header consists of:
- 4 bits that contain the version, which specifies whether it is an IPv4 or IPv6 packet,
- 4 bits that contain the Internet Header Length, which is the length of the header in multiples of 4 bytes (e.g. a value of 5 means 20 bytes),
- 8 bits that contain the Type of Service, also referred to as Quality of Service (QoS), which describes what priority the packet should have,
- 16 bits that contain the length of the packet in bytes,
- 16 bits that contain an identification tag to help reconstruct the packet from several fragments,
- 3 bits that contain a zero, a flag that says whether the packet is allowed to be fragmented (DF: Don't Fragment), and a flag to state whether more fragments of the packet follow (MF: More Fragments),
- 13 bits that contain the fragment offset, a field to identify which fragment this packet is attached to,
- 8 bits that contain the Time To Live (TTL), the number of hops (router, computer or device along a network) the packet is allowed to pass before it dies (for example, a packet with a TTL of 16 will be allowed to cross 16 routers on the way to its destination before it is discarded),

- 8 bits that contain the protocol (TCP, UDP, ICMP, etc.),
- 16 bits that contain the Header Checksum, a number used in error detection,
- 32 bits that contain the source IP address,
- 32 bits that contain the destination address.

After those, optional fields of varied length can be added, which can change based on the protocol used; then the data the packet carries is added. An IP packet has no trailer. However, an IP packet is often carried as the payload inside an Ethernet frame, which has its own header and trailer.

Delivery not guaranteed

Many networks do not provide guarantees of delivery, non-duplication of packets, or in-order delivery of packets; the UDP protocol of the Internet is an example. However, it is possible to layer a transport protocol on top of the packet service that can provide such protection; TCP and UDP are the best-known examples of layer 4, the Transport Layer, of the seven-layer OSI model. The header of a packet specifies the data type, packet number, total number of packets, and the sender's and receiver's IP addresses. The term frame is sometimes used to refer to a packet exactly as transmitted over the wire or radio.

7.4.2 Example: the NASA Deep Space Network

The Consultative Committee for Space Data Systems (CCSDS) packet telemetry standard defines the protocol used for the transmission of spacecraft instrument data over the deep-space channel. Under this standard, an image or other data sent from a spacecraft instrument is transmitted using one or more packets.

[Figure: The difference between frames and packets as used by the NASA Deep Space Network.]

CCSDS packet definition: A packet is a block of data whose length can vary between successive packets, ranging from 7 to 65,542 bytes, including the packet header.
- Packetized data are transmitted via frames, which are fixed-length data blocks. The size of a frame, including frame header and control information, can range up to 2048 bytes.
- Packet sizes are fixed during the development phase.

Because packet lengths are variable but frame lengths are fixed, packet boundaries usually do not coincide with frame boundaries.
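A minimal sketch of why the boundaries drift, assuming a 2048-byte frame and some invented packet lengths:

```python
FRAME_SIZE = 2048   # fixed frame length, as in the CCSDS description above

def frame_of(byte_offset, frame_size=FRAME_SIZE):
    """Return (frame index, offset within that frame) for a byte of the stream."""
    return byte_offset // frame_size, byte_offset % frame_size

# Hypothetical packet lengths in a telemetry stream:
packet_lengths = [1200, 900, 3000, 500]
offset = 0
for n, length in enumerate(packet_lengths):
    start, end = offset, offset + length - 1
    print(f"packet {n}: starts in frame {frame_of(start)[0]}, "
          f"ends in frame {frame_of(end)[0]}")
    offset += length
```

Already the second packet straddles frames 0 and 1, so a lost frame can destroy parts of two (or more) packets.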


Telecom processing notes

Data in a frame typically are protected from channel errors by error-correcting codes.
- Even when the channel errors exceed the correction capability of the error-correcting code, the presence of errors is nearly always detected, either by the error-correcting code itself or by a separate error-detecting code.
- Frames for which uncorrectable errors are detected are marked as undecodable and typically are deleted.

Handling data loss

Deleted undecodable whole frames are the principal type of data loss that affects compressed data sets. There generally would be little to gain from attempting to use compressed data from a frame marked as undecodable.
- When errors are present in a frame, the bits decoded before the first bit error remain intact, but all subsequent decoded bits in the segment usually are completely corrupted; a single bit error is often just as disruptive as many bit errors.
- Furthermore, compressed data usually are protected by powerful, long-block-length error-correcting codes, which are the types of codes most likely to yield substantial fractions of bit errors throughout those frames that are undecodable.

Thus, frames with detected errors would be essentially unusable even if they were not deleted by the frame processor. This data loss can be compensated for with the following mechanisms:
- If an erroneous frame escapes detection, the decompressor will blindly use the frame data as if they were reliable, whereas in the case of detected erroneous frames, the decompressor can base its reconstruction on incomplete, but not misleading, data.
- Fortunately, it is extremely rare for an erroneous frame to go undetected. For frames coded by the CCSDS Reed–Solomon code, fewer than 1 in 40,000 erroneous frames can escape detection. All frames not employing the Reed–Solomon code use a cyclic redundancy check (CRC) error-detecting code, which has an undetected frame-error rate of less than 1 in 32,000.
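As a small illustration of CRC-based error detection, the sketch below uses Python's `binascii.crc32` as a stand-in for the frame-level code (the payload is invented): flipping even a single bit changes the checksum, so the corruption is detected.

```python
import binascii

frame = b"telemetry frame payload"
crc = binascii.crc32(frame)            # checksum computed by the sender

# Simulate a single-bit channel error in the received copy.
corrupted = bytearray(frame)
corrupted[3] ^= 0x01

# The receiver recomputes the CRC and compares it with the transmitted one.
print(binascii.crc32(bytes(corrupted)) == crc)   # False: the error is detected
```

A CRC only detects errors; it cannot correct them, which is why frames failing the check are marked undecodable and deleted.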

7.5 Routing IP Datagrams

Routing is the process of forwarding a packet in a network so that it reaches its intended destination. The main goals of routing are:

1. Correctness: The routing should be done properly and correctly so that packets reach their proper destination.
2. Simplicity: The routing should be done in a simple manner, keeping the overhead as low as possible; with increasing complexity of the routing algorithms the overhead also increases.
3. Robustness: Once a major network becomes operative, it may be expected to run continuously for years without any failures. The algorithms designed for routing should be robust enough to handle hardware and software failures, and should be able to cope with changes in topology and traffic without requiring all jobs in all hosts to be aborted and the network rebooted every time some router goes down.
4. Stability: The routing algorithms should be stable under all possible circumstances.


5. Fairness: Every node connected to the network should get a fair chance to transmit its packets. This is generally done on a first-come first-served basis.
6. Optimality: The routing algorithms should be optimal in terms of throughput and of minimizing mean packet delay. There is a trade-off here, and one has to choose according to the requirements.

Check Your Progress
1. Define Internetworking.

7.5.1 Classification of Routing Algorithms

The routing algorithms may be classified as follows:

1. Adaptive Routing Algorithms: These algorithms change their routing decisions to reflect changes in the topology and in the traffic. They get their routing information from adjacent routers or from all routers. The optimization parameters are the distance, the number of hops and the estimated transit time. They can be further classified as follows:

(i) Centralized: Some central node in the network obtains complete information about the network topology, the traffic and the other nodes, and then transmits this information to the respective routers. The advantage is that only one node is required to keep the information; the disadvantage is that if the central node goes down the entire network is down, i.e. it is a single point of failure.

(ii) Isolated: The node decides the routing without seeking information from other nodes. The sending node does not know the status of a particular link, so a packet may be sent through a congested route, resulting in delay. Some examples of this type of routing algorithm are:

a. Hot Potato: When a packet arrives at a node, the node tries to get rid of it as fast as it can, by putting it on the shortest output queue without regard to where that link leads. A variation of this algorithm is to combine static routing with the hot potato algorithm: when a packet arrives, the routing algorithm takes into account both the static weights of the links and the queue lengths.

b. Backward Learning: The routing tables at each node get modified by information from incoming packets. One way to implement backward learning is to include the identity of the source node in each packet, together with a hop counter that is incremented on each hop. When a node receives a packet on a particular line, it notes the number of hops the packet has taken to reach it from the source node. If the previously stored hop count is better than the current one, nothing is done; if the current value is better, it is stored for future use. The problem with this is that when the best route goes down, the node cannot recall the second-best route. Hence all nodes have to forget the stored information periodically and start all over again.

c. Distributed: The node receives information from its neighboring nodes and then decides which way to send the packet. The disadvantage is that if something changes in the interval between receiving the information and sending the packet, the packet may be delayed.

2. Non-Adaptive Routing Algorithms: These algorithms do not base their routing decisions on measurements or estimates of the current traffic and topology. Instead, the route from one node to another is computed in advance, off-line, and downloaded to the routers when the network is booted. This is also known as static routing. It can be further classified as:


(i) Flooding: In flooding, every incoming packet is sent out on every outgoing line except the one on which it arrived. One problem with this method is that packets may go in a loop; as a result, a node may receive several copies of a particular packet, which is undesirable. Some techniques adopted to overcome these problems are as follows:

Sequence Numbers: Every packet is given a sequence number. When a node receives a packet, it looks at the source address and sequence number. If the node finds that it has already forwarded the same packet earlier, it does not transmit the packet again and simply discards it.

Hop Count: Every packet has a hop count associated with it. This is decremented (or incremented) by one by each node that sees it. When the hop count becomes zero (or reaches a maximum possible value), the packet is dropped.

Spanning Tree: The packet is sent only on those links that lead to the destination, by constructing a spanning tree rooted at the source. This avoids loops in transmission but is possible only when all the intermediate nodes have knowledge of the network topology.

Flooding is not practical for general kinds of applications, but in cases where a high degree of robustness is desired, such as in military applications, flooding is of great help.

(ii) Random Walk: In this method a packet is sent by the node to one of its neighbors at random. This algorithm is highly robust, and when the network is highly interconnected it has the property of making excellent use of alternative routes. It is usually implemented by sending the packet onto the least-queued link.

Delta Routing

Delta routing is a hybrid of the centralized and isolated routing algorithms. Each node computes the cost of each line (i.e. some function of the delay, queue length, utilization, bandwidth, etc.) and periodically sends a packet to the central node giving it these values; the central node then computes the k best paths from node i to node j. Let Cij1 be the cost of the best i-j path, Cij2 the cost of the next best path, and so on. If Cijn - Cij1 < delta (where Cijn is the cost of the n'th best i-j path and delta is some constant), then path n is regarded as equivalent to the best i-j path, since their costs differ by so little. As delta -> 0 this algorithm becomes centralized routing, and as delta -> infinity all paths become equivalent.

Multipath Routing

In the above algorithms it has been assumed that there is a single best path between any pair of nodes and that all traffic between them should use it. In many networks, however, there are several paths between pairs of nodes that are almost equally good. Sometimes, in order to improve performance, multiple paths between a single pair of nodes are used. This technique is called multipath routing or bifurcated routing. Each node maintains a table with one row for each possible destination node. A row gives the best, second best, third best, etc. outgoing line for that destination, together with a relative weight. Before forwarding a packet, the node generates a random number and then chooses among the alternatives, using the weights as probabilities. The tables are worked out manually, loaded into the nodes before the network is brought up, and not changed thereafter.

Hierarchical Routing

In this method of routing, the nodes are divided into regions based on hierarchy. A particular node can communicate with nodes at the same hierarchical level or with the nodes at a lower level and


directly under it. Here, the path from any source to a destination is fixed, and is exactly one if the hierarchy is a tree.

Non-Hierarchical Routing

In this type of routing, interconnected networks are viewed as a single network, where bridges, routers and gateways are just additional nodes.
- Every node keeps information about every other node in the network.
- In the case of adaptive routing, the routing calculations are done and updated for all the nodes.

The above two points are also the disadvantages of non-hierarchical routing, since the table sizes and the routing calculations become too large as the networks get bigger. So this type of routing is feasible only for small networks.

Hierarchical Routing

This is essentially a 'divide and conquer' strategy. The network is divided into different regions, and a router for a particular region knows only about its own domain and the other routers. Thus, the network is viewed at two levels:

1. The sub-network level, where each node in a region has information about its peers in the same region and about the region's interface with other regions. Different regions may have different 'local' routing algorithms. Each local algorithm handles the traffic between nodes of the same region and also directs outgoing packets to the appropriate interface.

2. The network level, where each region is considered as a single node connected to its interface nodes. The routing algorithms at this level handle the routing of packets between two interface nodes, and are isolated from intra-regional transfer.

Networks can be organized in hierarchies of many levels, e.g. local networks of a city at one level, the cities of a country at the level above it, and finally the network of all nations.

In hierarchical routing, the interfaces need to store information about:
- All nodes in their region which are at the level below.
- Their peer interfaces.
- At least one interface at the level above, for outgoing packets.

Advantages of hierarchical routing:
- Smaller routing tables.
- Substantially fewer calculations and updates of routing tables.

Disadvantage:
- Once the hierarchy is imposed on the network, it is followed, and the possibility of direct paths is ignored. This may lead to sub-optimal routing.
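A toy sketch of the two-level idea, with invented region names and port labels: a router keeps exact entries only for nodes in its own region, plus one entry per foreign region.

```python
# Hypothetical router in region "A" (the region is the first character of a
# node name in this toy naming scheme).
local_table = {"A2": "port1", "A3": "port2"}   # one entry per node in our region
region_table = {"B": "port3", "C": "port3"}    # one entry per other region

def next_hop(destination: str) -> str:
    region = destination[0]
    if region == "A":                  # intra-region traffic: exact lookup
        return local_table[destination]
    return region_table[region]        # inter-region: send toward the region's interface

print(next_hop("A3"))   # port2
print(next_hop("B7"))   # port3 - every node of region B shares one entry
```

Note how the table size grows with the number of local nodes plus the number of regions, not with the total number of nodes, which is exactly the advantage listed above.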

Source Routing

Source routing is similar in concept to virtual circuit routing. It is implemented as follows:
- Initially, a path between the nodes wishing to communicate is found, either by flooding or by any other suitable method.
- This route is then specified in the header of each packet routed between these two nodes. A route may also be specified partially, or in terms of some intermediate hops.

Advantages:
- Bridges do not need to look up their routing tables, since the path is already specified in the packet itself.
- The throughput of the bridges is higher, and this may lead to better utilization of bandwidth once a route is established.

Disadvantages:
- Establishing the route initially needs an expensive search method like flooding.
- To cope with dynamic relocation of nodes in a network, frequent updates of the tables are required; otherwise all packets would be sent in the wrong direction. This too is expensive.
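A minimal sketch of forwarding under source routing, with a hypothetical packet representation that carries the full route and a hop index in its header; no routing-table lookup is needed at intermediate nodes:

```python
def forward(packet):
    """Advance a source-routed packet one hop.

    The complete route travels in the packet header, so each node simply
    consumes the next entry. Returns the next node's identity, or None
    once the packet has arrived at its final destination.
    """
    route = packet["route"]
    if packet["hop"] + 1 >= len(route):
        return None                     # already at the last node of the route
    packet["hop"] += 1
    return route[packet["hop"]]

# A packet from S to D routed via R1 and R2:
pkt = {"route": ["S", "R1", "R2", "D"], "hop": 0}
path = []
nxt = forward(pkt)
while nxt is not None:
    path.append(nxt)
    nxt = forward(pkt)
print(path)   # ['R1', 'R2', 'D']
```

The flip side, as noted above, is that the route must be discovered and written into the header before any packet can be sent.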

Policy-Based Routing

In this type of routing, certain restrictions are put on the type of packets accepted and sent; e.g., the IITK router may decide to handle traffic pertaining to its departments only, and reject packets from other routes. This kind of routing is used for links with very low capacity or for security purposes.

Shortest Path Routing

Here, the central question is: how do we determine the optimal path for routing? Various algorithms are used to determine optimal routes with respect to some predetermined criteria. A network is represented as a graph, with its terminals as nodes and the links as edges. A 'length' is associated with each edge, representing the cost of using the link for transmission: the lower the cost, the more suitable the link. The cost is determined according to the criterion to be optimized. Some of the important ways of determining the cost are:
- Minimum number of hops: If each link is given a unit cost, the shortest path is the one with the minimum number of hops. Such a route is easily obtained by a breadth-first search. This is easy to implement but ignores load, link capacity, etc.
- Transmission and propagation delays: If the cost is a function of transmission and propagation delays, it reflects the link capacities and the geographical distances. However, these costs are essentially static and do not consider varying load conditions.
- Queuing delays: If the cost of a link is determined by its queuing delays, it takes care of varying load conditions, but not of propagation delays.

Ideally, the cost parameter should consider all of the above factors, and it should be updated periodically to reflect changes in loading conditions. However, if the routes are changed according to the load, the load changes again; this feedback effect between routing and load can lead to undesirable oscillations and sudden swings.

7.5.2 Routing Algorithms

As mentioned above, the shortest paths are calculated using suitable algorithms on the graph representations of the networks. Let the network be represented by the graph G(V, E) and let the


number of nodes be 'N'. For all the algorithms discussed below, the costs associated with the links are assumed to be positive. A node has zero cost w.r.t. itself. Further, all the links are assumed to be symmetric, i.e. if di,j = cost of the link from node i to node j, then di,j = dj,i. The graph is assumed to be complete: if there exists no edge between two nodes, then a link of infinite cost is assumed. The algorithms given below find the costs of the paths from all nodes to a particular node; the problem is equivalent to finding the cost of paths from a source to all destinations.

Bellman-Ford Algorithm

This algorithm iterates on the number of edges in a path to obtain the shortest path. Since the number of hops possible is limited (cycles are implicitly not allowed), the algorithm terminates, giving the shortest path.

Notation:
di,j   = Length of the link between nodes i and j, indicating the cost of the link.
h      = Number of hops.
D[i,h] = Shortest path length from node i to node 1, using up to h hops.
D[1,h] = 0 for all h.

Algorithm:
Initial condition : D[i,0] = infinity, for all i (i != 1)
Iteration         : D[i,h+1] = min { di,j + D[j,h] }, over all values of j
Termination       : the algorithm terminates when D[i,h] = D[i,h+1] for all i
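The iteration above can be sketched directly in Python. This is an illustrative toy implementation; the dictionary-based graph representation and the 3-node example costs are assumptions for demonstration:

```python
INF = float('inf')

def bellman_ford_to_node1(d, n):
    """Shortest path cost from every node to node 1 (Bellman-Ford).

    d[i][j] is the positive cost of link i-j (INF if absent, 0 for i == j);
    nodes are numbered 1..n.  D[i] plays the role of D[i,h] in the text.
    """
    D = {i: 0 if i == 1 else INF for i in range(1, n + 1)}
    for _ in range(n - 1):                       # no shortest path exceeds N-1 hops
        new_D = {i: min(d[i][j] + D[j] for j in range(1, n + 1))
                 for i in range(1, n + 1)}
        if new_D == D:                           # termination: D[i,h] == D[i,h+1]
            break
        D = new_D
    return D

# Hypothetical 3-node network: link costs 1-2: 4, 2-3: 1, 1-3: 2
d = {1: {1: 0, 2: 4, 3: 2},
     2: {1: 4, 2: 0, 3: 1},
     3: {1: 2, 2: 1, 3: 0}}
print(bellman_ford_to_node1(d, 3))   # {1: 0, 2: 3, 3: 2}
```

Note that node 2 reaches node 1 more cheaply via node 3 (cost 1 + 2 = 3) than over its direct link (cost 4), which the second iteration discovers.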

Principle: For zero hops, the minimum length path has a length of infinity, for every node. For one hop, the shortest-path length associated with a node is equal to the length of the edge between that node and node 1. Hereafter, we increment the number of hops allowed (from h to h+1) and find out whether a shorter path exists through each of the other nodes. If it exists, say through node j, then its length must be the sum of the length between these two nodes (i.e. di,j) and the shortest path between j and 1 obtainable in up to h hops. If such a path doesn't exist, the path length remains the same. The algorithm is guaranteed to terminate, since there are at most N nodes, and so at most N-1 hops in any path. It has a time complexity of O(N³).

Dijkstra's Algorithm

Notation:
Di   = Length of the shortest path from node i to node 1.
di,j = Length of the link between nodes i and j.

Algorithm: Each node j is labeled with Dj, an estimate of the cost of the path from node j to node 1. Initially, let the estimates be infinity, indicating that nothing is known about the paths. We now iterate on the length of paths, each time revising our estimates to lower values as we obtain them. Actually, we divide the nodes into two groups: the first one, called set P, contains the nodes whose shortest distances have been found, and the other, Q, contains all the remaining nodes. Initially P contains only node 1. At each step, we select the node that has the minimum cost path to node 1. This node is transferred to set P. At the first step, this corresponds to shifting the node


closest to 1 into P. Its minimum cost to node 1 is now known. At the next step, select the next closest node from set Q and update the label of each remaining node using:

Dj = min [ Dj , Di + dj,i ]

Finally, after N-1 iterations, the shortest paths for all nodes are known, and the algorithm terminates.
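The update rule above can be sketched as a short Python routine. This is an illustrative toy implementation (the dictionary representation and the 3-node example network are assumptions), not an optimized version with a priority queue:

```python
INF = float('inf')

def dijkstra_to_node1(d, n):
    """Shortest path cost from every node to node 1 (Dijkstra).

    d[i][j] is the positive cost of link i-j (INF if absent, 0 for i == j).
    """
    D = {i: d[i][1] for i in range(1, n + 1)}    # initial labels: direct links
    P = {1}                                      # nodes with known shortest cost
    while len(P) < n:
        # transfer the closest node outside P into P ...
        i = min((j for j in range(1, n + 1) if j not in P), key=lambda j: D[j])
        P.add(i)
        # ... then revise every remaining label: Dj = min(Dj, Di + dj,i)
        for j in range(1, n + 1):
            if j not in P:
                D[j] = min(D[j], D[i] + d[j][i])
    return D

# Same hypothetical 3-node network: link costs 1-2: 4, 2-3: 1, 1-3: 2
d = {1: {1: 0, 2: 4, 3: 2},
     2: {1: 4, 2: 0, 3: 1},
     3: {1: 2, 2: 1, 3: 0}}
print(dijkstra_to_node1(d, 3))   # {1: 0, 2: 3, 3: 2}
```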

Principle: Let the closest node to 1 at some step be i. Then i is shifted to P. Now, for each node j, the shortest path to 1 either passes through i or it doesn't. If it does not pass through i, Dj remains the same. If it does, the revised estimate of Dj is the sum Di + di,j. So we take the minimum of these two cases and update Dj accordingly. As each node gets transferred to set P, the estimates get closer to the lowest possible values. When a node is transferred, its shortest path length is known. So finally all the nodes are in P and the Dj's represent the minimum costs. The algorithm is guaranteed to terminate in N-1 iterations, and its complexity is O(N²).

The Floyd-Warshall Algorithm

This algorithm iterates on the set of nodes that can be used as intermediate nodes on paths. This set grows from a single node (say node 1) at the start to finally all the nodes of the graph. At each iteration, we find the shortest paths using the given set of nodes as intermediate nodes, so that finally all the shortest paths are obtained.

Notation:
Di,j[n] = Length of the shortest path between nodes i and j using only the nodes 1, 2, ..., n as intermediate nodes.

Initial condition:
Di,j[0] = di,j, for all nodes i, j.

Algorithm: Initially, n = 0. At each iteration, add the next node to the intermediate set, i.e. for n = 0, 1, ..., N-1:

Di,j[n+1] = min { Di,j[n] , Di,n+1[n] + Dn+1,j[n] }
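The triple loop implied by this recurrence can be sketched as follows. This is a toy Python illustration; the dictionary representation and the 3-node example costs are assumptions:

```python
def floyd_warshall(d, n):
    """All-pairs shortest path costs; nodes numbered 1..n, d[i][j] = link cost."""
    # D[i][j] starts as the direct link cost (no intermediate nodes allowed yet)
    D = {i: dict(d[i]) for i in range(1, n + 1)}
    for k in range(1, n + 1):                 # now allow node k as an intermediate
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

# Hypothetical 3-node network: link costs 1-2: 4, 2-3: 1, 1-3: 2
d = {1: {1: 0, 2: 4, 3: 2},
     2: {1: 4, 2: 0, 3: 1},
     3: {1: 2, 2: 1, 3: 0}}
print(floyd_warshall(d, 3)[2][1])   # 3  (route 2 -> 3 -> 1 beats the direct link)
```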

Principle: Suppose the shortest path between i and j using nodes 1, 2, ..., n is known. Now, if node n+1 is allowed to be an intermediate node, then the shortest path under the new conditions either passes through node n+1 or it doesn't. If it does not pass through node n+1, then Di,j[n+1] is the same as Di,j[n]. Else, we find the cost of the new route, which is obtained from the sum Di,n+1[n] + Dn+1,j[n]. So we take the minimum of these two cases at each step. After adding all the nodes to the set of intermediate nodes, we obtain the shortest paths between all pairs of nodes together. The complexity of the Floyd-Warshall algorithm is O(N³).

It is observed that all three algorithms give comparable performance, depending upon the exact topology of the network.

7.6 ICMP

The Internet Control Message Protocol (ICMP) is one of the core protocols of the Internet protocol suite. It is chiefly used by networked computers' operating systems to send error


messages—indicating, for instance, that a requested service is not available or that a host or router could not be reached. ICMP relies on IP to perform its tasks, and it is an integral part of IP. It differs in purpose from transport protocols such as TCP and UDP in that it is typically not used to send and receive data between end systems. It is usually not used directly by user network applications, with some notable exceptions being the ping and traceroute tools.

7.6.1 Technical details

The Internet Control Message Protocol is part of the Internet protocol suite as defined in RFC 792. ICMP messages are typically generated in response to errors in IP datagrams or for diagnostic or routing purposes. The version of ICMP for Internet Protocol version 4 is also known as ICMPv4, as it is part of IPv4. IPv6 has an equivalent protocol, ICMPv6. ICMP messages are constructed at the IP layer, usually from a normal IP datagram that has generated an ICMP response. IP encapsulates the appropriate ICMP message with a new IP header (to get the ICMP message back to the original sending host) and transmits the resulting datagram in the usual manner.

7.6.2 ICMP segment structure

Header: The ICMP header starts after bit 160 of the IP header (unless IP options are used).

Bits    160-167      168-175      176-191
160     Type         Code         Checksum
192     Identifier (ID)           Sequence

- Type - the ICMP type, as specified below.
- Code - further specification of the ICMP type; e.g. an ICMP Destination Unreachable message might have this field set to 1 through 15, each value bearing a different meaning.
- Checksum - error-checking data, calculated over the ICMP header plus data, with this field taken as 0 during the calculation.
- ID - an identifier value, which should be returned in the matching ECHO REPLY.
- Sequence - a sequence value, which should be returned in the matching ECHO REPLY.
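As a hedged sketch of how these fields and the checksum fit together, the following Python fragment packs an Echo Request header and computes the standard Internet one's-complement checksum (RFC 1071 style) with the checksum field zeroed, as described above. The identifier, sequence, and payload values are illustrative:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, folded and inverted (RFC 1071)."""
    if len(data) % 2:
        data += b'\x00'                          # pad odd-length data
    total = sum(struct.unpack('!%dH' % (len(data) // 2), data))
    while total >> 16:                           # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_request(ident: int, seq: int, payload: bytes = b'') -> bytes:
    """Build an ICMP Echo Request: type=8, code=0, checksum over header+data."""
    header = struct.pack('!BBHHH', 8, 0, 0, ident, seq)   # checksum field = 0
    csum = internet_checksum(header + payload)
    return struct.pack('!BBHHH', 8, 0, csum, ident, seq) + payload

msg = icmp_echo_request(0x1234, seq=0, payload=b'ping')
print(len(msg), internet_checksum(msg))   # 12 0
```

Recomputing the checksum over the finished message yields 0, which is how a receiver verifies the message is undamaged.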

Padding data After the ICMP header follows padding data (in octets):


The Linux "ping" utility pads the ICMP message to a total size of 64 octets, i.e. 56 octets of padding data in addition to the 8-octet header. The Windows "ping.exe" pads to a total size of 40 octets, i.e. 32 octets of data in addition to the 8-octet header.

7.6.3 Internet Control Message Protocol (ICMP)

The Internet Control Message Protocol (ICMP) [RFC792] is a classic example of a client-server application. The ICMP server executes on all IP end-system computers and all IP intermediate systems (i.e. routers). The protocol is used to report problems with the delivery of IP datagrams within an IP network. It can be used to show when a particular End System (ES) is not responding, when an IP network is not reachable, when a node is overloaded, when an error occurs in the IP header information, etc. The protocol is also frequently used by Internet managers to verify correct operation of End Systems (ES) and to check that routers are correctly routing packets to the specified destination address.

ICMP messages generated by router R1, in response to a message sent by H0 to H1 and forwarded by R0. This message could, for instance, be generated if the MTU of the link between R0 and R1 was smaller than the size of the IP packet, and the packet had the Don't Fragment (DF) bit set in the IP packet header. The ICMP message is returned to H0, since this is the source address specified in the IP packet that suffered the problem.

An ICMP message consists of 4 bytes of PCI and an optional message payload. The format of an ICMP message is shown above. The 8-bit type code identifies the type of message. This is followed by the first few bytes of the packet that resulted in generation of the error message. This payload is, for instance, used by a sender that receives the ICMP message to perform Path MTU Discovery, so that it may determine the IP destination address of the packet that resulted in the error. The figure below shows the encapsulation of ICMP over an Ethernet LAN using an IP network-layer header, and a MAC link-layer header and trailer containing the 32-bit checksum:


Encapsulation of a complete ICMP packet (not showing the Ethernet preamble).

It is the responsibility of the network layer (IP) protocol to ensure that the ICMP message is sent to the correct destination. This is achieved by setting the destination address of the IP packet carrying the ICMP message. The source address is set to the address of the computer that generated the IP packet (carried in the IP source address field), and the IP protocol type is set to "ICMP" to indicate that the packet is to be handled by the remote end system's ICMP client interface. A version of ICMP has also been defined for IPv6, called ICMPv6. This subsumes all the equivalent functions of ICMP for IPv4 and adds other network-layer functions.

The Ping Application

The "ping" program contains a client interface to ICMP. It may be used by a user to verify that an end-to-end Internet path is operational. The ping program also collects performance statistics (i.e. the measured round-trip time and the number of times the remote server fails to reply). Each time an ICMP echo reply message is received, the ping program displays a single line of text. The text printed by ping shows the received sequence number and the measured round-trip time (in milliseconds). Each ICMP Echo message contains a sequence number (starting at 0) that is incremented after each transmission, and a timestamp value indicating the transmission time.

Use of the ping program to test whether a particular computer ("sysa") is operational. The operation of ICMP is illustrated in the frame transition diagram shown above. In this case there is only one Intermediate System (IS) (i.e. IP router), and two types of message are involved: the ECHO request (sent by the client) and the ECHO reply (the response by the server). Each message may contain some optional data. When data are sent to a server, the server returns the same data in the reply it generates. ICMP packets are encapsulated in IP for transmission across an internet.

The Traceroute Application

The "traceroute" program also contains a client interface to ICMP. Like the "ping" program, it may be used by a user to verify that an end-to-end Internet path is operational, but it also provides


information on each of the Intermediate Systems (i.e. IP routers) to be found along the IP path from the sender to the receiver. Traceroute uses ICMP echo messages. These are addressed to the target IP address. The sender manipulates the TTL (hop count) value at the IP layer to force each hop in turn to return an error message.

The program starts by sending an ICMP Echo request message with an IP destination address of the system to be tested and with a Time To Live (TTL) value set to 1. The first system that receives this packet decrements the TTL and discards the message, since this now has a value of zero. Before it deletes the message, the system constructs an ICMP error message (with an ICMP message type of "TTL exceeded") and returns this to the sender. Receipt of this message allows the sender to identify which system is one link away along the path to the specified destination.

The sender repeats this two more times, each time reporting the system that received the packet. If all packets travel along the same path, each ICMP error message will be received from the same system. Where two or more alternate paths are being used, the results may vary.

If the system that responded was not the intended destination, the sender repeats the process by sending a set of three identical messages, but using a TTL value that is one larger than the previous attempt. The first system forwards the packet (decrementing the TTL value in the IP header), but a subsequent system that reduces the TTL value to zero generates an ICMP error message with its own source address. In this way, the sender learns the identity of another system along the IP path to the destination. This process repeats until the sender receives a response from the intended destination (or the maximum TTL value is reached).

Some routers are configured to discard ICMP messages, while others process them but do not return ICMP error messages. Such routers hide the "topology" of the network, but can also impact the correct operation of protocols. Some routers will process ICMP messages only when doing so does not impose a significant load; such routers do not always respond to ICMP messages. When "traceroute" encounters a router that does not respond, it prints a "*" character.
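The TTL trick described above can be illustrated with a small simulation. No raw sockets are used; the router names and the modelled path are invented for demonstration:

```python
def traceroute_sim(path, destination):
    """Simulate traceroute's TTL mechanism over a modelled path.

    path: ordered list of system names, ending at the destination host.
    Returns the systems discovered, one per TTL value.
    """
    discovered = []
    for ttl in range(1, len(path) + 1):
        # an Echo Request sent with this TTL expires at hop number `ttl`,
        # so the system at that hop answers (with "TTL exceeded" or, at the
        # destination itself, with an Echo Reply)
        responder = path[ttl - 1]
        discovered.append(responder)
        if responder == destination:             # Echo Reply: path fully mapped
            break
    return discovered

hops = traceroute_sim(['R1', 'R2', 'R3', 'sysa'], 'sysa')
print(hops)   # ['R1', 'R2', 'R3', 'sysa']
```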

ICMP

This section discusses the mechanism that gateways and hosts use to communicate control or error information. The Internet protocol provides an unreliable, connectionless datagram service, in which a datagram travels from gateway to gateway until it reaches one that can deliver it directly to its final destination. If a gateway cannot route or deliver a datagram, or if the gateway detects an unusual condition, like network congestion, that affects its ability to forward the datagram, it needs to instruct the original source to take action to avoid or correct the problem. The Internet Control Message Protocol allows gateways to send error or control messages to other gateways or hosts; ICMP provides communication between the Internet Protocol software on one machine and the Internet Protocol software on another. This is a special-purpose message mechanism added by the designers to the TCP/IP protocols, to allow gateways in an internet to report errors or provide information about unexpected circumstances. The IP protocol itself contains nothing to help the sender test connectivity or learn about failures.

Error Reporting vs Error Correction

ICMP only reports error conditions to the original source; the source must relate errors to individual application programs and take action to correct the problems. ICMP provides a way for a gateway to report an error, but it does not fully specify the action to be taken for each possible error. ICMP is restricted to communicating with the original source, not with intermediate gateways.


ICMP Message Delivery

ICMP messages travel across the internet in the data portion of an IP datagram, which itself travels across each physical network in the data portion of a frame. Datagrams carrying ICMP messages are routed exactly like datagrams carrying information for users; there is no additional reliability or priority. An exception is made to the error-handling procedures: ICMP messages are not generated for errors that result from datagrams carrying ICMP error messages.

ICMP Message Format

An ICMP message has three fields: an 8-bit integer message TYPE field that identifies the message, an 8-bit CODE field that provides further information about the message type, and a 16-bit CHECKSUM field (ICMP uses the same additive checksum algorithm as IP, but the ICMP checksum only covers the ICMP message). In addition, ICMP messages that report errors always include the header and first 64 data bits of the datagram causing the problem. The ICMP TYPE field defines the meaning of the message as well as its format. The types include:

TYPE FIELD   ICMP MESSAGE TYPE
0            ECHO REPLY
3            DESTINATION UNREACHABLE
4            SOURCE QUENCH
5            REDIRECT (CHANGE A ROUTE)
8            ECHO REQUEST
11           TIME EXCEEDED FOR A DATAGRAM
12           PARAMETER PROBLEM ON A DATAGRAM
13           TIMESTAMP REQUEST
14           TIMESTAMP REPLY
15           INFORMATION REQUEST (OBSOLETE)
16           INFORMATION REPLY (OBSOLETE)
17           ADDRESS MASK REQUEST
18           ADDRESS MASK REPLY

Testing Destination Reachability and Status

TCP/IP protocols provide facilities to help network managers or users identify network problems. One of the most frequently used debugging tools invokes the ICMP echo request and echo reply messages. A host or gateway sends an ICMP echo request message to a specified destination. Any machine that receives an echo request formulates an echo reply and returns it to the original sender. The request contains an optional data area; the reply contains a copy of the data sent in the request. The echo request and associated reply can be used to test whether a destination is reachable and responding.

Echo Request and Reply

The field listed OPTIONAL DATA is a variable-length field that contains data to be returned to the sender. An echo reply always returns exactly the same data as was received in the request. The IDENTIFIER and SEQUENCE NUMBER fields are used by the sender to match replies to requests. The value of the TYPE field specifies whether the message is a request (8) or a reply (0).
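The matching of replies to requests via the IDENTIFIER and SEQUENCE NUMBER fields can be sketched as follows. This is a hypothetical bookkeeping class, not part of any real ping implementation; it only illustrates the matching and round-trip-time idea:

```python
import time

class PingSession:
    """Track outstanding Echo Requests and match replies by (identifier, sequence)."""

    def __init__(self, ident):
        self.ident = ident
        self.pending = {}                        # (ident, seq) -> send timestamp

    def send(self, seq):
        """Record the transmission time of an Echo Request."""
        self.pending[(self.ident, seq)] = time.monotonic()

    def receive(self, ident, seq):
        """Return the round-trip time in seconds, or None for an unmatched reply."""
        sent = self.pending.pop((ident, seq), None)
        return None if sent is None else time.monotonic() - sent

session = PingSession(ident=42)
session.send(seq=0)
rtt = session.receive(42, 0)       # matched reply: a small non-negative RTT
stray = session.receive(42, 1)     # unmatched (never sent): None
```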


Reports of Unreachable Destinations

The CODE field in a destination unreachable message contains an integer that further describes the problem. Possible values are:

CODE VALUE   MEANING
0            NETWORK UNREACHABLE
1            HOST UNREACHABLE
2            PROTOCOL UNREACHABLE
3            PORT UNREACHABLE
4            FRAGMENTATION NEEDED AND DF SET
5            SOURCE ROUTE FAILED
6            DESTINATION NETWORK UNKNOWN
7            DESTINATION HOST UNKNOWN
8            SOURCE HOST ISOLATED
9            COMMUNICATION WITH DESTINATION NETWORK ADMINISTRATIVELY PROHIBITED
10           COMMUNICATION WITH DESTINATION HOST ADMINISTRATIVELY PROHIBITED
11           NETWORK UNREACHABLE FOR TYPE OF SERVICE
12           HOST UNREACHABLE FOR TYPE OF SERVICE

Whenever an error prevents a gateway from routing or delivering a datagram, the gateway sends a destination unreachable message back to the source and then drops the datagram. Network unreachable errors usually imply routing failures; host unreachable errors imply delivery failures. Because the message contains a short prefix of the datagram that caused the problem, the source will know exactly which address is unreachable. Destinations may be unreachable because hardware is temporarily out of service, because the sender specified a nonexistent destination address, or because the gateway does not have a route to the destination network. Although gateways send destination unreachable messages when they cannot route or deliver datagrams, not all such errors can be detected. If the datagram contains a source route option with an incorrect route, it may trigger a source route failure message. If a gateway needs to fragment a datagram but the "don't fragment" bit is set, the gateway sends a fragmentation needed message back to the source.

7.6.4 Congestion and Datagram Flow Control

Gateways cannot reserve memory or communication resources in advance of receiving datagrams, because IP is connectionless. As a result, gateways can be overrun with traffic, a condition known as congestion. Congestion arises for two reasons:

1. A high-speed computer may be able to generate traffic faster than a network can transfer it.
2. If many computers simultaneously need to send datagrams through a single gateway, the gateway can experience congestion, even though no single source causes the problem.

When datagrams arrive too quickly for a host or a gateway to process, it enqueues them in memory temporarily. If the traffic continues, the host or gateway eventually exhausts memory and must discard additional datagrams that arrive. A machine uses ICMP source quench messages to relieve congestion. A source quench message is a request for the source to reduce its current rate of datagram transmission. There is no ICMP message to reverse the effect of a source quench.


Source Quench

Source quench messages have a field that contains a datagram prefix, in addition to the usual ICMP TYPE, CODE, and CHECKSUM fields. Congested gateways send one source quench message each time they discard a datagram; the datagram prefix identifies the datagram that was dropped.

Route Change Requests from Gateways

Internet routing tables are initialized by hosts from a configuration file at system startup, and system administrators seldom make routing changes during normal operations. Gateways exchange routing information periodically to accommodate network changes and keep their routes up to date. The general rule is: gateways are assumed to know correct routes; hosts begin with minimal routing information and learn new routes from gateways. The GATEWAY INTERNET ADDRESS field contains the address of a gateway that the host is to use to reach the destination mentioned in the datagram header. The INTERNET HEADER field contains the IP header plus the next 64 bits of the datagram that triggered the message. The CODE field of an ICMP redirect message further specifies how to interpret the destination address, based on values assigned as follows:

CODE VALUE   MEANING
0            REDIRECT DATAGRAMS FOR THE NET
1            REDIRECT DATAGRAMS FOR THE HOST
2            REDIRECT DATAGRAMS FOR THE TYPE OF SERVICE AND NET
3            REDIRECT DATAGRAMS FOR THE TYPE OF SERVICE AND HOST

Gateways only send ICMP redirect requests to hosts, not to other gateways.

Detecting Circular or Excessively Long Routes

Because internet gateways compute a next hop using local tables, errors in routing tables can produce a routing cycle for some destination. A routing cycle can consist of two gateways that each route a datagram for a particular destination to the other, or it can consist of several gateways. To prevent datagrams from circling forever in a TCP/IP internet, each IP datagram contains a time-to-live counter, sometimes called a hop count.
A gateway decrements the time-to-live counter whenever it processes the datagram, and discards the datagram when the count reaches zero. Whenever a gateway discards a datagram because its hop count has reached zero, or because its reassembly timer expired while waiting for fragments of a datagram, it sends an ICMP time exceeded message back to the datagram's source. The CODE field explains the nature of the timeout:

CODE VALUE   MEANING
0            TIME-TO-LIVE COUNT EXCEEDED
1            FRAGMENT REASSEMBLY TIME EXCEEDED

Fragment reassembly refers to the task of collecting all the fragments of a datagram.

Reporting Other Problems

When a gateway or host finds a problem with a datagram not covered by the previous ICMP error messages, it sends a parameter problem message to the original source. To make the message unambiguous, the sender uses the POINTER field in the message header to identify the octet in the datagram that caused the problem. Code 1 is used to report that a required option is missing; the POINTER field is not used for code 1.


Obtaining a Subnet Mask

Subnet addressing allows hosts to use some bits of the host portion of their IP address to identify a physical network. To participate in subnet addressing, hosts need to know which bits of the 32-bit internet address correspond to the physical network and which correspond to the host identifier. The information needed to interpret the address is represented in a 32-bit quantity called the subnet mask. To learn the subnet mask used for the local network, a machine can send an address mask request message to a gateway and receive an address mask reply. The TYPE field in an address mask message specifies whether the message is a request (17) or a reply (18). A reply contains the network's subnet address mask in the ADDRESS MASK field. The IDENTIFIER and SEQUENCE NUMBER fields allow a machine to associate replies with requests.

Check Your Progress

2. What is the use of the ICMP protocol?

7.7 Summary

Packets are handled by the intermediate nodes in a store-and-forward fashion. Packet switching is either based on virtual circuits or on datagrams. Next we discussed the types of transmission media. At the end of this lesson we described the problem of interconnecting two networks, and discussed the protocol sub-layering provided for this purpose. In the same way, we looked at four widely accepted standards for networking as well as internetworking.

7.8 Keywords

Flooding: In flooding, every possible path between the source and the destination station is exercised. Each node, upon receiving a packet, forwards copies of it to all its neighboring nodes.
Routing Metric: A routing metric is a value used by a routing algorithm to determine whether one route should perform better than another.
Subnet: Each of the participating networks is referred to as a subnetwork (or subnet).
IWU: The role of the Interworking Units (IWU) is to carry out protocol conversion between the subnets.

7.10 Check Your Progress

1. Define internetworking.
…………………………………………………………………………………………………………………
2. What is the use of the ICMP protocol?
…………………………………………………………………………………………………………………

Answers to Check Your Progress

1. The problem of interconnecting a set of independent networks is called internetworking.


2. The Internet Control Message Protocol (ICMP) is one of the core protocols of the Internet protocol suite. It is chiefly used by networked computers' operating systems to send error messages—indicating, for instance, that a requested service is not available or that a host or router could not be reached.

7.11 Further Reading

1. Leon-Garcia and Widjaja, "Communication Networks – Fundamental Concepts and Key Architectures".
2. Larry L. Peterson & Bruce S. Davie, "Computer Networks – A Systems Approach", 2nd Edition, Harcourt Asia / Morgan Kaufmann, 2000.


UNIT-8 TRANSPORT LAYER

Structure
8.0 Introduction
8.1 Objectives
8.2 Definition
8.3 Transport layer
8.3.1 User Datagram Protocol
8.3.2 Transmission Control Protocol
8.4 Reliable delivery service
8.4.1 Network Types
8.4.2 Transport Protocol
8.4.3 Classes of Protocol
8.4.4 Multiplexing
8.4.5 Splitting and Recombining
8.4.6 Addressing
8.4.7 Flow Control
8.4.8 Error Checking
8.5 Congestion Control
8.5.1 Theory of Congestion Control
8.5.2 Classification of congestion control algorithms
8.5.3 Bandwidth Management
8.5.4 Fixing the problem
8.5.5 TCP congestion avoidance algorithm
8.6 Connection Establishment
8.6.1 Connection Establishment
8.6.2 Salient Features of TCP
8.6.3 State Diagram
8.6.4 Other implementation details
8.7 Summary
8.8 Keywords
8.9 Exercise and Questions
8.10 Check Your Progress
8.11 Further Reading




8.0 Introduction

This lesson describes the transport layer of the OSI model. The transport layer is concerned with the provision of host-to-host user connections for the reliable and cost-effective transfer of user data. Although it may use a variety of networks and network protocols to realize its service, it hides such details from its users by offering a transparent transport service. The transport layer is similar to the network layer in that both attempt to offer a reliable data transfer service. Their main difference lies in that the transport layer looks at the problem from an end user's point of view (denoted by a process running on a host), while the network layer's perspective encompasses lower-level concerns (such as routing, connection modes, and internetworking). Also, the transport layer operates only on the end hosts, while the network layer also operates on intermediate network nodes.

8.1 Objectives

After studying this lesson you should be able to:

- Understand the introduction to the transport layer
- Explain the reliable delivery service
- Describe congestion control in the transport layer
- Discuss in detail connection establishment in the TCP protocol

8.2 Definition

The transport layer is the lowest user layer in the OSI model. Its scope of responsibility is a function of the quality of service required of it by the user and the quality of service provided by the network layer. The transport layer simply has to bridge this gap, which may be wide in some cases and negligible or nil in others.

8.3 Transport Layer

The protocol layer just above the Internet Layer is the Host-to-Host Transport Layer, usually shortened to Transport Layer. The two most important protocols in the Transport Layer are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). TCP provides a reliable data delivery service with end-to-end error detection and correction. UDP provides a low-overhead, connectionless datagram delivery service. Both protocols deliver data between the Application Layer and the Internet Layer. Application programmers can choose whichever service is more appropriate for their specific applications.

8.3.1 User Datagram Protocol

The User Datagram Protocol gives application programs direct access to a datagram delivery service, like the delivery service that IP provides. This allows applications to exchange messages over the network with a minimum of protocol overhead. UDP is an unreliable, connectionless datagram protocol. As noted previously, "unreliable" merely means that there are no techniques in the protocol for verifying that the data reached the other end of the network correctly. Within your computer, UDP will deliver data correctly. UDP uses 16-bit Source Port and Destination Port numbers in word 1 of the message header to deliver data to the correct application process.


UDP message format
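The 8-octet UDP header (source port, destination port, length, checksum) can be sketched with Python's struct module. The port numbers and payload here are illustrative; the checksum is left as 0, which in UDP over IPv4 means no checksum was computed:

```python
import struct

def udp_datagram(src_port, dst_port, payload: bytes) -> bytes:
    """Pack the 8-octet UDP header followed by the payload.

    The Length field covers header plus data, in octets.
    """
    length = 8 + len(payload)
    return struct.pack('!HHHH', src_port, dst_port, length, 0) + payload

msg = udp_datagram(5000, 53, b'query')
src, dst, length, csum = struct.unpack('!HHHH', msg[:8])
print(src, dst, length, csum)   # 5000 53 13 0
```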

Why do application programmers choose UDP as a data transport service? There are a number of good reasons. If the amount of data being transmitted is small, the overhead of creating connections and ensuring reliable delivery may be greater than the work of re-transmitting the entire data set. In this case, UDP is the most efficient choice for a Transport Layer protocol. Applications that fit a query-response model are also excellent candidates for using UDP: the response can be used as a positive acknowledgment to the query, and if a response isn't received within a certain time period, the application just sends another query. Still other applications provide their own techniques for reliable data delivery and don't require that service from the transport-layer protocol. Imposing another layer of acknowledgment on any of these types of applications is inefficient.

8.3.2 Transmission Control Protocol

Applications that require the transport protocol to provide reliable data delivery use TCP, because it verifies that data is delivered across the network accurately and in the proper sequence. TCP is a reliable, connection-oriented, byte-stream protocol. Let's look at each of these terms - reliable, connection-oriented, and byte-stream - in more detail.

TCP provides reliability with a mechanism called Positive Acknowledgment with Re-transmission (PAR). Simply stated, a system using PAR sends the data again unless it hears from the remote system that the data arrived okay. The unit of data exchanged between cooperating TCP modules is called a segment. Each segment contains a checksum that the recipient uses to verify that the data is undamaged. If the data segment is received undamaged, the receiver sends a positive acknowledgment back to the sender. If the data segment is damaged, the receiver discards it. After an appropriate time-out period, the sending TCP module re-transmits any segment for which no positive acknowledgment has been received.

TCP segment format
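The PAR idea can be illustrated with a small stop-and-wait simulation. This is purely a sketch: the lossy network is modelled as a seeded coin flip, whereas real TCP uses checksums and a retransmission timer:

```python
import random

def send_with_par(segments, loss_rate=0.3, rng=None):
    """Stop-and-wait PAR: retransmit each segment until an ACK arrives."""
    rng = rng or random.Random(1)               # seeded for repeatability
    delivered, attempts = [], 0
    for seg in segments:
        while True:
            attempts += 1
            if rng.random() > loss_rate:        # segment arrived; ACK received
                delivered.append(seg)
                break
            # time-out expired with no ACK heard: send the segment again
    return delivered, attempts

data, tries = send_with_par([b'seg0', b'seg1', b'seg2'])
# every segment is eventually delivered, at the cost of extra transmissions
```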


TCP is connection-oriented. It establishes a logical end-to-end connection between the two communicating hosts. Control information, called a handshake, is exchanged between the two endpoints to establish a dialogue before data is transmitted. TCP indicates the control function of a segment by setting the appropriate bit in the Flags field in word 4 of the segment header.

The type of handshake used by TCP is called a three-way handshake because three segments are exchanged. Figure 1.10 shows the simplest form of the three-way handshake. Host A begins the connection by sending host B a segment with the "Synchronize sequence numbers" (SYN) bit set. This segment tells host B that A wishes to set up a connection, and it tells B what sequence number host A will use as a starting number for its segments. (Sequence numbers are used to keep data in the proper order.) Host B responds to A with a segment that has the "Acknowledgment" (ACK) and SYN bits set. B's segment acknowledges the receipt of A's segment and informs A which sequence number host B will start with. Finally, host A sends a segment that acknowledges receipt of B's segment, and transfers the first actual data.

Three-way handshake
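The three segments of the handshake can be modelled as follows. This is an illustrative simulation with dictionaries standing in for segments, not a real socket exchange; the field names are chosen for the example.

```python
import random

def three_way_handshake():
    """Illustrative SYN / SYN+ACK / ACK exchange (toy model, not real sockets)."""
    isn_a = random.randrange(2**32)            # host A's initial sequence number
    isn_b = random.randrange(2**32)            # host B's initial sequence number

    # 1. A -> B: SYN, seq = ISN_A
    syn = {"flags": {"SYN"}, "seq": isn_a}

    # 2. B -> A: SYN+ACK, seq = ISN_B, ack = ISN_A + 1
    syn_ack = {"flags": {"SYN", "ACK"}, "seq": isn_b, "ack": syn["seq"] + 1}

    # 3. A -> B: ACK, seq = ISN_A + 1, ack = ISN_B + 1 (may carry the first data)
    ack = {"flags": {"ACK"}, "seq": isn_a + 1, "ack": syn_ack["seq"] + 1}
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
assert syn_ack["ack"] == syn["seq"] + 1       # B acknowledged A's SYN
assert ack["ack"] == syn_ack["seq"] + 1       # A acknowledged B's SYN
print("connection established")
```

The two assertions capture the point of the exchange: each side learns, and acknowledges, the other side's starting sequence number.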

After this exchange, host A's TCP has positive evidence that the remote TCP is alive and ready to receive data. As soon as the connection is established, data can be transferred. When the cooperating modules have concluded the data transfers, they exchange a three-way handshake with segments containing the "No more data from sender" bit (called the FIN bit) to close the connection. It is the end-to-end exchange of data that provides the logical connection between the two systems.

TCP views the data it sends as a continuous stream of bytes, not as independent packets. Therefore, TCP takes care to maintain the sequence in which bytes are sent and received. The Sequence Number and Acknowledgment Number fields in the TCP segment header keep track of the bytes. The TCP standard does not require that each system start numbering bytes with any specific number; each system chooses the number it will use as a starting point. To keep track of the data stream correctly, each end of the connection must know the other end's initial number. The two ends of the connection synchronize byte-numbering systems by exchanging SYN segments during the handshake. The Sequence Number field in the SYN segment contains the Initial Sequence Number (ISN), which is the starting point for the byte-numbering system. For security reasons the ISN should be a random number, though it is often 0.

Each byte of data is numbered sequentially from the ISN, so the first real byte of data sent has a sequence number of ISN+1. The Sequence Number in the header of a data segment identifies the sequential position in the data stream of the first data byte in the segment. For example, if the first byte in the data stream was sequence number 1 (ISN=0) and 4000 bytes of data have already been transferred, then the first byte of data in the current segment is byte 4001, and the Sequence Number would be 4001.

The Acknowledgment Segment (ACK) performs two functions: positive acknowledgment and flow control. The acknowledgment tells the sender how much data has been received, and how much more the receiver can accept. The Acknowledgment Number is the sequence number of the next byte the receiver expects to receive. The standard does not require an individual acknowledgment for every packet; the acknowledgment number is a positive acknowledgment of all bytes up to that number. For example, if the first byte sent was numbered 1 and 2000 bytes have been successfully received, the Acknowledgment Number would be 2001.

The Window field contains the window, or the number of bytes the remote end is able to accept. If the receiver is capable of accepting 6000 more bytes, the window would be 6000. The window indicates to the sender that it can continue sending segments as long as the total number of bytes that it sends is smaller than the window of bytes that the receiver can accept. The receiver controls the flow of bytes from the sender by changing the size of the window. A zero window tells the sender to cease transmission until it receives a non-zero window value.

Suppose, for example, that the receiving system has received and acknowledged 2000 bytes, so the current Acknowledgment Number is 2001. The receiver also has enough buffer space for another 6000 bytes, so it has advertised a window of 6000. The sender is currently sending a segment of 1000 bytes starting with Sequence Number 4001. The sender has received no acknowledgment for the bytes from 2001 on, but continues sending data as long as it is within the window. If the sender fills the window and receives no acknowledgment of the data previously sent, it will, after an appropriate time-out, send the data again starting from the first unacknowledged byte.
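The window arithmetic in the scenario just described can be checked mechanically. The helper below is a sketch (its name and exact rule are invented for illustration): a segment may be sent only if its last byte falls inside the advertised window.

```python
def can_send(next_seq: int, length: int, ack_no: int, window: int) -> bool:
    """A sender may transmit while all bytes stay inside the advertised window.

    Bytes ack_no .. ack_no + window - 1 are acceptable to the receiver.
    """
    return next_seq + length - 1 < ack_no + window

# The numbers from the text: 2000 bytes acknowledged (ack = 2001),
# advertised window 6000, sender about to send bytes 4001..5000.
assert can_send(4001, 1000, ack_no=2001, window=6000) is True

# A segment that would run past byte 8000 must wait for a new acknowledgment.
assert can_send(7500, 1000, ack_no=2001, window=6000) is False

# A zero window stops transmission entirely.
assert can_send(2001, 1, ack_no=2001, window=0) is False
print("window checks passed")
```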
TCP is also responsible for delivering data received from IP to the correct application. The application that the data is bound for is identified by a 16-bit number called the port number. The Source Port and Destination Port are contained in the first word of the segment header. Correctly passing data to and from the Application Layer is an important part of what the Transport Layer services do.

TCP data stream
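Delivering data to the correct application by destination port is a simple table lookup, sketched below. The dictionary-as-socket-table model and the behaviour on an unbound port are simplifications for illustration (a real TCP answers an unbound port with an RST segment).

```python
def demultiplex(segment: dict, sockets: dict):
    """Deliver a segment's payload to the application bound to its destination port."""
    port = segment["dst_port"]
    app = sockets.get(port)
    if app is None:
        return "RST"            # no listener on this port: answer with a reset
    app.append(segment["data"])
    return "delivered"

inbox_http, inbox_smtp = [], []
sockets = {80: inbox_http, 25: inbox_smtp}   # port -> application receive buffer

demultiplex({"dst_port": 80, "data": b"GET /"}, sockets)
demultiplex({"dst_port": 25, "data": b"HELO"}, sockets)
print(demultiplex({"dst_port": 9999, "data": b"?"}, sockets))  # prints RST
assert inbox_http == [b"GET /"] and inbox_smtp == [b"HELO"]
```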

8.4 Reliable Delivery Service As with the network layer, the transport services are defined in terms of transport primitives. The following Figure summarizes the primitives together with their possible types and parameters.


Transport service primitives.

Addresses refer to Transport Service Access Points (TSAPs); these denote entities at the transport layer to which connections are made. Quality Of Service (QOS) denotes a set of parameters (such as connection/release/transfer delay, connection/release/transfer probability of failure, throughput, and error rate) which collectively describe the quality of the transport service requested by the user. User data refers to actual user data provided by the service user for transfer by the service provider. Option refers to the expedited data function being available in the service.

Sample scenario of transport services.

Transport service user A first requests a connection, which is indicated to transport service user B by the service provider. B responds to the request and the service provider confirms with A. Then B expedites some data, which is indicated to A by the service provider. Two normal data transfers, from A to B and from B to A, then follow. Finally, the service provider simultaneously sends disconnect requests to both A and B and terminates the connection.

8.4.1 Network Types
When a transport service user requests a connection, the request includes a QOS parameter. This parameter indicates the user's expected quality of service. The transport layer may have a number of different network services at its disposal, each offering a different quality of service. It needs to match the requested QOS against a network QOS and possibly also perform additional work to fulfill user needs. For this reason, networks, depending on their QOS, are divided into three broad categories. Data errors refer to discrepancies in the bit stream due to transmission errors that have not been detected by the lower-level error checking procedures (e.g., CRC checks). Signaled errors refer to failures in the switching mechanisms, resulting, for example, in lost packets.

Network types.

It should be clear from this table that much of the transport layer's complexity is due to it having to cope with type B and C networks. Compensating for their unacceptable error rates may be a formidable task. However, when the transport layer is unable to bridge the gap between the QOS of these networks and the requested QOS, it may simply refuse to establish the connection.

8.4.2 Transport Protocol
The transport protocol is essentially connection oriented, regardless of whether the underlying network layer uses virtual circuits or datagrams. After establishing a connection (always full-duplex), the two user processes may exchange data in the normal or expedited fashion. The sequence of exchanged messages is always maintained, except for expedited data, which do not obey the normal flow control. For messages larger than can be handled by the network layer, the transport layer performs the necessary segmentation and re-assembly.

TPDUs
Transport layer messages are exchanged by the network layer using Transport Protocol Data Units (TPDUs). In a manner similar to the network sublayers 2 and 3, a user PDU may be represented by a single TPDU, or segmented into multiple TPDUs when it is too large. The general structure of a TPDU is shown in the following Figure.

General TPDU structure.
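The general TPDU structure (a length indicator, a type code, a variable-length header, then user data) can be sketched as a pack/unpack round trip. The field layout below is deliberately simplified for illustration; real OSI transport TPDUs have more elaborate headers.

```python
import struct

def build_tpdu(tpdu_type: int, header: bytes, data: bytes) -> bytes:
    """Pack a simplified TPDU: 1-byte length indicator, 1-byte type code,
    the variable-length header, then the user data."""
    li = 1 + len(header)                  # length of type byte + variable part
    return struct.pack("!BB", li, tpdu_type) + header + data

def parse_tpdu(raw: bytes):
    """Unpack a TPDU built by build_tpdu back into its three parts."""
    li, tpdu_type = struct.unpack("!BB", raw[:2])
    header = raw[2:1 + li]                # variable part runs up to the LI boundary
    data = raw[1 + li:]                   # everything after the header is user data
    return tpdu_type, header, data

tpdu = build_tpdu(0xF0, header=b"\x01\x02", data=b"payload")
t, h, d = parse_tpdu(tpdu)
assert (t, h, d) == (0xF0, b"\x01\x02", b"payload")
print("TPDU round-trip ok")
```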

A TPDU consists of a variable length header and user data. The contents of the header are dependent on the TPDU Type.

8.4.3 Classes of Protocol
To facilitate the mapping of the user-requested QOS to an appropriate network QOS, five classes (0-4) of transport protocols are defined. The protocol class is selected during the establishment of a transport connection according to the requested QOS. The choice is made transparently by the transport layer without the user's knowledge or involvement.

Class 0 is the most basic transport service and is suitable for type A networks. Although Class 0 is capable of detecting data errors (e.g., corrupted, lost, duplicated data) and signaled errors (e.g., failure in switching nodes), it cannot recover from them. These errors therefore result in the connection being terminated. This is the simplest form of transport service and must be supported by all transport layer implementations. (Supporting other classes is optional.) Class 1 protocol supports recovery from signaled errors. It also supports segmentation, expedited data transfer, and acknowledgment using sequence numbers. Class 2 is like class 0, except that it also supports multiplexing and flow control. Class 3 is like class 2, except that it also supports recovery from signaled errors. Finally, Class 4 provides the widest set of features, making it suitable for type C networks. It is the only class that supports the resequencing of TPDUs, and the splitting of the transport connection into multiple network connections to improve throughput. The following Figure summarizes the five protocol classes and some of their main features, together with the network type suitable for supporting each class.

Transport protocol classes.

MUX = Multiplexing of multiple transport connections onto a single network connection
DER = Data Error Recovery
SER = Signaled Error Recovery
FC = Flow Control
ACK = Acknowledgments
SPL = Resequencing of TPDUs / Splitting and Recombining connections
EXP = Expedited data transfer

Segmentation
The Transport Service Data Units (TSDUs) which the transport service users work with may be larger than can be handled by the network layer packets. For this reason, all protocol classes support the segmenting of TSDUs into multiple TPDUs and reassembling them at the receiver end. This facility is very similar to the segmentation facility supported by the internetworking sublayers.

8.4.4 Multiplexing
Given the limited number of ports available on a host and the potentially large number of transport connections usually required by the users, multiple transport connections are often multiplexed onto a single network connection. The price to be paid for this is additional complexity: transport connections need to carry identifiers for distinguishing between connections over the same circuit, and each connection needs to be separately flow controlled. Multiplexing is supported by Class 2, 3, and 4 protocols.
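Segmentation and reassembly of TSDUs can be sketched in a few lines. The end-of-TSDU flag used below is a stand-in for the real protocol's end-of-TSDU marking; the function names are invented for the example.

```python
def segment_tsdu(tsdu: bytes, max_size: int):
    """Split a TSDU into TPDU payloads; the last one carries an end-of-TSDU flag."""
    pieces = [tsdu[i:i + max_size] for i in range(0, len(tsdu), max_size)] or [b""]
    return [(i == len(pieces) - 1, p) for i, p in enumerate(pieces)]

def reassemble(tpdus):
    """Concatenate payloads in order until the end-of-TSDU flag is seen."""
    out = bytearray()
    for end_of_tsdu, payload in tpdus:
        out += payload
        if end_of_tsdu:
            break
    return bytes(out)

message = b"A" * 2500
tpdus = segment_tsdu(message, max_size=1024)   # 3 TPDUs: 1024 + 1024 + 452 bytes
assert reassemble(tpdus) == message
print(len(tpdus), "TPDUs")
```

Note that correct reassembly here relies on in-order delivery; if TPDUs can arrive out of order (as with splitting, below), each piece would also need a sequence number.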


8.4.5 Splitting and Recombining
In situations where a transport connection needs a bandwidth that cannot be provided by a single network connection, the connection may be split over multiple network connections. This is the opposite of multiplexing and introduces an additional complexity: with multiple network connections, the TPDUs may arrive out of order and need to be resequenced. Thus splitting can only be supported when the transport protocol can support resequencing of TPDUs. Class 4 is the only protocol that can provide this function.

8.4.6 Addressing
A transport layer source or destination address uniquely identifies a TSAP within a network-wide name space. The TSAP denotes a port on a host to which transport connections are made. Transport addresses are not directly used by TPDUs; references are used instead. These are much shorter identifiers that are mapped to the transport addresses (in the variable part of the TPDU header) during connection establishment, allowing the two parties to agree on a common mapping. The source and destination references are used for the remainder of the connection.

Also, for user convenience, a higher level mapping is often provided by a name server, which maps meaningful service names for frequently-used transport addresses to the addresses themselves. When wishing to connect to one of these services, the user specifies the service name and the network software looks up the name using the name server to come up with the corresponding transport address, which is then used for initiating a transport connection.

8.4.7 Flow Control
As mentioned earlier, supporting multiplexing in Class 2, 3, and 4 protocols requires transport connections to be separately flow controlled. The flow control protocol used for this purpose is called credit. It is similar to the sliding window protocol of the data link and network layers in that a window is used.
However, unlike the sliding window protocol, where the receiver window size remains fixed, here the window size may vary. The sender is informed of the new window size through a data acknowledgment TPDU which includes a new credit value (the Credit field in the TPDU) denoting the number of TPDUs that can be received.

8.4.8 Error Checking
The transport layer may do additional error checking on top of the checks carried out by lower layers. This may be needed for one of two reasons: the lower layers might not be performing any error checking, or the network service may be very unreliable and deliver packets that have undetected errors. To address these needs (especially for the Class 4 protocol) a checksum option is available in the TPDUs. This is a 16-bit checksum that covers the entire TPDU and is designed to be much simpler than CRC so that it can be efficiently computed in software.

Check Your Progress
1. Write short notes on TCP.
2. What is called Address and Quality of Service (QOS)?

8.5 Congestion Control
Congestion control concerns controlling traffic entry into a telecommunications network, so as to avoid congestive collapse. It attempts to avoid oversubscription of any of the processing or link capabilities of the intermediate nodes and networks, taking resource-reducing steps such as reducing the rate of sending packets. It should not be confused with flow control, which prevents the sender from overwhelming the receiver.


8.5.1 Theory of congestion control
The modern theory of congestion control was pioneered by Frank Kelly, who applied microeconomic theory and convex optimization theory to describe how individuals controlling their own rates can interact to achieve an "optimal" network-wide rate allocation. Examples of "optimal" rate allocation are max-min fair allocation and Kelly's suggestion of proportional fair allocation, although many others are possible.

The mathematical expression for optimal rate allocation is as follows. Let x_i be the rate of flow i, C_l be the capacity of link l, and r_li be 1 if flow i uses link l and 0 otherwise. Let x, c and R be the corresponding vectors and matrix. Let U(x) be an increasing, strictly concave function, called the utility, which measures how much benefit a user obtains by transmitting at rate x. The optimal rate allocation then satisfies

    max over x of the sum over i of U(x_i), such that Rx <= c

The Lagrange dual of this problem decouples, so that each flow sets its own rate, based only on a "price" signaled by the network. Each link capacity imposes a constraint, which gives rise to a Lagrange multiplier, p_l. The sum of these Lagrange multipliers over the links used by flow i, y_i = sum over l of p_l r_li, is the price to which the flow responds. Congestion control then becomes a distributed optimization algorithm for solving the above problem. Many current congestion control algorithms can be modeled in this framework, with p_l being either the loss probability or the queuing delay at link l. A major weakness of this model is that it assumes all flows observe the same price, while sliding window flow control causes "burstiness", which causes different flows to observe different loss or delay at a given link.

8.5.2 Classification of congestion control algorithms
There are many ways to classify congestion control algorithms:

- By the type and amount of feedback received from the network: loss; delay; single-bit or multi-bit explicit signals.
- By incremental deployability on the current Internet: only sender needs modification; sender and receiver need modification; only router needs modification; sender, receiver and routers need modification.
- By the aspect of performance it aims to improve: high bandwidth-delay product networks; lossy links; fairness; advantage to short flows; variable-rate links.
- By the fairness criterion it uses: max-min, proportional, "minimum potential delay".
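Returning to the optimization framework of Section 8.5.1, the price mechanism can be illustrated numerically. The sketch below assumes logarithmic utilities U(x_i) = log x_i (proportional fairness), so each flow's best response to price y_i is x_i = 1/y_i; the three-flow, two-link topology, step size, and iteration count are invented for the example.

```python
# Proportional-fair allocation via dual (price) iteration.
# Flow 0 crosses both links; flows 1 and 2 use one link each.
R = [[1, 1, 0],      # link 0 is used by flows 0 and 1
     [1, 0, 1]]      # link 1 is used by flows 0 and 2
C = [1.0, 1.0]       # link capacities

p = [1.0, 1.0]       # link prices (the Lagrange multipliers p_l)
step = 0.01
for _ in range(20000):
    # Each flow sees the sum of prices along its route ...
    y = [sum(p[l] * R[l][i] for l in range(2)) for i in range(3)]
    # ... and picks the rate where U'(x) = 1/x equals that price.
    x = [1.0 / yi for yi in y]
    # Each link raises its price when overloaded, lowers it when idle.
    for l in range(2):
        load = sum(R[l][i] * x[i] for i in range(3))
        p[l] = max(1e-6, p[l] + step * (load - C[l]))

print([round(v, 2) for v in x])   # the two-link flow gets 1/3, the others 2/3
```

The converged allocation (1/3, 2/3, 2/3) is the classic proportional-fair outcome: the flow that consumes capacity on two links pays twice the price and therefore receives a lower rate.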


8.5.3 Bandwidth management
In computer networking, bandwidth management is the process of measuring and controlling the communications (traffic, packets) on a network link, to avoid filling the link to capacity or overfilling the link, which would result in network congestion and poor performance.

Overview
Almost everyone who has an Internet connection has at some time downloaded a large file, or run a peer-to-peer file sharing program, and noticed that Web pages start to load very slowly, or fail to load. The reason is, of course, that the channel capacity (or bandwidth) of their Internet connection is limited, like the size of a highway, and when one tries to send too much information down it, more than its capacity, a virtual traffic jam results. This is also known as network congestion. This analogy is important to understand the terms used: channel capacity is the width of the road, and traffic is the amount of data trying to use it. Controlling or managing traffic reduces capacity use, and is often described as bandwidth management, also known as bandwidth control, traffic control, congestion control, traffic shaping or traffic management.

Finding the culprit
The user of a single computer on a dedicated connection will probably know what application has caused a problem or, barring spyware that hides itself deep within a system, figure it out pretty quickly. This task is much harder for a network administrator who often does not know what applications others are running or how the applications use the network. More sophisticated bandwidth management techniques use a macro approach that manages traffic "per-user" rather than "per-application". This frees the network provider from having to constantly identify what clients/customers are doing, and avoids some of the legal concerns and public outcry about providers dictating what customers can do. This approach acknowledges that on ISP-type networks, "fairness" is a per-client issue.
By managing per-client, no single user can use more bandwidth than their allocation, no matter what application they may be running or how many users are on their endpoint. Typically a single user will not need bandwidth management. The real problem arises when multiple users and applications are downloading simultaneously. Because TCP windows are large, these applications all throw a large amount of data into the same queue at your upstream provider. While the traffic arrives at this queue randomly, it is processed sequentially, resulting in choppy download speeds. The more applications that are downloading simultaneously, the larger the backlog.

When the backlog grows too high, packets must be dropped to avoid having TCP retransmissions overflow the queue and waste bandwidth with duplicate traffic. Avoiding dropped packets is the most critical function of bandwidth management. You can reduce this backlog using window shaping technology, which reduces the amount of traffic that each flow can transmit, thus reducing the queue depths and the necessity to drop packets.

Troubleshooting network performance is a critical task for network administrators. An individual downloading large files on a dedicated network connection can happily consume as much bandwidth as the network can provide. On a shared network, if one user monopolizes the network, others will complain about any number of things related to the network responding slowly or timing out completely.


8.5.4 Fixing the problem
To keep your Internet connection working fast and smoothly, you must control your use of bandwidth, to stay below the maximum capacity of the network link. To control something, you must be able to measure it. These tasks are usually viewed separately: much software exists for network traffic measurement and network traffic control, but these are normally not integrated. And indeed it may not be necessary to integrate them. Once the cause of the heavy traffic is identified, it is usually simpler, and may be more effective, to shut it down or reschedule it than to try to manage its bandwidth use.

Many aspects of the Internet protocol suite prevent communications links from reaching their maximum capacity in practice. Therefore, it is necessary to keep the link utilisation below the maximum theoretical capacity of the link, in order to ensure fast responsiveness and eliminate bottleneck queues at the link endpoints, which increase latency. This is called congestion avoidance. Some issues which limit the performance of a given link are:

- TCP determines the capacity of a connection by flooding it until packets start being dropped (slow-start).
- Queuing in routers results in higher latency and jitter as the network approaches (and occasionally exceeds) capacity.
- TCP global synchronization when the network reaches capacity results in a waste of bandwidth.
- Burstiness of web traffic requires spare bandwidth to rapidly accommodate the bursty traffic.
- Lack of widespread support for explicit congestion notification and Quality of Service management on the Internet.
- Internet Service Providers typically retain control over queue management and quality of service at their end of the link.
- Window shaping allows higher end products to reduce traffic flows, which reduces queue depth and allows more users to share more bandwidth fairly.

An alternative approach to improved performance is to reduce the amount of traffic generated whilst browsing. This can be achieved by removing photographic and other bandwidth-intensive content from web pages and rendering the page as text-only. This can be particularly beneficial on low bandwidth connections, for instance in the developing world. Disabling Internet Explorer options such as "show pictures" allows pages to be downloaded and viewed minus the pictures. The website Loband can be used to deliver a simplified page, with colors removed and images replaced by links.

Tools and techniques
Software for measuring network traffic can be divided into two broad classes: packet sniffers, which look at individual packets, and management applications which give a broader overview of network traffic. Packet sniffers are very useful for network experts tracking down tricky problems. But the volume of information they generate is enormous. A fast broadband connection can transmit thousands or millions of packets per second, and inspecting each one in detail is unlikely to help you make your network faster. In addition, understanding the output of these analyzers requires a detailed understanding of network protocols such as TCP/IP and HTTP. For most network administrators, the broad overview is likely to be more useful, at least as a starting point for tracking down rogue users of their networks.


Many companies sell expensive solutions to help manage a network, which may or may not include managing the bandwidth of an upstream connection. There are also a few lower cost options. Some are researched and described on the network traffic measurement page.

Typically, lower end bandwidth management devices will delay packets using queues that release packets at intervals that can be defined by user policies. This works well on small to medium networks where traffic flows do not have to be reduced to achieve good results. One major problem with delay techniques arises when traffic is delayed multiple times by multiple devices in a stream. This can cause retransmissions to occur because a particular packet is delayed for too long, which can significantly slow a connection.

Some higher end bandwidth management devices use TCP window shaping to reduce the overall flows in your network. By "fooling" the upstream server sending the traffic with a smaller window request, the server will send less data. This has a "pacing" effect on the traffic, and reduces the amount of traffic in your upstream queues without requiring a separate device to manage it. Since the queues are less clogged, traffic flows with less jitter at a naturally lower speed without having to use delay techniques. Window shaping can increase the capacity of your network by 20-40 times (a window of 64K will allow 42 full packets to be sent by downloading servers; this can be adaptively reduced to 1 with window shaping). Of course, TCP window shaping is only effective on TCP traffic, so most high-end devices use some combination of delay queues and TCP window shaping.

Companies with products employing bandwidth management

- Allot Communications
- Cisco Systems
- Ericsson
- F5 Networks
- Fortinet
- Juniper Networks
- LogiSense Corporation
- OPNET Technologies
- Packeteer
- Radware
- Sandvine Incorporated
- Strangeloop Networks
- Symantec (formerly TurnTide)

8.5.5 TCP congestion avoidance algorithm
TCP uses a network congestion avoidance algorithm that includes various aspects of an additive-increase, multiplicative-decrease (AIMD) scheme, together with other schemes such as slow-start, in order to achieve congestion avoidance.

TCP Tahoe and Reno
Two such variations are those offered by TCP Tahoe and Reno. TCP specifies a maximum segment size (MSS). The sender maintains a congestion window, limiting the total number of unacknowledged packets that may be in transit end-to-end. To avoid congestion collapse, TCP makes a slow start when the connection is initialized and after a timeout. It starts with a window of 2 MSS. Although the initial rate is low, the rate of increase is very rapid: for every packet ACKed, the congestion window increases by 1 MSS, so that for every round trip time (RTT) the congestion window has doubled. When the congestion window exceeds a threshold, ssthresh, the algorithm enters a new state, called congestion avoidance. In some implementations (e.g. Linux), the initial ssthresh is large, and so the first slow start usually ends after a loss. However, ssthresh is updated at the end of each slow start, and will often affect subsequent slow starts triggered by timeouts.

Congestion avoidance: as long as non-duplicate ACKs are received, the congestion window is additively increased by one MSS every round trip time. When a packet is lost, duplicate ACKs will be received. The behaviour of Tahoe and Reno differs in how they detect and react to packet loss:

- Tahoe: Loss is detected when a timeout expires before an ACK is received. Tahoe will then reduce the congestion window to 1 MSS, and reset to the slow-start state.
- Reno: If three duplicate ACKs are received (i.e., three ACKs acknowledging the same packet, which are not piggybacked on data, and do not change the receiver's advertised window), Reno will halve the congestion window, perform a "fast retransmit", and enter a phase called Fast Recovery. If an ACK times out, slow start is used as it is with Tahoe.
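The behaviour described above can be sketched as a small state machine, with the congestion window measured in MSS units. This is a simplification for illustration: real TCP grows the window per ACKed segment rather than per RTT, and Reno's fast recovery involves additional window inflation not modelled here.

```python
def tahoe_reno_step(cwnd: float, ssthresh: float, event: str, variant: str):
    """One reaction of the congestion-window state machine (sizes in MSS units).

    event is 'ack' (a full window of ACKs, i.e. one RTT of progress),
    'timeout', or '3dupack' (three duplicate ACKs).
    """
    if event == "ack":
        if cwnd < ssthresh:
            cwnd *= 2               # slow start: window doubles each RTT
        else:
            cwnd += 1               # congestion avoidance: +1 MSS per RTT
    elif event == "timeout":
        ssthresh = cwnd / 2
        cwnd = 1                    # both variants restart slow start
    elif event == "3dupack":
        ssthresh = cwnd / 2
        # Tahoe restarts slow start; Reno halves the window (fast recovery).
        cwnd = 1 if variant == "tahoe" else ssthresh
    return cwnd, ssthresh

cwnd, ssthresh = 2, 16
trace = []
for event in ["ack", "ack", "ack", "ack", "3dupack", "ack"]:
    cwnd, ssthresh = tahoe_reno_step(cwnd, ssthresh, event, "reno")
    trace.append(cwnd)
print(trace)    # [4, 8, 16, 17, 8.5, 9.5]
```

The trace shows the characteristic sawtooth: exponential growth up to ssthresh, linear growth beyond it, then a halving (Reno) on triple-duplicate-ACK loss. Replacing "reno" with "tahoe" would drop the window to 1 at the loss event instead.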

Fast Recovery (Reno only): in this state, TCP retransmits the missing packet that was signaled by three duplicate ACKs, and waits for an acknowledgment of the entire transmit window before returning to congestion avoidance. If there is no acknowledgment, TCP Reno experiences a timeout and enters the slow-start state. Both algorithms reduce the congestion window to 1 MSS on a timeout event.

TCP Vegas
Up until the mid 1990s, all TCPs set timeouts and measured round-trip delays based only upon the last transmitted packet in the transmit buffer. With TCP Vegas, timeouts are set and round-trip delays are measured for every packet in the transmit buffer. In addition, TCP Vegas uses additive increases and additive decreases in the congestion window.

TCP New Reno
TCP New Reno improves retransmission during the fast recovery phase of TCP Reno. During fast recovery, for every duplicate ACK that is returned to TCP New Reno, a new unsent packet from the end of the congestion window is sent, to keep the transmit window full. For every ACK that makes partial progress in the sequence space, the sender assumes that the ACK points to a new hole, and the next packet beyond the ACKed sequence number is sent. Because the timeout timer is reset whenever there is progress in the transmit buffer, this allows New Reno to fill large holes, or multiple holes, in the sequence space - much like TCP SACK. Because New Reno can send new packets at the end of the congestion window during fast recovery, high throughput is maintained during the hole-filling process, even when there are multiple holes, of multiple packets each. When TCP enters fast recovery it records the highest outstanding unacknowledged packet sequence number. When this sequence number is acknowledged, TCP returns to the congestion avoidance state. A problem occurs with New Reno when there are no packet losses but instead, packets are reordered by more than 3 packet sequence numbers.
When this happens, New Reno mistakenly enters fast recovery, but when the reordered packet is delivered, ACK sequence-number progress occurs and from there until the end of fast recovery, every bit of sequence-number progress produces a duplicate and needless retransmission that is immediately ACKed. New Reno performs as well as SACK at low packet error rates, and substantially outperforms Reno at high error rates.


TCP Hybla
TCP Hybla aims to eliminate the penalization of TCP connections that incorporate a high-latency terrestrial or satellite radio link, due to their longer round-trip times. It stems from an analytical evaluation of the congestion-window dynamics, which suggests the modifications necessary to remove the performance dependence on RTT.

TCP BIC
Binary Increase Congestion control (BIC) is an implementation of TCP with a congestion control algorithm optimized for high-speed networks with high latency (LFN: Long Fat Networks). BIC is used by default in Linux kernels 2.6.8 through 2.6.18.

TCP CUBIC
CUBIC is a less aggressive and more systematic derivative of BIC, in which the window is a cubic function of time since the last congestion event, with the inflection point set to the window size prior to the event. CUBIC is used by default in Linux kernels since version 2.6.19.

Other TCP congestion avoidance algorithms
- Compound TCP
- Fast TCP
- H-TCP
- High Speed TCP
- HSTCP-LP
- TCP-Illinois
- TCP-LP
- TCP SACK
- Scalable TCP
- TCP Veno
- Westwood
- Westwood+
- XCP
- YeAH-TCP

TCP New Reno is the most commonly implemented algorithm; SACK support is very common and is an extension to Reno/New Reno. Most of the others are competing proposals which still need evaluation. Starting with 2.6.8, the Linux kernel switched the default implementation from Reno to BIC; the default was changed again to CUBIC in version 2.6.19.

When the per-flow product of bandwidth and latency increases, regardless of the queuing scheme, TCP becomes inefficient and prone to instability. This becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links.

TCP Interactive (iTCP) allows applications to subscribe to TCP events and respond accordingly, enabling various functional extensions to TCP from outside the TCP layer. Most TCP congestion schemes work internally; iTCP additionally enables advanced applications to participate directly in congestion control, for example by controlling the source generation rate.

8.6 Connection Establishment
The "three-way handshake" is the procedure used to establish a connection. This procedure is normally initiated by one TCP and responded to by another TCP. The procedure also works if two TCPs simultaneously initiate it. When a simultaneous attempt occurs, each TCP receives a "SYN" segment which carries no acknowledgment after it has sent a "SYN". Of course, the arrival of an old duplicate "SYN" segment can potentially make it appear, to the recipient, that a simultaneous connection initiation is in progress. Proper use of "reset" segments can disambiguate these cases.

The three-way handshake reduces the possibility of false connections. It is the implementation of a trade-off between memory and messages to provide information for this checking.

The simplest three-way handshake is shown in the figure below, which should be interpreted in the following way. Each line is numbered for reference purposes. Right arrows (-->) indicate departure of a TCP segment from TCP A to TCP B, or arrival of a segment at B from A. Left arrows (<--) indicate the reverse. An ellipsis (...) indicates a segment which is still in the network (delayed). TCP states represent the state AFTER the departure or arrival of the segment (whose contents are shown in the center of each line). Segment contents are shown in abbreviated form, with sequence number, control flags, and ACK field. Other fields such as window, addresses, lengths, and text have been left out in the interest of clarity.

      TCP A                                                  TCP B

  1.  CLOSED                                                 LISTEN

  2.  SYN-SENT    --> <SEQ=100><CTL=SYN>                 --> SYN-RECEIVED

  3.  ESTABLISHED <-- <SEQ=300><ACK=101><CTL=SYN,ACK>    <-- SYN-RECEIVED

  4.  ESTABLISHED --> <SEQ=101><ACK=301><CTL=ACK>        --> ESTABLISHED

  5.  ESTABLISHED --> <SEQ=101><ACK=301><CTL=ACK><DATA>  --> ESTABLISHED

          Basic 3-Way Handshake for Connection Synchronization

In line 2 of the figure above, TCP A begins by sending a SYN segment indicating that it will use sequence numbers starting with sequence number 100. In line 3, TCP B sends a SYN and acknowledges the SYN it received from TCP A. Note that the acknowledgment field indicates TCP B is now expecting to hear sequence 101, acknowledging the SYN which occupied sequence 100. At line 4, TCP A responds with an empty segment containing an ACK for TCP B's SYN; in line 5, TCP A sends some data. Note that the sequence number of the segment in line 5 is the same as in line 4, because the ACK does not occupy sequence-number space.


Simultaneous initiation is only slightly more complex, as shown in the figure below. Each TCP cycles from CLOSED to SYN-SENT to SYN-RECEIVED to ESTABLISHED.

      TCP A                                                  TCP B

  1.  CLOSED                                                 CLOSED

  2.  SYN-SENT     --> <SEQ=100><CTL=SYN>                ...

  3.  SYN-RECEIVED <-- <SEQ=300><CTL=SYN>                <-- SYN-SENT

  4.               ... <SEQ=100><CTL=SYN>                --> SYN-RECEIVED

  5.  SYN-RECEIVED --> <SEQ=100><ACK=301><CTL=SYN,ACK>   ...

  6.  ESTABLISHED  <-- <SEQ=300><ACK=101><CTL=SYN,ACK>   <-- SYN-RECEIVED

  7.               ... <SEQ=101><ACK=301><CTL=ACK>       --> ESTABLISHED

             Simultaneous Connection Synchronization

Problem regarding the 2-way handshake
The only real problem with a 2-way handshake is that duplicate packets from a previous connection (which has since been closed) between the two nodes might still be floating on the network. After a SYN has been sent to the responder, it might receive a duplicate packet of a previous connection and would regard it as a packet from the current connection, which would be undesirable.

Spoofing is another concern if a two-way handshake is used. Suppose a node C sends a connection request to B claiming to be A. B then sends an ACK to A, which A rejects, asking B to close the connection. Between these two events, C can send a lot of packets which will be delivered to the application.


Some Conventions
1. The ACK contains 'x+1' if the sequence number received is 'x'.
2. If 'ISN' is the sequence number of the connection packet, then the 1st data packet has the sequence number 'ISN+1'.
3. Sequence numbers are 32 bit. They are byte sequence numbers. With each packet, the sequence number of its first byte and the length of the packet are sent.
4. Acknowledgements are cumulative.
5. Acknowledgements have a sequence number of their own, but with a length of 0, so the next data packet has the same sequence number as the ACK.
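Conventions 1, 2, and 4 can be sketched together: with byte sequence numbers and cumulative ACKs, the acknowledgment number is simply the next in-sequence byte expected. The function name and the segment representation below are made up for illustration.

```python
# Sketch of cumulative acknowledgment over byte sequence numbers.
def cumulative_ack(isn, received):
    """received: list of (seq, length) pairs in arrival order.
    Returns the ACK number: the next byte expected in sequence."""
    next_expected = isn + 1                     # first data byte is ISN+1
    buffered = {seq: length for seq, length in received}
    while next_expected in buffered:
        next_expected += buffered.pop(next_expected)   # absorb contiguous data
    return next_expected

# Two 100-byte segments arrive, but the one at seq 201 is missing,
# so the cumulative ACK stays at 201 even though bytes 301-400 arrived.
print(cumulative_ack(100, [(101, 100), (301, 100)]))
```

Once the missing segment arrives, the ACK jumps over all buffered data in one step, which is exactly what "cumulative" means here.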

8.6.1 Connection Establishment

- The sender sends a SYN packet with a sequence number, say 'x'.
- The receiver, on receiving the SYN packet, responds with a SYN packet with sequence number 'y' and an ACK with sequence number 'x+1'.
- On receiving both the SYN and the ACK, the sender responds with an ACK packet with sequence number 'y+1'.
- The receiver, when it receives this ACK packet, initiates the connection.
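The sequence-number bookkeeping in the steps above can be sketched as a toy model. Real implementations pick random ISNs and carry many more fields; the tuple layout here is purely illustrative.

```python
# Toy model of the three-way handshake's sequence numbers.
# x and y are the two sides' initial sequence numbers (ISNs).

def three_way_handshake(x, y):
    """Return the three segments as (flags, seq, ack) tuples."""
    syn     = ('SYN', x, None)       # sender -> receiver
    syn_ack = ('SYN,ACK', y, x + 1)  # receiver -> sender: own ISN, ACK x+1
    ack     = ('ACK', x + 1, y + 1)  # sender -> receiver: connection open
    return [syn, syn_ack, ack]

for seg in three_way_handshake(100, 300):
    print(seg)
```

With x = 100 and y = 300 this reproduces the figure in section 8.6: SEQ=100 SYN, then SEQ=300 ACK=101 SYN,ACK, then SEQ=101 ACK=301 ACK.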

Connection Release

- The initiator sends a FIN with the current sequence and acknowledgement number.
- The responder, on receiving this, informs the application program that it will receive no more data, and sends an acknowledgement of the packet. The connection is now closed from one side.
- The responder then follows the same steps to close the connection from its side. Once this is done, the connection is fully closed.

A TCP connection is a duplex connection: once the connection is established, there is no difference between the two sides.

8.6.2 Salient Features of TCP

Piggybacking of acknowledgments: The ACK for the last received packet need not be sent as a new packet, but gets a free ride on the next outgoing data frame (using the ACK field in the frame header). The technique of temporarily delaying outgoing ACKs so that they can be hooked onto the next outgoing data frame is known as piggybacking. However, an ACK cannot be delayed for long if the receiver (of the packet to be acknowledged) has no data to send.

Flow and congestion control: TCP takes care of flow control by ensuring that both ends have enough resources and that each can handle the speed of data transfer of the other, so that neither is overloaded with data. The term congestion control is used in almost the same context, except that the resources and speed of each router are also taken into account; the main concern in the latter case is network resources.

Multiplexing/demultiplexing: Many applications can be sending and receiving data at the same time, and data from all of them has to be multiplexed together. On receiving data from the lower layer, TCP has to decide which application is the recipient; this is called demultiplexing. TCP uses the concept of port numbers to do this.
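Demultiplexing by port number can be sketched as a simple table lookup. The `bound_apps` table, the `deliver` helper, and the port assignments are made-up names for illustration; they are not part of any real API.

```python
# Sketch of port-based demultiplexing: incoming segments are handed to
# whichever application is bound to the destination port.

bound_apps = {80: 'web-server', 25: 'mail-server'}   # port -> application

def deliver(dst_port, payload):
    app = bound_apps.get(dst_port)
    if app is None:
        return 'RST'             # no listener on that port: refuse
    return (app, payload)

print(deliver(80, b'GET /'))     # reaches the web server
print(deliver(9999, b'?'))       # no listener: connection refused
```

The same source/destination port mechanism is what lets many connections share one host; the (source port, destination port) pair, together with the two addresses, identifies the connection.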

TCP segment header:

Explanation of header fields:

Source and destination port: These fields identify the local endpoints of the connection. Each host may decide for itself how to allocate its own ports, starting at 1024. The source and destination socket numbers together identify the connection.

Sequence and ACK number: This field gives a sequence number to each and every byte transferred. This has an advantage over giving sequence numbers to packets: the data of many small packets can be combined into one at the time of retransmission, if needed. The ACK signifies the next byte expected from the source, not the last byte received; the ACKs are cumulative instead of selective. The sequence-number space is 32 bits, although 17 bits would have been enough if packets were always delivered in order. If packets arrive in order, then according to the formula

    (sender's window size) + (receiver's window size) < (sequence number space)

a 17-bit sequence-number space would suffice. But packets may take different routes and arrive out of order, so a larger sequence-number space is needed; for optimization, it is 32 bits.

Header length: This field tells how many 32-bit words are contained in the TCP header. It is needed because the options field is of variable length.

Flags: There are six one-bit flags.
1. URG: Indicates whether the urgent-pointer field in this packet is being used.
2. ACK: Set to indicate that the ACK-number field in this packet is valid.
3. PSH: Indicates pushed data. The receiver is requested to deliver the data to the application upon arrival rather than buffering it until a full buffer has been received.
4. RST: Used to reset a connection that has become confused due to a host crash or some other reason. It is also used to reject an invalid segment or refuse an attempt to open a connection. This causes an abrupt end to the connection, if one existed.
5. SYN: Used to establish connections. The connection request (1st packet in the 3-way handshake) has SYN=1 and ACK=0; the connection reply (2nd packet in the 3-way handshake) has SYN=1 and ACK=1.
6. FIN: Used to release a connection. It specifies that the sender has no more fresh data to transmit; however, it will retransmit any lost or delayed packet, and will continue to receive data from the other side. Since SYN and FIN packets have to be acknowledged, they must have a sequence number even if they do not contain any data.

Window size: Flow control in TCP is handled using a variable-size sliding window. The window-size field tells how many bytes may be sent starting at the byte acknowledged; the sender may send bytes with sequence numbers between (ACK#) and (ACK# + window size - 1). A window size of zero is legal and says that the bytes up to and including ACK# - 1 have been received, but the receiver would like no more data for the moment. Permission to send can be granted later by sending a segment with the same ACK number and a nonzero window-size field.

Checksum: This is provided for extreme reliability. It checksums the header, the data, and the conceptual pseudo header. The pseudo header contains the 32-bit IP addresses of the source and destination machines, the protocol number for TCP (6), and the byte count for the TCP segment (including the header). Including the pseudo header in the TCP checksum computation helps detect undelivered packets, but doing so violates the protocol hierarchy, since the IP addresses in it belong to the IP layer, not the TCP layer.
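A minimal sketch of the checksum just described, assuming IPv4 addresses are passed as 4-byte strings and the segment's checksum field has already been zeroed. The folding follows the standard Internet checksum (one's-complement sum of 16-bit words, RFC 1071).

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement checksum over 16-bit words (RFC 1071 style)."""
    if len(data) % 2:
        data += b'\x00'                           # pad odd-length data
    total = 0
    for (word,) in struct.iter_unpack('!H', data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return (~total) & 0xFFFF

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    """Checksum the TCP segment together with the conceptual pseudo header:
    source IP, destination IP, a zero byte, protocol number 6, TCP length."""
    pseudo = src_ip + dst_ip + struct.pack('!BBH', 0, 6, len(segment))
    return internet_checksum(pseudo + segment)

# An all-zero 20-byte header between 10.0.0.1 and 10.0.0.2:
print(hex(tcp_checksum(b'\x0a\x00\x00\x01', b'\x0a\x00\x00\x02', b'\x00' * 20)))
```

A useful property of this checksum: appending the computed checksum to the data and re-running the sum yields zero, which is how the receiver verifies a segment.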

8.6.3 State Diagram
The state-diagram approach to viewing TCP connection establishment and closing simplifies the design of a TCP implementation. The idea is to represent the TCP connection state, which progresses from one state to another as various messages are exchanged. To simplify matters, we consider two state diagrams: one for TCP connection establishment and one for TCP connection closing. The figure shows the state diagram for TCP connection establishment, and the associated table briefly explains each state.


The table gives a brief description of each state in the above diagram.

State Description Table 1

Listen   : Represents waiting for a connection request from any remote host and port. This applies specifically to a server. From this state, the server can close the service or actively open a connection by sending a SYN.

Syn-Sent : Represents waiting for a matching connection request after having sent a connection request. This applies to both the server and the client side: even though the server is considered the one with the passive open, it can also send a SYN packet actively.

Syn_Rcvd : Represents waiting for a confirming connection-request acknowledgment after having both received and sent a connection request.

Estab    : Represents an open connection. Data transfer can take place from this point onwards.

After the connection has been established, two end-points will exchange useful information and terminate the connection. The following Figure shows the state diagram for terminating an active connection.


TCP Connection termination

State Description Table 2

FIN-WAIT-1 : Represents waiting for a connection termination request from the remote TCP, or an acknowledgment of the connection termination request previously sent. This state is entered when the server issues a close call.

FIN-WAIT-2 : Represents waiting for a connection termination request from the remote TCP.

CLOSING    : Represents waiting for a connection termination request acknowledgment from the remote TCP.

TIME_WAIT  : Represents waiting long enough to be sure that straggling packets have reached their destination. This waiting time is usually 4 minutes.

CLOSE_WAIT : Represents the state entered when the server receives a FIN from the remote TCP, sends an ACK, and issues a close call, sending a FIN.

LAST_ACK   : Represents waiting for an ACK of the FIN previously sent to the remote TCP.

CLOSE      : Represents a closed TCP connection, all ACKs having been received.
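The transitions summarized in Tables 1 and 2 can be encoded as a lookup table. The event names below are simplified labels, not protocol messages, and the active-close path shown follows the table above.

```python
# Sketch of the TCP close-side state transitions from the tables above.
# Keys are (state, event) pairs; events are simplified labels.

close_transitions = {
    # active close
    ('ESTABLISHED', 'close/send FIN'): 'FIN-WAIT-1',
    ('FIN-WAIT-1',  'recv ACK'):       'FIN-WAIT-2',
    ('FIN-WAIT-1',  'recv FIN'):       'CLOSING',
    ('FIN-WAIT-2',  'recv FIN'):       'TIME_WAIT',
    ('CLOSING',     'recv ACK'):       'TIME_WAIT',
    ('TIME_WAIT',   '2*MSL timeout'):  'CLOSED',
    # passive close
    ('ESTABLISHED', 'recv FIN'):       'CLOSE_WAIT',
    ('CLOSE_WAIT',  'close/send FIN'): 'LAST_ACK',
    ('LAST_ACK',    'recv ACK'):       'CLOSED',
}

def trace(start, events):
    state = start
    for ev in events:
        state = close_transitions[(state, ev)]
    return state

# A normal active close walks FIN-WAIT-1 -> FIN-WAIT-2 -> TIME_WAIT -> CLOSED:
print(trace('ESTABLISHED',
            ['close/send FIN', 'recv ACK', 'recv FIN', '2*MSL timeout']))
```

Any (state, event) pair not in the table would raise a `KeyError`, which is a reasonable stand-in for "invalid segment in this state".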

8.6.4 Other implementation details

Quiet Time
It can happen that a host currently in communication crashes and reboots. At startup, all data structures and timers are reset to their initial values. To make sure that packets from earlier connections are gracefully rejected, the local host is not allowed to make any new connection for a small period after startup. This time is set in accordance with the reboot time of the operating system.


Initial Sequence Number
The initial sequence number (ISN) used in TCP communication is initialized at boot time randomly, rather than to 0. This ensures that packets from an old connection do not interfere with a new connection. The recommended method is to:

- initialize the ISN at boot time with a random number;
- every 500 ms, increment the ISN by 64K;
- with every SYN received, increment the ISN by 64K.

Maximum Request Backlog at Server
As we have seen in Unix network programming, listen(sd, n) sets a maximum on the number of connection requests the server will hold at any time. So if there are already n requests for connection and an (n+1)th request comes, one of two things can be done:

- drop the packet silently, or
- ask the peer to send the request later.

The first option is recommended because the assumption is that the full queue is transient: some time later, the server should be free to process the new request. If we drop the packet, the client will go through timeout and retransmission, and the server will then be free to process it. Also, standard TCP does not define any strategy or option for knowing who requested the connection; only Solaris 2.2 supports such an option.

Delayed Acknowledgment
TCP piggybacks its acknowledgments on data. But if the peer does not have any data to send at that moment, the acknowledgment should not be delayed too long, so a 200 ms timer is used. Every 200 ms, TCP checks for any acknowledgments to be sent and sends them as individual packets.

Small Packets
TCP implementations discourage small packets. In particular, if a relatively large packet has been sent and no acknowledgment has been received so far, a subsequent small packet is stored in the buffer until the situation improves. But for some applications delayed data is worse than bad data: in telnet, for example, each keystroke is processed by the server, so no delay should be introduced. As we have seen in Unix network programming, the TCP_NODELAY socket option can be set so that small packets are not held back.

ICMP Source Quench
We have seen in ICMP that an ICMP Source Quench message is sent to ask the peer to slow down. Some implementations discard this message, but a few set the current window size to 1; this, however, is not a very good idea.

Retransmission Timeout
In some implementations (e.g., Linux), RTO = RTT + 4 * (delay variance) is used instead of the constant factor 2.
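The RTO rule just quoted can be sketched with a smoothed estimator that keeps a running RTT estimate and a running variance. The gains 1/8 and 1/4 below are common practice (Jacobson-style), not something mandated by the text; the sample values are invented for illustration.

```python
# Sketch of RTO = RTT(est) + 4 * (delay variance), with smoothed updates.

def update_rto(srtt, rttvar, sample):
    """Fold one measured round-trip sample into the estimator.
    Returns (new srtt, new rttvar, new RTO)."""
    err = sample - srtt
    srtt += err / 8                      # smoothed RTT estimate
    rttvar += (abs(err) - rttvar) / 4    # smoothed mean deviation
    return srtt, rttvar, srtt + 4 * rttvar

srtt, rttvar = 0.5, 0.25                 # cached history (seconds)
for sample in (0.4, 0.6, 1.2):           # measured round-trip times
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
    print(round(srtt, 4), round(rttvar, 4), round(rto, 4))
```

Note how the late 1.2 s sample inflates the variance term much faster than the RTT estimate itself, so the timeout backs off promptly after jittery samples.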


Also, instead of calculating RTT(est) from scratch, a cache is used to store the history, from which new values are calculated as discussed in the previous classes.

Standard values for the Maximum Segment Lifetime (MSL) are between 0.5 and 2 minutes, and the time-wait state is a function of the MSL.

Keep-Alive Timer
Another important timer in TCP is the keep-alive timer. It is used by a TCP peer to check periodically whether the other end is up or down. If the other end does not respond, the connection is closed.

Persist Timer
As we saw in TCP window management, when the source has sent one full window of packets, its window size becomes 0 and it expects an ACK from the remote TCP to increase the window. Suppose such an ACK has been sent and is lost. The source then has a current window size of 0 and cannot send, while the destination is expecting the next byte. To avoid such a deadlock, a persist timer is used: when it goes off, the source sends the last byte again, in the hope that the situation has improved and an ACK increasing the current window size will be received.

TCP Congestion Control
If the receiver advertises a larger window size than the network en route can handle, there will invariably be packet losses, and therefore retransmissions as well. However, the sender cannot simply resend all the packets for which no ACK has been received, because this would cause even more congestion in the network. Moreover, the sender at this point cannot be sure how many packets have actually been lost: it might be that only one was lost, and some following it were actually received and buffered by the receiver. In that case, the sender would have resent a number of packets unnecessarily.

Clark's Solution to this problem
We try to prevent the sender from transmitting data in very small pieces. The sender should wait until it has accumulated enough data to send a full segment, or at least half the receiver's buffer size, which it can estimate from the pattern of window updates it has received in the past.

Another problem: what if the same behaviour is shown by an interactive application at the sender's end, i.e., the sender keeps generating data in segments of very small size?

Nagle's Algorithm
When data comes to the sender one byte at a time, send the first byte and buffer all the remaining bytes until the outstanding byte is acknowledged. Then send all the buffered characters in one segment and start buffering again until they are acknowledged. This can reduce bandwidth usage, for example when a user is typing quickly into a telnet connection over a slow network.

Persistent Timer
Consider the following deadlock situation. The receiver sends an ACK with a window size of 0, telling the sender to wait. Later it sends an ACK with a non-zero window, but this ACK packet gets lost. Now both the receiver and the sender are waiting for each other to do something. So we keep another timer: when it goes off, the sender transmits a probe packet with an old ACK number. The receiver responds with an ACK carrying the updated window size, and transmission resumes.

The Problem of Random Losses
How do we know whether a loss is a congestion-related loss or a random loss? If the window size is very large, we cannot treat a single packet loss as random. So we need a mechanism to find out which packets are lost; cumulative acknowledgment is not a good tool for this.

Solution: Selective Acknowledgment
We need selective acknowledgment, but that creates a problem in TCP because byte sequence numbers are used; so we send the sequence number together with a length. We may have to send a large number of such selective acknowledgments, which increases the overhead, so when we get out-of-sequence packets we send this information only a few times, not in every packet; thus we cannot rely on selective acknowledgment alone. If we use a 32-bit sequence number and a 32-bit length, the overhead is already too high; one proposal is to use a 16-bit length field. If the gaps are very small, we assume random losses are occurring and simply fill them; if the gaps are large, we assume congestion and slow down.

TCP Timestamps Option
TCP is a symmetric protocol, allowing data to be sent at any time in either direction, and therefore timestamp echoing may occur in either direction. For simplicity and symmetry, we specify that timestamps always be sent and echoed in both directions. For efficiency, we combine the timestamp and timestamp-reply fields into a single TCP Timestamps Option.

Kind: 8
Length: 10 bytes

   +-------+-------+---------------------+---------------------+
   |Kind=8 |  10   |      TS Value       |    TS Echo Reply    |
   +-------+-------+---------------------+---------------------+
       1       1             4                     4            (length in bytes)

The Timestamps option carries two four-byte timestamp fields. The Timestamp Value field (TSval) contains the current value of the timestamp clock of the TCP sending the option. The Timestamp Echo Reply field (TSecr) is only valid if the ACK bit is set in the TCP header; if it is valid, it echoes a timestamp value that was sent by the remote TCP in the TSval field of a Timestamps option. When TSecr is not valid, its value must be zero.
The TSecr value will generally be the timestamp of the last in-sequence packet received. Example:

    Sequence of packets sent:       1 (t1)   2 (t2)   3 (t3)   4 (t4)   5 (t5)   6 (t6)
    Sequence of packets received:   1        2        4        3        5        6
    Timestamp copied in ACK:                 t1       t2       t3
PAWS: Protect Against Wrapped Sequence Numbers
PAWS operates within a single TCP connection, using state that is saved in the connection control block. PAWS uses the same TCP Timestamps option as the RTTM mechanism described earlier, and assumes that every received TCP segment (including data and ACK segments) contains a timestamp SEG.TSval whose values are monotone non-decreasing in time. The basic idea is that a segment can be discarded as an old duplicate if it is received with a timestamp SEG.TSval less than some timestamp recently received on this connection.

In both the PAWS and the RTTM mechanisms, the "timestamps" are 32-bit unsigned integers in a modular 32-bit space. Thus, "less than" is defined the same way as for TCP sequence numbers, and the same implementation techniques apply: if s and t are timestamp values, s < t if 0 < (t - s) < 2**31, computed in unsigned 32-bit arithmetic.
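The modular "less than" test just quoted is short enough to write out directly. This is a sketch of the comparison rule only, not of the full PAWS machinery:

```python
# s < t in modular 32-bit timestamp space: 0 < (t - s) < 2**31,
# computed in unsigned 32-bit arithmetic.

def ts_less(s: int, t: int) -> bool:
    return 0 < ((t - s) & 0xFFFFFFFF) < 2**31

print(ts_less(5, 10))          # ordinary case: True
print(ts_less(0xFFFFFFF0, 3))  # t wrapped past zero but is newer: True
print(ts_less(10, 5))          # older timestamp: False
```

The same comparison (with the same wrap-around behaviour) is what TCP uses for sequence numbers, which is why PAWS can reuse the existing implementation techniques.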


The choice of incoming timestamps to be saved for this comparison must guarantee a value that is monotone increasing. For example, we might save the timestamp from the segment that last advanced the left edge of the receive window, i.e., the most recent in-sequence segment. Instead, we choose the value TS.Recent from the RTTM mechanism, since using a common value for both PAWS and RTTM simplifies the implementation of both. TS.Recent differs from the timestamp of the last in-sequence segment only in the case of delayed ACKs, and therefore by less than one window; either choice will therefore protect against sequence-number wrap-around.

RTTM was specified in a symmetrical manner, so that TSval timestamps are carried in both data and ACK segments and are echoed in TSecr fields carried in returning ACK or data segments. PAWS submits all incoming segments to the same test, and therefore protects against duplicate ACK segments as well as data segments. (An alternative, non-symmetric algorithm would protect against old duplicate ACKs: the sender of data would reject incoming ACK segments whose TSecr values were less than the TSecr saved from the last segment whose ACK field advanced the left edge of the send window. This algorithm was deemed to lack economy of mechanism and symmetry.)

TSval timestamps sent on {SYN} and {SYN,ACK} segments are used to initialize PAWS. PAWS protects against old duplicate non-SYN segments, and against duplicate SYN segments received while there is a synchronized connection. Duplicate {SYN} and {SYN,ACK} segments received when there is no connection will be discarded by the normal 3-way handshake and sequence-number checks of TCP.

Header Prediction
To know which TCP connection an incoming packet belongs to, the header of each new packet must in principle be matched against a database of connections, which takes a lot of time. So instead we first compare the header with the header of the last received packet; on average this reduces the work, on the assumption (the locality principle) that the packet belongs to the same TCP connection as the previous one.

UDP, like its cousin the Transmission Control Protocol (TCP), sits directly on top of the base Internet Protocol (IP). In general, UDP implements a fairly "lightweight" layer above IP. At first sight it seems that a similar service is provided by both UDP and IP, namely transfer of data; but we need UDP for multiplexing/demultiplexing of addresses. UDP's main purpose is to abstract network traffic in the form of datagrams. A datagram comprises one single "unit" of binary data; the first eight (8) bytes of a datagram contain the header information and the remaining bytes contain the data itself.

UDP Headers
The UDP header consists of four (4) fields of two bytes each:

    Source Port        - source port number
    Destination Port   - destination port number
    Length             - datagram size
    Checksum           - checksum


UDP port numbers allow different applications to maintain their own "channels" for data; both UDP and TCP use this mechanism to support multiple applications sending and receiving data concurrently. The sending application (which could be a client or a server) sends UDP datagrams through the source port, and the recipient of the packet accepts the datagram through the destination port. Some applications use static port numbers that are reserved for, or registered to, the application; other applications use dynamic (unregistered) port numbers. Because the UDP port headers are two bytes long, valid port numbers range from 0 to 65535; by convention, values above 49151 represent dynamic ports.

The datagram size is a simple count of the number of bytes contained in the header and data sections. Because the header length is a fixed size, this field essentially gives the length of the variable-sized data portion (sometimes called the payload). The maximum size of a datagram varies depending on the operating environment: with a two-byte size field, the theoretical maximum size is 65535 bytes, but some implementations of UDP restrict the datagram to a smaller number, sometimes as low as 8192 bytes.

UDP checksums work as a safety feature. The checksum value represents an encoding of the datagram data, calculated first by the sender and later by the receiver. Should an individual datagram be tampered with (by an attacker) or get corrupted during transmission (due to line noise, for example), the calculations of the sender and receiver will not match, and the UDP protocol will detect the error. The algorithm is not fool-proof, but it is effective in many cases.

In UDP, checksumming is optional (turning it off squeezes a little extra performance from the system), as opposed to TCP, where checksums are mandatory. It should be remembered that checksumming is optional only for the sender, not the receiver: if the sender has used a checksum, then it is mandatory for the receiver to verify it. If the sender does not use the checksum, it sets the checksum field to all 0's. If the sender does compute the checksum and the result turns out to be all 0's, it transmits all 1's instead, since in the checksum algorithm used by UDP a checksum of all 1's is equivalent to a checksum of all 0's.

Check Your Progress
3. Define Congestion Control.
4. What is meant by Bandwidth management?

8.7 Summary
This lesson introduced you to the transport layer of the OSI model. The transport layer is the lowest user layer in the OSI model; its scope of responsibility is a function of the quality of service required of it by the user and the quality of service provided by the network layer. You can now define the transport layer and trace its evolution and uses. You have also learnt about the various types of data transmission media and their terminology.

8.8 Keywords
TPDUs: Transport-layer messages are exchanged over the network layer using Transport Protocol Data Units (TPDUs).
Persist timer: When a receiver advertises a window size of 0, the sender stops sending data and starts the persist timer.


RTT: Senders employ a retransmission timer that is based on the estimated round-trip time (RTT) between the sender and receiver, as well as the variance in this round-trip time.

8.10 Check Your Progress Note: Use the space provided below for your answers. Compare your answers with those given at the end. 1. Write short notes on TCP. ………………………………………………………………………………………………………………… ………………………………………………………………………………………………………………… ………………… 2. What is called Address and Quality of Service (QOS)? ………………………………………………………………………………………………………………… ………………………………………………………………………………………………………………… ………………… 3. Define Congestion Control. ………………………………………………………………………………………………………………… ………………………………………………………………………………………………………………… ………………… 4. What is mean by Bandwidth management? ………………………………………………………………………………………………………………… ………………………………………………………………………………………………………………… ………………… Answer to Check Your Progress. 1. Applications that require the transport protocol to provide reliable data delivery use TCP because it verifies that data is delivered across the network accurately and in the proper sequence. TCP is a reliable, connection-oriented, byte-stream protocol. 2. (i) Addresses refer to Transport Service Access Points (TSAP); these denote entities at the transport layer to which connections are made. (ii) Quality Of Service (QOS) denotes a set of parameters (such as connection/release/transfer delay, connection/release/transfer probability of failure, throughput, error rate) which collectively describe the quality of the transport service requested by the user. 3. Congestion control concerns controlling traffic entry into a telecommunications network, so as to avoid congestive collapse by attempting to avoid oversubscription of any of the processing or link capabilities of the intermediate nodes and networks and taking resource reducing steps, such as reducing the rate of sending packets. It should not be confused with flow control, which prevents the sender from overwhelming the receiver. 4. 
In computer networking, bandwidth management is the process of measuring and controlling the communications (traffic, packets) on a network link, to avoid filling the link to capacity or overfilling it, which would result in network congestion and poor performance.

8.11 Further Reading

1. Leon-Garcia and Widjaja, "Communication Networks – Fundamental Concepts and Key Architectures".

FOR MORE DETAILS VISIT US ON WWW.IMTSINSTITUTE.COM OR CALL ON +91-9999554621


COMPUTER COMMUNICATION & NETWORKS

2. William Stallings, "Data and Computer Communications", 5th Edition, Pearson Education, 1997.


UNIT-9 TRANSMISSION CONTROL PROTOCOL

Structure
9.0 Introduction
9.1 Objectives
9.2 Definition
9.3 Flow Control
9.3.1 Transmit Flow Control
9.3.2 Open-loop Flow Control
9.3.3 Closed-loop Flow Control
9.4 Transmission Control Protocol
9.4.1 Reason for TCP
9.4.2 TCP Segment Structure
9.4.3 Data Transfer
9.4.4 Vulnerabilities
9.4.5 Fields Used to Compute the Checksum
9.5 User Datagram Protocol
9.5.1 Ports
9.5.2 Packet Structure
9.5.3 Difference between TCP and UDP
9.6 Summary
9.7 Keywords
9.8 Exercise and Questions
9.9 Check Your Progress
9.10 Further Reading


9.0 Introduction

This lesson describes the transport layer, the lowest user-oriented layer in the OSI model. Its scope of responsibility is a function of the quality of service required of it by the user and the quality of service provided by the network layer: the transport layer has to bridge this gap, which may be wide in some cases and negligible or nil in others. We will first look at the transport service primitives and see how these define the transport service. Then we will describe the transport protocol and related issues, such as segmentation, multiplexing, addressing, error control, and flow control. Finally, we will discuss TCP as an example of a popular transport layer standard.

9.1 Objectives

After studying this lesson you should be able to:
- Discuss in detail connection establishment in the TCP protocol
- Write a brief note on the Transmission Control Protocol
- Understand flow control
- Explain in detail the User Datagram Protocol

9.2 Definition

The Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite. TCP provides reliable, in-order delivery of a stream of bytes, making it suitable for applications like file transfer and e-mail. It is so important in the Internet protocol suite that sometimes the entire suite is referred to as "TCP/IP."

9.3 Flow Control

In computer networking, flow control is the process of managing the rate of data transmission between two nodes to prevent a fast sender from overrunning a slow receiver. This should be distinguished from congestion control, which is used for controlling the flow of data when congestion has actually occurred. Flow control mechanisms can be classified by whether or not the receiving node sends feedback to the sending node. Flow control is important because it is possible for a sending computer to transmit information at a faster rate than the destination computer can receive and process it.
This can happen if the receiving computer has a heavy traffic load in comparison to the sending computer, or if the receiving computer has less processing power than the sending computer.

9.3.1 Transmit flow control

Transmit flow control may occur between data terminal equipment (DTE) and a switching center, via data circuit-terminating equipment (DCE), or between two DTEs. The transmission rate may be controlled because of network or DTE requirements. Transmit flow control can occur independently in the two directions of data transfer, thus permitting the transfer rates in one direction to differ from the transfer rates in the other direction. Transmit flow control can be either stop-and-go or use a sliding window.

Flow control can be done either by control lines in a data communication interface (see serial port and RS-232), or by reserving in-band control characters to signal flow start and stop (such as the ASCII codes for XON/XOFF). Common RS-232 control lines are RTS (Request To Send)/CTS (Clear To Send) and DSR (Data Set Ready)/DTR (Data Terminal Ready); the former pair is usually referred to as "hardware flow control", while XON/XOFF is usually referred to as "software flow control". In the old mainframe days, modems were called "data sets", hence the survival of the term.
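The XON/XOFF mechanism can be illustrated with a small in-memory simulation (purely a sketch with toy buffer sizes; real serial links carry these bytes in-band on the reverse channel):

```python
XON, XOFF = 0x11, 0x13  # ASCII DC1 (resume) and DC3 (pause)

class Receiver:
    """Toy receiver whose small buffer triggers in-band XOFF/XON bytes."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.buffer = []      # bytes waiting to be processed
        self.delivered = []   # bytes the application has consumed
        self.control = []     # control bytes flowing back toward the sender

    def accept(self, byte):
        self.buffer.append(byte)
        if len(self.buffer) >= self.capacity:
            self.control.append(XOFF)      # buffer nearly full: ask for a pause

    def process(self):
        self.delivered.extend(self.buffer)  # application drains the buffer
        self.buffer.clear()
        self.control.append(XON)            # room again: resume

def transmit(data, rx):
    """Send `data` one byte at a time, honouring XON/XOFF from the receiver."""
    paused = False
    pending = list(data)
    while pending:
        while rx.control:                   # read reverse-channel control bytes
            paused = (rx.control.pop(0) == XOFF)
        if paused:
            rx.process()    # stand-in for "wait until the receiver catches up"
            continue
        rx.accept(pending.pop(0))

rx = Receiver()
transmit(b"hello world", rx)
```

Every byte eventually arrives in order, with the sender pausing each time the receiver's buffer fills.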


Hardware flow control typically works by the DTE or master end first raising or asserting its line, such as RTS, which signals the opposite end (the slave end, such as a DCE) to begin monitoring its data input line. When ready for data, the slave end raises its complementary line, CTS in this example, which signals the master to start sending data and to begin monitoring the slave's data output line. If either end needs to stop the data, it lowers its respective line. For PC-to-modem and similar links, DTR/DSR are raised for the entire modem session (say, a dialup internet call), and RTS/CTS are raised for each block of data.

9.3.2 Open-loop flow control

The open-loop flow control mechanism is characterized by having no feedback between the receiver and the transmitter. This simple means of control is widely used. The allocation of resources must be of a "prior reservation" or "hop-to-hop" type. Open-loop flow control has inherent problems with maximizing the utilization of ATM network resources. Resource allocation is made at connection setup using a CAC (Connection Admission Control), and this allocation is made using information that is already "old news" during the lifetime of the connection. Often there is an over-allocation of resources. Open-loop flow control is used by the CBR, VBR and UBR services (see traffic contract and congestion control).

9.3.3 Closed-loop flow control

The closed-loop flow control mechanism is characterized by the ability of the network to report pending network congestion back to the transmitter. This information is then used by the transmitter in various ways to adapt its activity to existing network conditions. Closed-loop flow control is used by ABR (see traffic contract and congestion control). Transmit flow control, described above, is a form of closed-loop flow control.

9.4 Transmission Control Protocol

The Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite.
TCP provides reliable, in-order delivery of a stream of bytes, making it suitable for applications like file transfer and e-mail. It is so important in the Internet protocol suite that sometimes the entire suite is referred to as "TCP/IP." TCP manages a large fraction of the individual conversations between Internet hosts, for example between web servers and web clients. It is also responsible for controlling the size and rate at which messages are exchanged between the server and the client.

9.4.1 Reason for TCP

The Internet Protocol (IP) works by exchanging groups of information called packets. Packets are short sequences of bytes consisting of a header and a body. The header describes the packet's destination, which routers on the Internet use to pass the packet along, generally in the right direction, until it arrives at its final destination. The body contains the application data. In cases of congestion, IP can discard packets, and, for efficiency reasons, two consecutive packets on the Internet can take different routes to the destination. In that case, the packets can arrive at the destination in the wrong order.

The TCP software libraries use IP and provide a simpler interface to applications by hiding most of the underlying packet structures, rearranging out-of-order packets, minimizing network congestion, and re-transmitting discarded packets. Thus, TCP very significantly simplifies the task of writing network applications.

Applicability of TCP

TCP is used extensively by many of the Internet's most popular application protocols and resulting applications, including the World Wide Web, e-mail, the File Transfer Protocol, Secure Shell, and some streaming media applications. However, because TCP is optimized for accurate


delivery rather than timely delivery, TCP sometimes incurs relatively long delays (on the order of seconds) while waiting for out-of-order messages or retransmissions of lost messages, and it is not particularly suitable for real-time applications such as Voice over IP. For such applications, protocols like the Real-time Transport Protocol (RTP) running over the User Datagram Protocol (UDP) are usually recommended instead.

TCP is a reliable stream delivery service that guarantees delivery of a data stream sent from one host to another without duplicating or losing data. Since packet transfer is not reliable, a technique known as positive acknowledgment with retransmission is used to guarantee reliability of packet transfers. This fundamental technique requires the receiver to respond with an acknowledgment message as it receives the data. The sender keeps a record of each packet it sends and waits for an acknowledgment before sending the next packet. The sender also keeps a timer from when the packet was sent, and retransmits a packet if the timer expires. The timer is needed in case a packet becomes lost or corrupted.

TCP consists of a set of rules, the protocol, that are used with the Internet Protocol (IP) to send data in the form of message units between computers over the Internet. While IP takes care of handling the actual delivery of the data, TCP takes care of keeping track of the individual units of data (packets) that a message is divided into for efficient routing through the net. For example, when an HTML file is sent to you from a Web server, the TCP program layer of that server takes the file as a stream of bytes and divides it into packets, numbers the packets, and then forwards them individually to the IP program layer. Even though every packet has the same destination IP address, they can get routed differently through the network.
When the client program in your computer gets them, the TCP stack (implementation) reassembles the individual packets and ensures they are correctly ordered as it streams them to an application.

9.4.2 TCP segment structure

A TCP segment consists of two sections: a header and a data section. The TCP header consists of 11 fields, of which only 10 are required; the eleventh field, aptly named "options", is optional.

TCP header layout (by bit offset):

Bit offset  0-3          4-7       8-15                              16-31
0           Source port                                              Destination port
32          Sequence number
64          Acknowledgment number
96          Data offset  Reserved  CWR ECE URG ACK PSH RST SYN FIN   Window size
128         Checksum                                                 Urgent pointer
160         Options (optional)
160/192+    Data

- Source port (16 bits) – identifies the sending port.
- Destination port (16 bits) – identifies the receiving port.
- Sequence number (32 bits) – has a dual role: if the SYN flag is present, this is the initial sequence number and the first data byte is this sequence number plus 1; if the SYN flag is not present, this is the sequence number of the first data byte.
- Acknowledgment number (32 bits) – if the ACK flag is set, the value of this field is the next byte that the receiver is expecting.
- Data offset (4 bits) – specifies the size of the TCP header in 32-bit words. The minimum-size header is 5 words and the maximum is 15 words, giving a minimum of 20 bytes and a maximum of 60 bytes. This field gets its name from the fact that it is also the offset from the start of the TCP segment to the data.
- Reserved (4 bits) – for future use; should be set to zero.
- Flags (8 bits, also known as control bits) – contains 8 one-bit flags.

- CWR (1 bit) – the Congestion Window Reduced flag is set by the sending host to indicate that it received a TCP segment with the ECE flag set (added to the header by RFC 3168).
- ECE (ECN-Echo) (1 bit) – indicates that the TCP peer is ECN-capable during the 3-way handshake (added to the header by RFC 3168).
- URG (1 bit) – indicates that the URGent pointer field is significant.
- ACK (1 bit) – indicates that the ACKnowledgment field is significant.
- PSH (1 bit) – push function.
- RST (1 bit) – reset the connection.
- SYN (1 bit) – synchronize sequence numbers.
- FIN (1 bit) – no more data from sender.
- Window (16 bits) – the size of the receive window, which specifies the number of bytes (beyond the sequence number in the acknowledgment field) that the receiver is currently willing to receive (see Flow control).
- Checksum (16 bits) – used for error-checking of the header and data.
- Urgent pointer (16 bits) – if the URG flag is set, this 16-bit field is an offset from the sequence number indicating the last urgent data byte.
- Options (variable) – the total length of the option field must be a multiple of a 32-bit word, and the data offset field is adjusted appropriately. Option kinds include:
  0 - End of options list
  1 - No operation (NOP, padding)
  2 - Maximum segment size (see maximum segment size)
  3 - Window scale (see window scaling for details)
  4 - Selective Acknowledgement OK (see selective acknowledgments for details)
  Timestamp (see TCP timestamps for details)
- Data (variable) – the payload, or data portion, of a TCP segment. This field is not part of the header; its contents are whatever the upper-layer protocol wants. The protocol carried is not indicated in the header but is presumed based on the port selection. The payload may belong to any number of application-layer protocols; the most common are HTTP, Telnet, SSH and FTP, but other popular protocols also use TCP.

Protocol operation

Unlike TCP's traditional counterpart, the User Datagram Protocol, which can immediately start sending packets, TCP provides connections that must be established before sending data. TCP connections have three phases:

1. connection establishment
2. data transfer
3. connection termination

Before describing these three phases, a note about the various states of a connection end-point or Internet socket. The states are: LISTEN, SYN-SENT, SYN-RECEIVED, ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-WAIT and CLOSED.

- LISTEN represents waiting for a connection request from any remote TCP and port (usually set by TCP servers).
- SYN-SENT represents waiting for the remote TCP to send back a TCP packet with the SYN and ACK flags set (usually set by TCP clients).
- SYN-RECEIVED represents waiting for the remote TCP to send back an acknowledgment after having sent back a connection acknowledgment to the remote TCP (usually set by TCP servers).
- ESTABLISHED represents that the port is ready to receive/send data from/to the remote TCP (set by TCP clients and servers).
- TIME-WAIT represents waiting for enough time to pass to be sure the remote TCP received the acknowledgment of its connection termination request. According to RFC 793, a connection can stay in TIME-WAIT for a maximum of four minutes.

Connection establishment

To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first bind to a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate an active open. The three-way (or 3-step) handshake occurs as follows:

1. The active open is performed by the client sending a SYN to the server.
2. In response, the server replies with a SYN-ACK.
3. Finally, the client sends an ACK back to the server.

At this point, both the client and the server have received an acknowledgment of the connection.

Example:

1. The initiating host (client) sends a synchronization packet (SYN flag set to 1) to initiate a connection. It sets the packet's sequence number to a random value x.
2. The other host receives the packet, records the sequence number x from the client, and replies with an acknowledgment and synchronization (SYN-ACK). The acknowledgment number is a 32-bit field in the TCP segment header; it contains the next sequence number that this host expects to receive (x + 1). The host also initiates a return session, including a TCP segment with its own initial sequence number of value y.
3. The initiating host responds with the next sequence number (x + 1) and a simple acknowledgment number of value y + 1, which is the sequence number value of the other host plus 1.
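The passive open and active open described above map directly onto the sockets API; in this minimal sketch (Python, loopback interface, with a helper thread of our own naming) the kernel performs the SYN / SYN-ACK / ACK exchange itself:

```python
import socket
import threading

def serve_once(listener, results):
    """Accept one connection (the kernel has completed SYN / SYN-ACK / ACK
    by the time accept() returns) and collect everything the client sends."""
    conn, _addr = listener.accept()
    chunks = []
    while True:
        chunk = conn.recv(1024)
        if not chunk:          # empty read: the client closed its half
            break
        chunks.append(chunk)
    results.append(b"".join(chunks))
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))    # port 0: let the OS pick a free port
listener.listen(1)                 # passive open: socket enters LISTEN

results = []
t = threading.Thread(target=serve_once, args=(listener, results))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())   # active open: SYN-SENT -> ESTABLISHED
client.sendall(b"hello")
client.close()                     # FIN begins the termination handshake
t.join()
listener.close()
```

By the time connect() returns on the client, both endpoints are in the ESTABLISHED state.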

9.4.3 Data transfer

There are a few key features that set TCP apart from the User Datagram Protocol:

- Ordered data transfer – the destination host rearranges segments according to sequence number.
- Retransmission of lost packets – any part of the cumulative stream not acknowledged is retransmitted.
- Discarding of duplicate packets.
- Error-free data transfer.
- Flow control – limits the rate at which a sender transfers data, to guarantee reliable delivery. When the receiving host's buffer fills, the next acknowledgment contains a 0 in the window size, stopping the transfer and allowing the data in the buffer to be processed.
- Congestion control – sliding window.
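The retransmission behaviour in the list above can be illustrated with a toy positive-acknowledgment-with-retransmission loop over a simulated lossy channel (the function name and loss model are ours, not real TCP):

```python
import random

def send_reliably(packets, loss_rate=0.3, seed=1):
    """Toy positive-acknowledgment-with-retransmission loop: each packet is
    resent until it gets through a simulated lossy channel (no real timers)."""
    rng = random.Random(seed)       # fixed seed keeps the run reproducible
    received = []
    attempts = 0
    for pkt in packets:
        while True:
            attempts += 1
            if rng.random() < loss_rate:   # packet (or its ACK) was lost...
                continue                   # ...so the retransmission timer fires
            received.append(pkt)           # delivered; the ACK comes back
            break
    return received, attempts

data = ["seg0", "seg1", "seg2"]
got, attempts = send_reliably(data)
```

All segments arrive in order, at the cost of extra transmission attempts whenever the channel drops one.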

Ordered data transfer, retransmission of lost packets and discarding of duplicate packets

TCP uses a sequence number to identify each byte of data. The sequence number identifies the order of the bytes sent from each computer so that the data can be transferred reliably and in order, regardless of any fragmentation, disordering, or packet loss that occurs during transmission. For every byte transmitted, the sequence number is incremented. In the first two steps of the 3-way handshake, both computers exchange an initial sequence number (ISN). This number can be arbitrary, and should in fact be unpredictable, in order to avoid a TCP sequence prediction attack.

TCP primarily uses a cumulative acknowledgment scheme, where the receiver sends an acknowledgment signifying that it has received all data preceding the acknowledged sequence number. Essentially, the first data byte in a segment is assigned a sequence number, which is inserted in the sequence number field, and the receiver sends an acknowledgment specifying the sequence number of the next byte it expects to receive. For example, if computer A sends 4 bytes with a sequence number of 100 (conceptually, the four bytes would have the sequence numbers 100, 101, 102 and 103 assigned), then the receiver would send back an acknowledgment of 104, since that is the next byte it expects to receive. By sending an acknowledgment of 104, the receiver is signaling that it received bytes 100 through 103 correctly. If, by some chance, the last two bytes were corrupted, then an acknowledgment value of 102 would be sent, since bytes 100 and 101 were received successfully. In addition to cumulative acknowledgments, TCP receivers can also send selective acknowledgments to provide further information (see selective acknowledgments). If the sender infers that data has been lost in the network, it retransmits the data.
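The acknowledgment arithmetic in the example above (ACK 104 when all four bytes arrive, ACK 102 when the last two are corrupted) can be sketched as:

```python
def cumulative_ack(first_seq, received):
    """Next expected byte number, given the set of byte sequence numbers
    received so far: acknowledge everything up to the first gap."""
    ack = first_seq
    while ack in received:
        ack += 1
    return ack

# All four bytes 100..103 arrive -> ACK 104; if the last two are corrupted
# (treated as not received), only bytes 100 and 101 are acknowledged.
full = cumulative_ack(100, {100, 101, 102, 103})
partial = cumulative_ack(100, {100, 101})
```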
Error-free data transfer

Sequence numbers and acknowledgments cover discarding duplicate packets, retransmission of lost packets, and ordered data transfer. To assure correctness, a checksum field is included. The TCP checksum is a quite weak check by modern standards. Data link layers with high bit error rates may require additional link error correction/detection capabilities. If TCP were to be redesigned today, it would most probably have a 32-bit cyclic redundancy check specified as an error check instead of the current checksum. The weak checksum is partially compensated for by the common use of a CRC or better integrity check at layer 2, below both TCP and IP, such as is used in PPP or the Ethernet frame. However, this does not mean that the 16-bit TCP checksum is redundant: remarkably, the introduction of errors in packets between CRC-protected hops is common, but the end-to-end 16-bit TCP checksum catches most of these simple errors. This is the end-to-end principle at work.
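The 16-bit checksum mentioned above is a ones'-complement sum in the style of RFC 1071; this sketch omits the pseudo-header that real TCP also covers:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum in the style of RFC 1071 (a sketch;
    real TCP also sums a pseudo-header, which is omitted here)."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the end-around carry
    return ~total & 0xFFFF

payload = b"\x45\x00\x00\x1c\x00\x01"
check = internet_checksum(payload)
```

A handy property of this sum: appending the checksum bytes to the data and summing again yields zero, which is how receivers verify a segment.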


A simplified TCP state diagram (see the TCP EFSM diagram for a more detailed state diagram, including the states inside the ESTABLISHED state).

Flow control

TCP uses an end-to-end flow control protocol to avoid having the sender send data too fast for the TCP receiver to reliably receive and process it. Having a mechanism for flow control is essential in an environment where machines of diverse network speeds communicate. For example, when a fast PC sends data to a slow hand-held PDA, the PDA needs to regulate the influx of data, or the protocol software would quickly be overrun. Similarly, flow control is essential if the application that is receiving the data is reading it more slowly than the sending application is sending it.

TCP uses a sliding window flow control protocol. In each TCP segment, the receiver specifies in the receive window field the amount of additional received data (in bytes) that it is willing to buffer for the connection. The sending host can send only up to that amount of data before it must wait for an acknowledgment and window update from the receiving host.
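The window-limited sending rule can be sketched as follows (toy sizes and a stand-in "ACK arrives" step; a real sender is driven by timers and arriving ACKs):

```python
def sliding_window_send(data: bytes, receiver_window: int, mss: int = 3):
    """Window-limited sending sketch: at most `receiver_window` bytes may be
    unacknowledged at once (toy sizes; real senders react to arriving ACKs)."""
    segments = []
    next_seq, last_ack = 0, 0
    while last_ack < len(data):
        in_flight = next_seq - last_ack
        if next_seq < len(data) and in_flight < receiver_window:
            # Send one segment, limited by the MSS and the remaining window.
            size = min(mss, receiver_window - in_flight, len(data) - next_seq)
            segments.append((next_seq, data[next_seq:next_seq + size]))
            next_seq += size
        else:
            last_ack = next_seq   # window full or all sent: an ACK arrives
    return segments

segs = sliding_window_send(b"abcdefgh", receiver_window=4)
```

With a 4-byte window and a 3-byte MSS, the 8 bytes go out as segments of 3, 1, 3 and 1 bytes, pausing for an acknowledgment each time the window fills.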

TCP sequence numbers and receive windows behave very much like a clock. The receive window shifts each time the receiver receives and acknowledges a new segment of data. Once it runs out of sequence numbers, the sequence number loops back to 0.

When a receiver advertises a window size of 0, the sender stops sending data and starts the persist timer. The persist timer is used to protect TCP from a deadlock situation that could arise if the window size update from the receiver is lost and the receiver has no more data to send while the sender is waiting for the new window size update. When the persist timer expires, the TCP sender sends a small packet so that the receiver sends an acknowledgment with the new window size.

If a receiver is processing incoming data in small increments, it may repeatedly advertise a small receive window. This is referred to as the silly window syndrome, since it is inefficient to send only a few bytes of data in a TCP segment, given the relatively large overhead of the TCP header. TCP senders and receivers typically employ flow control logic to specifically avoid repeatedly sending small segments. The sender-side silly window syndrome avoidance logic is referred to as Nagle's algorithm.

Congestion control

The final main aspect of TCP is congestion control. TCP uses a number of mechanisms to achieve high performance and avoid "congestion collapse", where network performance can fall by several orders of magnitude. These mechanisms control the rate of data entering the network, keeping the data flow below a rate that would trigger collapse. Acknowledgments for data sent, or the lack of acknowledgments, are used by senders to infer network conditions between the TCP sender and receiver. Coupled with timers, TCP senders and receivers can alter the behavior of the flow of data. This is more generally referred to as congestion control and/or network congestion avoidance.

Modern implementations of TCP contain four intertwined algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery (RFC 2581). In addition, senders employ a retransmission timer that is based on the estimated round-trip time (or RTT) between the sender and receiver, as well as the variance in this round-trip time. The behavior of this timer is specified in RFC 2988. There are subtleties in the estimation of RTT. For example, senders must be careful when calculating RTT samples for retransmitted packets; typically they use Karn's algorithm or TCP timestamps (see RFC 1323).
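The retransmission timer computation can be sketched with Jacobson-style smoothing using the constants from RFC 2988 (an illustrative model, not a full implementation; the class name is ours):

```python
class RttEstimator:
    """Jacobson-style SRTT/RTTVAR smoothing with RFC 2988 constants
    (alpha = 1/8, beta = 1/4, RTO = SRTT + 4*RTTVAR, floor of 1 second)."""
    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt: float) -> float:
        if self.srtt is None:                 # first measurement
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            beta, alpha = 1 / 4, 1 / 8
            # RTTVAR is updated first, using the previous SRTT.
            self.rttvar = (1 - beta) * self.rttvar + beta * abs(self.srtt - rtt)
            self.srtt = (1 - alpha) * self.srtt + alpha * rtt
        return max(1.0, self.srtt + 4 * self.rttvar)   # the RTO, clamped

est = RttEstimator()
rto = 0.0
for r in (0.20, 0.24, 0.22, 0.30):   # RTT samples in seconds
    rto = est.sample(r)
```

With samples around 200-300 ms, the smoothed estimate stays in that range and the RTO sits at the 1-second floor.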
These individual RTT samples are then averaged over time to create a smoothed round-trip time (SRTT) using Jacobson's algorithm. This SRTT value is what is finally used as the round-trip time estimate. Enhancing TCP to reliably handle loss, minimize errors, manage congestion and perform well in very high-speed environments remains an ongoing area of research and standards development. As a result, there are a number of TCP congestion avoidance algorithm variations.

Selective acknowledgments

Relying purely on the cumulative acknowledgment scheme employed by the original TCP protocol can lead to inefficiencies when packets are lost. For example, suppose 10,000 bytes are sent in 10 different TCP packets, and the first packet is lost during transmission. In a pure cumulative acknowledgment protocol, the receiver cannot say that it received bytes 1,000 to 9,999 but only that it failed to receive the first packet, containing bytes 0 to 999. Thus the sender would then have to resend all 10,000 bytes.

To solve this problem, TCP employs the selective acknowledgment (SACK) option, defined in RFC 2018, which allows the receiver to acknowledge discontiguous blocks of packets that were received correctly, in addition to the sequence number of the last contiguous byte received successfully, as in the basic TCP acknowledgment. The acknowledgment can specify a number of SACK blocks, where each SACK block is conveyed by the starting and ending sequence numbers of a contiguous range that the receiver correctly received. In the example above, the receiver would send a SACK with sequence numbers 1,000 and 10,000. The sender would thus retransmit only the first packet, bytes 0 to 999.
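A receiver's SACK blocks for the example above can be sketched as coalesced byte ranges (the helper below is illustrative, not the on-the-wire option format):

```python
def sack_blocks(received_ranges):
    """Coalesce received byte ranges into SACK-style blocks; each block is a
    (start, end) pair of sequence numbers, end exclusive (illustrative only)."""
    blocks = []
    for start, end in sorted(received_ranges):
        if blocks and start <= blocks[-1][1]:          # touches previous block
            blocks[-1] = (blocks[-1][0], max(blocks[-1][1], end))
        else:
            blocks.append((start, end))
    return blocks

# The example above: bytes 0-999 are lost, 1,000-9,999 arrive (in two pieces).
# The cumulative ACK stays at 0, but SACK advertises the block (1000, 10000).
blocks = sack_blocks([(1000, 2000), (2000, 10000)])
```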


The SACK option is not mandatory, and it is used only if both parties support it. This is negotiated when the connection is established. SACK uses the optional part of the TCP header (see TCP segment structure for details). The use of SACK is widespread: all popular TCP stacks support it. Selective acknowledgment is also used in SCTP.

Window scaling

For more efficient use of high-bandwidth networks, a larger TCP window size may be used. The TCP window size field controls the flow of data and is limited to between 2 and 65,535 bytes. Since the size field cannot be expanded, a scaling factor is used. The TCP window scale option, as defined in RFC 1323, is an option used to increase the maximum window size from 65,535 bytes to 1 gigabyte. Scaling up to larger window sizes is a part of what is necessary for TCP tuning. The window scale option is used only during the TCP 3-way handshake. The window scale value represents the number of bits by which to left-shift the 16-bit window size field, and can be set from 0 (no shift) to 14.

Many routers and packet firewalls rewrite the window scaling factor during a transmission. This causes the sending and receiving sides to assume different TCP window sizes. The result is non-stable traffic that is very slow. The problem is visible on some sending and receiving sites that sit behind broken routers on the path. TCP window scaling can be a particular problem on Linux and Windows Vista systems.

TCP timestamps

TCP timestamps, defined in RFC 1323, help TCP compute the round-trip time between the sender and receiver. Timestamp options include a 4-byte timestamp value, where the sender inserts the current value of its timestamp clock, and a 4-byte echo reply timestamp value, where the receiver generally inserts the most recent timestamp value that it has received. The sender uses the echo reply timestamp in an acknowledgment to compute the total elapsed time since the acknowledged segment was sent.
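The scale value negotiated in the handshake can be derived from the desired window size; a sketch (the function name is ours, not from any standard library):

```python
def window_scale(desired_window: int) -> int:
    """Smallest RFC 1323 shift count (0-14) that lets the 16-bit window
    field advertise `desired_window` bytes (function name is ours)."""
    shift = 0
    while desired_window >> shift > 65535:
        shift += 1
    if shift > 14:
        raise ValueError("window larger than RFC 1323 permits")
    return shift

# A 1 MiB receive buffer needs a shift of 5: it is advertised as
# 1048576 >> 5 = 32768, and the peer multiplies back by 2**5.
scale = window_scale(1 << 20)
```

With the maximum shift of 14, the advertisable window tops out at 65,535 << 14 bytes, i.e. roughly 1 gigabyte, as stated above.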
TCP timestamps are also used to help in the case where TCP sequence numbers encounter their 2^32 bound and "wrap around" the sequence number space. This scheme is known as Protect Against Wrapped Sequence numbers, or PAWS.

Out-of-band data

It is possible to interrupt or abort the queued stream instead of waiting for the stream to finish. This is done by specifying the data as urgent. This tells the receiving program to process it immediately, along with the rest of the urgent data. When finished, TCP informs the application and resumes the stream queue. An example is when TCP is used for a remote login session: the user can send a keyboard sequence that interrupts or aborts the program at the other end. These signals are most often needed when a program on the remote machine fails to operate correctly. The signals must be sent without waiting for the program to finish its current transfer.

Unfortunately, TCP out-of-band (OOB) data was not designed for the modern Internet. The urgent pointer only alters the processing on the remote host and doesn't expedite any processing on the network itself. When it gets to the remote host, there are two slightly different interpretations of the protocol, which means only single bytes of OOB data are reliable. This is assuming it is reliable at all, as it is one of the least commonly used protocol elements and tends to be poorly implemented.
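The wrap-around situation that PAWS guards against relies on modular comparison of 32-bit sequence numbers; a sketch of the usual comparison rule:

```python
def seq_lt(a: int, b: int) -> bool:
    """True when `a` precedes `b` under 32-bit wrap-around (modular)
    sequence arithmetic: their signed 32-bit difference is negative."""
    return ((a - b) & 0xFFFFFFFF) > 0x7FFFFFFF

# 4294967290 is only 10 bytes "before" 4 once the space wraps at 2**32.
before = seq_lt(4294967290, 4)
```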


Forcing data delivery

Normally, TCP waits for the buffer to exceed the maximum segment size before sending any data. This creates serious delays when the two sides of the connection are exchanging short messages and need to receive the response before continuing. For example, the login sequence at the beginning of a session begins with the short message "Login," and the session cannot make any progress until these five characters have been transmitted and the response has been received. This process can be seriously delayed by TCP's normal behavior.

However, an application can force delivery of octets to the output stream using a push operation provided by TCP to the application layer. This operation also causes TCP to set the PSH flag, or control bit, to ensure that data will be delivered immediately to the application layer by the receiving transport layer. In the most extreme cases, for example when a user expects each keystroke to be echoed by the receiving application, the push operation can be used each time a keystroke occurs. More generally, application programs use this function to force output to be sent after writing a character or line of characters. By forcing the data to be sent immediately, delays and wait time are reduced.

Connection termination

The connection termination phase uses, at most, a four-way handshake, with each side of the connection terminating independently. When an endpoint wishes to stop its half of the connection, it transmits a FIN packet, which the other end acknowledges with an ACK. Therefore, a typical teardown requires a pair of FIN and ACK segments from each TCP endpoint. A connection can be "half-open", in which case one side has terminated its end but the other has not. The side that has terminated can no longer send any data into the connection, but the other side can.
It is also possible to terminate the connection by a 3-way handshake, when host A sends a FIN and host B replies with a FIN & ACK (merely combining two steps into one) and host A replies with an ACK. This is perhaps the most common method. It is possible for both hosts to send FINs simultaneously, in which case both simply ACK; this could be considered a 2-way handshake, since the FIN/ACK sequence is done in parallel for both directions. Some application protocols may violate the OSI model layers, using the TCP open/close handshaking for the application protocol open/close handshaking; these may encounter the RST problem on active close. As an example:

s = connect(remote);
send(s, data);
close(s);

For a usual program flow like the above, a TCP/IP stack as described does not guarantee that all the data will arrive at the other application unless the programmer is sure that the remote side will not send anything.

9.4.4 Vulnerabilities
Vulnerability to Denial of Service
By using a spoofed IP address and repeatedly sending purposely assembled SYN packets, attackers can cause a server to consume large amounts of resources keeping track of the bogus connections. This is known as a SYN flood attack. Proposed solutions to this problem include SYN cookies and cryptographic puzzles.


Connection hijacking
An attacker who is able to eavesdrop on a TCP session and redirect packets can hijack a TCP connection. To do so, the attacker learns the sequence number from the ongoing communication and forges a false packet that looks like the next packet in the stream. Such a simple hijack can result in one packet being erroneously accepted at one end. When the receiving host acknowledges the extra packet to the other side of the connection, synchronization is lost. Hijacking might be combined with ARP or routing attacks that allow taking control of the packet flow, so as to get permanent control of the hijacked TCP connection. Impersonating a different IP address was possible prior to RFC 1948, when the initial sequence number was easily guessable. That allowed an attacker to blindly send a sequence of packets that the receiver would believe came from a different IP address, without the need to deploy ARP or routing attacks: it is enough to ensure that the legitimate host of the impersonated IP address is down, or to bring it to that condition using denial-of-service attacks. This is why the initial sequence number is now chosen at random.

TCP ports
TCP uses the notion of port numbers to identify sending and receiving application end-points on a host, or Internet sockets. Each side of a TCP connection has an associated 16-bit unsigned port number (1-65535) reserved by the sending or receiving application. Arriving TCP data packets are identified as belonging to a specific TCP connection by its sockets, that is, the combination of source host address, source port, destination host address, and destination port. This means that a server computer can provide several clients with several services simultaneously, as long as each client takes care of initiating any simultaneous connections to one destination port from different source ports. Port numbers are categorized into three basic categories: well-known, registered, and dynamic/private.
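The 4-tuple identification described above can be modeled directly: a receiving host demultiplexes arriving segments by the combination of addresses and ports. A toy sketch (all names and addresses are illustrative):

```python
# A TCP connection is identified by the 4-tuple
# (source address, source port, destination address, destination port).
connections = {}

def demux(src_addr, src_port, dst_addr, dst_port, segment):
    # Deliver the segment to the per-connection queue for its 4-tuple.
    key = (src_addr, src_port, dst_addr, dst_port)
    connections.setdefault(key, []).append(segment)

# Two clients on the same host reach the same server port, yet their
# segments are kept apart because the source ports differ.
demux('10.0.0.1', 40001, '10.0.0.2', 80, 'GET /a')
demux('10.0.0.1', 40002, '10.0.0.2', 80, 'GET /b')
demux('10.0.0.1', 40001, '10.0.0.2', 80, 'GET /c')
```

This is why a server can hold many simultaneous connections on one listening port: each client's distinct source port yields a distinct 4-tuple.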
The well-known ports are assigned by the Internet Assigned Numbers Authority (IANA) and are typically used by system-level or root processes. Well-known applications running as servers and passively listening for connections typically use these ports. Some examples include FTP (21), SSH (22), TELNET (23), SMTP (25) and HTTP (80). Registered ports are typically used by end-user applications as ephemeral source ports when contacting servers, but they can also identify named services that have been registered by a third party. Dynamic/private ports can also be used by end-user applications, but less commonly so. Dynamic/private ports do not carry any meaning outside of a particular TCP connection.

Development of TCP
TCP is a complex and evolving protocol. However, while significant enhancements have been made and proposed over the years, its most basic operation has not changed significantly since its first specification, RFC 675 in 1974, and the v4 specification, RFC 793, published in September 1981. RFC 1122, Requirements for Internet Hosts, clarified a number of TCP protocol implementation requirements. RFC 2581, TCP Congestion Control, one of the most important TCP-related RFCs in recent years, describes updated algorithms to be used in order to avoid undue congestion. In 2001, RFC 3168 was written to describe Explicit Congestion Notification (ECN), a congestion-avoidance signalling mechanism. The original TCP congestion-avoidance algorithm was known as "TCP Tahoe", but many alternative algorithms have since been proposed (including TCP Reno, Vegas, FAST TCP, New Reno, and Hybla). Another line of work has looked at how to engineer various extensions into TCP: TCP Interactive (iTCP) allows applications to subscribe to TCP events and respond accordingly,


enabling various functional extensions to TCP from outside the TCP layer, including application-assisted congestion control.

TCP over wireless
TCP has been optimized for wired networks. Any packet loss is considered to be the result of congestion, and the congestion window size is reduced dramatically as a precaution. However, wireless links are known to experience sporadic and usually temporary losses due to fading, shadowing, handoff, and other radio effects that cannot be considered congestion. After the (erroneous) back-off of the congestion window size due to wireless packet loss, there can be a congestion-avoidance phase with a conservative decrease in window size. This causes the radio link to be underutilized. Extensive research has been done on how to combat these harmful effects. Suggested solutions can be categorized as end-to-end solutions (which require modifications at the client and/or server), link-layer solutions (such as RLP in CDMA2000), or proxy-based solutions (which require some changes in the network without modifying the end nodes).

Hardware TCP implementations
One way to overcome the processing power requirements of TCP is to build hardware implementations of it, widely known as TCP Offload Engines (TOE). The main problem with TOEs is that they are hard to integrate into computing systems, requiring extensive changes in the operating system of the computer or device. The first company to develop such a device was Alacritech.

Debugging TCP
A packet sniffer, which intercepts TCP traffic on a network link, can be useful in debugging networks, network stacks, and applications that use TCP, by showing the user what packets are passing through a link. Some networking stacks support the SO_DEBUG socket option, which can be enabled on a socket using setsockopt. That option dumps all the packets, TCP states, and events on that socket, which is helpful in debugging. netstat is another utility that can be used for debugging.
Alternatives to TCP
For many applications TCP is not appropriate. One big problem (at least with normal implementations) is that the application cannot get at the packets coming after a lost packet until the retransmitted copy of the lost packet is received. This causes problems for real-time applications such as streaming multimedia (such as Internet radio), real-time multiplayer games, and voice over IP (VoIP), where it is sometimes more useful to get most of the data in a timely fashion than to get all of the data in order. For both historical and performance reasons, most storage area networks (SANs) prefer to use the Fibre Channel Protocol (FCP) instead of TCP/IP. Also, for embedded systems, network booting, and servers that serve simple requests from huge numbers of clients (e.g. DNS servers), the complexity of TCP can be a problem. Finally, some tricks, such as transmitting data between two hosts that are both behind NAT (using STUN or similar systems), are far simpler without a relatively complex protocol like TCP in the way. Generally, where TCP is unsuitable, the User Datagram Protocol (UDP) is used. This provides the application multiplexing and checksums that TCP does, but does not handle building streams or retransmission, giving the application developer the ability to code these in a way suitable for the situation, or to replace them with other methods like forward error correction or interpolation.


SCTP is another IP protocol that provides reliable, stream-oriented services not dissimilar from TCP. It is newer and considerably more complex than TCP, so it has not yet seen widespread deployment. However, it is especially designed to be used in situations where reliability and near-real-time considerations are important. Venturi Transport Protocol (VTP) is a patented proprietary protocol designed to replace TCP transparently in order to overcome perceived inefficiencies related to wireless data transport. TCP also has some issues in high-bandwidth environments. The TCP congestion-avoidance algorithm works very well for ad-hoc environments where it is not known who will be sending data, but if the environment is predictable, a timing-based protocol such as ATM can avoid the overhead of the retransmits that TCP needs.

9.4.5 Fields used to compute the checksum
TCP checksum using IPv4
The checksum field is the 16-bit one's complement of the one's complement sum of all 16-bit words in the header and text. If a segment contains an odd number of header and text octets to be checksummed, the last octet is padded on the right with zeros to form a 16-bit word for checksum purposes. The pad is not transmitted as part of the segment. While computing the checksum, the checksum field itself is replaced with zeros. In other words, all 16-bit words are summed using one's complement arithmetic (with the checksum field set to zero). The sum is then one's complemented, and this final value is inserted as the checksum field. Algorithmically speaking, this is the same as for IPv6; the difference is in the data used to make the checksum. When computing the checksum, a pseudo-header that mimics the IPv4 header is included, as shown in the table below.

TCP pseudo-header (IPv4)

Bit offset   0-3           4-7        8-15       16-31
0            Source address
32           Destination address
64           Zeros         Protocol              TCP length
96           Source port                         Destination port
128          Sequence number
160          Acknowledgement number
192          Data offset   Reserved   Flags      Window
224          Checksum                            Urgent pointer
256          Options (optional)
The source and destination addresses are those in the IPv4 header. The TCP length field is the length of the TCP header and data.

TCP checksum using IPv6
When TCP runs over IPv6, the method used to compute the checksum is changed, as per RFC 2460: any transport or other upper-layer protocol that includes the addresses from the IP header in its checksum computation must be modified for use over IPv6, to include the 128-bit IPv6 addresses instead of the 32-bit IPv4 addresses. When computing the checksum, a pseudo-header that mimics the IPv6 header is included, as shown in the table below.

TCP pseudo-header (IPv6)

Bit offset   0-7           8-15       16-23      24-31
0
32           Source address
64
96
128
160          Destination address
192
224
256          TCP length
288          Zeros                               Next header
320          Source port                         Destination port
352          Sequence number
384          Acknowledgement number
416          Data offset   Reserved   Flags      Window
448          Checksum                            Urgent pointer
480          Options (optional)

Source address – the one in the IPv6 header.
Destination address – the final destination; if the IPv6 packet doesn't contain a Routing header, that will be the destination address in the IPv6 header; otherwise, at the originating node, it will be the address in the last element of the Routing header, and, at the receiving node, it will be the destination address in the IPv6 header.
TCP length – the length of the TCP header and data.
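The checksum rules above (the same one's-complement algorithm for IPv4 and IPv6, differing only in the pseudo-header that is prepended) can be sketched in a few lines of Python. The helper names are illustrative, not part of any standard library:

```python
import socket
import struct

def ones_complement_checksum(data):
    # 16-bit one's complement of the one's complement sum of all 16-bit
    # words; an odd trailing octet is padded on the right with zeros.
    if len(data) % 2:
        data += b'\x00'
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                   # fold carries back in (end-around carry)
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def tcp_pseudo_header_v6(src, dst, tcp_length):
    # Per the IPv6 layout above: 16-byte source and destination addresses,
    # 32-bit TCP length, 24 bits of zeros, 8-bit next-header value (6 = TCP).
    return (socket.inet_pton(socket.AF_INET6, src)
            + socket.inet_pton(socket.AF_INET6, dst)
            + struct.pack('!I3xB', tcp_length, 6))

# The TCP checksum is computed over pseudo-header + segment, with the
# segment's own checksum field zeroed beforehand.
pseudo = tcp_pseudo_header_v6('2001:db8::1', '2001:db8::2', 20)
```

As a sanity check, the classic worked example from RFC 1071 (data words 0x0001, 0xF203, 0xF4F5, 0xF6F7) yields a checksum of 0x220D with this routine.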

Check Your Progress
1. Write the use of Flow control.


2. List out the segments of TCP.

9.5 User Datagram Protocol
User Datagram Protocol (UDP) is one of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages, sometimes known as datagrams (using datagram sockets), to one another. UDP is sometimes called the Universal Datagram Protocol. The protocol was designed by David P. Reed in 1980. UDP does not guarantee reliability or ordering in the way that TCP does. Datagrams may arrive out of order, appear duplicated, or go missing without notice. Avoiding the overhead of checking whether every packet actually arrived makes UDP faster and more efficient for applications that do not need guaranteed delivery. Time-sensitive applications often use UDP because dropped packets are preferable to delayed packets. UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients. Unlike TCP, UDP supports packet broadcast (sending to all on a local network) and multicasting (sending to all subscribers). Common network applications that use UDP include the Domain Name System (DNS), streaming media applications such as IPTV, Voice over IP (VoIP), the Trivial File Transfer Protocol (TFTP) and online games.

9.5.1 Ports
UDP uses ports to allow application-to-application communication. The port field is a 16-bit value, allowing port numbers to range between 0 and 65,535. Port 0 is reserved, but is a permissible source port value if the sending process does not expect messages in response. Ports 1 through 1023 (hex 0x3FF) are named "well-known" ports; on Unix-derived operating systems, binding to one of these ports requires root access. Ports 1024 through 49,151 (hex 0xBFFF) are registered ports. Ports 49,152 through 65,535 (hex 0xFFFF) are used as temporary ports, primarily by clients when communicating with servers.
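The three port ranges above can be checked mechanically; a small helper (the function name is illustrative):

```python
def udp_port_category(port):
    # Classify a port number per the ranges described above.
    if not 0 <= port <= 65535:
        raise ValueError('port must fit in 16 bits')
    if port == 0:
        return 'reserved'
    if port <= 1023:
        return 'well-known'
    if port <= 49151:
        return 'registered'
    return 'dynamic/private'
```

For example, DNS's port 53 falls in the well-known range, while a client's ephemeral port such as 50000 falls in the dynamic/private range.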
9.5.2 Packet structure
In the Internet protocol suite, UDP provides a very simple interface between a network layer below (e.g., IPv4) and a session layer or application layer above. UDP provides no guarantees to the upper-layer protocol for message delivery, and a UDP sender retains no state on UDP messages once sent (for this reason UDP is sometimes called the Unreliable Datagram Protocol). UDP adds only application multiplexing and checksumming of the header and payload. If any kind of reliability for the transmitted information is needed, it must be implemented in upper layers.

Bits         0-15                16-31
0            Source Port         Destination Port
32           Length              Checksum
64           Data
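The 8-byte header laid out above maps directly onto four 16-bit big-endian fields, which can be packed and unpacked with struct; a sketch (the field values are illustrative):

```python
import struct

def pack_udp_header(src_port, dst_port, length, checksum):
    # Four 16-bit big-endian fields: source port, destination port,
    # length (header + data), checksum.
    return struct.pack('!HHHH', src_port, dst_port, length, checksum)

def unpack_udp_header(data):
    # Returns (source port, destination port, length, checksum).
    return struct.unpack('!HHHH', data[:8])

# A datagram carrying 5 data bytes: length = 8 (header) + 5;
# checksum 0 marks the checksum as unused (permitted over IPv4).
hdr = pack_udp_header(40000, 53, 8 + 5, 0)
```

Round-tripping the header through pack and unpack recovers the same four values.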


The UDP header consists of only 4 fields; the use of two of them (the source port and, over IPv4, the checksum) is optional.
Source port: This field identifies the sending port when meaningful and should be assumed to be the port to reply to if needed. If not used, it should be zero.
Destination port: This field identifies the destination port and is required.
Length: A 16-bit field that specifies the length in bytes of the entire datagram: header and data. The minimum length is 8 bytes, since that is the length of the header. The field size sets a theoretical limit of 65,535 bytes for a single UDP datagram; the practical limit for the data length, imposed by the underlying IPv4 protocol, is 65,507 bytes.
Checksum: The 16-bit checksum field is used for error-checking of the header and data.

With IPv4
When UDP runs over IPv4, the method used to compute the checksum is defined in RFC 768: the checksum is the 16-bit one's complement of the one's complement sum of a pseudo-header of information from the IP header, the UDP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets. In other words, all 16-bit words are summed using one's complement arithmetic (with the checksum field set to zero). The sum is then one's complemented, and this final value is inserted as the checksum field. Algorithmically speaking, this is the same as for IPv6; the difference is in the data used to make the checksum. Included is a pseudo-header that contains information from the IPv4 header:

Bits         0-7                 8-15       16-23      24-31
0            Source address
32           Destination address
64           Zeros               Protocol              UDP length
96           Source Port                               Destination Port
128          Length                                    Checksum
160          Data

The source and destination addresses are those in the IPv4 header. The protocol is that for UDP (see List of IPv4 protocol numbers): 17. The UDP length field is the length of the UDP header and data. If the checksum is calculated to be zero (all 0s), it should be sent as negative zero (all 1s). If a checksum is not used, it should be sent as zero (all 0s), as zero indicates an unused checksum.

With IPv6


When UDP runs over IPv6, the checksum is no longer considered optional, and the method used to compute it is changed, as per RFC 2460: any transport or other upper-layer protocol that includes the addresses from the IP header in its checksum computation must be modified for use over IPv6, to include the 128-bit IPv6 addresses instead of the 32-bit IPv4 addresses. When computing the checksum, a pseudo-header that mimics the IPv6 header is included:

Bits         0-7                 8-15       16-23      24-31
0
32           Source address
64
96
128
160          Destination address
192
224
256          UDP length
288          Zeros                                     Next Header
320          Source Port                               Destination Port
352          Length                                    Checksum
384          Data

The source address is the one in the IPv6 header. The destination address is the final destination; if the IPv6 packet doesn't contain a Routing header, that will be the destination address in the IPv6 header; otherwise, at the originating node, it will be the address in the last element of the Routing header, and, at the receiving node, it will be the destination address in the IPv6 header. The Next Header value is the protocol value for UDP: 17. The UDP length field is the length of the UDP header and data.

Lacking any congestion-avoidance and control mechanisms, network-based mechanisms are required to minimize the potential congestion-collapse effects of uncontrolled, high-rate UDP traffic loads. In other words, since UDP senders cannot detect congestion, network-based elements such as routers using packet queuing and dropping techniques will often be the only tools available to slow down excessive UDP traffic. The Datagram Congestion Control Protocol (DCCP) has been designed as a partial solution to this potential problem, adding end-host TCP-friendly congestion-control behavior to high-rate UDP streams such as streaming media. While the total amount of UDP traffic found on a typical network is often on the order of only a few percent, numerous key applications use UDP, including the Domain Name System (DNS) (since most DNS queries consist of a single request followed by a single reply), the Simple Network Management Protocol (SNMP), the Dynamic Host Configuration Protocol (DHCP) and the Routing Information Protocol (RIP).

Sample code (Python)
The following minimalistic example shows how to use UDP for client/server communication:

The server:

import socket

PORT = 10000
BUFLEN = 512

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
server.bind(('', PORT))
while True:
    (message, address) = server.recvfrom(BUFLEN)
    print 'Received packet from %s:%d' % (address[0], address[1])
    print 'Data: %s' % message

The client (replace "127.0.0.1" by the IP address of the server):

import socket

SERVER_ADDRESS = '127.0.0.1'
SERVER_PORT = 10000

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
for i in range(3):
    print 'Sending packet %d' % i
    message = 'This is packet %d' % i
    client.sendto(message, (SERVER_ADDRESS, SERVER_PORT))
client.close()

Sample code (C++ – Windows-specific)
The following minimalistic example shows how to use UDP for client/server communication:

The server:

#include <winsock.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsaData;
    SOCKET RecvSocket;
    sockaddr_in RecvAddr;
    int Port = 2345;
    char RecvBuf[1024];
    int BufLen = 1024;
    sockaddr_in SenderAddr;
    int SenderAddrSize = sizeof(SenderAddr);

    WSAStartup(MAKEWORD(2, 2), &wsaData);
    RecvSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    RecvAddr.sin_family = AF_INET;
    RecvAddr.sin_port = htons(Port);
    RecvAddr.sin_addr.s_addr = INADDR_ANY;
    bind(RecvSocket, (SOCKADDR *) &RecvAddr, sizeof(RecvAddr));
    recvfrom(RecvSocket, RecvBuf, BufLen, 0, (SOCKADDR *) &SenderAddr, &SenderAddrSize);
    printf("%s\n", RecvBuf);
    closesocket(RecvSocket);
    WSACleanup();
}

The client:

#include <winsock.h>
#include <string.h>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsaData;
    SOCKET SendSocket;
    sockaddr_in RecvAddr;
    int Port = 2345;
    char ip[] = "127.0.0.1";
    char SendBuf[] = "hello";

    WSAStartup(MAKEWORD(2, 2), &wsaData);
    SendSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    RecvAddr.sin_family = AF_INET;
    RecvAddr.sin_port = htons(Port);
    RecvAddr.sin_addr.s_addr = inet_addr(ip);
    sendto(SendSocket, SendBuf, strlen(SendBuf) + 1, 0, (SOCKADDR *) &RecvAddr, sizeof(RecvAddr));
    closesocket(SendSocket);
    WSACleanup();
}

Voice and Video Traffic
Voice and video traffic is generally transmitted using UDP. Real-time video and audio streaming protocols are designed to handle occasional lost packets, so only slight degradation in quality (if any) occurs, rather than large delays as lost packets are retransmitted. Because both TCP and UDP run over the same network, many businesses are finding that a recent increase in UDP traffic from these real-time applications is hindering the performance of applications using TCP, such as point-of-sale, accounting, and database systems. When TCP detects packet loss, it throttles back its bandwidth usage, which allows the UDP applications to consume even more bandwidth, worsening the problem. Since real-time and business applications are both important to businesses, developing quality-of-service solutions is crucial.

9.5.3 Difference between TCP and UDP
TCP ("Transmission Control Protocol") is a connection-oriented protocol, which means that it requires handshaking to set up an end-to-end connection before communication. A connection can be made from client to server, and from then on any data can be sent along that connection.
- Reliable: TCP manages message acknowledgment, retransmission and timeout. Many attempts to reliably deliver the message are made. If it gets lost along the way, the lost part is re-requested. In TCP, there is either no missing data or, in the case of multiple timeouts, the connection is dropped.
- Ordered: If two messages are sent along a connection, one after the other, the first message will reach the receiving application first. When data packets arrive in the wrong order, the TCP layer holds the later data until the earlier data can be rearranged and delivered to the application.
- Heavyweight: TCP requires three packets just to set up a socket, before any actual data can be sent. It handles connections, reliability and congestion control. It is a large transport protocol designed on top of IP.
- Streaming: Data is read as a "stream", with nothing distinguishing where one packet ends and another begins. Packets may be split or merged into bigger or smaller data streams arbitrarily.

UDP is a simpler, message-based, connectionless protocol. In connectionless protocols, no effort is made to set up a dedicated end-to-end connection. Communication is achieved by transmitting information in one direction, from source to destination, without checking whether the destination is still there or is prepared to receive the information. With UDP, messages (packets) cross the network in independent units.
- Unreliable: When a message is sent, it cannot be known whether it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission or timeout.
- Not ordered: If two messages are sent to the same recipient, the order in which they arrive cannot be predicted.
- Lightweight: There is no ordering of messages, no tracking of connections, etc. It is a small transport layer designed on top of IP.
- Datagrams: Packets are sent individually and are guaranteed to be whole if they arrive. Packets have definite boundaries and are never split or merged into data streams.
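The datagram boundaries described above are easy to observe on the loopback interface, where delivery and ordering are reliable in practice. Two separate sends stay two separate datagrams; each recvfrom() returns exactly one whole message, never a merged stream (port and payloads are illustrative):

```python
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(('127.0.0.1', 0))         # any free port on loopback
recv_sock.settimeout(5)
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Each sendto() produces one datagram with its own boundary.
for payload in (b'first', b'second'):
    send_sock.sendto(payload, recv_sock.getsockname())

# Each recvfrom() returns one complete datagram, never a fragment
# of one or a concatenation of two.
messages = [recv_sock.recvfrom(1024)[0] for _ in range(2)]
send_sock.close()
recv_sock.close()
```

Running the same experiment over a TCP socket would typically deliver b'firstsecond' as one stream, which is exactly the streaming-versus-datagram distinction drawn above.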

Check Your Progress
3. What is the use of UDP protocol?
4. Differentiate TCP and UDP.

9.6 Summary
This lesson explained that the transport layer operates only on the end hosts, while the network layer also operates on intermediate network nodes. From this lesson, we understood the services of the transport layer, the primitives of the transport layer, and the transport layer protocols. We then described the transport protocol and related issues, such as segmentation, multiplexing, addressing, error control, and flow control. Finally, we discussed TCP as an example of a popular transport layer standard.

9.7 Keywords
SACK: TCP employs the selective acknowledgment (SACK) option, defined in RFC 2018, which allows the receiver to acknowledge discontiguous blocks of packets that were received correctly.
UDP: User Datagram Protocol (UDP) is one of the core protocols of the Internet protocol suite.

9.9 Check Your Progress
Note: Use the space provided below for your answers. Compare your answers with those given at the end.

1. Write the use of Flow control.


…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………
2. List out the segments of TCP.
…………………………………………………………………………………………………………………
3. What is the use of UDP protocol?
…………………………………………………………………………………………………………………
4. Differentiate TCP and UDP.
…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

Answers to Check Your Progress
1. In computer networking, flow control is the process of managing the rate of data transmission between two nodes to prevent a fast sender from overrunning a slow receiver. This should be distinguished from congestion control, which is used for controlling the flow of data when congestion has actually occurred.
2. A TCP segment consists of two sections:
- header
- data
3. User Datagram Protocol (UDP) is one of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages, sometimes known as datagrams (using datagram sockets), to one another. UDP is sometimes called the Universal Datagram Protocol.
4. TCP ("Transmission Control Protocol") is a connection-oriented protocol, which means that it requires handshaking to set up an end-to-end connection before communication. A connection can be made from client to server, and from then on any data can be sent along that connection. TCP is:
- Reliable
- Ordered
- Heavyweight
- Streaming
UDP is a simpler, message-based, connectionless protocol. In connectionless protocols, no effort is made to set up a dedicated end-to-end connection. UDP is:
- Unreliable
- Not ordered
- Lightweight
- Datagram-oriented

9.10 Further Reading
1. "Communication Networks – Fundamental Concepts and Key Architectures", Leon-Garcia and Widjaja.

2. William Stallings, "Data and Computer Communications", 5th Edition, Pearson Education, 1997.


UNIT – 10 APPLICATION SERVICES
10.0 Introduction
10.1 Objectives
10.2 Definition
10.3 Applications
10.3.1 Application Services
10.3.2 Reliable Transfer
10.3.3 Remote Operations
10.4 Sessions and Presentation Aspects
10.4.1 Session Services
10.4.2 Session Protocol
10.4.3 Synchronization
10.4.4 The Presentation Layer
10.4.5 Presentation Services
10.5 DNS
10.5.1 Uses
10.5.2 History
10.5.3 Structure
10.5.4 Address resolution mechanism
10.5.5 Protocol Details
10.6 Summary
10.7 Keywords
10.8 Exercise and Questions
10.9 Check Your Progress
10.10 Further Reading

10.0 Introduction
This lesson describes the application layer, which comprises a variety of standards, each providing a set of useful services for the benefit of end-user applications. Only those services which can be provided in a system-independent fashion are subject to standardization. These include virtual terminal handling, message handling, file transfer, job transfer, and many others. Describing all existing application layer standards is beyond the scope of this lesson, as they are numerous and detailed. The aim is to illustrate the flavor of the services provided rather than to give an exhaustive coverage of the standards.

10.1 Objectives
After studying this lesson you should be able to:
- Understand the applications of the application layer
- Describe the session and presentation aspects
- Explain the Domain Name System (DNS)

10.2 Definition
The session layer provides a structured means for data exchange between user processes (or applications) on communicating hosts. This layer uses the term session instead of connection to signify that communication is studied from an application rather than a host point of view.


10.3 Applications
10.3.1 Application Services
Depending on their scope of use, application services may be divided into two broad categories:
1. General, low-level services which are used by most applications. This group includes three sets of services: association control, reliable transfer, and remote operations.
2. Specific, high-level services which are designed to support task-oriented application requirements. Examples include virtual terminal handling, message handling, and file transfer. These directly utilize the general application services.
User applications should make as much use of the above services as possible so that they can benefit from the standardization. A user application can be imagined as consisting of two parts: (i) a standardized part, called the Application Entity (AE), which is immersed in the application layer and is implemented in terms of the application layer services, and (ii) a non-standard part, called the Application Process (AP), which remains outside the scope of the application layer. Typically, the application entity deals with inter-application communication, and the application process provides the user front-end and interfaces to the local environment.

Figure: User applications in relation to the application layer.

The above figure illustrates a further point. Unlike earlier layers, the application layer does not provide connections. This should not come as a surprise, because there are no further layers above the application layer that would otherwise use such a connection. Instead, two applications use an association between their application entities for the exchange of information over the presentation connection. An association serves as an agreement so that the two parties are aware of each other's expectations.

Application Entity
As mentioned earlier, an application entity accounts for that part of a user application which falls within the scope of the application layer. An application entity consists of one or more application service elements and a control function. An Application Service Element (ASE) represents an application service and its associated standard protocol. A Control Function (CF) regulates the activities of an application entity. This includes managing the ASEs and the association with the peer application entity. Peer application entities must share a common application context by ensuring that either entity uses exactly the same set of ASEs and uses them in the same fashion. There are two classes of ASEs:
- A Common Application Service Element (CASE) denotes a general application service.
- A Specific Application Service Element (SASE) denotes a specific application service.

FOR MORE DETAILS VISIT US ON WWW.IMTSINSTITUTE.COM OR CALL ON +91-9999554621


COMPUTER COMMUNICATION & NETWORKS


Makeup of an application entity.

Common Application Service Elements

This section describes three CASEs used by many other ASEs.

Association Control

The Association Control Service Element (ACSE) is used by every application entity. It provides a set of functions for managing the association between peer application entities. Since managing an association is independent of the activities the association is used for, ACSE serves as an appropriate and highly useful standard. An association is established using the A-ASSOCIATE service primitive. This maps to the P-CONNECT primitive and hence also establishes a corresponding presentation connection. The application context and the abstract syntaxes are also conveyed during this process. The A-RELEASE primitive maps to P-RELEASE and handles the orderly release of an association, as well as the release of the corresponding presentation connection after ensuring that all data in transit has been delivered. Error-initiated releases are managed by A-ABORT and A-P-ABORT which, respectively, map to and are similar to P-U-ABORT and P-P-ABORT. The ACSE service primitives are implemented as Application Protocol Data Units (APDUs). Each of these represents a unique application-wide type and is tagged accordingly. The ISO 8649 and CCITT X.217 standards describe the ACSE service. The ACSE protocol is described in the ISO 8650 and CCITT X.227 standards.

10.3.2 Reliable Transfer

The Reliable Transfer Service Element (RTSE) hides much of the underlying complexity of the session dialogue management services by providing high-level error handling capabilities to other ASEs for data transfer. RTSE segments a data transfer APDU into smaller PDUs, each of which is marked by a confirmed minor synchronization point. The transfer of an APDU is managed as one session activity. RTSE uses the ACSE to establish an association: its RT-OPEN service primitive maps to the A-ASSOCIATE primitive. RT-TRANSFER is a confirmed primitive and manages the transfer of an APDU as segments.
It maps to a sequence of P-ACTIVITY, P-DATA, and P-SYNC-MINOR primitives.

10.3.3 Remote Operations

The Remote Operations Service Element (ROSE) serves the needs of distributed applications in invoking remote operations. An example is an application which requires access to a remote database. ROSE enables a requester AE to submit an operation to a replier AE; it then waits for a response and finally delivers the response to the application. The response may indicate the result of successful completion of the operation, an error condition, or complete rejection of the operation. ROSE supports two modes of operation: (i) synchronous, whereby the requester AE waits for a result before submitting the next operation, and (ii) asynchronous, whereby the requester may submit further operations before waiting for earlier ones to be completed.


ROSE provides a remote operation notation which other ASEs may use to define their operation interface. The ROSE protocol specifies how its four APDUs are transferred using the presentation service (which maps them to P-DATA primitives) or the RTSE service (which maps them to RT-TRANSFER primitives). The ISO 9072.1 and CCITT X.219 standards describe the ROSE service. The ROSE protocol is described in the ISO 9072.2 and CCITT X.229 standards.

Specific Application Service Elements

This section describes three widely-used SASEs.

Virtual Terminal

There are many different makes and types of character-based terminals in use throughout the world. Very few of them use the same set of commands to control the display or obtain input from keyboards or other input devices. Because of these incompatibilities, terminal dependency has been a common problem in applications. The aim of the Virtual Terminal (VT) standards is to facilitate terminal independence by providing a model for connecting applications and terminals which hides the device-specific information from applications. VT consists of two standards: ISO 9040 describes the VT services, and ISO 9041 describes the VT protocol. VT employs a model in which terminal access is provided through a Conceptual Communication Area (CCA). The CCA provides data abstractions for the terminal screen, keyboard, etc., in the form of objects, of which there are three types:

 Display Object. All terminal data is routed through a display object. The display object reflects the state of the terminal display and/or its related input devices.
 Device Object. The device object specifies the physical characteristics of a device. Naturally, the information provided by a device object is device dependent and outside the scope of the standard.
 Control Object. A control object manages a specific VT function. There are generally many control objects, responsible for functions such as interrupts, character echoing, and field definition.

Using the device objects, the display and control objects are mapped to the actual terminal device. VT maintains a copy of CCA at both the terminal and the application end, and ensures that these two copies reflect the same picture by exchanging updates between them as they take place. VT supports synchronous (called S-mode) and asynchronous (called A-mode) communication between terminals and applications. In the S-mode the same display object is used for input and output paths. In the A-mode two display objects are employed, one for the input device and one for the output device. The VT service contains a set of facilities for managing the communication process. These are used to establish and terminate VT associations, negotiate VT functional units, transfer data, exchange synchronization and acknowledgment information, and manage access rights.


CCA and its objects.

Message Handling Systems

Electronic mail (e-mail) represents one of the most successful classes of network applications currently enjoyed by many users. Early e-mail systems were network dependent, and their use was limited to the private networks of individual organizations. The CCITT X.400 and the ISO 10021 series of standards for Message Handling Systems (MHS) have paved the way for standardized and widely-available e-mail services. At the center of MHS is a Message Transfer System (MTS) which handles the delivery of messages. The system consists of the following components:

 A Message Transfer Agent (MTA) is responsible for the routing of complete e-mail messages (called envelopes) through the MTS. MTAs handle envelopes in a store-and-forward fashion.
 A User Agent (UA) manages a user’s mailbox. It enables the user to create, submit, and receive messages. The UA may serve an application or provide a user interface for direct interaction. UAs typically run on multi-user systems (e.g., mainframes).
 A Message Store (MS) acts on behalf of a UA running on a system which may not be available on a continuous basis (e.g., personal computers). MSs are typically used within a LAN environment, serving a collection of personal computers.

MHS architecture.

UAs and MSs make it possible for users to receive messages when they are not personally present, even while their terminal or personal computer is switched off. They simply store the messages and notify the user at the earliest opportunity. Each user has their own UA. Furthermore, each user is identified by a unique address which is structured in a hierarchical fashion (similar to a postal address). The address structure reflects the division of MTAs into domains. Domains exist at various levels of abstraction: country, organization, unit, etc. An envelope consists of contents and addressing information. The contents consist of two parts: heading and body. A heading is comprised of fields such as recipients’ addresses, addresses to which the message should be copied, the originator’s address, subject, etc. Some of the heading information is provided by the user (e.g., recipients’ addresses and subject); other fields are automatically inserted by the UA (e.g., date and the originator’s address). The body contains the message information itself. It may consist of more than one part, each of which may be of a different type (e.g., text, digitized voice, digitized image). An envelope is constructed by a UA by deriving envelope addressing information from the heading and adding it to the contents. MTAs only deal with envelopes and their addressing. They have no interest in the contents of an envelope. Each receiving MTA looks at the addressing of an envelope, records on it the last leg of the route so far, time stamps it, and hands it over to the next MTA. The envelope therefore bears a complete trace of its route through the network of MTAs.

Envelope structure.
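The envelope/contents split described above can be sketched in a few lines. This is a hypothetical illustration only (the dictionary layout, addresses, and MTA name are all invented for this example, not part of the X.400 standard):

```python
# Hypothetical sketch of how a UA might assemble an envelope from a heading.
heading = {
    "originator": "alice@org-a.example",
    "recipients": ["bob@org-b.example"],
    "subject": "Quarterly report",
}
body = ["The report is attached.", "[digitized image part]"]

# The UA derives the envelope addressing from the heading; MTAs read only
# this addressing and never look inside the contents.
envelope = {
    "to": list(heading["recipients"]),
    "from": heading["originator"],
    "trace": [],  # each receiving MTA appends a time-stamped trace entry
    "contents": {"heading": heading, "body": body},
}

# A receiving MTA records the last leg of the route plus a timestamp:
envelope["trace"].append(("mta.org-a.example", "1995-01-01T10:00:00Z"))
```

After passing through several MTAs, the `trace` list would hold one entry per hop, mirroring the route trace carried by a real envelope.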

As would be expected, the MHS service is defined by a set of service primitives.


MHS service primitives.

MHS uses four protocols (P1, P2, P3, and P7) to provide two types of service:
 The Message Transfer (MT) service supports the handling of envelopes. This service operates between MTAs (P1 protocol), between UAs and MTAs/MSs (P3 protocol), and between UAs and MSs (P7 protocol).
 The Inter-Personal Messaging (IPM) service supports the handling of the contents of envelopes. This service operates between UAs (P2 protocol). IPM depends on the MT service for its operation.
These services are organized as service groups, each of which contains related services.

10.4 Sessions and Presentation Aspects

This lesson describes the session layer of the OSI model. The session layer provides a structured means for data exchange between user processes (or applications) on communicating hosts. This layer uses the term session instead of connection to signify the point that communication is studied from an application rather than a host point of view. More specifically, a session imposes a set of rules on the way applications should communicate. Also of interest are: how a session is negotiated between two applications, the synchronization and control of message exchanges between applications (e.g., how they should take turns), the context of messages (e.g., whether they relate to records from a database or keystrokes on a terminal), dealing with transport failures, and the bracketing of messages as required by some applications.

10.4.1 Session Services

Addresses refer to Session Service Access Points (SSAPs); these are typically the same as their corresponding transport addresses. Quality Of Service (QOS) denotes a set of parameters which collectively describe the quality of the session service requested by the user. These are essentially the same as those used for the transport layer, with the addition of a few new ones


(e.g., concatenation of service requests). Result denotes the acceptance or rejection of the connection. Requirements are used for the negotiation and selection of functional units that are to be effective for the duration of the session. Serial number denotes the initial serial number used for synchronization purposes. Token refers to the side to which the tokens are initially assigned. User data denotes actual user data provided by the service user for transfer by the service provider.

Session service user A first requests a (half-duplex) connection, which is indicated to session service user B by the service provider. B responds to the request, and the service provider confirms with A. Two normal data transfers from A to B follow, with a minor synchronization cycle in between. B then asks A for the data token, which A hands over to B. B sends some data to A, and then asks for the session to be aborted.

Sample scenario of session services.

Session Layer Role

The exact role of the session layer in relation to other layers is worth some consideration. Although the session layer is positioned below and receives its requests from the presentation layer, its primary role is to serve the application layer by allowing applications to converse in a structured manner. The presentation layer is completely transparent to this process. The interface between the session layer and the transport layer is very simple. It provides for basic opening and closing of transport connections (a session connection directly maps to a transport connection) and reliable transfer of data over the connections. This simple interface is enriched by the session layer into a diverse set of services for the ultimate use of the application layer.


Role of the session layer.

Functional Units

It is generally rare for an application to require the use of all session services. Often a relatively small subset will suffice. To facilitate this, the session layer services are divided into 13 functional units. Each functional unit is a logical grouping of related session services. Functional units are negotiated and selected during connection establishment using the requirements parameter in the connection request. The functional units selected determine what services will be available for the duration of the session. Kernel represents the most basic set of services that applications could live with. All session layer implementations must provide this minimal subset.

10.4.2 Session Protocol

As with the transport protocol, the session protocol is essentially connection oriented. It consists of three phases: a connection establishment phase, a data transfer phase, and a connection release phase. The data transfer phase is by far the most complex of the three, accounting for most of the service primitives. Below, we will look at various protocol-related concepts which primarily deal with the data transfer phase.

Tokens

Tokens provide a mechanism for limiting the use of certain session services to one of the two session users at a time. Four tokens are provided:

 Data Token. This is used for half-duplex connections. The service user possessing the token has the exclusive right to issue S-DATA requests. Data exchanges using S-EXPEDITED-DATA and S-TYPED-DATA requests are not affected by this token. This token is irrelevant to and unavailable in full-duplex connections.
 Release Token. This is used for connections which have successfully negotiated the use of the Negotiated Release functional unit. The service user possessing the token has the exclusive right to issue an S-RELEASE request. Disconnections using S-U-ABORT requests are not affected by this token.
 Sync-Minor Token. This is used for connections which have successfully negotiated the use of the Minor Synchronize functional unit. The service user possessing the token has the exclusive right to issue S-SYNC-MINOR requests. This token is irrelevant and unavailable when the Symmetric Synchronize functional unit is being used instead.
 Sync-Major/Activity Token. This is used for connections which have successfully negotiated the use of the Major Synchronize or the Activity Management functional unit. The service user possessing the token has the exclusive right to issue S-SYNC-MAJOR and S-ACTIVITY requests.
Token distribution is managed by three service primitives. S-TOKEN-PLEASE is used by a service user to request the possession of one or more


tokens from the other user. S-TOKEN-GIVE is used by the possessor of the token(s) to forward them to the other user. Finally, S-CONTROL-GIVE enables a service user to forward all its tokens to the other user.

Activities and Dialogue Units

An activity represents a convenient way of referring to a set of related tasks as a single entity. It has a clear beginning and a clear end, both of which are marked by major synchronization points. An activity consists of one or more atomic tasks, called dialogue units. Like an activity, the beginning and end of each dialogue unit is marked by a major synchronization point. Dialogue units (and therefore activities) can be interrupted and resumed later. A transaction issued by a banking application to an account database is an example of an activity. It may consist of one or more records, each of which represents a dialogue unit.

A transaction ‘activity’.

The activity service is based on seven service primitives for managing activities. An S-ACTIVITY-START request is issued by one service user to another to indicate the commencement of an activity. It causes the synchronization serial numbers to be set to 1. Each activity is identified by a user-specified identifier. An activity is completed by the service user issuing an S-ACTIVITY-END request, subject to confirmation. Both the activity start and end primitives involve implicit major synchronization points. A service user may interrupt an activity by issuing an S-ACTIVITY-INTERRUPT request, subject to confirmation. Data in transit may be lost, but the activity can later be resumed by issuing an S-ACTIVITY-RESUME request and specifying the identifier of the interrupted activity. A current activity can be discarded by issuing an S-ACTIVITY-DISCARD request, subject to confirmation. The rest of an activity is occupied by data transfer (using mostly S-DATA) and synchronization, which is discussed next.

Check Your Progress
1. What are the categories of the application services?
2. Define Control Object.
3. What is called a Message Store?
4. Define Activity.

The activity service is based on seven service primitives for managing activities. An S-ACTIVITYSTART request is issued by one service user to another to indicate the commencement of an activity. It causes the synchronization serial numbers to be set to 1. Each activity is identified by a user-specified identifier. An activity is completed by the service user issuing an S-ACTIVITY-END request, subject to confirmation. Both the activity start and end primitives involve implicit major synchronization points. A service user may interrupt an activity by issuing an S-INTERRUPT-ACTIVITY request, subject to confirmation. Data in transit may be lost, but the activity can be later resumed by issuing an SACTIVITY-RESUME request and specifying the identifier of the interrupted activity. A current activity can be discarded by issuing an A-ACTIVITY-DISCARD request, subject to confirmation. The rest of an activity is occupied by data transfer (using mostly S-DATA) and synchronization, which is discussed next. Check Your Progress 1. What are the categories of the application services? 2. Define Control Object. 3. What is called Message Store? 4. Define Activity.

10.4.3 Synchronization

The synchronization service of the session layer is based upon the use of synchronization points. These are markers that are inserted in the data flow to coordinate the exchange of data between applications. There are two types of synchronization points:

 Major Synchronization Points. These are used to delimit dialogue units, and hence activities. This service is supported by the S-SYNC-MAJOR primitive. When this request is issued by a service user, no further data exchange can take place until it is confirmed by the receiving user. The primitive takes two parameters: a serial number and an arbitrary block of user data.
 Minor Synchronization Points. These can be issued at arbitrary points in time, and are used for coordinating data exchange within dialogue units. This service is supported by the S-SYNC-MINOR primitive. The primitive takes one parameter (type) in addition to the two used by S-SYNC-MAJOR; it determines whether the request is to be confirmed.

Minor synchronization points.

All synchronization points are serially numbered for ease of reference. The serial numbers are managed by the service provider. For minor synchronization, when the functional unit Symmetric Synchronize is selected instead of Minor Synchronize at connection time, two serial numbers are employed, one for each data flow direction.

Error Reporting and Resynchronization

The error reporting service is based on two primitives. S-U-EXCEPTION is issued by a service user to report an error condition to the other user. Error conditions detected by the service provider are reported to service users using the S-P-EXCEPTION primitive. All data in transit is lost as a consequence. Also, unless the error is handled by the service users, no further activity is permitted. Depending on the nature of the error, it may be handled in a number of ways: issuing an S-U-ABORT, S-ACTIVITY-INTERRUPT, or S-ACTIVITY-DISCARD request, or re-synchronizing. Resynchronization is a service used for recovering from errors as well as for resolving disagreements between service users. It allows the service users to agree on a synchronization point, and start from that point. All data in transit is lost as a result. This service is supported by the S-RESYNCHRONIZE primitive, which may be issued by either user and requires confirmation. It takes four parameters: type, serial number (for a synchronization point), tokens (for distribution of tokens after resynchronization), and user data (an unlimited block). The type parameter may be one of:

 Abandon. Aborts the current dialogue by re-synchronizing to a new synchronization point with a serial number greater than all previous ones.
 Restart. Re-attempts a dialogue by retreating to an earlier synchronization point, provided it is no earlier than the last confirmed major synchronization point.
 Reset. Aborts the current dialogue and sets the synchronization serial number to a new value subject to negotiation.
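The effect of the three resynchronization types on the serial number can be captured in a small rule. The function below is an illustrative sketch only (its name and representation are invented, not part of the OSI session protocol):

```python
# Illustrative sketch of how the three resynchronization types above affect
# the synchronization-point serial number.
def resync_serial(kind, current, last_confirmed_major, requested=None):
    if kind == "abandon":
        # new point with a serial number greater than all previous ones
        return current + 1
    if kind == "restart":
        # retreat, but no earlier than the last confirmed major sync point
        if requested is None or requested < last_confirmed_major:
            raise ValueError("restart point too early")
        return requested
    if kind == "reset":
        # new value, subject to negotiation between the service users
        return requested
    raise ValueError("unknown resynchronization type")

resync_serial("abandon", 10, 4)               # 11
resync_serial("restart", 10, 4, requested=6)  # 6
```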

SPDUs

Session layer messages are exchanged via the transport layer using Session Protocol Data Units (SPDUs). Most primitives are implemented as one or two SPDUs (the additional SPDU is used where acknowledgment is required, i.e., in confirmed primitives). Some SPDUs are individually mapped onto TSDUs (e.g., the Connect and Disconnect SPDUs). Others are always


concatenated with Please-Token and Give-Token SPDUs and then mapped onto TSDUs (e.g., all Activity SPDUs). The general structure of an SPDU is shown in Figure 6.69. The exact parameters depend on the type of SPDU. As indicated in the figure, parameters may be grouped together. Furthermore, parameter groups may contain subgroups.

10.4.4 The Presentation Layer

Applications use a variety of forms of data, ranging in complexity from very simple (e.g., textual) to elaborate and complex (e.g., nested data structures). Communication between applications involves the exchange of such data. However, all forms of data have inherent programming language and machine dependencies, which means that unless application data is accurately converted to a format acceptable by the peer application, the meaning of the data will be lost during transmission. The role of the presentation layer is to facilitate semantic-preserving data exchange between two peer applications. The presentation layer achieves this in two stages: (i) by having the two peer applications adopt a common high-level syntax for data definition, and (ii) by negotiating a transfer syntax for the sole purpose of transmission, which both applications can convert to and from. We will first look at the notion of abstract data syntax, and then describe presentation service primitives and functional units. ASN.1 will be presented as a standard abstract syntax definition notation for the use of applications, together with BER for defining data at the binary level. Use of ASN.1 and BER will be illustrated using some simple examples. Next, we will discuss the presentation protocol, and conclude by listing a number of presentation standards.
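As a taste of what BER looks like at the binary level, the sketch below encodes an ASN.1 INTEGER as a tag-length-value triple. It is a deliberately simplified illustration (short-form lengths and non-negative values only), not a complete BER encoder:

```python
def ber_integer(value: int) -> bytes:
    """Simplified BER encoding of a non-negative INTEGER (universal tag 0x02).

    Emits tag, short-form length, then the contents octets as a big-endian
    two's-complement number. A full encoder would also handle negative
    values and long-form lengths.
    """
    body = value.to_bytes(value.bit_length() // 8 + 1, "big", signed=True)
    return bytes([0x02, len(body)]) + body

ber_integer(5)    # b'\x02\x01\x05'
ber_integer(200)  # b'\x02\x02\x00\xc8' (leading 0x00 keeps the value positive)
```

Note how 200 needs two contents octets: its high bit is set, so a leading zero octet is required to distinguish it from a negative number under two's complement.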

10.4.5 Presentation Services

The notion of syntax is central to understanding the presentation services. This is discussed below first, followed by a description of presentation service primitives and service functional units.

Syntax

Data is structured according to a set of rules, called syntax. Depending on their level of abstraction, syntax rules may be classified into two categories: abstract and concrete. Abstract syntax is a high-level specification of data which makes no assumptions about its machine representation. Abstract syntax describes the essential characteristics of data in general terms. Concrete syntax, on the other hand, is a low-level (bit-level) specification of data according to some specific machine representation. In general, a concrete syntax is derived from an abstract syntax by applying a set of encoding rules to the latter. It follows that there is a one-to-many mapping between the abstract syntax of some data and its concrete syntaxes; that is, the same data can be represented in many different formats. For example, consider the following statement: an RGB color is defined as an object of three components (red, green, and blue), each of which is an integer quantity. This is an example of an abstract syntax specification. It specifies the essential characteristics of an RGB color without saying anything about its representation in a computer. Format 1 uses a 16-bit signed integer quantity for representing each primary color, in a machine where bits are ordered left to right. Format 2 is identical to format 1, except that bits are ordered right to left. Formats 3 and 4 both use 8-bit unsigned integers with bits ordered left to right. However, in format 4, the primary colors appear in reverse order.

Four alternate representation formats for an RGB color.
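The alternative formats can be made concrete with a short sketch. Byte order stands in here for the bit-ordering distinction in the text (formats 1 and 2 differ only at the bit level, which `struct` cannot express), and the color value is invented for illustration:

```python
import struct

rgb = (200, 100, 50)  # abstract value: red, green, blue as integers

# Format 1: each primary color as a 16-bit signed integer (big-endian here).
fmt1 = struct.pack(">3h", *rgb)

# Format 3: each primary color as an 8-bit unsigned integer.
fmt3 = struct.pack("3B", *rgb)

# Format 4: as format 3, but with the primary colors in reverse order.
fmt4 = struct.pack("3B", *reversed(rgb))

fmt1.hex()  # '00c800640032'
fmt3.hex()  # 'c86432'
fmt4.hex()  # '3264c8'
```

The same abstract value thus yields three different octet strings: this is exactly why two applications must agree on one transfer syntax before exchanging data.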


Concrete syntax is essential wherever data is to be digitally stored or communicated. In general, each system has its own concrete syntax, which may be different from the concrete syntax of other systems. Two communicating applications running on two such systems would have to convert their data into a common concrete syntax to facilitate transmission. We will use the terms application concrete syntax and transfer concrete syntax to distinguish between these two syntaxes. To preserve the characteristics of the data which is subject to transformation, the two applications should standardize on the same abstract syntax notation.


Presentation service primitives.

Service Primitives

As mentioned in the previous lesson, the presentation layer is transparent to the exchanges taking place between the application and session layers for structuring application communication. Consequently, each session layer service primitive is represented by a corresponding presentation layer primitive. The presentation layer adds functionality to four of the session service primitives and provides a new primitive. The remaining service primitives are identical in functionality to their session layer counterparts. A choice of syntaxes is made through negotiation between peer applications at connection time. Applications negotiate and agree on the abstract syntaxes for all data which is subject to transfer. For each abstract syntax S, they should also agree on a corresponding transfer syntax T. The combination of S and T is called a presentation context. In most cases, more than one presentation context is used for the same presentation connection. The set of all such contexts is called the defined context set. Applications also negotiate a default context, which is used when the defined context set is empty. A presentation connection is directly mapped onto a session connection. Most of the presentation service primitives are mapped directly and unchanged onto corresponding session primitives. Conversion between application concrete syntaxes and the transfer concrete syntax is managed by the presentation layer. All transferred data is tagged by the transmitting end to enable the receiving end to determine its presentation context. This enables the latter to correctly map the bit strings into meaningful data for use by the receiving application.
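The negotiation of the defined context set can be sketched as pairing each abstract syntax S with an acceptable transfer syntax T. This is an invented illustration (the syntax names and dictionary representation are hypothetical, not taken from the presentation protocol):

```python
# Hedged sketch of defined-context-set negotiation: for each abstract
# syntax, pick the first proposed transfer syntax the responder supports.
def negotiate_contexts(proposed, responder_transfer_syntaxes):
    defined_context_set = {}
    for abstract_syntax, transfer_choices in proposed.items():
        for transfer_syntax in transfer_choices:
            if transfer_syntax in responder_transfer_syntaxes:
                defined_context_set[abstract_syntax] = transfer_syntax
                break  # one transfer syntax T per abstract syntax S
    return defined_context_set

negotiate_contexts(
    {"PersonnelRecord": ["BER", "PER"], "Invoice": ["XER"]},
    {"BER"},
)
# {'PersonnelRecord': 'BER'}  ('Invoice' is dropped: no acceptable T)
```

An abstract syntax for which no common transfer syntax exists simply never enters the defined context set, so data of that type cannot be exchanged on the connection.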


Sample scenario of presentation services.

Presentation service functional units.

Functional Units

The above figure summarizes the available presentation functional units, which include all of the session functional units. Kernel represents a mandatory subset of services which all presentation layer implementations must provide.

10.5 DNS

The Domain Name System (DNS) associates various information with domain names; most importantly, it serves as the "phone book" for the Internet by translating human-readable


computer hostnames into IP addresses (e.g., 208.77.188.166), which networking equipment needs to deliver information. The DNS also stores other information, such as the list of mail servers that accept email for a given domain. By providing a worldwide keyword-based redirection service, the Domain Name System is an essential component of contemporary Internet use.

10.5.1 Uses

Above all, the DNS makes it possible to assign domain names to organizations independent of the physical routing hierarchy represented by the numerical IP address. Because of this, hyperlinks and Internet contact information can remain the same, whatever the current IP routing arrangements may be, and can take a human-readable form (such as "example.com"). These Internet names are easier to remember than the IP address 208.77.188.166. People take advantage of this when they recite meaningful URLs and e-mail addresses without caring how the machine will actually locate them. The Domain Name System distributes the responsibility for assigning domain names and mapping them to IP networks by allowing an authoritative name server for each domain to keep track of its own changes, avoiding the need for a central register to be continually consulted and updated. Additionally, other arbitrary identifiers such as RFID tags, UPC codes, international characters in email addresses and host names, and a variety of other identifiers could all potentially utilize DNS.

10.5.2 History

The practice of using a name as a more human-legible abstraction of a machine's numerical address on the network predates even TCP/IP; it dates back to the ARPAnet era, when a different system was used. The DNS was invented in 1983, shortly after TCP/IP was deployed. With the older system, each computer on the network retrieved a file called HOSTS.TXT from a computer at SRI (now SRI International). The HOSTS.TXT file mapped numerical addresses to names.
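The flat HOSTS.TXT-style mapping can be sketched in a few lines. The parsing function below is an invented illustration of the idea (the file format shown is the common hosts-file layout; the names and addresses come from the examples in this section):

```python
def parse_hosts(text):
    """Parse hosts-file-style lines ('address name [aliases...]') into a
    name-to-address table. A deliberately minimal sketch of the idea."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        address, *names = line.split()
        for name in names:
            table[name.lower()] = address
    return table

hosts = parse_hosts("""
# sample entries
127.0.0.1       localhost
208.77.188.166  www.example.net example.net
""")
hosts["www.example.net"]  # '208.77.188.166'
```

The scaling problem described next is visible here: whenever an address changes, every host's copy of this table must be edited, which is precisely what DNS was designed to avoid.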
A hosts file still exists on most modern operating systems, either by default or through configuration, and allows users to specify an IP address (e.g., 208.77.188.166) to use for a hostname (e.g., www.example.net) without checking DNS. Systems based on a hosts file have inherent limitations, because of the obvious requirement that every time a given computer's address changed, every computer that seeks to communicate with it would need an update to its hosts file. The growth of networking required a more scalable system that recorded a change in a host's address in one place only. Other hosts would learn about the change dynamically through a notification system, thus completing a globally accessible network of all hosts' names and their associated IP addresses. At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications appear in RFC 882 and RFC 883. In November 1987, the publication of RFC 1034 and RFC 1035 updated the DNS specification and made RFC 882 and RFC 883 obsolete. Several more recent RFCs have proposed various extensions to the core DNS protocols. In 1984, four Berkeley students (Douglas Terry, Mark Painter, David Riggle and Songnian Zhou) wrote the first UNIX implementation, which was maintained by Ralph Campbell thereafter. In 1985, Kevin Dunlap of DEC significantly re-wrote the DNS implementation and renamed it BIND (Berkeley Internet Name Domain, previously: Berkeley Internet Name Daemon). Mike Karels, Phil

FOR MORE DETAILS VISIT US ON WWW.IMTSINSTITUTE.COM OR CALL ON +91-9999554621


COMPUTER COMMUNICATION & NETWORKS

261

Almquist and Paul Vixie have maintained BIND since then. BIND was ported to the Windows NT platform in the early 1990s. Due to BIND's long history of security issues, several alternative nameserver and resolver programs have been written and distributed in recent years. 10.5.3 Structure

Domain names, arranged in a tree, cut into zones, each served by a name server.

The domain name space

The domain name space consists of a tree of domain names. Each node or leaf in the tree has zero or more resource records, which hold information associated with the domain name. The tree sub-divides into zones beginning at the root zone. A DNS zone consists of a collection of connected nodes authoritatively served by an authoritative DNS nameserver. (Note that a single nameserver can host several zones.) When a system administrator wants to let another administrator control a part of the domain name space within the first administrator's zone of authority, control can be delegated to the second administrator. This splits off a part of the old zone into a new zone, which comes under the authority of the second administrator's nameservers. The old zone ceases to be authoritative for the new zone.

Parts of a domain name

A domain name usually consists of two or more parts (technically labels), which are conventionally written separated by dots, such as example.com. The rightmost label conveys the top-level domain (for example, the address www.example.com has the top-level domain com).


Each label to the left specifies a subdivision, or subdomain, of the domain above it. Note: "subdomain" expresses relative dependence, not absolute dependence. For example: example.com is a subdomain of the com domain, and www.example.com is a subdomain of the domain example.com. In theory, this subdivision can go down 127 levels. Each label can contain up to 63 octets. The whole domain name may not exceed a total length of 253 octets. In practice, some domain registries may have shorter limits. A hostname refers to a domain name that has one or more associated IP addresses; i.e., the 'www.example.com' and 'example.com' domains are both hostnames, however, the 'com' domain is not.
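The length limits above can be checked mechanically. The following Python sketch is illustrative (the function name is invented, and only plain ASCII names are considered):

```python
def check_dns_lengths(name: str) -> bool:
    """Check the DNS length limits described above (ASCII names only)."""
    # The whole domain name may not exceed a total length of 253 octets.
    if len(name) > 253:
        return False
    # Each label may contain between 1 and 63 octets.
    return all(1 <= len(label) <= 63 for label in name.split("."))

print(check_dns_lengths("www.example.com"))   # True
print(check_dns_lengths("a" * 64 + ".com"))   # False: 64-octet label
```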

DNS servers

The Domain Name System is maintained by a distributed database system, which uses the client-server model. The nodes of this database are the name servers. Each domain or subdomain has one or more authoritative DNS servers that publish information about that domain and the name servers of any domains subordinate to it. The top of the hierarchy is served by the root nameservers: the servers to query when looking up (resolving) a top-level domain name (TLD).

DNS resolvers

The client-side of the DNS is called a DNS resolver. It is responsible for initiating and sequencing the queries that ultimately lead to a full resolution (translation) of the resource sought, e.g., translation of a domain name into an IP address. A DNS query may be either a recursive query or a non-recursive query. The resolver (or another DNS server acting recursively on behalf of the resolver) negotiates use of recursive service using bits in the query headers.

- A non-recursive query is one in which the DNS server may provide a partial answer to the query (or give an error).
- A recursive query is one where the DNS server will fully answer the query (or give an error).

DNS servers are not required to support recursive queries.
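The "bits in the query headers" mentioned above live in the 16-bit FLAGS field of the DNS message header defined in RFC 1035. A hedged Python sketch of building such a header, setting only the RD ("recursion desired") bit (the query ID below is arbitrary):

```python
import struct

def dns_header(query_id: int, recursion_desired: bool) -> bytes:
    """Build the 12-octet DNS header: ID, FLAGS, QDCOUNT, ANCOUNT,
    NSCOUNT, ARCOUNT, each 16 bits, network byte order (RFC 1035)."""
    flags = 0x0100 if recursion_desired else 0x0000   # RD is bit 8 of FLAGS
    return struct.pack("!HHHHHH", query_id, flags, 1, 0, 0, 0)

print(dns_header(0x1234, True).hex())   # 123401000001000000000000
```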

Resolving usually entails iterating through several name servers to find the needed information. However, some resolvers function simplistically and can communicate only with a single name server. These simple resolvers rely on a recursive query to a recursive name server to perform the work of finding information for them.

10.5.4 Address resolution mechanism

In theory a full host name may have several name segments (e.g. ahost.ofasubnet.ofabiggernet.inadomain.example). In practice, full host names will frequently consist of just three segments (ahost.inadomain.example, and most often www.inadomain.example). For querying purposes, software interprets the name segment by segment, from right to left, using an iterative search procedure. At each step along the way, the program queries a corresponding DNS server to provide a pointer to the next server which it should consult.


A DNS recursor consults three nameservers to resolve the address www.wikipedia.org.

As originally envisaged, the process was as simple as:

- the local system is pre-configured with the known addresses of the root servers in a file of root hints, which need to be updated periodically by the local administrator from a reliable source to be kept up to date with the changes which occur over time.
- query one of the root servers to find the server authoritative for the next level down (so in the case of our simple hostname, a root server would be asked for the address of a server with detailed knowledge of the example top-level domain).
- querying this second server for the address of a DNS server with detailed knowledge of the second-level domain (inadomain.example in our example).
- repeating the previous step to progress down the name, until the final step which would, rather than generating the address of the next DNS server, return the final address sought.
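The steps above can be sketched as an iterative loop over a table of servers. The server names, delegation data and address below are invented purely for illustration:

```python
# Hypothetical delegation data: each "server" maps a name suffix either to a
# referral ("go ask this other server") or to a final address.
ROOT = "root-server"
SERVERS = {
    "root-server":   {"example": ("referral", "tld-server")},
    "tld-server":    {"inadomain.example": ("referral", "domain-server")},
    "domain-server": {"ahost.inadomain.example": ("address", "192.0.2.10")},
}

def resolve(name: str) -> str:
    """Iterate from the root, following referrals until an address is found."""
    server = ROOT
    while True:
        zone_data = SERVERS[server]
        # Match the longest suffix of the name this server knows about.
        for suffix in sorted(zone_data, key=len, reverse=True):
            if name == suffix or name.endswith("." + suffix):
                kind, value = zone_data[suffix]
                if kind == "address":
                    return value
                server = value       # referral: repeat with the next server
                break
        else:
            raise LookupError(name)

print(resolve("ahost.inadomain.example"))   # 192.0.2.10
```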

The mechanism in this simple form has a difficulty: it places a huge operating burden on the root servers, with every search for an address starting by querying one of them. Being as critical as they are to the overall function of the system, such heavy use would create an insurmountable bottleneck for trillions of queries placed every day. The section DNS in practice describes how this is addressed.

Circular dependencies and glue records

Name servers in delegations appear listed by name, rather than by IP address. This means that a resolving name server must issue another DNS request to find out the IP address of the server to which it has been referred. Since this can introduce a circular dependency if the nameserver referred to is under the domain that it is authoritative for, it is occasionally necessary for the nameserver providing the delegation to also provide the IP address of the next nameserver. This record is called a glue record.

Caching and time to live

Because of the huge volume of requests generated by a system like DNS, the designers wished to provide a mechanism to reduce the load on individual DNS servers. To this end, the DNS resolution process allows for caching (i.e. the local recording and subsequent consultation of the results of a DNS query) for a given period of time after a successful answer. How long a resolver caches a DNS response (i.e. how long a DNS response remains valid) is determined by a value called the time to live (TTL). The TTL is set by the administrator of the DNS server handing out the response. The period of validity may vary from just seconds to days or even weeks.
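A cache that honours the TTL can be sketched as follows. This is a minimal illustration, not a real resolver cache; the clock is passed in explicitly so the expiry behaviour is easy to see:

```python
import time

class DnsCache:
    """Minimal TTL-respecting cache sketch."""
    def __init__(self):
        self._entries = {}   # name -> (address, expiry_timestamp)

    def put(self, name, address, ttl, now=None):
        now = time.monotonic() if now is None else now
        self._entries[name] = (address, now + ttl)

    def get(self, name, now=None):
        now = time.monotonic() if now is None else now
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if now >= expiry:            # TTL elapsed: the cached answer is stale
            del self._entries[name]
            return None
        return address

cache = DnsCache()
cache.put("example.com", "208.77.188.166", ttl=300, now=0)
print(cache.get("example.com", now=100))   # 208.77.188.166 (still valid)
print(cache.get("example.com", now=400))   # None (TTL expired)
```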


In the real world

DNS resolving from program to OS-resolver to ISP-resolver to greater system.

Users generally do not communicate directly with a DNS resolver. Instead DNS-resolution takes place transparently in client-applications such as web-browsers, mail-clients, and other Internet applications. When an application makes a request which requires a DNS lookup, such programs send a resolution request to the local DNS resolver in the local operating system, which in turn handles the communications required. The DNS resolver will almost invariably have a cache (see above) containing recent lookups. If the cache can provide the answer to the request, the resolver will return the value in the cache to the program that made the request. If the cache does not contain the answer, the resolver will send the request to one or more designated DNS servers. In the case of most home users, the Internet service provider to which the machine connects will usually supply this DNS server: such a user will either have configured that server's address manually or allowed DHCP to set it; however, where systems administrators have configured systems to use their own DNS servers, their DNS resolvers point to separately maintained nameservers of the organization. In any event, the name server thus queried will follow the process outlined above, until it either successfully finds a result or does not. It then returns its results to the DNS resolver; assuming it has found a result, the resolver duly caches that result for future use, and hands the result back to the software which initiated the request.

Broken resolvers

An additional level of complexity emerges when resolvers violate the rules of the DNS protocol.
A number of large ISPs have configured their DNS servers to violate rules (presumably to allow them to run on less-expensive hardware than a fully-compliant resolver), such as by disobeying TTLs, or by indicating that a domain name does not exist just because one of its name servers does not respond. As a final level of complexity, some applications (such as web-browsers) also have their own DNS cache, in order to reduce the use of the DNS resolver library itself. This practice can add extra difficulty when debugging DNS issues, as it obscures the freshness of data, and/or what data comes from which cache. These caches typically use very short caching times, on the order of one minute. Internet Explorer offers a notable exception: recent versions cache DNS records for half an hour.

Other applications

The system outlined above provides a somewhat simplified scenario. The Domain Name System includes several other functions:


- Hostnames and IP addresses do not necessarily match on a one-to-one basis. Many hostnames may correspond to a single IP address: combined with virtual hosting, this allows a single machine to serve many web sites. Alternatively, a single hostname may correspond to many IP addresses: this can facilitate fault tolerance and load distribution, and also allows a site to move physical location seamlessly.
- There are many uses of DNS besides translating names to IP addresses. For instance, mail transfer agents use DNS to find out where to deliver e-mail for a particular address. The domain-to-mail-exchanger mapping provided by MX records accommodates another layer of fault tolerance and load distribution on top of the name-to-IP-address mapping.

10.5.5 Protocol details

DNS primarily uses UDP on port 53 to serve requests. Almost all DNS queries consist of a single UDP request from the client followed by a single UDP reply from the server. TCP comes into play only when the response data size exceeds 512 bytes, or for such tasks as zone transfer. Some operating systems such as HP-UX are known to have resolver implementations that use TCP for all queries, even when UDP would suffice.

Types of DNS records

When sent over the internet, all records use the common format specified in RFC 1035 shown below.

RR (Resource record) fields

Field      Description                                      Length (octets)
NAME       Name of the node to which this record pertains.  (variable)
TYPE       Type of RR. For example, MX is type 15.          2
CLASS      Class code.                                      2
TTL        Signed time in seconds that RR stays valid.      4
RDLENGTH   Length of RDATA field.                           2
RDATA      Additional RR-specific data.                     (variable)
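The four fixed-size fields in the table (TYPE, CLASS, TTL, RDLENGTH) can be packed and parsed with Python's struct module. The field values below (MX = type 15, class IN = 1, a one-hour TTL) are illustrative:

```python
import struct

# Fixed-length RR fields that follow the NAME: TYPE (2 octets), CLASS (2),
# TTL (4, signed), RDLENGTH (2) -- big-endian "network order" per RFC 1035.
RR_FIXED = struct.Struct("!HHiH")

packed = RR_FIXED.pack(15, 1, 3600, 4)    # TYPE=MX(15), CLASS=IN(1), TTL=3600s
rtype, rclass, ttl, rdlength = RR_FIXED.unpack(packed)
print(rtype, rclass, ttl, rdlength)       # 15 1 3600 4
```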

The type of the record indicates what the format of the data is, and gives a hint of its intended use; for instance, the A record is used to translate from a domain name to an IPv4 address, the NS record lists which name servers can answer lookups on a DNS zone, and the MX record is used to translate from a name in the right-hand side of an e-mail address to the name of a machine able to handle mail for that address. Many more record types exist and can be found in the complete list of DNS record types.

Security issues

DNS was not originally designed with security in mind, and thus has a number of security issues. One class of vulnerabilities is DNS cache poisoning, which tricks a DNS server into believing it has received authentic information when, in reality, it has not. DNS responses are traditionally not cryptographically signed, leading to many attack possibilities; DNSSEC modifies DNS to add support for cryptographically signed responses. There are various extensions to support securing zone transfer information as well. Even with encryption, a DNS server could become compromised by a virus (or for that matter a disgruntled employee) that would cause IP addresses of that server to be redirected to a malicious address with a long TTL. This could have far-reaching impact on potentially millions of Internet users if busy DNS servers cache the bad IP data. This would require manual purging of all affected DNS caches as required by the long TTL (up to 68 years).

Some domain names can spoof other, similar-looking domain names. For example, "paypal.com" and "paypa1.com" are different names, yet users may be unable to tell the difference when the user's typeface (font) does not clearly differentiate the letter l and the number 1. This problem is much more serious in systems that support internationalized domain names, since many characters that are different, from the point of view of ISO 10646, appear identical on typical computer screens. This vulnerability is often exploited in phishing. Techniques such as Forward Confirmed reverse DNS can also be used to help validate DNS results.
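The l-versus-1 confusion described above can be illustrated with a toy "skeleton" comparison. The mapping below is a tiny illustrative sample, nothing like a complete confusables table:

```python
# Map easily-confused characters to a canonical form for comparison.
CONFUSABLE = {"1": "l", "0": "o"}   # illustrative sample only

def skeleton(name: str) -> str:
    """Replace confusable characters so look-alike names compare equal."""
    return "".join(CONFUSABLE.get(c, c) for c in name.lower())

def looks_like(candidate: str, trusted: str) -> bool:
    """True if candidate is a distinct name that visually mimics trusted."""
    return candidate != trusted and skeleton(candidate) == skeleton(trusted)

print(looks_like("paypa1.com", "paypal.com"))    # True: likely spoof
print(looks_like("example.com", "paypal.com"))   # False
```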
Domain Registration

The right to use a domain name is delegated by so-called domain name registrars, which are accredited by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization charged with overseeing the name and number systems of the Internet. In addition to ICANN, each top-level domain (TLD) is maintained and serviced technically by a sponsoring organization, the TLD Registry. The registry is responsible for maintaining the database of names registered within the TLDs they administer. The registry receives registration information from each domain name registrar authorized to assign names in the corresponding TLD and publishes the information using a special service, the WHOIS protocol. Registrars usually charge an annual fee for the service of delegating a domain name to a user and providing a default set of name servers. Often this transaction is termed a sale or lease of the domain name, and the registrant is called an "owner", but no such legal relationship is actually associated with the transaction, only the exclusive right to use the domain name. More correctly, authorized users are known as "registrants" or as "domain holders". ICANN publishes a complete list of TLD registries and domain name registrars in the world. One can obtain information about the registrant of a domain name by looking in the WHOIS database held by many domain registries. For most of the more than 240 country code top-level domains (ccTLDs), the domain registries hold the authoritative WHOIS (registrant, name servers, expiration dates, etc.). For instance, DENIC, the German NIC, holds the authoritative WHOIS for a .DE domain name. Since about 2001, most gTLD registries (.ORG, .BIZ, .INFO) have adopted this so-called "thick" registry approach, i.e. keeping the authoritative WHOIS in the central registries instead of at the registrars. For .COM and .NET domain names, a "thin" registry is used: the domain registry (e.g. VeriSign) holds a basic WHOIS (registrar and name servers, etc.), while the detailed WHOIS (registrant, name servers, expiry dates, etc.) can be found at the registrars. Some domain name registries, also called Network Information Centres (NIC), also function as registrars, and deal directly with end users. But most of the main ones, such as for .COM, .NET, .ORG, .INFO, etc., use a registry-registrar model. There are hundreds of domain name registrars that actually perform the domain name registration with the end user (see lists at ICANN or VeriSign). By using this method of distribution, the registry only has to manage the relationship with the registrar, and the registrar maintains the relationship with the end users, or 'registrants' -- in some cases through additional layers of resellers.

Administrative contact

A registrant usually designates an administrative contact to manage the domain name. In practice, the administrative contact usually has the most immediate power over a domain. Management functions delegated to the administrative contact may include, for example:

- the obligation to conform to the requirements of the domain registry in order to retain the right to use a domain name
- authorization to update the physical address, e-mail address and telephone number etc. in WHOIS

Technical contact

A technical contact manages the name servers of a domain name. The functions of a technical contact include:

- making sure the configuration of the domain name conforms to the requirements of the domain registry
- updating the domain zone
- providing 24×7 functionality of the name servers (which leads to the accessibility of the domain name)

Billing contact

The party whom a domain name registrar invoices.

Name servers

Namely the authoritative name servers that host the domain name zone of a domain name.

Abuse and Regulation

Critics often claim abuse of administrative power over domain names. Particularly noteworthy was the VeriSign Site Finder system, which redirected all unregistered .com and .net domains to a VeriSign webpage. For example, at a public meeting with VeriSign to air technical concerns about SiteFinder, numerous people, active in the IETF and other technical bodies, explained how they were surprised by VeriSign's changing the fundamental behavior of a major component of Internet infrastructure, not having obtained the customary consensus. SiteFinder, at first, assumed every Internet query was for a website, and it monetized queries for incorrect domain names, taking the user to VeriSign's search site. Unfortunately, other applications, such as many implementations of email, treat a lack of response to a domain name query as an indication that the domain does not exist, and that the message can be treated as undeliverable. The original VeriSign implementation broke this assumption for mail, because it would always resolve an erroneous domain name to that of SiteFinder. While VeriSign later changed SiteFinder's behavior with regard to email, there was still widespread protest about VeriSign's action being more in its financial interest than in the interest of the Internet infrastructure component for which VeriSign was the steward. Despite widespread criticism, VeriSign only reluctantly removed it after the Internet Corporation for Assigned Names and Numbers (ICANN) threatened to revoke its contract to administer the root name servers. ICANN published the extensive set of letters exchanged, committee reports, and ICANN decisions.

Truth in Domain Names Act

In the United States, the "Truth in Domain Names Act" (actually the "Anticybersquatting Consumer Protection Act"), in combination with the PROTECT Act, forbids the use of a misleading domain name with the intention of attracting people into viewing a visual depiction of sexually explicit conduct on the Internet.

Check Your Progress

5. What is synchronization? Explain its types.
6. What is the use of DNS?

10.6 Summary

This lesson introduced you to the Application Layer in the OSI reference model. Now, you can define a network and the use of the application layer in the network. You have also learnt about the various protocols used in the network. The application layer provides the services which directly support an application running on a host.

10.7 Keywords

AE : Application Entity (AE), which is immersed in the application layer and is implemented in terms of the application layer services.
AP : Application Process (AP), which remains outside the scope of the application layer.
ASE : An Application Service Element (ASE) represents an application service and its associated standard protocol.
CASE : A Common Application Service Element (CASE) denotes a general application service.

10.9 Check Your Progress. Note: Use the space provided below for your answers. Compare your answers with those given at the end.

1. What are the categories of the application services?


…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

2. Define Control Object.

…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

3. What is called Message Store?

…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

4. Define Activity.

…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

5. What is synchronization? Explain its types.

…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

6. What is the use of DNS?

…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

Answer To Check Your Progress.

1. Depending on their scope of use, application services may be divided into two broad categories:
(i) General, low-level services which are used by most applications. This group includes three sets of services: association control, reliable transfer, and remote operations.
(ii) Specific, high-level services which are designed to support task-oriented application requirements. Examples include: virtual terminal handling, message handling, and file transfer.

2. Control Object. A control object manages a specific VT function. There are generally many control objects responsible for functions such as interrupts, character echoing, and field definition.

3. A Message Store (MS) acts on behalf of a UA running on a system which may not be available on a continuous basis (e.g., personal computers). MSs are typically used within a LAN environment, serving a collection of personal computers.


4. An activity represents a convenient way of referring to a set of related tasks as a single entity. It has a clear beginning and a clear end, both of which are marked by major synchronization points.

5. The synchronization service of the session layer is based upon the use of synchronization points. These are markers that are inserted in the data flow to coordinate the exchange of data between applications. There are two types of synchronization points:
- Major Synchronization Points.
- Minor Synchronization Points.

6. The Domain Name System distributes the responsibility for assigning domain names and mapping them to IP networks by allowing an authoritative name server for each domain to keep track of its own changes, avoiding the need for a central register to be continually consulted and updated.

10.10 Further Reading

1. "Communication Networks: Fundamental Concepts and Key Architectures", Leon-Garcia and Widjaja.
2. "Communication Networks" - Sharam Hekmat, PragSoft Corporation, www.pragsoft.com.


UNIT-11 TELNET

Structure
11.0 Introduction
11.1 Objectives
11.2 Definition
11.3 Telnet
11.3.1 Security
11.3.2 TELNET 5250
11.3.3 TELNET 3270
11.4 R Login
11.5 FTP
11.5.1 Connection Methods
11.5.2 Data Format
11.6 SMTP
11.6.1 Description
11.6.2 Outgoing mail SMTP server
11.6.3 Security and Spamming
11.6.4 Extended SMTP
11.7 Summary
11.8 Keywords
11.9 Exercise and Questions
11.10 Check Your Progress
11.11 Further Reading


11.0 Introduction

This lesson explains the concept and security of TELNET 5250 and TELNET 3270. Also, it explains remote login and the connection methods of the File Transfer Protocol. Further, we will discuss the description and security of the SMTP protocol. At the end of this lesson, you can understand the extended SMTP protocol.

11.1 Objectives

After studying this lesson you should be able to
- Understand the concept and security of Telnet
- Describe briefly the File Transfer Protocol
- Explain the concepts of SMTP
- Discuss briefly the extended SMTP protocol

11.2 Definition

TELNET (Telecommunications Network) is a network protocol used on the Internet or local area network (LAN) connections. It was developed in 1969 beginning with RFC 15 and standardized as IETF STD 8, one of the first Internet standards.

11.3 Telnet

TELNET (Telecommunications Network) is a network protocol used on the Internet or local area network (LAN) connections. It was developed in 1969 beginning with RFC 15 and standardized as IETF STD 8, one of the first Internet standards. The term telnet also refers to software which implements the client part of the protocol. TELNET clients have been available on most Unix systems for many years and are available for virtually all platforms. Most network equipment and OSs with a TCP/IP stack support some kind of TELNET service server for their remote configuration (including ones based on Windows NT). Because of security issues with TELNET, its use has waned as it is replaced by the use of SSH for remote access. "To telnet" is also used as a verb meaning to establish or use a TELNET or other interactive TCP connection, as in, "To change your password, telnet to the server and run the password command". Most often, a user will be telnetting to a Unix-like server system or a simple network device such as a router. For example, a user might "telnet in from home to check his mail at school".
In doing so, he would be using a telnet client to connect from his computer to one of his servers. Once the connection is established, he would then log in with his account information and execute operating system commands remotely on that computer, such as ls or cd. On many systems, the client may also be used to make interactive raw-TCP sessions. It is commonly believed that a telnet session which does not include an IAC (character 255) is functionally identical to a raw TCP session. This is not the case, however, due to special NVT (Network Virtual Terminal) rules such as the requirement for a bare CR (ASCII 13) to be followed by a NULL (ASCII 0).

Protocol details

TELNET is a client-server protocol, based on a reliable connection-oriented transport. Typically this protocol is used to establish a connection to TCP port 23, where a getty-equivalent program (telnetd) is listening, although TELNET predates TCP/IP and was originally run on NCP.
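The NVT rules mentioned above (a bare CR must be sent as CR NUL, and a data octet of 255 must be doubled so it is not read as IAC) can be sketched in Python. This illustrates just those two rules, not a complete TELNET encoder:

```python
IAC = 0xFF              # Telnet "Interpret As Command" escape octet
CR, LF, NUL = 0x0D, 0x0A, 0x00

def nvt_encode(data: bytes) -> bytes:
    """Apply two NVT rules: double 0xFF data octets, and follow a
    bare CR (one not starting a CR LF pair) with NUL."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if b == IAC:
            out += bytes([IAC, IAC])          # escape the data octet 255
        elif b == CR:
            nxt = data[i + 1] if i + 1 < len(data) else None
            out.append(CR)
            if nxt == LF:
                out.append(LF)
                i += 1                        # CR LF passes through as a pair
            else:
                out.append(NUL)               # bare CR must be sent as CR NUL
        else:
            out.append(b)
        i += 1
    return bytes(out)

print(nvt_encode(b"hi\rthere"))   # b'hi\r\x00there'
print(nvt_encode(b"\xff"))        # b'\xff\xff'
```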


Initially, TELNET was an ad-hoc protocol with no official definition. Essentially, it used an 8-bit channel to exchange 7-bit ASCII data. Any byte with the high bit set was a special TELNET character. On March 5th, 1973, a meeting was held at UCLA where "New TELNET" was defined in two NIC documents: TELNET Protocol Specification, NIC #15372, and TELNET Option Specifications, NIC #15373. The protocol has many extensions, some of which have been adopted as Internet standards. IETF standards STD 27 through STD 32 define various extensions, most of which are extremely common. Other extensions are on the IETF standards track as proposed standards.

11.3.1 Security

When TELNET was initially developed in 1969, most users of networked computers were in the computer departments of academic institutions, or at large private and government research facilities. In this environment, security was not nearly as much of a concern as it became after the bandwidth explosion of the 1990s. The rise in the number of people with access to the Internet, and by extension, the number of people attempting to crack other people's servers, made encrypted alternatives much more of a necessity. Experts in computer security, such as SANS Institute, and the members of the comp.os.linux.security newsgroup recommend that the use of TELNET for remote logins should be discontinued under all normal circumstances, for the following reasons:

- TELNET, by default, does not encrypt any data sent over the connection (including passwords), and so it is often practical to eavesdrop on the communications and use the password later for malicious purposes; anybody who has access to a router, switch, hub or gateway located on the network between the two hosts where TELNET is being used can intercept the packets passing by and obtain login and password information (and whatever else is typed) with any of several common utilities like tcpdump and Wireshark.
- Most implementations of TELNET have no authentication to ensure that communication is carried out between the two desired hosts and not intercepted in the middle.
- Commonly used TELNET daemons have several vulnerabilities discovered over the years.

These security-related shortcomings have seen the usage of the TELNET protocol drop rapidly, especially on the public Internet, in favor of the SSH protocol, first released in 1995. SSH provides much of the functionality of telnet, with the addition of strong encryption to prevent sensitive data such as passwords from being intercepted, and public key authentication, to ensure that the remote computer is actually who it claims to be. As has happened with other early Internet protocols, extensions to the TELNET protocol provide TLS security and SASL authentication that address the above issues. However, most TELNET implementations do not support these extensions; and there has been relatively little interest in implementing these as SSH is adequate for most purposes. The main advantage of TLS-TELNET would be the ability to use certificate-authority signed server certificates to authenticate a server host to a client that does not yet have the server key stored. In SSH, there is a weakness in that the user must trust the first session to a host when it has not yet acquired the server key.

11.3.2 TELNET 5250

IBM 5250 or 3270 workstation emulation is supported via custom TELNET clients, TN5250/TN3270, and IBM servers. Clients and servers designed to pass IBM 5250 data streams over TELNET generally do support SSL encryption, as SSH does not include 5250 emulation. Under OS/400, Port 992 is the default port for Secured TELNET.


Current status

As of the mid-2000s, while the TELNET protocol itself has been mostly superseded for remote login, TELNET clients are still used, often when diagnosing problems, to manually "talk" to other services without specialized client software. For example, it is sometimes used in debugging network services such as an SMTP, IRC, HTTP, FTP or POP3 server, by serving as a simple way to send commands to the server and examine the responses. This approach has limitations, as what TELNET clients speak is close to, but not equivalent to, raw mode (due to terminal control handshaking and the special rules regarding \377 and \15). Thus, other software such as nc (netcat) or socat on Unix (or PuTTY on Windows) are finding greater favor with some system administrators for testing purposes, as they can be called with arguments not to send any terminal control handshaking data. Also netcat does not distort the \377 octet, which allows raw access to the TCP socket, unlike any standard-compliant TELNET software.

TELNET is popular with:

- enterprise networks, to access host applications, e.g. on IBM Mainframes.
- administration of network elements, e.g., in commissioning, integration and maintenance of core network elements in mobile communication networks.
- MUD games played over the Internet, as well as talkers, MUSHes, MUCKs and MOOs, and the resurgent BBS community.
- embedded systems

Reverse telnet
Reverse telnet is a specialized application of telnet, where the server side of the connection reads and writes data to a TTY line (RS-232 serial port), rather than providing a command shell to the host device. Typically, reverse telnet is implemented on an embedded device (e.g. a terminal/console server) which has an Ethernet network interface and serial port(s). Through the use of reverse telnet on such a device, IP-networked users can use telnet to access serially-connected devices. In the past, reverse telnet was typically used to connect to modems or other external asynchronous devices. Today, reverse telnet is used mostly for connecting to the console port of a router, switch or other device.

Example
On the client, the command line for initiating a "reverse telnet" connection might look like this:

telnet 172.16.1.254 2002

(The syntax in the above example would be valid for the command-line telnet client packaged with many operating systems, including most Unices, or available as an option or add-on.) In this example, 172.16.1.254 is the IP address of the server, and 2002 is the TCP port associated with a TTY line on the server. A typical server configuration on a Cisco router would look like this:

version 12.3
service timestamps debug uptime


service timestamps log uptime
no service password-encryption
!
hostname Terminal_Server
!
ip host Router1 2101 8.8.8.8
ip host Router2 2102 8.8.8.8
ip host Router3 2113 8.8.8.8
!
!
interface Loopback0
 description Used for Terminal Service
 ip address 8.8.8.8 255.255.255.255
!
line con 0
 exec-timeout 0 0
 password MyPassword
 login
line 97 128
 transport input telnet
line vty 0 4
 exec-timeout 0 0
 password MyPassword
 login
 transport input none
!
end

IBM 3270

[Figure: Clemson University's library catalog displayed in a 3270 emulation program running on Mac OS X.]

The IBM 3270 is a class of terminals made by IBM since 1972 (known as "display devices"), normally used to communicate with IBM mainframes. As such, it was the successor to the IBM 2260 display terminal. Due to the text color on the original models, these terminals are informally known as green-screen terminals. Unlike common serial ASCII terminals, the 3270 minimizes the number of I/O interrupts required by accepting large blocks of data known as data streams, and uses a high-speed proprietary communications interface over coaxial cable.

IBM stopped manufacturing terminals many years ago, but the IBM 3270 protocol is still commonly used via terminal emulation to access some mainframe-based applications. Accordingly, such applications are sometimes referred to as green-screen applications. Use of 3270 is slowly diminishing over time as more and more mainframe applications acquire Web interfaces, but some web applications use the technique of "screen scraping" to capture old


screens and transfer the data to modern front-ends. Today, many sites such as call centers still find the "green screen" 3270 interface to be more productive and efficient than spending resources to replace it with more modern systems.

Principles
In a data stream, both text and control (or formatting functions) are interspersed, allowing an entire screen to be "painted" as a single output operation. The concept of "formatting" in these devices allows the screen to be divided into clusters of contiguous character cells for which numerous "attributes" (color, highlighting, character set, protection from modification) can be set. An attribute occupied a physical location on the screen, which also determined the beginning and end of a "field" (a separately addressable subsection of the screen). Further, using a technique known as "read modified," the changes from any number of formatted fields that have been modified can be read as a single input without transferring any other data, a further technique to enhance the terminal throughput of the CPU. Some users familiar with character-interrupt-driven terminal interfaces (such as Microsoft Windows) find this technique unusual. There was also a "read buffer" capability, which transferred the entire content of the 3270 screen buffer, including field attributes. This was mainly used for debugging purposes, to preserve the application program's screen contents while replacing them, temporarily, with debugging information.

The first 3270s had no function keys. Later 3270s had twelve, and later twenty-four, special programmed function keys, or PF keys, and three PA ("program attention") keys placed in one or two rows at the top of the keyboard. When one of these keys is pressed, it causes its control unit (historically, usually, an IBM 3274 or 3174, but nowadays the onboard mainframe equivalent) to generate an I/O interrupt and present a special code identifying which key was pressed.

Application program functions such as termination, page-up, page-down, or help can be invoked by a single key press, thereby reducing the load on very busy processors. In this way, the CPU is not interrupted at every keystroke, a scheme which allowed an early 3033 mainframe with only 16 MB of memory to support up to 17,500 3270 terminals under CICS. On the other hand, vi-like behavior was not possible. (End-user responsiveness was, however, arguably more predictable with 3270, something users appreciated.) For the same reason, a port of Lotus 1-2-3 to mainframes with 3279 screens did not meet with success, because its programmers were not able to properly adapt the spreadsheet's user interface to a "screen at a time" rather than a "character at a time" device. In contrast, IBM's OfficeVision office productivity software enjoyed great success with 3270 interaction because of its design understanding. And for many years the PROFS calendar was the most commonly displayed screen on office terminals around the world.

In contrast also, ICI Mond Division's Works Records System, the first known shared public spreadsheet, used the 3270 successfully for what was, in effect, a high-powered version of today's spreadsheets with additional functions. It remained in continual use for 27 years, up until 2001, and, despite its lack of a GUI, cells could be defined anywhere on the screen (not necessarily in rows or columns) and could be instantly re-configured for length, content and formulas as required. ICI's online, fully interactive system pre-dated PC spreadsheets by quite a few years and allowed multiple users to work on the spreadsheets at the same time, similar to today's Web-based shared spreadsheets such as EditGrid, Google Spreadsheets, and others.

As mentioned above, the Web (and HTTP) is similar to 3270 interaction because the terminal (browser) is given more responsibility for managing presentation and user input, minimizing host


interaction while still facilitating server-based information retrieval and processing. Application development has in many ways returned to the 3270 approach. In the 3270 era, all application functionality was provided centrally. With the advent of the PC, the idea was to invoke central systems only when absolutely unavoidable, and to do all application processing with local software on the personal computer. Now, in the Web era (and with Wikis in particular), the application again is strongly centrally controlled, with only technical functionality distributed to the PC.

In the early 1990s a popular solution for linking PCs with mainframes was the IRMA card, a piece of hardware that plugged into a PC and connected to a coaxial cable towards the mainframe. IRMA also allowed file transfers between the PC and the mainframe.

Third parties
Many manufacturers, such as Hewlett-Packard, created 3270-compatible terminals, or adapted ASCII terminals such as the HP 2640 series to have a similar block-mode capability which would transmit a screen at a time, with some form-validation capability. Modern applications are sometimes built upon legacy 3270 applications, using software utilities to capture screens ("screen scraping") and transfer the data to web pages or GUI interfaces. A version of the IBM PC called the 3270 PC, released in October 1983, included 3270 terminal emulation. Later, the PC/G (graphics) and PC/GX (extended graphics) followed.

11.3.3 Telnet 3270
Telnet 3270, or TN3270, describes either the process of sending and receiving 3270 data streams using the Telnet protocol, or the software that emulates a 3270-class terminal communicating by that process. TN3270 allows a 3270 terminal emulator to communicate over a TCP/IP network instead of an SNA network. Standard telnet clients cannot be used as a substitute for TN3270 clients, as they use fundamentally different techniques for exchanging data.
11.4 R-Login
Remote login
Remote login occurs when a user connects to an Internet host to use its native user interface. In the 1970s and early 1980s, text-oriented terminals were the predominant tools for computer users. Protocols such as TELNET and RLOGIN were developed so that terminal users could use their terminals as if they were directly connected to a remote system. UN*X systems, with their predominantly terminal-oriented interface, still make heavy use of these protocols.

In the late 1980s, as graphical, window-oriented user interfaces became popular, protocols were developed to allow remote windowing operations, much as earlier protocols allowed remote terminal operations. Although conceptually similar, the operation of such windowing protocols is markedly different, and they are not discussed here. See Windowing Systems for more information.

X Windows Protocol Overview
X Windows is the predominant windowing system on UNIX computers, developed by the X Consortium, led by M.I.T. An X server manages the display on the workstation. Clients can connect to the server via TCP/IP and perform graphics operations. This makes X Windows much


more network-capable than Microsoft Windows, for example, which can only be accessed via a local API. X Windows operates over TCP, typically using server port numbers starting at 6000. The X server for a system's first display listens on port 6000; if the system has a second display, its server listens on port 6001; a third display would listen on 6002; and so on. The protocol used over this reliable stream connection is essentially request/reply, and its reputation is as a fat protocol that consumes a lot of bandwidth. Lightweight X (LWX), introduced in X11R6, attempts to reduce X's bandwidth needs to the point where it can be run over dialup modem connections.

The X Protocol, documented in a PostScript file, defines dozens of messages that can be exchanged between a client and a server. They can generally be classified into four categories: Requests, Replies, Events, and Errors. Typical requests include Draw PolyLine, Draw Text, Create Window, and Fill. Replies are matched to particular Requests. Events are asynchronous occurrences such as keystrokes and mouse clicks. Errors are matched to particular Requests.

If a window is partially or fully obscured by another, overlapping window, the server has two options available to it. The server can allocate additional memory, called backing store, to record the contents of the obscured window; this is purely optional, however. Alternatively, the server can simply ignore the obscured part of the window. Later, when that part of the window becomes visible again, the server sends an Expose event to the client, which must then redraw the affected area. The client, therefore, must be prepared to redraw any part of its windows at any time.

Applications do not need to access the X Windows protocol directly. X Windows supports several APIs. The most basic of these is Xlib, which interfaces fairly directly to the underlying network protocol.
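The display-number-to-port mapping described above can be sketched as a small helper. The DISPLAY string format `host:display.screen` is the standard X convention; the function name here is invented for illustration:

```python
def x_display_port(display: str) -> int:
    """Map an X DISPLAY string such as 'host:0' or ':1.0' to the
    TCP port its X server listens on (6000 + display number)."""
    after_colon = display.split(":", 1)[1]   # drop the optional host part
    number = after_colon.split(".", 1)[0]    # drop the optional screen part
    return 6000 + int(number)
```

For example, x_display_port(":2.0") yields 6002, matching the third display described above.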
Most X client applications are linked against Xlib, which allows them to operate on either a local or a remote X server, simply by adjusting either an environment variable or a command-line argument. Widgets layer on top of Xlib and provide X Windows with an object-oriented programming model. A widget is an X window capable of handling most of its own protocol interaction. The most popular widget sets are Athena Widgets (aw) and Motif.

The X Windows security model is all-or-nothing. Either an application can perform any operation on an X desktop, or it can perform none. There is no concept of limiting an application to a single top-level window, for example. Although there is power in this model, such as allowing the window manager to be a normal X client, there are also serious security implications. A hostile X client could connect to an X server and arrange to capture any screen image, or even to capture keystrokes as a password is being typed in one of the windows. For this reason, X servers are typically fairly restrictive about which clients they will accept connections from. Two major security models are available. Host-based security (traditionally controlled by the xhost program) permits or denies connections based on their source IP addresses. Authentication (traditionally controlled by the xauth program) requires the connecting program to possess a secret password, typically stored in a UNIX file and subject to standard UNIX access controls. Kerberos-based authentication is also available.

11.5 FTP
File Transfer Protocol (FTP) is a network protocol used to transfer data from one computer to another through a network, such as the Internet. FTP is a file transfer protocol for exchanging and manipulating files over any TCP-based computer network. An FTP client may connect to an FTP server to manipulate files on that server.
As there are many FTP client and server programs available for different operating systems, FTP is a popular choice for exchanging files independent of the operating systems involved.


11.5.1 Connection methods
FTP runs exclusively over TCP. FTP servers by default listen on port 21 for incoming connections from FTP clients. A connection to this port from the FTP client forms the control stream, on which commands are passed to the FTP server from the FTP client, and on occasion from the FTP server to the FTP client. FTP uses out-of-band control, which means it uses a separate connection for control and data. Thus, for the actual file transfer to take place, a different connection is required, called the data stream. Depending on the transfer mode, the process of setting up the data stream is different.

In active mode, the FTP client opens a dynamic port, sends the FTP server the dynamic port number on which it is listening over the control stream, and waits for a connection from the FTP server. When the FTP server initiates the data connection to the FTP client, it binds the source port to port 20 on the FTP server. To use active mode, the client sends a PORT command, with the IP address and port as argument. The format for the IP address and port is "h1,h2,h3,h4,p1,p2". Each field is a decimal representation of 8 bits of the host IP address, followed by the chosen data port. For example, a client with an IP address of 192.168.0.1, listening on port 49154 for the data connection, will send the command "PORT 192,168,0,1,192,2". The port fields should be interpreted as p1×256 + p2 = port, or, in this example, 192×256 + 2 = 49154.

In passive mode, the FTP server opens a dynamic port, sends the FTP client the server's IP address to connect to and the port on which it is listening (a 16-bit value broken into a high and a low byte, as explained above) over the control stream, and waits for a connection from the FTP client. In this case the FTP client binds the source port of the connection to a dynamic port. To use passive mode, the client sends the PASV command, to which the server replies with something similar to "227 Entering Passive Mode (127,0,0,1,192,52)".
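The h1,h2,h3,h4,p1,p2 encoding shared by the PORT command and the 227 reply can be sketched as follows. This is an illustration only, and the function names are invented for this example:

```python
import re

def encode_port_argument(ip: str, port: int) -> str:
    """Build the h1,h2,h3,h4,p1,p2 argument of an FTP PORT command."""
    p1, p2 = divmod(port, 256)                # high and low bytes of the port
    return ",".join(ip.split(".") + [str(p1), str(p2)])

def parse_pasv_reply(reply: str):
    """Extract (ip, port) from a 227 'Entering Passive Mode' reply."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if m is None:
        raise ValueError("not a 227 reply")
    n = [int(g) for g in m.groups()]
    return ".".join(map(str, n[:4])), n[4] * 256 + n[5]
```

With the figures from the text, encode_port_argument("192.168.0.1", 49154) yields "192,168,0,1,192,2", and parse_pasv_reply("227 Entering Passive Mode (127,0,0,1,192,52)") yields the address 127.0.0.1 and port 192×256 + 52 = 49204.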
The syntax of the IP address and port is the same as for the argument to the PORT command. In extended passive mode, the FTP server operates exactly as in passive mode, but it transmits only the port number (not broken into high and low bytes), and the client is to assume that it connects to the same IP address to which it was originally connected. Extended passive mode was added by RFC 2428 in September 1998.

While data is being transferred via the data stream, the control stream sits idle. This can cause problems with large data transfers through firewalls which time out sessions after lengthy periods of idleness. While the file may well be successfully transferred, the control session can be disconnected by the firewall, causing an error to be generated.

The FTP protocol supports resuming interrupted downloads using the REST command. The client passes the number of bytes it has already received as the argument to the REST command and restarts the transfer. In some command-line clients, for example, there is an often-ignored but valuable command, "reget" (meaning "get again"), that will cause an interrupted "get" command to be continued, hopefully to completion, after a communications interruption.

Resuming uploads is not as easy. Although the FTP protocol supports the APPE command to append data to a file on the server, the client does not know the exact position at which a transfer was interrupted. It has to obtain the size of the file some other way, for example from a directory listing or using the SIZE command. In ASCII mode (see below), resuming transfers can be troublesome if the client and server use different end-of-line characters.

The objectives of FTP, as outlined by its RFC, are:


- To promote sharing of files (computer programs and/or data).
- To encourage indirect or implicit use of remote computers.
- To shield a user from variations in file storage systems among different hosts.
- To transfer data reliably and efficiently.

Criticisms of FTP
- Passwords and file contents are sent in clear text, which can be intercepted by eavesdroppers. There are protocol enhancements that circumvent this, for instance by using SSL, TLS or Kerberos.
- Multiple TCP/IP connections are used: one for the control connection, and one for each download, upload, or directory listing. Firewalls may need additional logic and/or configuration changes to account for these connections.
- It is hard to filter active-mode FTP traffic on the client side by using a firewall, since the client must open an arbitrary port in order to receive the connection. This problem is largely resolved by using passive-mode FTP.
- It is possible to abuse the protocol's built-in proxy features to tell a server to send data to an arbitrary port of a third computer; see FXP.
- FTP is a high-latency protocol, due to the number of commands needed to initiate a transfer.
- There is no integrity check on the receiver side. If a transfer is interrupted, the receiver has no way to know whether the received file is complete. Some servers support extensions to calculate, for example, a file's MD5 sum (e.g. using the SITE MD5 command) or CRC checksum, but even then the client has to make explicit use of them. In the absence of such extensions, integrity checks have to be managed externally.
- There is no date/timestamp attribute transfer. Uploaded files are given a new current timestamp, unlike other file transfer protocols such as SFTP, which allow attributes to be included. There is no way in the standard FTP protocol to set the time-last-modified (or time-created) date stamp that most modern file systems preserve. There is a draft of a proposed extension that adds new commands for this, but as yet most of the popular FTP servers do not support it.

Security problems
The original FTP specification is an inherently insecure method of transferring files, because it specifies no method for transferring data in an encrypted fashion. This means that under most network configurations, user names, passwords, FTP commands and transferred files can be "sniffed" or viewed by anyone on the same network using a packet sniffer. This is a problem common to many Internet protocol specifications written prior to the creation of SSL, such as HTTP, SMTP and Telnet. The common solution to this problem is to use either SFTP (SSH File Transfer Protocol) or FTPS (FTP over SSL), which adds SSL or TLS encryption to FTP as specified in RFC 4217.

Anonymous FTP
Many sites that run FTP servers enable anonymous FTP. Under this arrangement, users do not need an account on the server. The user name for anonymous access is typically 'anonymous', though historically 'ftp' was also used; this account does not need a password. Although users are commonly asked to send their email addresses as their passwords for "authentication," there is usually only trivial or no verification of what is actually entered. As modern FTP clients hide the login process from the user, and usually don't know the user's email address, the software supplies dummy passwords. For example:


- Mozilla Firefox (2.0): mozilla@example.com
- KDE Konqueror (3.5): anonymous@
- wget (1.10.2): -wget@
- lftp (3.4.4): lftp@

Internet Gopher has been suggested as an alternative to anonymous FTP, as have the Trivial File Transfer Protocol and the File Service Protocol.

11.5.2 Data format
While transferring data over the network, several data representations can be used. The two most common transfer modes are:
- ASCII mode
- Binary mode: in binary mode, the sending machine sends each file byte for byte, and the recipient stores the byte stream as it receives it. (The FTP standard calls this "IMAGE" or "I" mode.)

In ASCII mode, any form of data that is not plain text will be corrupted. When a file is sent using an ASCII-type transfer, the individual letters, numbers, and characters are sent using their ASCII character codes. The receiving machine saves these in a text file in the appropriate format (for example, a Unix machine saves it in a Unix format, a Windows machine saves it in a Windows format). Hence, if an ASCII transfer is used, it can be assumed plain text is sent, which is stored by the receiving computer in its own format. Translating between text formats might entail substituting the end-of-line and end-of-file characters used on the source platform with those on the destination platform; e.g., a Windows machine receiving a file from a Unix machine will replace the line feeds with carriage-return/line-feed pairs.

By default, most FTP clients use ASCII mode. Some clients try to determine the required transfer mode by inspecting the file's name or contents, or by determining whether the server is running an operating system with the same text file format.

The FTP specifications also list the following transfer modes:

- EBCDIC mode: this transfers bytes, except they are encoded in EBCDIC rather than ASCII.
- Local mode: this is designed for use with systems that are word-oriented rather than byte-oriented. For example, mode "L 36" can be used to transfer binary data between two 36-bit machines. In L mode, the words are packed into bytes rather than being padded. Given the predominance of byte-oriented hardware nowadays, this mode is rarely used. However, some FTP servers accept "L 8" as being equivalent to "I".

In practice, these additional transfer modes are rarely used. They are, however, still used by some legacy mainframe systems. The text (ASCII/EBCDIC) modes can also be qualified with the type of carriage control used (e.g. TELNET NVT carriage control, ASA carriage control), although that is rarely used nowadays.

FTP and web browsers
Most recent web browsers and file managers can connect to FTP servers, although they may lack support for protocol extensions such as FTPS. This allows manipulation of remote files over FTP through an interface similar to that used for local files. This is done via an FTP URL, which


takes the form ftp(s)://<ftpserveraddress> (e.g., ftp://ftp.gimp.org/). A password can optionally be given in the URL, e.g.: ftp(s)://<login>:<password>@<ftpserveraddress>:<port>. Most web browsers require the use of passive-mode FTP, which not all FTP servers are capable of handling. Some browsers allow only the downloading of files, but offer no way to upload files to the server.

FTP and NAT devices
The representation of the IP addresses and ports in the PORT command and PASV reply poses another challenge for NAT devices in handling FTP. The NAT device must alter these values, so that they contain the IP address of the NAT-ed client and a port chosen by the NAT device for the data connection. The new IP address and port will probably differ in length, in their decimal representation, from the original IP address and port. This means that altering the values on the control connection must be done carefully, with the NAT device changing the TCP Sequence and Acknowledgment fields for all subsequent packets.

For example: a client with an IP address of 192.168.0.1, starting an active-mode transfer on port 1025, will send the string "PORT 192,168,0,1,4,1". A NAT device masquerading this client with an IP address of 192.168.15.5, with a chosen port of 2000 for the data connection, will need to replace the above string with "PORT 192,168,15,5,7,208". The new string is 23 characters long, compared to 20 characters in the original packet. The Acknowledgment field sent by the server for this packet will need to be decreased by 3 bytes by the NAT device for the client to correctly understand that the PORT command has arrived at the server.

If the NAT device is not capable of correcting the Sequence and Acknowledgment fields, it will not be possible to use active-mode FTP. Passive-mode FTP will work in this case, because the information about the IP address and port for the data connection is sent by the server, which doesn't need to be NAT-ed. If NAT is performed on the server's side by the NAT device, then the exact opposite will happen.
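The PORT rewrite arithmetic in the NAT example above can be sketched as follows. This is an illustrative helper, not a real NAT implementation; the returned delta is the amount by which later sequence/acknowledgment numbers must be adjusted:

```python
def nat_rewrite_port(cmd: str, new_ip: str, new_port: int):
    """Rewrite a client's PORT command as an FTP-aware NAT would,
    returning the new command and the byte-length delta the device
    must compensate for in subsequent TCP Sequence/Ack fields."""
    p1, p2 = divmod(new_port, 256)   # high and low bytes of the new port
    new_cmd = "PORT " + ",".join(new_ip.split(".") + [str(p1), str(p2)])
    return new_cmd, len(new_cmd) - len(cmd)
```

With the values from the example, nat_rewrite_port("PORT 192,168,0,1,4,1", "192.168.15.5", 2000) produces "PORT 192,168,15,5,7,208" and a delta of 3, matching the 23 versus 20 characters described in the text.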
Active mode will work, but passive mode will fail. Note that many NAT devices perform this protocol inspection and modify the PORT command without being explicitly told to do so by the user. This can lead to several problems. First of all, there is no guarantee that the protocol used really is FTP, or it might use some extension not understood by the NAT device. One example would be an SSL-secured FTP connection: due to the encryption, the NAT device will be unable to modify the address. As a result, active-mode transfers will fail only when encryption is used, much to the confusion of the user. The proper way to solve this is to tell the client which IP address and ports to use for active mode. Furthermore, the NAT device has to be configured to forward the selected range of ports to the client's machine.

FTP over SSH
FTP over SSH refers to the practice of tunneling a normal FTP session over an SSH connection. Because FTP uses multiple TCP connections (unusual for a TCP/IP protocol that is still in use), it is particularly difficult to tunnel over SSH. With many SSH clients, attempting to set up a tunnel for the control channel (the initial client-to-server connection on port 21) will protect only that channel; when data is transferred, the FTP software at either end will set up new TCP connections (data channels) which will bypass the SSH connection, and thus have no confidentiality, integrity protection, etc.

If the FTP client is configured to use passive mode and to connect to a SOCKS server interface that many SSH clients can present for tunneling, it is possible to run all the FTP channels over


the SSH connection. Otherwise, it is necessary for the SSH client software to have specific knowledge of the FTP protocol, and to monitor and rewrite FTP control-channel messages and autonomously open new forwardings for FTP data channels. Version 3 of SSH Communications Security's software suite and the GPL-licensed FONC are two software packages that support this mode.

FTP over SSH is sometimes referred to as secure FTP; this should not be confused with other methods of securing FTP, such as with SSL/TLS (FTPS). Other methods of transferring files using SSH that are not related to FTP include SFTP and SCP; in each of these, the entire conversation (credentials and data) is always protected by the SSH protocol.

References
The protocol is standardized by the IETF as:

- RFC 959 — File Transfer Protocol (FTP). J. Postel, J. Reynolds. Oct. 1985. This obsoleted the preceding RFC 765 and earlier FTP RFCs back to the original RFC 114.
- RFC 1579 — Firewall-Friendly FTP.
- RFC 2228 — FTP Security Extensions.
- RFC 2428 — Extensions for IPv6, NAT, and Extended Passive Mode. Sep. 1998.
- RFC 3659 — Extensions to FTP. P. Hethmon. March 2007.

Check Your Progress
1. What is TELNET?
2. What is the use of the FTP protocol on the Internet?

11.6 SMTP
Simple Mail Transfer Protocol (SMTP) is the de facto standard for email transmission across the Internet. It is very common for email software to use SMTP to send mail and POP3 to receive it, but SMTP can also be used to receive mail.

11.6.1 Description
SMTP is a relatively simple, text-based protocol, in which one or more recipients of a message are specified (and in most cases verified to exist) along with the message text and possibly other encoded objects. The message is then transferred to a remote server using a procedure of queries and responses between the client and server. Either an end-user's email client, a.k.a. MUA (Mail User Agent), or a relaying server's MTA (Mail Transfer Agent) can act as an SMTP client.

An email client knows the outgoing-mail SMTP server from its configuration. A relaying server typically determines which SMTP server to connect to by looking up the MX (Mail eXchange) DNS record for each recipient's domain name (the part of the email address to the right of the at sign, @). Conformant MTAs (not all) fall back to a simple A record in the case of no MX. (Relaying servers can also be configured to use a smart host.) The SMTP client initiates a TCP connection to the server's port 25 (unless overridden by configuration). It is quite easy to test an SMTP server using the telnet program (see below).

SMTP is a "push" protocol that cannot "pull" messages from a remote server on demand. To retrieve messages only on demand, which is the most common requirement on a single-user computer, a mail client must use POP3 or IMAP. Another SMTP server can trigger a delivery in SMTP using ETRN. It is possible to receive mail by running an SMTP server. POP3 became popular when single-user computers connected to the Internet only intermittently; SMTP is more suitable for a machine permanently connected to the Internet.

History
Forms of one-to-one electronic messaging were used in the 1960s. People communicated with one another using systems developed for a particular mainframe computer. As more computers began to be interconnected with others, especially in the US Government's ARPANET, standards were developed to allow users of different systems to email one another. SMTP grew out of these standards developed during the 1970s.
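The query/response procedure described above can be illustrated by the command lines an SMTP client (or a person testing a server over telnet) sends; the host and address names here are placeholders, and each line would be answered by a numeric server reply (e.g. 250) before the next is sent:

```python
def smtp_commands(sender: str, recipients, body: str):
    """Return the client-side lines of a minimal SMTP exchange,
    in the order a manual telnet test session would type them."""
    lines = ["HELO client.example.org"]                 # identify the client
    lines.append("MAIL FROM:<%s>" % sender)             # envelope sender
    lines += ["RCPT TO:<%s>" % r for r in recipients]   # one per recipient
    lines += ["DATA", body, ".", "QUIT"]                # body, end marker, close
    return lines
```

For example, smtp_commands("alice@example.org", ["bob@example.org"], "Hello") yields the HELO, MAIL FROM, RCPT TO, DATA, body, ".", and QUIT lines in order.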

SMTP can trace its roots to the Mail Box Protocol (ca. 1971), FTP Mail (ca. 1973), and the Mail Protocol. Work continued throughout the 1970s, until the ARPANET converted into the modern Internet around 1980. Jon Postel then proposed a Mail Transfer Protocol in 1980 that began to remove the mail's reliance on FTP. SMTP was published as RFC 821 in August 1982, also by Postel.
The SMTP standard was developed around the same time as Usenet, a one-to-many communication network with some similarities. SMTP became widely used in the early 1980s. At the time, it was a complement to UUCP (Unix to Unix CoPy) mail, which was better suited to handling email transfers between machines that were only intermittently connected. SMTP, on the other hand, works best when both the sending and receiving machines are connected to the network all the time. Both use a store-and-forward mechanism and are examples of push technology. Usenet's newsgroups are still propagated with UUCP between servers, but UUCP mail has virtually disappeared, along with the "bang paths" it used as message routing headers.
Since this protocol started out as purely ASCII text-based, it did not deal well with binary files. Standards such as Multipurpose Internet Mail Extensions (MIME) were developed to encode binary files for transfer through SMTP. MTAs developed after Sendmail also tended to be implemented 8-bit-clean, so that the alternate "just send eight" strategy could be used to transmit arbitrary data via SMTP. Non-8-bit-clean MTAs today tend to support the 8BITMIME extension, permitting binary files to be transmitted almost as easily as plain text.
Developers
Many people edited or contributed to the core SMTP specifications, among them Jon Postel, Eric Allman, Dave Crocker, Ned Freed, Randall Gellens, John Klensin, and Keith Moore.
11.6.2 Outgoing mail SMTP server
An email client requires the name or the IP address of an SMTP server as part of its configuration.
The server will deliver messages on behalf of the user. This setting allows for various policies and network designs. End users connected to the Internet can use the services of an email provider that is not necessarily the same as their connection provider (ISP). Network topology, or the location of a client within or outside a network, is no longer a limiting factor for email submission or delivery. Modern SMTP servers typically use a client's credentials (authentication) rather than a client's location (IP address) to determine whether it is eligible to relay email. Server administrators choose whether clients use TCP port 25 (SMTP) or port 587 (Submission), as formalized in RFC 4409, for relaying outbound mail to a mail server. The specifications and many servers support both. Although some servers support port 465 for legacy


secure SMTP in violation of the specifications, it is preferable to use the standard ports and standard ESMTP commands according to RFC 3207 if a secure session needs to be used between the client and the server. Some servers are set up to reject all relaying on port 25, but valid users authenticating on port 587 are allowed to relay mail to any valid address.
A server that relays all email for all destinations for all clients connecting to port 25 is known as an open relay and is now generally considered a bad practice worthy of blacklisting. Some Internet Service Providers intercept port 25, so that it is not possible for users of that ISP to send mail via a relaying SMTP server elsewhere using port 25; they are restricted to using the ISP's SMTP server only. Some independent SMTP servers support an additional port other than 25 to allow users with authenticated access to reach them even if port 25 is blocked. The practical purpose of this is that a traveller connecting through different ISPs would otherwise have to change the SMTP server settings in the mail client for each ISP; using a relaying SMTP server allows the SMTP client settings to be used unchanged worldwide.
Sample communications
After establishing a connection between the sender (the client) and the receiver (the server), the following is a valid SMTP session. In the following conversation, everything sent by the client is prefixed with "C: " and everything sent by the server with "S: "; this prefix is not part of the conversation. On most computer systems (particularly Microsoft Windows and some Linux distributions), a connection can be established using the telnet command on the client machine, for example:
telnet smtp.example.com 25
which opens a TCP connection from the sending machine to the MTA listening on port 25 on host smtp.example.com. By convention, SMTP servers greet clients with their fully-qualified domain name.
In this example, the client computer (relay.example.org) has already determined that smtp.example.com is a mail exchanger for the example.com domain by doing a DNS lookup of example.com's MX records. Note that a carriage return and a line feed character (not shown) are required at the end of each line; in a manual Telnet session they are both normally generated by pressing the Enter or carriage return key once.
S: 220 smtp.example.com ESMTP Postfix
C: HELO relay.example.org
S: 250 Hello relay.example.org, I am glad to meet you
C: MAIL FROM:<bob@example.org>
S: 250 Ok
C: RCPT TO:<alice@example.com>
S: 250 Ok
C: RCPT TO:<theboss@example.com>
S: 250 Ok
C: DATA
S: 354 End data with <CR><LF>.<CR><LF>
C: From: "Bob Example" <bob@example.org>
C: To: Alice Example <alice@example.com>
C: Cc: theboss@example.com
C: Date: Tue, 15 Jan 2008 16:02:43 -0500
C: Subject: Test message
C:
C: Hello Alice.
C: This is a test message with 5 headers and 4 lines in the body.
C: Your friend,
C: Bob


C: .
S: 250 Ok: queued as 12345
C: QUIT
S: 221 Bye
{The server closes the connection}
In this example, the email is sent to two mailboxes on the same SMTP server: once for each recipient listed in the "To" and "Cc" headers. If there were any recipients in a "Bcc" list, which are not included in any headers, there would have been additional "RCPT TO" commands for them as well. If the second recipient had been located elsewhere, the client would QUIT and connect to the appropriate SMTP server once the first message had been queued.
Note that the information the client sends in the HELO and MAIL FROM commands can be retrieved in additional headers that the server adds to the message: Received and Return-Path respectively.
Although optional and not shown above, many clients ask the server which SMTP extensions it supports, by using the EHLO greeting to invoke Extended SMTP (ESMTP), specified in RFC 1869. These clients fall back to HELO only if the server does not respond to EHLO. Modern clients may use the ESMTP extension keyword SIZE to ask the server for the maximum message size that will be accepted. Older clients and servers may try to transfer huge messages that will be rejected after wasting network resources, including a lot of connect time to dial-up ISPs that are paid by the minute.
Users can manually determine in advance the maximum size accepted by an ESMTP server. The user telnets as above, but substitutes "EHLO host.example.org" for the HELO command line.
S: 220-smtp2.example.com ESMTP Postfix
C: EHLO bob.example.org
S: 250-smtp2.example.com Hello bob.example.org [192.0.2.201]
S: 250-SIZE 14680064
S: 250-PIPELINING
S: 250 HELP
Thus smtp2.example.com declares that it will accept a fixed maximum message size no larger than 14,680,064 octets (8-bit bytes). Depending on the server's actual resource usage, it may be currently unable to accept a message this large.
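A client can read the advertised extensions out of a multiline EHLO reply like the one above. The following is a minimal sketch, assuming the RFC 1869 reply layout ("250-" for continuation lines, "250 " with a space for the final line); the helper name parse_ehlo_reply is made up for illustration.

```python
# Parse a multiline ESMTP EHLO reply into {keyword: parameters}.
# The first 250 line carries the server's domain, not an extension.

def parse_ehlo_reply(reply_text):
    extensions = {}
    lines = reply_text.strip().splitlines()
    for line in lines[1:]:
        body = line[4:]                      # drop the "250-" / "250 " prefix
        keyword, _, params = body.partition(" ")
        extensions[keyword.upper()] = params
    return extensions

reply = """250-smtp2.example.com Hello bob.example.org [192.0.2.201]
250-SIZE 14680064
250-PIPELINING
250 HELP"""

exts = parse_ehlo_reply(reply)
print("SIZE" in exts, exts["SIZE"])  # -> True 14680064
```

A client could then refuse to send any message larger than the advertised SIZE value instead of wasting network resources on a transfer the server will reject.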
In the simplest case, an ESMTP server declares a maximum SIZE immediately upon the EHLO interaction.
11.6.3 Security and spamming
One of the limitations of the original SMTP is that it has no facility for authentication of senders. Therefore the SMTP-AUTH extension was defined. However, the impracticalities of widespread SMTP-AUTH implementation and management mean that email spamming is not, and cannot be, addressed by it. Modifying SMTP extensively, or replacing it completely, is not believed to be practical, due to the network effects of the huge installed base of SMTP. Internet Mail 2000 was one such proposal for replacement.
Spam is enabled by several factors, including vendors implementing broken MTAs (that do not adhere to standards, and therefore make it difficult for other MTAs to enforce standards), security vulnerabilities within operating systems (often exacerbated by always-on broadband connections) that allow spammers to remotely control end-user PCs and cause them to send spam, and a lack of "intelligence" in many MTAs.
There are a number of proposals for sideband protocols that will assist SMTP operation. The Anti-Spam Research Group (ASRG) of the Internet Research Task Force (IRTF) is working on a


number of email authentication and other proposals for providing simple source authentication that is flexible, lightweight, and scalable. Recent Internet Engineering Task Force (IETF) activities include MARID (2004), leading to two approved IETF experiments in 2005, and DomainKeys Identified Mail in 2006.
Other protocols for email
Email is "handed off" (pushed) from a client (MUA) to a mail server (MSA), usually using the Simple Mail Transfer Protocol. From there, the MSA delivers the mail to an MTA, usually running on the same machine. The MTA looks up the destination's MX records with a DNS lookup and begins to relay (push) the message to the server on record via TCP port 25 and SMTP. Once the receiving MTA accepts the incoming message, it is delivered via a mail delivery agent (MDA) to a server designated for local mail delivery. The MDA either delivers the mail directly to storage or forwards it over a network using either SMTP or LMTP, a derivative of SMTP designed for this purpose.
Once delivered to the local mail server, the mail is stored for batch retrieval by authenticated mail clients (MUAs). Generally speaking, mail retrieval (pull) is performed using either online folders (e.g. IMAP4, a protocol that both delivers and organizes mail) or the older single-repository format (e.g. POP3, the Post Office Protocol). Webmail clients may use either method, but the retrieval protocol is often not a formal standard. Some local mail servers and MUAs are capable of both push and pull mail retrieval.
11.6.4 Extended SMTP
Extended SMTP (ESMTP), sometimes referred to as Enhanced SMTP, is a definition of protocol extensions to the Simple Mail Transfer Protocol standard. The extension format was defined in RFC 1869 in 1995.
RFC 1869 established a structure for all existing and future extensions, producing a consistent and manageable means by which ESMTP clients and servers can be identified and by which ESMTP servers can indicate their supported extensions to connected clients.
Extensions
ESMTP is used by mail software to send email that supports graphics and other attachments. The main identification feature is that ESMTP clients open a transmission with the command EHLO (Extended HELLO), rather than HELO (Hello, the original RFC 821 standard). A server can then respond with success (code 250), failure (code 550) or an error (code 500, 501, 502, 504, or 421), depending on its configuration. An ESMTP server returns the code 250 OK in a multiline reply with its domain and a list of keywords indicating the extensions it supports. An RFC 821 compliant server would instead return error code 500, allowing the ESMTP client to try either HELO or QUIT.
Each service extension is defined in an approved format in subsequent RFCs and registered with the IANA. The first definitions were the RFC 821 optional services: SEND, SOML (Send or Mail), SAML (Send and Mail), EXPN, HELP, and TURN. A format was also set for additional SMTP verbs and for new parameters to MAIL and RCPT. Some relatively common keywords (not all of them corresponding to commands) used today are:


 8BITMIME — 8-bit data transmission, RFC 1652
 ATRN — Authenticated TURN for On-Demand Mail Relay, RFC 2645
 SMTP-AUTH — Authenticated SMTP, RFC 2554
 CHUNKING — Chunking, RFC 3030
 DSN — Delivery status notification, RFC 3461 (see Variable envelope return path)
 ETRN — Extended TURN, RFC 1985
 HELP — Supply helpful information, RFC 821
 PIPELINING — Command pipelining, RFC 2920
 SIZE — Message size declaration, RFC 1870
 STARTTLS — Transport layer security, RFC 3207
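The EHLO-versus-HELO negotiation described earlier (try EHLO first; on a 500-class "command unrecognized" reply from a plain RFC 821 server, fall back to HELO) reduces to a small decision function. This is a sketch; the return labels are made up for illustration.

```python
# Decide what an ESMTP client should do after sending EHLO, based on the
# server's reply code, per the behaviour described in the text above.

def next_command_after_ehlo(reply_code):
    if 200 <= reply_code < 300:
        return "CONTINUE_ESMTP"   # 250: server listed its extensions
    if reply_code in (500, 501, 502, 504):
        return "HELO"             # old RFC 821 server: retry the plain greeting
    return "QUIT"                 # transient/fatal error, e.g. 421

print(next_command_after_ehlo(250))  # -> CONTINUE_ESMTP (ESMTP server)
print(next_command_after_ehlo(500))  # -> HELO (RFC 821 server)
```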

With RFC 821 made obsolete by RFC 2821 in 2001, the ESMTP format was restated in RFC 2821. Support for the EHLO command in servers was made a "MUST", superseding the original HELO, which became a required "fallback". Non-standard, unregistered service extensions can be used by bilateral agreement; these services are indicated by an EHLO keyword starting with "X", with any additional parameters or verbs similarly marked.

Check Your Progress
3. What is the use of SMTP protocol?
4. Define Extended SMTP protocol.
11.7 Summary
This lesson explained the services that are directly accessible by an application via common well-known application program interfaces (APIs), which can actually occur in many layers. We then examined the Domain Name Service protocol, Telnet security and remote login. In the same way, we covered the connection methods of the File Transfer Protocol and the description of SMTP and Extended SMTP.
11.8 Keywords
SASE : A Specific Application Service Element (SASE) denotes a specific application service.
ROSE : The Remote Operations Service Element (ROSE) serves the needs of distributed applications in invoking remote operations.
RTSE : The Reliable Transfer Service Element (RTSE) hides much of the underlying complexity of the session dialogue management services by providing high-level error handling capabilities to other ASEs for data transfer.

11.10 Check Your Progress
Note: Use the space provided below for your answers. Compare your answers with those given at the end.
1. What is TELNET?


…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………
2. What is the use of FTP protocol in internet?
…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………
3. What is the use of SMTP protocol?
…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………
4. Define Extended SMTP protocol.
…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………
Answer to Check Your Progress
1. TELNET is a network protocol used on the Internet or local area network (LAN) connections. It was developed in 1969 beginning with RFC 15 and standardized as IETF STD 8, one of the first Internet standards.
2. File Transfer Protocol (FTP) is a network protocol used to transfer data from one computer to another through a network, such as the Internet. FTP is a file transfer protocol for exchanging and manipulating files over any TCP-based computer network.
3. Simple Mail Transfer Protocol (SMTP) is the de facto standard for email transmission across the Internet. It is very common for email software to use SMTP to send mail and POP3 to receive it, but SMTP can also be used to receive mail.
4. Extended SMTP (ESMTP), sometimes referred to as Enhanced SMTP, is a definition of protocol extensions to the Simple Mail Transfer Protocol standard. The extension format was defined in RFC 1869 in 1995.
11.11 Further Reading
1. "Communication Networks – Fundamental Concepts and Key Architectures", Leon-Garcia and Widjaja.
2. "Communication Networks" – Sharam Hekmat, PragSoft Corporation, www.pragsoft.com.

UNIT-12 WWW SECURITY
Structure
12.0 Introduction
12.1 Objectives
12.2 Definition
12.3 WWW Security
12.3.1 Key Exchange in Symmetric Key Schemes
12.3.2 Digital Signature
12.3.3 Key Distribution Centre
12.3.4 Challenge Response Protocol
12.4 SNMP
12.4.1 SNMP (Simple Network Management Protocol)
12.4.2 Overview and Basic Concepts
12.4.3 The Internet Management Model
12.4.4 Protocol Details
12.4.5 Development and Usage
12.5 Summary
12.6 Keywords
12.7 Exercise and Questions
12.8 Check Your Progress
12.9 Further Reading


12.0 Introduction
This lesson explains WWW security issues such as digital signatures, the key distribution centre and the challenge-response protocol. These services are directly accessible by an application via common well-known application program interfaces (APIs), which can actually occur at many layers. Examples of layer 7 services include FTP (File Transfer Protocol), Telnet and SNMP (Simple Network Management Protocol). Most network management activities are based on the services provided by layer 7 application entities, which in turn rely on lower-layer services to be able to perform their functions.
12.1 Objectives
After studying this unit you should be able to:
 Understand WWW security
 Discuss the digital signature in detail
 Write a brief note on the challenge-response protocol
 Explain the working of the Simple Network Management Protocol
 Describe the development and usage of the SNMP protocol
12.2 Definition
Data on the network is analogous to the possessions of a person. It has to be kept secure from others with malicious intent. This intent ranges from bringing down servers on the network, to using people's private information such as credit card numbers, to sabotage of major organizations with a presence on a network. To secure data, one has to ensure that it makes sense only to those for whom it is meant.
12.3 WWW Security
As defined above, data on the network has to be kept secure from those with malicious intent. This is especially the case for data transactions, where we want to prevent eavesdroppers from listening to and stealing data.
Other aspects of security involve protecting user data on a computer by providing password-restricted access to the data (and perhaps to some resources, so that only authorized people get to use them), and identifying miscreants and thwarting their attempts to damage the network, among other things.
The various issues in network security are as follows:
1. Authentication: We have to check that the person who has requested something or has sent an e-mail is indeed allowed to do so. In this process we will also look at how a person authenticates his identity to a remote machine.
2. Integrity: We have to check that the message we have received is indeed the message that was sent. A CRC is not enough here, because somebody may deliberately change both the data and its CRC consistently. Nobody along the route should be able to change the data.
3. Confidentiality: Nobody should be able to read the data on the way, so we need encryption.
4. Non-repudiation: Once we have sent a message, there should be no way we can deny sending it; we have to accept that we sent it.
5. Authorization: This refers to the kind of service that is allowed for a particular client. Even though a user is authenticated, we may decide not to authorize him to use a particular service.


For authentication, if two persons share a secret then we just need to prove that no third person could have generated the message. But for non-repudiation we need to prove that even the receiver could not have generated the message. So authentication is easier than non-repudiation. To ensure all this, we take the help of cryptography. We can have two kinds of encryption:
1. Symmetric Key Encryption: A single key is shared between the two users, and the same key is used for encrypting and decrypting the message.
2. Public Key Encryption: Each user has two keys: a public key and a private key. The public key of a user is known to all, but the private key is not known to anyone except the owner of the key. If a user encrypts a message with his private key, then it can be decrypted by anyone using the sender's public key. To send a message securely, we encrypt the message with the public key of the receiver, which can only be decrypted by that user with his private key.
Symmetric key encryption is much faster and more efficient in terms of performance, but it does not give us non-repudiation, and there is the problem of how the two sides agree on the key to be used, assuming the channel is insecure (others may snoop on our packets). In symmetric key exchange, we need some amount of public key encryption for authentication. In public key encryption, on the other hand, we can send the public key in plain text, so key exchange is trivial; but this does not authenticate anybody. So along with the public key there needs to be a certificate, and hence we need a public key infrastructure to distribute such certificates in the world.
12.3.1 Key Exchange in Symmetric Key Schemes
We will first look at the case where we can use public key encryption for this key exchange. The sender first encrypts the message using the symmetric key. Then the sender encrypts the symmetric key, first using his own private key and then using the receiver's public key.
So we are doing the encryption twice. If we send the certificate also along with this, then we have authentication as well. So what we finally send looks like this:
Z : Certificate_sender + Public_receiver( Private_sender( Ek ) ) + Ek( M )
Here Ek stands for the symmetric key and Ek( M ) for the message encrypted with this symmetric key.
However, this still does not ensure integrity. The reason is that if there is some change in the middle element, then we will not get the correct key and hence the message we decrypt will be junk. So we need something similar to a CRC but slightly more complicated, because somebody might change the CRC and the message consistently. This function is called a digital signature.
12.3.2 Digital Signatures
Suppose A has to send a message to B. A computes a hash function of the message and then sends this after encrypting it using his own private key. This constitutes the signature produced by A. B can now decrypt it, re-compute the hash of the message it has received, and compare the two. Obviously, we need hash functions such that the probability of two messages hashing to the same value is extremely low. Also, it should be difficult to compute a message with the same hash as another given message; otherwise any intruder could replace the message with another that has the same hash value and leave the signature intact, leading to loss of integrity. So the message along with the digital signature looks like this:
Z + Private_sender( Hash( M ) )
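The hash-then-sign flow above can be sketched in a few lines. Python's standard library has no public-key signing, so this sketch uses an HMAC over a shared key as a stand-in for Private_sender( Hash( M ) ); the hash, attach, recompute, compare structure is the same as with a real digital signature, and the key value is made up.

```python
import hashlib
import hmac

# Stand-in for A's private key (a real scheme would use an RSA/ECDSA key).
KEY = b"shared-secret-demo-key"

def sign(message: bytes) -> bytes:
    digest = hashlib.sha256(message).digest()          # Hash(M)
    return hmac.new(KEY, digest, hashlib.sha256).digest()  # "encrypt" the hash

def verify(message: bytes, signature: bytes) -> bool:
    expected = hmac.new(KEY, hashlib.sha256(message).digest(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)    # constant-time compare

sig = sign(b"transfer 100 rupees to B")
print(verify(b"transfer 100 rupees to B", sig))   # True: message intact
print(verify(b"transfer 900 rupees to C", sig))   # False: tampered message fails
```

An intruder who changes the message cannot produce a matching signature without the key, which is exactly the integrity property the text describes.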


Digital Certificates
In addition to using the public key, we would like to have a guarantee of talking to a known person. We assume that there is an entity who is trusted by everyone and whose public key is known to everybody. This entity gives a certificate to the sender containing the sender's name, some other information and the sender's public key. This whole information is encrypted with the private key of this trusted entity. A person can decrypt this message using the public key of the trusted authority. But how can we be sure that the public key of the authority is correct? In this respect digital certificates are like I-cards: consider a situation where you go to the bank and need to prove your identity. An I-card is used as proof of your identity, and it contains your signature. So in order to distribute the public key of this authority we use certificates from a higher authority, and so on. Thus we get a tree structure where each node needs the certificates of all nodes above it on the path to the root in order to be trusted. But at some level in the tree the public key needs to be known to everybody, and it should be trusted by everybody too.
Key Exchange in Symmetric Key Schemes (without public keys)
We will now look at key exchange in symmetric key schemes where public key encryption cannot be used, so encryption using public and private keys is not possible. We will see how the symmetric key is exchanged in this scenario. The two people who are communicating do not want others to understand what they are talking about, so they would use a language that others possibly do not understand.
But they have to decide upon a common language; that is, the data has to be encrypted with some key that is somehow made known to the other person. Key exchange in symmetric key schemes is a tricky business, because anyone snooping on the exchange can get hold of the key if we are not careful, and since there is no public-private key arrangement here, he can then obtain full control over the communication. There are various approaches to the foolproof exchange of keys in these schemes. We look at one approach, which is as follows.
Diffie-Hellman Key Exchange
A and B are two persons wishing to communicate. Both of them generate a random number each, say x and y respectively. There is a function f which has no inverse. Now A sends f(x) to B, and B sends f(y) to A. So now A knows x and f(y), and B knows y and f(x). There is another function g such that g(x, f(y)) = g(y, f(x)). The key used by A is g(x, f(y)) and that used by B is g(y, f(x)); both are actually the same. The implementation of this approach is described below:


1. A has two large numbers n and g (a large prime n and a generator g; there are other conditions that these numbers must satisfy).
2. A sends n, g and g^x mod n to B in a message. B evaluates (g^x mod n)^y to be used as the key.
3. B sends g^y mod n to A. A evaluates (g^y mod n)^x to be used as the key.
So now both parties have the common number g^xy mod n. This is the symmetric (secret communication) key used by both A and B.
This works because even though other people know n, g, g^x mod n and g^y mod n, they still cannot evaluate the key, because they do not know either x or y.
Man in the Middle Attack
However, there is a security problem even then. Though this system cannot be broken, it can be bypassed. The situation we are referring to is called the man-in-the-middle attack. We assume that there is a guy C in between A and B, who has the ability to capture packets and create new packets. When A sends n, g and g^x mod n, C captures them and sends n, g and g^z mod n to B. On receiving this, B sends n, g and g^y mod n, but again C captures these and sends n, g and g^z mod n to A. So A will use the key (g^z mod n)^x and B will use the key (g^z mod n)^y. Both these keys are known to C, and so when a packet comes from A, C decrypts it using A's key, encrypts it with its own key and then sends it to B. Again, when a packet comes from B, it does a similar thing before sending the packet to A. So effectively there are two keys: one operating between A and C, and the other between C and B.


There must be some solution to this problem. The solution can be such that we may not be able to communicate further (because our keys are different), but at least we can prevent C from looking at the data. We have to do something so that C cannot encrypt or decrypt the data. We use a policy that A only sends half a packet at a time. C cannot decrypt half a packet and so it is stuck. A sends the other half only when it receives a half-packet from B. C has two options when it receives half a packet:
1. It does not send the packet to B at all and dumps it. In this case B will anyway come to know that there is some problem, and so it will not send its half-packet.
2. It forwards the half-packet as it is to B. Now when B sends its half-packet, A sends the remaining half. When B decrypts this entire packet it sees that the data is junk, and so it comes to know that there is some problem in the communication.
Here we have assumed that there is some application-level understanding between A and B, like the port number. If A sends a packet at port number 25 and receives a packet at port number 35, then it will come to know that there is some problem. At the very least we have ensured that C cannot read the packets, though it can block the communication.
There is another, much simpler method of exchanging keys, which we now discuss.
Key Distribution Center
There is a central trusted node called the Key Distribution Center (KDC). Every node has a key which is shared between it and the KDC. Since no one else knows A's secret key (KA), the KDC is sure that the message it received has come from A. We show the implementation through this diagram:


When A wants to communicate with B, it sends a message encrypted with its key to the KDC. The KDC then sends a common key to both A and B, encrypted with their respective keys. A and B can then communicate safely using this key.
There is a problem with this implementation also: it is prone to a replay attack. The messages are in encrypted form and hence would not make sense to an intruder, but they may be replayed to the listener again and again, with the listener believing that the messages are from the correct source. To prevent this, we can use:
 Timestamps: which however don't generally work, because of the offset in time between machines; synchronization over the network becomes a problem.
 Nonce numbers: which are like ticket numbers. B accepts a message only if it has not seen this nonce number before.

12.3.3 Key Distribution Centre (Recap)
There is a central trusted node called the Key Distribution Center (KDC). Every node has a key which is shared between it and the KDC. Since no one else knows node A's secret key KA, the KDC is sure that the message it received has come from A. When A wants to communicate with B it could do one of two things:
1. A sends a message encrypted with its key KA to the KDC. The KDC then sends a common key KS to both A and B, encrypted with their respective keys KA and KB. A and B can communicate safely using this key.
2. Otherwise, A sends a key KS to the KDC, saying that it wants to talk to B, encrypted with the key KA. The KDC sends a message to B saying that A wants to communicate with it using KS.
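The first variant above can be simulated with a toy cipher. This sketch uses XOR with a node's long-term key as a stand-in for real symmetric encryption (illustrative only, and not secure): the KDC mints a session key KS and sends one copy encrypted under KA and one under KB.

```python
import os

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(k ^ d for k, d in zip(key, data))

KA = os.urandom(16)   # long-term key shared between A and the KDC
KB = os.urandom(16)   # long-term key shared between B and the KDC

# KDC: generate a session key and encrypt one copy for each party.
KS = os.urandom(16)
for_A = xor_crypt(KA, KS)
for_B = xor_crypt(KB, KS)

# A and B each recover the same session key with their own long-term key.
KS_at_A = xor_crypt(KA, for_A)
KS_at_B = xor_crypt(KB, for_B)
print(KS_at_A == KS_at_B == KS)   # True
```

Only a party holding KA (or KB) can recover KS, which is why the KDC can be sure who it is issuing the session key to; the replay weakness discussed next is untouched by this.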


There is a problem with this implementation: it is prone to replay attack. The messages are in encrypted form and hence would not make sense to an intruder, but they may be replayed to the listener again and again, with the listener believing that the messages are from the correct source. When A sends a message KA(M), C can send the same message to B by using the IP address of A. One solution is to use a key only once. If B sends the first message KA(A, KS) along with KS(M), then again we may have trouble; in case this happens, B should accept packets only with higher sequence numbers. To prevent this, we can use:

- Timestamps, which however don't generally work because of the offset in time between machines; synchronization over the network becomes a problem.
- Nonce numbers, which are like ticket numbers: B accepts a message only if it has not seen this nonce number before.

In general, 2-way handshakes are always prone to attacks, so we now look at another protocol.
Needham-Schroeder Authentication Protocol
This is like a bug-fix to the KDC scheme to eliminate replay attacks. A 3-way handshake (using nonce numbers), very similar to the ubiquitous TCP 3-way handshake, is used between the communicating parties. A sends a random number RA to the KDC. The KDC sends back a ticket to A which contains the common key to be used.

RA, RB and RA2 are nonce numbers. RA is used by A to communicate with the KDC. On getting the appropriate reply from the KDC, A starts communicating with B, whence another nonce

number RA2 is used. The first three messages tell B that the message has come from the KDC and that it has authenticated A. The second-last message authenticates B: the reply from B contains RB, a nonce number generated by B. The last message authenticates A. The last two messages also remove the possibility of replay attack. However, the problem with this scheme is that if somehow an intruder gets to know the key KS (maybe a year later), then he can replay the entire exchange (provided he had stored the packets). One possible solution is for the ticket to contain a timestamp. We could also put a condition that A and B should change the key every month or so. To improve upon the protocol, B should also involve the KDC for authentication. We look at one possible improvement here, which is a different protocol.
Otway-Rees Key Exchange Protocol
Here a connection is initiated first. This is followed by key generation. This ensures greater security. B sends the message sent by A to the KDC, and the KDC verifies that A, B and R in the two messages are the same and that RA and RB have not been used for some time now. It then sends a common key to both A and B.

In real life all protocols will have timestamps. This is because we cannot remember all the random numbers generated in the past: we ignore packets with timestamps older than some limit, so we only need to remember nonces within this limit. Looking at these protocols, we can say that the design of protocols is more of an art than a science. If there is so much trouble in agreeing on a key, should we not simply use the same key for a long time? The key can be typed in manually, exchanged over a telephone, or sent through some other medium.
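The bookkeeping just described, remembering nonces only within a time limit and rejecting anything older or already seen, can be sketched as follows (the `NonceFilter` class and its parameters are hypothetical):

```python
import time

class NonceFilter:
    """Accept a message only if its nonce is unseen and its timestamp is
    within `window` seconds; nonces older than the window are forgotten."""
    def __init__(self, window=300.0):
        self.window = window
        self.seen = {}                         # nonce -> timestamp

    def accept(self, nonce, ts, now=None):
        now = time.time() if now is None else now
        # forget nonces outside the window -- we need not remember them forever
        self.seen = {n: t for n, t in self.seen.items() if now - t < self.window}
        if abs(now - ts) >= self.window or nonce in self.seen:
            return False                       # stale timestamp or replayed nonce
        self.seen[nonce] = ts
        return True

f = NonceFilter(window=300)
assert f.accept(17, ts=1000.0, now=1000.0)      # fresh message
assert not f.accept(17, ts=1000.0, now=1001.0)  # same nonce replayed: rejected
assert not f.accept(99, ts=0.0, now=1000.0)     # timestamp outside the window
```

The window bounds memory use: a replayed packet older than the window is already rejected by its timestamp, so its nonce can safely be forgotten.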

Challenge-Response Protocol
Suppose nodes A and B have a shared key KAB which was somehow pre-decided between them. Can we have secure communication between A and B? We must have some kind of three-way handshake to avoid replay attack, so we need some interaction before we start sending the data. A challenges B by sending it a random number RA and expects an encrypted reply using the pre-decided key KAB. B then challenges A by sending it a random number RB and expects an encrypted reply using the pre-decided key KAB.

    A                          B
    1. A, RA    ------------->
    2.          <------------- KAB(RA), RB
    3. KAB(RB)  ------------->

Unfortunately this scheme is so simple that it will not work. This protocol works on the assumption that there is a unique connection between A and B. If multiple connections are possible, then this protocol fails. In a replay attack, we could repeat the message KAB(M) if we can somehow convince B that I am A. Here, a node C need not know the shared key to communicate with B. To identify itself as A, C just needs to send KAB(RB1) as the response to the challenge value RB1 given by B in the first connection. C can remarkably get this value through a second connection, by asking B itself to provide the response to its own challenge. Thus, C can verify itself and start communicating freely with B, and replay of messages becomes possible using the second connection. Any encryption desired can be obtained by sending the value as RB2 in the second connection and obtaining its encrypted value from B itself.

    C (posing as A)              B
    1st connection: A, RA      ------------->
                               <------------- KAB(RA), RB1
    2nd connection: A, RB1     ------------->
                               <------------- KAB(RB1), RB2
    1st connection: KAB(RB1)   ------------->
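The reflection attack just described can be simulated with a toy version of the protocol, using a keyed MAC as a stand-in for KAB(x); all names here are illustrative:

```python
import hmac, hashlib

KAB = b"key-shared-by-A-and-B"

def mac(key: bytes, data: bytes) -> bytes:
    # stand-in for KAB(x): a keyed MAC over the challenge value
    return hmac.new(key, data, hashlib.sha256).digest()

class NodeB:
    """B in the 3-message protocol: issues a fresh challenge per connection
    and encrypts any challenge it is itself given."""
    def __init__(self):
        self.counter = 0
    def open_connection(self) -> bytes:
        self.counter += 1
        return b"RB-%d" % self.counter
    def answer(self, challenge: bytes) -> bytes:
        return mac(KAB, challenge)

b = NodeB()
rb1 = b.open_connection()   # 1st connection: B challenges "A" (really C) with RB1
b.open_connection()         # C opens a 2nd connection...
forged = b.answer(rb1)      # ...sends RB1 as *its* challenge, and B answers it
# C replays B's own answer on the 1st connection; it matches what B expects:
assert forged == mac(KAB, rb1)
```

C never touches KAB: B is tricked into computing the response to its own challenge.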

Can we have a simple solution apart from timestamps? We could send KAB(RA, RB) in the second message instead of KAB(RA) and RB. It may also help to keep two different keys for the two directions, so we share one key from A to B and another from B to A. If we use only one key, then we could use different number spaces (like even and odd) for the two directions; then A would not be able to send RB. So basically we are trying to treat the traffic in the two directions as two different traffics. This particular type of attack is called a reflection attack.
5-way handshake
The person who initiates the connection should authenticate himself first, so we look at another protocol. Here we use a 5-way handshake, but it is secure. When we combine the messages, we change the order of authentication, which is what leads to problems: basically KAB(RB) should be sent before KAB(RA). If we have a node C in the middle, then C can pose as B and talk to A, and so C can mount a replay attack using messages from exchanges it started some time ago.

    A                          B
    1. A        ------------->
    2.          <------------- RB
    3. KAB(RB)  ------------->
    4. RA       ------------->
    5.          <------------- KAB(RA)

On initiating a connection, B challenges A by sending it a random number RB and expects an encrypted reply using the pre-decided key KAB. When A sends back KAB(RB), B becomes sure that it is talking to the correct A, since only A knows the shared key. Now A challenges B by sending it a random number RA and expects an encrypted reply using the pre-decided key KAB. When B sends back KAB(RA), A becomes sure that it is talking to the correct B, since only B knows the shared key.
Check Your Progress
1. Explain the various issues of network security.
12.4 SNMP
12.4.1 SNMP (Simple Network Management Protocol)
A large network can often get into various kinds of trouble due to routers (dropping too many packets), hosts (going down), etc. One has to keep track of all these occurrences and adapt to such situations. A protocol has been defined for this purpose. Under this scheme all entities in the network belong to four classes:

1. Managed nodes
2. Management stations
3. Management information (called objects)
4. A management protocol

The managed nodes can be hosts, routers, bridges, printers or any other device capable of communicating status information to others. To be managed directly by SNMP, a node must be capable of running an SNMP management process, called an SNMP agent. Network management is done by management stations by exchanging information with the nodes. These are basically general-purpose computers running special management software. The management stations poll the nodes periodically. Since SNMP uses the unreliable service of UDP, this polling is essential to keep in touch with the nodes. Often a node sends a trap message indicating that it is going to go down; the management station then checks it periodically with an increased frequency. This type of polling is called trap-directed polling. Often a group of nodes is represented by a single node which communicates with the management stations. This type of node is called a proxy agent. The proxy agent can also serve as a security arrangement. All the variables in this scheme are called objects. Each variable can be referenced by a specific addressing scheme adopted by this system. The entire collection of all objects is called the Management Information Base (MIB). The addressing is hierarchical.

Within this hierarchy, the internet subtree is addressed 1.3.6.1; all the objects under this domain have this prefix at the beginning. The information is exchanged in a standard and vendor-neutral way. All the data are represented in Abstract Syntax Notation One (ASN.1). It is similar to XDR as used in RPC, but it has a widely different representation scheme. A part of it was actually adopted in SNMP and modified to form the Structure of Management Information. The protocol specifies various kinds of messages that can be exchanged between the managed nodes and the management station. The last two options were actually added in SNMPv2, and the fourth option needs some kind of authentication from the management station.
Addressing example: the following is an example of the kind of address one can refer to when fetching a value in a table:

    IP-Addr-Table (20) = SEQUENCE OF IPAddrEntry
    IPAddrEntry (1)    = SEQUENCE {
        IPAddEntryAddr : IPAddr  (1)
        Index          : INTEGER (2)
        Netmask        : IPAddr  (3)
    }

The Simple Network Management Protocol (SNMP) forms part of the Internet protocol suite as defined by the Internet Engineering Task Force (IETF). SNMP is used in network management systems to monitor network-attached devices for conditions that warrant administrative attention. It consists of a set of standards for network management, including an Application Layer protocol, a database schema, and a set of data objects. SNMP exposes management data in the form of variables on the managed systems, which describe the system configuration. These variables can then be queried (and sometimes set) by managing applications.
12.4.2 Overview and basic concepts
In typical SNMP usage, there are a number of systems to be managed, and one or more systems managing them. A software component called an agent (see below) runs on each managed system and reports information via SNMP to the managing systems. Essentially, SNMP agents expose management data on the managed systems as variables (such as "free memory", "system name", "number of running processes", "default route"). But the protocol also permits active management tasks, such as modifying and applying a new configuration. The managing system can retrieve the information through the GET, GETNEXT and GETBULK protocol operations, or the agent will send data without being asked using the TRAP or INFORM protocol operations. Management systems can also send configuration updates or controlling requests through the SET protocol operation to actively manage a system. Configuration and control operations are used only when changes are needed to the network infrastructure; the monitoring operations are usually performed on a regular basis. The variables accessible via SNMP are organized in hierarchies. These hierarchies, and other metadata (such as the type and description of each variable), are described by Management Information Bases (MIBs). SNMP is part of the Internet network management architecture.
This architecture is based on the interaction of many entities, as described in the following section.
12.4.3 The Internet Management Model
As specified in Internet RFCs and other documents, a network management system comprises:
- Network elements -- Sometimes called managed devices, network elements are hardware devices such as computers, routers, and terminal servers that are connected to networks.
- Agents -- Agents are software modules that reside in network elements. They collect and store management information such as the number of error packets received by a network element.
- Managed objects -- A managed object is a characteristic of something that can be managed. For example, a list of currently active TCP circuits in a particular host computer is a managed object. Managed objects differ from variables, which are particular object instances. Using our example, an object instance is a single active TCP circuit in a particular host computer. Managed objects can be scalar (defining a single object instance) or tabular (defining multiple, related instances).
- Management information base (MIB) -- A MIB is a collection of managed objects residing in a virtual information store. Collections of related managed objects are defined in specific MIB modules.
- Syntax notation -- A syntax notation is a language used to describe a MIB's managed objects in a machine-independent format. Consistent use of a syntax notation allows different types of computers to share information. Internet management systems use a subset of the International Organization for Standardization's (ISO's) Open System

Interconnection (OSI) Abstract Syntax Notation (ASN.1) to define both the packets exchanged by the management protocol and the objects that are to be managed.
- Structure of Management Information (SMI) -- The SMI defines the rules for describing management information. The SMI is defined using ASN.1.
- Network management stations (NMSs) -- Sometimes called consoles, these devices execute management applications that monitor and control network elements. Physically, NMSs are usually engineering-workstation-caliber computers with fast CPUs, megapixel color displays, substantial memory, and abundant disk space. At least one NMS must be present in each managed environment.
- Parties -- Newly defined in SNMPv2, a party is a logical SNMPv2 entity that can initiate or receive SNMPv2 communication. Each SNMPv2 party comprises a single, unique party identity, a logical network location, a single authentication protocol, and a single privacy protocol. SNMPv2 messages are communicated between two parties. An SNMPv2 entity can define multiple parties, each with different parameters. For example, different parties can use different authentication and/or privacy protocols.
- Management protocol -- A management protocol is used to convey management information between agents and NMSs. SNMP is the Internet community's de facto standard management protocol.

Management Information Bases (MIBs)
SNMP itself does not define which information (which variables) a managed system should offer. Rather, SNMP uses an extensible design, where the available information is defined by management information bases (MIBs). MIBs describe the structure of the management data of a device subsystem; they use a hierarchical namespace containing object identifiers (OIDs). Each OID identifies a variable that can be read or set via SNMP. MIBs use the notation defined by ASN.1. The MIB hierarchy can be depicted as a tree with a nameless root, the levels of which are assigned by different organizations. The top-level MIB OIDs belong to different standards organizations, while lower-level object IDs are allocated by associated organizations. This model permits management across all layers of the OSI reference model, extending into applications such as databases, email, and the Java EE reference model, as MIBs can be defined for all such area-specific information and operations. A managed object (sometimes called a MIB object, an object, or a MIB) is one of any number of specific characteristics of a managed device. Managed objects are made up of one or more object instances (identified by their OIDs), which are essentially variables. Two types of managed objects exist:
- Scalar objects define a single object instance.
- Tabular objects define multiple related object instances that are grouped in MIB tables.
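The hierarchical OID namespace can be modelled as dotted strings compared as integer tuples, which is also the lexicographic order in which a GETNEXT walk visits instances. A sketch with a two-entry toy MIB follows; the OIDs are the standard sysDescr.0 and sysUpTime.0 instances, but the values are made up:

```python
def parse_oid(s: str):
    # "1.3.6.1" -> (1, 3, 6, 1); tuples of ints compare in OID tree order
    return tuple(int(part) for part in s.strip(".").split("."))

def get_next(mib: dict, oid: str):
    """Return the first (oid, value) lexicographically after `oid`,
    the order in which a GETNEXT walk traverses the tree."""
    target = parse_oid(oid)
    for parsed, key in sorted((parse_oid(k), k) for k in mib):
        if parsed > target:      # tuple comparison == lexicographic OID order
            return key, mib[key]
    return None                  # walked past the end of this MIB view

mib = {
    "1.3.6.1.2.1.1.1.0": "toy sysDescr value",
    "1.3.6.1.2.1.1.3.0": 123456,   # toy sysUpTime, hundredths of a second
}
# asking for the subtree 1.3.6.1.2.1.1 yields its first instance:
assert get_next(mib, "1.3.6.1.2.1.1") == ("1.3.6.1.2.1.1.1.0", "toy sysDescr value")
assert get_next(mib, "1.3.6.1.2.1.1.1.0")[0] == "1.3.6.1.2.1.1.3.0"
```

This ordering is what lets a manager walk an entire table it has never seen, by repeatedly feeding the last returned OID back into GETNEXT.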

An example of a managed object is atInput, a scalar object that contains a single object instance: the integer value that indicates the total number of input AppleTalk packets on a router interface. An object identifier (object ID or OID) uniquely identifies a managed object in the MIB hierarchy.
Abstract Syntax Notation One (ASN.1)
In telecommunications and computer networking, Abstract Syntax Notation One (ASN.1) is a standard and flexible notation that describes data structures for representing, encoding, transmitting, and decoding data. It provides a set of formal rules for describing the structure of

objects that are independent of machine-specific encoding techniques, and is a precise, formal notation that removes ambiguities. ASN.1 is a joint ISO and ITU-T standard, originally defined in 1984 as part of CCITT X.409:1984. ASN.1 moved to its own standard, X.208, in 1988 due to its wide applicability. The substantially revised 1995 version is covered by the X.680 series. An adapted subset of ASN.1, the Structure of Management Information (SMI), is specified in SNMP to define sets of related MIB objects; these sets are termed MIB modules.
SNMP basic components
An SNMP-managed network consists of three key components:
- Managed devices
- Agents
- Network-management systems (NMSs)

A managed device is a network node that contains an SNMP agent and that resides on a managed network. Managed devices collect and store management information and make this information available to NMSs using SNMP. Managed devices, sometimes called network elements, can be any type of device including, but not limited to, routers, access servers, switches, bridges, hubs, IP telephones, computer hosts, and printers. An agent is a network-management software module that resides in a managed device. An agent has local knowledge of management information and translates that information into a form compatible with SNMP. A network management system (NMS) executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network.
12.4.4 Protocol details
SNMPv1 and SMI-specific data types
The SNMPv1 SMI specifies the use of a number of SMI-specific data types, which are divided into two categories:
- Simple data types
- Application-wide data types

Simple data types
Three simple data types are defined in the SNMPv1 SMI, all of which are unique values:
- The integer data type is a signed integer in the range of -2^31 to 2^31 - 1.
- Octet strings are ordered sequences of 0 to 65,535 octets.
- Object IDs come from the set of all object identifiers allocated according to the rules specified in ASN.1.

Application-wide data types
The following seven application-wide data types exist in the SNMPv1 SMI:


- Network addresses represent addresses from a particular protocol family. SMIv1 supports only 32-bit (IPv4) addresses and has an explicit IPv4 address data type. (SMIv2 uses octet strings to represent addresses generically, and these are usable in SMIv1 too.)
- Counters are non-negative integers that increase until they reach a maximum value and then roll over to zero. SNMPv1 specifies a counter size of 32 bits.
- Gauges are non-negative integers that can increase or decrease between specified minimum and maximum values. Whenever the system property represented by the gauge is outside that range, the value of the gauge itself will vary no further than the respective maximum or minimum, as specified in RFC 2578.
- Time ticks represent time since some event, measured in hundredths of a second.
- Opaques represent an arbitrary encoding that is used to pass arbitrary information strings that do not conform to the strict data typing used by the SMI.
- Integers represent signed integer-valued information. This data type redefines the integer data type, which has arbitrary precision in ASN.1 but bounded precision in the SMI.
- Unsigned integers represent unsigned integer-valued information, which is useful when values are always non-negative. This data type redefines the integer data type, which has arbitrary precision in ASN.1 but bounded precision in the SMI.
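The counter and gauge behaviours above differ only in what happens at the limits: counters wrap, gauges latch. A few lines of Python make that concrete (toy classes, not part of any SNMP library):

```python
class Counter32:
    """Counter semantics: monotonically increasing, wraps to zero past 2**32 - 1."""
    def __init__(self):
        self.value = 0
    def bump(self, n: int = 1):
        self.value = (self.value + n) % 2**32

class Gauge32:
    """Gauge semantics: clamps at the configured minimum/maximum, never wraps."""
    def __init__(self, lo: int = 0, hi: int = 2**32 - 1):
        self.lo, self.hi, self.value = lo, hi, lo
    def set(self, v: int):
        self.value = min(self.hi, max(self.lo, v))

c = Counter32()
c.value = 2**32 - 1
c.bump()
assert c.value == 0        # rolled over, as the text describes

g = Gauge32(hi=100)
g.set(250)
assert g.value == 100      # latched at the maximum, not wrapped
```

The distinction matters to a manager: a counter's absolute value is meaningless on its own (rates are computed from differences, allowing for rollover), while a gauge's value is directly meaningful.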

SNMPv1 MIB tables
The SNMPv1 SMI defines highly structured tables that are used to group the instances of a tabular object (that is, an object that contains multiple variables). Tables are composed of zero or more rows, which are indexed in a way that allows SNMP to retrieve or alter an entire row with a single Get, GetNext, or Set command.
SNMPv2 and the structure of management information
The SNMPv2 SMI is described in RFC 2578. It makes certain additions and enhancements to the SNMPv1 SMI-specific data types, such as bit strings, network addresses, and counters. Bit strings are defined only in SNMPv2 and comprise zero or more named bits that specify a value. Network addresses represent an address from a particular protocol family. Counters are non-negative integers that increase until they reach a maximum value and then return to zero. In SNMPv1, a 32-bit counter size is specified; in SNMPv2, 32-bit and 64-bit counters are defined.

The SNMP protocol operates at the application layer (layer 7) of the OSI model. It specifies (in version 1) five core protocol data units (PDUs):
- GET REQUEST - used to retrieve a piece of management information.
- GETNEXT REQUEST - used iteratively to retrieve sequences of management information.
- GET RESPONSE - used by the agent to respond with data to get and set requests from the manager.
- SET REQUEST - used to initialize and make a change to a value of the network element.
- TRAP - used to report an alert or other asynchronous event about a managed subsystem. In SNMPv1, asynchronous event reports are called traps, while they are called notifications in later versions of SNMP. In SMIv1 MIB modules, traps are defined using the TRAP-TYPE macro; in SMIv2 MIB modules, traps are defined using the NOTIFICATION-TYPE macro.
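A toy dispatch over a dictionary "MIB" illustrates how an agent might serve some of these PDUs. A real agent exchanges BER-encoded messages over UDP, so everything here is a simplification with hypothetical names:

```python
import queue

traps = queue.Queue()   # agent -> manager notifications; sent unsolicited

def handle_pdu(mib: dict, pdu: str, oid: str, value=None):
    """Toy dispatch for GET REQUEST / SET REQUEST against a dict 'MIB'."""
    if pdu == "GET":
        return ("GET RESPONSE", oid, mib.get(oid, "noSuchName"))
    if pdu == "SET":
        mib[oid] = value
        return ("GET RESPONSE", oid, value)
    raise ValueError("unsupported PDU: " + pdu)

def send_trap(event: str):
    traps.put(("TRAP", event))   # asynchronous: no request precedes it

mib = {"1.3.6.1.2.1.1.5.0": "router-1"}   # hypothetical sysName.0 instance
assert handle_pdu(mib, "GET", "1.3.6.1.2.1.1.5.0")[2] == "router-1"
handle_pdu(mib, "SET", "1.3.6.1.2.1.1.5.0", "router-2")
send_trap("linkDown")
assert mib["1.3.6.1.2.1.1.5.0"] == "router-2"
assert traps.get() == ("TRAP", "linkDown")
```

Note the asymmetry the PDU list implies: GET and SET are request/response pairs driven by the manager, while a trap is pushed by the agent with no request at all.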

Other PDUs were added in SNMPv2, including:
- GETBULK REQUEST - a faster iterator used to retrieve sequences of management information.


- INFORM - similar to a TRAP, but the receiver must respond with an acknowledgement RESPONSE message.
- REPORT - definable by an administrative framework.

Typically, SNMP uses UDP port 161 for the agent and port 162 for the manager. The manager may send requests from any available (source) port to port 161 on the agent (the destination port); the agent's response is sent back to that source port. The manager receives traps on port 162, and the agent may generate traps from any available port. Many installations change these defaults, however, so this is not necessarily always true.
SNMPv2 SMI information modules
The SNMPv2 SMI also specifies information modules, which specify a group of related definitions. Three types of SMI information modules exist: MIB modules, compliance statements, and capability statements.
- MIB modules contain definitions of interrelated managed objects.
- Compliance statements provide a systematic way to describe a group of managed objects that must be implemented for conformance to a standard.
- Capability statements are used to indicate the precise level of support that an agent claims with respect to a MIB group. An NMS can adjust its behavior toward agents according to the capability statements associated with each agent.
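The port behaviour described a few paragraphs above, with requests going to the agent's well-known port and replies returning to whatever source port the manager used, can be demonstrated with plain UDP sockets. Unprivileged ephemeral ports stand in for 161/162 here, and the message strings are made up:

```python
import socket

# The agent listens on a well-known port (161 in real SNMP; an unprivileged
# ephemeral port here) and replies to whatever source port the manager used.
agent = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
agent.bind(("127.0.0.1", 0))            # stand-in for UDP port 161
agent.settimeout(5.0)
agent_port = agent.getsockname()[1]

manager = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
manager.settimeout(5.0)
manager.sendto(b"GET sysName.0", ("127.0.0.1", agent_port))

request, manager_addr = agent.recvfrom(1024)      # request reaches the "agent"
agent.sendto(b"RESPONSE router-1", manager_addr)  # reply goes to the source port

reply, _ = manager.recvfrom(1024)
assert reply == b"RESPONSE router-1"
```

Because the reply is addressed to the request's source address, the manager needs no listening port of its own for polling; a fixed listening port (162) is only needed for unsolicited traps.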

SNMPv3
SNMPv3 is defined by RFC 3411–RFC 3418 (also known as STD 62). SNMPv3 primarily added security and remote configuration enhancements to SNMP. SNMPv3 has been the current standard version of SNMP since 2004. The IETF has designated SNMPv3 a full Internet Standard, the highest maturity level for an RFC, and considers earlier versions to be obsolete (designating them "Historic"). In December 1997 the "Simple Times" newsletter published several articles, written by the SNMPv3 RFC editors, explaining some of the ideas behind the version 3 specifications.

SNMPv3 provides important security features:
- Message integrity, to ensure that a packet has not been tampered with in transit.
- Authentication, to verify that the message is from a valid source.
- Encryption of packets, to prevent snooping by an unauthorized source.

12.4.5 Development and usage
Version 1
SNMP version 1 (SNMPv1) is the initial implementation of the SNMP protocol. SNMPv1 operates over protocols such as the User Datagram Protocol (UDP), the Internet Protocol (IP), the OSI Connectionless Network Service (CLNS), the AppleTalk Datagram Delivery Protocol (DDP), and Novell Internet Packet Exchange (IPX). SNMPv1 is widely used and is the de facto network-management protocol in the Internet community. The first RFCs for SNMP, now known as SNMPv1, appeared in 1988:

- RFC 1065 — Structure and identification of management information for TCP/IP-based internets
- RFC 1066 — Management information base for network management of TCP/IP-based internets
- RFC 1067 — A simple network management protocol
These protocols were obsoleted by:
- RFC 1155 — Structure and identification of management information for TCP/IP-based internets
- RFC 1156 — Management information base for network management of TCP/IP-based internets
- RFC 1157 — A simple network management protocol

After a short time, RFC 1156 (MIB-1) was replaced by the more often used:
- RFC 1213 — Version 2 of the management information base (MIB-2) for network management of TCP/IP-based internets

Version 1 has been criticized for its poor security. Authentication of clients is performed only by a "community string", in effect a type of password, which is transmitted in clear text. The 1980s design of SNMPv1 was done by a group of collaborators who viewed the officially sponsored OSI/IETF/NSF (National Science Foundation) effort (HEMS/CMIS/CMIP) as both unimplementable on the computing platforms of the time and potentially unworkable. SNMP was approved based on a belief that it was an interim protocol needed for taking steps towards large-scale deployment of the Internet and its commercialization. In that time period, Internet-standard authentication and security were both a dream and discouraged by focused protocol design groups.
Version 2
SNMPv2 (RFC 1441–RFC 1452) revises version 1 and includes improvements in the areas of performance, security, confidentiality, and manager-to-manager communications. It introduced GETBULK, an alternative to iterative GETNEXTs for retrieving large amounts of management data in a single request. However, the new party-based security system in SNMPv2, viewed by many as overly complex, was not widely accepted.
Community-based Simple Network Management Protocol version 2, or SNMPv2c, is defined in RFC 1901–RFC 1908. In its initial stages, this was also informally known as SNMP v1.5. SNMPv2c comprises SNMPv2 without the controversial new SNMPv2 security model, using instead the simple community-based security scheme of SNMPv1. While officially only a "Draft Standard", this is widely considered the de facto SNMPv2 standard.
User-based Simple Network Management Protocol version 2, or SNMPv2u, is defined in RFC 1909–RFC 1910. This is a compromise that attempts to offer greater security than SNMPv1, but without incurring the high complexity of SNMPv2. A variant of this was commercialized as SNMP v2*, and the mechanism was eventually adopted as one of the two security frameworks in SNMPv3.
SNMPv1 & SNMPv2c interoperability
As presently specified, SNMPv2 is incompatible with SNMPv1 in two key areas: message formats and protocol operations. SNMPv2c messages use different header and protocol data unit (PDU) formats than SNMPv1 messages. SNMPv2c also uses two protocol operations that are not specified in SNMPv1. Furthermore, RFC 1908 defines two possible SNMPv1/v2c coexistence strategies: proxy agents and bilingual network-management systems.

Bilingual network-management system
Bilingual SNMPv2 network-management systems support both SNMPv1 and SNMPv2. To support this dual-management environment, a management application in the bilingual NMS must contact an agent. The NMS then examines information stored in a local database to determine whether the agent supports SNMPv1 or SNMPv2, and communicates with the agent using the appropriate version of SNMP.
Check Your Progress
2. Write about the two kinds of encryption in security.
12.5 Summary
This lesson described WWW security and the basic concepts, development and usage of the Simple Network Management Protocol. Simple Mail Transfer Protocol (SMTP) is the de facto standard for email transmission across the Internet; it is very common for email software to use SMTP to send mail and POP3 to receive it. Data on the network is analogous to the possessions of a person: it has to be kept secure from others with malicious intent.

12.6 Keywords
Timestamps: these don't generally work well because of the offset in time between machines; synchronization over the network becomes a problem.
Nonce numbers: these are like ticket numbers; B accepts a message only if it has not seen this nonce number before.
Gauges: non-negative integers that can increase or decrease between specified minimum and maximum values.
Opaques: an arbitrary encoding used to pass arbitrary information strings that do not conform to the strict data typing used by the SMI.
12.8 Check Your Progress
Note: Use the space provided below for your answers. Compare your answers with those given at the end.
1. Explain the various issues of network security.
…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………
2. Write about the two kinds of encryption in security.
…………………………………………………………………………………………………………………
…………………………………………………………………………………………………………………

Answers to Check Your Progress
1. The various issues in network security are as follows:
- Authentication
- Integrity
- Confidentiality
- Non-repudiation
- Authorization

2. Symmetric key encryption: there is a single key which is shared between the two users, and the same key is used for encrypting and decrypting the message.
Public key encryption: there are two keys with each user: a public key and a private key. The public key of a user is known to all, but the private key is not known to anyone except the owner of the key.


UNIT QUESTIONS
UNIT-1 Questions
1. Explain briefly about the communication model.
2. What are the characteristics of communication?
3. Discuss the communication protocols.
4. Explain the types of communication networks.
5. What are the types of communication interaction?
6. What are the communication modes?

UNIT-2 Questions
1. Write about data compression.
2. Write a short note on data encryption.
3. What is data transmission?
4. Explain the transmission media techniques.
5. Write about digital-to-analog signals.
6. Explain the encoding techniques.

UNIT-3 Questions
1. What are the service primitives of a protocol?
2. Define computer network.
3. Explain briefly the OSI reference model.
4. Describe the TCP/IP model.
5. Discuss in detail the network topologies and the types of topologies.

UNIT-4 Questions
1. List out the types of logical topologies.
2. Write about hybrid network topologies.
3. Explain in detail the general description of Ethernet.
4. Differentiate Ethernet and Fast Ethernet.
5. Write a brief note on the modes of operation of Token Ring.

UNIT-5

Questions
1. What are the benefits of a wireless LAN?
2. Write the advantages and disadvantages of wireless LANs.
3. Explain the types of wireless LANs with a neat sketch.
4. Differentiate bridging and routing.
5. What are the advantages and disadvantages of network bridges?

UNIT-6 Questions
1. Give a brief introduction to the network layer.
2. Define network.
3. Explain the concept of switching methods.
4. Describe the circuit switching model.
5. Discuss in detail the packet switching model.
6. Write about path selection.
7. Write a brief note on routing.
8. Explain the relation of X.25 to the OSI reference model.

UNIT-7 Questions
1. Explain in detail subnetting and sublayers.
2. What is a routing algorithm? Explain with an example.
3. Describe in detail the Internet Control Message Protocol.
4. What is congestion control?
5. Explain the working of a connectionless unreliable service protocol.
6. What are the classifications of routing algorithms?
7. Write a brief note on delta routing.

UNIT-8 Questions
1. Give a brief introduction to the transport layer.
2. Write a short note on the Transmission Control Protocol.
3. List out the types of networks.
4. What are the classes of protocol in the network types?
5. Define multiplexing.
6. What is splitting and recombining?
7. What is the use of congestion control in the transport layer?
8. What are the classifications of congestion control algorithms?

UNIT-9 Questions
1. Write a short note on bandwidth management.
2. What is the TCP congestion avoidance algorithm?
3. What are the salient features of TCP?


4. Explain in detail the Transmission Control Protocol.
5. Write a brief note on the User Datagram Protocol.
6. What is the TCP segment structure?
7. Differentiate between the Transmission Control Protocol and the User Datagram Protocol.

UNIT-10 Questions
1. Give a brief introduction to the session and presentation layers.
2. Explain the session layer protocol.
3. What is synchronization? Write its types.
4. Explain in detail the presentation layer.
5. Describe briefly the uses and structure of the Domain Name System (DNS).
6. Write the protocol details of DNS.
7. Write about security in DNS.

UNIT-11 Questions
1. Discuss in detail TELNET.
2. Write a short note on connection methods and data format in TELNET.
3. Describe the SMTP server.
4. Explain Extended SMTP.
5. What is a digital signature?
6. Explain the concepts of SNMP (Simple Network Management Protocol).

UNIT-12 Questions
1. Write the overview and basic concepts of SNMP.
2. Explain the Internet protocol management model in SNMP.
3. Describe the protocol details of SNMP.
4. What are the uses and development of SNMP?
5. What are the various issues of network security?
6. List out the types of synchronization with descriptions.
7. Differentiate DNS and FTP.

-------------------------------------------------THE END------------------------------------------------------



COMPUTER COMMUNICATION & NETWORKS

Published by
Institute of Management & Technical Studies
Address: E-41, Sector-3, Noida (U.P.)
www.imtsinstitute.com | Contact: +91-9210989898

