Vikram Narayandas, P.Leelavathi, P.Ravinder Kumar, S.Pradeep / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 2, Issue 5, September- October 2012, pp.315-321
A Survey and Analysis Study on Host-to-Host Congestion Control for TCP Data Transmission

1 Vikram Narayandas, 2 P. Leelavathi, 3 P. Ravinder Kumar, 4 S. Pradeep

1 Associate Professor, Department of CSE, Jayaprakash Narayan College of Engineering, Mahabubnagar, Andhra Pradesh.
2 Assistant Professor, Department of CSE, Sree Visheshwariah College of Engineering, Mahabubnagar, Andhra Pradesh.
3 Associate Professor, Department of ECE, Jayaprakash Narayan College of Engineering, Mahabubnagar, Andhra Pradesh.
4 Assistant Professor, Department of IT, Jayaprakash Narayan College of Engineering, Mahabubnagar, Andhra Pradesh.
Abstract
The Transmission Control Protocol (TCP) carries most Internet traffic, so the performance of the Internet depends to a great extent on how well TCP works. The performance characteristics of a particular version of TCP are defined by the congestion control algorithm it employs. This paper presents a survey of various congestion control proposals that preserve the original host-to-host idea of TCP, namely that neither sender nor receiver relies on any explicit notification from the network. The proposed solutions focus on a variety of problems, starting with the basic problem of eliminating the phenomenon of congestion collapse, and also include the problems of effectively using the available network resources in different types of environments (wired, wireless, high-speed, long-delay, etc.). In a shared, highly distributed, and heterogeneous environment such as the Internet, effective network use depends not only on how well a single TCP-based application can utilize the network capacity, but also on how well it cooperates with other applications transmitting data through the same network. There have been enhancements allowing senders to detect packet losses and route changes quickly. Other techniques can estimate the loss rate, the bottleneck buffer size, and the level of congestion. The survey describes each congestion control alternative, its strengths, and its weaknesses.
Index Terms: TCP, congestion control, congestion collapse, packet reordering in TCP, wireless high-speed TCP, data transmission in TCP.
1. INTRODUCTION
In spite of the rapid growth of the Internet population and an explosive increase in traffic demand, the Internet is still working without collapse. Of course, a continuous effort to increase link bandwidth and router processing capacity supports Internet growth behind the scenes. However, the most essential mechanism for achieving this success is the congestion control provided by a transport-layer protocol, i.e., TCP (Transmission Control Protocol). In TCP, each end host controls its packet transmission rate by changing its window size in response to network congestion. A key point is that TCP congestion control is performed in a distributed fashion; each end host determines its window size by itself, according to information obtained from the network. In general, there are two major objectives of a congestion control mechanism. One is to avoid the occurrence of network congestion, and to dissolve the congestion if its occurrence cannot be avoided. The other is to provide fair service to connections. Keeping fairness among multiple homogeneous/heterogeneous connections in the network is an essential feature for the network to be widely accepted. Fair service also involves detecting misbehaving flows which do not properly react to network congestion and unfairly occupy network resources (such as router buffers and link bandwidth). In earlier approaches, the sender transmitted packets without any intermediate coordination: many data packets were lost, time was wasted, and retransmission of data packets was difficult. The standard already requires receivers to report the sequence number of the last in-order delivered data packet each time a packet is received, even if it is received out of order. For example, in response to the data packet sequence 5, 6, 7, 10, 11, 12, the receiver will ACK the packet sequence 5, 6, 7, 7, 7, 7. In the idealized case, the absence of reordering guarantees that an out-of-order delivery occurs only if some packet has been lost. Thus, if the sender sees several ACKs carrying the same sequence number (duplicate ACKs), it can be sure that the network has failed to deliver some data and can act accordingly. In the proposed system, we have designed, developed, and implemented a compromised-router detection protocol that dynamically infers, based on measured traffic rates and buffer sizes, the number of congestive packet losses that will occur. We then review the host-to-host congestion control proposals that build a foundation for all currently known host-to-host algorithms.
This foundation includes: 1) the basic principle of probing the available network resources; 2) loss-based and delay-based techniques to estimate the congestion state in the network; and 3) techniques to detect packet losses quickly.
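The probing principle in item 1 can be illustrated with a minimal sketch of the standard additive-increase/multiplicative-decrease window dynamics. This is a simplification for illustration only (real TCP senders also handle timeouts, pacing, and recovery states); the class and parameter names are ours.

```python
class AimdWindow:
    """Minimal AIMD congestion window sketch: how a TCP sender probes
    for available capacity.  Not a full TCP implementation."""

    def __init__(self, ssthresh=16):
        self.cwnd = 1.0           # congestion window, in packets (MSS units)
        self.ssthresh = ssthresh  # slow start / congestion avoidance boundary

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0              # slow start: +1 per ACK, doubling per RTT
        else:
            self.cwnd += 1.0 / self.cwnd  # congestion avoidance: ~+1 packet per RTT

    def on_loss(self):
        self.ssthresh = max(self.cwnd / 2, 2.0)  # remember half the window
        self.cwnd = self.ssthresh                # multiplicative decrease
```

The sender keeps increasing its window until a loss signals that the network's capacity has been exceeded, then halves it, which is the probing cycle the foundation refers to.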
The TCP standard specifies a sliding-window-based flow control. This flow control has several mechanisms. First, the sender buffers all data before transmission, assigning a sequence number to each buffered byte. Continuous blocks of the buffered data are packetized into TCP packets, each of which includes the sequence number of the first data byte in the packet. Second, a portion (window) of the prepared packets is transmitted to the receiver using the IP protocol. As soon as the sender receives delivery confirmation for at least one data packet, it transmits a new portion of packets. Finally, the sender holds responsibility for a data block until the receiver explicitly confirms delivery of the block. As a result, the sender may eventually decide that a particular unacknowledged data block has been lost and start recovery procedures (e.g., retransmit one or several packets). To acknowledge data delivery, the receiver forms an ACK packet that carries one sequence number and (optionally) several pairs of sequence numbers. The former, a cumulative ACK, indicates that all data blocks having smaller sequence numbers have already been delivered. The latter, a selective ACK (SACK), explicitly indicates the ranges of sequence numbers of delivered data packets. To be more precise, TCP does not have a separate ACK packet, but rather uses flags and option fields in the common TCP header for acknowledgment purposes. (A TCP packet can be both a data packet and an ACK packet at the same time.) However, without loss of generality, we will discuss the notion of ACK packets as a separate entity. Packet Sequence Numbers: A packet sequence number is a 32-bit number in the range from 1 through 2^32 − 1, which is used to specify the sequential order of a Data packet in a Data Stream.
A sender node assigns consecutive sequence numbers to the Data packets provided by the sender application. Zero is reserved to indicate that the data session has not yet started. Data Queue: A Data Queue is a buffer, maintained by a Sender or a Repair Head, for transmission and retransmission of the Data packets provided by the sender application. New Data packets are added to the data queue as they arrive from the sending application, up to a specified buffer limit. The admission rate of packets to the network is controlled by flow and congestion control algorithms. Once a packet has been received by the Receivers of a Data Stream, it may be deleted from the buffer. The control mechanism must achieve good link utilization and must not starve competing flows.
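The cumulative-ACK and duplicate-ACK behavior described above (including the introduction's example of the arrival sequence 5, 6, 7, 10, 11, 12) can be sketched as follows. The three-duplicate-ACK threshold follows the standard Fast Retransmit rule; function names are illustrative.

```python
def receiver_acks(arrived, highest_in_order=0):
    """For each arriving packet, ACK the highest in-order sequence number
    delivered so far (cumulative ACK), as the TCP standard requires."""
    delivered, acks = set(), []
    for seq in arrived:
        delivered.add(seq)
        while highest_in_order + 1 in delivered:
            highest_in_order += 1
        acks.append(highest_in_order)
    return acks

def detect_loss(acks, dupthresh=3):
    """Sender side: infer a loss once dupthresh duplicate ACKs are seen;
    return the first unacknowledged sequence number, or None."""
    last, dups = None, 0
    for ack in acks:
        if ack == last:
            dups += 1
            if dups == dupthresh:
                return ack + 1   # first hole in the delivered stream
        else:
            last, dups = ack, 0
    return None
```

With packets 1-4 already delivered, the arrival sequence from the introduction yields the ACK stream 5, 6, 7, 7, 7, 7, and the sender infers that packet 8 was lost.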
It is more important to have a good understanding of how and when a protocol breaks than of when it works. Security requirements include privacy issues and sender authentication.
2. Algorithms
2.1. Slow Start and Congestion Avoidance Algorithms

Fig. 2.1: Slow Start and Congestion Avoidance.

The Slow Start and Congestion Avoidance algorithms provide two slightly different distributed host-to-host mechanisms which allow a TCP sender to detect available network resources and adjust the transmission rate of the TCP flow to the detected limits. Assuming the probability of random packet corruption during transmission is negligible (well below 1%), the sender can treat all detected packet losses as congestion indicators. In addition, the reception of any ACK packet is an indication that the network can accept and deliver at least one new packet. Thus the sender, reasonably sure it will not cause congestion, can send at least the amount of data that has just been acknowledged. This in-out packet balancing is called the packet conservation principle and is a core element of both Slow Start and Congestion Avoidance.

2.2. Fast Packet Recovery Algorithms
Fig. 2.2: Fast Recovery algorithms.

Scalability: the protocol should be able to work under a variety of conditions, including multiple network topologies and variable link speeds, with receiver set sizes on the order of at least 1000-10000.
The Forward Acknowledgment (FACK) variant of the Fast Recovery algorithm has a direct means to calculate the number of outstanding data packets using information extracted from SACKs. Instead of the congestion window inflation technique, FACK maintains special state variables, including: (1) H, the highest sequence number of all sent data packets (all data packets with sequence numbers less than H have been sent at least once); and (2) F, the forward-most sequence number of all acknowledged data packets (no data packets with sequence numbers above F have been delivered).
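Using these state variables, the amount of in-flight data can be computed directly. The sketch below follows the usual FACK formulation (outstanding data is everything sent beyond the forward-most ACK, plus unacknowledged retransmissions); the function names are ours, not the paper's.

```python
def fack_outstanding(h, f, retran_data):
    """FACK estimate of data in flight: everything sent beyond the
    forward-most acknowledged sequence number (F), plus retransmitted
    packets that are still unacknowledged."""
    return (h - f) + retran_data

def may_send(cwnd, h, f, retran_data):
    """The sender may inject a new packet while in-flight data is below
    cwnd, with no need for Reno's window-inflation trick."""
    return fack_outstanding(h, f, retran_data) < cwnd
```

Because the estimate comes straight from SACK information, it stays accurate during recovery, which is exactly why FACK can drop the window-inflation heuristic.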
3. Methodology
1. TCP Host-to-Host Network: Host-to-host algorithms follow the host-to-host principle, meaning they do not rely on any kind of explicit signaling from the network. The proposed algorithms introduce a wide variety of techniques that allow senders to detect loss events, congestion state, and route changes, as well as measure the loss rate, the RTT, the RTT variation, bottleneck buffer sizes, and the congestion level.
2. Congestion Collapse: The congestion window is the TCP sender's estimate of the number of data packets the network can accept for delivery without becoming congested. In the special case where the flow control limit (the so-called receiver window) is less than the congestion control limit (i.e., the congestion window), the former is considered the real bound for outstanding data packets. Although this is a formal definition of the real TCP rate bound, we will only consider the congestion window as a rate-limiting factor, assuming that in most cases the processing rate of end hosts is several orders of magnitude higher than the data transfer rate that the network can potentially offer. Additionally, we will compare different algorithms, focusing on the congestion window dynamics as a measure of a particular congestion control algorithm's effectiveness.
3. Congestion Avoidance & Packet Recovery Module: The purpose of the congestion control algorithm is to reduce consumption of network resources in complex congestion situations. But this expectation rests on the assumption that congestion states, as deduced from each detected loss, are independent, and in the example above this does not hold true. All packet losses from the original data bundle (i.e., from those data packets outstanding at the moment of loss detection) have a high probability of being caused by a single congestion event. Thus, the second and third losses from the example above should be treated only as requests to retransmit data and not as congestion indicators.
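The rule that all losses from one outstanding bundle count as a single congestion event can be sketched with a recovery-point filter: only a loss of data sent after the previous event ended triggers a new window reduction. This is an illustrative simplification with names of our choosing.

```python
class CongestionEventFilter:
    """Collapse multiple packet losses from one window of outstanding
    data into a single congestion event (illustrative sketch)."""

    def __init__(self):
        self.recovery_point = 0   # highest seq sent when the last event began

    def on_loss(self, lost_seq, highest_sent):
        """Return True only for losses that start a *new* congestion event."""
        if lost_seq > self.recovery_point:
            # Data sent after the previous event ended was lost: this is a
            # new congestion event, so reduce the congestion window.
            self.recovery_point = highest_sent
            return True
        # Loss belongs to the same outstanding bundle: retransmit only,
        # do not treat it as an additional congestion indicator.
        return False
```

In the introduction's example, the losses of packets 8 and 9 fall inside one bundle: the first triggers a window reduction, the second only a retransmission.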
Moreover, reducing the congestion window does not guarantee the instant release of network resources. All packets sent before the congestion window reduction are still in transit. Before the new congestion window size becomes effective, we should not apply any additional rate reduction policies. This can be interpreted as reducing the congestion window no more often than once per one-way propagation delay, or approximately RTT/2. New Reno's Fast Recovery solves the ambiguity of congestion events by restricting the exit from the recovery phase until all data packets from the initial congestion window are acknowledged. More formally, the New Reno algorithm adds a special state variable to remember the sequence number of the last data packet sent before entering the Fast Recovery state. This value helps to distinguish between partial and new data ACKs. The reception of a new data ACK means that all packets sent before the error detection were successfully delivered, and any new loss would reflect a new congestion event. A partial ACK confirms recovery from only the first error and indicates more losses in the original bundle of packets.
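The partial/new ACK distinction reduces to one comparison against the remembered recovery point. A minimal sketch (names are ours):

```python
def classify_ack(ack_seq, recovery_point):
    """New Reno: an ACK covering everything sent before Fast Recovery
    began ('new') ends recovery; a smaller ('partial') ACK means the
    first hole was repaired but more losses remain in the bundle."""
    if ack_seq >= recovery_point:
        return "new"      # all pre-recovery data delivered; exit recovery
    return "partial"      # retransmit the next hole, stay in recovery
```

On entering Fast Recovery the sender stores the highest sequence number sent so far as the recovery point; each subsequent ACK is then classified against it.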
4. Calculate RTO & RTT Module:
4.1. RTO: the retransmission timeout estimate. If this value is overestimated, the TCP packet loss detection mechanism becomes very conservative, and the performance of individual flows may severely degrade. In the opposite case, when the value of the RTO is underestimated, the error detection mechanism may perform unnecessary retransmissions, wasting shared network resources and worsening the overall congestion in the network.
4.2. RTT: the round-trip time after sending the lost packet. The RTO, by definition, is greater than the RTT. If we require that TCP receivers immediately reply to all out-of-order data packets with reports of the last in-order packet (a duplicate ACK), the loss can be detected by the Fast Retransmit algorithm almost within the RTT interval. In other words, assuming the probability of packet reordering and duplication in the network is negligible, duplicate ACKs can be considered a reliable loss indicator. Having this new indicator, the sender can retransmit lost data without waiting for the corresponding RTO event. We measure the fairness in sharing the bottleneck bandwidth among competing flows that have different RTTs. There are several notions of "RTT fairness". One notion is to achieve equal bandwidth sharing, where two competing flows share the same bottleneck bandwidth even if they have different RTTs. This property may not always be desirable, because long-RTT flows tend to use more resources than short-RTT flows, since they are likely to travel through more routers over a longer path. Another notion is to have bandwidth shares inversely proportional to the RTT ratios. This proportional fairness makes more sense in terms of the overall end-to-end resource usage. Although there is no commonly accepted notion of RTT fairness, it is clear that the bandwidth share ratio should be within some reasonable bound so that no flows are starved because they travel a longer distance.
Note that RTT-fairness is highly correlated with the amount of randomness in packet losses (or in other words, the amount of loss synchronization).
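The trade-off between over- and under-estimating the RTO is conventionally handled by the smoothed estimator standardized in RFC 6298: SRTT and RTTVAR are exponentially weighted averages of the RTT samples, and RTO = SRTT + 4·RTTVAR, with a floor of one second. A sketch:

```python
class RtoEstimator:
    """RFC 6298-style retransmission timeout from RTT samples (sketch)."""

    ALPHA, BETA = 1 / 8, 1 / 4   # standard smoothing gains

    def __init__(self):
        self.srtt = None      # smoothed RTT
        self.rttvar = None    # RTT variation

    def sample(self, rtt):
        """Feed one RTT measurement (seconds); return the current RTO."""
        if self.srtt is None:                 # first measurement
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:                                 # update rttvar before srtt
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - rtt))
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        return max(1.0, self.srtt + 4 * self.rttvar)  # RTO, 1 s floor
```

The variance term makes the timeout conservative on jittery paths (avoiding spurious retransmissions) while letting it shrink toward the RTT on stable ones.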
5. Data Transmission in TCP Data is multicast by a Sender on the Data Multicast Address. Retransmissions of data packets may be
multicast by the Sender on the Data Multicast Address or be multicast on a Local Control Channel by a Repair Head. In order to provide NACK suppression and to work with proactive FEC, retransmissions are always multicast. If Generic Router Assist is enabled, the routers may provide NACK suppression and allow dynamically scoped retransmission to just the subset of Receivers and Repair Heads that have missed a packet. A Repair Head joins all of the Data Multicast Addresses that any of its descendants have joined. A Repair Head is responsible for receiving and buffering all data packets using the reliability semantics configured for a stream. As a simple-to-implement option, a Repair Head may also function as a Receiver and pass these data packets to an attached application. For additional fault tolerance, a Receiver MAY subscribe to the multicast address associated with the Local Control Channel of one or more Repair Heads in addition to the multicast address of its parent. In this case it does not bind to this Repair Head or Sender, but will process Retransmission packets sent to this address. If the Receiver's Repair Head fails and it transfers to another Repair Head, this minimizes the number of data packets it needs to recover after binding to the new Repair Head.
5.1. Local Retransmission
If a Repair Head or Sender determines from its child nodes' ACKs or NACKs that a Data packet was missed, the Repair Head retransmits the Data packet or, if FEC is enabled, an FEC parity packet. The Repair Head or Sender multicasts the Retransmission packet on its multicast Local Control Channel. In the event that a Repair Head receives a retransmission and knows that its children need this repair, it re-multicasts the retransmission to its children.
5.2. Dynamically Scoped Retransmission
Dynamically Scoped Retransmission may be used on a network whose routers support dynamically scoped retransmissions through Generic Router Assist. Dynamically Scoped Retransmissions use soft state kept in the routers to constrain the Retransmission to only the children that have requested it through a NACK. Dynamically Scoped Retransmissions are known to be susceptible to router topology changes. Therefore, only the first retransmission of a packet is sent via this mechanism. Thereafter, only the above two mechanisms should be used. This allows the protocol to provide connectivity even during router topology changes, albeit with less efficiency.
5.3. Control Traffic Management
One of the largest challenges for scalable reliable multicast protocols has been controlling the potential explosion of control traffic. There is a fundamental tradeoff between the latency with which losses can be detected and repaired, and the amount of control traffic generated by the protocol. In conjunction
with the dynamic global tree parameters, PACK provides a set of algorithms that carefully control and manage this traffic, preventing control traffic explosion. Despite their different names, ACKs and NACKs both function as selective acknowledgements of the window of contiguous sequence numbers that have not yet been fully acknowledged. The only difference between the packet headers is a single flag. ACK packet frequency is controlled by setting a number of tree-wide parameters controlling their maximum rate of generation. The primary parameter is the ratio parameter, R, the maximum number of ACK packets to be generated per data packet sent. The higher R is, the faster positive acknowledgements will be generated all the way back to the sender, which induces more back-channel traffic. ACKs must be enabled for any Data Session. NACKs should be implemented as part of any implementation, and may be enabled for any given Data Session. If enabled, then on detection of a lost packet, a Receiver waits a random interval before sending a NACK. If the Receiver receives the retransmitted data before the NACK timer expires, the Receiver cancels the NACK. This reduces the chance that multiple Receivers generate a NACK for the same packet. A Repair Head node multicasts a Data packet to its children as soon as it gets a NACK request for that packet, unless it has retransmitted that packet within a configurable time period. If it does not have the missing packet, it forwards the NACK to its parent, and multicasts a control packet to its children to suppress any further NACKs for that packet from them. The Repair Head forwards only one NACK for a missing Data packet within a specified period of time. If more than one packet has been detected as missing before the NACK is sent, the NACK will request all of the missing packets.
5.4.
Integrated Forward Error Correction
This feature encodes data packets with FEC algorithms, but does not transmit the parity packets until a loss is detected. The parity packets are then transmitted and are able to repair different lost packets at different Receivers. This is a powerful tool for providing scalability in the face of independent loss. When implemented, it is a simple matter to also provide proactive FEC, which automatically transmits a certain percentage of parity packets along with the data. This is particularly useful when a high minimum error rate is expected, or when low latency is particularly important. Both of these are optionally supported in PACK. FEC is organized around windows of packets. PACK Data packets include an FEC offset window field, which identifies the offset of a given packet within an FEC window. Combined with the FEC session configuration parameters, this allows receivers to decode a combination of Data and parity packets to generate each window of Data packets. Proactive FEC packets are parity packets sent as global retransmissions at the same time a window of Data packets is sent. Reactive FEC packets are sent
either from a Repair Head or a Sender, in response to requests for retransmissions. If using reactive FEC, a Repair Head must first have all the packets in a window before it can respond to any request for retransmission. The ACK and NACK bitmaps, combined with the information in the headers of the Data packets, provide each Repair Head with enough information to determine which parity packets it must compute and send in response to requests for retransmission.
5.5. Flow and Congestion Control
Flow and congestion control algorithms act to prevent the Senders from overflowing the Receivers' buffers and to force them to share the network fairly and safely with other TCP and RM connections. PACK uses a combination of a transmission window for flow control and the dynamic rate control algorithms specified in the Congestion Control (CC) BB for congestion control. These algorithms have been proven to meet all the requirements for flow and congestion control, including being safe for use in a general Internet environment and provably fair with TCP. The Sender application provides the minimum and maximum rate limits as part of the global parameters. A Sender will not transmit at lower than the minimum rate (except possibly during short periods of time when certain slow receivers are being ejected), or higher than the maximum rate. If a Receiver is not able to keep up with the minimum rate for a period of time, the CC BB algorithms will cause it to leave the group. Receivers that leave the group MAY attempt to rejoin the group at a later time, but SHOULD NOT attempt an immediate reconnection.
5.6. Notification of Confirmed Delivery
PACK provides a simple membership count for each session.
This is done by each repair head counting/aggregating its (sub-tree) membership count and propagating it up the tree to the sender. The propagation up the tree is piggybacked on the regular PACK (ACK and NACK) packets.
5.7. Sender node failure detection and Repair Head failure detection
A Sender node that has no data to send will periodically send Null Data packets on the Data Multicast Address. If a Receiver or a Repair Head fails to receive Data packets or Null Data packets for a session sent by the Sender, it detects a Sender failure. Each Repair Head node sends Heartbeat packets to its child nodes on its multicast Control Tree. If the child nodes do not receive any Heartbeats from their parent Repair Head, they detect failure of the parent.
5.8. Receiver node failure detection and Repair Head discovery
A Receiver node sends ACKs and (optionally) NACKs for each of the active sessions that it has joined. If
none of the sessions are active, then the Receiver sends Heartbeat Response packets to its parent. If a Receiver's parent node does not receive an ACK, NACK, or Heartbeat Response packet within a specified time interval, the parent detects the failure of the Receiver and removes the child from its child list. PACK supports an option which allows the nodes in the PACK tree to acquire the addresses and locations of their ancestors in the control tree and the addresses of their parent's siblings. If a PACK node's parent fails, then the node can use the acquired information to join an alternate control node. When a child node detects failure of its parent node, it can try to reconnect to an alternate Repair Head of the PACK tree, or it can try to reconnect directly to the Sender. The reliability semantics PACK provides are defined by the binding between a receiver and its repair head. When this binding is established, the repair head agrees to provide retransmission of missed packets for the receiver starting from a specific (receiver-requested) sequence number. At this time, the repair head MUST NOT have discarded any data packet starting from this sequence number. Subsequently, a repair head needs to discard older packets from its buffer from time to time. The following two factors influence when to discard an old packet: a) Stability - When all receivers immediately subordinate to the repair head have acknowledged receipt of a packet, that packet is considered stable. When the whole sub-tree of receivers below a repair head has received a packet, it is considered "strictly stable". PACK provides no explicit support for this strict sense of stability (note this form of reliability is also referred to as "pessimistic reliability").
b) Sender recovery window - Each data packet carries two sequence numbers: one is the sequence number of the current data packet, and the other is the sender recommended sequence number where recovery should start from (smaller than the current sequence number). This pair of sequence numbers forms a sender-suggested recovery window. A repair head must not discard any packet before it becomes stable. Per binding agreement or session wide configuration, a repair head may be allowed to discard a packet when it moves outside of the sender recovery window. When a repair head's buffer is filled up and none of the packets can be discarded (due to stability or recovery window requirements), newly arrived packets must be discarded and recovered later.
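One reasonable reading of the two discard rules above (stability is mandatory; leaving the sender recovery window is the additional, configurable allowance) can be combined in a small sketch. The names and the conjunctive interpretation are ours, not PACK's actual implementation.

```python
def may_discard(seq, acked_by_all_children, recovery_window_start):
    """A repair head may drop a buffered packet only once it is stable
    (ACKed by every immediate child) AND the sender-suggested recovery
    window has moved past it."""
    stable = seq <= acked_by_all_children
    outside_recovery_window = seq < recovery_window_start
    return stable and outside_recovery_window
```

When the buffer fills and no packet satisfies both conditions, the text's fallback applies: newly arrived packets are discarded and recovered later.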
6. NACK-based Reliability
This building block defines NACK-based loss detection/notification and recovery. The major issues it addresses are implosion prevention (suppression) and NACK semantics (i.e., how packets to be retransmitted should be specified, in the cases of both selective and FEC loss repair). The NACK suppression mechanisms used by PACK are unicast NACKs with multicast confirmation and exponentially distributed timers. These suppression mechanisms primarily need to minimize delay while also minimizing redundant messages. They may also need
to have special weighting to work with Congestion Feedback.
6.1. NACK BB Algorithms
Exponential Back-off: When a packet is detected as lost, an exponentially distributed timer is set, based on the algorithms. This timer is biased by the input congestion weighting factor. If either the packet or an explicit suppression message with the same sequence number is detected before the timer goes off, the timer is cancelled.
NACK Generation: When a timer goes off, the protocol instantiation is notified to generate a NACK for that sequence number. The protocol instantiation may, at its discretion, group multiple NACK notifications into a single NACK packet. For PACK, NACKs are implemented as a unicast packet with a multicast confirmation response.
Response to a Retransmission Request: When a Repair Head or other possible retransmission agent receives the first NACK from another group member for a given packet, it notifies the protocol instantiation to send either a data retransmission or, if it does not have the packet for retransmission, an optional suppression message. It then sets an embargo timeout, tied to the RTT to the furthest Receiver, during which other requests for the same packet will be ignored. The length of this embargo doubles each time a retransmission is sent. This algorithm should also work with requests for retransmissions that come in the form of ACKs, as the algorithms and packet formats for both are identical, with the exception of the suppression mechanisms used.
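The exponentially distributed suppression timer and the doubling retransmission embargo described above can be sketched as follows. Parameter names and scaling are illustrative assumptions, not PACK's specified values.

```python
import random

def nack_delay(congestion_weight=1.0, mean=0.1):
    """Draw an exponentially distributed wait (seconds) before sending a
    NACK; the congestion weighting factor biases the mean upward so that
    heavily loaded sessions back off more."""
    return random.expovariate(1.0 / (mean * congestion_weight))

class RetransmissionEmbargo:
    """Ignore duplicate requests for a packet while an embargo (tied to
    the RTT of the furthest Receiver) is active; each retransmission
    doubles the embargo length."""

    def __init__(self, base_rtt):
        self.embargo = base_rtt   # current embargo duration
        self.until = 0.0          # time until which requests are ignored

    def should_retransmit(self, now):
        if now < self.until:
            return False          # duplicate NACK during embargo: ignore
        self.until = now + self.embargo
        self.embargo *= 2         # double for the next retransmission
        return True
```

The randomized delay spreads receivers' NACKs in time so the first one can suppress the rest, while the embargo keeps a burst of simultaneous requests from triggering repeated retransmissions.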
7. CONCLUSION
In this paper we have presented a survey of various approaches to TCP congestion control that do not rely on any explicit signaling from the network. The survey highlighted the fact that the research focus has changed with the development of the Internet, from the basic problem of eliminating the congestion collapse phenomenon to problems of using available network resources effectively in different types of environments (wired, wireless, high-speed, long-delay, etc.). In the first part of this survey, we classified and discussed proposals that build a foundation for host-to-host congestion control principles. We showed that technology advances have introduced new challenges for TCP congestion control. First, we discussed several solutions that estimate a "good" flow rate and use this rate as a baseline to distinguish between congestion and random packet losses. Second, we reviewed a group of solutions that have attracted the most research interest over the recent past; these proposals aim to solve the problem of poor utilization of high-speed or long-delay network channels by TCP flows. Finally, we showed data transmission in TCP.

REFERENCES
[1] C. Lochert, B. Scheuermann, and M. Mauve, "A survey on congestion control for mobile ad hoc networks," Wireless Communications and Mobile Computing, vol. 7, no. 5, p. 655, 2007.
[2] A. Al Hanbali, E. Altman, and P. Nain, "A survey of TCP over ad hoc networks," IEEE Commun. Surveys Tutorials, vol. 7, no. 3, pp. 22-36, 3rd quarter 2005.
[3] J. Widmer, R. Denda, and M. Mauve, "A survey on TCP-friendly congestion control," IEEE Network, vol. 15, no. 3, pp. 28-37, May/June 2001.
[4] K.-C. Leung, V. Li, and D. Yang, "An overview of packet reordering in transmission control protocol (TCP): problems, solutions, and challenges," IEEE Trans. Parallel Distrib. Syst., vol. 18, no. 4, pp. 522-535, April 2007.
[5] S. Low, F. Paganini, and J. Doyle, "Internet congestion control," IEEE Control Syst. Mag., vol. 22, no. 1, pp. 28-43, February 2002.
[6] M. Gerla and L. Kleinrock, "Flow control: a comparative survey," IEEE Trans. Commun., vol. 28, no. 4, pp. 553-574, April 1980.
[7] J. Nagle, "RFC 896: Congestion control in IP/TCP internetworks," RFC, 1984.
[8] S. Floyd, T. Henderson, and A. Gurtov, "RFC 3782: The NewReno modification to TCP's Fast Recovery algorithm," RFC, 2004.

Author's Information:
1. Vikram Narayandas: M.Tech (CN) from Shadan College of Engineering, R.R. Dist., B.Tech (CSIT) from Sree Datta College of Engineering, Hyderabad. Currently he is working as Associate Professor at Jayaprakash Narayan College of Engineering, Mahabubnagar, and has 6 years of experience in teaching. His areas of interest include computer networks, wireless networks, data mining, network security, software engineering, sensor networks, and cloud computing. He is a member of IJMRA, IACSIT, ISTE, IAENG, and CSTA. He is an honorary peer reviewer for GJCST and an Associate Editor for IJMRA. He has published 12 papers in international journals.
2. P. Leelavathi: M.Tech (CSE) from Nishitha College of Engineering, M.Sc (Computers) from Osmania University. Currently she is working as Assistant Professor at Sree Visheshwariah College of Engineering. Her areas of interest include computer networks and communications, wireless networks, signal processing, data mining, and sensor networks.
3. P. Ravinder Kumar: M.Tech (WMC) from Vardhaman College of Engineering, B.Tech (ECE) from Jayaprakash Narayana College of Engineering. Currently he is working as Associate Professor at Jayaprakash Narayan College of Engineering and has 9 years of experience in teaching. His areas of interest include computer networks and communications, wireless networks, and signal processing.
4. S. Pradeep: M.Tech (CSE) from Jayaprakash Narayana College of Engineering & Technology, B.Tech from Jayaprakash Narayana College of Engineering. Currently he is working as Assistant Professor at Jayaprakash Narayana College of Engineering. His areas of interest include data mining, network security, software engineering, and sensor networks.