Networking


A computer network is a collection of computing devices connected in various ways to communicate and share resources. Most of the things we do with our devices on a daily basis rely on communications across networks. Networks can be used to share both intangible resources, such as files, and tangible ones, such as printers. Networks are usually connected using physical wires or cables, but wireless technology is increasingly prevalent: WiFi uses radio waves or infrared signals to transmit data. Networks are not only defined by their physical connections; they are defined by their ability to communicate.

Before going any further it is worth defining some terminology related to networking:

Node/host: A generic term for any device on a network. Networks contain a variety of devices: printers and the like, as mentioned previously, but also devices for handling network traffic. Node/host is a cover-all term for anything connected to the network.

Data transfer rate: The speed at which data is moved from one place on a network to another. Another term for this is bandwidth; in this context, bandwidth refers to the data rate supported by the connection or interface. It is usually expressed in bps (bits per second), Kbps (kilobits per second) and so on.

Protocol: A set of rules describing how two things interact. In networking, protocols are well defined to describe how data is transferred, formatted and processed.

Different Types of Network

LAN – Local Area Network

A LAN contains a relatively small number of nodes and is confined to a small geographical area, such as a building or campus. There are a number of ways to configure and connect a LAN, using cabling or wireless. The most common is Ethernet, the industry standard for LANs. A key point to remember about LANs is that the cabling is not leased: it is private, not shared like an internet connection. A LAN will typically contain one machine that acts as a gateway to the Internet and allows access to the web for all the machines in the LAN.

LAN Configurations

Although these aren't essential knowledge for the exam, they are worth knowing about to help deepen your understanding of networking.

Ring Topology

All nodes are connected in a closed loop. Messages travel in one direction. Each node passes along messages until they reach their destination.
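As a rough sketch (the function and node numbering are illustrative, not part of any standard), one-way message passing in a ring can be modelled by counting how many nodes a message passes through:

```python
def ring_hops(num_nodes, source, destination):
    """Count how many hops a message makes travelling one way round a ring.

    Each node passes the message to the next node in the loop until it
    reaches the destination.
    """
    hops = 0
    current = source
    while current != destination:
        current = (current + 1) % num_nodes  # pass to the next node in the loop
        hops += 1
    return hops

# In a 6-node ring, a message from node 0 to node 4 crosses 4 links;
# a reply from node 4 to node 0 must keep travelling forward, taking 2 hops.
```

Note how a "reply" cannot retrace its path: it must continue around the loop in the same direction.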


There is an obvious problem with the ring topology: if one node or connection fails, traffic cannot pass to the next node. Some ring networks implement a counter-rotating ring in response, allowing traffic to flow in the opposite direction. There are also some advantages. In order to transmit data, a node must hold the token, which gives it permission to transmit; this means each node has an equal opportunity to transmit data. There is also no need for a central machine to control connectivity between machines. The simplicity of the topology means that adding a node is quite straightforward, as is establishing where a fault has occurred.

Star Topology

This topology centres on one node through which all of the other nodes are connected and through which all communications are sent.

This topology puts huge pressure on the central node; if it is not working then the network grinds to a halt. This central node is usually a hub or switch (see network hardware). A hub can be passive, meaning it simply broadcasts messages to all machines and it is up to the recipient to pick up communications intended for it. A switch is more active, directing communications to the correct recipient. A star topology has some advantages. Performance is reasonably good, as packets do not need to travel through an excessive number of nodes. Although this places a big overhead on the hub, as long as it has sufficient capacity, it shouldn't affect other nodes. The “leaf” nodes of the network are isolated to the extent that they can be effectively disconnected from the network to prevent problems spreading to the rest of it. This makes it easy to identify faults and remove them from the network. Installing the network is easy, as each device only requires one connection, with an input and output port, to become part of the network. There are some disadvantages worth noting. There is an over-reliance on the central device: as mentioned, if it fails then the whole network fails in turn. This topology is more expensive to set up than ring and bus topologies, due to the additional hardware required. The size of a star network is limited by the capacity of the central node; there is no such limit on a bus or ring.

Bus Topology

The bus topology connects the nodes via a single, central communications channel, a bus.

All nodes on the network use this channel and no node has priority over another. This also means that every node on the network receives all transmissions, which creates a situation where collisions can take place on the bus. A collision occurs when two nodes try to transmit on the shared channel at exactly the same time. Nodes use a protocol called CSMA/CD to try to avoid collisions; more on this later. The bus topology is relatively easy to expand, as it only requires that the new node be connected to the central bus. For this reason it requires less cabling than a star network, and the topology works well for small networks. As you may have already worked out, the bandwidth of the bus is a limiting factor, and the probability of collisions increases the more computers we connect. The disadvantages of this topology should be fairly obvious. If the central channel fails, it is likely that the entire network fails, unless it has been configured so that it can be split. The entire network failing also makes fault detection more difficult. The need for terminator boxes at each end of the bus also adds extra cost to this topology.
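The CSMA/CD idea mentioned above (Carrier Sense Multiple Access with Collision Detection) can be sketched in a very simplified form: sense whether the channel is busy before sending, and after a collision wait a random number of slot times chosen by binary exponential backoff. This is a toy model, not the full IEEE 802.3 behaviour:

```python
import random

def csma_cd_send(channel_busy, attempt, max_attempts=16):
    """Very simplified CSMA/CD decision logic for one transmission attempt.

    channel_busy: function returning True if another node is transmitting.
    Returns the action a node takes: 'wait', 'send', or 'abort'.
    """
    if attempt >= max_attempts:
        return 'abort'   # give up after too many failed attempts
    if channel_busy():
        return 'wait'    # carrier sense: defer while the bus is in use
    return 'send'        # bus appears idle, start transmitting

def backoff_slots(collision_count):
    """Binary exponential backoff: after the nth collision, wait a random
    number of slot times between 0 and 2**n - 1 (capped at 1023)."""
    upper = min(2 ** collision_count, 1024) - 1
    return random.randint(0, upper)
```

The random backoff is what makes two colliding nodes unlikely to collide again on their next attempt: each picks its own waiting time independently.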


Mesh Topology

A mesh topology is one where there are multiple connections between nodes. In this topology, nodes act as relays for the rest of the network: in packet switching, they move packets on to adjacent nodes so that they can reach their final destination.
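The re-routing this enables can be sketched as a fewest-hop search over the mesh. This is a minimal illustration (node names and links are made up); real routing protocols are far more sophisticated:

```python
from collections import deque

def shortest_path(links, start, goal):
    """Breadth-first search for the fewest-hop route through a mesh.

    links: dict mapping each node to the set of nodes it connects to.
    Returns the path as a list of nodes, or None if no route exists.
    """
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in links.get(node, ()):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

# A small partial mesh: there is no direct A-D channel, so traffic
# from A to D is relayed through B or C.
mesh = {'A': {'B', 'C'}, 'B': {'A', 'D'}, 'C': {'A', 'D'}, 'D': {'B', 'C'}}
```

If the link through B failed, removing it from the dictionary would simply make the search return the route through C instead, which is exactly the fault tolerance the mesh provides.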

A mesh network where every node is connected to every other is called a fully connected mesh. The advantage of a mesh is the multiple routes from one node to another. This allows data to be sent along the most efficient path, and the network can usually cope with faulty nodes or channels by re-routing data along another path. Finding these faults is fairly straightforward, as it is usually obvious which node or channel has the problem. The main disadvantage of this topology is the expense of setting it up: it requires a lot of cabling, especially if it is a fully connected mesh. Expansion, although possible by connecting one node to another, can be complex. In order to maintain the efficiency afforded by the topology, a new node should be connected to more than one existing node, which makes expansion expensive due to the extra cabling required.

WLAN

A wireless local area network (WLAN) is a wireless computer network that links two or more devices using a wireless distribution method (often spread-spectrum or OFDM radio) within a limited area such as a home, school, computer laboratory, or office building. This gives users the ability to move around within a local coverage area and still be connected to the network, and can provide a connection to the wider Internet. Most modern WLANs are based on IEEE 802.11 standards, marketed under the Wi-Fi brand name. Wireless LANs have become popular in the home due to ease of installation and use, and in commercial complexes offering wireless access to their customers, often for free. This has raised some security issues around the use of public Wi-Fi access.


VLAN – Virtual Local Area Network In simple terms, a VLAN is a set of workstations within a LAN that can communicate with each other as though they were on a single, isolated LAN.
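The isolation a VLAN provides can be sketched with a simple broadcast rule: a frame broadcast by one workstation is delivered only to workstations tagged with the same VLAN id. The node names and VLAN numbers below are illustrative:

```python
def broadcast(nodes, sender):
    """Deliver a broadcast only to nodes on the sender's VLAN.

    nodes: dict mapping node name -> VLAN id.
    Returns the set of nodes (excluding the sender) that receive the frame.
    """
    vlan = nodes[sender]
    return {name for name, v in nodes.items() if v == vlan and name != sender}

# Illustrative layout: three VLANs sharing the same physical switch.
office = {'pc1': 10, 'pc2': 10, 'phone1': 20, 'cam1': 30, 'cam2': 30}
```

A broadcast from cam1 reaches only cam2, even though all five devices are plugged into the same hardware; that is the congestion-reducing trick described below.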

The machines in the three separate VLANs (10, 20 and 30) can broadcast packets, and the packets will reach only the nodes on that VLAN. For example, a broadcast message on VLAN 30 would only go to the security nodes; it would NOT reach any of the nodes on the other VLANs. Similarly, broadcasts on the other VLANs never reach each other. Although the image shows the VLANs close to each other, this is not necessarily true: nodes do not need to be physically close to be on the same VLAN. VLANs are configured through software rather than hardware, which makes them very flexible. They are usually configured using switches (see network hardware for the functionality of a switch). The basic reason for splitting a network into VLANs is to reduce congestion on a large LAN. To understand this problem, we need to look briefly at how LANs have developed over the years. Initially LANs were very flat: all the workstations were connected to a single piece of coaxial cable, or to sets of chained hubs. In a flat LAN, every packet that any device puts onto the wire gets sent to every other device on the LAN. As the number of workstations on the typical LAN grew, they became hopelessly congested; there were just too many collisions, because most of the time when a workstation tried to send a packet, it would find the wire already occupied by a packet sent by some other device.

WAN – Wide Area Network

A wide area network (WAN) is a network that covers a broad area (i.e., any telecommunications network that links across metropolitan, regional, national or international boundaries) using leased telecommunication lines. Business and government entities use WANs to relay data among employees, clients, buyers, and suppliers in various geographical locations. In essence, this mode of telecommunication allows a business to effectively carry out its daily function regardless of location. The Internet can be considered a WAN as well, and is used by businesses, governments, organizations, and individuals for almost any purpose imaginable.

The textbook definition of a WAN is a computer network spanning regions, countries, or even the world. However, in terms of the application of computer networking protocols and concepts, it may be best to view WANs as computer networking technologies used to transmit data over long distances, and between different LANs, MANs and other localised computer networking architectures. This distinction stems from the fact that common LAN technologies operating at Layers 1 and 2 (such as the various forms of Ethernet or Wi-Fi) are geared towards physically localised networks, and thus cannot transmit data over tens, hundreds or thousands of miles or kilometres. WANs are used to connect LANs and other types of networks together, so that users and computers in one location can communicate with users and computers in other locations. Many WANs are built for one particular organization and are private. Others, built by Internet service providers, provide connections from an organization's LAN to the Internet. WANs are often built using leased lines: a router at each end of the leased line connects the LAN on its side to the router at the other end.

SAN – Storage Area Network

A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers, so that the devices appear to the operating system like locally attached drives. A SAN typically has its own network of storage devices that are generally not accessible through the local area network (LAN) by other devices. Before moving on, we should address what block-level storage is. The storage systems we use most commonly use file-level storage; we find this in hard drives and other media. The storage disk is configured with a protocol such as NFS and files are stored on and accessed from it in bulk.

In block-level storage, raw volumes of storage are created and each block can be controlled as an individual hard drive. These blocks are controlled by server-based operating systems, and each block can be individually formatted with the required file system.
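A toy model may make the block-level idea concrete: the device knows nothing about files, only about numbered, fixed-size blocks that can be read and written raw. The class and its 512-byte default are illustrative, not any vendor's interface:

```python
class BlockDevice:
    """Toy model of block-level storage: a raw volume addressed as
    fixed-size blocks, with no file system imposed on it."""

    def __init__(self, num_blocks, block_size=512):
        self.block_size = block_size
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]

    def write_block(self, index, data):
        """Write raw bytes into one block, padding to a full block."""
        if len(data) > self.block_size:
            raise ValueError("data larger than one block")
        self.blocks[index] = data.ljust(self.block_size, b'\x00')

    def read_block(self, index):
        """Read one block back as raw bytes."""
        return self.blocks[index]
```

Anything above this, such as directories and file names, would be the job of whatever file system is formatted onto the blocks.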


A SAN moves storage resources off the common user network and reorganizes them into an independent, high-performance network. This allows each server to access shared storage as if it were a drive directly attached to the server. When a host wants to access a storage device on the SAN, it sends out a block-based access request for the storage device. SANs are particularly helpful in backup and disaster recovery settings. Within a SAN, data can be transferred from one storage device to another without interacting with a server. This speeds up the backup process and eliminates the need to use server CPU cycles for backup. Also, many SANs use Fibre Channel technology or other networking protocols that allow the networks to span longer distances geographically, which makes it more feasible for companies to keep their backup data in remote locations.

The Internet

The Internet is a combination of all of the above: a global network of networks, a system of interconnected networks that use the Internet Protocol Suite as their standard. The networks are connected by a broad array of electronic, wireless and optical networking technologies. These different technologies all implement open standards and protocols that allow for interoperability. That is, even though the networks are different, because they all use the same protocols they can communicate. Computers and routers use routing tables in their operating system to direct IP packets to the next-hop router or destination. Routing tables can be maintained manually or automatically by routing protocols. Each of the end points on the network, such as your mobile phone or laptop, can be considered a client; the machines that store the information we seek are the servers. Other elements are nodes which serve as connection points along a route. The transmission lines might be physical, such as cables and fibre optics, or they might be wireless signals from satellites, cell towers or radios.
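The routing-table lookup mentioned above can be sketched with Python's standard ipaddress module. Routers choose the most specific matching prefix (longest-prefix match); the table entries and next-hop names here are invented for illustration:

```python
import ipaddress

def next_hop(routing_table, destination):
    """Pick the next hop for a destination IP using longest-prefix match,
    the rule routers apply to their routing tables."""
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, hop in routing_table:
        network = ipaddress.ip_network(prefix)
        if dest in network:
            # prefer the most specific (longest) matching prefix
            if best is None or network.prefixlen > best[0].prefixlen:
                best = (network, hop)
    return best[1] if best else None

# Illustrative table: a default route plus two more specific prefixes.
table = [
    ('0.0.0.0/0', 'gateway'),
    ('10.0.0.0/8', 'router-a'),
    ('10.1.0.0/16', 'router-b'),
]
```

A packet for 10.1.2.3 matches all three entries but is sent to router-b, because /16 is more specific than /8 or the default route.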
All of this hardware would not create a network but for the protocols. Protocols are the rules that machines follow to complete tasks. Without a common set of protocols that all machines agree to follow, communication between devices would not happen: the various machines would be unable to understand each other or even send information in a meaningful way. Protocols provide the method and a common language for machines to transmit data.

Intranet and Extranet

It is worth combining the explanations of intranet and extranet, since an extranet would not be possible without an intranet to connect to. An intranet is a computer network that uses Internet Protocol technology to share information, operational systems, or computing services within an organization. Sometimes the term refers only to the organization's internal website, but it may be a more extensive part of the organization's information technology infrastructure, and may be composed of multiple local area networks. If we consider the ESF as an organisation of different schools, an intranet could be set up such that only members of those schools could access it. The intranet can be considered a private extension of the internet. An intranet may host multiple private websites and constitute an important component and focal point of internal communication and collaboration. Any of the well-known Internet protocols may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP (file transfer protocol). In many organizations, intranets are protected from unauthorized external access by means of a network gateway and firewall. For smaller companies, intranets may be created simply by using private IP address ranges. In these cases, the intranet can only be directly accessed from a computer in the local network; however, companies may provide access to off-site employees by using a virtual private network, or by other access methods requiring user authentication and encryption. An example of this is the webspace you are provided on the www.kgv.hk domain.
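The private-range idea behind such intranets is easy to check with Python's standard ipaddress module, which knows the reserved private IPv4 ranges (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16):

```python
import ipaddress

def reachable_from_outside(address):
    """Return False for addresses in the private ranges that small
    intranets are often built from; such hosts are not directly
    routable from the public Internet."""
    return not ipaddress.ip_address(address).is_private
```

A host on 192.168.1.5 cannot be reached directly from the public Internet, which is precisely why off-site access needs a VPN or another authenticated route in.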
It can only be accessed from within the school network, using a school machine, so access is IP-address based. However, access can be gained by other means requiring authentication. An extranet is a computer network that allows controlled access from outside an organization's intranet. Extranets are used for specific use cases, including business-to-business (B2B) work. In a business-to-business context, an extranet can be viewed as an extension of an organization's intranet that is extended to users outside the organization, usually partners, vendors and suppliers, in isolation from all other Internet users. An extranet could be understood as an intranet mapped onto the public Internet or some other transmission system not accessible to the general public, but managed by more than one company's administrator(s). For example, imagine a car manufacturer. They could create an intranet to collaborate on a new model of car, sharing ideas and information. After this process is over, they could extend this intranet to various parties they might need to interact with, such as sales teams, mechanics and so on. These parties can be given access to technical specifications and product manuals that might run to thousands of print pages. The advantages here should be obvious: information can be updated, warnings and problems identified, all in a self-serve format. For decades, institutions have been interconnecting with each other to create private networks for sharing information. One of the differences that characterizes an extranet, however, is that its interconnections are over a shared network rather than through dedicated physical lines. Similarly, for smaller, geographically united organizations, “extranet” is a useful term to describe selective access to intranet systems granted to suppliers, customers, or other companies. Such access does not involve tunneling, but rather simply an authentication mechanism to a web server. In this sense, an “extranet” designates the “private part” of a website, where “registered users” can navigate, enabled by authentication mechanisms on a “login page”. An extranet requires network security. This can include firewalls, server management, the issuance and use of digital certificates or similar means of user authentication, encryption of messages, and the use of virtual private networks (VPNs) that tunnel through the public network.

VPN – Virtual Private Network

A virtual private network (VPN) extends a private network across a public network, such as the Internet. It enables a computer or network-enabled device to send and receive data across shared or public networks as if it were directly connected to the private network, while benefiting from the functionality, security and management policies of the private network. A VPN is created by establishing a virtual point-to-point connection through the use of dedicated connections, virtual tunneling protocols, or traffic encryption. This is done using software: a VPN client. From a user perspective, the extended network resources are accessed in the same way as resources available within the private network. VPNs allow employees to securely access their company's intranet while travelling outside the office. Similarly, VPNs securely connect geographically separated offices of an organization, creating one cohesive network. VPN technology is also used by individual Internet users to secure their wireless transactions, to circumvent geo-restrictions and censorship, and to connect to proxy servers for the purpose of protecting personal identity and location.
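The tunnelling idea can be sketched as encapsulation: the original packet is scrambled and wrapped inside an outer packet addressed to the VPN endpoint, so routers on the public network only ever see the outer header. Everything here is illustrative; in particular, the XOR scrambling stands in for real encryption and must never be treated as secure:

```python
import json

def tunnel_encapsulate(inner_packet, key, vpn_endpoint):
    """Toy illustration of VPN tunnelling: scramble the inner packet
    (trivial XOR, NOT real encryption) and wrap it in an outer packet
    addressed to the VPN endpoint."""
    payload = json.dumps(inner_packet).encode()
    scrambled = bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))
    return {'dst': vpn_endpoint, 'payload': scrambled}

def tunnel_decapsulate(outer_packet, key):
    """Reverse the encapsulation at the far end of the tunnel."""
    payload = bytes(b ^ key[i % len(key)]
                    for i, b in enumerate(outer_packet['payload']))
    return json.loads(payload.decode())
```

The point of the sketch is the shape of the traffic: the public network carries only the outer destination (the hypothetical VPN endpoint), while the real destination and data travel hidden in the scrambled payload.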

Peer-to-peer Network (P2P)

In its simplest form, a peer-to-peer (P2P) network is created when two or more PCs are connected and share resources without going through a separate server computer. A P2P network can be an ad hoc connection: a couple of computers connected via Universal Serial Bus to transfer files. A P2P network can also be a permanent infrastructure that links a half-dozen computers in a small office over copper wires. The initial use of P2P networks in business followed the deployment in the early 1980s of free-standing PCs. In contrast to the mini and mainframe computers of the day, which served up word processing and other applications to dumb terminals from a central computer and stored files on a central hard drive, the then-new PCs had self-contained hard drives and built-in CPUs. The smart boxes also had on-board applications, which meant they could be deployed to desktops and be useful without an umbilical cord linking them to a mainframe. Peer-to-peer is a decentralized communications model in which each party has the same capabilities and either party can initiate a communication session. Unlike the client/server model, in which the client makes a service request and the server fulfils the request, the P2P network model allows each node to function as both a client and a server.

Personal Area Network (PAN)

A PAN is a computer network used for data transmission among devices such as computers, telephones and personal digital assistants. PANs can be used for communication among the personal devices themselves (intrapersonal communication), or for connecting to a higher-level network and the Internet (an uplink). The typical range of a PAN is around 0-10 metres. As an example, we could look at the trend of wearable technology, and in particular smart watches. These devices can be used to track movement but also provide notifications from messaging and email. They do not work fully unless they are within a certain range of the phone they are tethered to, thus creating a PAN using the mobile phone as the connection to the higher-level network. You could even argue that a wireless mouse or keyboard creates a PAN.

Standards in Networks

Standards are necessary in almost every walk of life. For example, before 1904, fire hose couplings in the United States were not standard, which meant a fire department in one community could not help in another community; that's pretty crazy if you think about it. More recently, there were two formats for HD video/audio on disc, HD DVD and Blu-ray. Of course you know the conclusion to that particular rivalry.
One of the main reasons for this was the addition of a Blu-ray player to the PS3 and its use of Blu-ray discs for gaming. Microsoft's failure to incorporate an HD DVD player into its console and use it for games also influenced the outcome. So, by market forces and public perception, Blu-ray became the de facto standard for high-definition media. The primary reason for standards is to ensure that hardware and software produced by different vendors can work together. Without networking standards it would be difficult, if not impossible, to develop networks that easily share information. Standards also mean that customers are not locked into one vendor: they can buy hardware and software from any vendor whose equipment meets the standard. In this way, standards help to promote competition and hold down prices. Today, virtually all networking standards are “open” standards, administered by a standards organization or industry group. Open standards are more popular than proprietary ones in the computer industry, and that's particularly so when it comes to networking. In fact, the few technologies with no universally accepted open standard have been losing ground to those with open standards, particularly in the areas of wireless LANs and home networking, pretty much proving how important an open process really is. In the early days of computing, many people didn't quite understand just how important universal standards were. Most companies were run by skilled inventors, who came up with great ideas for new technologies and weren't particularly interested in sharing them. It wasn't considered a “smart business move” to share information about new inventions with other companies; this makes sense from a competitive perspective. This is not to say companies didn't believe in standards, just that they thought they should be the ones controlling the standards in their particular field. As an example, imagine I have come up with a new networking technology and incorporated it into a new LAN product called CalpNet. CalpNet is my product. I have patents on the technology, I control its design and manufacture, and I won't tell anyone how it works; all they'd do is copy me. Or would they? I intend to sell interface cards, cables and accessories for CalpNet, and a company that wanted to use it could install the cards in all of their PCs and be assured that they would be able to talk to each other. This solves the interoperability problem for this company by creating a “CalpNet standard”. This would be an example of a proprietary standard: it is owned by one company or person. The problem with proprietary standards is that other companies are excluded from the standard development process, and therefore have little incentive to cooperate with the standard owner. In fact, just the opposite: they have a strong motivation to develop a competing proprietary standard, even if it doesn't improve on the existing one. So when my competition sees what I am doing, they are not going to create network interface cards that work with CalpNet, which would require paying me a royalty. Instead, they're going to develop a new line of networking hardware called JaNet, which is very similar to CalpNet in operation but uses different connectors, cable and logic. They too will try to sell bunches of cards and cables, to my customers if possible! You can see what the problem is here: the market ends up with different companies using different products that can't interoperate.
If you install CalpNet, you have to come to me for any upgrades or changes: you have no choice. What if two companies merge, where one has 50 machines using CalpNet and the other 40 machines using JaNet? IT would have a problem. It would be solvable, but it would really be better not to have the problem in the first place. And how could you create something like the Internet if everyone's networks used different “standards”? The use of standards makes it much easier to develop software and hardware that link different networks, because software and hardware can be developed one layer at a time.

The OSI Model – breaking communications into layers

As we know already, a network is two or more computers connected together, allowing the computers to communicate and collaborate with each other. When one computer sends a signal to another across a network, a range of different activities has to take place. Think of each of these as a layer. The layers range from the physical layer, where voltages are placed on wires to transmit the data, up to the software (video calling, mail applications and anti-virus software, for example) that wants to use the network to send or receive a signal. Each layer performs its own specific task and happens at a different stage in the process of sending/receiving across a network.


In the diagram, notice that:

- on each layer the packet is added to, with all the necessary information (shown in the middle column)
- each layer communicates with the layer below it on the transmitting side, and the layer above it when receiving
- at the sending computer we start at layer 7 and work down
- at the receiving end we start at layer 1 and work up
- the middle column represents the packet that will be 'packaged up', sent and 'unpacked'
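The wrapping and unwrapping described above can be sketched as nested headers. The layer names are simplified labels, and real headers carry far more than a name:

```python
def encapsulate(data):
    """Sending side: each layer wraps the packet from the layer above
    with its own header, so the data-link header ends up outermost."""
    packet = data
    for header in ['app', 'presentation', 'session',
                   'transport', 'network', 'data-link']:
        packet = {'header': header, 'payload': packet}
    return packet

def decapsulate(packet):
    """Receiving side: strip the headers off in the reverse order
    until the original data is exposed."""
    while isinstance(packet, dict):
        packet = packet['payload']
    return packet
```

This mirrors the diagram: the sender works down the stack adding headers, and the receiver works back up, removing one header per layer.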

The OSI (Open Systems Interconnection) Model breaks the various aspects of a computer network into seven distinct layers. Each successive layer envelops the layer beneath it, hiding its details from the levels above. The OSI Model isn't itself a networking standard in the same sense that Ethernet and TCP/IP are. Rather, the OSI Model is a framework into which the various networking standards can fit. The OSI Model specifies what aspects of a network's operation can be addressed by various network standards. So, in a sense, the OSI Model is a standard's standard. The first three layers are sometimes called the lower layers. They deal with the mechanics of how information is sent from one computer to another over a network. Layers 4–7 are sometimes called the upper layers. They deal with how applications relate to the network through application programming interfaces.

Layer 1: The Physical Layer

The bottom layer of the OSI Model is the Physical Layer. It addresses the physical characteristics of the network, such as the types of cables used to connect devices, the types of connectors used, how long the cables can be, and so on. For example, the Ethernet standard for 100BaseT cable specifies the electrical characteristics of the twisted-pair cables, the size and shape of the connectors, the maximum length of the cables, and so on. Another aspect of the Physical Layer is that it specifies the electrical characteristics of the signals used to transmit data over cables from one network node to another. The Physical Layer doesn't define any particular meaning for those signals other than the basic binary values 0 and 1; the higher levels of the OSI Model must assign meanings to the bits transmitted at the Physical Layer. One type of Physical Layer device commonly used in networks is a repeater. A repeater is used to regenerate signals when you need to exceed the cable length allowed by the Physical Layer standard, or when you need to redistribute a signal from one cable onto two or more cables. An old-style 10BaseT hub is also a Physical Layer device. Technically, a hub is a multi-port repeater, because its purpose is to regenerate every signal received on any port on all the hub's other ports. Repeaters and hubs don't examine the contents of the signals they regenerate; if they did, they'd be working at the Data Link Layer, not the Physical Layer.

Layer 2: The Data Link Layer

The Data Link Layer is the lowest layer at which meaning is assigned to the bits that are transmitted over the network. Data-link protocols address things such as the size of each packet of data to be sent, a means of addressing each packet so that it's delivered to the intended recipient, and a way to ensure that two or more nodes don't try to transmit data on the network at the same time. The Data Link Layer also provides basic error detection and correction, to ensure that the data sent is the same as the data received. If an uncorrectable error occurs, the data-link standard must specify how the node is to be informed of the error so it can retransmit the data.
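The simplest form of the error detection just described is a parity bit, sketched here for illustration (real data-link protocols use stronger checks, such as CRCs):

```python
def add_parity_bit(bits):
    """Append an even-parity bit to a list of data bits, so the total
    number of 1s in the frame is even."""
    return bits + [sum(bits) % 2]

def check_parity(frame):
    """Return True if the frame (data bits + parity bit) arrived intact,
    as far as even parity can tell."""
    return sum(frame) % 2 == 0
```

A single flipped bit makes the 1s count odd and the check fails; note that two flipped bits cancel out, which is why parity alone is such a weak check.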
At the Data Link Layer, each device on the network has an address known as the Media Access Control address, or MAC address. This is the actual hardware address, assigned to the device at the factory. You can see the MAC address for a computer's network adapter by opening a command window and running the ipconfig /all command. Layer 3: The Network Layer The Network Layer handles the task of routing network messages from one computer to another. The two most popular Layer-3 protocols are IP (which is usually paired with TCP) and IPX (normally paired with SPX for use with Novell and Windows networks). One important function of the Network Layer is logical addressing. Every network device has a physical address called a MAC address, which is assigned to the device at the factory. When you buy a network interface card to install in a computer, the MAC address of that card can't be changed. But what if you want to use some other addressing scheme to refer to the computers and other devices on your network? This is where the concept of logical addressing comes in; a logical address gives a network device a place where it can be accessed on the network — using an address that you assign. Logical addresses are created and used by Network Layer protocols, such as IP. The Network Layer protocol translates logical addresses to MAC addresses. For example, if you use IP as the Network


Layer protocol, devices on the network are assigned IP addresses, such as 207.120.67.30. Because the IP protocol must use a Data Link Layer protocol to actually send packets to devices, IP must know how to translate the IP address of a device into the correct MAC address for the device. You can use the ipconfig command to see the IP address of your computer.

Another important function of the Network Layer is routing: finding an appropriate path through the network. Routing comes into play when a computer on one network needs to send a packet to a computer on another network. In this case, a Network Layer device called a router forwards the packet to the destination network. An important feature of routers is that they can connect networks that use different Layer-2 protocols. For example, a router can connect a local-area network that uses Ethernet to a wide-area network that runs on a different set of low-level protocols, such as T1.

Layer 4: The Transport Layer

The Transport Layer is the basic layer at which one network computer communicates with another. It is where you'll find one of the most popular networking protocols: TCP. The main purpose of the Transport Layer is to ensure that packets move over the network reliably and without errors. It does this by establishing connections between network devices, acknowledging the receipt of packets, and resending packets that aren't received or arrive corrupted. In many cases, the Transport Layer protocol divides large messages into smaller packets that can be sent over the network efficiently, then reassembles the message on the receiving end, making sure that all packets in a single transmission are received and no data is lost.

Layer 5: The Session Layer

The Session Layer establishes sessions (instances of communication and data exchange) between network nodes.
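The connection establishment and reliable delivery described for the Transport and Session layers are exactly what TCP provides, and Python's socket module exposes them directly. A minimal sketch, with a one-shot echo server and a client talking over the loopback interface (the OS picks a free port):

```python
import socket
import threading

# A tiny TCP echo server: accept one connection, echo back what it receives.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS choose a free port
server.listen(1)
host, port = server.getsockname()

def serve_once():
    conn, _addr = server.accept()  # TCP's connection handshake completes here
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)         # sendall keeps sending until all bytes go out

t = threading.Thread(target=serve_once)
t.start()

# The client side: establish the connection, send, and read back the echo.
with socket.create_connection((host, port)) as client:
    client.sendall(b"hello, transport layer")
    reply = client.recv(1024)

t.join()
server.close()
print(reply)  # b'hello, transport layer'
```

Everything below the socket API (acknowledgements, retransmission, reassembly) is handled by the operating system's TCP implementation; the application simply sees a reliable byte stream.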
A session must be established before data can be transmitted over the network. The Session Layer makes sure that these sessions are properly established and maintained.

Layer 6: The Presentation Layer

The Presentation Layer is responsible for converting the data sent over the network from one type of representation to another. For example, the Presentation Layer can apply sophisticated compression techniques so that fewer bytes of data are required to represent the information when it's sent over the network; at the other end of the transmission, the Presentation Layer uncompresses the data. The Presentation Layer can also scramble the data before it's transmitted and unscramble it at the other end, using a sophisticated encryption technique.

Layer 7: The Application Layer

The highest layer of the OSI model, the Application Layer, deals with the techniques that application programs use to communicate with the network. The name of this layer is a little confusing, because application programs (such as Excel or Word) aren't actually part of the layer. Rather, the Application Layer represents the level at which application programs interact with the network, using programming interfaces to request network services. One of the most commonly used application


layer protocols is HTTP, which stands for HyperText Transfer Protocol. HTTP is the basis of the World Wide Web.

VPN

A Virtual Private Network (VPN) is a network that uses encryption to allow IP traffic to travel securely over a TCP/IP network. A VPN is created using software called a VPN client, and it uses encrypted and authenticated links to provide remote access and routed connections between private networks or computers. A VPN can be used over a local area network, across a WAN connection, over the Internet, and even between a client and a server over a dial-up connection through the Internet.

VPNs were created to address two different problems: the high cost of the dedicated leased lines needed for branch-office communications, and the need to give employees a way of securely connecting to the headquarters' network when they were on business out of town or working from home.

VPNs work by using a tunneling protocol that encrypts packet contents and wraps them in an unencrypted packet. Imagine you could blow a soap bubble in the shape of a tube, and only you and your friend could talk through it. The bubble is temporary; when you want to have another conversation, you have to create another bubble. A VPN's channel is much the same: a temporary, direct session. This is what is commonly referred to as tunneling.

The VPN also exchanges a set of shared secrets to create an encryption key. The traffic traveling along the established channel is wrapped in an encrypted package that has an address on the outside, but whose contents are hidden from view. It's rather like a candy wrapper: you can see the candy, but you don't know what the candy looks like on the inside. The same thing happens with encrypted traffic. The original contents are hidden from view, but the packet carries enough information to get it to its destination. After the data reaches its destination, the wrapper is safely removed.
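The wrap-and-address idea above can be sketched in a few lines of Python. This is a toy illustration of encapsulation only: the "encryption" here is a keystream derived with SHA-256, standing in for the vetted ciphers (such as AES within IPsec) that a real VPN would use, and the 6-byte outer header format is invented for the example.

```python
import hashlib
import struct

def _keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: repeated SHA-256 of key + counter. NOT real
    # cryptography; a real VPN uses a vetted cipher such as AES.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap(inner_packet: bytes, key: bytes, dest_ip: str) -> bytes:
    # Encrypt the inner packet, then prepend a plaintext outer header
    # (destination address + length) that intermediate routers can read
    # in order to deliver the packet.
    encrypted = bytes(a ^ b for a, b in
                      zip(inner_packet, _keystream(key, len(inner_packet))))
    header = struct.pack("!4sH", bytes(map(int, dest_ip.split("."))),
                         len(encrypted))
    return header + encrypted

def unwrap(outer_packet: bytes, key: bytes) -> bytes:
    # Only an endpoint that knows the shared key can recover the contents.
    _addr, length = struct.unpack("!4sH", outer_packet[:6])
    encrypted = outer_packet[6:6 + length]
    return bytes(a ^ b for a, b in zip(encrypted, _keystream(key, length)))

key = b"shared-secret"           # the "set of shared secrets" from the text
packet = wrap(b"confidential payload", key, "203.0.113.7")
assert packet[6:] != b"confidential payload"        # contents hidden in transit
assert unwrap(packet, key) == b"confidential payload"
```

Routers along the path only ever look at the first six bytes (the "address on the outside of the package"); the wrapped contents stay opaque until the far tunnel endpoint removes the wrapper.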


Tunnel endpoints are devices that can encrypt and decrypt packets. When you create a VPN, you establish a secure association between the two tunnel endpoints, and these endpoints create a secure, virtual communication channel. Only the destination tunnel endpoint can unwrap packets and decrypt their contents. Routers use the unencrypted packet headers to deliver the packet to the destination device; intermediate routers along the path cannot (and do not) read the encrypted packet contents.

VPN Protocols

PPTP - Point-to-Point Tunneling Protocol has long been the standard protocol for internal business VPNs. It is a tunneling protocol only, and relies on various authentication methods to provide security. Available as standard on just about every VPN-capable platform and device, and thus easy to set up without the need to install additional software, it remains a popular choice both for businesses and VPN providers. It also has the advantage of a low computational overhead (i.e. it's quick). These days PPTP uses 128-bit encryption keys, but this wasn't always the case. It has known security flaws, and it will come as no surprise that the NSA almost certainly decrypts PPTP-encrypted communications as standard. Perhaps more worrying is that the NSA almost certainly has decrypted (or is in the process of decrypting) the vast amounts of older stored data that was encrypted back when even security experts considered PPTP to be secure.

L2TP - Layer 2 Tunnel Protocol is a VPN protocol that on its own does not provide any encryption or confidentiality to the traffic that passes through it. For this reason it is usually implemented with the IPsec encryption suite to provide security and privacy. L2TP/IPsec is built in to all modern operating systems and VPN-capable devices, and is just as easy and quick to set up as PPTP (in fact it usually uses the same client).
Problems can arise, however, because L2TP/IPsec uses UDP port 500 for its IKE key exchange (along with UDP 1701 for L2TP and UDP 4500 for NAT traversal), ports that are easily blocked by firewalls; it may therefore require advanced configuration (port forwarding) when used behind a firewall. IPsec encryption has no major known vulnerabilities, and if properly implemented it may still be secure. However, Edward Snowden's revelations have strongly hinted at the standard being compromised by the NSA.

SSTP - Secure Socket Tunneling Protocol is largely a Windows-only option. SSTP uses SSL 3.0 and is a proprietary standard owned by Microsoft. This means that the code is not open to public scrutiny, and Microsoft's history of cooperating with the NSA, together with ongoing speculation about possible backdoors built in to the Windows operating system, does not inspire confidence in the standard.

SOCKS - SOCKS servers do not interpret network traffic at all, which makes them much more flexible, but because they usually handle more traffic they are also usually slower. The big advantage of the SOCKS protocol is that it supports any kind of Internet traffic, such as POP3 and SMTP for email, IRC chat, FTP for uploading files to websites, and torrent traffic. The latest iteration of the protocol is SOCKS5.
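Because a SOCKS server doesn't interpret the traffic it relays, the protocol itself is very small. Per RFC 1928, a SOCKS5 client opens with a greeting listing its authentication methods and then sends a binary CONNECT request. The helpers below build those two messages for a domain-name target; this is a sketch of the wire format for illustration, not a full client:

```python
import struct

SOCKS_VERSION = 5
CMD_CONNECT = 1
ATYP_DOMAIN = 3   # address type 3: fully-qualified domain name

def greeting() -> bytes:
    # Version 5, offering one auth method: 0x00 (no authentication).
    return bytes([SOCKS_VERSION, 1, 0x00])

def connect_request(host: str, port: int) -> bytes:
    # Layout: VER CMD RSV ATYP | len(host) host | port (network byte order)
    addr = host.encode("ascii")
    return (struct.pack("!BBBB", SOCKS_VERSION, CMD_CONNECT, 0, ATYP_DOMAIN)
            + bytes([len(addr)]) + addr
            + struct.pack("!H", port))

assert greeting() == b"\x05\x01\x00"

req = connect_request("example.com", 443)
assert req[0] == 5                      # SOCKS version
assert req[5:16] == b"example.com"      # length-prefixed hostname
assert req[-2:] == struct.pack("!H", 443)
```

Once the server replies that the connection succeeded, the client simply streams whatever traffic it likes (HTTP, SMTP, torrents) through the open socket, which is why SOCKS is protocol-agnostic.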


VPN Uses

VPNs are hugely popular with corporations as a means of securing sensitive data when connecting remote data centers. These networks are also becoming increasingly common among individual users, and not just torrenters. Because VPNs use a combination of dedicated connections and encryption protocols to generate virtual P2P connections, even if snoopers did manage to siphon off some of the transmitted data, they'd be unable to read it on account of the encryption. What's more, VPNs allow individuals to spoof their physical location: the user's actual IP address is replaced by one belonging to the VPN provider.

The underlying theme is that using a VPN is one of the best ways to ensure that your data is protected. It's an added safety measure if you already work on a secure network, but it's critical to use a VPN over open public Wi-Fi connections, hence their origins in the traveling corporate space. Attackers prey on unsuspecting people in coffee shops, hotels and airports, but a VPN provides the security to stop them. A VPN secures your business computers' Internet connection when using an untrusted public network, making sure that all of the data you send and receive is encrypted and secured.

You can save unnecessary expenses by using a VPN to eliminate the need for leased lines and call-trafficking equipment in your business establishment, office or store. If your business offers telecommuting jobs, you can allow your telecommuting employees to access your company's server from home or anywhere else. Globalization has turned the world into a global village in which businesses increasingly hire individuals who cannot relocate; firms can use VPNs to give these employees access to the company's data center, network or vital resources from a remote location. In this way you can save a lot of money by not renting extra office space, providing phones, and incurring other expenses.
VPNs have become one of the major networking technologies of the past few years. Businesses can exchange important data with their trading partners or clients easily and securely, using encrypted and secured connections over the Internet.

