Cisco CCNP Switching Study Guide v1.51
© 2012 Aaron Balchunas
aaron@routeralley.com
http://www.routeralley.com
Foreword:

This study guide is intended to provide those pursuing the CCNP certification with a framework of what concepts need to be studied. This is not a comprehensive document containing all the secrets of the CCNP Switching exam, nor is it a “braindump” of questions and answers. This document is freely given, and can be freely distributed. However, the contents of this document cannot be altered without my written consent, nor can this document be sold or published without my expressed consent. I sincerely hope that this document provides some assistance and clarity in your studies.
________________________________________________
*** All original material copyright © 2012 by Aaron Balchunas (aaron@routeralley.com), unless otherwise noted. All other material copyright © of their respective owners. This material may be copied and used freely, but may not be altered or sold without the expressed written consent of the owner of the above copyright. Updated material may be found at http://www.routeralley.com.
Table of Contents

Part I – General Switching Concepts
   Section 1   Ethernet Technologies
   Section 2   Hubs vs. Switches vs. Routers
   Section 3   Switching Models
   Section 4   Switching Tables

Part II – Switch Configuration
   Section 5   Basic Switch Management
   Section 6   Switch Port Configuration

Part III – Switching Protocols and Functions
   Section 7   VLANs and VTP
   Section 8   EtherChannel
   Section 9   Spanning-Tree Protocol
   Section 10  Multilayer Switching
   Section 11  SPAN

Part IV – Advanced Switch Services
   Section 12  Redundancy and Load Balancing
   Section 13  Multicast

Part V – Switch Security
   Section 14  AAA
   Section 15  Switch Port and VLAN Security

Part VI – QoS
   Section 16  Introduction to Quality of Service
   Section 17  QoS Classification and Marking
   Section 18  QoS Queuing
   Section 19  QoS Congestion Avoidance
________________________________________________
Part I – General Switching Concepts
________________________________________________
Section 1 - Ethernet Technologies

What is Ethernet?

Ethernet is a family of technologies that provides data-link and physical specifications for controlling access to a shared network medium. It has emerged as the dominant technology used in LAN networking.

Ethernet was originally developed by Xerox in the 1970s, and operated at 2.94Mbps. The technology was standardized as Ethernet Version 1 by a consortium of three companies - DEC, Intel, and Xerox, collectively referred to as DIX - and further refined as Ethernet II in 1982.

In the mid 1980s, the Institute of Electrical and Electronics Engineers (IEEE) published a formal standard for Ethernet, defined as the IEEE 802.3 standard. The original 802.3 Ethernet operated at 10Mbps, and successfully supplanted competing LAN technologies, such as Token Ring.

Ethernet has several benefits over other LAN technologies:
• Simple to install and manage
• Inexpensive
• Flexible and scalable
• Easy to interoperate between vendors

(References: http://docwiki.cisco.com/wiki/Ethernet_Technologies; http://www.techfest.com/networking/lan/ethernet1.htm)
Ethernet Cabling Types

Ethernet can be deployed over three types of cabling:
• Coaxial cabling – almost entirely deprecated in Ethernet networking
• Twisted-pair cabling
• Fiber optic cabling

Coaxial cable, often abbreviated as coax, consists of a single wire surrounded by insulation, a metallic shield, and a plastic sheath. The shield helps protect against electromagnetic interference (EMI), which can cause attenuation, a reduction of the strength and quality of a signal. EMI can be generated by a variety of sources, such as fluorescent light ballasts, microwaves, cell phones, and radio transmitters. Coax is commonly used to deploy cable television to homes and businesses.
Ethernet Cabling Types (continued)

Two types of coax were used historically in Ethernet networks:
• Thinnet
• Thicknet

Thicknet has a wider diameter and more shielding, which supports greater distances. However, it is less flexible than the smaller thinnet, and thus more difficult to work with. A vampire tap is used to physically connect devices to thicknet, while a BNC connector is used for thinnet.

Twisted-pair cable consists of two or four pairs of copper wires in a plastic sheath. Wires in a pair twist around each other to reduce crosstalk, a form of EMI that occurs when the signal from one wire bleeds or interferes with a signal on another wire. Twisted-pair is the most common Ethernet cable.

Twisted-pair cabling can be either shielded or unshielded. Shielded twisted-pair is more resistant to external EMI; however, all forms of twisted-pair suffer from greater signal attenuation than coax cable.

There are several categories of twisted-pair cable, identified by the number of twists per inch of the copper pairs:
• Category 3 or Cat3 - three twists per inch.
• Cat5 - five twists per inch.
• Cat5e - five twists per inch; pairs are also twisted around each other.
• Cat6 – six twists per inch, with improved insulation.

An RJ45 connector is used to connect a device to a twisted-pair cable. The layout of the wires in the connector dictates the function of the cable.

While coax and twisted-pair cabling carry electronic signals, fiber optics uses light to transmit a signal. Ethernet supports two fiber specifications:
• Singlemode fiber – consists of a very small glass core, allowing only a single ray or mode of light to travel across it. This greatly reduces the attenuation and dispersion of the light signal, supporting high bandwidth over very long distances, often measured in kilometers.
• Multimode fiber – consists of a larger core, allowing multiple modes of light to traverse it. Multimode suffers from greater dispersion than singlemode, resulting in shorter supported distances.

Singlemode fiber requires more precise electronics than multimode, and thus is significantly more expensive. Multimode fiber is often used for high-speed connectivity within a datacenter.
Network Topologies

A topology defines both the physical and logical structure of a network. Topologies come in a variety of configurations, including:
• Bus
• Star
• Ring
• Full or partial mesh

Ethernet supports two topology types – bus and star.

Ethernet Bus Topology

In a bus topology, all hosts share a single physical segment (the bus or the backbone) to communicate:
A frame sent by one host is received by all other hosts on the bus. However, a host will only process a frame if it matches the destination hardware address in the data-link header.

Bus topologies are inexpensive to implement, but are almost entirely deprecated in Ethernet. There are several disadvantages to the bus topology:
• Both ends of the bus must be terminated, otherwise a signal will reflect back and cause interference, severely degrading performance.
• Adding or removing hosts to the bus can be difficult.
• The bus represents a single point of failure - a break in the bus will affect all hosts on the segment. Such faults are often very difficult to troubleshoot.

A bus topology is implemented using either thinnet or thicknet coax cable.
Ethernet Star Topology

In a star topology, each host has an individual point-to-point connection to a centralized hub or switch:

A hub provides no intelligent forwarding whatsoever, and will always forward every frame out every port, excluding the port originating the frame. As with a bus topology, a host will only process a frame if it matches the destination hardware address in the data-link header. Otherwise, it will discard the frame.

A switch builds a hardware address table, allowing it to make intelligent forwarding decisions based on frame (data-link) headers. A frame can then be forwarded out only the appropriate destination port, instead of all ports. Hubs and switches are covered in great detail in another guide.

Adding or removing hosts is very simple in a star topology. Also, a break in a cable will affect only that one host, and not the entire network. There are two disadvantages to the star topology:
• The hub or switch represents a single point of failure.
• Equipment and cabling costs are generally higher than in a bus topology.

However, the star is still the dominant topology in modern Ethernet networks, due to its flexibility and scalability. Both twisted-pair and fiber cabling can be used in a star topology.
The Ethernet Frame

An Ethernet frame contains the following fields:

Field                 Length          Description
Preamble              7 bytes         Synchronizes communication
Start of Frame        1 byte          Signals the start of a valid frame
MAC Destination       6 bytes         Destination MAC address
MAC Source            6 bytes         Source MAC address
802.1Q tag            4 bytes         Optional VLAN tag
Ethertype or length   2 bytes         Payload type or frame size
Payload               42-1500 bytes   Data payload
CRC                   4 bytes         Frame error check
Interframe Gap        12 bytes        Required idle period between frames
The preamble is 56 bits of alternating 1s and 0s that synchronizes communication on an Ethernet network. It is followed by an 8-bit start of frame delimiter (10101011) that indicates a valid frame is about to begin. The preamble and the start of frame are not considered part of the actual frame, or calculated as part of the total frame size.

Ethernet uses the 48-bit MAC address for hardware addressing. The first 24 bits of a MAC address identify the manufacturer of the network interface, and the last 24 bits uniquely identify the host. The destination MAC address identifies who is to receive the frame - this can be a single host (a unicast), a group of hosts (a multicast), or all hosts (a broadcast). The source MAC address identifies the host originating the frame.

The 802.1Q tag is an optional field used to identify which VLAN the frame belongs to. VLANs are covered in great detail in another guide.

The 16-bit Ethertype/Length field provides a different function depending on the framing standard - Ethernet II or 802.3. With Ethernet II framing, the field identifies the type of payload in the frame (the Ethertype); this is the framing used by nearly all modern traffic. With 802.3 framing, the field instead identifies the length of the payload. The length of a frame is important – there is both a minimum and maximum frame size.

(Reference: http://www.techfest.com/networking/lan/ethernet2.htm; http://www.dcs.gla.ac.uk/~lewis/networkpages/m04s03EthernetFrame.htm)
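The distinction between unicast, multicast, and broadcast destinations, and the OUI/host split of a MAC address, can be illustrated with a short Python sketch. This is not part of the original guide; the addresses used are arbitrary examples.

```python
# Minimal sketch: classify an Ethernet destination MAC and split a MAC into
# its OUI (manufacturer) and device-specific halves.
def classify_mac(mac):
    octets = [int(x, 16) for x in mac.split(":")]
    if octets == [0xFF] * 6:
        return "broadcast"          # all-ones address: received by all hosts
    if octets[0] & 0x01:
        return "multicast"          # group bit (LSB of the first octet) is set
    return "unicast"

def split_mac(mac):
    octets = mac.split(":")
    return ":".join(octets[:3]), ":".join(octets[3:])   # (OUI, host portion)

print(classify_mac("ff:ff:ff:ff:ff:ff"))    # broadcast
print(classify_mac("01:00:5e:00:00:01"))    # multicast
print(classify_mac("00:1a:2b:3c:4d:5e"))    # unicast
print(split_mac("00:1a:2b:3c:4d:5e"))       # ('00:1a:2b', '3c:4d:5e')
```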
The absolute minimum frame size for Ethernet is 64 bytes (or 512 bits) including headers. A frame that is smaller than 64 bytes will be discarded as a runt. The required fields in an Ethernet header add up to 18 bytes – thus, the frame payload must be a minimum of 46 bytes, to equal the minimum 64-byte frame size. If the payload does not meet this minimum, the payload is padded with 0 bits until the minimum is met.

Note: If the optional 4-byte 802.1Q tag is used, the Ethernet header size will total 22 bytes, requiring a minimum payload of 42 bytes.

By default, the maximum frame size for Ethernet is 1518 bytes – 18 bytes of header fields, and 1500 bytes of payload - or 1522 bytes with the 802.1Q tag. A frame that is larger than the maximum will be discarded as a giant.

With both runts and giants, the receiving host will not notify the sender that the frame was dropped. Ethernet relies on higher-layer protocols, such as TCP, to provide retransmission of discarded frames.

Some Ethernet devices support jumbo frames of 9216 bytes, which provide less overhead due to fewer frames. Jumbo frames must be explicitly enabled on all devices in the traffic path to prevent the frames from being dropped.

The 32-bit Cyclic Redundancy Check (CRC) field is used for error detection. A frame with an invalid CRC will be discarded by the receiving device. This field is a trailer, and not a header, as it follows the payload.

The 96-bit Interframe Gap is a required idle period between frame transmissions, allowing hosts time to prepare for the next frame.

(Reference: http://www.infocellar.com/networks/ethernet/frame.htm)
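As a rough illustration of the frame-size rules above, the following Python sketch (not part of the original guide) computes the padding a payload needs and classifies a frame as a runt, giant, or valid frame. The 18-byte and 22-byte header totals come from the field table earlier in this section.

```python
# Minimal sketch of Ethernet frame-size rules: 64-byte minimum, 1518-byte
# default maximum (1522 bytes with an 802.1Q tag).
def frame_size(payload_len, dot1q=False):
    header = 22 if dot1q else 18                    # MACs + type/length + CRC (+ tag)
    padding = max(0, 64 - (payload_len + header))   # pad up to the 64-byte minimum
    return payload_len + padding + header

def classify(size, dot1q=False):
    if size < 64:
        return "runt (discarded)"
    if size > (1522 if dot1q else 1518):
        return "giant (discarded)"
    return "valid"

print(frame_size(20))               # 64   -> a 20-byte payload is padded to 46 bytes
print(frame_size(1500))             # 1518 -> default maximum frame size
print(classify(frame_size(1500)))   # valid
print(classify(9216))               # giant (unless jumbo frames are enabled end to end)
```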
CSMA/CD and Half-Duplex Communication

Ethernet was originally developed to support a shared media environment. This allowed two or more hosts to use the same physical network medium. There are two methods of communication on a shared physical medium:
• Half-Duplex – hosts can transmit or receive, but not simultaneously
• Full-Duplex – hosts can both transmit and receive simultaneously

On a half-duplex connection, Ethernet utilizes Carrier Sense Multiple Access with Collision Detect (CSMA/CD) to control media access. Carrier sense specifies that a host will monitor the physical link, to determine whether a carrier (or signal) is currently being transmitted. The host will only transmit a frame if the link is idle, and the Interframe Gap has expired.

If two hosts transmit a frame simultaneously, a collision will occur. This renders the collided frames unreadable. Once a collision is detected, both hosts will send a 32-bit jam sequence to ensure all transmitting hosts are aware of the collision. The collided frames are also discarded. Both devices will then wait a random amount of time before resending their respective frames, to reduce the likelihood of another collision. This is controlled by a backoff timer process.

Hosts must detect a collision before a frame is finished transmitting, otherwise CSMA/CD cannot function reliably. This is accomplished using a consistent slot time, the time required to send a specific amount of data from one end of the network and then back, measured in bits. A host must continue to transmit a frame for a minimum of the slot time. In a properly configured environment, a collision should always occur within this slot time, as enough time has elapsed for the frame to have reached the far end of the network and back, and thus all devices should be aware of the transmission.

The slot time effectively limits the physical length of the network – if a network segment is too long, a host may not detect a collision within the slot time period. A collision that occurs after the slot time is referred to as a late collision.

For 10 and 100Mbps Ethernet, the slot time was defined as 512 bits, or 64 bytes. Note that this is the equivalent of the minimum Ethernet frame size of 64 bytes. The slot time actually defines this minimum. For Gigabit Ethernet, the slot time was defined as 4096 bits.

(Reference: http://www.techfest.com/networking/lan/ethernet3.htm)
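The "random amount of time" chosen by the backoff timer is commonly described as truncated binary exponential backoff. The Python sketch below is an illustration of that idea, not part of the original guide: after the Nth consecutive collision, a host waits a random number of slot times between 0 and 2^k - 1, where k is capped at 10.

```python
import random

SLOT_TIME_BITS = 512    # slot time for 10/100 Mbps Ethernet, in bit times

def backoff_slots(collision_count):
    # Truncated binary exponential backoff: the window doubles with each
    # consecutive collision, up to a cap of 2^10 - 1 slot times.
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1)

def backoff_bit_times(collision_count):
    return backoff_slots(collision_count) * SLOT_TIME_BITS

for attempt in (1, 2, 3, 10):
    print(attempt, backoff_bit_times(attempt))
# After 16 consecutive collisions, the frame is dropped entirely.
```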
Full-Duplex Communication

Unlike half-duplex, full-duplex Ethernet supports simultaneous communication by providing separate transmit and receive paths. This effectively doubles the throughput of a network interface.

Full-duplex Ethernet was formalized in IEEE 802.3x, and does not use CSMA/CD or slot times. Collisions should never occur on a functional full-duplex link. Greater distances are supported when using full-duplex over half-duplex.

Full-duplex is only supported on a point-to-point connection between two devices. Thus, a bus topology using coax cable does not support full-duplex. Only a connection between two hosts or between a host and a switch supports full-duplex. A host connected to a hub is limited to half-duplex.

Both hubs and half-duplex communication are mostly deprecated in modern networks.

Categories of Ethernet

The original 802.3 Ethernet standard has evolved over time, supporting faster transmission rates, longer distances, and newer hardware technologies. These revisions or amendments are identified by the letter appended to the standard, such as 802.3u or 802.3z.

Major categories of Ethernet have also been organized by their speed:
• Ethernet (10Mbps)
• Fast Ethernet (100Mbps)
• Gigabit Ethernet
• 10 Gigabit Ethernet

The physical standards for Ethernet are often labeled by their transmission rate, signaling type, and media type. For example, 100baseT represents the following:
• The first part (100) represents the transmission rate, in Mbps.
• The second part (base) indicates that it is a baseband transmission.
• The last part (T) represents the physical media type (twisted-pair).

Ethernet communication is baseband, which dedicates the entire capacity of the medium to one signal or channel. In broadband, multiple signals or channels can share the same link, through the use of modulation (usually frequency modulation).
Ethernet (10 Mbps)

Ethernet is now a somewhat generic term, describing the entire family of technologies. However, Ethernet traditionally referred to the original 802.3 standard, which operated at 10 Mbps. Ethernet supports coax, twisted-pair, and fiber cabling. Ethernet over twisted-pair uses two of the four pairs.

Common Ethernet physical standards include:

IEEE Standard   Physical Standard   Cable Type           Maximum Speed   Maximum Cable Length
802.3a          10base2             Coaxial (thinnet)    10 Mbps         185 meters
802.3           10base5             Coaxial (thicknet)   10 Mbps         500 meters
802.3i          10baseT             Twisted-pair         10 Mbps         100 meters
802.3j          10baseF             Fiber                10 Mbps         2000 meters
Both 10baseT and 10baseF support full-duplex operation, effectively doubling the bandwidth to 20 Mbps. Remember, only a connection between two hosts or between a host and a switch supports full-duplex.

The maximum distance of an Ethernet segment can be extended through the use of a repeater. A hub or a switch can also serve as a repeater.

Fast Ethernet (100 Mbps)

In 1995, the IEEE formalized 802.3u, a 100 Mbps revision of Ethernet that became known as Fast Ethernet. Fast Ethernet supports both twisted-pair copper and fiber cabling, and supports both half-duplex and full-duplex.

Common Fast Ethernet physical standards include:

IEEE Standard   Physical Standard   Cable Type        Maximum Speed   Maximum Cable Length
802.3u          100baseTX           Twisted-pair      100 Mbps        100 meters
802.3u          100baseT4           Twisted-pair      100 Mbps        100 meters
802.3u          100baseFX           Multimode fiber   100 Mbps        400-2000 meters
802.3u          100baseSX           Multimode fiber   100 Mbps        500 meters
100baseT4 was never widely implemented, and only supported half-duplex operation. 100baseTX is the dominant Fast Ethernet physical standard. 100baseTX uses two of the four pairs in a twisted-pair cable, and requires Category 5 cable for reliable performance.
Speed and Duplex Autonegotiation

Fast Ethernet is backwards-compatible with the original Ethernet standard. A device that supports both Ethernet and Fast Ethernet is often referred to as a 10/100 device.

Fast Ethernet also introduced the ability to autonegotiate both the speed and duplex of an interface. Autonegotiation will attempt to use the fastest speed available, and will attempt to use full-duplex if both devices support it.

Speed and duplex can also be hardcoded, preventing negotiation. The configuration must be consistent on both sides of the connection. Either both sides must be configured to autonegotiate, or both sides must be hardcoded with identical settings. Otherwise a duplex mismatch error can occur.

For example, if a workstation’s NIC is configured to autonegotiate, and the switch interface is hardcoded for 100Mbps and full-duplex, then a duplex mismatch will occur. The workstation’s NIC will sense the correct speed of 100Mbps, but will not detect the correct duplex and will default to half-duplex.

If the duplex is mismatched, collisions will occur. Because the full-duplex side of the connection does not utilize CSMA/CD, performance is severely degraded. These issues can be difficult to troubleshoot, as the network connection will still function, but will be excruciatingly slow.

When autonegotiation was first developed, manufacturers did not always adhere to the same standard. This resulted in frequent mismatch issues, and a sentiment of distrust towards autonegotiation. Though modern network hardware has alleviated most of the incompatibility, many administrators are still skeptical of autonegotiation and choose to hardcode all connections. Another common practice is to hardcode server and datacenter connections, but to allow user devices to autonegotiate.

Gigabit Ethernet, covered in the next section, provided several enhancements to autonegotiation, such as hardware flow control. Most manufacturers recommend autonegotiation on Gigabit Ethernet interfaces as a best practice.
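The duplex-mismatch scenario above can be modeled with a small Python sketch. This is an illustration only, not from the original guide, and it is simplified relative to the real 802.3 negotiation process: if one side is hardcoded and therefore advertises nothing, the autonegotiating side can still sense the speed but falls back to half-duplex.

```python
def resolve_autoneg(local_autoneg, local_speed, local_duplex,
                    peer_autoneg, peer_speed, peer_duplex):
    # Simplified model of speed/duplex resolution on one side of the link.
    if local_autoneg and peer_autoneg:
        speed = min(local_speed, peer_speed)                    # highest common speed
        duplex = "full" if local_duplex == peer_duplex == "full" else "half"
        return speed, duplex
    if local_autoneg and not peer_autoneg:
        # Peer is hardcoded and advertises nothing: speed is sensed,
        # but duplex defaults to half -> potential duplex mismatch.
        return peer_speed, "half"
    return local_speed, local_duplex                            # this side is hardcoded

# Workstation autonegotiating, switch port hardcoded to 100 Mbps / full-duplex:
print(resolve_autoneg(True, 100, "full", False, 100, "full"))   # (100, 'half')
```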
Gigabit Ethernet

Gigabit Ethernet operates at 1000 Mbps, and supports both twisted-pair (802.3ab) and fiber cabling (802.3z). Gigabit over twisted-pair uses all four pairs, and requires Category 5e cable for reliable performance.

Gigabit Ethernet is backwards-compatible with the original Ethernet and Fast Ethernet. A device that supports all three is often referred to as a 10/100/1000 device. Gigabit Ethernet supports both half-duplex and full-duplex operation. Full-duplex Gigabit Ethernet effectively provides 2000 Mbps of throughput.

Common Gigabit Ethernet physical standards include:

IEEE Standard   Physical Standard   Cable Type         Speed    Maximum Cable Length
802.3ab         1000baseT           Twisted-pair       1 Gbps   100 meters
802.3z          1000baseSX          Multimode fiber    1 Gbps   500 meters
802.3z          1000baseLX          Multimode fiber    1 Gbps   500 meters
802.3z          1000baseLX          Singlemode fiber   1 Gbps   Several kilometers
In modern network equipment, Gigabit Ethernet has replaced both Ethernet and Fast Ethernet.
10 Gigabit Ethernet

10 Gigabit Ethernet operates at 10000 Mbps, and supports both twisted-pair (802.3an) and fiber cabling (802.3ae). 10 Gigabit over twisted-pair uses all four pairs, and requires Category 6 cable for reliable performance.

Common 10 Gigabit Ethernet physical standards include:

IEEE Standard   Physical Standard   Cable Type         Speed     Maximum Cable Length
802.3an         10Gbase-T           Twisted-pair       10 Gbps   100 meters
802.3ae         10Gbase-SR          Multimode fiber    10 Gbps   300 meters
802.3ae         10Gbase-LR          Singlemode fiber   10 Gbps   Several kilometers
10 Gigabit Ethernet is usually used for high-speed connectivity within a datacenter, and is predominantly deployed over fiber.
Twisted-Pair Cabling Overview

A typical twisted-pair cable consists of four pairs of copper wires, for a total of eight wires. Each side of the cable is terminated using an RJ45 connector, which has eight pins. When the connector is crimped onto the cable, these pins make contact with each wire.

The wires themselves are assigned a color to distinguish them. The color is dictated by the cabling standard - TIA/EIA-568B is the current standard:

Pin#   Color
1      White Orange
2      Orange
3      White Green
4      Blue
5      White Blue
6      Green
7      White Brown
8      Brown
Each wire is assigned a specific purpose. For example, both Ethernet and Fast Ethernet use two wires to transmit, and two wires to receive data, while the other four pins remain unused. For communication to occur, transmit pins must connect to the receive pins of the remote host. This does not occur in a straight-through configuration:
The pins must be crossed-over for communication to be successful:
The crossover can be controlled either by the cable, or an intermediary device, such as a hub or switch.
Twisted-Pair Cabling – Cable and Interface Types

The layout or pinout of the wires in the RJ45 connector dictates the function of the cable. There are three common types of twisted-pair cable:
• Straight-through cable
• Crossover cable
• Rollover cable

The network interface type determines when to use each cable:
• Medium Dependent Interface (MDI)
• Medium Dependent Interface with Crossover (MDIX)

Host interfaces are generally MDI, while hub or switch interfaces are typically MDIX.
Twisted-Pair Cabling – Straight-Through Cable

A straight-through cable is used in the following circumstances:
• From a host to a hub – MDI to MDIX
• From a host to a switch - MDI to MDIX
• From a router to a hub - MDI to MDIX
• From a router to a switch - MDI to MDIX

Essentially, a straight-through cable is used to connect any device to a hub or switch, except for another hub or switch. The hub or switch provides the crossover (or MDIX) function to connect transmit pins to receive pins.

The pinout on each end of a straight-through cable must be identical. The TIA/EIA-568B standard for a straight-through cable is as follows:

Pin#   Connector 1      Connector 2      Pin#
1      White Orange     White Orange     1
2      Orange           Orange           2
3      White Green      White Green      3
4      Blue             Blue             4
5      White Blue       White Blue       5
6      Green            Green            6
7      White Brown      White Brown      7
8      Brown            Brown            8
A straight-through cable is often referred to as a patch cable.
Twisted-Pair Cabling – Crossover Cable

A crossover cable is used in the following circumstances:
• From a host to a host – MDI to MDI
• From a hub to a hub - MDIX to MDIX
• From a switch to a switch - MDIX to MDIX
• From a hub to a switch - MDIX to MDIX
• From a router to a router - MDI to MDI

Remember that a hub or a switch will provide the crossover function. However, when connecting a host directly to another host (MDI to MDI), the crossover function must be provided by a crossover cable.

A crossover cable is often required to uplink a hub to another hub, or to uplink a switch to another switch. This is because the crossover is performed twice, once on each hub or switch (MDIX to MDIX), negating the crossover.

Modern devices can now automatically detect whether the crossover function is required, negating the need for a crossover cable. This functionality is referred to as Auto-MDIX, and is now standard with Gigabit Ethernet, which uses all eight wires to both transmit and receive. Auto-MDIX requires that autonegotiation be enabled.

To create a crossover cable, the transmit pins must be swapped with the receive pins on one end of the cable:
• Pins 1 and 3
• Pins 2 and 6

Pin#   Connector 1      Connector 2      Pin#
1      White Orange     White Green      3
2      Orange           Green            6
3      White Green      White Orange     1
4      Blue             Blue             4
5      White Blue       White Blue       5
6      Green            Orange           2
7      White Brown      White Brown      7
8      Brown            Brown            8
Note that the Orange and Green pins have been swapped on Connector 2. The first connector is using the TIA/EIA-568B standard, while the second connector is using the TIA/EIA-568A standard.
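The pin swaps in the crossover table can be expressed as a simple mapping. The Python sketch below is illustrative only (not part of the original guide); it shows that the 10/100 Mbps transmit pins (1 and 2) on one end land on the receive pins (3 and 6) of the other end.

```python
# Crossover mapping: pins 1<->3 and 2<->6 are swapped, all others pass straight through.
CROSSOVER_MAP = {1: 3, 2: 6, 3: 1, 6: 2, 4: 4, 5: 5, 7: 7, 8: 8}

TX_PINS = (1, 2)    # 10/100 Mbps transmit pins on an MDI (host) interface
RX_PINS = (3, 6)    # 10/100 Mbps receive pins on an MDI (host) interface

print([CROSSOVER_MAP[p] for p in TX_PINS])   # [3, 6] -> lands on the far end's receive pins
print([CROSSOVER_MAP[p] for p in RX_PINS])   # [1, 2] -> lands on the far end's transmit pins
```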
Twisted-Pair – Rollover Cable

A rollover cable is used to connect a workstation or laptop into a Cisco device’s console or auxiliary port, for management purposes. A rollover cable is often referred to as a console cable, and its sheathing is usually flat and light-blue in color.

To create a rollover cable, the pins are completely reversed on one end of the cable:

Pin#   Connector 1      Connector 2      Pin#
1      White Orange     Brown            8
2      Orange           White Brown      7
3      White Green      Green            6
4      Blue             White Blue       5
5      White Blue       Blue             4
6      Green            White Green      3
7      White Brown      Orange           2
8      Brown            White Orange     1
Rollover cables can be used to configure Cisco routers, switches, and firewalls.
Power over Ethernet (PoE)

Power over Ethernet (PoE) allows both data and power to be sent across the same twisted-pair cable, eliminating the need to provide separate power connections. This is especially useful in areas where installing separate power might be expensive or difficult.

PoE can be used to power many devices, including:
• Voice over IP (VoIP) phones
• Security cameras
• Wireless access points
• Thin clients

PoE was originally formalized as 802.3af, which can provide roughly 13W of power to a device. 802.3at further enhanced PoE, supporting 25W or more power to a device. Ethernet, Fast Ethernet, and Gigabit Ethernet all support PoE.

Power can be sent across either the unused pairs in a cable, or the data transmission pairs, which is referred to as phantom power. Gigabit Ethernet requires the phantom power method, as it uses all eight wires in a twisted-pair cable.

The device that provides power is referred to as the Power Source Equipment (PSE). PoE can be supplied using an external power injector, though each powered device requires a separate power injector. More commonly, an 802.3af-compliant network switch is used to provide power to many devices simultaneously. The power supplies in the switch must be large enough to support both the switch itself, and the devices it is powering.
(Reference: http://www.belden.com/docs/upload/PoE_Basics_WP.pdf)
Section 2 - Hubs vs. Switches vs. Routers

Layered Communication

Network communication models are generally organized into layers. The OSI model specifically consists of seven layers, with each layer representing a specific networking function. These functions are controlled by protocols, which govern end-to-end communication between devices.

As data is passed from the user application down the virtual layers of the OSI model, each of the lower layers adds a header (and sometimes a trailer) containing protocol information specific to that layer. These headers are called Protocol Data Units (PDUs), and the process of adding these headers is referred to as encapsulation.

The PDU of each lower layer is identified with a unique term:

#   Layer          PDU Name
7   Application
6   Presentation
5   Session
4   Transport      Segments
3   Network        Packets
2   Data-link      Frames
1   Physical       Bits
Commonly, network devices are identified by the OSI layer they operate at (or, more specifically, what header or PDU the device processes). For example, switches are generally identified as Layer-2 devices, as switches process information stored in the Data-Link header of a frame (such as MAC addresses in Ethernet). Similarly, routers are identified as Layer-3 devices, as routers process logical addressing information in the Network header of a packet (such as IP addresses).

However, the strict definitions of the terms switch and router have blurred over time, which can result in confusion. For example, the term switch can now refer to devices that operate at layers higher than Layer-2. This will be explained in greater detail in this guide.
Icons for Network Devices

The following icons will be used to represent network devices for all guides on routeralley.com: Hub, Switch, Multilayer Switch, and Router.
Layer-1 Hubs

Hubs are Layer-1 devices that physically connect network devices together for communication. Hubs can also be referred to as repeaters.

Hubs provide no intelligent forwarding whatsoever. Hubs are incapable of processing either Layer-2 or Layer-3 information, and thus cannot make decisions based on hardware or logical addressing. Thus, hubs will always forward every frame out every port, excluding the port originating the frame. Hubs do not differentiate between frame types, and thus will always forward unicasts, multicasts, and broadcasts out every port but the originating port.

Ethernet hubs operate at half-duplex, which allows a device to either transmit or receive data, but not simultaneously. Ethernet utilizes Carrier Sense Multiple Access with Collision Detect (CSMA/CD) to control media access. Host devices monitor the physical link, and will only transmit a frame if the link is idle.

However, if two devices transmit a frame simultaneously, a collision will occur. If a collision is detected, the hub will discard the frames and signal the host devices. Both devices will wait a random amount of time before resending their respective frames.

Remember, if any two devices connected to a hub send a frame simultaneously, a collision will occur. Thus, all ports on a hub belong to the same collision domain. A collision domain is simply defined as any physical segment where a collision can occur. Multiple hubs that are uplinked together still all belong to one collision domain. Increasing the number of host devices in a single collision domain will increase the number of collisions, which can significantly degrade performance.

Hubs also belong to only one broadcast domain – a hub will forward both broadcasts and multicasts out every port but the originating port. A broadcast domain is a logical segmentation of a network, dictating how far a broadcast (or multicast) frame can propagate. Only a Layer-3 device, such as a router, can separate broadcast domains.
Layer-2 Switching

Layer-2 devices build hardware address tables, which will contain the following at a minimum:
• Hardware addresses for host devices
• The port each hardware address is associated with

Using this information, Layer-2 devices will make intelligent forwarding decisions based on frame (Data-Link) headers. A frame can then be forwarded out only the appropriate destination port, instead of all ports.

Layer-2 forwarding was originally referred to as bridging. Bridging is a largely deprecated term (mostly for marketing purposes), and Layer-2 forwarding is now commonly referred to as switching. There are some subtle technological differences between bridging and switching. Switches usually have a higher port-density, and can perform forwarding decisions at wire speed, due to specialized hardware circuits called ASICs (Application-Specific Integrated Circuits). Otherwise, bridges and switches are nearly identical in function.

Ethernet switches build MAC-address tables through a dynamic learning process. A switch behaves much like a hub when first powered on. The switch will flood every frame, including unicasts, out every port but the originating port. The switch will then build the MAC-address table by examining the source MAC address of each frame.

Consider the following example, in which ComputerA is connected to switch port fa0/10, and ComputerB is connected to switch port fa0/11:
When ComputerA sends a frame to ComputerB, the switch will add ComputerA’s MAC address to its table, associating it with port fa0/10. However, the switch will not learn ComputerB’s MAC address until ComputerB sends a frame to ComputerA, or to another device connected to the switch. Switches always learn from the source MAC address.
A switch is in a perpetual state of learning. However, as the MAC-address table becomes populated, the flooding of frames will decrease, allowing the switch to perform more efficient forwarding decisions.
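The learning-and-flooding behavior described above can be summarized in a few lines of Python. This sketch is illustrative only (not part of the original guide); the port names and MAC addresses are arbitrary.

```python
class LearningSwitch:
    """Minimal model of dynamic MAC-address learning."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                        # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # always learn from the source MAC
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # known destination: forward out one port
        # unknown destination (or broadcast): flood out every port but the source
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(["fa0/10", "fa0/11", "fa0/12"])
print(sw.receive("fa0/10", "aaaa.aaaa.aaaa", "bbbb.bbbb.bbbb"))  # flooded: B is still unknown
print(sw.receive("fa0/11", "bbbb.bbbb.bbbb", "aaaa.aaaa.aaaa"))  # ['fa0/10']: A was learned
```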
Layer-2 Switching (continued)

While hubs were limited to half-duplex communication, switches can operate in full duplex. Each individual port on a switch belongs to its own collision domain. Thus, switches create more collision domains, which results in fewer collisions.

Like hubs though, switches belong to only one broadcast domain. A Layer-2 switch will forward both broadcasts and multicasts out every port but the originating port. Only Layer-3 devices separate broadcast domains.

Because of this, Layer-2 switches are poorly suited for large, scalable networks. The Layer-2 header provides no mechanism to differentiate one network from another, only one host from another. This poses significant difficulties. If only hardware addressing existed, all devices would technically be on the same network. Modern internetworks like the Internet could not exist, as it would be impossible to separate my network from your network.

Imagine if the entire Internet existed purely as a Layer-2 switched environment. Switches, as a rule, will forward a broadcast out every port. Even with a conservative estimate of a billion devices on the Internet, the resulting broadcast storms would be devastating. The Internet would simply collapse.

Both hubs and switches are susceptible to switching loops, which result in destructive broadcast storms. Switches utilize the Spanning Tree Protocol (STP) to maintain a loop-free environment. STP is covered in great detail in another guide.

Remember, there are three things that switches do that hubs do not:
• Hardware address learning
• Intelligent forwarding of frames
• Loop avoidance

Hubs are almost entirely deprecated – there is no advantage to using a hub over a switch. At one time, switches were more expensive and introduced more latency (due to processing overhead) than hubs, but this is no longer the case.
Layer-2 Forwarding Methods

Switches support three methods of forwarding frames. Each method copies all or part of the frame into memory, providing different levels of latency and reliability. Latency is delay - less latency results in quicker forwarding.

The Store-and-Forward method copies the entire frame into memory, and performs a Cyclic Redundancy Check (CRC) to completely ensure the integrity of the frame. However, this level of error-checking introduces the highest latency of any of the switching methods.

The Cut-Through (Real Time) method copies only enough of a frame’s header to determine its destination address. This is generally the first 6 bytes following the preamble. This method allows frames to be transferred at wire speed, and has the least latency of any of the three methods. No error checking is attempted when using the cut-through method.

The Fragment-Free (Modified Cut-Through) method copies only the first 64 bytes of a frame for error-checking purposes. Most collisions or corruption occur in the first 64 bytes of a frame. Fragment-Free represents a compromise between reliability (store-and-forward) and speed (cut-through).
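A quick way to contrast the three methods is by how many bytes each one must buffer before forwarding. The Python sketch below is a simple illustration of the descriptions above, not part of the original guide.

```python
def bytes_buffered(method, frame_len):
    # How much of the frame each forwarding method copies before deciding.
    if method == "store-and-forward":
        return frame_len            # whole frame buffered; CRC verified
    if method == "cut-through":
        return 6                    # destination MAC only (first 6 bytes after the preamble)
    if method == "fragment-free":
        return min(64, frame_len)   # first 64 bytes, where most errors occur
    raise ValueError(method)

for method in ("store-and-forward", "cut-through", "fragment-free"):
    print(method, bytes_buffered(method, 1518))
```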
Layer-3 Routing

Layer-3 routing is the process of forwarding a packet from one network to another network, based on the Network-layer header. Routers build routing tables to perform forwarding decisions, which contain the following:
• The destination network and subnet mask
• The next hop router to get to the destination network
• Routing metrics and Administrative Distance

Note that Layer-3 forwarding is based on the destination network, and not the destination host. It is possible to have host routes, but this is less common.

The routing table is concerned with two types of Layer-3 protocols:
• Routed protocols – assign logical addressing to devices; their packets are routed between networks. Examples include IP and IPX.
• Routing protocols – dynamically build the information in routing tables. Examples include RIP, EIGRP, and OSPF.

Each individual interface on a router belongs to its own collision domain. Thus, like switches, routers create more collision domains, which results in fewer collisions.

Unlike Layer-2 switches, Layer-3 routers also separate broadcast domains. As a rule, a router will never forward broadcasts from one network to another network (unless, of course, you explicitly configure it to). ☺ Routers will not forward multicasts either, unless configured to participate in a multicast tree. Multicast is covered in great detail in another guide.

Traditionally, a router was required to copy each individual packet to its buffers, and perform a route-table lookup. Each packet consumed CPU cycles as it was forwarded by the router, resulting in latency. Thus, routing was generally considered slower than switching.

It is now possible for routers to cache network-layer flows in hardware, greatly reducing latency. This has blurred the line between routing and switching, from both a technological and marketing standpoint. Caching network flows is covered in greater detail shortly.
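Forwarding "based on the destination network" means the router chooses the most specific (longest) matching prefix in its table. A Python sketch of that lookup follows; it is an illustration only, not from the original guide, and the addresses and next hops are made up.

```python
import ipaddress

# destination network -> next hop (contents are hypothetical examples)
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
    ipaddress.ip_network("0.0.0.0/0"):   "192.168.1.254",   # default route
}

def lookup(destination):
    dst = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)       # longest prefix wins
    return routing_table[best]

print(lookup("10.1.2.3"))      # 192.168.1.2   (the /16 is more specific than the /8)
print(lookup("172.16.5.9"))    # 192.168.1.254 (falls through to the default route)
```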
Collision vs. Broadcast Domain Example

Consider the above diagram. Remember that:
• Routers separate broadcast and collision domains.
• Switches separate collision domains.
• Hubs belong to only one collision domain.
• Switches and hubs both only belong to one broadcast domain.

In the above example, there are THREE broadcast domains, and EIGHT collision domains.
VLANs – A Layer-2 or Layer-3 Function?

By default, a switch will forward both broadcasts and multicasts out every port but the originating port. However, a switch can be logically segmented into multiple broadcast domains, using Virtual LANs (or VLANs). VLANs are covered in extensive detail in another guide.

Each VLAN represents a unique broadcast domain:
• Traffic between devices within the same VLAN is switched (forwarded at Layer-2).
• Traffic between devices in different VLANs requires a Layer-3 device to communicate.

Broadcasts from one VLAN will not be forwarded to another VLAN. This separation provided by VLANs is not a Layer-3 function. VLAN tags are inserted into the Layer-2 header. Thus, a switch that supports VLANs is not necessarily a Layer-3 switch. However, a purely Layer-2 switch cannot route between VLANs.

Remember, though VLANs provide separation for Layer-3 broadcast domains, and are often associated with IP subnets, they are still a Layer-2 function.
Layer-3 Switching

In addition to performing Layer-2 switching functions, a Layer-3 switch must also meet the following criteria:
• The switch must be capable of making Layer-3 forwarding decisions (traditionally referred to as routing).
• The switch must cache network traffic flows, so that Layer-3 forwarding can occur in hardware.

Many older modular switches support Layer-3 route processors – this alone does not qualify as Layer-3 switching. Layer-2 and Layer-3 processors can act independently within a single switch chassis, with each packet requiring a route-table lookup on the route processor.

Layer-3 switches leverage ASICs to perform Layer-3 forwarding in hardware. For the first packet of a particular traffic flow, the Layer-3 switch will perform a standard route-table lookup. This flow is then cached in hardware – which preserves required routing information, such as the destination network and the MAC address of the corresponding next-hop. Subsequent packets of that flow will bypass the route-table lookup, and will be forwarded based on the cached information, reducing latency. This concept is known as route once, switch many.

Layer-3 switches are predominantly used to route between VLANs:
Traffic between devices within the same VLAN, such as ComputerA and ComputerB, is switched at Layer-2 as normal. The first packet between devices in different VLANs, such as ComputerA and ComputerD, is routed. The switch will then cache that IP traffic flow, and subsequent packets in that flow will be switched in hardware.
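The "route once, switch many" behavior can be sketched as a flow cache in Python. This is an illustration only, not from the original guide; in a real Layer-3 switch the cache lives in hardware ASICs, and the lookup function stands in for the actual routing table.

```python
flow_cache = {}    # (source IP, destination IP) -> cached forwarding result

def route_table_lookup(dst_ip):
    # Stand-in for a full route-table lookup; the returned values are hypothetical.
    return {"egress_vlan": 20, "next_hop_mac": "0000.0c12.3456"}

def forward(src_ip, dst_ip):
    key = (src_ip, dst_ip)
    if key not in flow_cache:
        flow_cache[key] = route_table_lookup(dst_ip)   # first packet of the flow is routed
        return "routed", flow_cache[key]
    return "switched from cache", flow_cache[key]      # later packets bypass the lookup

print(forward("10.1.1.10", "10.2.2.20")[0])   # routed
print(forward("10.1.1.10", "10.2.2.20")[0])   # switched from cache
```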
Layer-3 Switching vs. Routing – End the Confusion!

The evolution of network technologies has led to considerable confusion over the terms switch and router. Remember the following:
• The traditional definition of a switch is a device that performs Layer-2 forwarding decisions.
• The traditional definition of a router is a device that performs Layer-3 forwarding decisions.

Remember also that switching functions were typically performed in hardware, and routing functions were typically performed in software. This resulted in a widespread perception that switching was fast, and routing was slow (and expensive).

Once Layer-3 forwarding became available in hardware, marketing gurus muddied the waters by distancing themselves from the term router. Though Layer-3 forwarding in hardware is still routing in every technical sense, such devices were rebranded as Layer-3 switches. Ignore the marketing noise. A Layer-3 switch is still a router.

Compounding matters further, most devices still currently referred to as routers can perform Layer-3 forwarding in hardware as well. Thus, both Layer-3 switches and Layer-3 routers perform nearly identical functions at the same performance.

There are some differences in implementation between Layer-3 switches and routers, including (but not limited to):
• Layer-3 switches are optimized for Ethernet, and are predominantly used for inter-VLAN routing. Layer-3 switches can also provide Layer-2 functionality for intra-VLAN traffic.
• Switches generally have higher port densities than routers, and are considerably cheaper per port than routers (for Ethernet, at least).
• Routers support a large number of WAN technologies, while Layer-3 switches generally do not.
• Routers generally support more advanced feature sets.

Layer-3 switches are often deployed as the backbone of LAN or campus networks. Routers are predominantly used on network perimeters, connecting to WAN environments.

(Fantastic Reference: http://blog.ioshints.info/2011/02/how-did-we-ever-get-into-this-switching.html)
Multilayer Switching Multilayer switching is a generic term, referring to any switch that forwards traffic at layers higher than Layer-2. Thus, a Layer-3 switch is considered a multilayer switch, as it forwards frames at Layer-2 and packets at Layer-3. A Layer-4 switch provides the same functionality as a Layer-3 switch, but will additionally examine and cache Transport-layer application flow information, such as the TCP or UDP port. By caching application flows, QoS (Quality of Service) functions can be applied to preferred applications. Consider the following example:
Network and application traffic flows from ComputerA to the Webserver and Fileserver will be cached. If the traffic to the Webserver is preferred, then a higher QoS priority can be assigned to that application flow. Some advanced multilayer switches can provide load balancing, content management, and other application-level services. These switches are sometimes referred to as Layer-7 switches.
Section 3 - Switching Models Network Traffic Models When designing scalable, efficient networks, it is critical to consider how traffic “flows” through the network, rather than simply concentrating on the type of traffic. A traffic flow is a map of the path data takes to get from a source to a destination, and the type of data being transmitted. Originally, proper network design followed the 80/20 rule, which dictates that 80 percent of the traffic remains on the local network, and only 20 percent should be routed to another network. This allowed a majority of the traffic to be switched instead of routed, and thus latency was reduced. Servers and resources were thus placed close to the users that required them. However, the architecture of networks has been changing. Instead of placing “workgroup” servers in every local network, many organizations have centralized their resources. Internet web servers, email servers, and IP telephony are examples of this trend. Thus, a majority of traffic must be “routed” to a centralized network. This concept is identified as the 20/80 rule. Because routing introduces more latency than switching, the 20/80 rule has dictated a need for a faster Layer 3 technology, namely Layer 3 switching.
The Cisco Hierarchical Network Model Cisco developed a hierarchical model to serve as a guideline to proper network design. This model is separated into three layers:
• Access Layer – The Access Layer is where the end user connects into the network. Access Layer switches generally have a high number of low-cost ports per switch, and VLANs are usually configured at this Layer. In a distributed environment (80/20 rule), servers and other such resources are kept close to users in the Access Layer. • Distribution Layer – The Distribution Layer provides end users with access to the Core (backbone) Layer. Security (using access-lists) and QoS are usually configured at the Distribution Layer. • Core Layer – The Core Layer is the “backbone” of the network. The Core Layer is concerned with switching data quickly and efficiently between all other “layers” or “sections” of the network. In a centralized environment (20/80 rule), servers and other such resources are placed in their own “dedicated” Access Layer, and the Core Layer must switch traffic from all other Access Layers to this Server Block.
Example of the Cisco Hierarchical Network Model

[Diagram: a Core Block of two Core Multilayer Switches, with redundant links to the Distribution Multilayer Switches in each surrounding block – two "User" Switch Blocks (Access Workgroup Switches uplinked to Distribution Multilayer Switches), a Server Farm Block (servers behind Distribution Multilayer Switches), and an Enterprise Edge Block (an Internet Border Router connecting to the Internet).]
Cisco likes to break down network hierarchies into separate “blocks.” Notice that the Core Block, which connects all other blocks, has redundant links to all distribution layer switches. The Switch Block contains the Distribution and Access Layer switches that service end users. The Server Farm Block contains all network resources that end users need access to. The Enterprise Edge Block connects this Autonomous System to the Internet. The above is an example of a Dual Core design, where there is a clearly defined Core layer separated from the Distribution Layer. Network designs that do not require a separately defined Core layer can instead combine the functions of the Core and Distribution layers, in a Collapsed Core design.
Cisco Switching Products
Cisco offers a wide variety of Catalyst switches that fit within each Layer of the Cisco Hierarchical network model:

Access Layer Switches:
Model                                                  Max. Port Density                              Max. Backplane
Catalyst 2950                                          48 “10/100” ports                              13.6 Gbps
Catalyst 3550 (SMI)                                    48 “10/100” ports or 12 “10/100/1000” ports    24 Gbps
Catalyst 4000/4500 with Supervisor Engine III or IV    240 “10/100/1000” ports                        64 Gbps

Distribution and Core Layer Switches:
Model                                                  Max. Port Density                              Max. Backplane
Catalyst 3550 (EMI)                                    48 “10/100” ports or 12 “10/100/1000” ports    24 Gbps
Catalyst 6500                                          Over 500 “10/100/1000” ports                   256 Gbps
There are no hard rules that dictate that you must use a certain model of switch in a specific layer. The above tables are only guidelines. For example, if a network supports a large number of users in the Access Layer, it might be beneficial to use a Catalyst 6500 to support those users. A Supervisor Engine provides the software (usually the Cisco IOS) and processor to allow Cisco Catalyst switches to operate. The Supervisor Engine is the mechanism that allows multilayer switching to occur. The Cisco Catalyst 3550 has two specific software “images,” SMI (Standard MultiLayer Image) and EMI (Enhanced MultiLayer Image). The EMI software provides support for Layer 3 routing protocols, such as OSPF and EIGRP.
Section 4 - Switching Tables

The Layer 2 Switching “Process”
Layer 2 switches contain queues where frames are stored after they are received and before they are sent. When a Layer 2 switch receives a frame on a port, it places that frame in one of the port’s ingress queues. When the switch decides which port that frame should be sent out of, it places the frame in that port’s egress queue. If the destination MAC address in the frame is not in the MAC address table, the frame is placed in the egress queue of all ports (except the port it was received on) and is flooded throughout the network.
Each port can be configured with multiple ingress or egress queues. Using Quality of Service (QoS), each queue can be assigned a different priority. Thus, we can give a higher preference to more critical traffic, such as video conferencing, by placing that traffic in a high priority queue.
Before a Layer 2 switch can take a frame from one port’s ingress queue to another port’s egress queue, it must consult two tables:
• Content Addressable Memory (CAM), which is Cisco’s term for the MAC address table. It can also be referred to as the Layer 2 Forwarding Table.
• Ternary Content Addressable Memory (TCAM), which contains access lists that can filter frames by MAC address, and QoS access-lists to prioritize traffic. In multi-layer switches, the TCAM also contains access lists to filter frames based on IP address or TCP/UDP port.
Both the CAM and TCAM are stored in RAM, so that information lookup is quick. Throughout the rest of this guide, the MAC address table will be referred to as the CAM.
Content Addressable Memory (CAM)
As stated before, Cisco refers to a Catalyst switch’s MAC address table as Content Addressable Memory (CAM). Remember that switches only place the source MAC address of a frame in the CAM. Additionally, the CAM stores which port and VLAN the frame was received from.
By default, dynamically learned MAC addresses are stored for 300 seconds in the CAM. After 300 seconds, if no activity is received from that MAC address, its entry is removed from the CAM. MAC address entries can also be statically entered into the CAM.
The following is a sample output of the CAM, using the command:

Switch# show mac address-table dynamic

Destination Address    Address Type    VLAN    Destination Port
-------------------    ------------    ----    ----------------
0000.001e.2a52         Dynamic         1       FA1/1
0000.001e.345e         Dynamic         1       FA1/1
0000.001e.bb3a         Dynamic         1       FA1/1
0000.001e.eba3         Dynamic         1       FA1/2
0000.001e.face         Dynamic         1       FA1/3
0000.001e.3519         Dynamic         1       FA1/4
0000.001e.2dc1         Dynamic         1       FA1/5
0000.001e.8465         Dynamic         1       FA1/5
0000.001e.1532         Dynamic         1       FA1/5
0000.001e.8ab2         Dynamic         1       FA1/6
0000.001e.15b1         Dynamic         1       FA1/6
0000.005a.1b01         Dynamic         1       FA1/6
0000.005a.4214         Dynamic         1       FA1/7
0000.005a.5129         Dynamic         1       FA1/8
0000.00cc.bbe2         Dynamic         1       FA1/9
0000.00cc.2291         Dynamic         1       FA1/10
Don’t be confused that the columns are labeled “destination” address and “destination” port. The MAC address is always learned from the source MAC. However, once the address is learned, that address is used as a possible “destination” address for any new frames the switch receives.
Configuring the CAM To change the aging timer for dynamically learned MAC addresses in the CAM from its default of 300 seconds to 360 seconds: Switch(config)# mac address-table aging-time 360
To statically add to the CAM a MAC address of 0011.2233.4455, which resides on Port FA0/0 on VLAN 1: Switch(config)# mac address-table static 0011.2233.4455 vlan 1 interface fa0/0
Please note, in earlier versions of the Cisco IOS (prior to 12.1), the command syntax for the above commands contained an additional hyphen between “mac” and “address”: Switch(config)# mac-address-table aging-time 360 Switch(config)# mac-address-table static 0011.2233.4455 vlan 1 interface fa0/0
To view all dynamic MAC entries in the CAM: Switch# show mac address-table dynamic
To view a specific dynamic address in the CAM: Switch# show mac address-table dynamic address 1234.5678.90ab
To view the number of MAC addresses per VLAN: Switch# show mac address-table count
To clear the entire dynamic contents of the CAM: Switch# clear mac address-table dynamic
To clear a single entry of the CAM: Switch# clear mac address-table dynamic 1234.5678.90ab
Ternary Content Addressable Memory (TCAM)
The TCAM integrates access lists into its table, allowing filtering to occur on the fly. On multi-layer switches, the TCAM can filter not only MAC addresses, but also IP addresses and TCP/UDP ports. Additionally, QoS access lists can be integrated into the TCAM to prioritize traffic.
The TCAM consists of two components:
• Feature Manager (FM) – integrates access lists into the TCAM
• Switching Database Manager (SDM) – maintains TCAM partitions
Multiple TCAMs can exist on a single switch. For example, there are TCAMs for inbound traffic, outbound traffic, and for QoS information.
The TCAM table is more complex than the CAM. The CAM is a flat table containing only MAC address, VLAN, and port information. Entries in the TCAM table contain three parameters:
• Values – the addresses or ports that must be matched
• Masks – dictate how much of the address to match
• Result – the action to take when a match occurs
For example, if we created the following access list:

access-list 150 permit tcp 172.16.0.0 0.0.255.255 host 172.17.1.1 eq 23
access-list 150 deny tcp 172.16.0.0 0.0.255.255 host 172.17.1.1 eq 80
The Feature Manager (FM) will automatically integrate the access-lists into the TCAM. Configuring the TCAM consists solely of creating the necessary access-lists.
The values are the source network of 172.16.0.0 and the destination host of 172.17.1.1. The masks in this case are 0.0.255.255 for the 172.16.0.0 source network, dictating that the last two octets can be anything. A mask of 0.0.0.0 is given to the destination host 172.17.1.1, indicating it must be an exact match. The result in this case is either permit or deny. However, other results are possible when using QoS access-lists, which are more concerned with prioritizing traffic than filtering it.
________________________________________________
Part II Switch Configuration ________________________________________________
Section 5 - Basic Switch Management

Catalyst Operating Systems
Catalyst switches, depending on the model, support one of two possible operating systems:
• Catalyst OS (CatOS)
• IOS
The CatOS is an antiquated interface based on “set” commands. Retired Catalyst models such as the 40xx and 50xx series supported the CatOS interface.
Modern Catalyst switches support the Cisco IOS, enhanced with switching-specific commands. Catalyst models that support the Cisco IOS include:
• 29xx series
• 35xx series
• 37xx series
• 45xx series
• 49xx series
• 65xx series
The Cisco IOS interface on Catalyst switches is nearly identical to that of the router IOS (with the exception of the switching-specific commands). The IOS is covered in great detail in other guides on this site, specifically:
• Router Components
• Introduction to the Cisco IOS
• Advanced IOS Functions
Some basic IOS concepts will be reviewed in this guide. For more comprehensive information, please consult the above guides.
Using Lines to Configure the IOS
Three methods (or lines) exist to configure Cisco IOS devices (including Catalyst switches):
• Console ports
• Auxiliary ports
• VTY (telnet) ports
Nearly every modern Cisco router or switch includes a console port, sometimes labeled on the device simply as con. The console port is generally an RJ-45 connector, and requires a rollover cable to connect to. The opposite side of the rollover cable connects to a PC’s serial port using a serial terminal adapter.
From the PC, software such as HyperTerminal is required to make a connection from the local serial port to the device’s console port. The following settings are necessary for a successful connection:
• Bits per second - 9600 baud
• Data bits - 8
• Parity - None
• Stop bits - 1
• Flow Control - Hardware
Some Cisco devices include an auxiliary port, in addition to the console port. The auxiliary port can function similarly to a console port, and can be accessed using a rollover cable. Additionally, auxiliary ports support modem commands, thus providing dial-in access to Cisco devices.
Telnet, and now SSH, are the most common methods of remote access to routers and switches. The standard edition of the IOS supports up to 5 simultaneous VTY connections. Enterprise editions of the IOS support up to 255 VTY connections.
There are two requirements before a Catalyst switch will accept a VTY connection:
• An IP address must be configured on the Management VLAN (by default, this is VLAN 1)
• At least one VTY port must be configured with a password
IOS Modes on Cisco Catalyst Switches The Cisco IOS is comprised of several modes, each of which contains a set of commands specific to the function of that mode. By default, the first mode you enter when logging into a Cisco device is User EXEC mode. User mode appends a “>” after the device hostname: Switch>
No configuration can be changed or viewed from User mode. Only basic status information can be viewed from this mode. Privileged EXEC mode allows all configuration files, settings, and status information to be viewed. Privileged mode appends a “#” after the device hostname: Switch#
To enter Privileged mode, type enable from User mode: Switch> enable Switch#
To return back to User mode from Privileged mode, type disable: Switch# disable Switch>
Very little configuration can be changed directly from Privileged mode. Instead, to actually configure the Cisco device, one must enter Global Configuration mode: Switch(config)#
To enter Global Configuration mode, type configure terminal from Privileged Mode: Switch# configure terminal Switch(config)#
To return back to Privileged mode, type exit: Switch(config)# exit Switch#
IOS Modes on Cisco Catalyst Switches (continued)
As its name implies, Global Configuration mode allows parameters that globally affect the device to be changed. Additionally, Global Configuration mode is sectioned into several sub-modes dedicated for specific functions. Among the most common sub-modes are the following:
• Interface Configuration mode - Switch(config-if)#
• Line Configuration mode - Switch(config-line)#
Recall the difference between interfaces and lines. Interfaces connect routers and switches to each other. In other words, traffic is actually routed or switched across interfaces. Examples of interfaces include Serial, ATM, Ethernet, Fast Ethernet, and Token Ring.
To configure an interface, one must specify both the type of interface, and the interface number (which always begins at “0”). Thus, to configure the first Ethernet interface on a switch:

Switch(config)# interface ethernet 0
Switch(config-if)#
Lines identify ports that allow us to connect into, and then configure, Cisco devices. Examples would include console ports, auxiliary ports, and VTY (or telnet) ports. Just like interfaces, to configure a line, one must specify both the type of line, and the line number (again, always begins at “0”). Thus, to configure the first console line on a switch: Switch(config)# line console 0 Switch(config-line)#
Multiple telnet lines can be configured simultaneously. To configure the first sixteen telnet (or VTY) lines on a switch: Switch(config)# line vty 0 15 Switch(config-line)#
Notice that Catalyst switches natively support up to 16 VTY connections. A Cisco router running the standard IOS supports up to 5 VTY connections. Remember that the numbering for both interfaces and lines begins with “0.”
Enable Passwords The enable password protects a switch’s Privileged mode. This password can be set or changed from Global Configuration mode: Switch(config)# enable password MYPASSWORD Switch(config)# enable secret MYPASSWORD2
The enable password command sets an unencrypted password intended for legacy systems that do not support encryption. It is no longer widely used. The enable secret command sets an MD5-hashed password, and thus is far more secure.
The enable password and enable secret passwords cannot be identical. The switch will not accept identical passwords for these two commands.

Line Passwords and Configuration
Passwords can additionally be configured on switch lines, such as telnet (vty), console, and auxiliary ports. To change the password for a console port and all telnet ports:

Switch(config)# line console 0
Switch(config-line)# login
Switch(config-line)# password cisco1234
Switch(config-line)# exec-timeout 0 0
Switch(config-line)# logging synchronous

Switch(config)# line vty 0 15
Switch(config-line)# login
Switch(config-line)# password cisco1234
Switch(config-line)# exec-timeout 0 0
Switch(config-line)# logging synchronous
The exec-timeout 0 0 command is optional, and disables the automatic timeout of your connection. The two zeroes represent the timeout value in minutes and seconds, respectively. Thus, to set a timeout for 2 minutes and 30 seconds: Switch(config-line)# exec-timeout 2 30
The logging synchronous command is also optional, and prevents system messages from interrupting your command prompt.
By default, line passwords are stored in clear-text in configuration files. To ensure these passwords are encrypted in all configuration files:

Switch(config)# service password-encryption
Catalyst Configuration Files
Like Cisco routers, Catalyst switches employ a startup-config file (stored in NVRAM) and a running-config (stored in RAM). The startup-config is the saved configuration used when the device boots, and the running-config is the currently active configuration.
Any configuration change made to an IOS device is made to the running-config. Because the running-config file is stored in RAM, the contents of this file will be lost during a power-cycle. To save the contents of the running-config to the startup-config file:

Switch# copy run start
Catalyst switches additionally employ the following configuration and diagnostic files, all stored in Flash memory:
• vlan.dat
• system_env_vars
• crashinfo
The vlan.dat file contains a list of all created VLANs, and includes any VTP specific information. The vlan.dat file does not contain information on interface-to-VLAN assignments (which is stored in the startup-config).
The system_env_vars file contains environmental information specific to the Catalyst switch, including serial/model numbers and MAC addresses.
The crashinfo file contains memory-dump information about previous switch failures.
To delete all files in flash:

Switch# erase flash:
To delete a specific file in flash: Switch# erase flash:FILENAME
To format the entire flash filesystem, erasing all of its contents: Switch# format flash:
To copy an IOS image file from a TFTP server to flash: Switch# copy tftp: flash:FILENAME
Configuring Telnet Access on Catalyst Switches Recall the two requirements to configure a Catalyst switch for VTY access: • An IP address must be configured on the Management VLAN (by default, this is VLAN 1) • At least one VTY port must be configured with a password. Configuring passwords on VTY lines was covered previously: Switch(config)# line vty 0 15 Switch(config-line)# login Switch(config-line)# password cisco1234
To assign an IP address to the Management VLAN: Switch(config)# interface vlan 1 Switch(config-if)# ip address 192.168.123.151 255.255.255.0 Switch(config-if)# no shut
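If the management station resides on a different subnet than the switch, a default gateway is also typically required on a Layer-2 switch. The address below is hypothetical, and should point at the local router for the Management VLAN:

Switch(config)# ip default-gateway 192.168.123.1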
Section 6 - Switch Port Configuration Switch Port Configuration To enter interface configuration mode for interface Fast Ethernet 0/10: Switch(config)# interface fa0/10
Multiple individual ports can be configured simultaneously: Switch(config)# interface range fa0/10 , fa0/12 , fa0/14
The above command selects ports fa0/10, fa0/12, and fa0/14. Please note the space on either side of the commas. A contiguous range of interfaces can be specified: Switch(config)# interface range fa0/10 - 15
The above command selects ports fa0/10 through fa0/15. Please note the space on either side of the dash. Macros can be created for groups of ports that are configured often: Switch(config)# define interface-range MACRONAME fa0/10 – 15 Switch(config)# interface range macro MACRONAME
The first command creates a macro, or “group,” of interfaces called MACRONAME. The second command actually selects those interfaces for configuration. For documentation purposes, we can apply descriptions on interfaces: Switch(config)# interface fa0/0 Switch(config-if)# description DESCRIPTIONTEXT
To view the status of an interface (example, Fast Ethernet 0/10): Switch# show interface fa0/10
This will also display duplex, speed, and packet errors on this particular interface. To view the errdisable state (explained shortly) of an interface: Switch# show interface status err-disabled
Switch Port Configuration – Speed and Duplex To specify the port speed of an interface: Switch(config)# interface fa0/10 Switch(config-if)# speed 10 Switch(config-if)# speed 100 Switch(config-if)# speed 1000 Switch(config-if)# speed auto
To specify the duplex of an interface: Switch(config)# interface fa0/10 Switch(config-if)# duplex half Switch(config-if)# duplex full Switch(config-if)# duplex auto
Port Error Conditions
Catalyst switches can detect error conditions on a port, and if necessary automatically disable that port. When a port is disabled due to an error, the port is considered to be in errdisable state. The following events can put a port into errdisable state:
• bpduguard – when a port configured for STP Portfast and BPDU Guard receives a BPDU
• dtp-flap – when trunking encapsulation (ISL or 802.1Q) is “flapping”
• link-flap – when a port is flapping between an “up” and “down” state
• pagp-flap – when EtherChannel ports become inconsistently configured
• rootguard – when a non-designated port receives a BPDU from a root bridge
• udld – when data appears to be only sent in one direction
To enable all possible error conditions:

Switch(config)# errdisable detect cause all
To enable a specific error condition: Switch(config)# errdisable detect cause link-flap
Port Error Conditions (continued) To take a port out of errdisable state: Switch(config)# interface fa0/10 Switch(config-if)# shut Switch(config-if)# no shut
To allow switch ports to automatically recover from an errdisable state: Switch(config)# errdisable recovery cause all Switch(config)# errdisable recovery interval 250
The last line specifies the duration a port will remain in errdisable before recovering. The default is 300 seconds.
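The errdisable detection and recovery settings can be verified with the following commands; the exact output varies by platform and IOS version, but should list each error condition, whether detection or recovery is enabled, and the recovery timer:

Switch# show errdisable detect
Switch# show errdisable recovery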
________________________________________________
Part III Switching Protocols and Functions ________________________________________________
Section 7 - VLANs and VTP Review of Collision vs. Broadcast Domains In a previous guide, it was explained that a “collision domain” is a segment where a collision can occur, and that a Layer-2 switch running in Full Duplex breaks up collision domains. Thus, Layer-2 switches create more collision domains, which results in fewer collisions. However, Layer-2 switches do not break up broadcast domains, and thus belong to only one broadcast domain. Layer-2 switches will forward a broadcast or multicast out every port, excluding the port the broadcast or multicast originated from. Only Layer-3 devices can break apart broadcast domains. Because of this, Layer-2 switches are not well suited for large, scalable networks. Layer-2 switches make forwarding decisions solely based on Data-Link layer MAC addresses, and thus have no way of differentiating between one network and another.
Virtual LANs (VLANs) Virtual LANs (or VLANs) separate a Layer-2 switch into multiple broadcast domains. Each VLAN is its own individual broadcast domain (i.e. IP subnet). Individual ports or groups of ports can be assigned to a specific VLAN. Only ports belonging to the same VLAN can freely communicate; ports assigned to separate VLANs require a router to communicate. Broadcasts from one VLAN will never be sent out ports belonging to another VLAN. Please note: a Layer-2 switch that supports VLANs is not necessarily a Layer-3 switch. A Layer-3 switch, in addition to supporting VLANs, must also be capable of routing, and caching IP traffic flows. Layer-3 switches allow IP packets to be switched as opposed to routed, which reduces latency.
VLAN Example Consider the following example:
Four computers are connected to a Layer-2 switch that supports VLANs. Computers A and B belong to VLAN 1, and Computers C and D belong to VLAN 2. Because Computers A and B belong to the same VLAN, they belong to the same IP subnet and broadcast domain. They will be able to communicate without the need of a router. Computers C and D likewise belong to the same VLAN and IP subnet. They also can communicate without a router. However, Computers A and B will not be able to communicate with Computers C and D, as they belong to separate VLANs, and thus separate IP subnets. Broadcasts from VLAN 1 will never go out ports configured for VLAN 2. A router will be necessary for both VLANs to communicate. Most Catalyst multi-layer switches have integrated or modular routing processors. Otherwise, an external router is required for inter-VLAN communication. By default on Cisco Catalyst switches, all interfaces belong to VLAN 1. VLAN 1 is considered the Management VLAN (by default).
Advantages of VLANs VLANs provide the following advantages: Broadcast Control – In a pure Layer-2 environment, broadcasts are received by every host on the switched network. In contrast, each VLAN belongs to its own broadcast domain (or IP subnet); thus broadcast traffic from one VLAN will never reach another VLAN. Security – VLANs allow administrators to “logically” separate users and departments. Flexibility and Scalability – VLANs remove the physical boundaries of a network. Users and devices can be added or moved anywhere on the physical network, and yet remain assigned to the same VLAN. Thus, access to resources will never be interrupted.
VLAN Membership
VLAN membership can be configured in one of two ways:
• Statically – individual (or groups of) switch ports must be manually assigned to a VLAN. Any device connecting to those switch ports becomes a member of that VLAN. This is a transparent process – the client device is unaware that it belongs to a specific VLAN.
• Dynamically – devices are automatically assigned to a VLAN based on their MAC addresses. This allows a client device to remain in the same VLAN, regardless of which switch port the device is attached to. Cisco developed a dynamic VLAN product called the VLAN Membership Policy Server (VMPS). In more sophisticated systems, a user’s network account can be used to determine VLAN membership, instead of a device’s MAC address.
Catalyst switches that participate in a VTP domain (explained shortly) support up to 1005 VLANs. Catalyst switches configured in VTP transparent mode support up to 4094 VLANs.
Static VLAN Configuration The first step in configuring VLANs is to create the VLAN: Switch(config)# vlan 100 Switch(config-vlan)# name MY_VLAN
The first command creates VLAN 100, and enters VLAN configuration mode. The second command assigns the name MY_VLAN to this VLAN. Naming a VLAN is not required. The list of VLANs is stored in Flash in a database file named vlan.dat. However, information concerning which local interfaces are assigned to a specific VLAN is not stored in this file; this information is instead stored in the startup-config file of each switch. Next, an interface (or range of interfaces) must be assigned to this VLAN. The following commands will assign interface fa0/10 into the newly created MY_VLAN. Switch(config)# interface fa0/10 Switch(config-if)# switchport mode access Switch(config-if)# switchport access vlan 100
The first command enters interface configuration mode. The second command indicates that this is an access port, as opposed to a trunk port (explained in detail shortly). The third command assigns this access port to VLAN 100. Note that the VLAN number is specified, and not the VLAN name. To view the list of VLANs, including which ports are assigned to each VLAN: Switch# show vlan VLAN ---1 100 1002 1003 1004 1005
Name -------------------------default MY_VLAN fddi-default token-ring-default fddinet-default trnet-default
Status --------active active suspended suspended suspended suspended
Ports ----------fa0/1-9,11-24 fa0/10
VLAN Port “Types” There are two types of ports supported on a VLAN-enabled switch, access ports and trunk ports. An access port belongs to only one VLAN. Host devices, such as computers and printers, plug into access ports. A host automatically becomes a member of its access port’s VLAN. This is done transparently, and the host is usually unaware of the VLAN infrastructure. By default, all switch ports are access ports. VLANs can span multiple switches. There are two methods of connecting these VLANs together. The first requires creating “uplink” access ports between all switches, for each VLAN. Obviously, in large switching and VLAN environments, this quickly becomes unfeasible. A better alternative is to use trunk ports. Trunk ports do not belong to a single VLAN. Any or all VLANs can traverse trunk links to reach other switches. Only Fast or Gigabit Ethernet ports can be used as trunk links. The following diagram illustrates the advantage of using trunk ports, as opposed to uplinking access ports:
[Diagram: two switches, each with ports in VLANs A, B, and C – connected either by a separate uplink access port per VLAN, or by a single trunk link carrying VLANs A, B, and C.]
VLAN Frame-Tagging When utilizing trunk links, switches need a mechanism to identify which VLAN a particular frame belongs to. Frame tagging places a VLAN ID in each frame, identifying which VLAN the frame belongs to. Tagging occurs only when a frame is sent out a trunk port. Consider the following example:
If Computer 1 sends a frame to Computer 2, no frame tagging will occur. The frame never leaves Switch 1, stays within its own VLAN, and will simply be switched to Computer 2.
If Computer 1 sends a frame to Computer 3, which is in a separate VLAN, frame tagging will still not occur. Again, the frame never leaves the switch, but because Computer 3 is in a different VLAN, the frame must be routed.
If Computer 1 sends a frame to Computer 5, the frame must be tagged before it is sent out the trunk port. It is stamped with its VLAN ID (in this case, VLAN A), and when Switch 2 receives the frame, it will only forward it out ports belonging to VLAN A (fa0/0 and fa0/1). If Switch 2 has Computer 5’s MAC address in its CAM table, it will only send it out the appropriate port (fa0/0).
Cisco switches support two frame-tagging protocols, Inter-Switch Link (ISL) and IEEE 802.1Q.
Inter-Switch Link (ISL)
ISL is Cisco’s proprietary frame-tagging protocol, and supports Ethernet, Token Ring, FDDI, and ATM frames. ISL encapsulates a frame with an additional header (26 bytes) and trailer (4 bytes), increasing the size of an Ethernet frame by up to 30 bytes. The header contains the 10-bit VLAN ID. The trailer contains an additional 4-byte CRC for data-integrity purposes.
Because ISL increases the size of a frame, non-ISL devices (i.e. non-Cisco devices) will actually drop ISL-tagged frames. Many devices are configured with a maximum acceptable size for Ethernet frames (usually 1514 or 1518 bytes). ISL frames can be as large as 1544 bytes; thus, non-ISL devices will see these packets as giants (or corrupted packets).
ISL has been deprecated over time. Newer Catalyst models may not support ISL tagging.
IEEE 802.1Q
IEEE 802.1Q, otherwise known as DOT1Q, is the standardized frame-tagging protocol supported by most switch manufacturers, including Cisco. Thus, switches from multiple vendors can be trunked together.
Instead of adding an additional header and trailer, 802.1Q embeds a 4-byte tag, carrying a 12-bit VLAN ID, directly into the Layer-2 frame header. This still increases the size of a frame from its usual 1514 bytes to 1518 bytes (or from 1518 bytes to 1522 bytes). However, most modern switches support 802.1Q tagging and the slight increase in frame size.
Neither ISL nor 802.1Q tagging alter the source or destination address in the Layer-2 header.
Manual vs. Dynamic Trunking ISL or 802.1Q tagging can be manually configured on Catalyst trunk ports. Catalyst switches can also dynamically negotiate this using Cisco’s proprietary Dynamic Trunking Protocol (DTP).
Configuring Trunk Links To manually configure a trunk port, for either ISL or 802.1Q tagging: Switch(config)# interface fa0/24 Switch(config-if)# switchport trunk encapsulation isl Switch(config-if)# switchport mode trunk Switch(config)# interface fa0/24 Switch(config-if)# switchport trunk encapsulation dot1q Switch(config-if)# switchport mode trunk
The first line in each set of commands enters interface configuration mode. The second line manually sets the tagging (or encapsulation) protocol the trunk link will use. Always remember, both sides of the trunk link must be configured with the same tagging protocol. The third line manually sets the switchport mode to a trunk port. The Catalyst switch can negotiate the tagging protocol: Switch(config)# interface fa0/24 Switch(config-if)# switchport trunk encapsulation negotiate
Whichever tagging protocol is supported on both switches will be used. If the switches support both ISL and 802.1Q, ISL will be selected. By default, trunk ports allow all VLANs to traverse the trunk link. However, a list of allowed VLANs can be configured on each trunk port: Switch(config)# interface fa0/24 Switch(config-if)# switchport trunk allowed vlan remove 50-100 Switch(config-if)# switchport trunk allowed vlan add 60-65
The first switchport command will prevent the trunk port from passing traffic from VLANs 50-100. The second switchport command will re-allow the trunk port to pass traffic from VLANs 60-65. In both cases, the switchport trunk allowed commands are adding/subtracting from the current list of allowed VLANs, and not replacing that list. To instead replace the allowed list outright - first allowing all VLANs, and then allowing everything except VLANs 2-99: Switch(config)# interface fa0/24 Switch(config-if)# switchport trunk allowed vlan all Switch(config-if)# switchport trunk allowed vlan except 2-99
Certain VLANs are reserved and cannot be removed from a trunk link, including VLAN 1 and system VLANs 1002-1005.
Native VLANs A native VLAN can also be configured on trunk ports: Switch(config)# interface fa0/24 Switch(config-if)# switchport mode trunk Switch(config-if)# switchport trunk native vlan 42
Frames from the native VLAN are not tagged when sent out trunk ports. A trunking interface can only be assigned one native VLAN. Only 802.1Q supports native VLANs, whereas ISL does not. (More accurately, ISL will tag frames from all VLANs, even if a VLAN is configured as native). The native VLAN should be configured identically on both sides of the 802.1Q trunk. Native VLANs are often configured when plugging Cisco VoIP phones into a Catalyst Switch (beyond the scope of this section). Native VLANs are also useful if a trunk port fails. For example, if an end user connects a computer into a trunk port, the trunking status will fail and the interface will essentially become an access port. The user’s computer will then be transparently joined to the Native VLAN. Native VLANs provide another benefit. A trunk port will accept untagged frames and place them in the Native VLAN. Consider the following example:
Assume that both 802.1Q switches have trunk links configured to the non-802.1Q switch, and that the trunk ports are configured in Native VLAN 42. Not only will the 802.1Q switches be able to communicate with each other, the non-802.1Q switch will be placed in Native VLAN 42, and be able to communicate with any device in VLAN 42 on any switch. (Please note that the author of this study guide finds the “benefit” of the above example of Native VLANs to be dubious at best, and confusing as hell at worst). By default on all trunking interfaces, the Native VLAN is VLAN 1.
Dynamic Trunking Protocol (DTP) Configuration Not only can the frame tagging protocol of a trunk port be auto-negotiated, but whether a port actually becomes a trunk can be negotiated dynamically as well using the Dynamic Trunking Protocol (DTP). To manually set a port to be a trunk: Switch(config)# interface fa0/24 Switch(config-if)# switchport mode trunk
To allow a port to dynamically decide whether to become a trunk, there are two options: Switch(config)# interface fa0/24 Switch(config-if)# switchport mode dynamic desirable Switch(config)# interface fa0/24 Switch(config-if)# switchport mode dynamic auto
If a switchport is set to dynamic desirable (the default dynamic setting), the interface will actively attempt to form a trunk with the remote switch. If a switchport is set to dynamic auto, the interface will passively wait for the remote switch to initiate the trunk. This results in the following:
• If both ports are manually set to trunk - a trunk will form.
• If one port is set to dynamic desirable, and the other is set to manual trunk, dynamic desirable, or dynamic auto - a trunk will form.
• If one port is set to dynamic auto, and the other port is set to manual trunk or dynamic desirable - a trunk will form.
• If both ports are set to dynamic auto, the link will never become a trunk, as both ports are waiting for the other to initiate the trunk.
Trunk ports send out DTP frames every 30 seconds to indicate their configured mode. In general, it is best to manually specify the trunk mode, and disable DTP using the switchport nonegotiate command:

Switch(config)# interface fa0/24
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport nonegotiate
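On platforms that support it, the DTP state of an interface can be checked with the following command, which should indicate the port’s configured and operational trunking behavior:

Switch# show dtp interface fa0/24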
Troubleshooting Trunks
When troubleshooting a misbehaving trunk link, ensure that the following is configured identically on both sides of the trunk:
• Mode - both sides must be set to trunk or dynamically negotiated
• Frame-tagging protocol - ISL, 802.1Q, or dynamically negotiated
• Native VLAN
• VTP Domain
• Allowed VLANs
If the above parameters are not set identically on both sides, the trunk link will never become active. To view whether a port is an access or trunk port (such as fa0/24):

Switch# show interface fa0/24 switchport

Name: Fa0/24
Switchport: Enabled
Administrative Mode: trunk
Operational Mode: trunk
Administrative Trunking Encapsulation: dot1q
Operational Trunking Encapsulation: dot1q
Negotiation of Trunking: On
Access Mode VLAN: 1 (default)
Trunking Native Mode VLAN: 42
<snip>
To view the status of all trunk links:

Switch# show interface trunk

Port      Mode    Encapsulation    Status      Native VLAN
Fa0/24    on      802.1q           trunking    42

Port      Vlans allowed on trunk
Fa0/24    1,100-4094

Port      Vlans allowed and active in management domain
Fa0/24    1,100

Port      Vlans in spanning tree forwarding state and not pruned
Fa0/24    1,100
If no interfaces are in a trunking state, the show interface trunk command will return no output.
VLAN Trunking Protocol (VTP)
In large switching environments, it can become difficult to maintain a consistent VLAN database across all switches on the network. The Cisco-proprietary VLAN Trunking Protocol (VTP) allows the VLAN database to be easily managed throughout the network.
Switches configured with VTP are joined to a VTP domain. Only switches belonging to the same domain will share VLAN information, and a switch can only belong to a single domain.
When an update is made to the VLAN database, this information is propagated to all switches via VTP advertisements. By default, VTP updates are sent out every 300 seconds, or anytime a change to the database occurs. VTP updates are sent across VLAN 1, and are only sent out trunk ports.
There are three versions of VTP. The key additions provided by VTP Version 2 are support for Token Ring and Consistency Checks. VTP Version 1 is the default on Catalyst switches, and is not compatible with VTP Version 2.
Cisco describes VTP Version 3 as such: “VTP version 3 differs from earlier VTP versions in that it does not directly handle VLANs. VTP version 3 is a protocol that is only responsible for distributing a list of opaque databases over an administrative domain.” (If you are confused, don’t be alarmed. The author of this guide is not certain what that means either).
Cisco further defines the enhancements that VTP version 3 provides:
• Support for extended VLANs
• Support for the creation and advertising of private VLANs
• Support for VLAN instances and MST mapping propagation instances
• Improved server authentication
• Protection from the “wrong” database accidentally being inserted into a VTP domain
• Interaction with VTP version 1 and VTP version 2
• Ability to be configured on a per-port basis
(Reference: http://www.cisco.com/en/US/tech/tk389/tk689/technologies_tech_note09186a0080094c52.shtml, http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/catos/8.x/configuration/guide/vtp.html#wp1017196)
VTP Modes

VTP-enabled switches can operate in one of three modes:
• Server
• Client
• Transparent

Only VTP Servers can create, modify or delete entries in the shared VLAN database. Servers advertise their VLAN database to all other switches on the network, including other VTP servers. This is the default mode for Cisco Catalyst switches. VTP servers can only advertise VLANs 1 - 1005.

VTP Clients cannot make modifications to the VLAN database, and will receive all of their VLAN information from VTP servers. A client will also forward an update from a server to other clients out its trunk port(s). Remember, VTP switches must be in the same VTP Domain to share/accept updates to the VLAN database.

A VTP Transparent switch maintains its own separate VLAN database, and will neither advertise nor accept any VLAN database information from other switches, even a server. However, transparent switches will forward VTP updates from servers to clients, thus acting as a pass-through. Transparent switches handle this pass-through differently depending on the VTP version:
• VTP Version 1 – the transparent switch will only pass updates from the same VTP domain.
• VTP Version 2 – the transparent switch will pass updates from any VTP domain.

As a best practice, a new switch should be configured as a VTP client in the VTP domain, and have its configuration revision number (described in the next section) set back to zero before being installed into a production network. There is a specific reason for this: if by some circumstance a new switch’s configuration revision number is higher than that of the existing production switches, a new VTP switch could conceivably advertise a blank or incorrect VLAN database to all other switches. This could result in a significant network outage.
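As a rough illustration of this best practice (the domain name below is only a placeholder, not a value from this guide), a new switch could be staged as a client before it is cabled into the production network:

Switch(config)# vtp domain MYDOMAIN
Switch(config)# vtp mode client

The configuration revision number should then be verified, and reset if necessary, as described in the next section.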
VTP Updates VTP updates contain a 32-bit configuration revision number, to ensure that all devices have the most current VLAN database. Every change to the VLAN database increments the configuration revision number by 1. A VTP switch will only accept or synchronize an update if the revision number is higher (and thus more recent) than that of the currently installed VLAN database. This is true even if the advertising switch is a VTP Client. Updates with a lower revision number are ignored. REMEMBER: a VTP client can update other clients and VTP servers in the VTP domain, if its revision number is higher. The simplest way to reset the configuration revision on a VTP switch is to change the VTP domain name, and then change it back to the original name. VTP utilizes three message types: • Summary Advertisement – sent out every 300 seconds, informing all VTP switches of the current configuration revision number. • Subset Advertisement – sent out when there is a change to the VLAN database. The subset advertisement actually contains the updated VLAN database. • Advertisement Request – sent out when a switch requires the most current copy of the VLAN database. A switch that is newly joined to the VTP domain will send out an Advertisement Request. A switch will also send out an Advertisement Request if it receives a Summary Advertisement with a configuration revision number higher than its current VLAN database. A Subset Advertisement will then be sent to that switch, so that it can synchronize the latest VLAN database. A Subset Advertisement will contain the following fields: • VTP Version • VTP Domain • VTP Configuration Revision • VLAN IDs for each VLAN in the database • VLAN-specific information, such as the VLAN name and MTU (Reference: http://www.cisco.com/en/US/tech/tk389/tk689/technologies_tech_note09186a0080094c52.shtml)
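As a sketch of the domain-rename trick described above (the temporary domain name is purely illustrative), the revision number can be reset and then verified:

Switch(config)# vtp domain TEMPORARY
Switch(config)# vtp domain MYDOMAIN
Switch# show vtp status

After this, the Configuration Revision field in the show vtp status output should read 0.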
Configuring VTP To configure the VTP domain (the domain name is case sensitive): Switch(config)# vtp domain MYDOMAIN
To configure the VTP mode: Switch(config)# vtp mode server Switch(config)# vtp mode client Switch(config)# vtp mode transparent
The VTP domain can be further secured using a password: Switch(config)# vtp password PASSWORD
All switches participating in the VTP domain must be configured with the same password. The password will be hashed into a 16-byte MD5 value. By default, a Catalyst switch uses VTP version 1. VTP Version 1 and 2 are not compatible. If applied on a VTP server, the following command will enable VTP version 2 globally on all switches: Switch(config)# vtp version 2
To view status information about VTP:

Switch# show vtp status

VTP Version                     : 2
Configuration Revision          : 42
Maximum VLANs supported locally : 1005
Number of existing VLANs        : 7
VTP Operating Mode              : Server
VTP Domain Name                 : MYDOMAIN
VTP Pruning Mode                : Disabled
VTP V2 Mode                     : Enabled
VTP Traps Generation            : Disabled
MD5 digest                      : 0x42 0x51 0x69 0xBA 0xBE 0xFA 0xCE 0x34
Configuration last modified by 0.0.0.0 at 3-12-09 4:07:52
To view VTP statistical information and error counters: Switch# show vtp counters
VTP Pruning VTP pruning is a process of preventing unnecessary VLAN broadcast or multicast traffic throughout the switching infrastructure. In the following example, VTP pruning would prevent VLAN C broadcasts from being sent to Switch 2. Pruning would further prevent VLAN A and B broadcast traffic from being sent to Switch 3.
With VTP pruning, traffic is only sent out the necessary VLAN trunk ports where those VLANs exist. VTP pruning is disabled by default on Catalyst IOS switches. If applied on a VTP server, the following command will enable VTP pruning globally on all switches: Switch(config)# vtp pruning
On trunk ports, it is possible to specify which VLANs are pruning eligible: Switch(config)# interface fa0/24 Switch(config-if)# switchport trunk pruning vlan add 2-50 Switch(config-if)# switchport trunk pruning vlan remove 50-100 Switch(config)# interface fa0/24 Switch(config-if)# switchport trunk pruning vlan all Switch(config-if)# switchport trunk pruning vlan except 2-100
VLAN 1 is never eligible for pruning. The system VLANs 1002-1005 are also pruning-ineligible.
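To confirm which VLANs are currently pruning-eligible on a particular trunk, the switchport details of that interface can be checked (the interface number is illustrative):

Switch# show interface fa0/24 switchport

The output includes a “Pruning VLANs Enabled” field, listing the VLANs that are currently eligible for pruning on that trunk.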
Section 8 - EtherChannel Port Aggregation When a switched network spans multiple switches, some method of linking those switches must be used. A single Fast Ethernet or Gigabit Ethernet port can be used to uplink between switches, but this introduces a bottleneck to the flow of traffic. For example, when using a 24-port Catalyst switch, imagine having to pipe the traffic of 23 ports over a single port to reach another switch! Unfortunately, we cannot simply connect two or more ports from one switch to another switch, as this introduces a switching loop to the network. The result would be an almost instantaneous broadcast storm. Port Aggregation allows us to tie multiple ports together into a single logical interface. Cisco’s implementation of port aggregation is called EtherChannel. The switch treats an EtherChannel as a single interface, thus eliminating the possibility of a switching loop. Not only does port aggregation increase the bandwidth of a link, but it also provides redundancy. If a single port fails, traffic will be redirected to the other port(s). This failover occurs quickly – in the span of milliseconds. A maximum of 8 Fast Ethernet or 8 Gigabit Ethernet ports can be grouped together when forming an EtherChannel. Thus, when running in full duplex, a Fast EtherChannel (FEC) has a maximum bandwidth of 1600 Mbps. A Gigabit EtherChannel (GEC) has a maximum bandwidth of 16 Gbps. A maximum of 64 EtherChannels can be configured on a single Catalyst 3550XL switch. A Catalyst 6500 switch supports up to 128 EtherChannels.
EtherChannel Requirements EtherChannels can be formed with either access or trunk ports. An EtherChannel comprised of access ports provides increased bandwidth and redundancy to a host device, such as a server. The host device must support a port aggregation protocol, such as LACP. EtherChannels comprised of trunk ports provide increased bandwidth and redundancy to other switches. All interfaces in an EtherChannel must be configured identically. Specific settings that must be identical include: • Speed settings • Duplex settings • STP settings • VLAN membership (for access ports) • Native VLAN (for trunk ports) • Allowed VLANs (for trunk ports) • Trunking Encapsulation (ISL or 802.1Q, for trunk ports) When configuring an EtherChannel trunk to another switch, the above configuration should be identical on both switches. EtherChannels will not form if either dynamic VLANs or port security are enabled on the participating EtherChannel interfaces.
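As a minimal sketch of this requirement (interface and VLAN numbers are illustrative, not taken from this guide), the trunk-related settings would typically be applied identically to every member port on both switches before the bundle is brought up:

Switch(config)# interface range fa0/23 - 24
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk native vlan 10
Switch(config-if)# switchport trunk allowed vlan 10,20,30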
EtherChannel Load-Balancing

Data sent across an EtherChannel is not load-balanced equally between all interfaces. EtherChannel utilizes a load-balancing algorithm, which can be based on several forms of criteria, including:

• Source IP Address (src-ip)
• Destination IP Address (dst-ip)
• Both Source and Destination IP (src-dst-ip)
• Source MAC address (src-mac)
• Destination MAC address (dst-mac)
• Both Source and Destination MAC (src-dst-mac)
• Source TCP/UDP port number (src-port)
• Destination TCP/UDP port number (dst-port)
• Both Source and Destination port number (src-dst-port)
On a Catalyst 3550XL, the default load-balancing method for Layer 2 switching is src-mac. For Layer 3 switching, it’s src-dst-ip. (Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst4500/12.1/8aew/configuration/guide/channel.html)
EtherChannel Load-Balancing Configuration To configure what load-balancing method to utilize: Switch(config)# port-channel load-balance TYPE
For example, to switch the load-balancing method to source TCP/UDP port number: Switch(config)# port-channel load-balance src-port
To view the currently configured load-balancing method, including the current load on each link: Switch# show etherchannel port-channel
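The configured hashing method by itself can also be confirmed with a separate command (the exact output wording varies by platform and IOS version):

Switch# show etherchannel load-balance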
EtherChannel Load-Balancing Example

Consider the following example, where ports fa0/10 and fa0/18 are configured as a single EtherChannel on both switches:

[Diagram: Switch A and Switch B connected by two links (Fa0/10 to Fa0/10 and Fa0/18 to Fa0/18), bundled into a single EtherChannel.]
Assume that the EtherChannel load-balancing method we are using is src-ip. The two links in the EtherChannel can be represented by one bit. A bit can either be off (“0”) or on (“1”). The first interface in the EtherChannel will become Link 0; the second will become Link 1.

Consider the following source IP addresses and their binary equivalents:

10.1.1.1 – 00001010.00000001.00000001.00000001
10.1.1.2 – 00001010.00000001.00000001.00000010

Because there are only two links in our EtherChannel, only one bit needs to be observed in the source IP addresses – the last bit. The first address ends with a “1” bit, and thus would be sent down Link 1. The second address ends with a “0” bit, and thus would be sent down Link 0. Simple, right?

This method of load-balancing can lead to one link being overburdened, in the odd circumstance that there are a disproportionate number of even or odd addresses.

In general, EtherChannels should be formed with a power-of-two number of interfaces (two, four, or eight) to provide the best chance for equal load-balancing. Four interfaces can be represented with two bits; eight interfaces with three bits. Other interface counts CAN be used in an EtherChannel; however, some links will be severely overburdened compared to other links.
EtherChannel Load-Balancing Example (continued)

Consider again the following example:

[Diagram: Switch A and Switch B connected by the same two-link EtherChannel (Fa0/10 to Fa0/10 and Fa0/18 to Fa0/18).]
This time, assume that the EtherChannel load-balancing method we are using is src-dst-ip. The load-balancing algorithm will use both the source and destination IP when choosing a link. Again, the first interface in our EtherChannel will become Link 0; the second will become Link 1.

Consider the following source and destination IP addresses and their binary equivalents:

192.168.1.10 – 11000000.10101000.00000001.00001010
192.168.1.25 – 11000000.10101000.00000001.00011001

The Catalyst switch performs an exclusive OR (XOR) to determine the appropriate link. Again, looking at the last bit of each address:

Source   Destination   Result
  0           0           0
  1           0           1
  0           1           1
  1           1           0
Based on the XOR operation, the result can either be “off” (“0”) or “on” (“1”). This determines the link the switch will use. In the above example of source/destination IP address, the XOR operation would result in a “1”, and thus we would use Link 1.
EtherChannel Protocols EtherChannel can either be configured manually, or can be dynamically negotiated via one of two protocols: • PAgP (Port Aggregation Protocol) – Cisco’s proprietary aggregating protocol. • LACP (Link Aggregation Control Protocol) – The IEEE standardized aggregation protocol, otherwise known as 802.3ad. Both PAgP and LACP exchange packets between switches in order to form the EtherChannel. However, when the EtherChannel is manually configured (i.e., set to on), no update packets are exchanged. Thus, an EtherChannel will not be formed if one switch has a manually configured EtherChannel, and the other switch is configured with a dynamic protocol (PAgP or LACP). Furthermore, PAgP and/or LACP configuration must be removed from a switch’s interfaces before a manual EtherChannel can be formed.
EtherChannel Manual Configuration To manually force an EtherChannel on two ports: Switch(config)# interface range fa0/23 - 24 Switch(config-if)# channel-group 1 mode on
The other switch must also have the EtherChannel manually configured as on. Remember that speed, duplex, VLAN, and STP information must be the same on every port in the EtherChannel. The channel-group number identifies this particular EtherChannel. The channel-group number does not need to be configured identically on both switches. Remember, a maximum of 64 EtherChannels are allowed on a Catalyst 3550XL switch.
EtherChannel PAgP Configuration

To configure PAgP negotiation on two ports, there are two options:

Switch(config)# interface range fa0/23 - 24
Switch(config-if)# channel-protocol pagp
Switch(config-if)# channel-group 1 mode desirable

Switch(config)# interface range fa0/23 - 24
Switch(config-if)# channel-protocol pagp
Switch(config-if)# channel-group 1 mode auto
Obviously, the other switch must also be configured with channel-protocol pagp. The channel-group number identifies this particular EtherChannel.

The PAgP channel-group mode can be configured to either desirable or auto. A switch configured as desirable will actively request to form an EtherChannel. When set to auto, the switch will passively wait for another switch to make the request.

When set to desirable, the switch will form an EtherChannel with another switch configured as either desirable or auto. When set to auto, the switch will form an EtherChannel only with another switch configured as desirable. If both switches are set to auto, no EtherChannel will be formed.

Regardless of whether it is set to desirable or auto, a Catalyst switch configured with PAgP will not form an EtherChannel with a switch that has a manually configured EtherChannel. Again, remember that speed, duplex, VLAN, and STP information must be the same on every port in the EtherChannel.
EtherChannel LACP Configuration

To configure LACP negotiation on two ports, there are also two options:

Switch(config)# interface range fa0/23 - 24
Switch(config-if)# channel-protocol lacp
Switch(config-if)# channel-group 1 mode active

Switch(config)# interface range fa0/23 - 24
Switch(config-if)# channel-protocol lacp
Switch(config-if)# channel-group 1 mode passive
The other switch must also be configured with channel-protocol lacp. The LACP channel-group mode can be configured to either active or passive. A switch configured as active will actively request to form an EtherChannel. When set to passive, the switch will passively wait for another switch to make the request. When set to active, the switch will form an EtherChannel with another switch configured as either active or passive. When set to passive, the switch will form an EtherChannel only with another switch configured as active. If both switches are set to passive, no EtherChannel will be formed. LACP provides an additional configuration option, a numerical priority that allows LACP to determine which ports can become active in the EtherChannel. This priority can either be set globally: Switch(config)# lacp system-priority PRIORITY
Or on interfaces:

Switch(config)# interface range fa0/23 - 24
Switch(config-if)# lacp port-priority PRIORITY
A lower value indicates a higher priority. The ports with the lowest values (highest priorities) become active in the EtherChannel.
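Once LACP negotiation succeeds, the partner switch and the negotiated priorities can be inspected with standard verification commands (output not shown here, as it varies by platform):

Switch# show lacp sys-id
Switch# show lacp neighbor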
Troubleshooting EtherChannel

To view the current status of all configured EtherChannels:

Switch# show etherchannel summary

Flags:  D - down         P - in port-channel
        I - stand-alone  s - suspended
        R - Layer3       S - Layer2
        U - in use

Group  Port-channel  Ports
------+-------------+-----------------------------
1      Po1(SU)       Fa0/23(P)   Fa0/24(P)
To view information about the configured EtherChannel protocol:

Switch# show etherchannel port-channel

Channel-group listing:
----------------------
Group: 1
----------
Port-channels in the group:
---------------------------
Port-channel: Po1    (Primary Aggregator)
------------
Age of the Port-channel   = 2d:42h:2m:69s
Logical slot/port   = 1/1          Number of ports = 2
Port state          = Port-channel Ag-Inuse
Protocol            = LACP

Ports in the Port-channel:

Index   Load   Port     EC state        No of bits
------+------+--------+----------------+-----------
  0     11     Fa0/23   Active            2
  1     22     Fa0/24   Active            2
Section 9 - Spanning Tree Protocol

Switching Loops

By default, a switch will forward a broadcast or multicast out all ports, excluding the port the broadcast/multicast was sent from. When a loop is introduced into the network, a highly destructive broadcast storm can develop within seconds. Broadcast storms occur when broadcasts are endlessly switched through the loop, choking off all other traffic.

Consider the following looped environment:

[Diagram: Switches 1 through 5 interconnected with redundant links that form a loop; a host is attached to Switch 4.]
If the computer connected to Switch 4 sends out a broadcast, the switch will forward the broadcast out all ports, including the ports connecting to Switch 2 and Switch 5. Those switches, likewise, will forward that broadcast out all ports, including to their neighboring switches. The broadcast will loop around the switches infinitely. In fact, there will be two separate broadcast storms cycling in opposite directions through the switching loop. Only powering off the switch or physically removing the loop will stop the storm.
Spanning Tree Protocol (STP)

Switches (and bridges) needed a mechanism to prevent loops from forming, and thus Spanning Tree Protocol (STP, or IEEE 802.1D) was developed. STP is enabled by default on all VLANs on Catalyst switches.

STP-enabled switches communicate to form a topology of the entire switching network, and then shut down (or block) a port if a loop exists. The blocked port can be reactivated if another link on the switching network goes down, thus preserving fault-tolerance. Once all switches agree on the topology database, the switches are considered converged.

STP switches send BPDUs (Bridge Protocol Data Units) to each other to form their topology databases. BPDUs are sent out all ports every two seconds, and are addressed to a specific multicast MAC address: 0180.c200.0000.
STP Types Various flavors of 802.1D STP exist, including: • Common Spanning Tree (CST) – A single STP process is used for all VLANs. • Per-VLAN Spanning Tree (PVST) – Cisco proprietary version of STP, which employs a separate STP process for each VLAN. • Per-VLAN Spanning Tree Plus (PVST+) – Enhanced version of PVST that allows CST-enabled switches and PVST-enabled switches to interoperate. This is default on newer Catalyst switches.
The STP Process

To maintain a loop-free environment, STP performs the following functions:
• A Root Bridge is elected
• Root Ports are identified
• Designated Ports are identified
• If a loop exists, a port is placed in Blocking state. If the loop is removed the blocked port is activated again.

If multiple loops exist in the switching environment, multiple ports will be placed in a blocking state.
Electing an STP Root Bridge The first step in the STP process is electing a Root Bridge, which serves as the centralized point of the STP topology. Good design practice dictates that the Root Bridge be placed closest to the center of the STP topology. The Root Bridge is determined by a switch’s priority. The default priority is 32,768, and the lowest priority wins. In case of a tie in priority, the switch with the lowest MAC address will be elected root bridge. The combination of a switch’s priority and MAC address make up that switch’s Bridge ID. Consider the following example:
Remember that the lowest priority determines the Root Bridge. Switches 2, 3, and 5 have the default priority set. Switches 1 and 4 each have a priority of 100 configured. However, Switch 1 will become the root bridge, as it has the lowest MAC address. Switches exchange BPDU’s to perform the election process. By default, all switches “believe” they are the Root Bridge, until a switch with a lower Bridge ID is discovered. Root Bridge elections are a continuous process. If a new switch with a lower Bridge ID is added to the topology, it will be elected as the new Root Bridge.
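To quickly check which switch a given Catalyst currently believes is the Root Bridge for each VLAN (a verification step, not a configuration change), the following command can be used; it lists the Root Bridge ID, the root path cost, and the root port per VLAN:

Switch# show spanning-tree root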
Identifying Root Ports

The second step in the STP process is identifying Root Ports, or the port on each switch that has the lowest path cost to get to the Root Bridge. Each switch has only one Root Port, and the Root Bridge cannot have a Root Port.

Path Cost is a cumulative cost based on the bandwidth of the links. The higher the bandwidth, the lower the Path Cost:

Bandwidth     Cost
4 Mbps        250
10 Mbps       100
16 Mbps       62
100 Mbps      19
1 Gbps        4
Consider the following example:
Assume the links between all switches are 10Mbps Ethernet, with a Path Cost of 100. Each switch will identify the port with the least cumulative Path Cost to get to the Root Bridge. For Switch 4, the port leading up to Switch 2 has a Path Cost of 200, and becomes the Root Port. The port to Switch 5 has a higher Path Cost of 300. The Root Port is said to have received the most superior BPDU to the Root Bridge. Likewise, non-Root Ports are said to have received inferior BPDUs to the Root Bridge.
Identifying Designated Ports The third and final step in the STP process is to identify Designated Ports. Each network segment requires a single Designated Port, which has the lowest path cost leading to the Root Bridge. This port will not be placed in a blocking state. A port cannot be both a Designated Port and a Root Port. Consider the following example:
Ports on the Root Bridge are never placed in a blocking state, and thus become Designated Ports for directly attached segments. The network segments between Switches 2 and 4, and between Switches 3 and 5, both require a Designated Port. The ports on Switch 2 and Switch 3 have the lowest Path Cost to the Root Bridge for the two respective segments, and thus both become Designated Ports. The segment between Switch 4 and Switch 5 does not contain a Root Port. One of the ports must be elected the Designated Port for that segment, and the other must be placed in a blocking state. Normally, Path Cost is used to determine which port is blocked. However, the ports connecting Switches 4 and 5 have the same Path Cost to reach the Root Bridge (200). Whichever switch has the lowest Bridge ID is awarded the Designated Port. Whichever switch has the highest Bridge ID has its port placed in a blocking state. In this example, Switch 4 has the lowest priority, and thus Switch 5's port goes into a blocking state.
Port ID

In certain circumstances, a tie will occur in both Path Cost and Bridge ID. Consider the following example:

[Diagram: Switch 1 (the Root Bridge) connected to Switch 2 by two parallel links, Fa0/10 and Fa0/11.]
If the bandwidth of both links is equal, then both of Switch 2's interfaces have an equal path cost to the Root Bridge. Which interface will become the Root Port? The tiebreaker should be the lowest Bridge ID, but that cannot be used in this circumstance (unless Switch 2 has become schizophrenic). In this circumstance, Port ID will be used as the tiebreaker.

An interface's Port ID consists of two parts: a configurable port priority value and the port number of that interface. Whichever interface has the lowest Port ID will become the Root Port.

By default, the port priority of an interface is 128. Lowering this value will ensure a specific interface becomes the Root Port:

Switch(config)# int fa0/10
Switch(config-if)# spanning-tree port-priority 50
Remember that port priority is the last tiebreaker STP will consider. STP decides Root and Designated Ports based on the following criteria, in this order:

• Lowest Path Cost to the Root Bridge
• Lowest Bridge ID
• Lowest Port ID
Extended System IDs Normally, a switch’s Bridge ID is a 64-bit value that consists of a 16-bit Bridge Priority value, and a 48-bit MAC address. However, it is possible to include a VLAN ID, called an extended System ID, into a Bridge ID. Instead of adding bits to the existing Bridge ID, 12 bits of the Bridge Priority value are used for this System ID, which identifies the VLAN this STP process represents. Because 12 bits have been stolen from the Bridge Priority field, the range of priorities has been reduced. Normally, the Bridge Priority can range from 0 (or off) to 65,535, with a default value of 32,768. With extended System ID enabled, the Priority range would be 0 – 61,440, and only in multiples of 4,096. To enable the extended System ID: Switch(config)# spanning-tree extend system-id
Enabling extended System ID accomplishes two things: • Increases the amount of supported VLANs on the switch from 1005 to 4094. • Includes the VLAN ID as part of the Bridge ID. Thus, when this command is enabled, the 64-bit Bridge ID will consist of the following: • 4-bit Priority Value • 12-bit System ID value (VLAN ID) • 48-bit MAC address
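Note that once the extended System ID is in use, the Bridge Priority can only be configured in multiples of 4,096; any other value will be rejected by the switch. For example (the VLAN and value are illustrative):

Switch(config)# spanning-tree vlan 10 priority 4096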
Per-VLAN Spanning Tree (PVST) Example Remember that PVST+ is the default implementation of STP on Catalyst switches. Thus, each VLAN on the switch is allotted its own STP process. Consider the following example:
With Common Spanning Tree (CST), all VLANs would belong to the same STP process. Thus, if one of Switch 4's ports entered a blocking state to eliminate the loop, all VLANs would be blocked out that port. For efficiency purposes, this may not be ideal.
In the above examples, the benefit of PVST becomes apparent. STP runs a separate process for each VLAN, allowing a port to enter a blocking state only for that specific VLAN. Thus, it is possible to load balance VLANs, allowing traffic to flow more efficiently.
STP Port States

Switch ports participating in STP progress through five port states:

Blocking – The default state of an STP port when a switch is powered on, and when a port is shut down to eliminate a loop. Ports in a blocking state do not forward frames or learn MAC addresses. They will still listen for BPDUs from other switches, to learn about changes to the switching topology.

Listening – A port will progress from a Blocking to a Listening state only if the switch believes that the port will not be shut down to eliminate a loop. The port will listen for BPDUs to participate in the election of a Root Bridge, Root Ports, and Designated Ports. Ports in a listening state will not forward frames or learn MAC addresses.

Learning – After a brief period of time, called a Forward Delay, a port in a listening state will be elected either a Root Port or Designated Port, and placed in a learning state. Ports in a learning state listen for BPDUs, and also begin to learn MAC addresses. However, ports in a learning state will still not forward frames. (Note: If a port in a listening state is not kept as a Root or a Designated Port, it will be placed into a blocking state and not a learning state.)

Forwarding – After another Forward Delay, a port in learning mode will be placed in forwarding mode. Ports in a forwarding state can send and receive all data frames, and continue to build the MAC address table. All designated, root, and non-uplink ports will eventually be placed in a forwarding state.

Disabled – A port in a disabled state has been administratively shut down, and does not participate in STP or forward frames at all.

On average, a port in a blocking state will take 30 to 50 seconds to reach a forwarding state.

To view the current state of a port (such as fa0/10):

Switch# show spanning-tree interface fa0/10

Interface Fa0/10 in Spanning tree 1 is Forwarding
Port path cost 100, Port priority 128
<snip>

(Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst4500/12.1/8aew/configuration/guide/spantree.html#wp1020487)
STP Timers

STP utilizes three timers to ensure all switches remain synchronized, and to allow enough time for the Spanning Tree process to ensure a loop-free environment.

• Hello Timer – Default is 2 seconds. Indicates how often BPDUs are sent by switches.
• Forward Delay – Default is 15 seconds. Indicates a delay period in both the listening and learning states of a port, for a total of 30 seconds. This delay ensures STP has ample time to detect and eliminate loops.
• Max Age – Default is 20 seconds. Indicates how long a switch will keep BPDU information from a neighboring switch before discarding it. In other words, if a switch fails to receive BPDUs from a neighboring switch for the Max Age period, it will remove that switch's information from the STP topology database.

All timer values can be adjusted, and should only be adjusted on the Root Bridge. The Root Bridge will propagate the changed timers to all other switches participating in STP. Non-Root switches will ignore their locally configured timers.

To adjust the three STP timers for VLAN 10:

Switch(config)# spanning-tree vlan 10 hello-time 10
Switch(config)# spanning-tree vlan 10 forward-time 30
Switch(config)# spanning-tree vlan 10 max-age 40
The timers are measured in seconds. The above examples represent the maximum value each timer can be configured to. Remember that STP is configured on a VLAN by VLAN basis on Catalyst Switches.
STP Topology Changes

[Diagram: Switch 1 is the Root Bridge. Switches 2 and 3 connect directly to it, and Switches 4 and 5 sit below them; each non-root switch has a single Root Port pointing toward the Root Bridge.]
An STP topology change will occur under two circumstances: • When an interface is placed into a Forwarding state. • When an interface already in a Forwarding or Learning state is placed into a Blocking state. The switch recognizing this topology change will send out a TCN (Topology Change Notification) BPDU, destined for the Root Bridge. The TCN BPDU does not contain any data about the actual change – it only indicates that a change occurred. For example, if the interface on Switch 4 connecting to Switch 5 went down, Switch 4 would send a TCN out its Root Port to Switch 2. Switch 2 will acknowledge this TCN by sending a BPDU back to Switch 4 with the Topology Change Acknowledgement (TCA) bit set. Switch 2 would then forward the TCN out its Root Port to Switch 1 (the Root Bridge). Once the Root Bridge receives the TCN, it will send out a BPDU with the Topology Change (TC) bit set to all switches. When a switch receives this Root BPDU, it will temporarily lower its MAC-address Aging Timer from 300 seconds to 15 seconds, so that any erroneous MAC addresses can be quickly flushed out of the CAM table. The MAC-Address Aging Timer will stay lowered to 15 seconds for a period of 35 seconds by default, or one Max Age (20 seconds) plus one Forward Delay (15 seconds) timer. (Reference: http://www.cisco.com/en/US/tech/tk389/tk621/technologies_tech_note09186a0080094797.shtml)
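A rough way to observe how often topology changes are occurring, and which port received the last change notification, is the detailed spanning-tree output (the VLAN number is illustrative); the output includes a “Number of topology changes” counter and the time since the last change:

Switch# show spanning-tree vlan 100 detail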
Basic STP Configuration To disable STP for a specific VLAN: Switch(config)# no spanning-tree vlan 10
To adjust the Bridge Priority of a switch from its default of 32,768, to increase its chances of being elected Root Bridge of a VLAN: Switch(config)# spanning-tree vlan 10 priority 150
To change an interface’s Path Cost from its defaults: Switch(config)# int fa0/24 Switch(config-if)# spanning-tree cost 42
To force a switch to become the Root Bridge: Switch(config)# spanning-tree vlan 10 root primary
The root primary parameter in the above command automatically lowers the switch’s priority to 24,576. If another switch on the network has a lower priority than 24,576, the above command will lower the priority by 4096 less than the priority of the other switch. It is possible to assign a Secondary Root Bridge for redundancy. To force a switch to become a Secondary Root Bridge: Switch(config)# spanning-tree vlan 10 root secondary
The root secondary parameter in the above command automatically lowers the switch’s priority to 28,672. To specify the diameter of the switching topology: Switch(config)# spanning-tree vlan 10 root primary diameter 7
The diameter parameter in the preceding command indicates the length of the STP topology (number of switches). The maximum (and default) value for the diameter is 7. Note that the switching topology can contain more than seven switches; however, each branch of the switching tree can only extend seven switches deep, from the Root Bridge. The diameter command will also adjust the Hello, Forward Delay, and Max Age timers. This is the recommended way to adjust timers, as the timers are tuned specifically to the diameter of the switching network.
STP PortFast

PortFast allows switch ports that connect to a host device (such as a printer or a workstation) to bypass the usual progression of STP states. Theoretically, a port connecting to a host device can never create a switching loop. Thus, PortFast allows the interface to move from a blocking state to a forwarding state immediately, eliminating the normal 30 second STP delay.

To configure PortFast on an interface:

Switch(config)# int fa0/10
Switch(config-if)# spanning-tree portfast
To enable PortFast globally on all interfaces: Switch(config)# spanning-tree portfast default
PortFast should not be enabled on switch ports connecting to another hub or switch, as this may result in a loop. Note that PortFast does not disable STP on an interface - it merely speeds up the convergence. PortFast additionally reduces unnecessary BPDU traffic, as TCN BPDUs will not be sent out for state changes on a PortFast-enabled interface.
STP UplinkFast Switches can have multiple uplinks to other upstream switches. If the multiple links are not placed in an EtherChannel, then at least one of the ports is placed into a blocking state to eliminate the loop. If a directly-connected interface goes down, STP needs to perform a recalculation to bring the other interface out of a blocking state. As stated earlier, this calculation can take from 30 to 50 seconds. UplinkFast allows the port in a blocking state to be held in standby-mode, and activated immediately if the forwarding interface fails. If multiple ports are in a blocking state, whichever port has the lowest Root Path Cost will become unblocked. The Root Bridge cannot have UplinkFast enabled. UplinkFast is configured globally for all VLANs on the switch: Switch(config)# spanning-tree uplinkfast (Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst3750/software/release/12.2_35_se/configuration/guide/swstpopt.html)
STP BackboneFast

While UplinkFast allows faster convergence if a directly-connected interface fails, BackboneFast provides the same benefit if an indirectly-connected interface fails.

For example, if the Root Bridge fails, another switch will be elected the Root. A switch learning about the new Root Bridge must wait for its Max Age timer to expire, flushing out the old information, before it will accept the updated info. By default, the Max Age timer is 20 seconds.

BackboneFast allows a switch to bypass the Max Age timer if it detects an indirect failure on the network. It will update itself with the new Root info immediately. BackboneFast is configured globally, and should be implemented on all switches in the network when used:

Switch(config)# spanning-tree backbonefast
Protecting STP

STP is vulnerable to attack for two reasons:

• STP builds its topology information by accepting a neighboring switch's BPDUs.
• The Root Bridge is always determined by the lowest Bridge ID.

Switches with a low priority can be maliciously placed on the network, and elected the Root Bridge. This may result in a suboptimal or unstable STP topology. Cisco implemented three mechanisms to protect the STP topology:

• Root Guard
• BPDU Guard
• BPDU Filtering

All three mechanisms are configured on an individual interface basis, and are disabled by default. When enabled, these mechanisms apply to all VLANs for that particular interface.

(Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst3750/software/release/12.2_35_se/configuration/guide/swstpopt.html)
Root Guard Root Guard prevents an unauthorized switch from advertising itself as a Root Bridge. Switch(config)# interface fa0/10 Switch(config-if)# spanning-tree guard root
The above command will prevent the switch from accepting a new Root Bridge off of the fa0/10 interface. If a Root Bridge advertises itself to this port, the port will enter a root-inconsistent state (a pseudo-blocking state):

Switch# show spanning-tree inconsistentports

Name                 Interface              Inconsistency
-------------------- ---------------------- ------------------
VLAN100              FastEthernet0/10       Root Inconsistent
BPDU Guard and BPDU Filtering BPDU Guard is employed on interfaces that are PortFast-enabled. Under normal circumstances, a PortFast-enabled interface connects to a host device, and thus the interface should never receive a BPDU. If another switch is accidentally or maliciously connected into a PortFast interface, BPDU Guard will place the interface into an errdisable state. More accurately, if an interface configured for BPDU Guard receives a BPDU, then the errdisable state will occur. To enable BPDU Guard: Switch(config)# interface fa0/10 Switch(config-if)# spanning-tree bpduguard enable
To take an interface out of an errdisable state, simply disable and re-enable the interface: Switch(config)# interface fa0/10 Switch(config-if)# shutdown Switch(config-if)# no shutdown
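BPDU Guard can also be enabled globally, so that it automatically applies to every PortFast-enabled interface, and errdisable recovery can be configured to re-enable a violated port after a timeout (the 300-second interval is only an example):

Switch(config)# spanning-tree portfast bpduguard default
Switch(config)# errdisable recovery cause bpduguard
Switch(config)# errdisable recovery interval 300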
BPDU Filtering essentially disables STP on a particular interface, by preventing it from sending or receiving BPDUs:

Switch(config)# interface fa0/10
Switch(config-if)# spanning-tree bpdufilter enable
Unidirectional Link Detection (UDLD)

Most communication in a switching network is bi-directional. STP requires that switches send BPDUs bi-directionally to build the topology database. If a malfunctioning switch port only allows traffic one way, and the switch still sees that port as up, a loop can form without the switch realizing it.

Unidirectional Link Detection (UDLD) periodically tests ports to ensure bi-directional communication is maintained. UDLD sends out ID frames on a port, and waits for the remote switch to respond with its own ID frame. If the remote switch does not respond, UDLD assumes the interface has malfunctioned and become unidirectional. By default, UDLD sends out ID frames every 15 seconds, and must be enabled on both sides of a link.

UDLD can run in two modes:

• Normal Mode – If a unidirectional link is detected, the port is not shut down, but merely flagged as being in an undetermined state.
• Aggressive Mode – If a unidirectional link is detected, the port is placed in an errdisable state.

UDLD can be enabled globally (but only for fiber ports on the switch):

Switch(config)# udld enable message time 20
Switch(config)# udld aggressive message time 20
The enable parameter sets UDLD into normal mode, and the aggressive parameter is for aggressive mode (obviously). The message time parameter modifies how often ID frames are sent out. UDLD can be configured on individual interfaces: Switch(config-if)# udld enable Switch(config-if)# udld aggressive Switch(config-if)# udld disable
To view UDLD status on ports, or re-enable UDLD errdisabled ports: Switch# show udld Switch# udld reset
STP Troubleshooting Commands

To view STP information for a specific VLAN:

Switch# show spanning-tree vlan 100

VLAN0100
  Spanning tree enabled protocol ieee
  Root ID    Priority    24576
             Address     000a.5678.90ab
             Cost        19
             Port        24 (FastEthernet0/24)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32768  (priority 32768 sys-id-ext 1)
             Address     000c.1234.abcd
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300

Interface           Role Sts Cost      Prio.Nbr
------------------- ---- --- --------- --------
Fa0/24              Root FWD 19        128.24
Fa0/23              Altn BLK 19        128.23
To view STP information for all VLANS: Switch# show spanning-tree
To view detailed STP interface information:

Switch# show spanning-tree detail

VLAN100 is executing the ieee compatible Spanning Tree protocol
  Bridge Identifier has priority 32768, address 000c.1234.abcd
  Configured hello time 2, max age 20, forward delay 15
  <snip>
 Port 23 (FastEthernet0/23) of VLAN100 is forwarding
   Port path cost 19, Port priority 128, Port Identifier 128.23.
   Designated root has priority 24576, address 000a.5678.90ab
   Designated bridge has priority 24576, address 000a.5678.90ab
   Designated port id is 128.23, designated path cost 0
   <snip>
(Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.1E/native/command/reference/show4.html#wp1026768)
Rapid Spanning Tree Protocol (RSTP)

To further alleviate the 30 to 50 second convergence delays with STP, enhancements were made to the original IEEE 802.1D standard. The result was 802.1w, or Rapid Spanning Tree Protocol (RSTP).

RSTP is similar in many respects to STP. BPDUs are forwarded between switches, and a Root Bridge is elected, based on the lowest Bridge ID. Root Ports and Designated Ports are also elected. RSTP defines five port types:

• Root Port – Switch port on each switch that has the best Path Cost to the Root Bridge (same as STP).
• Alternate Port – A backup Root Port, that has a less desirable Path Cost. An Alternate Port is placed in a discarding state.
• Designated Port – Non-Root port that represents the best Path Cost for each network segment to the Root Bridge (same as STP). Designated ports are also referred to as Point-to-Point ports.
• Backup Port – A backup Designated Port, that has a less desirable Path Cost. A Backup Port is placed in a discarding state.
• Edge Port – A port connecting a host device, which is moved to a Forwarding state immediately. If an Edge Port receives a BPDU, it will lose its Edge Port status and participate in RSTP calculations. On Cisco Catalyst switches, any port configured with PortFast becomes an Edge Port.

The key benefit of RSTP is speedier convergence. Switches no longer require artificial Forward Delay timers to ensure a loop-free environment. Switches instead perform a handshake synchronization to ensure a consistent topology table.

During initial convergence, the Root Bridge and its directly-connected switches will place their interfaces in a discarding state. The Root Bridge and those switches will exchange BPDUs, synchronize their topology tables, and then place their interfaces in a forwarding state. Each switch will then perform the same handshaking process with its downstream neighbors. The result is convergence that completes in a few seconds, as opposed to 30 to 50 seconds.

(Reference: http://www.cisco.com/en/US/tech/tk389/tk621/technologies_white_paper09186a0080094cfa.shtml)
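On Catalyst IOS switches, Cisco's per-VLAN implementation of RSTP is enabled globally with a single command, and the running mode can then be confirmed (a minimal sketch; existing per-VLAN priorities and costs continue to apply):

Switch(config)# spanning-tree mode rapid-pvst
Switch# show spanning-tree summary

The summary output indicates whether the switch is operating in pvst, rapid-pvst, or mst mode.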
Rapid Spanning Tree Protocol (RSTP) (continued)

Changes to the RSTP topology are also handled more efficiently than in 802.1D STP. Recall that in 802.1D STP, a switch recognizing a topology change will send out a TCN (Topology Change Notification) BPDU, destined for the Root Bridge. Once the Root Bridge receives the TCN, it will send out a BPDU with the Topology Change (TC) bit set to all switches. When a switch receives this Root BPDU, it will temporarily lower its MAC-address Aging Timer from 300 seconds to 15 seconds, so that any erroneous MAC addresses can be quickly flushed out of the CAM table.

In RSTP, a switch recognizing a topology change does not have to inform the Root Bridge first. Any switch can generate and forward a TC BPDU. A switch receiving a TC BPDU will flush all MAC addresses learned on all ports, except for the port that received the TC BPDU.

RSTP incorporates the features of UplinkFast by allowing Alternate and Backup ports to immediately enter a Forwarding state if the primary Root or Designated port fails. RSTP also inherently employs the principles of BackboneFast, by not requiring an arbitrary Max Age timer for accepting inferior BPDUs if there is an indirect network failure.

802.1w RSTP is backwards-compatible with 802.1D STP. However, when RSTP switches interact with STP switches, RSTP loses its inherent advantages, as it will perform according to 802.1D specifications.

Two separate implementations of RSTP have been developed:

• Rapid Per-VLAN Spanning Tree Protocol (RPVST+) – Cisco's proprietary implementation of RSTP.
• Multiple Spanning Tree (MST) – The IEEE 802.1s standard of RSTP.
(Reference: http://www.cisco.com/en/US/tech/tk389/tk621/technologies_white_paper09186a0080094cfa.shtml)
Multiple Spanning Tree (MST)

Earlier in this guide, two types of STP were defined:
• Common Spanning Tree (CST) – All VLANs utilize one STP process.
• Per-VLAN Spanning Tree (PVST) – Each VLAN is allotted its own STP process.

PVST allows for more efficient traffic flow throughout the switching network. However, each VLAN must run its own separate STP process, often placing an extreme burden on the switch’s processor.

Multiple Spanning Tree (MST) allows groups of VLANs to be allotted their own STP process. Each STP process is called an instance. MST separates the STP topology into regions; all switches in a region must contain identical parameters, including:
• Configuration Name – a name of up to 32 characters, similar to a VTP domain name.
• Revision Number – a 16-bit value that identifies the current MST configuration’s revision.
• VLAN-to-Instance Mappings

Each region runs its own Internal Spanning Tree (IST) to eliminate loops within that region. IST is essentially an enhanced form of RSTP that supports MST-specific parameters. MST is fully compatible with all other implementations of STP.
(Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst4500/12.2/31sg/configuration/guide/spantree.pdf)
MST Configuration

MST must first be enabled globally on a switch:

Switch(config)# spanning-tree mode mst

Most other MST configuration is completed in “MST Configuration” mode:

Switch(config)# spanning-tree mst configuration

To configure the switch’s MST Configuration Name:

Switch(config-mst)# name MYMSTNAME

To configure the switch’s Revision Number:

Switch(config-mst)# revision 10

To map VLANs to a specific MST instance:

Switch(config-mst)# instance 2 vlan 1-100
A maximum of 16 instances is allowed (0 – 15). By default, all VLANs belong to instance 0. Recall that the above three parameters (configuration name, revision number, and VLAN-to-instance mappings) must be identical on all MST switches in a region. To view the changes to the configuration:

Switch(config-mst)# show pending
Pending MST configuration
Name      [MYMSTNAME]
Revision  10
Instance  Vlans mapped
--------  --------------------------------------------
0         101-4094
2         1-100
All other configuration of MST is identical to standard STP, with two exceptions. The parameter “mst” must be used, and all settings are applied to instances instead of VLANs.

Switch(config)# spanning-tree mst 2 root primary
Switch(config)# spanning-tree mst 2 priority 32000
The above two configurations are applied to MST Instance 2.
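To confirm that the name, revision, and mappings match on every switch in the region, the committed MST configuration can also be viewed from privileged EXEC mode. This is a quick verification sketch; the output shape is approximate:

Switch# show spanning-tree mst configuration
Name      [MYMSTNAME]
Revision  10
Instance  Vlans mapped
--------  --------------------------------------------
0         101-4094
2         1-100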
Section 10 - Multilayer Switching

Routing Between VLANs

VLANs separate a Layer-2 switch into multiple broadcast domains. Each VLAN becomes its own individual broadcast domain (or IP subnet). Only interfaces belonging to the same VLAN can communicate without an intervening device. Interfaces assigned to separate VLANs require a router to communicate.

Routing between VLANs can be accomplished in one of three ways:
• Using an external router that has an interface to each VLAN. This is the least scalable solution, and completely impractical in environments with a large number of VLANs:
• Using an external router that has a single link into the switch, over which all VLANs can be routed. The router must understand either 802.1Q or ISL trunking encapsulations, and the switch port must be configured as a trunk. This method is known as router-on-a-stick:
• Using a Multilayer switch with a built-in routing processor:
This guide will demonstrate the function and configuration of router-on-a-stick and Multilayer switching.
Configuring Router on a Stick
Consider the above router-on-a-stick example. To enable inter-VLAN communication, three elements must be configured:
• Interface fa0/10 on Switch B must be configured as a trunk port.
• Interfaces fa0/14 and fa0/15 on Switch B must be assigned to their respective VLANs.
• Interface fa0/1 on Router A must be split into separate subinterfaces for each VLAN. Each subinterface must support the frame-tagging protocol used by the switch’s trunk port.

Configuration on Switch B would be as follows:

Switch(config)# interface fa0/10
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config)# interface fa0/14
Switch(config-if)# switchport access vlan 101

Switch(config)# interface fa0/15
Switch(config-if)# switchport access vlan 102
Configuration on Router A would be as follows:

Router(config)# interface fa0/1
Router(config-if)# no shut

Router(config)# interface fa0/1.101
Router(config-subif)# encapsulation dot1q 101
Router(config-subif)# ip address 172.16.1.1 255.255.0.0

Router(config)# interface fa0/1.102
Router(config-subif)# encapsulation dot1q 102
Router(config-subif)# ip address 10.1.1.1 255.255.0.0
Host devices in each VLAN will point to their respective subinterface on Router A. For example, Computer A’s default gateway would be 172.16.1.1, and Computer B’s would be 10.1.1.1. This will allow Router A to perform all inter-VLAN communication on behalf of Switch B.
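To verify that the router sees each tagged subinterface, commands like the following can be used. This is only a hypothetical verification sketch; the exact output varies by IOS version:

Router# show vlans
Router# show ip interface brief | include FastEthernet0/1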
Multilayer Switch Port Types

Multilayer switches support both Layer-2 (switching) and Layer-3 (routing) functions. Three port types can exist on Multilayer switches:
• Switchports – Layer-2 ports on which MAC addresses are learned.
• Layer-3 Ports – Essentially routed ports on multilayer switches.
• Switched Virtual Interfaces (SVI) – A VLAN virtual interface where an IP address can be assigned to the VLAN itself.

The port type for each interface can be modified. By default, on Catalyst 2950s and 3550s, all interfaces are switchports. To configure a port as a switchport:

Switch(config)# interface fa0/10
Switch(config-if)# switchport
To configure a port as a Layer-3 (routing) port, and assign an IP address:

Switch(config)# interface fa0/11
Switch(config-if)# no switchport
Switch(config-if)# ip address 192.168.1.1 255.255.0.0
Switch(config-if)# no shut

To assign an IP address to an SVI (virtual VLAN interface):

Switch(config)# interface vlan 101
Switch(config-if)# ip address 192.168.1.1 255.255.0.0
Switch(config-if)# no shut
Note that the VLAN itself is treated as an interface, and supports most IOS interface commands. To view the port type of a particular interface:

Switch# show int fa0/10 switchport
Name: Fa0/10
Switchport: Enabled
<snip>

A Layer-3 interface would display the following output:

Switch# show int fa0/10 switchport
Name: Fa0/10
Switchport: Disabled
<snip>
Multilayer Switching Methods

Multilayer switches contain both a switching and routing engine. A packet must first be routed, allowing the switching engine to cache the IP traffic flow. After this cache is created, subsequent packets destined for that flow can be switched and not routed, reducing latency. This concept is often referred to as route once, switch many. Cisco implemented this type of Multilayer switching as NetFlow switching or route-cache switching.

As is their habit, Cisco replaced NetFlow multilayer switching with a more advanced method called Cisco Express Forwarding (CEF), to address some of the disadvantages of route-cache switching:
• CEF is less intensive than NetFlow for the multilayer switch CPU.
• CEF does not cache routes, thus there is no danger of having stale routes in the cache if the routing topology changes.

CEF contains two basic components:
• Layer-3 Engine – Builds the routing table and then routes data.
• Layer-3 Forwarding Engine – Switches data based on the FIB.

The Layer-3 Engine builds the routing table using standard methods:
• Static routes.
• Dynamically via a routing protocol (such as RIP or OSPF).

The routing table is then reorganized into a more efficient table called the Forwarding Information Base (FIB). The most specific routes are placed at the top of the FIB. The Layer-3 Forwarding Engine utilizes the FIB to switch data in hardware, as opposed to routing it through the Layer-3 Engine’s routing table.

Additionally, CEF maintains an Adjacency Table, containing the hardware address of the next hop for each entry in the FIB. Entries in the adjacency table are populated as new neighboring routers are discovered, using ARP. This is referred to as gleaning the next-hop hardware address. Creating an adjacency table eliminates the latency of ARP lookups for next-hop information when data is actually routed/switched.

(Reference: http://www.cisco.com/en/US/docs/ios/12_1/switch/configuration/guide/xcdcef.html)
CEF Configuration

CEF is enabled by default on all Catalyst multilayer switches that support CEF. CEF cannot even be disabled on Catalyst 3550, 4500 and 6500 switches. To manually enable CEF:

Switch(config)# ip cef

To disable CEF on a specific interface:

Switch(config)# interface fa0/24
Switch(config-if)# no ip route-cache cef
To view the CEF Forwarding Information Base (FIB) table:

Switch# show ip cef
Prefix               Next Hop         Interface
172.16.1.0/24        10.5.1.1         Vlan100
172.16.2.0/24        10.5.1.2         Vlan100
172.16.0.0/16        10.5.1.2         Vlan100
0.0.0.0/0            10.1.1.1         Vlan42
Note that the FIB contains the following information:
• The destination prefix (and mask)
• The next-hop address
• The interface the next-hop device exists off of

The most specific routes are placed at the top of the FIB.

To view the CEF Adjacency table:

Switch# show adjacency detail
Protocol  Interface      Address
IP        Vlan100        10.5.1.1(6)
                         0 packets, 0 bytes
                         0001234567891112abcdef120800
                         ARP  01:42:69
IP        Vlan100        10.5.1.2(6)
                         0 packets, 0 bytes
                         000C765412421112abcdef120800
                         ARP  01:42:69
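Because the FIB is consulted using a longest-prefix match, a packet destined to 172.16.1.50 would use the /24 entry above rather than the /16 entry. This can be checked per destination; the output below is only an approximate sketch, not verbatim IOS output:

Switch# show ip cef 172.16.1.50
172.16.1.0/24, version 12, epoch 0
  nexthop 10.5.1.1 Vlan100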
Multilayer Switching vs. Router on a Stick

The configuration of router-on-a-stick was demonstrated earlier in this section. Unfortunately, there are inherent disadvantages to router-on-a-stick:
• There may be insufficient bandwidth for each VLAN, as all routed traffic will need to share the same router interface.
• There will be an increased load on the router processor, to support the ISL or 802.1Q encapsulation taking place.

A more efficient (though often more expensive) alternative is to use a multilayer switch.
Configuration of inter-VLAN routing on a multilayer switch is simple. First, create the required VLANs:

Switch(config)# vlan 101
Switch(config-vlan)# name VLAN101
Switch(config)# vlan 102
Switch(config-vlan)# name VLAN102
Then, routing must be globally enabled on the multilayer switch: Switch(config)# ip routing
Next, each VLAN SVI is assigned an IP address:

Switch(config)# interface vlan 101
Switch(config-if)# ip address 192.168.1.1 255.255.0.0
Switch(config-if)# no shut

Switch(config)# interface vlan 102
Switch(config-if)# ip address 10.1.1.1 255.255.0.0
Switch(config-if)# no shut
These IP addresses will serve as the default gateways for the clients on each VLAN. By adding an IP address to a VLAN, those networks will be added to the routing table as directly connected routes, allowing routing to occur.
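As a rough illustration (not verbatim IOS output), the switch's routing table would then contain connected routes similar to the following, which is all that is needed for the two VLANs to be routed:

Switch# show ip route
C    192.168.0.0/16 is directly connected, Vlan101
C    10.1.0.0/16 is directly connected, Vlan102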
Fallback Bridging

The Catalyst 3550 only supports IP when using CEF multilayer switching. If other protocols (IPX, AppleTalk, SNA) need to be routed between VLANs, fallback bridging can be used.

To configure fallback bridging, a bridge-group must first be created. Then specific VLANs can be assigned to that bridge-group. A maximum of 31 bridge-groups can be created.

Switch(config)# bridge 1 protocol vlan-bridge

Switch(config)# interface vlan 100
Switch(config-if)# bridge-group 1
Switch(config)# interface vlan 101
Switch(config-if)# bridge-group 1
The first command creates the bridge-group. The subsequent commands place VLANs 100 and 101 in bridge-group 1. If protocols other than IP utilize these VLANs, they will be transparently bridged across the VLANs. To view information about all configured bridge groups:

Switch# show bridge group
Section 11 - SPAN

Monitoring Traffic

Various technologies and packet sniffers exist to monitor traffic on a network. Catalyst switches support a feature called Switched Port Analyzer (SPAN) to simplify this process. SPAN works by copying or mirroring the traffic from one or more source ports to a destination port. Because the traffic is only copied, SPAN will never affect any of the traffic on the source port(s). A packet sniffer or similar device can be connected to this “destination” port, capturing traffic without interfering with the actual data.

A SPAN source can consist of:
• One or more access switchports (Local SPAN)
• One or more routed interfaces
• An EtherChannel
• A trunk port
• An entire VLAN (VSPAN)

SPAN can mirror data coming inbound or outbound on a source interface, or both.

A SPAN destination can consist of only a single switchport or routed interface. Once an interface is identified as a SPAN destination, it is dedicated to that purpose. No user traffic will be sent down that link. If you configure a SPAN destination as a trunk port, it will be able to capture all VLAN-tagged data. A SPAN destination cannot be an EtherChannel.

Under some circumstances, the traffic from the SPAN source can exceed the capacity of the destination interface. For example, if the SPAN source was an entire VLAN, this could very easily exceed the bandwidth capabilities of a single Fast Ethernet interface. In this instance, packets in the destination queue will be dropped to ease the congestion. Always remember that the source port(s)/VLAN are never affected.
Configuring SPAN

The first step in configuring SPAN is to identify a source:

Switch(config)# monitor session 1 source interface fa0/10 rx
Switch(config)# monitor session 1 source interface fa0/11 tx
Switch(config)# monitor session 1 source vlan 100 both
The first command creates a monitor session, and assigns it a number of 1. When we specify a destination interface, we must use the same session number. The rest of the command identifies a source interface of fa0/10, and monitors all received (rx) traffic. The second command adds a second interface to our monitor session 1, this time specifying transmitted (tx) traffic. The third command adds a VLAN to our monitor session 1, and specifies both incoming and outgoing traffic.

If monitoring a source trunk port, we can specify which specific VLANs we wish to mirror:

Switch(config)# monitor session 1 filter vlan 1-5
Next, we must identify our destination port: Switch(config)# monitor session 1 destination interface fa0/15
The above command associates destination interface fa0/15 to monitor session 1. To stop this monitoring session: Switch(config)# no monitor session 1
To view the status of SPAN sessions:

Switch# show monitor
Remote SPAN (RSPAN)
Consider the above example. The previous page described how to configure SPAN if both the source and destination ports were on the same switch. However, it is also possible to utilize SPAN if the source and destination are on different switches, using Remote SPAN (RSPAN). Each switch in the chain must support RSPAN, and the mirrored traffic is sent across a configured RSPAN VLAN.

Configuration on Switch 1 would be:

Switch(config)# vlan 123
Switch(config-vlan)# remote-span
Switch(config)# monitor session 1 source interface fa0/10
Switch(config)# monitor session 1 destination remote vlan 123
Configuration on Switch 2 would be:

Switch(config)# vlan 123
Switch(config-vlan)# remote-span
Configuration on Switch 3 would be:

Switch(config)# vlan 123
Switch(config-vlan)# remote-span
Switch(config)# monitor session 1 source remote vlan 123
Switch(config)# monitor session 1 destination interface fa0/12
On all three switches, we must create the RSPAN VLAN, and apply the remote-span parameter to it. On Switch 1, we configure our SPAN source as normal, but point to the RSPAN VLAN as our destination. On Switch 3, we configure our SPAN destination as normal, but point to the RSPAN VLAN as our source.
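The RSPAN VLAN must also be carried on the trunk links between the switches. As a hedged sketch (gi0/1 is only an example interface name), the trunk between each pair of switches could be configured to allow the RSPAN VLAN as follows:

Switch(config)# interface gi0/1
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan add 123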
________________________________________________
Part IV Advanced Switch Services ________________________________________________
Section 12 - Redundancy and Load Balancing

Importance of Redundancy

Consider the following example:
The users utilize a single gateway to reach the Internet. In this example, the gateway is a multilayer switch; however, a Layer-3 router is just as common. Throughout the rest of this section, the terms router and multilayer switch will be used interchangeably. The gateway represents a single point of failure on this network. If that gateway fails, users will lose access to all resources beyond that gateway. This lack of redundancy may be unacceptable on business-critical systems that require maximum uptime. It is possible to provide multiple gateways for host devices:
However, this requires a solution that is transparent to the end user (or host device). Cisco devices support three protocols that provide this transparent redundancy:
• Hot Standby Router Protocol (HSRP)
• Virtual Router Redundancy Protocol (VRRP)
• Gateway Load Balancing Protocol (GLBP)
Hot Standby Router Protocol (HSRP)

Cisco developed a proprietary protocol named Hot Standby Router Protocol (HSRP) that allows multiple routers or multilayer switches to masquerade as a single gateway. This is accomplished by assigning a virtual IP address to all routers participating in HSRP.

All routers are assigned to a single HSRP group (numbered 0-255). Note however, that most Catalyst switches will support only 16 configured HSRP groups. HSRP routers are elected to specific roles:
• Active Router – the router currently serving as the gateway.
• Standby Router – the backup router to the Active Router.
• Listening Router – all other routers participating in HSRP.

Only one Active and one Standby router are allowed per HSRP group. HSRP routers regularly send Hello packets (by default, every 3 seconds) to ensure all routers are functioning. If the current Active Router fails, the Standby Router is made active, and a new Standby is elected.

The role of an HSRP router is dictated by its priority. The priority can range from 0 – 255, with a default of 100. The router with the highest (a higher value is better) priority is elected the Active Router; the router with the second highest priority becomes the Standby Router. If all priorities are equal, whichever router has the highest IP address on its HSRP interface is elected the Active Router.
In the above example, Switch 2 would become the Active HSRP router, as it has the highest priority. Switch 1 would become the Standby router.
HSRP States

A router or multilayer switch configured for HSRP will progress through several states before settling into a role:
• Disabled – the interface is not configured for HSRP, or is administratively shut down.
• Init – this is the starting state when an interface is first brought up.
• Learn – the router is waiting to hear hellos from the Active Router, to learn the configured Virtual IP address.
• Listen – the router has learned the Virtual IP address, but was not elected the Active or Standby Router.
• Speak – the router is currently participating in an Active Router election, and is sending Hello packets.
• Standby – the router is acting as a backup to the Active Router. Standby routers monitor and send hellos to the Active Router.
• Active – the router is currently accepting and forwarding user traffic, using the Virtual IP address. The Active Router actively exchanges hellos with the Standby Router.

By default, HSRP Hello packets are sent every 3 seconds. Routers in a Listen state will only listen for, and not periodically send, Hello packets. Once HSRP is fully converged, only the Active and Standby Routers send hellos. Routers will also send out hellos while Speaking, during the election of the Active and Standby routers. Thus, the three states that send out Hello packets are as follows:
• Speak
• Standby
• Active
HSRP Configuration

All HSRP configuration is completed on the interface that is accepting traffic on behalf of host devices. To configure the priority of a router:

Switch(config)# interface fa0/10
Switch(config-if)# standby 1 priority 150
The standby 1 command specifies the HSRP group that interface belongs to. The priority 150 parameter changes the actual priority value. Remember that a higher value is preferred, and that the default priority is 100. However, if a new router is added to the HSRP group, and it has the best priority, it will not automatically assume the role of the Active router. In fact, the first router to be powered on will become the Active router, even if it has the lowest priority! To force the highest-priority router to assume the role of Active router: Switch(config-if)# standby 1 preempt delay 10
The standby 1 preempt command allows this switch to force itself as the Active router, if it has the highest priority. The optional delay 10 parameter instructs the router to wait 10 seconds before assuming an Active status. HSRP routers send out Hello packets to verify each other’s status: Switch(config-if)# standby 1 timers 4 12
The standby 1 timers command configures the two HSRP timers. The first setting 4 sets the Hello timer to 4 seconds. The second setting 12 sets the holddown timer to 12 seconds. Remember, by default, Hello packets are sent every 3 seconds. Only the Standby router listens to Hello packets from the Active router. If the Standby router does not hear any Hellos from the Active router for the holddown period, then it will assume the Active router is down. In general, the holddown timer should be three times the Hello timer (the default holddown time is 10 seconds). HSRP Hello packets are sent to the multicast address 224.0.0.2 over UDP port 1985. (Reference: http://www.cisco.com/en/US/docs/internetworking/case/studies/cs009.html)
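For faster failover, most IOS versions also accept sub-second timers using the msec keyword. A hedged example follows; the values are chosen arbitrarily for illustration:

Switch(config-if)# standby 1 timers msec 200 msec 750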
HSRP Configuration (continued) Each router in the HSRP group retains the address configured on its local interface. However, the HSRP group itself is assigned a virtual IP address. Host devices use this virtual address as their default gateway. To configure the virtual HSRP IP address: Switch(config)# int fa0/10 Switch(config-if)# standby 1 ip 192.168.1.5
Multiple virtual HSRP IP addresses can be used: Switch(config-if)# standby 1 ip 192.168.1.5 Switch(config-if)# standby 1 ip 192.168.1.6 secondary
The HSRP group is also assigned a virtual MAC address. By default, a reserved MAC address is used: 0000.0c07.acxx …where xx is the HSRP group number in hexadecimal. For example, if the HSRP Group number was 8, the resulting virtual MAC address would be: 0000.0c07.ac08 The HSRP virtual MAC address can be manually specified: Switch(config-if)# standby 1 mac-address 0000.00ab.12ef
Authentication can be configured for HSRP. All HSRP routers in the group must be configured with the same authentication string. To specify a cleartext authentication string: Switch(config-if)# standby 1 authentication CISCO
To specify an MD5-hashed authentication string: Switch(config-if)# standby 1 authentication md5 key-string 7 CISCO
HSRP Tracking
In the above example, Switch 2 becomes the Active Router, and Switch 1 becomes the Standby router. Both Switch 1 and Switch 2 send out Hello packets with updates on their status. On Switch 2, if port Fa0/12 goes down, the switch is still able to send Hello packets to Switch 1 via Fa0/10. Thus, Switch 1 is unaware that Switch 2 is no longer capable of forwarding traffic, as Switch 2 still appears to be active (sending hellos). To combat this, HSRP can track interfaces. If the tracked interface fails, the router’s (or multilayer switch’s) priority is decreased by a specific value. Observe the following tracking configuration on Switch 2: Switch2(config-if)# standby 1 track fa0/12 50
The above command sets tracking for the fa0/12 interface, and will decrease the priority of the switch by 50 if the interface fails. The objective is to decrement the priority enough to allow another router to assume an Active status. This requires conscientious planning by the network administrator. In the above example, Switch 2’s priority would be decremented to 25 if its fa0/12 interface failed, which is less than Switch 1’s priority of 50. Tracking of interfaces will not be successful unless the other router is configured to preempt the current Active Router. Switch1(config-if)# standby 1 preempt
If the above command was not present, Switch 1 would never assume an Active state, even if Switch 2’s priority was decreased to 1.
Practical HSRP Example
Switch1(config)# int fa0/10
Switch1(config-if)# no switchport
Switch1(config-if)# ip address 192.168.1.5 255.255.255.0
Switch1(config-if)# standby 1 priority 50
Switch1(config-if)# standby 1 preempt
Switch1(config-if)# standby 1 ip 192.168.1.1
Switch1(config-if)# standby 1 authentication CISCO

Switch2(config)# int fa0/10
Switch2(config-if)# no switchport
Switch2(config-if)# ip address 192.168.1.6 255.255.255.0
Switch2(config-if)# standby 1 priority 75
Switch2(config-if)# standby 1 preempt
Switch2(config-if)# standby 1 ip 192.168.1.1
Switch2(config-if)# standby 1 authentication CISCO
Switch2(config-if)# standby 1 track fa0/12 50
The no switchport command specifies that interface fa0/10 is a Layer-3 (routed) port. Each switch is assigned a unique IP address on its local interface; however, both are given a single HSRP virtual IP address. Host devices will use this virtual address as their default gateway.

Because of its higher priority, Switch 2 will become the Active Router. Its priority will decrement by 50 if interface fa0/12 should fail. Because Switch 1 is configured with the preempt command, it will take over as the Active Router if this should occur.

To view the status of a configured HSRP group:

Switch2# show standby
Fastethernet0/10 - Group 1
  State is Active
    1 state changes, last state change 00:02:19
  Virtual IP address is 192.168.1.1
  Active virtual MAC address is 0000.0c07.ac01
    Local virtual MAC address is 0000.0c07.ac01 (bia)
  Hello time 3 sec, hold time 10 sec
    Next hello sent in 1.412 secs
  Preemption enabled, min delay 50 sec, sync delay 40 sec
  Active router is local
  Standby router is 192.168.1.5, priority 50 (expires in 6.158 sec)
  Priority 75 (configured 75)
  Tracking 1 objects, 1 up
Virtual Router Redundancy Protocol (VRRP)

The industry-standard equivalent of HSRP is the Virtual Router Redundancy Protocol (VRRP), originally defined in RFC 2338 (since updated by RFC 3768). It is nearly identical to HSRP, with some notable exceptions:
• The router with the highest priority becomes the Master Router.
• All other routers become Backup Routers.
• By default, the virtual MAC address is 0000.5e00.01xx, where xx is the hexadecimal group number.
• Hellos are sent every 1 second, by default.
• VRRP Hellos are sent to multicast address 224.0.0.18.
• VRRP will preempt by default.
• VRRP cannot track interfaces.

Configuration of VRRP is also very similar to HSRP:

Switch(config)# int fa0/10
Switch(config-if)# no switchport
Switch(config-if)# ip address 192.168.1.6 255.255.255.0
Switch(config-if)# vrrp 1 priority 75
Switch(config-if)# vrrp 1 authentication CISCO
Switch(config-if)# vrrp 1 ip 192.168.1.1
As with HSRP, the default VRRP priority is 100, and a higher priority is preferred. Unlike HSRP, preemption is enabled by default. To manually disable preempt: Switch(config-if)# no vrrp 1 preempt
To view VRRP status:

Switch# show vrrp
Fastethernet0/10 - Group 1
  State is Master
  Virtual IP address is 192.168.1.1
  Virtual MAC address is 0000.5e00.0101
  Advertisement interval is 3.000 sec
  Preemption is enabled
    min delay is 0.000 sec
  Priority 75
  Master Router is 192.168.1.6 (local), priority is 75
  Master Advertisement interval is 3.000 sec
  Master Down interval is 9.711 sec

(Reference: http://www.cisco.com/en/US/docs/ios/12_0st/12_0st18/feature/guide/st_vrrpx.html)
HSRP’s and VRRP’s “Pseudo” Load-Balancing

While HSRP and VRRP do provide redundant gateways for fault tolerance, they do not provide load-balancing between those gateways. Cisco suggests a workaround that only approximates load balancing: two separate HSRP or VRRP groups can be configured on each router:

Switch1(config)# int fa0/10
Switch1(config-if)# no switchport
Switch1(config-if)# ip address 192.168.1.5 255.255.255.0
Switch2(config)# int fa0/10
Switch2(config-if)# no switchport
Switch2(config-if)# ip address 192.168.1.6 255.255.255.0

Switch1(config-if)# standby 1 priority 100
Switch1(config-if)# standby 1 preempt
Switch1(config-if)# standby 1 ip 192.168.1.1

Switch2(config-if)# standby 1 priority 50
Switch2(config-if)# standby 1 preempt
Switch2(config-if)# standby 1 ip 192.168.1.1

Switch1(config-if)# standby 2 priority 50
Switch1(config-if)# standby 2 preempt
Switch1(config-if)# standby 2 ip 192.168.1.2

Switch2(config-if)# standby 2 priority 100
Switch2(config-if)# standby 2 preempt
Switch2(config-if)# standby 2 ip 192.168.1.2
In the above example, each HSRP group (1 and 2) has been assigned a unique virtual IP address. By adjusting the priority, each multilayer switch will become the Active router for one HSRP group, and the Standby router for the other group.

Switch1# show standby brief
Interface   Grp  Prio P State    Active addr      Standby addr     Group addr
Fa0/10      1    100  P Active   local            192.168.1.6      192.168.1.1
Fa0/10      2    50   P Standby  192.168.1.6      local            192.168.1.2

Switch2# show standby brief
Interface   Grp  Prio P State    Active addr      Standby addr     Group addr
Fa0/10      1    50   P Standby  192.168.1.5      local            192.168.1.1
Fa0/10      2    100  P Active   local            192.168.1.5      192.168.1.2
To achieve any load distribution with this setup, half of the host devices would need to point to the first virtual address (192.168.1.1), and the remaining half to the other virtual address (192.168.1.2). That’s simple and dynamic, right? Nothing like having to manually configure half of the clients to use one gateway address and half of them to use the other, or set up two separate DHCP scopes. But hey – it’s not a limitation, it’s a feature!
Gateway Load Balancing Protocol (GLBP)

To overcome these shortcomings in HSRP and VRRP, Cisco developed the (again proprietary) Gateway Load Balancing Protocol (GLBP). Routers or multilayer switches are added to a GLBP group - but unlike HSRP/VRRP, all routers are Active. Thus, both redundancy and load-balancing are achieved. GLBP utilizes multicast address 224.0.0.102.

As with HSRP and VRRP, GLBP routers are placed in a group (1-255). Routers are assigned a priority (default is 100) - the router with the highest priority becomes the Active Virtual Gateway (AVG). If priorities are equal, the router with the highest IP address on its interface will become the AVG.
Routers in the GLBP group are assigned a single virtual IP address. Host devices will use this virtual address as their default gateway, and will broadcast an ARP request to determine the MAC address for that virtual IP. The router elected as the AVG listens for these ARP requests.

In addition to the AVG, up to three other routers can be elected as Active Virtual Forwarders (AVFs). The AVG assigns each AVF (including itself) a virtual MAC address, for a maximum total of 4 virtual MAC addresses. When a client performs an ARP request, the AVG will provide the client with one of the virtual MAC addresses. In this way, load balancing can be achieved.

GLBP is not limited to four routers. Any router not elected to be an AVF will become a Secondary Virtual Forwarder (SVF), and will wait in standby until an AVF fails.

(Reference: http://www.cisco.com/en/US/docs/ios/12_2t/12_2t15/feature/guide/ft_glbp.html)
Gateway Load Balancing Protocol (GLBP) (continued)

What determines whether a router becomes an AVF or SVF? Each router is assigned a weight, and the default weight is 100. Weight can be statically configured, or dynamically decided by the router. When dynamically decided, a router’s weight will drop if a tracked interface fails. Weight thresholds can be configured, forcing a router to relinquish its AVF status if it falls below the minimum threshold.

GLBP supports three load-balancing methods:
• Round Robin – Traffic is distributed equally across all routers. The first host request receives Router 1’s virtual MAC address, the second request receives Router 2’s virtual MAC address, etc. This is the default load-balancing mechanism.
• Weighted – Traffic is distributed to routers in proportion to their configured weight. Routers with a higher weight will be utilized more frequently.
• Host-Dependent – A host device will always receive the same virtual MAC address when it performs an ARP request.

To configure a GLBP router’s priority to 150, and enable preempt (preemption is not enabled by default):

Switch(config)# int fa0/10
Switch(config-if)# glbp 1 priority 150
Switch(config-if)# glbp 1 preempt
To track an interface, to reduce a router’s weight if that interface fails:

Switch(config)# track 10 interface fa0/12 line-protocol
Switch(config-if)# glbp 1 weighting track 10 decrement 50
The first command creates a track object 10, which is tracking interface fa0/12. The second command assigns that track object to GLBP group 1, and will decrease this router’s weight by 50 if interface fa0/12 fails. Another router cannot become an AVF unless it is configured to preempt.

To specify the Virtual IP, and the load-balancing method:

Switch(config-if)# glbp 1 ip 192.168.1.2
Switch(config-if)# glbp 1 load-balancing weighted
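To verify which router is the AVG and which virtual forwarders are currently active, the following commands can be used. Output is omitted here, since it varies by platform and IOS version:

Switch# show glbp brief
Switch# show glbp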
Server Load Balancing (SLB)

HSRP, VRRP, and GLBP provide gateway redundancy for clients. Cisco routers and switches also support a basic clustering service: Server Load Balancing (SLB), which allows a router to apply a virtual IP address to a group of servers. All of the servers should be configured identically (with the exception of their IP addresses), and provide the same function. Having multiple servers allows for both redundancy and load-balancing.

Clients point to a single virtual IP address to access the server farm, and are unaware of which server they are truly connecting to. If a specific server fails, or is brought down for repair or maintenance, the server farm as a whole can stay operational.

The following diagram demonstrates SLB:
Assume the servers are Web servers. To access the Web resource, users will connect to the Virtual IP address of 192.168.1.10. The multilayer switch intercepts this packet, and redirects it to one of the physical servers inside the server farm. In essence, the multilayer switch is functioning as a Virtual Server.
SLB Load Balancing

Two load-balancing methods exist for SLB:
• Weighted Round Robin – Traffic is forwarded to the physical servers in a round-robin fashion. However, servers with a higher weight are assigned more traffic. This is the default method.
• Weighted Least Connections – Traffic is assigned to the server with the least amount of current connections.

SLB Configuration

Two separate elements need to be configured with SLB: the Server Farm, and the Virtual Server. To configure the Server Farm:

Switch(config)# ip slb serverfarm MYFARM
Switch(config-slb-sfarm)# predictor leastconns

Switch(config-slb-sfarm)# real 192.168.1.20
Switch(config-slb-real)# weight 150
Switch(config-slb-real)# inservice

Switch(config-slb-sfarm)# real 192.168.1.21
Switch(config-slb-real)# weight 100
Switch(config-slb-real)# inservice

Switch(config-slb-sfarm)# real 192.168.1.22
Switch(config-slb-real)# weight 75
Switch(config-slb-real)# inservice
The ip slb serverfarm command sets the server farm name, and enters SLB Server Farm configuration mode. The predictor command sets the load-balancing method. The real command identifies the IP address of a physical server in the farm, and enters SLB Real Server configuration mode. The weight command assigns the load-balancing weight for that server. The inservice command activates the real server.

To deactivate a specific server:

Switch(config-slb-sfarm)# real 192.168.1.22
Switch(config-slb-real)# no inservice

(Reference: http://www.cisco.com/en/US/docs/ios/12_1/12_1e8/feature/guide/iosslb8e.html)
SLB Configuration (continued)

To configure the Virtual Server:

Switch(config)# ip slb vserver VSERVERNAME
Switch(config-slb-vserver)# serverfarm MYFARM
Switch(config-slb-vserver)# virtual 192.168.1.10
Switch(config-slb-vserver)# client 192.168.0.0 0.0.255.255
Switch(config-slb-vserver)# inservice
The ip slb vserver command sets the Virtual Server name, and enters SLB Virtual Server configuration mode. The serverfarm command associates the server farm with this Virtual Server. The virtual command assigns the virtual IP address for the server farm. The client command specifies which clients can access the server farm; it utilizes a wildcard mask, like an access-list. In the above example, client 192.168.0.0 0.0.255.255 would allow all clients in the 192.168.0.0/16 network. The inservice command activates the Virtual Server.

To deactivate a Virtual Server:

Switch(config-slb-vserver)# no inservice
To troubleshoot SLB:

Switch# show ip slb serverfarms
Switch# show ip slb vservers
Switch# show ip slb reals
Switch Chassis Redundancy

Modular Catalyst switches support the installation of multiple Supervisor Engines for redundancy. This redundancy can be configured in one of three modes:
• Route Processor Redundancy (RPR) – The redundant Supervisor engine is not fully initialized. If the primary Supervisor fails, the standby Supervisor must reinitialize all other switch modules in the chassis before functionality is restored. This process can take several minutes.
• Route Processor Redundancy Plus (RPR+) – The redundant Supervisor engine is fully initialized, but performs no Layer-2 or Layer-3 functions. If the primary Supervisor fails, the standby Supervisor will activate Layer-2 and Layer-3 functions, without having to reinitialize all other switch modules in the chassis. This process usually takes less than a minute.
• Stateful Switchover (SSO) – The redundant Supervisor engine is fully initialized, and synchronizes all Layer-2 and Layer-3 functions with the primary Supervisor. If the primary Supervisor fails, failover can occur immediately to the standby Supervisor.

To enable redundancy on the Catalyst switch, and to choose the appropriate redundancy mode:

Switch(config)# redundancy
Switch(config-red)# mode rpr
Switch(config-red)# mode rpr-plus
Switch(config-red)# mode sso
The redundancy commands would need to be enabled on both Supervisor engines. RPR+ mode requires that both Supervisor engines utilize the exact same version of the Cisco IOS.
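To check which redundancy mode is in effect and the state of each Supervisor, commands like the following can be used. This is a verification sketch only; the exact output differs by platform:

Switch# show redundancy states
Switch# show module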
(Reference: http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper0900aecd801c5cd7.html. http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper09186a0080088874.html)
Section 13 - Multicast

Types of “packets”

Three types of packets can exist on an IPv4 network:

Unicast – A packet sent from one host to only one other host. A hub will forward a unicast out all ports. If a switch has a table entry for the unicast’s MAC address, it will forward it out only the appropriate port.

Broadcast – A packet sent from one host to all hosts on the IP subnet. Both hubs and switches will forward a broadcast out all ports. By definition, a router will not forward a broadcast from one segment to another.

Multicast – A packet sent from one host to a specific group of hosts. Switches, by default, will forward a multicast out all ports. A router, by default, will not forward a multicast from one segment to another.
Multicast Concepts Remember, a multicast is a packet sent from one computer to a group of hosts. A host must join a multicast group in order to accept a multicast. Joining a multicast group can be accomplished statically or dynamically. Multicast traffic is generally sent from a multicast server, to multicast clients. Very rarely is a multicast packet sent back from a client to the server. Multicasts are utilized in a wide range of applications, most notably voice or video systems that have one source “serving” out data to a very specific group of clients. The key to configuring multicast is to ensure only the hosts that require the multicast traffic actually receive it.
Multicast Addressing

IPv4 addresses are separated into several “classes”:

Class A: 1.0.0.0 – 127.255.255.255
Class B: 128.0.0.0 – 191.255.255.255
Class C: 192.0.0.0 – 223.255.255.255
Class D: 224.0.0.0 – 239.255.255.255

Class D addresses have been reserved for multicast. Within the Class D address space, several ranges have been reserved for specific purposes:
• 224.0.0.0 – 224.0.0.255 – Reserved for routing and other network protocols, such as OSPF, RIP, VRRP, etc.
• 224.0.1.0 – 238.255.255.255 – Reserved for “public” use, and can be used publicly on the Internet. Many addresses in this range have been reserved for specific applications.
• 239.0.0.0 – 239.255.255.255 – Reserved for “private” use, and cannot be routed on the Internet.

The following outlines several of the most common multicast addresses reserved for routing protocols:
• 224.0.0.1 – all hosts on this subnet
• 224.0.0.2 – all routers on this subnet
• 224.0.0.5 – all OSPF routers
• 224.0.0.6 – all OSPF Designated Routers
• 224.0.0.9 – all RIPv2 routers
• 224.0.0.10 – all EIGRP routers
• 224.0.0.12 – DHCP traffic
• 224.0.0.13 – all PIM routers
• 224.0.0.19-21 – IS-IS routers
• 224.0.0.22 – IGMP traffic
• 224.0.1.39 – Cisco RP Announce
• 224.0.1.40 – Cisco RP Discovery
Multicast MAC Addresses

Unfortunately, there is no ARP-equivalent protocol for multicast addressing. Instead, a reserved range of MAC addresses was created for multicast IPs. All multicast MAC addresses begin with:

0100.5e

Recall that the first six hex digits of a MAC address identify the vendor code, and the last six digits identify the specific host address. To complete the MAC address, the last 23 bits of the multicast IP address are used.

For example, consider the following multicast IP address and its binary equivalent:

224.65.130.195 = 11100000.01000001.10000010.11000011

Remember that a MAC address is 48 bits long, and that a multicast MAC must begin with 0100.5e. In binary, that looks like:

00000001.00000000.01011110.0

Add the last 23 bits of the multicast IP address to the MAC, and we get:

00000001.00000000.01011110.01000001.10000010.11000011

That is exactly 48 bits long. Converting that to hex format, our full MAC address would be:

0100.5e41.82c3

How did I convert this to hex? Remember that hexadecimal is Base 16 mathematics. Thus, to represent a single hexadecimal digit in binary, we need 4 bits (2^4 = 16). So, we can break down the above binary MAC address into groups of four bits:

Binary:   0000 0001 0000 0000 0101 1110 0100 0001 1000 0010 1100 0011
Decimal:     0    1    0    0    5   14    4    1    8    2   12    3
Hex:         0    1    0    0    5    e    4    1    8    2    c    3

Hence the MAC address of 0100.5e41.82c3.
Multicast MAC Addresses (continued)

Ready for some more math, you binary fiends? Calculate what the multicast MAC address would be for the following IP addresses:

225.2.100.15   = 11100001.00000010.01100100.00001111
231.130.100.15 = 11100111.10000010.01100100.00001111

Remember that all multicast MACs begin with:

0100.5e = 00000001.00000000.01011110.0

So, add the last 23 bits of each of the above IP addresses to the MAC address, and we get:

225.2.100.15   = 00000001.00000000.01011110.00000010.01100100.00001111
231.130.100.15 = 00000001.00000000.01011110.00000010.01100100.00001111

In hex, that would be:

225.2.100.15   = 0100.5e02.640f
231.130.100.15 = 0100.5e02.640f
Wait a second…. That's the exact same multicast MAC address, right? Double-checking our math, we see that it's perfect. Believe it or not, each multicast MAC address can match 32 multicast IP addresses, because we're only taking the last 23 bits of our IP address.

We already know that all multicast IP addresses must begin with 1110. Looking at the 225.2.100.15 address in binary:

11100001.00000010.01100100.00001111

That leaves 5 bits in between our starting 1110 and the last 23 bits of our IP. Those 5 bits could be anything, and the multicast MAC address would be the same. Because 2^5 = 32, there are 32 multicast IPs per multicast MAC.

According to the powers that be, the likelihood of two multicast systems utilizing the same multicast MAC is rare. The worst outcome would be that hosts joined to either multicast system would receive multicasts from both.
Multicasts and Routing
A router, by default, will drop multicast traffic, unless a multicast routing protocol is utilized. Multicast routing protocols ensure that data sent from a multicast source are received by (and only by) its corresponding multicast clients. Several multicast routing protocols exist, including:

• Protocol Independent Multicast (PIM)
• Multicast OSPF (MOSPF)
• Distance Vector Multicast Routing Protocol (DVMRP)
• Core-Based Trees (CBT)
Multicast routing must be enabled globally on a Cisco router or switch, before it can be used: Switch(config)# ip multicast-routing
Multicast Path Forwarding Normally, routers build routing tables that contain destination addresses, and route packets towards that destination. With multicast, routers are concerned with routing packets away from the multicast source. This concept is called Reverse Path Forwarding (RPF). Multicast routing protocols build tables that contain several elements: • The multicast source, and its associated multicast address (labeled as “S,G”, or “Source,Group”) • Upstream interfaces that point towards the source • Downstream interfaces that point away from the source towards multicast hosts.
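The RPF check can be verified directly on a multicast router. As a quick sketch (the source address below is only an example, and output fields vary by platform and IOS version), the following command displays the interface and upstream neighbor the router would use to reach a given multicast source:

Switch# show ip rpf 10.1.1.1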
Multicast Path Forwarding Example
A router interface will not be designated as a downstream interface unless multicast hosts actually exist downstream. In the above example, no multicast hosts exist downstream of Router 5. In fact, because no multicast hosts exist downstream of Router 1 towards Router 2, no multicast traffic for this multicast group will be forwarded down that path. Thus, Router 1's interface connecting to Router 2 will not become a downstream port.

This pruning allows for efficient use of bandwidth. No unnecessary traffic is sent down a particular link. This "map" of which segments contain multicast hosts is called the multicast tree. The multicast tree is dynamically updated as hosts join or leave the multicast group (otherwise known as pruning the branches).

By designating upstream and downstream interfaces, the multicast tree remains loop-free. No multicast traffic should ever be sent back upstream towards the multicast source.
Internet Group Management Protocol (IGMP)
Remember, multicast works by having a source send data to a specific set of clients that belong to the same multicast group. The multicast group is configured (or assigned) a specific multicast address. The multicast clients need a mechanism to join multicast groups.

Internet Group Management Protocol (IGMP) allows clients to send "requests" to multicast-enabled routers to join a multicast group. IGMP only handles group membership. To actually route multicast data to a client, a multicast routing protocol is required, such as PIM or DVMRP.

Three versions of IGMP exist: IGMPv1, IGMPv2, and IGMPv3.

IGMPv1 routers send out a "query" every 60 seconds to determine if any hosts need access to a multicast server. This query is sent out to the 224.0.0.1 address (i.e., all hosts on the subnet). Interested hosts must reply with a Membership Report stating what multicast group they wish to join. Unfortunately, IGMPv1 does not allow hosts to dynamically "leave" a group. Instead, if no Membership Reports are received after 3 times the query interval, the router will flush the hosts out of its IGMP table.

IGMPv2 adds additional functionality. Queries can be sent out either as General Queries (224.0.0.1) or Group-Specific Queries (only sent to specific group members). Additionally, hosts can send a Leave Group message to IGMPv2 routers, to immediately be flushed out of the IGMP table. Thus, IGMPv2 allows the multicast tree to be updated more efficiently.

All versions of IGMP elect one router to be the Designated Querier for that subnet. The router with the lowest IP address becomes Designated. IGMPv1 is not compatible with IGMPv2. If any IGMPv1 routers exist on the network, all routers must operate in IGMPv1 mode. Cisco IOS version 11.1 and later support IGMPv2 by default.

IGMPv3 enhances v2 by supporting source-based filtering of multicast groups. Essentially, when a host responds to an IGMP query with a Membership Report, it can specifically identify which sources within a multicast group to join (or even not join).
IGMP Example
In the above example, assume the router is using IGMPv2. Interface fa0/1 points towards the multicast source, and thus becomes the upstream interface.

Initially, the router will send General Queries out all non-upstream interfaces. Any multicast hosts will respond with a Membership Report stating what multicast group they wish to join. Interfaces fa0/2 and fa0/3 will become downstream interfaces, as they contain multicast hosts. No multicast traffic will be sent out fa0/4.

If all multicast hosts leave the multicast group off of interface fa0/2, it will be removed from the multicast tree. If a multicast host is ever added off of interface fa0/4, it will become a downstream interface.
IGMP Configuration No configuration is required to enable IGMP, except to enable IP multicast routing (ip multicast-routing). We can change the version of IGMP running on a particular interface (by default, it is Version 2): Switch(config-if)# ip igmp version 1
To view which multicast groups the router is aware of: Switch# show ip igmp groups
We can join a router interface to a specific multicast group (forcing the router to respond to ICMP requests to this multicast group): Switch(config-if)# ip igmp join-group 226.1.5.10
We can also simply force a router interface to always forward the traffic of a specific multicast group out an interface: Switch(config-if)# ip igmp static-group 226.1.5.10
We can also restrict which multicast groups a host, off of a particular interface, can join: Switch(config)# access-list 10 permit 226.1.5.10 Switch(config)# access-list 10 permit 226.1.5.11 Switch(config-if)# ip igmp access-group 10
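To verify how IGMP is behaving on an interface - including the IGMP version in use, the query interval, and the elected querier - the following command can be used (the interface is only an example; output fields vary by IOS version):

Switch# show ip igmp interface fa0/10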
Protocol Independent Multicast (PIM)
While IGMP concerns itself with allowing multicast hosts to join multicast groups, Protocol Independent Multicast (PIM) is a multicast routing protocol that is concerned with getting the multicast data to its destination (or, more accurately, taking the data away from the multicast source). PIM is also responsible for creating the multicast tree, and "pruning" the tree so that no traffic is sent unnecessarily down a link.

PIM can operate in three separate modes:
• PIM Dense Mode (PIM-DM)
• PIM Sparse Mode (PIM-SM)
• PIM Sparse-Dense Mode (PIM-SM-DM, Cisco proprietary)

The key difference between PIM Dense and Sparse Mode is how the multicast tree is created.

With PIM Dense Mode, all networks are flooded with the multicast traffic from the source. Afterwards, networks that don't need the multicast are pruned off of the tree. The network that contains the multicast source becomes the "root" of the multicast network.

With PIM Sparse Mode, no "flooding" occurs. Only networks that contain "requesting" multicast hosts are added to the multicast tree. A centralized PIM router, called the Rendezvous Point (RP), is elected to be the "root" router of the multicast tree. PIM routers operating in Sparse Mode build their tree towards the RP, instead of towards the multicast source. The RP allows multiple multicast "sources" to utilize the same multicast tree.

PIM Sparse-Dense Mode allows either Sparse or Dense Mode to be used, depending on the multicast group. Any group that points to an RP utilizes Sparse Mode. PIM Sparse-Dense Mode is Cisco proprietary.

Consider these key points:
• Dense Mode should be used when a large number of multicast hosts exist across the internetwork. The "flooding" process allows for a quick creation of the multicast tree, at the expense of wasting bandwidth.
• Sparse Mode should be used when only a limited number of multicast hosts exist. Because hosts must explicitly join before that network segment is added to the multicast tree, bandwidth is utilized more efficiently.
PIM Dense Mode Example
[Figure: a multicast source attached to Router 1, with Routers 2 through 7 branching off downstream. Some branches end in segments with multicast hosts; others have no multicast hosts.]
Consider the above example. When PIM routers operate in Dense Mode, all segments of the multicast tree are flooded initially. Eventually, “branches” that do not require the multicast traffic are pruned off:
[Figure: the same topology after pruning; only the branches leading to segments with multicast hosts remain on the multicast tree.]
PIM Sparse Mode Example
When PIM routers operate in Sparse Mode, multicast traffic is not initially flooded throughout the entire multicast tree. Instead, a Rendezvous Point (RP) is elected or designated, and all multicast sources and clients must explicitly register with the RP. This provides a centralized method of directing the multicast traffic of multiple multicast sources.
Configuring Manual PIMv1 Two versions of PIM exist (PIMv1 and PIMv2), though both are very similar. PIM must be enabled on each participating interface in the multicast tree. To enable PIM and specify its mode on an interface: Switch(config)# interface fa0/10 Switch(config-if)# no switchport Switch(config-if)# ip pim dense-mode Switch(config-if)# ip pim sparse-mode Switch(config-if)# ip pim sparse-dense-mode
When utilizing PIM-SM, we must configure a Rendezvous Point (RP). RPs can be identified manually, or dynamically chosen using a process called auto-RP (Cisco-proprietary). To manually specify an RP on a router:

Switch(config)# ip pim rp-address 192.168.1.1
The above command must be configured on every router in the multicast tree, including the RP itself. To restrict the RP to a specific set of multicast groups: Switch(config)# access-list 10 permit 226.10.10.1 Switch(config)# access-list 10 permit 226.10.10.2 Switch(config)# ip pim rp-address 192.168.1.1 10
The first two commands create an access-list 10 specifying the multicast groups this RP will support. The third command identifies the RP, and applies access-list 10 to the RP.
Configuring Dynamic PIMv1 When using Cisco’s auto-RP, one router is designated as a Mapping Agent. To configure a router as a mapping agent: Switch(config)# ip pim send-rp-discovery scope 10
The 10 parameter in the above command is a TTL (Time to Live) setting, indicating that this router will serve as a mapping agent for up to 10 hops away. Mapping agents listen for candidate RPs over multicast address 224.0.1.39 (Cisco RP Announce). To configure a router as a candidate RP:

Switch(config)# access-list 10 permit 226.10.10.1
Switch(config)# access-list 10 permit 226.10.10.2
Switch(config)# ip pim send-rp-announce fa0/10 scope 4 group-list 10
The first two commands create an access-list 10 specifying the multicast groups this RP will support. The third command identifies this router as a candidate RP for the multicast groups specified in group-list 10. This RP's address will be based on the IP address configured on fa0/10. The scope 4 parameter indicates the maximum number of hops this router will advertise itself for.

The above commands essentially create a "mapping" of specific RPs to specific multicast groups. Once a mapping agent learns of these mappings from candidate RPs, it sends the information to all PIM routers over multicast address 224.0.1.40 (Cisco RP Discovery).
Configuring Dynamic PIMv2
Configuring PIMv2 is very similar to PIMv1, except that PIMv2 is a standards-based protocol. Also, there are terminology differences. Instead of mapping agents, PIMv2 uses Bootstrap Routers (BSRs), which perform the same function. To configure a router as a BSR:

Switch(config)# ip pim bsr-candidate fa0/10
To configure candidate RPs in PIMv2:

Switch(config)# access-list 10 permit 226.10.10.1
Switch(config)# access-list 10 permit 226.10.10.2
Switch(config)# ip pim rp-candidate fa0/10 4 group-list 10
The first two commands create an access-list 10 specifying the multicast groups this RP will support. The third command identifies this router as a candidate RP for the multicast groups specified in group-list 10. This RP’s address will be based on the IP address configured on fa0/10. The 4 parameter indicates the maximum number of hops this router will advertise itself for. With PIMv2, we can create border routers to prevent PIM advertisements (from the BSR or Candidate RPs) from passing a specific point. To configure a router as a PIM border router: Switch(config)# ip pim border
Multicasts and Layer 2 Switches
Up to this point, we've discussed how multicasts interact with routers or multilayer switches. By default, a Layer 2 switch will forward a multicast out all ports, excluding the port it received the multicast on. To eliminate the need to "flood" multicast traffic, two mechanisms have been developed for Layer 2 switches:

• IGMP snooping
• CGMP

IGMP snooping allows a Layer 2 switch to "learn" the multicast MAC address of multicast groups. It does this by eavesdropping on IGMP Membership Reports sent from multicast hosts to PIM routers. The Layer 2 switch then adds a multicast MAC entry in the CAM for the specific port that needs the multicast traffic.

IGMP snooping is enabled by default on the Catalyst 2950 and 3550. If disabled, it can be enabled with the following command:

Switch(config)# ip igmp snooping
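On most Catalyst platforms, IGMP snooping can also be enabled or disabled on a per-VLAN basis (VLAN 5 below is only an example):

Switch(config)# ip igmp snooping vlan 5
Switch(config)# no ip igmp snooping vlan 5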
If a Layer 2 switch does not support IGMP snooping, Cisco Group Membership Protocol (CGMP) can be used. Three guesses as to whether this is Cisco-proprietary or not. Instead of the Layer 2 switch “snooping” the IGMP Membership Reports, CGMP allows the PIM router to actually inform the Layer 2 switch of the multicast MAC address, and the MAC of the host joining the group. The Layer 2 switch can then add this information to the CAM. CGMP must be configured on the PIM router (or multilayer switch). It is disabled by default on all PIM routers. To enable CGMP: Switch(config-if)# ip cgmp
No configuration needs to occur on the Layer 2 switch.
Troubleshooting Multicasting To view IGMP groups and current members: Switch# show ip igmp groups
To view the IGMP snooping status: Switch# show ip igmp snooping
To view PIM “neighbors”: Switch# show ip pim neighbor
To view PIM RPs: Switch# show ip pim rp
To view PIM RP-to-Group mappings: Switch# show ip pim rp mapping
To view the status of PIMv1 Auto-RP: Switch# show ip pim autorp
To view PIMv2 BSRs: Switch# show ip pim bsr-router
We can also debug multicasting protocols: Switch# debug ip igmp Switch# debug ip pim
Viewing the Multicast Table Just like unicast routing protocols (such as OSPF, RIP), multicast routing protocols build a routing table. Again, these tables contain several elements: • The multicast source, and its associated multicast address (labeled as “S,G”, or “Source,Group”) • Upstream interfaces that point towards the source • Downstream interfaces that point away from the source towards multicast hosts. To view the multicast routing table: Switch# show ip mroute
If using PIM in Dense Mode, the output would be similar to the following: IP Multicast Routing Table Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned R - RP-bit set, F - Register flag, T - SPT-bit set Timers: Uptime/Expires Interface state: Interface, Next-Hop, State/Mode (10.1.1.1/24, 239.5.222.1), uptime 1:11:11, expires 0:04:29, flags: C Incoming interface: Serial0, RPF neighbor 10.5.11.1 Outgoing interface list: Ethernet0, Forward/Sparse, 2:52:11/0:01:12
Remember that a multicast source with its associated multicast address is labeled as (S,G). Thus, in the above example, 10.1.1.1/24 is the multicast source, while 239.5.222.1 is the multicast address/group that the source belongs to. The Incoming interface indicates the upstream interface. The RPF neighbor is the next hop router “upstream” towards the source. The outgoing interface(s) indicate downstream interfaces. Notice that the S – Sparse flag is not set. That’s because PIM is running in Dense Mode.
Viewing the Multicast Table (continued) Remember, to view the multicast routing table: Switch# show ip mroute
If using PIM in Sparse Mode, the output would be similar to the following: IP Multicast Routing Table Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned R - RP-bit set, F - Register flag, T - SPT-bit set Timers: Uptime/Expires Interface state: Interface, Next-Hop, State/Mode (*, 224.59.222.10), uptime 2:11:05, RP is 10.1.1.10, flags: SC Incoming interface: Serial0, RPF neighbor 10.3.35.1, Outgoing interface list: Ethernet0, Forward/Sparse, 4:41:22/0:05:21
Notice that the (S,G) pairing is labeled as (*, 224.59.222.10). In Sparse Mode, we can have multiple sources share the same multicast tree. The Rendezvous Point (RP) is 10.1.1.10. The flags are set to SC, indicating this router is running in Sparse Mode. Just like with Dense Mode, the Incoming interface indicates the upstream interface, and the outgoing interface(s) indicate downstream interfaces. However, the RPF neighbor is the next hop router “upstream” towards the RP now, and not the source.
________________________________________________
Part V Switch Security ________________________________________________
Section 14 - AAA
AAA
Securing access to Cisco routers and switches is a critical concern. Often, access is secured using enable and vty/console passwords, configured locally on the device. For large networks with many devices, this can become unmanageable, especially when passwords need to be changed. A centralized form of access security is required.

AAA is a security system based on Authentication, Authorization, and Accounting. Authentication is used to grant or deny access based on a user account and password. Authorization determines what level of access that user has on the router or switch once authenticated. Accounting can keep track of who logged into what device, and for how long.

AAA must be enabled globally on a router or switch. By default, it is disabled.

Router(config)# aaa new-model
Privilege Levels IOS devices have a total of 16 privilege levels, numbered 0 through 15. User Exec mode is privilege level 1. Privileged Exec mode is privilege level 15. We can create a custom Privilege level, including the commands users are allowed to input at that mode: Router(config)# privilege exec all level 3 show interface Router(config)# privilege exec all level 3 show ip route Router(config)# privilege exec all level 3 show reload
To then enter that privilege level from User Mode:

Router> enable 3
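A password can also be assigned to protect the custom privilege level; the password shown below is only a placeholder:

Router(config)# enable secret level 3 MYLEVEL3PASSWORD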
Configuring Authentication
Authentication can be handled several different ways. We can use a username and password configured locally on the router or switch:

Router(config)# username MYNAME password MYPASSWORD
Or we can point to a centralized RADIUS or TACACS+ server, which can host the username/password database for all devices on the network: Router(config)# radius-server host 172.16.10.150 Router(config)# radius-server key MYKEY Router(config)# tacacs-server host 172.16.10.151 key MYKEY Router(config)# tacacs-server key MYKEY
The above commands point to a host server. A measure of security is maintained by using a shared key that must be configured both on the router and the RADIUS/TACACS+ server. We can also create groups of RADIUS or TACACS+ servers to point to: Router(config)# aaa group server radius MYGROUP Router(config-sg-radius)# server 172.16.10.150 Router(config-sg-radius)# server 172.16.10.152 Router(config-sg-radius)# server 172.16.10.153
There are several key differences between RADIUS and TACACS+ servers:
• RADIUS is an industry standard protocol, while TACACS+ is Cisco proprietary
• RADIUS utilizes UDP, while TACACS+ utilizes TCP
• RADIUS encrypts only the password during the authentication process, while TACACS+ encrypts the entire packet

There is one additional key difference: TACACS+ separates the authorization of a user from the authentication of a user. Thus, TACACS+ allows us to control what commands a particular user can input. RADIUS combines authentication and authorization into a single function, and cannot provide this per-command control.
Configuring Login Authentication On the previous page, we directed our router to a specific RADIUS or TACACS server. Next, we must specify which methods of authentication we want our router to consider when a user logs in. We can actually configure the router to use multiple forms of authentication (up to four): Router(config)# aaa authentication login default radius tacacs+ local
The above command creates an authentication profile for router login named default, directing the router to use the RADIUS server(s), TACACS+ server(s), and local forms of authentication, in that order. Thus, the RADIUS server(s) will always be used, unless they cannot be reached. Then the TACACS+ server(s) will be used, and then finally local authentication. This provides fault-tolerance and automatic failover.

You should always include local at the end of this command. Otherwise, if all RADIUS and TACACS+ servers are down, you won't be able to log into the router.

Multiple authentication profiles can be created. Each must have a unique profile name. Obviously, default is the default profile name. If we wanted a separate profile named ONLYLOCAL:

Router(config)# aaa authentication login ONLYLOCAL local
The last step in configuring authentication is to apply the profile to a “line,” such as the console or telnet ports. Router(config)# line vty 0 15 Router(config-line)# login authentication default
Notice we referenced the authentication profile’s name of default.
Configuring PPP Authentication The previous page illustrates the use of AAA Authentication to control user login to routers and switches. Additionally, we can use AAA to authenticate both ends of a PPP connection. Point-to-Point Protocol (PPP) is a standardized WAN encapsulation protocol that can be used on a wide variety of WAN technologies, including: • Serial dedicated point-to-point lines • Asynchronous dial-up (essentially dialup) • ISDN To specify the authentication methods for PPP: Router(config)# aaa authentication ppp MYPROFILE radius local
Notice the new keyword of ppp, as opposed to login. Once we have specified the desired authentication methods, we must apply this profile to the appropriate interface: Router(config)# interface serial 0 Router(config-if)# encapsulation ppp Router(config-if)# ppp authentication pap MYPROFILE
Or: Router(config)# interface serial 0 Router(config-if)# encapsulation ppp Router(config-if)# ppp authentication chap MYPROFILE
Notice that the top example uses PAP (Password Authentication Protocol), while the bottom example uses CHAP (Challenge Handshake Authentication Protocol). PAP sends the password in clear text, whereas CHAP never sends the password across the link; instead, it sends an MD5 hash of the password combined with a random challenge. Thus, CHAP is far more secure.
Configuring Authorization
Authorization allows us to dictate what rights a user has to the router once they have logged in:

Router(config)# aaa authorization commands default radius
Router(config)# aaa authorization config-commands default radius
Router(config)# aaa authorization exec default radius
Router(config)# aaa authorization network default radius
Router(config)# aaa authorization reverse-access default radius

The router will consult the RADIUS server to "authorize" access to specific privilege modes (or in the case of TACACS+, even specific commands). A user trying to access Global Configuration mode must be authorized to do so on the RADIUS server.

Explanations of the above "sections" we can authorize:
• commands – access to any router command at any mode
• config-commands – access to any router configuration command
• exec – access to privileged mode
• network – access to network-related commands
• reverse-access – ability to reverse telnet from the router
We can then apply this authorization to a line: Router(config)# line vty 0 15 Router(config-line)# authorization default
Configuring Accounting
We can configure accounting to log access to routers and switches:

Router(config)# aaa accounting system default stop-only
Router(config)# aaa accounting exec default start-stop
Router(config)# aaa accounting commands 3 default start-stop
Router(config)# aaa accounting commands 15 default start-stop
We can configure accounting on three separate functions:
• System – records system-level events, such as reloads
• Exec – records user authentication events, including duration of the session
• Commands (1-15) – records every command typed in at that privilege level. In our above example, we're logging our custom Privilege Level 3

We can then specify when these functions should be recorded:
• Start-stop – recorded when the event starts and stops
• Stop-only – recorded only when the event stops

Finally, we must apply this to a line:

Router(config)# line vty 0 15
Router(config-line)# accounting default
Troubleshooting AAA
To debug the various functions of AAA:

Router# debug aaa authentication
Router# debug aaa authorization
Router# debug aaa accounting
Router# debug radius
Router# debug tacacs
Section 15 - Switch and VLAN Security Switch Port Security Port Security adds an additional layer of security to the switching network. The MAC address of a host generally does not change. If it is certain that a specific host will always remain plugged into a specific switch port, then the switch can filter all MAC addresses except for that host’s address using Port Security. The host’s MAC address can be statically mapped to the switch port, or the switch can dynamically learn it from traffic. Port security cannot be enabled on trunk ports, dynamic access ports, Etherchannel ports, or a SPAN destination port. To enable Port Security on an interface: Switch(config)# interface fa0/5 Switch(config-if)# switchport port-security
By default, Port Security will allow only one MAC on an interface. The maximum number of allowed MACs can be adjusted, up to 1024: Switch(config-if)# switchport port-security maximum 2
To statically specify the allowed MAC address(es) on a port: Switch(config-if)# switchport port-security mac-address 0001.1111.2222 Switch(config-if)# switchport port-security mac-address 0001.3333.5555
Only hosts configured with the above two MAC addresses will be able to send traffic through this port. If the maximum number of MAC addresses for this port had instead been set to 10, but only two were statically specified, the switch would dynamically learn the remaining eight MAC addresses.

Dynamically learned MAC addresses can optionally be converted into sticky addresses, which are added to the running configuration (an example follows shortly). Dynamically learned addresses can be aged out after a period of inactivity (measured in minutes):

Switch(config-if)# switchport port-security aging time 10
Port Security aging is disabled by default. (Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.1E/native/configuration/guide/port_sec.html)
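The sticky conversion mentioned above is enabled per interface. As a brief sketch, the following command instructs the switch to learn MAC addresses dynamically and add them to the running configuration as sticky secure addresses:

Switch(config-if)# switchport port-security mac-address sticky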
Switch Port Security (continued)
Port Security can instruct the switch on how to react if an unauthorized MAC address attempts to forward traffic through an interface (this is considered a violation). There are three violation actions a switch can take:

• Shutdown – If a violation occurs, the interface is placed in an errdisable state. The interface will stop forwarding all traffic, including non-violation traffic, until taken out of the errdisable state. This is the default action for Port Security.
• Restrict – If a violation occurs, the interface stays online, forwarding legitimate traffic and dropping the unauthorized traffic. Violations are logged, either to a SYSLOG server or via an SNMP trap.
• Protect – If a violation occurs, the interface stays online, forwarding legitimate traffic and dropping the unauthorized traffic. No logging of violations will occur.

To configure the desired Port Security violation action:

Switch(config-if)# switchport port-security violation shutdown
Switch(config-if)# switchport port-security violation restrict
Switch(config-if)# switchport port-security violation protect
To view Port Security configuration and status for a specific interface:

Switch# show port-security interface fastethernet 0/5

Port Security: Enabled
Port status: SecureUp
Violation mode: Shutdown
Maximum MAC Addresses: 10
Total MAC Addresses: 10
Configured MAC Addresses: 2
Aging time: 10 mins
Aging type: Inactivity
SecureStatic address aging: Enabled
Security Violation count: 0
Note that the Maximum MAC Addresses is set to 10, and that the Total MAC Addresses is currently at 10 as well. If another MAC address attempts to forward data through this interface, the interface will be placed in an errdisable state, as the violation action is set to Shutdown.
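An interface err-disabled by a Port Security violation must be manually re-enabled with shutdown followed by no shutdown, or the switch can be configured to recover such interfaces automatically (the 300-second interval below is only an example):

Switch(config)# errdisable recovery cause psecure-violation
Switch(config)# errdisable recovery interval 300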
802.1x Port Authentication 802.1x Port Authentication forces a host device to authenticate with the switch, before the switch will forward traffic on behalf of that host. This is accomplished using the Extensible Authentication Protocol over LANs (EAPOL). 802.1x only supports RADIUS servers to provide authentication. Both the switch and the host must support 802.1x to use port authentication: • If the host supports 802.1x, but the switch does not – the host will not utilize 802.1x and will communicate normally with the switch. • If the switch supports 802.1x, but the host does not – the interface will stay in an unauthorized state, and will not forward traffic. A switch interface configured for 802.1x authentication stays in an unauthorized state until a client successfully authenticates. The only traffic permitted through an interface in an unauthorized state is as follows: • EAPOL (for client authentication) • Spanning Tree Protocol (STP) • Cisco Discovery Protocol (CDP) To globally enable 802.1x authentication on the switch: Switch(config)# dot1x system-auth-control
To specify the authenticating RADIUS servers, and configure 802.1x to employ those RADIUS servers: Switch(config)# aaa new-model Switch(config)# radius-server host 192.168.1.42 key CISCO Switch(config)# aaa authentication dot1x default group radius
Finally, 802.1x authentication must be configured on the desired interfaces. An interface can be configured in one of three 802.1x states: • force-authorized – The interface will always authorize any client, essentially disabling authentication. This is the default state. • force-unauthorized – The interface will never authorize any client, essentially preventing traffic from being forwarded. • auto – The interface will actively attempt to authenticate the client. Switch(config)# interface fa0/5 Switch(config-if)# dot1x port-control auto (Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst2950/software/release/12.1_9_ea1/configuration/guide/Sw8021x.html)
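To verify the 802.1x port-control state and whether clients have successfully authorized (the interface is only an example, and output fields vary by platform and IOS version):

Switch# show dot1x all
Switch# show dot1x interface fa0/5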
VLAN Access-Lists Normally, access-lists are used to filter traffic between networks or VLANs. VLAN Access-Lists (VACLs) filter traffic within a VLAN, with granular precision. VACLs can filter IP, IPX, or MAC address traffic. Assume that host 10.1.5.10 should be filtered from communicating to any other device on the 10.1.x.x/16 network, in VLAN 102. First, an access-list must be created to identify the traffic to be filtered within the VLAN: Switch(config)# ip access-list extended BLOCKTHIS Switch(config-ext-nacl)# permit ip host 10.1.5.10 10.1.0.0 0.0.255.255
The first line creates an extended named access-list called BLOCKTHIS. This contains a single entry, permitting host 10.1.5.10 to reach any other device on the 10.1.0.0 network.

Confused as to why the 10.1.5.10 host was permitted, and not denied? In this instance, the access-list is not being used to deny traffic, but merely to identify the traffic. The permit functions as a true statement, and a deny would function as a false statement.

The next step is to create the actual VACL:

Switch(config)# vlan access-map MYVACL 5
Switch(config-access-map)# match ip address BLOCKTHIS
Switch(config-access-map)# action drop
Switch(config-access-map)# vlan access-map MYVACL 10
Switch(config-access-map)# action forward
Switch(config)# vlan filter MYVACL vlan-list 102
The first line creates a vlan access-map named MYVACL. Traffic that matches entries in the BLOCKTHIS access-list will be dropped. The final vlan access-map entry contains only an action to forward. This will apply to all other traffic, as no IP or access-list was specified.

The above configuration would block all traffic from the 10.1.5.10 host to any other host on VLAN 102, while passing all other traffic. Notice that every access-map statement contains a sequence number (in the above example, 5 and 10). This dictates the order in which these rules should be followed.
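To verify the entries in a VACL, and which VLANs it has been applied to (output varies by platform):

Switch# show vlan access-map MYVACL
Switch# show vlan filter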
Private VLANs Private VLANs (PVLANs) allow for further segmentation of a subnet within a VLAN. Essentially, multiple sub-VLANs (considered secondary VLANs) are created beneath a primary VLAN. The secondary VLAN can only communicate with the primary VLAN, and not any other secondary VLANs. There are two types of secondary VLANs: • Community – interfaces within the secondary VLAN can communicate with each other. • Isolated – interfaces within the secondary VLAN cannot communicate with each other. Private VLANs are only locally-significant to the switch - VTP will not pass this information to other switches. Each switch interface in a private VLAN assumes a specific role: • Promiscuous - communicates with the primary VLAN and all secondary VLANs. Gateway devices such as routers and switches should connect to promiscuous ports. • Host – communicates only with promiscuous ports, or ports within the local community VLAN. Host devices connect to host ports. PVLANs thus allow groups of host devices to be segmented within a VLAN, while still allowing those devices to reach external networks via a promiscuous gateway.
(Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst3750/software/release/12.2_25_see/configuration/guide/swpvlan.html)
Private VLAN Configuration The first step to configuring PVLANs is to specify the secondary VLANs: Switch(config)# vlan 100 Switch(config-vlan)# private-vlan community Switch(config)# vlan 101 Switch(config-vlan)# private-vlan isolated
Next, the primary VLAN must be specified, and the secondary VLANs associated with it: Switch(config)# vlan 50 Switch(config-vlan)# private-vlan primary Switch(config-vlan)# private-vlan association 100,101
Secondary VLANs 100 and 101 have been associated with the primary VLAN 50. Next, Host ports must be identified, and associated with a primary and secondary VLAN: Switch(config)# interface range fa0/5 – 6 Switch(config-if)# switchport private-vlan host Switch(config-if)# switchport private-vlan host-association 50 101
Interfaces fa0/5 and fa0/6 have been identified as host ports, and associated with primary VLAN 50, and secondary VLAN 101. Finally, promiscuous ports must be identified, and associated with the primary VLAN and all secondary VLANs. Switch(config)# interface range fa0/20 Switch(config-if)# switchport private-vlan promiscuous Switch(config-if)# switchport private-vlan mapping 50 100.101
Interface fa0/20 has been identified as a promiscuous port, and associated with primary VLAN 50, and secondary VLANs 100 and 101.
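To verify the private-VLAN types and the primary-to-secondary associations (output varies by platform):

Switch# show vlan private-vlan
Switch# show vlan private-vlan type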
(Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst3750/software/release/12.2_25_see/configuration/guide/swpvlan.html)
DHCP Snooping
Dynamic Host Configuration Protocol (DHCP) provides administrators with a mechanism to dynamically allocate IP addresses, rather than manually setting the address on each device. DHCP servers lease out IP addresses to DHCP clients, for a specific period of time. There are four steps to this DHCP process:

• When a DHCP client first boots up, it broadcasts a DHCPDiscover message, searching for a DHCP server.
• If a DHCP server exists on the local segment, it will respond with a DHCPOffer, containing the "offered" IP address, subnet mask, etc.
• Once the client receives the offer, it will respond with a DHCPRequest, indicating that it will accept the offered protocol information.
• Finally, the server responds with a DHCPACK, acknowledging the client's acceptance of the offered protocol information.

Malicious attackers can place a rogue DHCP server on the trusted network, intercepting DHCP packets while masquerading as a legitimate DHCP server. This is one form of a Spoofing attack, or an attack aimed at gaining unauthorized access or stealing information by sourcing packets from a trusted source. This is also referred to as a man-in-the-middle attack.

DHCP attacks of this sort can be mitigated by using DHCP Snooping. Only specified interfaces will accept DHCPOffer packets – unauthorized interfaces will discard these packets, and then place the interface in an errdisable state.

DHCP Snooping must first be globally enabled on the switch:

Switch(config)# ip dhcp snooping
Then, DHCP snooping must be enabled for a specific VLAN(s): Switch(config)# ip dhcp snooping vlan 5
By default, all interfaces are considered untrusted by DHCP Snooping. Interfaces connecting to legitimate DHCP servers must be trusted:

Switch(config)# interface fa0/15
Switch(config-if)# ip dhcp snooping trust

(Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst4500/12.1/12ew/configuration/guide/dhcp.pdf)
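To verify which VLANs are being snooped, which interfaces are trusted, and the DHCP bindings the switch has learned (output varies by platform and IOS version):

Switch# show ip dhcp snooping
Switch# show ip dhcp snooping binding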
Dynamic ARP Inspection
Another common man-in-the-middle attack is ARP Spoofing (sometimes referred to as ARP Poisoning). A malicious host can masquerade as another host, by intercepting ARP requests and responding with its own MAC address.

Dynamic ARP Inspection (DAI) mitigates the risk of ARP Spoofing by inspecting all ARP traffic on untrusted ports. DAI will confirm that a legitimate MAC-to-IP translation has occurred, by comparing it against a trusted database. This MAC-to-IP database can be statically configured, or DAI can utilize the DHCP Snooping table (assuming DHCP Snooping has been enabled).

DAI can be globally enabled for a specific VLAN(s):

Switch(config)# ip arp inspection vlan 100
By default, all interfaces in VLAN 100 will be considered untrusted, and subject to inspection by DAI. Interfaces to other switches should be configured as trusted (no inspection will occur), as each switch should handle DAI locally: Switch(config)# interface fa0/24 Switch(config-if)# ip arp inspection trust
To create a manual MAC-to-IP database for DAI to reference: Switch(config)# arp access-list DAI_LIST Switch(config-acl)# permit ip host 10.1.1.5 mac host 000a.1111.2222 Switch(config-acl)# permit ip host 10.1.1.6 mac host 000b.3333.4444 Switch(config)# ip arp inspection filter DAI_LIST vlan 100
If an ARP response does not match the MAC-to-IP entry for a particular IP address, then DAI drops the ARP response and generates a log message.
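To verify which VLANs are subject to Dynamic ARP Inspection, the trust state of each interface, and drop statistics (VLAN 100 below matches the earlier example; output varies by IOS version):

Switch# show ip arp inspection
Switch# show ip arp inspection interfaces
Switch# show ip arp inspection statistics vlan 100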
(Reference: http://www.cisco.com/en/US/docs/switches/lan/catalyst3560/software/release/12.2_20_se/configuration/guide/swdynarp.html)
________________________________________________
Part VI QoS ________________________________________________
Section 16 - Introduction to QoS
Obstacles to Network Communication
Modern networks support traffic beyond the traditional data types, such as email, file sharing, or web traffic. Increasingly, data networks share a common medium with more sensitive forms of traffic, like voice and video. These sensitive traffic types often require guaranteed or regulated service, as such traffic is more susceptible to the various obstacles of network communication, including:

Lack of Bandwidth – Describes the simple lack of sufficient throughput, which can severely impact sensitive traffic. Increasing bandwidth is generally considered the best method of improving network communication, though it is often expensive and time-consuming. Bandwidth is generally measured in bits-per-second (bps), and can be offered at a fixed-rate (as Ethernet usually is), or at a variable-rate (as Frame-Relay often is). Various mechanisms, such as compression, can be used to pseudo-increase the capacity of a link.

Delay – Defines the latency that occurs when traffic is sent end-to-end across a network. Delay will occur at various points on a network, and will be discussed in greater detail shortly.

Jitter – Describes the fragmentation that occurs when traffic arrives at irregular times or in the wrong order. Jitter is thus a varying amount of delay. Voice communication is especially susceptible to jitter. Jitter can be somewhat mitigated using a de-jitter buffer.

Data Loss – Defines the packet loss that occurs due to link congestion. A full queue will drop newly-arriving packets - an effect known as tail drop.

All of the above factors adversely affect network communication. Voice over IP (VoIP) traffic, for example, begins to degrade when delay is higher than 150 ms, and when data loss is greater than 1%.

Quality of Service (QoS) tools have been developed as an alternative to merely increasing bandwidth. These QoS mechanisms are designed to provide specific applications with guaranteed or consistent service in the absence of optimal bandwidth conditions.
Types of Delay
Delay can occur at many points on a network. Collectively, this is known as end-to-end delay. The various types of delay include:

• Serialization Delay – refers to the time necessary for an interface to encode bits of data onto a physical medium. Calculating serialization delay can be accomplished using a simple formula:

serialization delay = number of bits / link speed (in bits per second)

Thus, the serialization delay to encode 128,000 bits on a 64,000 bps link would be 2 seconds.

• Propagation Delay – refers to the time necessary for a single bit to travel end-to-end on a physical wire. For the incredibly anal geeks, the rough formula to estimate propagation delay on a copper wire:

propagation delay = length of the physical wire (in meters) / 2.1 x 10^8 meters per second
• Forwarding (or Processing) Delay – refers to the time necessary for a router or switch to move a packet between an ingress (input) queue and an egress (output) queue. Forwarding delay is affected by a variety of factors, such as the routing or switching method used, the speed of the device's CPU, or the size of the routing table.

• Queuing Delay – refers to the time spent in an egress queue, waiting for previously-queued packets to be serialized onto the wire. Queues that are too small can become congested, and start dropping newly arriving packets (tail drop). This forces a higher-layer protocol (such as TCP) to resend data. Queues that are too large can actually queue too many packets, causing long queuing delays.

• Network (Provider) Delay – refers to the time spent in a WAN provider's cloud. Network delay can be very difficult to quantify, as it is often impossible to determine the structure of the cloud.

• Shaping Delay – refers to the delay introduced by shaping mechanisms intended to slow down traffic to prevent dropped packets due to congestion.
QoS Methodologies

There are three key methodologies for implementing QoS:

• Best-Effort
• Integrated Services (IntServ)
• Differentiated Services (DiffServ)

Best-Effort QoS is essentially no QoS. Traffic is routed on a first-come, first-served basis. Sensitive traffic is treated no differently than normal traffic. Best-Effort is the default behavior of routers and switches, and as such is easy to implement and very scalable. The Internet forwards traffic on a Best-Effort basis.

Integrated Services (IntServ) QoS is also known as end-to-end or hard QoS. IntServ QoS requires an application to signal that it requires a specific level of service. An Admission Control protocol responds to this request by allocating or reserving resources end-to-end for the application. If resources cannot be allocated for a particular request, then it is denied. Every device end-to-end must support the IntServ QoS protocol(s).

IntServ QoS is not considered a scalable solution for two reasons:

• There is only a finite amount of bandwidth available to reserve.
• IntServ QoS protocols add significant overhead on devices end-to-end, as each traffic flow must be statefully maintained.

The Resource Reservation Protocol (RSVP) is an example of an IntServ QoS protocol.

Differentiated Services (DiffServ) QoS was designed to be a scalable QoS solution. Traffic types are organized into specific classes, and then marked to identify their classification. Policies are then created on a per-hop basis to provide a specific level of service, depending on the traffic's classification.

DiffServ QoS is popular because of its scalability and flexibility in enterprise environments. However, DiffServ QoS is considered soft QoS, as it does not absolutely guarantee service the way IntServ QoS does. DiffServ QoS does not employ signaling, and does not enforce end-to-end reservations.
QoS Tools

Various tools have been developed to enforce QoS. Many of these tools are used in tandem as part of a complete QoS policy:

• Classification and Marking
• Queuing
• Queue Congestion Avoidance

Classification is a method of identifying and then organizing traffic based on service requirements. This traffic is then marked or tagged based on its classification, so that the traffic can be differentiated. Classification and marking are covered in great detail in another guide.

Queuing mechanisms are used to service higher priority traffic before lower priority traffic, based on classification. A variety of queuing methods are available:

• First-In First-Out (FIFO)
• Priority Queuing (PQ)
• Custom Queuing (CQ)
• Weighted Fair Queuing (WFQ)
• Class-Based Weighted Fair Queuing (CBWFQ)
• Low-Latency Queuing (LLQ)

Each will be covered in detail in a separate guide.

Queue Congestion Avoidance mechanisms are used to regulate queue usage so that saturation (and thus, tail drop) does not occur. Random Early Detection (RED) and Weighted RED (WRED) are two methods of congestion avoidance, and are both covered in a separate guide.
Configuring QoS on IOS Devices

There are four basic methods of implementing QoS on Cisco IOS devices:

• Legacy QoS CLI
• Modular QoS CLI
• AutoQoS
• Security Device Manager (SDM) QoS Wizard

Legacy QoS CLI is a limited and deprecated method of implementing QoS via the IOS command-line. Legacy CLI combined the classification of traffic with the enforcement of QoS policies. All configuration occurs on a per-interface basis.

Modular QoS CLI (MQC) is an improved command-line implementation of QoS. MQC is considered modular because it separates classification (using class-maps to match traffic) from policy configuration (using policy-maps to apply a specific level of service per classification). Policy-maps are then applied to an interface using a service-policy.

AutoQoS is an automated method of generating QoS configurations on IOS devices. AutoQoS, originally developed for VoIP traffic, can run a discovery process to analyze and classify a variety of traffic types. AutoQoS can then create QoS policies based on those classifications. Afterwards, MQC can be used to fine-tune AutoQoS's generated configuration. A brief example of enabling AutoQoS is shown below.

The Cisco Security Device Manager (SDM) is a web-based management GUI for Cisco IOS devices. The SDM QoS Wizard provides a graphical method of configuring and monitoring QoS. The Wizard separates traffic into three categories:

• Real-Time – for VoIP and signaling traffic.
• Business-Critical – for transactional, network management, and routing traffic.
• Best Effort – for all other traffic.

A percentage of the interface bandwidth can then be allocated for each traffic category.

MQC and AutoQoS will be covered in greater detail in separate guides.
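As a minimal sketch of enabling AutoQoS for voice traffic (exact keywords vary by platform and IOS version; serial0/0 and fa0/5 are illustrative interfaces):

On a router WAN interface:

Router(config)# interface serial0/0
Router(config-if)# auto qos voip

On a Catalyst access port connected to a Cisco IP phone:

Switch(config)# interface fa0/5
Switch(config-if)# auto qos voip cisco-phone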
Section 17 - QoS Classification and Marking

Classifying and Marking Traffic

Conceptually, DiffServ QoS involves three steps:

• Traffic must be identified and then classified into groups.
• Traffic must be marked on trust boundaries.
• Policies must be created to describe the per-hop behavior for classified traffic.

DiffServ QoS relies on the classification of traffic, to provide differentiated levels of service on a per-hop basis. Traffic can be classified based on a wide variety of criteria called traffic descriptors, which include:

• Type of application
• Source or destination IP address
• Incoming interface
• Class of Service (CoS) value in an Ethernet header
• Type of Service (ToS) value in an IP header (IP Precedence or DSCP)
• MPLS EXP value in an MPLS header

Access-lists can be used to identify traffic for classification, based on address or port. However, a more robust solution is Cisco's Network-Based Application Recognition (NBAR), which will dynamically recognize standard or custom applications, and can classify based on payload.

Once classification has occurred, traffic should be marked, to indicate the required level of QoS service for that traffic. Marking can occur within either the Layer-2 header or the Layer-3 header.

The point on the network where traffic is classified and marked is known as the trust boundary. QoS marks originating from outside this boundary should be considered untrusted, and removed or changed. As a general rule, traffic should be marked as close to the source as possible. In VoIP environments, this is often accomplished on the VoIP phone itself. Traffic classification should not occur in the network core. A brief switch-port trust example is shown after the following list.

Configuring DiffServ QoS on IOS devices requires three steps:

• Classify traffic using a class-map.
• Define a QoS policy using a policy-map.
• Apply the policy to an interface, using the service-policy command.
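A minimal sketch of extending the trust boundary to a Cisco IP phone on a Catalyst access port (assuming a switch platform that supports the mls qos command set; fa0/5 is an illustrative interface):

Switch(config)# mls qos
Switch(config)# interface fa0/5
Switch(config-if)# mls qos trust device cisco-phone
Switch(config-if)# mls qos trust cos

With this configuration, incoming CoS markings are trusted only if a Cisco IP phone is detected on the port; otherwise, the markings are treated as untrusted.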
Layer-2 Marking

Layer-2 marking can be accomplished for a variety of frame types:

• Ethernet – using the 802.1p Class of Service (CoS) field.
• Frame Relay – using the Discard Eligible (DE) bit.
• ATM - using the Cell Loss Priority (CLP) bit.
• MPLS - using the EXP field.

Marking Ethernet frames is accomplished using the 3-bit 802.1p Class of Service (CoS) field. The CoS field is part of the 4-byte 802.1Q field in an Ethernet header, and thus is only available when 802.1Q VLAN frame tagging is employed. The CoS field provides 8 priority values:

Type              Decimal   Binary   General Application
Routine           0         000      Best effort forwarding
Priority          1         001      Medium priority forwarding
Immediate         2         010      High priority forwarding
Flash             3         011      VoIP call signaling forwarding
Flash-Override    4         100      Video conferencing forwarding
Critical          5         101      VoIP forwarding
Internet          6         110      Inter-network control (Reserved)
Network Control   7         111      Network control (Reserved)
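As a brief sketch of how an Ethernet CoS value might be set with MQC (the class-map and policy-map syntax is covered later in this guide; the class name VOICE and interface fa0/24 are illustrative, and the mark only applies on 802.1Q trunk egress, since untagged frames carry no CoS field):

Router(config)# class-map match-any VOICE
Router(config-cmap)# match ip dscp ef
Router(config)# policy-map MARK-COS
Router(config-pmap)# class VOICE
Router(config-pmap-c)# set cos 5
Router(config)# interface fa0/24
Router(config-if)# service-policy output MARK-COS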
Frame Relay and ATM frames provide a less robust marking mechanism, compared to the Ethernet CoS field. Both Frame Relay and ATM frames reserve a 1-bit field, to prioritize which traffic should be dropped during periods of congestion. Frame Relay identifies this bit as the Discard Eligible (DE) field, while ATM refers to this bit as the Cell Loss Priority (CLP) field. A value of 0 indicates a lower likelihood to get dropped, while a value of 1 indicates a higher likelihood to get dropped. MPLS employs a 3-bit EXP (Experimental) field within the 4-byte MPLS header. The EXP field provides similar QoS functionality to the Ethernet CoS field.
Layer-3 Marking

Layer-3 marking is accomplished using the 8-bit Type of Service (ToS) field, part of the IP header. A mark in this field will remain unchanged as it travels from hop-to-hop, unless a Layer-3 device is explicitly configured to overwrite this field.

There are two marking methods that use the ToS field:

• IP Precedence - uses the first three bits of the ToS field.
• Differentiated Service Code Point (DSCP) – uses the first six bits of the ToS field. When using DSCP, the ToS field is often referred to as the Differentiated Services (DS) field.

These values determine the per-hop behavior (PHB) received by each classification of traffic.

IP Precedence

IP Precedence utilizes the first three bits (for a total of eight values) of the ToS field to identify the priority of a packet. Packets with a higher IP Precedence value should be provided with a better level of service. IP Precedence values are comparable to Ethernet CoS values:

Type              Decimal   Binary   General Application
Routine           0         000      Best effort forwarding
Priority          1         001      Medium priority forwarding
Immediate         2         010      High priority forwarding
Flash             3         011      VoIP call signaling forwarding
Flash-Override    4         100      Video conferencing forwarding
Critical          5         101      VoIP forwarding
Internet          6         110      Inter-network control (Reserved)
Network Control   7         111      Network control (Reserved)
By default, all traffic has an IP Precedence of 000 (Routine), and is forwarded on a best-effort basis. Normal network traffic should not (and in most cases, cannot) be set to 110 (Inter-Network Control) or 111 (Network Control), as it could interfere with critical network operations, such as STP calculations or routing updates.
Differentiated Service Code Point (DSCP)

DSCP utilizes the first six bits of the ToS header to identify the priority of a packet. The first three bits identify the Class Selector of the packet, and are backwards compatible with IP Precedence. The following three bits identify the Drop Precedence of the packet.

Class Name   Binary    Class Selector   Drop Precedence
Default      000 000   0                -
AF11         001 010   1                Low
AF12         001 100   1                Medium
AF13         001 110   1                High
AF21         010 010   2                Low
AF22         010 100   2                Medium
AF23         010 110   2                High
AF31         011 010   3                Low
AF32         011 100   3                Medium
AF33         011 110   3                High
AF41         100 010   4                Low
AF42         100 100   4                Medium
AF43         100 110   4                High
EF           101 110   5                -
DSCP identifies six Class Selectors for traffic (numbered 0 - 5). Class 0 is the default, and indicates best-effort forwarding. Packets with a higher Class value should be provided with a better level of service. Class 5 is the highest DSCP value, and should be reserved for the most sensitive traffic.

Within each Class Selector, traffic is also assigned a Drop Precedence. Packets with a higher Drop Precedence are more likely to be dropped during congestion than packets with a lower Drop Precedence. Remember that this comparison applies only within the same Class Selector.

The Class Name provides a simple way of identifying the DSCP value. AF is short for Assured Forwarding, and is the type of service applied to Classes 1 – 4. If a packet is marked AF23, then the Class Selector is 2 (the 2 in 23) and its Drop Precedence is High (the 3 in 23). Packets marked as Class 0 (Default) or Class 5 (Expedited Forwarding or EF) do not have a Drop Precedence.
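As a quick conversion check, the decimal DSCP value of any AFxy class name works out to (8 × x) + (2 × y), where x is the Class Selector and y is the Drop Precedence. For example, AF31 = (8 × 3) + (2 × 1) = 26, which matches the binary value 011 010 in the table above; EF (101 110) is decimal 46.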
Modular QoS CLI (MQC)

The Modular QoS CLI (MQC) is an improved command-line implementation of QoS that replaced legacy CLI commands on IOS devices. MQC is considered modular because it separates classification from policy configurations.

There are three steps to configuring QoS using MQC:

• Classify traffic using a class-map.
• Define a QoS policy using a policy-map.
• Apply the policy to an interface, using the service-policy command.

Classifying and Marking Traffic using MQC

Traffic is classified using one or more of the traffic descriptors listed earlier in this guide. This is accomplished using the class-map command:

Router(config)# access-list 101 permit tcp any 10.1.5.0 0.0.0.255 eq www

Router(config)# class-map match-any LOWCLASS
Router(config-cmap)# match access-group 101
The access-list matches all http traffic destined for 10.1.5.0/24. The class-map command creates a new classification named LOWCLASS. The match-any parameter dictates that traffic can match any of the traffic descriptors within the class-map. Alternatively, specifying match-all dictates that traffic must match all of the descriptors within the class-map.

Within the class-map, match statements are used to identify specific traffic descriptors. The above example (match access-group) references an access-list. To match other traffic descriptors:

Router(config)# class-map match-any HICLASS
Router(config-cmap)# match input-interface fastethernet0/0
Router(config-cmap)# match ip precedence 4
Router(config-cmap)# match ip dscp af21
Router(config-cmap)# match any
The above is not a comprehensive list of descriptors that can be matched. Reference the link below for a more complete list. (Reference: http://www.cisco.com/en/US/docs/ios/12_2/qos/configuration/guide/qcfmcli2.html)
Network-Based Application Recognition (NBAR)

Cisco's Network-Based Application Recognition (NBAR) provides an alternative to using static access-lists to identify protocol traffic for classification. NBAR introduces three key features:

• Dynamic protocol discovery
• Statistics collection
• Automatic traffic classification

NBAR provides classification abilities beyond that of access-lists, including:

• Ability to classify services that use dynamic port numbers. This is accomplished using the stateful inspection of traffic flows.
• Ability to classify services based on sub-protocol information. For example, NBAR can classify HTTP traffic based on payload, such as the host, URL, or MIME type.

NBAR employs a Protocol Discovery process to determine the application traffic types traversing the network. The Protocol Discovery process will then maintain statistics on these traffic types.

NBAR recognizes applications using NBAR Packet Description Language Modules (PDLMs), which are stored in flash on IOS devices. Updated PDLMs are provided by Cisco so that IOS devices can recognize newer application types.

NBAR has specific requirements and limitations:

• NBAR requires that Cisco Express Forwarding (CEF) be enabled.
• NBAR does not support Fast EtherChannel interfaces.
• NBAR supports only 24 concurrent host, URL, or MIME types.
• NBAR can only analyze the first 400 bytes of a packet. Note: this restriction applies only to IOS versions prior to 12.3(7), which removed it.
• NBAR cannot read sub-protocol information in secure (encrypted) traffic types, such as HTTPS.
• NBAR does not support fragmented packets.
(References: CCNP ONT Official Exam Certification Guide, Amir Ranjbar, pages 110-112; http://www.cisco.com/en/US/prod/collateral/iosswrel/ps6537/ps6558/ps6612/ps6653/prod_qas09186a00800a3ded.pdf)
Configuring NBAR

To enable NBAR Protocol Discovery on an interface:

Router(config)# ip cef
Router(config)# interface fa0/0
Router(config-if)# ip nbar protocol-discovery
To view statistics for NBAR-discovered protocol traffic:

Router# show ip nbar protocol-discovery

 FastEthernet0/0
                          Input                     Output
                          -----                     ------
 Protocol                 Packet Count              Packet Count
                          Byte Count                Byte Count
                          30sec Bit Rate            30sec Bit Rate
                          30sec Max Bit Rate        30sec Max Bit Rate
 ------------------------ ------------------------  ------------------------
 http                     15648                     15648
                          154861743                 154861743
                          123654                    123654
                          654123                    654123
 ftp                      4907                      4907
                          954604255                 954604255
                          406588                    406588
                          1085994                   1085994
NBAR classification occurs within an MQC class-map, using the match protocol command:

Router(config)# class-map match-any LOWCLASS
Router(config-cmap)# match protocol http
Router(config-cmap)# match protocol ftp

Matching traffic based on sub-protocol information supports wildcards:

Router(config)# class-map match-any HICLASS
Router(config-cmap)# match protocol http host *routeralley.com*
Router(config-cmap)# match protocol http mime "*pdf"
Custom protocol types can be manually added to the NBAR database:

Router(config)# ip nbar port-map MYPROTOCOL tcp 1982
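Current port mappings, including any custom entries, can be verified with a quick check:

Router# show ip nbar port-map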
Updated PDLMs can be downloaded into flash and then referenced for NBAR: Router(config)# ip nbar pdlm flash://unrealtournament.pdlm (Reference: http://www.cisco.com/univercd/cc/td/doc/product/software/ios122/122newft/122t/122t8/dtnbarad.pdf)
Creating and Applying a QoS Policy using MQC

After traffic has been appropriately classified, policy-maps are used to dictate how that traffic should be treated (the per-hop behavior).

Router(config)# policy-map THEPOLICY
Router(config-pmap)# class LOWCLASS
Router(config-pmap-c)# set ip precedence 1
Router(config-pmap)# class HICLASS
Router(config-pmap-c)# set ip dscp af41
The policy-map command creates a policy named THEPOLICY. The class commands associate the LOWCLASS and HICLASS class-maps created earlier to this policy-map.

Within the policy-map class sub-configuration mode, set statements are used to specify the desired actions for the classified traffic. In the above example, specific ip precedence or ip dscp values have been marked on their respective traffic classes.

A wide variety of policy actions are available:

Router(config)# policy-map LOWPOLICY
Router(config-pmap)# class LOWCLASS
Router(config-pmap-c)# bandwidth 64
Router(config-pmap-c)# queue-limit 40
Router(config-pmap-c)# random-detect
The above is by no means a comprehensive list of policy actions. Reference the link below for a more complete list. Policy actions such as queuing and congestion avoidance will be covered in great detail in other guides.

Once the appropriate class-map(s) and policy are created, the policy must be applied directionally to an interface. An interface can have up to two QoS policies, one each for inbound and outbound traffic.

Router(config)# int fa0/0
Router(config-if)# service-policy input THEPOLICY
Any traffic matching the criteria of class-maps LOWCLASS and HICLASS, coming inbound on interface fa0/0, will have the actions specified in the policy-map THEPOLICY applied. (Reference: http://www.cisco.com/en/US/docs/ios/12_2/qos/configuration/guide/qcfmcli2.html)
Troubleshooting MQC QoS

To view all configured class-maps:

Router# show class-map
 Class Map LOWCLASS
   Match access-group 101
 Class Map HICLASS
   Match protocol http host *routeralley.com*
   Match protocol http mime "*pdf"
To view all configured policy-maps:

Router# show policy-map
 Policy Map THEPOLICY
   Class LOWCLASS
     set ip precedence 1
   Class HICLASS
     set ip dscp af41
To view the statistics of a policy-map on a specific interface:

Router# show policy-map interface fastethernet0/0
 FastEthernet0/0
  Service-policy input: THEPOLICY
    Class-map: LOWCLASS (match-any)
      15648 packets, 154861743 bytes
      1 minute offered rate 512000 bps, drop rate 0 bps
      Match: access-group 101
      QoS Set
        ip precedence 1
          Packets marked 15648
Section 18 - QoS and Queuing

Queuing Overview

A queue is used to store traffic until it can be processed or serialized. Both switch and router interfaces have ingress (inbound) queues and egress (outbound) queues.

An ingress queue stores packets until the switch or router CPU can forward the data to the appropriate interface. An egress queue stores packets until the switch or router can serialize the data onto the physical wire. Switch ports and router interfaces contain both hardware and software queues. Both will be explained in detail later in this guide.

Queue Congestion

Switch (and router) queues are susceptible to congestion. Congestion occurs when the rate of ingress traffic is greater than can be successfully processed and serialized on an egress interface. Common causes for congestion include:

• The speed of an ingress interface is higher than the egress interface.
• The combined traffic of multiple ingress interfaces exceeds the capacity of a single egress interface.
• The switch/router CPU is insufficient to handle the size of the forwarding table.

By default, if an interface's queue buffer fills to capacity, new packets will be dropped. This condition is referred to as tail drop, and operates on a first-come, first-served basis. If a standard queue fills to capacity, any new packets are indiscriminately dropped, regardless of the packet's classification or marking.

QoS provides switches and routers with a mechanism to queue and service higher priority traffic before lower priority traffic. This guide covers various queuing methods in detail.

QoS also provides a mechanism to drop lower priority traffic before higher priority traffic, during periods of congestion. This is known as Weighted Random Early Detection (WRED), and is covered in detail in another guide.
Types of Queues Recall that interfaces have both ingress (inbound) queues and egress (outbound) queues. Each interface has one or more hardware queues (also known as transmit (TxQ) queues). Traffic is placed into egress hardware queues to be serialized onto the wire. There are two types of hardware queues. By default, traffic is placed in a standard queue, where all traffic is regarded equally. However, interfaces can also support strict priority queues, dedicated for higher-priority traffic. DiffServ QoS can dictate that traffic with a higher DSCP or IP Precedence value be placed in strict priority queues, to be serviced first. Traffic in a strict priority queue is never dropped due to congestion. A Catalyst switch interface may support multiple standard or strict priority queues, depending on the switch model. Cisco notates strict priority queues with a “p”, standard queues with a “q”, and WRED thresholds per queue (explained in a separate guide) with a “t”. If a switch interface supports one strict priority queue, two standard queues, and two WRED thresholds, Cisco would notate this as: 1p2q2t To view the supported number of hardware queues on a given Catalyst switch interface: Switch# show interface fa0/12 capabilities
The strict priority egress queue must be explicitly enabled on an interface: Switch(config)# interface fa0/12 Switch(config-if)# priority-queue out
To view the size of the hardware queue of a router serial interface: Router# show controller serial
The size of the interface hardware queue can be modified on some Cisco models, using the following command:

Router(config)# interface serial 0/0
Router(config-if)# tx-ring-limit 3

(Reference: http://www.cisco.com/en/US/tech/tk389/tk813/technologies_tech_note09186a00801558cb.shtml)
Forms of Queuing The default form of queuing on nearly all interfaces is First-In First-Out (FIFO). This form of queuing requires no configuration, and simply processes and forwards packets in the order that they arrive. If the queue becomes saturated, new packets will be dropped (tail drop). This form of queuing may be insufficient for real-time applications, especially during times of congestion. FIFO will never discriminate or give preference to higher-priority packets. Thus, applications such as VoIP can be starved out during periods of congestion. Hardware queues always process packets using the FIFO method of queuing. In order to provide a preferred level of service for high-priority traffic, some form of software queuing must be used. Software queuing techniques can include: • First-In First-Out (FIFO) (default) • Priority Queuing (PQ) • Custom Queuing (CQ) • Weighted Fair Queuing (WFQ) • Class-Based Weighted Fair Queuing (CBWFQ) • Low-Latency Queuing (LLQ) Each of the above software queuing techniques will be covered separately in this guide. Software queuing usually employs multiple queues, and each is assigned a specific priority. Traffic can then be assigned to these queues, using accesslists or based on classification. Traffic from a higher-priority queue is serviced before the traffic from a lower-priority queue. Please note: traffic within a single software queue (sometimes referred to as sub-queuing) is always processed using FIFO. Note also: if the hardware queue is not congested, software queues are ignored. Remember, software-based queuing is only used when the hardware queue is congested. Software queues serve as an intermediary, deciding which traffic types should be placed in the hardware queue first and how often, during periods of congestion.
Priority Queuing (PQ)

Priority Queuing (PQ) employs four separate queues:

• High
• Medium
• Normal (default)
• Low

Traffic must be assigned to these queues, usually using access-lists. Packets from the High queue are always processed before packets from the Medium queue. Likewise, packets from the Medium queue are always processed before packets in the Normal queue, etc. Remember that traffic within a queue is processed using FIFO.

As long as there are packets in the High queue, no packets from any other queues are processed. Once the High queue is empty, then packets in the Medium queue are processed… but only if no new packets arrive in the High queue. This is referred to as a strict form of queuing.

The obvious advantage of PQ is that higher-priority traffic is always processed first. The nasty disadvantage to PQ is that the lower-priority queues can often receive no service at all. A constant stream of High-priority traffic can starve out the lower-priority queues.

To configure PQ, traffic can first be identified using access-lists:

Router(config)# access-list 2 permit 150.1.1.0 0.0.0.255
Router(config)# access-list 100 permit tcp any 10.1.1.0 0.0.0.255 eq www
Then, the traffic should be placed in the appropriate queues:

Router(config)# priority-list 1 protocol ip high list 2
Router(config)# priority-list 1 protocol ip medium list 100
Router(config)# priority-list 1 protocol ip normal
Router(config)# priority-list 1 protocol ipx low
Router(config)# priority-list 1 default normal
The size of each queue (measured in packets) can be specified: Router(config)# priority-list 1 queue-limit 30 40 50 60
Finally, the priority-list must be applied to an interface:

Router(config)# interface serial0
Router(config-if)# priority-group 1
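Once applied, the priority-list assignments and queue sizes can be verified with a quick check:

Router# show queueing priority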
Custom Queuing (CQ)

A less strict form of queuing is Custom Queuing (CQ), which employs a weighted round-robin queuing methodology. Each queue is processed in order, but each queue can have a different weight or size (measured either in bytes, or the number of packets). Each queue processes its entire contents during its turn. CQ supports a maximum of 16 queues.

To configure CQ, traffic must first be identified by protocol or with an access-list, and then placed in a custom queue:

Router(config)# access-list 101 permit tcp 172.16.0.0 0.0.255.255 any eq 1982

Router(config)# queue-list 1 protocol ip 1 list 101
Router(config)# queue-list 1 protocol ip 1 tcp smtp
Router(config)# queue-list 1 protocol ip 2 tcp domain
Router(config)# queue-list 1 protocol ip 2 udp domain
Router(config)# queue-list 1 protocol ip 3 tcp www
Router(config)# queue-list 1 protocol cdp 4
Router(config)# queue-list 1 protocol ip 5 lt 1000
Router(config)# queue-list 1 protocol ip 5 gt 800

Each custom queue is identified with a number (1, 2, 3 etc.). Once traffic has been assigned to custom queues, then each queue's parameters must be specified. Parameters can include:

• A limit – size of the queue, measured in number of packets.
• A byte-count – size of the queue, measured in number of bytes.

Configuration of queue parameters is straightforward:

Router(config)# queue-list 1 queue 1 limit 15
Router(config)# queue-list 1 queue 2 byte-count 2000
Router(config)# queue-list 1 queue 3 limit 25
Router(config)# queue-list 1 queue 4 byte-count 1024
Router(config)# queue-list 1 queue 4 limit 10
Finally, the custom queue must be applied to an interface: Router(config)# interface serial0/0 Router(config-if)# custom-queue-list 1 (Reference: http://www.cisco.com/en/US/docs/ios/12_0/qos/configuration/guide/qccq.html)
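The custom queue assignments and sizes can then be verified with a quick check:

Router# show queueing custom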
Weighted Fair Queuing (WFQ)

Weighted Fair Queuing (WFQ) dynamically creates queues based on traffic flows. Traffic flows are identified with a hash value generated from the following header fields:

• Source and Destination IP address
• Source and Destination TCP (or UDP) port
• IP Protocol number
• Type of Service value (IP Precedence or DSCP)

Packets of the same flow are placed in the same flow queue. By default, a maximum of 256 queues can exist, though this can be increased to 4096.

If the priority (based on the ToS field) of all packets is the same, bandwidth is divided equally among all queues. This results in low-traffic flows incurring a minimal amount of delay, while high-traffic flows may experience latency.

Packets with a higher priority are scheduled before lower-priority packets arriving at the same time. This is accomplished by assigning a sequence number to each arriving packet, which is calculated from the last sequence number multiplied by an inverse weight (based on the ToS field). In other words, a higher ToS value results in a lower sequence number, and the higher-priority packet will be serviced first.

WFQ is actually the default on slow serial links (2.048 Mbps or slower). To explicitly enable WFQ on an interface:

Router(config)# interface s0/0
Router(config-if)# fair-queue
The following are optional WFQ parameters: Router(config)# interface s0/0 Router(config-if)# fair-queue 128 1024
The 128 value increases the maximum size of a queue, measured in packets (64 is the default). The 1024 value increases the maximum number of queues from its default of 256.

The following queuing methods are based on WFQ:

• Class-Based Weighted Fair Queuing (CBWFQ)
• Low Latency Queuing (LLQ)
Class-Based WFQ (CBWFQ)

WFQ suffers from several key disadvantages:

• Traffic cannot be queued based on user-defined classes.
• WFQ cannot provide specific bandwidth guarantees to a traffic flow.
• WFQ is only supported on slower links (2.048 Mbps or less).

These limitations were corrected with Class-Based WFQ (CBWFQ). CBWFQ provides up to 64 user-defined queues. Traffic within each queue is processed using FIFO.

Each queue is provided with a configurable minimum bandwidth guarantee, which can be expressed in one of three ways:

• As a fixed amount (using the bandwidth command).
• As a percentage of the total interface bandwidth (using the bandwidth percent command).
• As a percentage of the remaining unallocated bandwidth (using the bandwidth remaining percent command).

Note: the above three commands cannot be mixed with each other – it is not possible to use the fixed bandwidth command on one class, and the bandwidth percent command on another class, within the same policy.

CBWFQ queues are only held to their minimum bandwidth guarantee during periods of congestion, and can thus exceed this minimum when the bandwidth is available.

By default, only 75% of an interface's total bandwidth can be reserved. This can be changed using the following command:

Router(config)# interface s0/0
Router(config-if)# max-reserved-bandwidth 90
The key disadvantage with CBWFQ is that no mechanism exists to provide a strict-priority queue for real-time traffic, such as VoIP, to alleviate latency. Low Latency Queuing (LLQ) addresses this disadvantage, and will be discussed in detail shortly.
Configuring CBWFQ

CBWFQ is implemented using the Modular QoS CLI (MQC). Specifically, a class-map is used to identify the traffic, a policy-map is used to enforce each queue's bandwidth, and a service-policy is used to apply the policy-map to an interface.

Router(config)# access-list 101 permit tcp 10.1.5.0 0.0.0.255 any eq www
Router(config)# access-list 102 permit tcp 10.1.5.0 0.0.0.255 any eq ftp

Router(config)# class-map HTTP
Router(config-cmap)# match access-group 101
Router(config)# class-map FTP
Router(config-cmap)# match access-group 102

Router(config)# policy-map THEPOLICY
Router(config-pmap)# class HTTP
Router(config-pmap-c)# bandwidth 256
Router(config-pmap)# class FTP
Router(config-pmap-c)# bandwidth 128

Router(config)# interface serial0/0
Router(config-if)# service-policy output THEPOLICY
The above example utilizes the bandwidth command to assign a fixed minimum bandwidth guarantee for each class.

Alternatively, a percentage of the interface bandwidth (75% of the total bandwidth, by default) can be guaranteed using the bandwidth percent command:

Router(config)# policy-map THEPOLICY
Router(config-pmap)# class HTTP
Router(config-pmap-c)# bandwidth percent 40
Router(config-pmap)# class FTP
Router(config-pmap-c)# bandwidth percent 20

The minimum guarantee can also be based on a percentage of the remaining unallocated bandwidth, using the bandwidth remaining percent command:

Router(config)# policy-map THEPOLICY
Router(config-pmap)# class HTTP
Router(config-pmap-c)# bandwidth remaining percent 20
Router(config-pmap)# class FTP
Router(config-pmap-c)# bandwidth remaining percent 20
Remember, the bandwidth, bandwidth percent, and bandwidth remaining percent commands must be used exclusively, not in tandem, with each other.
Low Latency Queuing (LLQ)

Low-Latency Queuing (LLQ) is an improved version of CBWFQ that includes one or more strict-priority queues, to alleviate latency issues for real-time applications. Strict-priority queues are always serviced before standard class-based queues.

The key difference between LLQ and PQ (which also has a strict priority queue) is that the LLQ strict-priority queue will not starve all other queues. The LLQ strict-priority queue is policed, either to a fixed bandwidth or to a percentage of the bandwidth.

As with CBWFQ, configuration of LLQ is accomplished using MQC:

Router(config)# access-list 101 permit tcp 10.1.5.0 0.0.0.255 any eq www
Router(config)# access-list 102 permit tcp 10.1.5.0 0.0.0.255 any eq ftp
Router(config)# access-list 103 permit tcp 10.1.5.0 0.0.0.255 any eq 666

Router(config)# class-map HTTP
Router(config-cmap)# match access-group 101
Router(config)# class-map FTP
Router(config-cmap)# match access-group 102
Router(config)# class-map SECRETAPP
Router(config-cmap)# match access-group 103

Router(config)# policy-map THEPOLICY
Router(config-pmap)# class HTTP
Router(config-pmap-c)# bandwidth percent 20
Router(config-pmap)# class FTP
Router(config-pmap-c)# bandwidth percent 20
Router(config-pmap)# class SECRETAPP
Router(config-pmap-c)# priority percent 50

Router(config)# int serial0/1
Router(config-if)# service-policy output THEPOLICY
Note that the SECRETAPP has been assigned to a strict-priority queue, using the priority percent command.
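The strict-priority queue can alternatively be policed to a fixed amount of bandwidth in kbps rather than a percentage (a brief sketch; the 512 kbps value is illustrative):

Router(config-pmap-c)# priority 512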
(Reference: http://www.cisco.com/en/US/docs/ios/12_0t/12_0t7/feature/guide/pqcbwfq.html)
Troubleshooting Queuing

To view the configured queuing mechanism and traffic statistics on an interface:

Router# show interface serial 0/0
 Serial 0/0 is up, line protocol is up
   Hardware is MCI Serial
   Internet address is 192.168.150.1, subnet mask is 255.255.255.0
   MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec, rely 255/255, load 1/255
   Encapsulation HDLC, loopback not set
   ARP type: ARPA, ARP Timeout 04:00:00
   Last input 00:00:00, output 00:00:01, output hang never
   Last clearing of "show interface" counters never
   Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
   Queueing strategy: Class-based queueing
   Output queue: 0/1000/64/0 (size/max total/threshold/drops)
      Conversations 0/1/256 (active/max active/max total)
      Reserved Conversations 1/1 (allocated/max allocated)
To view the packets currently stored in a queue: Router# show queue s0/0
To view policy-map statistics on an interface:

Router# show policy-map interface s0/1
 Serial0/1
  Service-policy output: THEPOLICY
    Class-map: SECRETAPP (match-all)
      123 packets, 44125 bytes
      1 minute offered rate 1544000 bps, drop rate 0 bps
      Match: access-group 103
      Weighted Fair Queuing
        Strict Priority
        Output Queue: Conversation 264
        Bandwidth 772 (Kbps)
        (pkts matched/bytes matched) 123/44125
Section 19 - QoS and Congestion Avoidance Queue Congestion Switch (and router) queues are susceptible to congestion. Congestion occurs when the rate of ingress traffic is greater than can be successfully processed and serialized on an egress interface. Common causes for congestion include: • The speed of an ingress interface is higher than the egress interface. • The combined traffic of multiple ingress interfaces exceeds the capacity of a single egress interface. • The switch/router CPU is insufficient to handle the size of the forwarding table. By default, if an interface’s queue buffer fills to capacity, new packets will be dropped. This condition is referred to as tail drop, and operates on a firstcome, first-served basis. If a standard queue fills to capacity, any new packets are indiscriminately dropped, regardless of the packet’s classification or marking. QoS provides switches and routers with a mechanism to queue and service higher priority traffic before lower priority traffic. Queuing is covered in detail in a separate guide. QoS also provides a mechanism to drop lower priority traffic before higher priority traffic, during periods of congestion. This is known as Weighted Random Early Detection (WRED), and is covered in detail in this guide.
Random Early Detection (RED) and Weighted RED (WRED) Tail drop proved to be an inefficient method of congestion control. A more robust method was developed called Random Early Detection (RED). RED prevents the queue from filling to capacity, by randomly dropping packets in the queue. RED essentially takes advantage of TCP’s ability to resend dropped packets. RED helps alleviate two TCP issues caused by tail drop: • TCP Global Synchronization – occurs when a large number of TCP packets are dropped simultaneously. Hosts will reduce TCP traffic (referred to as slow start) in response, and then ramp up again… simultaneously. This results in cyclical periods of extreme congestion, followed by periods of under-utilization of the link. • TCP Starvation – occurs when TCP flows are stalled during times of congestion (as detailed above), allowing non-TCP traffic to saturate a queue (and thus starving out the TCP traffic). RED will randomly drop queued packets based on configurable thresholds. By dropping only some of the traffic before the queue is saturated, instead of all newly-arriving traffic (tail drop), RED limits the impact of TCP global synchronization. RED will drop packets using one of three methods: • No drop – used when there is no congestion. • Random drop – used to prevent a queue from becoming saturated, based on thresholds. • Tail drop – used when a queue does become saturated. RED indiscriminately drops random packets. It has no mechanism to differentiate between traffic flows. Thus, RED is mostly deprecated. Weighted Random Early Detection (WRED) provides more granular control – packets with a lower IP Precedence or DCSP value can be dropped more frequently than higher priority packets. This guide will concentrate on the functionality and configuration of WRED.
WRED Fundamentals

There are two methods of configuring WRED.

Basic WRED configuration is accomplished by configuring minimum and maximum packet thresholds for each IP Precedence or DSCP value.

• The minimum threshold indicates the minimum number of packets that must be queued, before packets of a specific IP Precedence or DSCP value will be randomly dropped.
• The maximum threshold indicates the number of packets that must be queued, before all new packets of a specific IP Precedence or DSCP value are dropped. When the maximum threshold is reached, WRED essentially mimics the tail drop method of congestion control.
• The mark probability denominator (MPD) determines the number of packets that will be dropped, when the size of the queue is in between the minimum and maximum thresholds. This is measured as a fraction, specifically 1/MPD. For example, if the MPD is set to 5, one out of every 5 packets will be dropped. In other words, the chance of each packet being dropped is 20%.

Observe the following table:

Precedence   Minimum Threshold   Maximum Threshold   MPD
0            10                  25                  5
1            12                  25                  5
2            14                  25                  5
3            16                  25                  5
If the WRED configuration matched the above, packets with a precedence of 0 would be randomly dropped once 10 packets were queued. Packets with a precedence of 2 would similarly be dropped once 14 packets were queued. The maximum queue size is 25, thus all new packets of any precedence would be dropped once 25 packets were queued.

Advanced WRED configuration involves tuning WRED maximum and minimum thresholds on a per-queue basis, rather than to specific IP Precedence or DSCP values. In this instance, the min and max thresholds are based on percentages, instead of a specific number of packets. This is only supported on higher model Catalyst switches.

WRED only affects standard queues. Traffic from strict priority queues is never dropped by WRED.
Configuring Basic WRED

WRED configuration can be based on either IP Precedence or a DSCP value. To configure WRED thresholds using IP Precedence:

Router(config)# interface fa0/1
Router(config-if)# random-detect
Router(config-if)# random-detect precedence 0 10 25 5
Router(config-if)# random-detect precedence 1 12 25 5
Router(config-if)# random-detect precedence 2 14 25 5
Router(config-if)# random-detect precedence 3 16 25 5
Router(config-if)# random-detect precedence 4 18 25 5
Router(config-if)# random-detect precedence 5 20 25 5
The first random-detect command enables WRED on the interface. The subsequent random-detect commands apply a minimum threshold, maximum threshold, and MPD value for each specified IP Precedence level.

To configure WRED thresholds using DSCP values, the interface must first be placed in DSCP-based mode:

Router(config)# interface fa0/10
Router(config-if)# random-detect dscp-based
Router(config-if)# random-detect dscp af11 14 25 5
Router(config-if)# random-detect dscp af12 12 25 5
Router(config-if)# random-detect dscp af13 10 25 5
Router(config-if)# random-detect dscp af21 20 25 5
Router(config-if)# random-detect dscp af22 18 25 5
Router(config-if)# random-detect dscp af23 16 25 5
To view the WRED status and configuration on all interfaces:

Router# show interface random-detect
Router# show queueing
WRED is not compatible with Custom Queuing (CQ), Priority Queuing (PQ) or Weighted Fair Queuing (WFQ), and thus cannot be enabled on interfaces using one of those queuing methods.
(Reference: http://www.cisco.com/en/US/docs/ios/12_0/qos/configuration/guide/qcwred.html)
Configuring Advanced WRED with WRR On higher-end Catalyst models, WRED can be handled on a per-queue basis, and is configured in conjunction with a feature called Weighted Round Robin (WRR). Recall that interfaces have both ingress (inbound) queues and egress (outbound) queues. Each interface has one or more hardware queues (also known as transmit (TxQ) queues). Traffic is placed into egress hardware queues to be serialized onto the wire. There are two types of hardware queues. By default, traffic is placed in a standard queue, where all traffic is regarded equally. However, interfaces can also support strict priority queues, dedicated for higher-priority traffic. DiffServ QoS can dictate that traffic with a higher DSCP or IP Precedence value be placed in strict priority queues, to be serviced first. Traffic in a strict priority queue is never dropped due to congestion. A Catalyst switch interface may support multiple standard or strict priority queues, depending on the switch model. Cisco notates strict priority queues with a “p”, standard queues with a “q”, and WRED thresholds per queue (explained in a separate guide) with a “t”. If a switch interface supports one strict priority queue, two standard queues, and two WRED thresholds, Cisco would notate this as: 1p2q2t To view the supported number of hardware queues on a given Catalyst switch interface: Switch# show interface fa0/12 capabilities
The strict priority egress queue must be explicitly enabled on an interface: Switch(config)# interface fa0/12 Switch(config-if)# priority-queue out
Configuring Advanced WRED with WRR (continued)

Standard egress queues can be assigned weights, which dictate the proportion of traffic sent across each queue:

Switch(config-if)# wrr-queue bandwidth 127 255
The above command would be used if a particular port has two standard egress queues (remember, the number of queues depends on the Catalyst model). The two numbers are the weights for Queue 1 and Queue 2, respectively. The weight is a number between 1 and 255, and serves as a ratio for sending traffic. In the above example, Queue 2 would be allowed to transmit twice as much traffic as Queue 1 every cycle (255 is roughly twice that of 127). This way, the higher-priority traffic should always be serviced first, and more often. Next, WRED/WRR can be enabled for a particular queue. Cisco’s documentation on this is inconsistent on whether it is enabled by default, or not. To manually enable WRED/WRR on Queue 1: Switch(config-if)# wrr-queue random-detect 1
To disable WRED/WRR and revert to tail-drop congestion control: Switch(config-if)# no wrr-queue random-detect 1
Next, the WRED/WRR minimum and maximum thresholds must be tuned. Again, this is accomplished per standard queue, and based on a percentage of the capacity of the queue. Recall that each switch port has a specific set of queues (for example, 1p2q2t). The 2t indicates that two WRED/WRR thresholds can exist per standard queue. Switch(config-if)# wrr-queue random-detect min-threshold 1 5 10 Switch(config-if)# wrr-queue random-detect max-threshold 1 40 100
The first command sets two separate min-thresholds for Queue 1, specifically 5 percent and 10 percent. The second command sets two separate max-thresholds for Queue 1, specifically 40 percent and 100 percent. WRED begins randomly dropping a threshold's traffic once the queue fills beyond that threshold's minimum value, and drops all of that threshold's traffic once the queue fills beyond its maximum value.
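The same commands can be repeated for the other standard queue. The percentages below are purely illustrative (not Cisco defaults); a higher-priority queue is often configured so that its traffic is only dropped when the queue is nearly or completely full:

Switch(config-if)# wrr-queue random-detect min-threshold 2 70 100
Switch(config-if)# wrr-queue random-detect max-threshold 2 100 100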
Configuring Advanced WRED with WRR (continued)

Why two separate minimum and maximum thresholds per queue? Because packets of a specific CoS value can be mapped to a specific threshold of a specific queue. Observe:

Switch(config-if)# wrr-queue cos-map 1 1 0 1
Switch(config-if)# wrr-queue cos-map 1 2 2 3
The first command creates a map, associating Queue 1, Threshold 1 with CoS values 0 and 1. The second command creates a map, associating Queue 1, Threshold 2 with CoS values 2 and 3.

All traffic marked with CoS value 0 or 1 will have a minimum threshold of 5 percent and a maximum threshold of 40 percent (per the earlier commands). All traffic marked with CoS value 2 or 3 will have a minimum threshold of 10 percent and a maximum threshold of 100 percent.

The above wrr-queue commands are actually the default settings on higher-end Catalyst switches.

To view the QoS settings on a Catalyst interface:

Switch# show mls qos interface fa0/10
To view the queuing information for a Catalyst interface:

Switch# show mls qos interface fa0/10 queueing
To view QoS mapping configurations:

Switch# show mls qos maps
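Pulling the interface-level pieces together, below is a minimal configuration sketch, shown in running-config form, for a hypothetical 1p2q2t-capable port. The interface number and threshold values are illustrative, and the exact commands (and whether they are entered globally or per-interface) vary by Catalyst model:

! Globally enable QoS (required on most Catalyst platforms before
! per-port QoS settings take effect)
mls qos
!
interface FastEthernet0/12
 ! Enable the strict priority egress queue
 priority-queue out
 ! WRR weights for standard Queue 1 and Queue 2 (roughly a 1:2 ratio)
 wrr-queue bandwidth 127 255
 ! Enable WRED on Queue 1 and tune its two thresholds
 wrr-queue random-detect 1
 wrr-queue random-detect min-threshold 1 5 10
 wrr-queue random-detect max-threshold 1 40 100
 ! Map CoS 0-1 to Queue 1/Threshold 1, and CoS 2-3 to Queue 1/Threshold 2
 wrr-queue cos-map 1 1 0 1
 wrr-queue cos-map 1 2 2 3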
Configuring Class-Based WRED (CBWRED)

The functionality of Class-Based Weighted Fair Queuing (CBWFQ) can be combined with WRED to form Class-Based WRED (CBWRED). CBWFQ is covered in detail in a separate guide.

CBWRED is implemented within a policy-map:

Router(config)# class-map HIGH
Router(config-cmap)# match ip precedence 5

Router(config)# class-map LOW
Router(config-cmap)# match ip precedence 0 1 2

Router(config)# policy-map THEPOLICY
Router(config-pmap)# class HIGH
Router(config-pmap-c)# bandwidth percent 40
Router(config-pmap-c)# random-detect
Router(config-pmap-c)# random-detect precedence 5 30 50 5
Router(config-pmap)# class LOW
Router(config-pmap-c)# bandwidth percent 20
Router(config-pmap-c)# random-detect
Router(config-pmap-c)# random-detect precedence 0 20 50 5
Router(config-pmap-c)# random-detect precedence 1 22 50 5
Router(config-pmap-c)# random-detect precedence 2 24 50 5

Router(config)# int fa0/1
Router(config-if)# service-policy output THEPOLICY

The three numbers following each precedence value are the minimum threshold, the maximum threshold, and the mark-probability denominator, which sets the fraction of packets dropped when the average queue depth reaches the maximum threshold (a denominator of 5 means 1 in 5 packets is dropped at that point).
DSCP values can be used in place of IP Precedence:

Router(config)# class-map HIGH
Router(config-cmap)# match ip dscp af31 af41

Router(config)# policy-map THEPOLICY
Router(config-pmap)# class HIGH
Router(config-pmap-c)# bandwidth percent 40
Router(config-pmap-c)# random-detect dscp-based
Router(config-pmap-c)# random-detect dscp af31 28 50 5
Router(config-pmap-c)# random-detect dscp af41 30 50 5
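As with the IP Precedence example, the DSCP-based policy must still be applied outbound on an interface before it takes effect; the interface name below is only a placeholder:

Router(config)# interface fa0/1
Router(config-if)# service-policy output THEPOLICY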
To view the configured policy-maps:

Router# show policy-map

To view CBWRED drop statistics on an interface where the policy is applied:

Router# show policy-map interface fa0/1