Universidad Politécnica de Valencia
Arquitectura de redes de altas prestaciones

Congestion Control in Multistage Interconnection Networks

Author: Andrew Fecheyr Lippens
March 2010
Abstract

As the number of components in cluster-based systems increases, cost and power consumption increase as well. One way to mitigate both problems is to reduce the number of components in the interconnection network. Critical to the usability of networks with fewer components are management mechanisms that eliminate HOL blocking during situations of congestion. This paper starts by exploring research concerning the dynamics of congestion trees and then introduces three previously developed mechanisms that eliminate the negative side effects caused by them.
Contents

1 Introduction
  1.1 Technical Terms
  1.2 Head-of-line Blocking (HOL)
2 Related Work
3 Dynamic Evolution of Congestion Trees
  3.1 Effect of Switch Architecture
  3.2 Impact of Traffic Patterns
4 Proposed Solutions
  4.1 Basic RECN
  4.2 Enhanced RECN
  4.3 FBICM: Flow-Based Implicit Congestion Management
5 Performance Evaluation
  5.1 RECN Performance
  5.2 FBICM Performance
6 Conclusion
1 Introduction
Interconnection networks in large parallel computers and computer clusters are becoming more expensive and are consuming more power. Interconnect technologies such as Myrinet 2000 [4] and InfiniBand [5] have become pricey compared to processors and other system parts. Moreover, interconnects consume a growing fraction of the total system power as link speed increases [6], while their power consumption is almost independent of link utilization. This is a serious problem for system architects, as processor speeds keep climbing and with them the need for bandwidth.

Historically, interconnection networks have been overdimensioned to cope with peak burst traffic. This overcapacity effectively prevented congestion of the links. However, the increasing cost and power consumption of high-speed interconnects are forcing system designers to rethink this strategy. An effective way to reduce cost and power consumption is to reduce the number of network components and increase their utilization. This, however, increases network contention, which may lead to congestion. In general, congestion quickly spreads through the network due to flow control and forms congestion trees. The major negative effect of these situations appears when packets blocked due to congestion prevent the advance of other packets in the same queue. This effect, referred to as head-of-line blocking, is explained in section 1.2. Therefore, reducing the number of network components without a suitable congestion management mechanism may lead to dramatic throughput degradation when the network enters saturation.

This paper explores previously published research on the subject of congestion in lossless networks and summarizes proposed techniques to control it. Before we continue with the related work, we explain some technical terms in section 1.1, and in section 1.2 we dig a little deeper into what causes performance degradation in congestion trees: head-of-line blocking. Section 3 explores the dynamic evolution of congestion trees and its effects on congestion management, while section 4 introduces three mechanisms to eliminate head-of-line blocking: basic RECN, enhanced RECN and FBICM.
1.1 Technical Terms
This paper uses technical terms from the domain of computer networks. Some of them are explained here to help the reader.
Multistage interconnection networks (MINs) are a class of high-speed computer networks usually composed of processing elements on one end of the network and memory elements and/or other processing elements on the other end, connected by switching elements. The switching elements themselves are usually connected to each other in stages. MINs are often used as a Network-on-Chip and in computer clusters to connect the individual nodes. The lossless nature of these networks makes them quite different from the regular lossy networks that make up the internet.

Congestion trees: In a lossless network, when the aggregate input flow arriving at a switch exceeds the rate at which its outputs can drain it, the switch buffers completely fill up. This is called congestion. Flow control mechanisms (such as stop-and-go or credit-based schemes) propagate this congestion throughout the network, following the reverse paths of the flows contributing to the congestion [2]. The result is called a congestion tree. The switch where the congestion originated is referred to as the root of the tree and the endpoints as the leaves.
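To make the backpressure effect concrete, the following toy simulation (my own sketch, with arbitrary buffer sizes and rates, not code from the cited papers) models a chain of finite buffers with credit-style flow control: a hop may only forward when the next buffer has free space, so a slowly draining last hop fills every buffer back toward the source, which is exactly how a branch of a congestion tree grows toward the leaves.

```python
from collections import deque

# Toy model: source -> buf[0] -> buf[1] -> ... -> buf[N-1] -> sink.
# Credit-based flow control: a hop may forward only if the next buffer
# has free space. The sink drains slowly, so buffers fill up from the
# tail (the congestion root) back toward the source (a leaf).

N_HOPS = 4
CAPACITY = 4          # packets each buffer can hold (arbitrary)
DRAIN_PERIOD = 3      # sink accepts one packet every DRAIN_PERIOD cycles

buffers = [deque() for _ in range(N_HOPS)]

for cycle in range(1, 31):
    # Sink drains the last buffer slowly.
    if cycle % DRAIN_PERIOD == 0 and buffers[-1]:
        buffers[-1].popleft()

    # Forward between hops, from tail to head, respecting credits.
    for i in range(N_HOPS - 2, -1, -1):
        if buffers[i] and len(buffers[i + 1]) < CAPACITY:
            buffers[i + 1].append(buffers[i].popleft())

    # Source injects one packet per cycle if a credit is available.
    if len(buffers[0]) < CAPACITY:
        buffers[0].append(cycle)

    print(f"cycle {cycle:2d}  occupancy per hop: {[len(b) for b in buffers]}")

# Over time every buffer reaches CAPACITY: the congestion has propagated
# from the last hop back to the source.
```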
1.2 Head-of-line Blocking (HOL)
Head-of-line blocking is a phenomenon that appears in buffered network switches. A switch is usually made of buffered input ports, a switch fabric and buffered output ports. Because of the FIFO nature of the input buffers and of the switch design, the switch fabric can only consider the packet at the head of each input buffer in a given cycle. HOL blocking arises when packets arriving at different input ports are destined for the same output port: if the head-of-line packet of an input buffer cannot be switched to its output port because of contention, the rest of the packets in that buffer are blocked behind it. HOL blocking can limit the throughput of a switch to 58.6% of its peak value [7]. Figure 1 illustrates a network switch with input buffers that suffers from HOL blocking: two packets are destined for the same output port, needlessly blocking the packets behind them in their input buffers.

The behavior of network congestion depends on switch architecture. HOL blocking is one of the main problems arising in networks based on switches with queues at their input ports (Input Queuing, IQ switches). From the point of view of switch architecture, HOL blocking greatly affects IQ switches as well as switches with queues at both their input and output ports (Combined Input and Output Queuing, CIOQ switches). These architectures differ from that of traditional switches in communication networks, which use queues only at their output ports (Output Queuing, OQ switches).
Figure 1: Head-of-line blocking.
The OQ scheme has become infeasible because it requires the switch to operate at a much faster speed than the links in order to handle all possible incoming packets concurrently (theoretically, an N × N OQ switch must work N times faster than the link speed), and link speed in current high-speed interconnects is on the order of ten Gbps.

In CIOQ switches, HOL blocking can be reduced by increasing the switch speedup. A switch with a speedup of S can remove up to S packets from each input and deliver up to S packets to each output within a time slot, where a time slot is the time between packet arrivals at the input ports. Note that OQ switches have a speedup of N while IQ switches have a speedup of 1; CIOQ switches have values of S between 1 and N. However, for larger values of S switches become too expensive, and practical values lie between 1.5 and 2. Therefore, the possible increase in switch speedup is limited, and HOL blocking should instead be controlled by a suitable congestion management mechanism.
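The following toy simulation (my own illustration, not code from [1] or [7]; the port count and cycle count are arbitrary) contrasts plain FIFO input queues with virtual output queues (VOQ, one queue per output at each input) under uniform random traffic at full load. The FIFO variant saturates at roughly 60% of peak throughput, close to the classical 58.6% bound, while the VOQ variant stays near full throughput.

```python
import random
from collections import deque

# Toy cycle model of an N x N input-queued switch under uniform random
# traffic at full load, with and without virtual output queues.

def simulate(n_ports=16, cycles=20000, use_voq=False, seed=1):
    rng = random.Random(seed)
    if use_voq:
        queues = [[deque() for _ in range(n_ports)] for _ in range(n_ports)]
    else:
        queues = [deque() for _ in range(n_ports)]
    delivered = 0

    for _ in range(cycles):
        # One new packet per input per cycle (full load), random output.
        for i in range(n_ports):
            dst = rng.randrange(n_ports)
            (queues[i][dst] if use_voq else queues[i]).append(dst)

        # Each output accepts at most one packet per cycle; inputs are
        # served in random order (simple random arbitration).
        taken_outputs = set()
        for i in rng.sample(range(n_ports), n_ports):
            if use_voq:
                # Pick any non-empty VOQ whose output is still free.
                for dst in range(n_ports):
                    if queues[i][dst] and dst not in taken_outputs:
                        queues[i][dst].popleft()
                        taken_outputs.add(dst)
                        delivered += 1
                        break
            else:
                # FIFO: only the head-of-line packet may be considered.
                if queues[i] and queues[i][0] not in taken_outputs:
                    taken_outputs.add(queues[i].popleft())
                    delivered += 1

    return delivered / (cycles * n_ports)   # fraction of peak throughput

print("FIFO input queues :", round(simulate(use_voq=False), 3))  # roughly 0.6
print("Virtual output qs :", round(simulate(use_voq=True), 3))   # close to 1
```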
2 Related Work
Congestion in interconnection networks is a well-known problem, and many strategies have been proposed to alleviate its negative effects; the subject has been researched for over two decades [8]. A large number of strategies have been proposed for controlling the formation of congestion trees and for eliminating or reducing their negative effects. Many of them consider congestion in multiprocessor systems, where saturation trees appear due to concurrent requests to the same shared memory module. A taxonomy of hot-spot management strategies is given in [9]. The authors divide the strategies into three categories: avoidance-based, prevention-based and detection-based strategies.
Avoidance-based, or proactive, congestion management techniques use a priori knowledge of resource requirements to reserve resources before starting data transmission. They require prior planning in order to guarantee that congestion trees will not appear. Some of these techniques are software-based [10, 11], while others are hardware-oriented [12]. In general, these strategies are related to quality of service (QoS) requirements.

Prevention-based strategies control traffic in such a way that congestion trees should not form. Decisions are made on the fly, often limiting or modifying routes or memory accesses. These techniques can be implemented in software [13] or hardware [14, 15].

Detection-based strategies, also referred to as reactive strategies [1], allow congestion trees to form. They detect these formations and activate a control mechanism in order to solve the problem. Detection is usually based on feedback information; for instance, it is possible to measure switch buffer occupancy [19, 17] or the amount of memory access requests [15] in order to detect congestion. When congestion is detected, the sources injecting traffic into the network are notified to reduce or stop the injection. The large body of research in this area can be divided into several subclasses. Some solutions broadcast notifications to all the sources [16, 17]; these consume a large fraction of the bandwidth for notification, which makes them very inefficient. Other solutions send notifications only to the sources contributing to the congestion [18], making them more efficient. However, none of these solutions scales: as network size and/or link bandwidth increases, the effective delay between congestion detection and reaction (and the number of packets injected during that time) increases linearly. This results in slow response and/or the typical oscillations that arise in closed-loop control systems with delays in the feedback loop, and both translate into significant performance degradation. Moreover, these limitations stem from physical constraints, as notifications cannot propagate faster than the speed of light, and are therefore unavoidable. Hence, reactive congestion management alone may not be appropriate for large future systems with long round trip times.

The only scalable reactive congestion management strategies are those that react locally by reducing or stopping injection at the endpoints attached to the switch where congestion is detected [20, 21]. Although those techniques have been reported to achieve good results with synthetic traffic where all the nodes have the same injection rate, they may fail under real traffic because the sources contributing to congestion are usually different from those that are throttled.

In the next section we will explore the dynamics of congestion trees in order to better understand their growth and how they impact performance.
Moreover, insights gained by the research performed in that area were key to the development of the enhancements applied to the basic mechanism we describe later on.
3 Dynamic Evolution of Congestion Trees
In [2] the authors analyze the different scenarios that may arise in the formation of congestion trees in networks with CIOQ switches. They focus on two key aspects that greatly influence how congestion trees are formed: the architecture of the switch and the traffic pattern.

The traditional view of congestion trees is that congestion grows from the root to the leaves: the root of the congestion tree (an output port at some switch) becomes congested and, due to the use of flow control, congestion spreads from the root towards the leaves. However, this situation happens only in a particular scenario that rarely occurs: when the different traffic flows that form the tree join only at the root.
Figure 2: Different dynamic formations of trees. Speedup is two. (a) Traditional view of the formation of a congestion tree, (b) different locations (ingress and egress) where congestion is detected for one tree, and (c) the root switch moves downstream.
Figure 2.a shows an example of the traditional view: five flows form a congestion tree by meeting at the root switch. The sum of the injection rates of all the flows is higher than the link bandwidth, and thus a congestion tree is formed.
3.1 Effect of Switch Architecture
The formation of a congestion tree differs depending on the number of incoming flows and the rate at which they arrive at a switch. The switch architecture, especially the speedup of the switch (described in section 1.2), is also a determining factor, as the following example shows. Figure 3 depicts three different situations. In the first one (a), the switch has no speedup and two flows, injected at the full rate, are headed to the same destination port. Because a switch without speedup can only handle one packet per cycle, packets are queued at the ingress side: congestion occurs at the ingress side of the switch. However, if speedup is used (Figure 3.b), congestion occurs at the egress side. When the number of flows arriving at the root switch is less than or equal to the speedup, the switch can forward the flows to the egress side at the same rate as they are arriving, but because the output link speed is only as high as one input link, packets start to queue at the egress side.
Figure 3: HOL blocking within a switch with different speedups and different flows.
Obviously, the situation depends on the number of incoming flows and the speedup of the switch. Figure 3.c illustrates a number of incoming flows higher than the switch speedup, causing the congestion to once again form at the ingress side. Note that congestion will occur regardless of the destination of the incoming flows.

This effect may also occur at several points along a congestion tree. In figure 2.b we can observe a congestion tree formed by eight flows heading to the same destination. Three flows merge at sw1, while other flows merge at different points, finally converging at the root switch. In this situation, congestion first occurs at the ingress side at sw1 and sw2. Once all the flows arrive at the root switch they merge and congestion occurs again; in this case, however, congestion occurs at the egress side, at sw3, sw4 and the root switch.
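The arithmetic behind the example can be written down in a few lines. The helper below is my own formulation of the simplified hot-spot model used above (all rates normalized to the link rate); it reports how fast the ingress and egress queues involved grow for the three situations of Figure 3.

```python
def queue_growth(num_flows: int, speedup: float, link_rate: float = 1.0):
    """Toy arithmetic for one hot-spot output port.

    num_flows flows arrive on distinct inputs, each at full link rate,
    all destined to the same output. The fabric can move at most
    `speedup` packets per time slot toward that output, and the output
    link drains `link_rate` packets per slot. Returns the growth rate
    (packets per slot) of the ingress and egress queues involved.
    """
    to_egress = min(num_flows * link_rate, speedup)
    ingress_growth = max(0.0, num_flows * link_rate - to_egress)
    egress_growth = max(0.0, to_egress - link_rate)
    return ingress_growth, egress_growth

# The three situations of Figure 3:
for flows, s in [(2, 1), (2, 2), (3, 2)]:
    ing, eg = queue_growth(flows, s)
    side = "ingress" if ing > 0 else "egress"
    print(f"{flows} flows, speedup {s}: ingress +{ing}/slot, "
          f"egress +{eg}/slot -> congestion first builds at the {side} side")
```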
3.2 Impact of Traffic Patterns
In the previous scenario it was assumed that all the flows injected packets at the same rate and started at the same time. This can happen in some cases, such as a barrier synchronization among different processes in a multiprocessor system, but often it does not. Scenarios arise where different flows contributing to the same congestion tree start at different times and inject at different rates, which leads to more sophisticated dynamics in the formation of congestion trees. The authors in [2] describe these dynamics. They show that the root of a congestion tree may move upstream through the collapse of some branches, or downstream through the addition of new flows; the latter is illustrated in figure 2.c. Also, congestion trees may overlap at several network points without merging. And finally, a congestion tree may be formed from local and transient congestion trees that later merge due to the limited speedup. Therefore, in all of these cases some kind of architectural support is needed in order to separate each congestion tree and to follow the complex dynamics they may exhibit.

Now that the dynamics of congestion trees have been explored in some depth, we can describe the techniques proposed to solve their negative effects.
4 Proposed Solutions
In this section we describe three proposed solutions that effectively eliminate HOL blocking and, with it, the negative impact of congestion trees on network throughput. First we present a summary of the basic RECN technique by J. Duato et al. [1]. This technique was enhanced by P.J. García et al. in [2]; we describe the two enhancements they developed and the performance improvements they achieved. Finally, this paper includes a brief overview of FBICM, an adaptation of the RECN approach for networks with table-based, distributed routing [3].
4.1 Basic RECN
RECN stands for regional explicit congestion notification, because it detects congestion and explicitly notifies the switches throughout the region that will be affected by a given congestion tree [1]. RECN differs from previously proposed local explicit congestion notification (LECN) strategies [22] in that it is able to eliminate HOL blocking across multiple stages. The authors in [1] focus on removing HOL blocking in multistage interconnection networks with deterministic routing, effectively making congestion trees harmless. The technique works by separating the packets of congested flows from the packets belonging to uncongested flows. By doing so, congested packets no longer interfere with uncongested packets, or even with packets from congested flows belonging to another tree. This is important since different congested flows usually experience significantly different degrees of congestion. The technique therefore focuses on quickly detecting congested flows and immediately mapping them to separate queues. The deallocation of those queues when they are no longer required is also important in order to minimize the number of resources needed to eliminate HOL blocking.
4.1.1 Congestion Detection
The congestion detection mechanism is simple, fast and accurately identifies the sources that are contributing to congestion. RECN detects congestion when the occupancy of a given output queue reaches a certain threshold. This works both for congestion at the end nodes of the network and for internal network congestion. Fully determining the sources that are contributing to congestion was deemed infeasible due to data RAM bandwidth consumption and the corresponding interference with packet transmission, so the authors opted for another approach: the switch simply propagates notifications to the input ports that are contributing to congestion. Each output port has a flag associated with each input port, which records the transmission of the corresponding notification in order to avoid repeated congestion notifications.
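A minimal sketch of this detection step, under my own simplifying assumptions (a fixed occupancy threshold and one notification flag per input port; the threshold value and class layout are hypothetical, not taken from [1]):

```python
from collections import deque

class EgressPort:
    """Sketch of RECN-style congestion detection at an egress queue:
    once occupancy crosses THRESHOLD, each contributing input port is
    notified at most once (one flag per input port)."""
    THRESHOLD = 8   # arbitrary occupancy threshold

    def __init__(self, num_inputs: int, port_id: int):
        self.port_id = port_id
        self.queue = deque()
        self.notified = [False] * num_inputs   # one flag per input port

    def enqueue(self, packet, from_input: int):
        self.queue.append((packet, from_input))
        if len(self.queue) >= self.THRESHOLD and not self.notified[from_input]:
            self.notified[from_input] = True
            self.send_notification(from_input)

    def send_notification(self, input_port: int):
        # A real switch would attach the turnpool of the congested point;
        # here we just log the event.
        print(f"notify input {input_port}: output {self.port_id} is congested")

    def dequeue(self):
        if self.queue:
            packet, _ = self.queue.popleft()
            return packet

# Usage: two inputs hammer output 3; each receives exactly one notification.
port = EgressPort(num_inputs=4, port_id=3)
for i in range(20):
    port.enqueue(packet=f"pkt{i}", from_input=i % 2)
```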
4.1.2 Congestion Notification and Queue Allocation
When congestion notifications are received at some input port, a set aside queue (SAQ) is locally allocated for the packets leading to the congested output port. All the incoming packets that request the congested output port will be stored in
that SAQ, thus preventing them from blocking other packets stored in the main queue for that input port.

Path information is stored in a content addressable memory (CAM). Each CAM line contains the path information associated with the corresponding SAQ. The reason for using a CAM is that the switch has to check the destination address field in the header of every incoming data packet and compare it against all the path information stored in the CAM to determine whether the packet is going to cross the root of some congestion tree; in that case, it is stored in the corresponding SAQ. Note that packets from uncongested flows that use the same output port at the current switch as packets from congested flows will not match any CAM line, and therefore will not be stored in any SAQ, thus avoiding HOL blocking. It is possible for two congestion trees to overlap, and one of them may even be a subtree of the other; all of these cases are handled automatically when a CAM is used to store path information and incoming notification packets are matched against it.

Figure 4 shows the CAM structure required to store path information in a switch for PCI Express Advanced Switching (AS), an open-standard fabric-interconnection technology developed by the ASI Special Interest Group [23]. RECN benefits from the routing mechanism found in AS. In particular, AS uses source deterministic routing: the AS header includes a turnpool made up of 31 bits that contains all the turns (the offset from the incoming port to the outgoing port) for every switch along the path. An advantage of this routing method is that it makes it possible to address a particular network point from any other point in the network. Thus, by inspecting the appropriate turnpool bits of a packet, a switch can know in advance whether the packet will pass through a particular network point.
Figure 4: CAM structure.

RECN detects congestion only at switch egress ports. When a normal egress queue receives a packet and fills over a given threshold, a notification is sent to the sender ingress port indicating that the output port is congested. This congestion notification includes the routing information (a turnpool and the corresponding mask bits) needed to reach the congested output port from the notified ingress port.
Upon reception of a notification, each ingress port allocates a new SAQ and fills the corresponding CAM line with the received turnpool and mask bits. From that moment on, every incoming packet that will pass through the congested point (easily detected from the packet turnpool) is mapped to the newly allocated SAQ, thus eliminating the HOL blocking it could cause. If an ingress SAQ subsequently becomes congested, a new notification is sent upstream to some egress port, which reacts in the same way by allocating an egress SAQ, and so on. As the notifications travel upstream, the information indicating the route to the congested point is updated accordingly, in such a way that growing sequences of turns and mask bits are stored in the CAM lines. To guarantee in-order delivery, whenever a new SAQ is allocated, forwarding packets from that queue is disabled until the last packet in the normal queue (at the moment of the SAQ allocation) has been forwarded; this is implemented with a simple pointer associated with that last packet and pointing to the blocked SAQ. RECN also implements a dedicated Xon/Xoff flow control, following the stop-and-go model, for each individual SAQ. Finally, RECN keeps track (with a control bit on each CAM line) of the network points that are leaves of a congestion tree. Whenever a SAQ with the leaf bit set to one empties, the queue is deallocated and a notification is sent downstream, repeating the process until the root of the congestion tree is reached.
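The following sketch shows one way the CAM lookup and SAQ mapping could be modeled. It is my own simplification: turnpools are represented as tuples of turns and matching is a plain prefix comparison, whereas real AS turnpools are bit fields matched with mask bits.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class CamLine:
    turns: tuple           # route (sequence of turns) to the congested point
    leaf: bool = True      # set while this port is a leaf of the tree
    saq: deque = field(default_factory=deque)

class InputPort:
    def __init__(self):
        self.normal_queue = deque()
        self.cam = []      # allocated CAM lines / SAQs

    def on_notification(self, turns: tuple):
        """Allocate a SAQ for a newly notified congested point."""
        if not any(line.turns == turns for line in self.cam):
            self.cam.append(CamLine(turns=turns))

    def on_packet(self, turnpool: tuple):
        """Store the packet in the SAQ of the congested point it will
        cross, if any; otherwise in the normal queue (no HOL blocking)."""
        for line in self.cam:
            if turnpool[:len(line.turns)] == line.turns:
                line.saq.append(turnpool)
                return "SAQ " + str(line.turns)
        self.normal_queue.append(turnpool)
        return "normal queue"

port = InputPort()
port.on_notification((4, 2))        # a point two hops away is congested
print(port.on_packet((4, 2, 7)))    # crosses the congested point -> SAQ
print(port.on_packet((4, 5, 1)))    # different path -> normal queue
```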
4.2 Enhanced RECN
In [2] the authors propose two enhancements to RECN. Both enhancements allow RECN to keep track of the dynamics of congestion trees explored in section 3 and thereby significantly increase performance.

The first enhancement allows RECN to detect congestion at switch ingress ports. Basic RECN defines SAQs at ingress and egress ports, but it only detects congestion at egress ports. However, section 3.1 shows that congestion may first occur at ingress ports. In these cases basic RECN never detects congestion at the root; instead, it detects congestion at the immediate upstream switches, and only once the root ingress queues are full, so it does not react quickly enough to eliminate HOL blocking at an important part of the tree. The enhancement is accomplished by replacing the normal queue at each ingress port with a set of small buffers referred to as detection queues, so that at ingress ports the memory is shared by detection queues and SAQs. The detection queues are structured at the switch level: there are as many detection queues as output ports in the switch, and packets heading to a particular output port are directed to the associated detection queue. When a detection queue fills over a given threshold, congestion is detected, and the output port causing it is simply the one associated with that detection queue. Once congestion is detected at an ingress port, a new SAQ is allocated at that port and the turnpool identifying the offending output port is stored in the corresponding CAM line; the detection queue where congestion was detected and the newly allocated SAQ are then swapped. As the new SAQ can be considered congested, a notification is sent upstream. Figure 5 shows the proposed mechanism.
Figure 5: Mechanism for detecting and handling congestion at the ingress side: (a) queue status at the detection moment, (b) queue status after detection.
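A compact model of this first enhancement might look as follows (my own illustration; the threshold value and the queue-swap detail are simplified assumptions, not the authors' implementation):

```python
from collections import deque

class IngressPort:
    """Sketch of ingress-side detection: one detection queue per switch
    output. When a detection queue crosses the threshold, the output
    causing congestion is the one associated with that queue; the full
    detection queue becomes a SAQ and a fresh detection queue replaces it."""
    THRESHOLD = 8   # arbitrary

    def __init__(self, num_outputs: int):
        self.detection = [deque() for _ in range(num_outputs)]
        self.saqs = {}                    # output port -> set-aside queue

    def on_packet(self, output_port: int, packet):
        if output_port in self.saqs:      # output already known to be congested
            self.saqs[output_port].append(packet)
            return
        q = self.detection[output_port]
        q.append(packet)
        if len(q) >= self.THRESHOLD:
            # Swap: the filled detection queue becomes the SAQ.
            self.saqs[output_port] = q
            self.detection[output_port] = deque()
            self.notify_upstream(output_port)

    def notify_upstream(self, output_port: int):
        print(f"ingress congestion detected for output {output_port}; "
              f"SAQ allocated, notification sent upstream")

port = IngressPort(num_outputs=4)
for i in range(10):
    port.on_packet(output_port=2, packet=i)   # hot output triggers the swap
port.on_packet(output_port=1, packet="x")     # other outputs stay unaffected
```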
The second enhancement is related to the actions taken upon reception of notifications. Basic RECN does not allocate SAQs for all the notifications it receives; this restriction ensures that no out of order packet delivery is introduced, but it can leave HOL blocking in place. Figure 6.a shows an example where RECN discards a notification. First, sw1 is notified that point A is congested and allocates a new SAQ for that congested point; from that moment on, packets going through A are stored in that SAQ. Later, point B becomes congested at sw3 and new notifications are sent upstream, reaching sw1. This notification is a more specific notification (as B is further away than A) and basic RECN discards it. As a consequence, traffic addressed to point B that passes through point A is mapped to the same SAQ as the flows of the first congestion tree, which reintroduces HOL blocking; basic RECN therefore supports neither downstream movement of the root nor the situation where a congestion tree forms from the leaves towards the root. Figure 6.b shows the complementary case: sw1, which already has a SAQ allocated for B, receives a less specific notification for point A (for instance, due to an overlapping congestion tree). Basic RECN accepts this notification and allocates a new SAQ for A, and an arriving packet is stored in the SAQ whose turnpool matches its own in the most turns, so no out of order delivery is introduced.
Figure 6: Basic RECN treatment of (a) more specific and (b) less specific notifications.
The second enhancement is therefore to accept all notifications, whether more or less specific. To avoid out of order delivery, when a new SAQ is allocated due to a more specific notification it is blocked (it must not send packets) until all the packets that were stored in the SAQ associated with the less specific notification at the moment of allocation have left that queue. This can be accomplished by placing a pointer to the newly allocated SAQ in the "old" SAQ.

Together, the two enhancements let RECN detect congestion at ingress ports and correctly follow movements of the root switch that basic RECN does not take into account. Performance evaluations highlighting the gains provided by these enhancements are reported in section 5.1.
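The sketch below models this second enhancement under my own simplifications: every notification allocates a SAQ, a packet is mapped to the SAQ whose stored route matches its turnpool in the most turns, and a SAQ created by a more specific notification stays blocked while the overlapping, less specific SAQ still holds packets (the paper only requires waiting for the packets present at allocation time; blocking until the older SAQ is empty is a coarser stand-in).

```python
from __future__ import annotations
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Saq:
    turns: tuple
    queue: deque = field(default_factory=deque)
    blocked_until_empty: Saq | None = None   # pointer to the older SAQ

    def may_forward(self) -> bool:
        return self.blocked_until_empty is None or not self.blocked_until_empty.queue

saqs: list[Saq] = []

def on_notification(turns: tuple):
    new = Saq(turns=turns)
    # If an existing SAQ covers a prefix of the new route, the new SAQ is
    # the more specific one and must wait for the old one to drain.
    for old in saqs:
        if turns[:len(old.turns)] == old.turns and len(turns) > len(old.turns):
            new.blocked_until_empty = old
    saqs.append(new)

def map_packet(turnpool: tuple) -> Saq | None:
    """Return the SAQ whose route matches the turnpool in the most turns."""
    best = None
    for s in saqs:
        if turnpool[:len(s.turns)] == s.turns:
            if best is None or len(s.turns) > len(best.turns):
                best = s
    return best

on_notification((4,))         # point A, one hop away
saqs[0].queue.append((4, 9))  # a packet already waiting in A's SAQ
on_notification((4, 2))       # point B, further away: more specific
b = map_packet((4, 2, 7))     # traffic crossing both A and B goes to B's SAQ
print(b.turns, "may forward?", b.may_forward())   # blocked until A drains
saqs[0].queue.clear()
print(b.turns, "may forward?", b.may_forward())
```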
4.3 FBICM: Flow-Based Implicit Congestion Management

An essential requirement for any RECN implementation is the use of source-based routing, where the entire path of a packet is encoded in its header: RECN identifies congested packets by comparing the explicit route stored in the header of each packet against the explicit routes leading to congested points. This requirement prevents RECN from being applied in network technologies that use table-based, distributed routing, like InfiniBand [5], a high-performance interconnection
technology that is becoming increasingly popular in cluster-based systems. Flow-Based Implicit Congestion Management (FBICM) was developed to bridge this significant gap. Like RECN, it eliminates the HOL blocking produced by congested flows, but it can be employed in networks implementing table-based, distributed deterministic routing. FBICM uses the same basic principles as RECN: detect congested packets and separate them from non-congested ones. The main differences between FBICM and RECN are:

1. Congested points are not addressed by means of explicit routes. Instead, FBICM identifies congested points implicitly, by keeping track of the different flows passing through them.

2. Congested packets are not identified as packets following an explicit route to a congested point, but as packets belonging to flows involved in a congestion situation.

3. Congestion information is associated with destinations, whereas in RECN it is associated with internal network points.

For a detailed description of FBICM, including switch architecture and operation examples, please refer to [3]. That paper further describes how implicit information about congested points can be stored and propagated (once congestion is detected) without requiring explicit routes, and how packets belonging to congested flows can be identified taking into account just their destination.
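As a rough illustration of the difference, the sketch below keys all congestion state by packet destination instead of by explicit route. The detection rule (per-destination occupancy of the normal queue crossing a threshold) and all names are my own simplifications; see [3] for the actual FBICM switch organization.

```python
from collections import deque

class FbicmIngress:
    """Sketch: congestion information is associated with destinations
    (flows), not explicit routes, so it works with table-based routing
    where the header only carries the destination."""
    THRESHOLD = 8   # arbitrary

    def __init__(self):
        self.normal_queue = deque()
        self.congested_flows = {}    # destination -> set-aside queue
        self.per_dest_count = {}     # normal-queue occupancy per destination

    def on_packet(self, destination, packet):
        if destination in self.congested_flows:
            self.congested_flows[destination].append(packet)
            return
        self.normal_queue.append((destination, packet))
        n = self.per_dest_count.get(destination, 0) + 1
        self.per_dest_count[destination] = n
        if n >= self.THRESHOLD:
            # This destination dominates the queue: treat the flow as
            # congested and separate its packets from now on.
            self.congested_flows[destination] = deque()
            print(f"flow to destination {destination} marked as congested")

switch = FbicmIngress()
for i in range(12):
    switch.on_packet(destination=42, packet=i)   # hot-spot destination
switch.on_packet(destination=7, packet="ok")     # unaffected flow
```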
5 Performance Evaluation
The original performance analysis of basic RECN was presented in [1], the improvements introduced by enhanced RECN were evaluated in [2], and the authors of FBICM reported their benchmark results in [3]. All three used a custom-developed, detailed event-driven simulator that allowed them to model the network at the register transfer level; for a thorough description of the simulator we recommend section 5.1 of [2]. As all three papers report their performance evaluations in great detail, we only present the general setup and highlight some interesting outcomes.
5.1 RECN Performance
RECN was tested in two different scenarios: using well-defined synthetic traffic patterns and using I/O traces provided by Hewlett-Packard Labs (HP Storage Systems Program, http://www.hpl.hp.com/research/ssp/software/).
Figure 7 shows interesting results for basic RECN, enhanced RECN and VOQsw (Virtual Output Queuing at switch level). A crossbar speedup of 1.5 is used. Time is plotted on the horizontal axis, while network throughput is displayed on the vertical axis. It can be noticed that the basic RECN mechanism starts to degrade performance as throughput drops from 44 bytes/ns to 37 bytes/ns. However, when the congestion tree disappears, performance recovers, although exhibiting a significant oscillation, due to the fact that HOL blocking is not completely eliminated. On the other hand, enhanced RECN behaves optimally as it tolerates the congestion tree with no performance degradation.
Figure 7: Network throughput for RECN and VOQsw with 1.5 speedup: (a) traffic case #1, (b) traffic case #2.
Taking the throughput results of VOQsw as a reference, it can be deduced that both RECN versions handle the congestion caused by this specific traffic case very well, whereas the performance achieved by VOQsw is quite poor. Looking at the performance of both basic and enhanced RECN, we can conclude that using either of them virtually eliminates HOL blocking and that they perform far better than the VOQsw reference.
Figure 8 highlights a situation where the enhancements proposed to RECN are clearly beneficial. It depicts the throughput obtained for the SAN (Storage Area Network) traces provided by HP, using switches without speedup, when a sudden congestion tree forms.
As can be observed, basic RECN suffers severe degradation: network throughput drops from 44 bytes/ns to 10 bytes/ns (a 77% drop). Also noticeable is that the recovery time of basic RECN is excessive; in fact, it never fully recovers. Enhanced RECN, on the other hand, is able to withstand both the absence of speedup and the dynamics of the congestion tree, achieving maximum performance. It handles this workload without a hitch by completely eliminating HOL blocking, even at the ingress ports.
Figure 8: Network throughput without speedup for SAN traces.
Further tests in [2] reveal that for large networks (2048 × 2048 MINs) the benefits of enhanced RECN are even more pronounced. Again, the interested reader caring for the full picture should consult the complete results.
5.2 FBICM Performance

The evaluation of FBICM in [3] shows that FBICM achieves maximum performance regardless of network size or traffic pattern, even in congestion situations. Virtual Output Queues at the network level (VOQnet) obtain results similar to those of FBICM, but require far more resources, especially in large networks. VOQsw consumes resources similar to FBICM, but its performance is poor in hot-spot traffic situations, regardless of network size. As can be expected, the results for FBICM are similar to the RECN ones, but in a different interconnect context. We refer the reader to [3] for the full evaluation.
6 Conclusion
This paper started by exploring the complex nature of the formation of congestion trees in networks with CIOQ switches. It presented examples showing that, contrary to common belief, a congestion tree can grow in a variety of ways, depending on the switch architecture (crossbar speedup) and the traffic load. We also showed that congestion situations are a serious threat to the performance of interconnection networks: packet flows contributing to congestion slow the advance of other flows due to HOL blocking, effectively degrading overall network performance.

Many techniques have been proposed to solve this problem. Basic RECN accomplishes decent results, and with the enhancements described in section 4.2 it achieves excellent performance, eliminating HOL blocking in a truly efficient and scalable way. Since RECN requires deterministic source routing, networks using table-based, distributed deterministic routing cannot benefit from a RECN implementation. For these networks FBICM offers an equally effective solution: its performance is on par with RECN's and far better than that of the other proposed solutions.
References

[1] J. Duato, I. Johnson, J. Flich, F. Naven, P. García, and T. Nachiondo, A New Scalable and Cost-Effective Congestion Management Strategy for Lossless Multistage Interconnection Networks, 2005.

[2] P.J. García, J. Flich, J. Duato, I. Johnson, F.J. Quiles, and F. Naven, Dynamic Evolution of Congestion Trees: Analysis and Impact on Switch Architecture, 2005.

[3] J. Escudero-Sahuquillo, P.J. García, F.J. Quiles, J. Flich, and J. Duato, FBICM: Efficient Congestion Management for High-Performance Networks Using Distributed Deterministic Routing, 2008.

[4] Myrinet 2000 Series Networking. Available at http://www.myricom.com/myrinet/overview/.

[5] InfiniBand Trade Association, InfiniBand Architecture Specification, Volume 1, Release 1.0. Available at http://www.infinibandta.org.
[6] L. Shang, L. S. Peh, and N. K. Jha, Dynamic Voltage Scaling with Links for Power Optimization of Interconnection Networks, in Proc. Int. Symp. on High-Performance Computer Architecture, pp. 91–102, Feb. 2003.

[7] M. Karol, M. Hluchyj, and S. Morgan, Input Versus Output Queueing on a Space-Division Packet Switch, IEEE Transactions on Communications, vol. 35, no. 12, pp. 1347–1356, Dec. 1987.

[8] G. Pfister and A. Norton, Hot Spot Contention and Combining in Multistage Interconnect Networks, IEEE Trans. on Computers, vol. C-34, pp. 943–948, Oct. 1985.

[9] S. P. Dandamudi, Reducing Hot-Spot Contention in Shared-Memory Multiprocessor Systems, IEEE Concurrency, vol. 7, no. 1, pp. 48–59, Jan. 1999.

[10] R. Bianchini, T. J. LeBlanc, L. I. Kontothanassis, and M. E. Crovella, Alleviating Memory Contention in Matrix Computations on Large-Scale Shared-Memory Multiprocessors, Technical Report 449, Dept. of Computer Science, University of Rochester, April 1993.

[11] P. Yew, N. Tzeng, and D. H. Lawrie, Distributing Hot-Spot Addressing in Large-Scale Multiprocessors, IEEE Transactions on Computers, vol. 36, no. 4, pp. 388–395, April 1987.

[12] M. Wang, H. J. Siegel, M. A. Nichols, and S. Abraham, Using a Multipath Network for Reducing the Effects of Hot Spots, IEEE Transactions on Parallel and Distributed Systems, vol. 6, no. 3, pp. 252–268, March 1995.

[13] W. S. Ho and D. L. Eager, A Novel Strategy for Controlling Hot Spot Contention, in Proc. Int. Conf. Parallel Processing, vol. I, pp. 14–18, 1989.

[14] G. S. Almasi and A. Gottlieb, Highly Parallel Computing, 1994.

[15] S. L. Scott and G. S. Sohi, The Use of Feedback in Multiprocessors and Its Application to Tree Saturation Control, IEEE Transactions on Parallel and Distributed Systems, vol. 1, no. 4, pp. 385–398, Oct. 1990.

[16] M. Thottethodi, A. R. Lebeck, and S. S. Mukherjee, Self-Tuned Congestion Control for Multiprocessor Networks, in Proc. Int. Symp. High-Performance Computer Architecture, Feb. 2001.

[17] W. Vogels et al., Tree-Saturation Control in the AC3 Velocity Cluster Interconnect, in Proc. 8th Conference on Hot Interconnects, Aug. 2000.
[18] J. H. Kim, Z. Liu, and A. A. Chien, Compressionless Routing: A Framework for Adaptive and Fault-Tolerant Routing, IEEE Trans. on Parallel and Distributed Systems, vol. 8, no. 3, 1997.

[19] J. Liu, K. G. Shin, and C. C. Chang, Prevention of Congestion in Packet-Switched Multistage Interconnection Networks, IEEE Transactions on Parallel and Distributed Systems, vol. 6, no. 5, pp. 535–541, May 1995.

[20] E. Baydal, P. López, and J. Duato, A Congestion Control Mechanism for Wormhole Networks, in Proc. 9th Euromicro Workshop on Parallel and Distributed Processing, pp. 19–26, Feb. 2001.

[21] P. López and J. Duato, Deadlock-Free Adaptive Routing Algorithms for the 3D-Torus: Limitations and Solutions, in Proc. Parallel Architectures and Languages Europe 93, June 1993.

[22] V. Krishnan and D. Mayhew, A Localized Congestion Control Mechanism for PCI Express Advanced Switching Fabrics, in Proc. 12th IEEE Symp. on Hot Interconnects, Aug. 2004.

[23] Advanced Switching for the PCI Express Architecture. White paper. Available at http://www.intel.com/technology/pciexpress/devnet/AdvancedSwitching.pdf