5 Core Network
Chapter 5 topics:
• MME in Pool
• Signalling Transport (SIGTRAN)
• User data transfer
• Diameter
• Quality of Service
MME in Pool

The Intra Domain Connection of RAN Nodes to Multiple CN Nodes, introduced in UMTS R5, overcomes the strict hierarchy which restricts the connection of a RAN node to just one CN node. This restriction in GSM/UMTS results from routing mechanisms in the RAN nodes which differentiate only between information to be sent to the PS or to the CS domain CN nodes and which do not differentiate between multiple CN nodes in each domain. The Intra Domain Connection of RAN Nodes to Multiple CN Nodes introduces a routing mechanism (and other related functionality) which enables the RAN nodes to route information to different CN nodes within the CS or PS domain, respectively.
Figure 5-1 Network hierarchy GSM/UMTS R4

The Intra Domain Connection of RAN Nodes to Multiple CN Nodes further introduces the concept of ‘pool-areas’, which is enabled by the routing mechanism in the RAN nodes. A pool-area is comparable to an MSC or SGSN service area in that it is a collection of one or more RAN node service areas. In contrast to an MSC or SGSN service area, a pool-area is served by multiple CN nodes (MSCs or SGSNs) in parallel, which share the traffic of this area between each other. Furthermore, pool-areas may overlap, which is not possible for MSC or SGSN service areas. From a RAN perspective, a pool-area comprises all LA(s)/RA(s) of one or more RNC/BSC that are served by a certain group of CN nodes in parallel. One or more of the CN nodes in this group may in addition serve LAs/RAs outside this pool-area or may also serve other pool-areas. This group of CN nodes is also referred to as an MSC pool or SGSN pool, respectively.
Figure 5-2 Network hierarchy GSM/UMTS R5+

The Intra Domain Connection of RAN Nodes to Multiple CN Nodes enables a few different application scenarios with certain characteristics. The service provision by multiple CN nodes within a pool-area enlarges the served area compared to the service area of one CN node. This results in fewer inter CN node updates, handovers and relocations, and it reduces the HSS update traffic. The configuration of overlapping pool-areas allows the overall traffic to be separated according to different MS movement patterns, e.g. pool-areas which each cover a separate residential area and all cover the same city centre. Other advantages of multiple CN nodes in a pool-area are the possibility of capacity upgrades by adding CN nodes to the pool-area, and increased service availability, as other CN nodes may provide services in case one CN node in the pool-area fails. A user terminal is served by one dedicated CN node of a pool-area as long as it is in radio coverage of the pool-area. The fact that a BSC can co-operate with several SGSNs does not imply that separate physical interfaces are required, since an IP network can be used between BSCs and SGSNs to switch the traffic delivered on the same physical interfaces to different recipients connected to that network.
Figure 5-3 SGSNs in Pool (physical view with Gb/IP)
Similarly to GSM/UMTS, where MSC/SGSN in Pool is already quite a popular solution, the EPS network may utilise a solution called MME in Pool. However, some aspects of the CN node pool solution differ between the GSM/UMTS and EPS networks:
• There is only one CN domain in EPS, the PS domain, so only the MME nodes need to be pooled.
• The MME in Pool concept is introduced in the first release of the standard for the EPS network, so right from the beginning all MMEs/eNBs can support MME in Pool specific procedures. (In GSM/UMTS it was necessary to solve backward compatibility problems between MSCs/SGSNs capable and non-capable of supporting the pool area concept.)
• The temporary UE identity GUTI, which holds the binding between the UE and its serving MME in EPS, has a structure that directly supports the MME in Pool concept, in contrast to GSM/UMTS where the TMSI/P-TMSI structure was modified for that purpose. Since the new R5 TMSI/P-TMSI structure has to be backward compatible with R4, the GSM/UMTS solution is slightly less efficient, introduces some extra signalling load and in some cases may result in subscribers not being subject to inter MSC/SGSN load distribution.
Figure 5-4 MME in Pool
Pool area

A pool-area is an area within which a UE may roam without a need to change the serving MME node. A pool-area is served by one or more MME nodes in parallel. The complete service area of an eNB (i.e. all the cells served by one eNB) belongs to the same one or more pool-area(s). An eNB service area may belong to multiple pool-areas, which is the case when multiple overlapping pool-areas include this eNB service area. If a TA spans multiple eNB service areas, then all these eNB service areas have to belong to the same MME pool-area. Additionally, when the TA list the UE is registered to spans multiple eNB service areas, then all these eNB service areas also have to belong to the same MME pool-area.
Figure 5-5 MME pool area (an area within which an MS roams without a need to change the serving MME)
MME selection and addressing

Each time the UE leaves the current MME pool area, the eNB runs the MME selection function. The MME selection function selects an available MME for serving the UE. The selection is based on network topology, i.e. the selected MME serves the UE's location, and in case of overlapping MME service areas the selection may prefer MMEs with service areas that reduce the probability of changing the MME. The selected MME allocates a Globally Unique Temporary Identity (GUTI) to the UE. The GUTI has two main components:
• the Globally Unique MME Identifier (GUMMEI), uniquely identifying the MME which allocated the GUTI,
• the M-TMSI, uniquely identifying the UE within the MME that allocated the GUTI.
Figure 5-6 GUTI structure (GUTI = GUMMEI + M-TMSI; GUMMEI = MCC + MNC + MMEI; MMEI (MME Identifier) = MMEGI (MME Group ID) + MMEC (MME Code); S-TMSI = MMEC + M-TMSI)
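To make the GUTI field layout concrete, the minimal Python sketch below packs the identifiers into the structure shown in Figure 5-6; the MCC/MNC/MMEGI/MMEC/M-TMSI values are hypothetical, the PLMN part is kept as plain digit strings rather than BCD-encoded, and the standard field sizes (16-bit MMEGI, 8-bit MMEC, 32-bit M-TMSI) are assumed.

```python
from dataclasses import dataclass

@dataclass
class Guti:
    mcc: str     # Mobile Country Code (3 digits)
    mnc: str     # Mobile Network Code (2 or 3 digits)
    mmegi: int   # MME Group ID, 16 bits
    mmec: int    # MME Code, 8 bits
    m_tmsi: int  # M-TMSI, 32 bits

    @property
    def gummei(self) -> str:
        # GUMMEI = PLMN identity + MMEI, where MMEI = MMEGI + MMEC
        return f"{self.mcc}-{self.mnc}-{self.mmegi:04x}-{self.mmec:02x}"

    @property
    def s_tmsi(self) -> str:
        # S-TMSI = MMEC + M-TMSI: the short identity used within one MME pool
        return f"{self.mmec:02x}-{self.m_tmsi:08x}"

guti = Guti(mcc="260", mnc="06", mmegi=0x8001, mmec=0x1A, m_tmsi=0xC0FFEE01)
print(guti.gummei)   # identifies the allocating MME, used by the eNB for routing
print(guti.s_tmsi)   # identifies the UE within the pool
```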
The GUTI structure directly supports the concept of the MME pool area, since during each identification the UE identifies not only itself but also the MME that allocated its temporary identity. Therefore, even in case of intra MME pool area mobility, each eNB can easily route the data from the UE to the MME which holds the user's subscription and session information.
Figure 5-7 MME in Pool and GUTI

In case of inter MME pool area mobility, the new eNB can easily discover that the UE is coming from another pool area that the eNB is not a part of. In that case the eNB runs the MME selection process, which chooses a new MME for the UE; this MME in turn allocates a new GUTI. The new GUTI (which includes the new MME's identity) is used from that moment on to route signalling messages from the UE to the selected MME, until the MME pool area is changed again.
Load Balancing

The MME Load Balancing functionality permits UEs that are entering an MME Pool Area to be directed to an appropriate MME in a manner that achieves load balancing between MMEs. This is achieved by setting an MME weight factor (called MME Relative Capacity) for each MME, such that the probability of the eNB selecting an MME is proportional to its capacity. The MME Relative Capacity parameter is typically set according to the capacity of an MME node relative to other MME nodes.
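A weight-based choice of this kind can be sketched in a few lines of Python (hypothetical MME names and capacity values; in a real network the eNB learns each MME's Relative Capacity over the S1 interface):

```python
import random

# Hypothetical pool: MME name -> MME Relative Capacity (weight factor)
mme_pool = {"mme-a": 10, "mme-b": 10, "mme-c": 20}

def select_mme(pool: dict[str, int]) -> str:
    """Pick an MME with probability proportional to its Relative Capacity."""
    names = list(pool)
    return random.choices(names, weights=[pool[name] for name in names], k=1)[0]

# With weights 10/10/20, roughly half of the arriving UEs end up on mme-c.
counts = {name: 0 for name in mme_pool}
for _ in range(10_000):
    counts[select_mme(mme_pool)] += 1
print(counts)
```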
Figure 5-8 Load balancing

The MME Load Re-balancing functionality permits UEs that are registered on an MME (within an MME Pool Area) to be moved to another MME. An example use for the MME Load Re-balancing function is the O&M related removal of one MME from an MME Pool Area.
Figure 5-9 Load re-balancing
Signalling Transport (SIGTRAN)

Signalling Transport (SIGTRAN) is a set of standards defined by the Internet Engineering Task Force (IETF). This set of protocols has been defined in order to provide the architectural model of signalling transport over IP networks.
SCTP

To reliably transport signalling messages over IP networks, the IETF SIGTRAN working group devised the Stream Control Transmission Protocol (SCTP). SCTP allows the reliable transfer of signalling messages between signalling endpoints in an IP network.
Multihoming

As opposed to a TCP connection, an SCTP association can take advantage of a multihomed host, using all the IP addresses the host owns. This feature is one of the most important ones in SCTP, as it provides network redundancy that is really valuable when dealing with signalling. In the older signalling systems, like SS7, every network component is duplicated, and the risk of losing a TCP connection due to the failure of one of the network cards was one of the major problems that made SCTP necessary.
Figure 5-10 Singlehomed protocol (TCP): one IP path per connection; endpoint/socket = IP address + TCP port number
Figure 5-11 Multihomed protocol (SCTP): multiple IP paths per association; endpoint/socket = IP addresses + SCTP port number
Streams

IP signalling traffic is usually composed of many independent message sequences between many different signalling endpoints. SCTP allows signalling messages to be independently ordered within multiple streams (unidirectional logical channels established from one SCTP endpoint to another) to ensure in-sequence delivery between associated endpoints. By transferring independent message sequences in separate SCTP streams, it is less likely that the retransmission of a lost message will affect the timely delivery of other messages in unrelated sequences (a problem called head-of-line blocking). Because TCP enforces a single ordered byte stream and thus suffers from head-of-line blocking, SCTP is better suited than TCP for the transmission of signalling messages over IP networks.
Figure 5-12 Head-Of-Line (HOL) blocking – single TCP connection

Figure 5-13 SCTP association with several streams
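The per-stream ordering can be illustrated with a toy receiver in Python (purely illustrative, not an SCTP implementation): each message carries a stream number and a per-stream sequence number, and a gap caused by a lost message holds back only that stream.

```python
from collections import defaultdict

class StreamReceiver:
    """Toy per-stream in-order delivery: a missing message blocks only its own stream."""
    def __init__(self):
        self.expected = defaultdict(int)   # next sequence number per stream
        self.buffer = defaultdict(dict)    # stream -> {seq: payload} for out-of-order messages

    def receive(self, stream: int, seq: int, payload: str) -> list[str]:
        delivered = []
        self.buffer[stream][seq] = payload
        # Deliver as long as the next expected message for this stream is available.
        while self.expected[stream] in self.buffer[stream]:
            delivered.append(self.buffer[stream].pop(self.expected[stream]))
            self.expected[stream] += 1
        return delivered

rx = StreamReceiver()
print(rx.receive(0, 1, "s0-m1"))   # [] : stream 0 waits for seq 0 (lost, to be retransmitted)
print(rx.receive(1, 0, "s1-m0"))   # ['s1-m0'] : stream 1 is not blocked by stream 0's gap
print(rx.receive(0, 0, "s0-m0"))   # ['s0-m0', 's0-m1'] : retransmission unblocks stream 0
```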
Message oriented protocol

TCP is stream oriented, and this can also be an inconvenience for some applications, since they usually have to include their own marks inside the stream so that the beginning and end of their messages can be identified. In addition, they should explicitly make use of the push facility to ensure that the complete message has been transferred in a reasonable time.
Figure 5-14 Stream oriented protocol (TCP)

As opposed to TCP, SCTP is message oriented. This means that SCTP preserves the boundaries of the upper layer protocol data structures, so complete messages, well separated from each other, are always delivered to the SCTP user on the receiving side.
Figure 5-15 Message oriented protocol (SCTP)
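The extra work that a TCP user has to do is typically a length-prefix framing layer such as the illustrative sketch below (not tied to any particular signalling protocol); SCTP's message orientation makes this layer unnecessary.

```python
import struct

def frame(message: bytes) -> bytes:
    """Prefix the message with a 4-byte big-endian length so the receiver can find its end."""
    return struct.pack("!I", len(message)) + message

def deframe(buffer: bytes) -> tuple[list[bytes], bytes]:
    """Extract complete messages from a TCP byte stream; return (messages, leftover bytes)."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack("!I", buffer[:4])
        if len(buffer) < 4 + length:
            break                      # message not fully received yet
        messages.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return messages, buffer

stream = frame(b"hello") + frame(b"world")[:7]   # second message arrives only partially
msgs, rest = deframe(stream)
print(msgs, rest)                                # [b'hello'] plus the incomplete remainder
```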
Security

SCTP uses a new method for association establishment, which completely removes the problem of the so-called SYN attack known from TCP. This attack is very simple and can affect any system connected to the Internet providing TCP-based network services (such as an HTTP, FTP or mail server). Let us see briefly how this basic attack is performed. In TCP, the establishment phase consists of a three-way handshake. The three packets are usually called SYN (from Synchronisation, as it has the SYN flag set, used only during establishment), SYN-ACK (it has both the SYN and ACK flags set) and ACK (a simple acknowledgement message with the ACK flag set). The problem is that the receiver of the SYN not only sends back the SYN-ACK but also keeps some information about the packet received while waiting for the ACK message (a server in this state is said to have a half-open connection).
Figure 5-16 Establishment procedure (TCP)

The memory space used to keep the information about all pending connections is of finite size, and it can be exhausted by intentionally creating too many half-open connections. This makes the attacked system unable to accept any new incoming connections and thus provokes a denial of service for other users wanting to connect to the server. There is a timer that removes half-open connections from memory when they have been in this state for too long, which will eventually make the system recover, but nothing changes if the attacker keeps sending SYN messages.
Figure 5-17 SYN attack in TCP

As we can see, the attacker uses IP spoofing, which makes it unable to receive the SYN-ACK segments produced; this is not a problem since it would never answer them anyway. All those SYN-ACK segments will be lost unless there is a host with a TCP service listening on the port and address used as the source of the SYN segment. In that case that host will answer with a segment carrying the RST (Reset) flag set, and the attacked system will delete the information for that specific half-open connection. SCTP gives no chance of success to this kind of attack thanks to its cookie mechanism. When the designers of SCTP started to think about how to deal with SYN flooding, they quickly saw that two things were necessary in order not to create a new transport protocol with the same weakness:
• The server (the recipient of the new association request) should not use even a byte of memory until the association is completely established.
• There must be a way to recognise that the client (the initiator of the association) is using its real IP address.
Usually, to meet the second requirement, the server sends some kind of key number to the client, which will only receive that information if the source address used in its IP datagram is the real one. Once the client has that information, it can send a confirmation to the server using that key number, thus proving that it was telling the truth. This would normally mean that the server also has to store that key number somewhere, so that it can verify that the value returned is the right one; but then comes the problem of being forced to keep that value in memory while waiting for an answer that might never come. Therefore, the idea was: why not let that information stay all the time in the network or in the client's memory, instead of storing it on the server? Of course, one immediately thinks that if a datagram coming from the client is what provides the information to check against the client's answer, we have gained nothing and even made the situation worse: the client could tell us whatever it wants and then completely open an association with a single message. But this is not true if we convert the problem into another one: the server signs the information sent to the client with a secret key. When it receives that information back from the client, it can recognise from the signature, using the secret key, that it did send exactly that information and that it is unmodified, and so it can trust it as if it had never left the server's buffers.
Figure 5-18 Cookie mechanism (SCTP four-way association establishment: INIT, INIT ACK carrying the COOKIE, COOKIE ECHO, COOKIE ACK)
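The stateless half of the idea can be sketched with a signed cookie (illustrative only; the real SCTP state cookie format and parameters are defined by the protocol specification): the server derives the cookie as an HMAC over the association parameters using its own secret key, hands it out in the INIT ACK, and verifies the echoed copy without having stored anything in between.

```python
import hashlib
import hmac
import time

SERVER_SECRET = b"local-secret-key"   # known only to this server

def make_cookie(client_ip: str, client_port: int) -> bytes:
    """Build a self-verifying cookie: association parameters followed by an HMAC signature."""
    params = f"{client_ip}:{client_port}:{int(time.time())}".encode()
    sig = hmac.new(SERVER_SECRET, params, hashlib.sha256).digest()
    return params + sig               # sent in INIT ACK; nothing is stored on the server

def verify_cookie(cookie: bytes) -> bool:
    """Check the signature of an echoed cookie; only then allocate association state (TCB)."""
    params, sig = cookie[:-32], cookie[-32:]
    expected = hmac.new(SERVER_SECRET, params, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

cookie = make_cookie("192.0.2.10", 36412)          # 36412: SCTP port registered for S1AP
tampered = bytes([cookie[0] ^ 0xFF]) + cookie[1:]  # flip one bit of the cookie
print(verify_cookie(cookie))     # True: the client really received the cookie it echoes
print(verify_cookie(tampered))   # False: a forged or modified cookie is rejected
```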
SIGTRAN in GSM/UMTS

Traditionally, on all the interfaces in the GSM/UMTS CN, as well as on the interfaces connecting the CN with the RAN, SS7 is used. However, the traditional SS7 protocol stack is not a good solution for networks with IP transport, since it still requires traditional TDM based interfaces to carry the SS7 signalling links. That is why the majority of GSM/UMTS operators are replacing the traditional SS7 protocol stack with SIGTRAN. However, since SIGTRAN is just a modification of the traditional SS7 protocol stack, only the MTP and sometimes additionally the SCCP protocols are replaced, whereas the upper layers remain unchanged. This requires an extra set of protocols called the SIGTRAN User Adaptation Layers (UALs), which introduces some extra cost, consumes processing power in the signalling nodes and also adds some complexity to the system. In fact, SIGTRAN in today's networks only emulates the behaviour of the transport layers of SS7.
Figure 5-19 SIGTRAN protocol suite (SS7 users such as ISUP, BSSAP, MAP, CAP, TCAP, SCCP, MTP3, Q.931 and V5.2 carried over the adaptation layers SUA, IUA, V5UA, M2PA, M2UA and M3UA on top of SCTP/IP)

The User Adaptation Layers are named according to the service they replace, rather than the user of that service. For example, M3UA adapts SCTP to provide the services of MTP3, rather than providing a service to MTP3.
SIGTRAN in EPS

Since EPS introduces a completely new set of signalling protocols, these protocols were designed to operate directly on top of SCTP, without the need for any User Adaptation Layers. Hence, the protocol stack is not only more elegant, but also much more efficient. Instead of emulating the behaviour of traditional SS7, SCTP can provide its services directly to the topmost protocols, so they are able to fully utilise the capabilities of SCTP. Thanks to the fact that there are fewer protocols in the stack, the new system behaves better in terms of both transmission bandwidth utilisation and processing power consumption in the end devices.
Figure 5-20 SIGTRAN in EPS (S1AP, X2AP, SGsAP and Diameter directly on top of SCTP/IP)
User data transfer

The EPS nodes are interconnected via a private IP network of the operator, so when communicating with each other they use IP addresses from that private IP network. The IP address allocated to the user, however, belongs to the external PDN addressing space, as it is used between the UE and the servers in the external network.
Figure 5-21 Tunnelling (EPS nodes use private transport IP addresses; the UE's IP address belongs to the PDN and may be private or public)

This means that on the interfaces which carry user data, user IP packets going to and from the PDN have to be sent inside other IP packets exchanged between the EPS nodes.
Figure 5-22 User IP packet encapsulation
GTP

GPRS Tunnelling Protocol (GTP) is a group of IP-based communications protocols used to carry user IP packets within GSM, UMTS and EPS networks. GTP can be decomposed into two separate protocols: GTP-C and GTP-U. GTPv2-C is used within the EPC for signalling between S-GW, P-GW, SGSN and SRVCC enhanced MSC server. This allows the EPC to activate a session on a user's behalf (EPS bearer), to deactivate the same session, to adjust QoS parameters, or to update a session for a subscriber changing S-GW or SGSN. Additionally, GTPv2-C is used to perform PS to CS handover between the MME and the SRVCC enhanced MSC (see Chapter 10 for more details). GTPv1-U is used for carrying user data within the EPC, between eNBs and S-GWs (S1 interface) and between neighbouring eNBs (X2 interface). The UE is connected to an eNB without being aware of GTP.
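As a rough sketch of the user plane encapsulation, the snippet below builds the 8-byte mandatory GTPv1-U header (version 1, protocol type GTP, message type 0xFF for a G-PDU carrying a user packet) in front of a user IP packet; sequence numbers, extension headers and the outer UDP/IP layers are omitted, and the TEID value is hypothetical.

```python
import struct

def gtpu_encapsulate(teid: int, user_ip_packet: bytes) -> bytes:
    """Prepend the 8-byte mandatory GTPv1-U header to a user IP packet (G-PDU)."""
    flags = 0x30                  # version = 1, protocol type = GTP, no optional fields
    msg_type = 0xFF               # G-PDU: the payload is a user (T-PDU) IP packet
    length = len(user_ip_packet)  # length of everything after the mandatory header
    return struct.pack("!BBHI", flags, msg_type, length, teid) + user_ip_packet

# Hypothetical TEID (assigned by the receiving endpoint) and a dummy 20-byte payload.
frame = gtpu_encapsulate(teid=0x0000ABCD, user_ip_packet=b"\x45" + b"\x00" * 19)
print(frame[:8].hex())            # 30ff00140000abcd -> flags, type, length = 20, TEID
```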
Tunnels

GTP tunnels are used between two nodes communicating over a GTP based interface to separate traffic into different communication flows.
Figure 5-23 GTP protocol stack (GTP over UDP/IP/L2/L1 on S1/S3/S4/S5/S8/S10/S11/S12/Sv/X2)
A GTP tunnel is identified in each node by a Tunnel Endpoint Identifier (TEID), an IP address and a UDP port number. The receiving side of a GTP tunnel locally assigns the TEID value that the transmitting side has to use. The TEID values are exchanged between tunnel endpoints using GTP-C, S1-MME or X2 messages.
Figure 5-24 Tunnel control protocols (GTP-U tunnels controlled by GTP-C, S1AP, X2AP or RANAP, depending on the interface)
Tunnel establishment

The generic GTP-C/GTP-U tunnel establishment procedure is shown in Fig. 5-25.
Figure 5-25 Generic tunnel establishment procedure (Node 1 sends a Create Tunnel Request carrying its TEIDs and IP addresses for data and signalling; Node 2 answers with a Create Tunnel Response carrying its own TEIDs and IP addresses)
The node that initiates the tunnel establishment sends the Create Tunnel Request message to the terminating node; among many other procedure specific parameters, the message includes:
• Tunnel Endpoint Identifier (TEID) for User Plane, which specifies a TEID for GTP-U chosen by the originating node. The terminating node includes this TEID in the GTP-U header of all subsequent GTP-U packets sent in the backward direction.
• Tunnel Endpoint Identifier (TEID) for Control Plane, which specifies a TEID for control plane messages chosen by the originating node. The terminating node includes this TEID in the GTP-C header of all subsequent GTP-C packets sent in the backward direction. Those packets can carry messages used to complete the tunnel establishment, modify the already existing tunnel or release the existing tunnel.
• Originating node's IP address for User Plane.
• Originating node's IP address for Control Plane.
The terminating node answers with the Create Tunnel Response message, which contains:
• Tunnel Endpoint Identifier (TEID) for User Plane, which specifies a TEID for GTP-U chosen by the terminating node. The originating node includes this TEID in the GTP-U header of all subsequent GTP-U packets sent in the forward direction.
• Tunnel Endpoint Identifier (TEID) for Control Plane, which specifies a TEID for control plane messages chosen by the terminating node. The originating node includes this TEID in the GTP-C header of all subsequent GTP-C packets sent in the forward direction.
• Terminating node's IP address for User Plane.
• Terminating node's IP address for Control Plane.
From that moment on, the user communication context on one side of the tunnel is associated with the corresponding context on the other side of the tunnel. This association is maintained thanks to the allocation of flow specific pairs of IP addresses and TEIDs for both user data and control messages.
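The TEID bookkeeping behind Figure 5-25 can be sketched as follows (illustrative only; the message and field names are hypothetical stand-ins for the generic Create Tunnel Request/Response): each side allocates the TEIDs on which it wants to receive traffic and learns the peer's TEIDs from the exchange.

```python
import itertools
from dataclasses import dataclass, field

_teid_gen = itertools.count(0x1000)     # toy local TEID allocator

@dataclass
class TunnelEndpoint:
    ip: str
    local_teid_u: int = field(default_factory=lambda: next(_teid_gen))   # we receive GTP-U on this
    local_teid_c: int = field(default_factory=lambda: next(_teid_gen))   # we receive GTP-C on this
    remote_teid_u: int | None = None     # learned from the peer's request/response
    remote_teid_c: int | None = None

    def create_tunnel_request(self) -> dict:
        # Originating node advertises its own TEIDs and IP address.
        return {"teid_u": self.local_teid_u, "teid_c": self.local_teid_c, "ip": self.ip}

    def create_tunnel_response(self, request: dict) -> dict:
        # Terminating node stores the originator's TEIDs and advertises its own.
        self.remote_teid_u, self.remote_teid_c = request["teid_u"], request["teid_c"]
        return {"teid_u": self.local_teid_u, "teid_c": self.local_teid_c, "ip": self.ip}

    def handle_response(self, response: dict) -> None:
        self.remote_teid_u, self.remote_teid_c = response["teid_u"], response["teid_c"]

node1 = TunnelEndpoint("10.0.0.1")      # e.g. an S-GW
node2 = TunnelEndpoint("10.0.0.2")      # e.g. a P-GW
node1.handle_response(node2.create_tunnel_response(node1.create_tunnel_request()))
# node1 now stamps node2's TEID on forward GTP-U packets, and vice versa.
print(hex(node1.remote_teid_u), hex(node2.remote_teid_u))
```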
Diameter

Diameter is an AAA (Authentication, Authorisation and Accounting) protocol for applications such as network access or IP mobility. The basic concept is to provide a base protocol that can be extended in order to provide AAA services to new access technologies. Diameter is intended to work in both local and roaming AAA situations.
Diameter sessions consist of the exchange of commands and Attribute Value Pairs (AVPs) between authorised Diameter Clients and Servers. Some of the command values are used by the Diameter protocol itself, while others deliver data associated with particular applications that employ Diameter. The base protocol provides basic mechanisms for reliable transport, message delivery and error handling, and it must be used together with a Diameter application. A Diameter application uses the services of the base protocol in order to support a specific type of service. The Diameter Base Protocol defines the basic, standard behaviour of Diameter nodes as well-defined state machines and also provides an extensible messaging mechanism that allows information exchange among Diameter nodes. Diameter applications augment the Base Protocol state machines with application-specific behaviour to provide new AAA capabilities.
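To make the command framing more tangible, here is a minimal sketch of the fixed 20-byte Diameter message header (version, 24-bit length, command flags, 24-bit command code, Application-ID, Hop-by-Hop and End-to-End identifiers, as laid out in the IETF base protocol specification); AVP encoding and any real application logic are omitted, and the example values are hypothetical.

```python
import struct

def diameter_header(length: int, flags: int, code: int, app_id: int,
                    hop_by_hop: int, end_to_end: int) -> bytes:
    """Build the fixed 20-byte Diameter header."""
    version = 1
    return (struct.pack("!I", (version << 24) | length)   # version (8 bits) + message length (24 bits)
            + struct.pack("!I", (flags << 24) | code)      # command flags (8 bits) + command code (24 bits)
            + struct.pack("!III", app_id, hop_by_hop, end_to_end))

# Hypothetical header of an S6a Update-Location-Request (Application-ID 16777251,
# command code 316, 'R'equest and 'P'roxiable flags set; AVPs and real length omitted).
hdr = diameter_header(length=20, flags=0xC0, code=316, app_id=16777251,
                      hop_by_hop=0x11112222, end_to_end=0x33334444)
print(hdr.hex())
```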
Diameter applications

There are two kinds of applications: IETF standards track applications and vendor specific applications. The 3GPP Diameter applications relevant to EPS are listed in Fig. 5-26.

Application Identifier   Application (interface)   Nodes
16777236                 Rx                        PCRF ↔ AF
16777238                 Gx                        PCRF ↔ PCEF (P-GW)
16777251                 S6a                       MME ↔ HSS
16777252                 S13/S13’                  MME/SGSN ↔ EIR
16777267                 S9                        vPCRF ↔ hPCRF

Figure 5-26 3GPP Diameter applications

The Diameter peers communicate with each other over a transport connection provided by SCTP (an SCTP association).
Figure 5-27 Diameter protocol stack (Diameter over SCTP/IP on S6a/S13/S9/Rx/Gx)
Proxy/Relay agent

The Diameter base protocol defines two types of Diameter agent, namely the Diameter Relay agent and the Diameter Proxy agent. A Diameter Relay is a function specialised in message forwarding, i.e.:
• A Relay agent does not inspect the actual contents of the message.
• When a Relay agent receives a request, it routes the message to the next-hop Diameter peer based on information found in the message, e.g. the application ID and the destination address.
A Diameter Proxy includes the functions of a Diameter Relay and additionally it can inspect the actual contents of the message to perform admission control, policy control, or special handling of information elements. The use of Proxy and Relay agents is especially important in roaming scenarios, to support scalability, resilience and maintainability and to reduce the export of network topologies.
Figure 5-28 Diameter Proxy/Relay agent (an Update Location Request from an MME in the vPLMN is routed via Proxy/Relay agents over GRX/IPX to the HSS in the hPLMN; Destination Realm: epc.mnc<MNC>.mcc<MCC>.3gppnetwork.org, User Name: IMSI)

Please note that without the usage of Diameter Proxy/Relay agents it would be necessary to provide a separate Diameter connection (SCTP association) between each MME of the VPLMN and each HSS of every possible HPLMN.
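The Destination Realm shown in the figure can be derived from the IMSI; the following sketch assumes a 3-digit MCC and, for simplicity, a fixed 2-digit MNC (real networks need an MCC-specific table to decide between 2- and 3-digit MNCs), and zero-pads the MNC to three digits as used in the 3gppnetwork.org realm format.

```python
def home_epc_realm(imsi: str, mnc_digits: int = 2) -> str:
    """Derive the home EPC Diameter realm from an IMSI (MCC = first 3 digits)."""
    mcc = imsi[:3]
    mnc = imsi[3:3 + mnc_digits].zfill(3)   # realm uses a 3-digit, zero-padded MNC
    return f"epc.mnc{mnc}.mcc{mcc}.3gppnetwork.org"

# Hypothetical IMSI: MCC 262, MNC 01, MSIN 5551234567
print(home_epc_realm("262015551234567"))   # epc.mnc001.mcc262.3gppnetwork.org
```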
Quality of Service

The EPS provides IP connectivity between a UE and a PLMN external packet data network. This is referred to as the PDN Connectivity Service. The PDN Connectivity Service supports the transport of one or more Service Data Flows (SDFs).
EPS bearer

For E-UTRAN access to the EPC the PDN connectivity service is provided by an EPS bearer. An EPS bearer uniquely identifies an SDF aggregate between a UE and a P-GW. An EPS bearer is the level of granularity for bearer level QoS control in the EPC/E-UTRAN. That is, SDFs mapped to the same EPS bearer receive the same bearer level packet forwarding treatment (e.g. scheduling policy, queue management policy, rate shaping policy, RLC configuration, etc.). Providing different bearer level QoS to two SDFs thus requires that a separate EPS bearer is established for each SDF.
Figure 5-29 EPS bearer

One EPS bearer is established when the UE connects to a PDN, and it remains established throughout the lifetime of the PDN connection to provide the UE with always-on IP connectivity to that PDN. That bearer is referred to as the default bearer. Any additional EPS bearer that is established to the same PDN is referred to as a dedicated bearer.
Figure 5-30 Default & dedicated bearer (the default bearer is created as part of the Attach procedure and is non-GBR; a dedicated bearer is an additional bearer, GBR or non-GBR)

An UpLink/DownLink Traffic Flow Template (UL/DL TFT) is a set of UL/DL packet filters. Every EPS bearer is associated with an UL TFT in the UE and a DL TFT in the P-GW (i.e. the PCEF).
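A DL TFT can be pictured as an ordered list of packet filters that the P-GW evaluates to pick the EPS bearer for each downlink packet; the sketch below uses a simplified filter (remote IP prefix, protocol, remote port range) purely for illustration — real TFT filters carry more components (e.g. TOS/DSCP, IPsec SPI) and an explicit evaluation precedence.

```python
from dataclasses import dataclass
import ipaddress

@dataclass
class PacketFilter:
    remote_net: str          # e.g. "203.0.113.0/24"
    protocol: int            # e.g. 17 = UDP
    remote_ports: range      # matched against the packet's remote port
    bearer_id: int           # EPS bearer this filter maps to

def select_bearer(filters: list[PacketFilter], remote_ip: str, protocol: int,
                  remote_port: int, default_bearer: int = 5) -> int:
    """Return the EPS bearer ID of the first matching DL filter, else the default bearer."""
    for f in filters:        # filters assumed sorted by evaluation precedence
        if (ipaddress.ip_address(remote_ip) in ipaddress.ip_network(f.remote_net)
                and protocol == f.protocol and remote_port in f.remote_ports):
            return f.bearer_id
    return default_bearer    # unmatched traffic rides on the default bearer

# Hypothetical dedicated bearer (ID 6) for media from a given server range.
tft = [PacketFilter("203.0.113.0/24", 17, range(10000, 20000), bearer_id=6)]
print(select_bearer(tft, "203.0.113.7", 17, 12000))   # 6 -> dedicated bearer
print(select_bearer(tft, "198.51.100.9", 6, 443))     # 5 -> default bearer
```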
Figure 5-31 Traffic Flow Template (TFT)

The initial bearer level QoS parameter values of the default bearer are assigned by the network, based on subscription data (in case of E-UTRAN the MME sets those initial values based on subscription data retrieved from the HSS). The PCEF may change those values based on interaction with the PCRF or based on local configuration. The decision to establish or modify a dedicated bearer can only be taken by the EPC, and the bearer level QoS parameter values are always assigned by the EPC. Therefore, the MME does not modify the bearer level QoS parameter values received on the S11 reference point during establishment or modification of a dedicated bearer. Instead, the MME only transparently forwards those values to the E-UTRAN. Consequently, ‘QoS negotiation’ between the E-UTRAN and the EPC during dedicated bearer establishment/modification is not supported. The MME may, however, reject the establishment or modification of a dedicated bearer (e.g. in case the bearer level QoS parameter values sent by the PCEF over an S8 roaming interface do not comply with a roaming agreement).
Figure 5-32 Bearer establishment direction

An EPS bearer is referred to as a GBR bearer if dedicated network resources related to a Guaranteed Bit Rate (GBR) value that is associated with the EPS bearer are permanently allocated (e.g. by an admission control function in the eNB) at bearer establishment/modification. Otherwise, an EPS bearer is referred to as a Non-GBR bearer. A dedicated bearer can either be a GBR or a Non-GBR bearer. A default bearer is a Non-GBR bearer. An EPS bearer is realised by the following elements:
• An UL TFT in the UE maps an SDF to an EPS bearer in the UL direction. Multiple SDFs can be multiplexed onto the same EPS bearer by including multiple UL packet filters in the UL TFT;
• A DL TFT in the P-GW maps an SDF to an EPS bearer in the DL direction. Multiple SDFs can be multiplexed onto the same EPS bearer by including multiple DL packet filters in the DL TFT;
• A radio bearer transports the packets of an EPS bearer between a UE and an eNB. There is a one-to-one mapping between an EPS bearer and a radio bearer;
• An S1 bearer transports the packets of an EPS bearer between an eNB and an S-GW;
• An S5/S8 bearer transports the packets of an EPS bearer between an S-GW and a P-GW;
• A UE stores a mapping between an UL packet filter and a radio bearer to create the mapping between an SDF and a radio bearer in the UL;
• A P-GW stores a mapping between a DL packet filter and an S5/S8 bearer to create the mapping between an SDF and an S5/S8 bearer in the DL;
• An eNB stores a one-to-one mapping between a radio bearer and an S1 bearer to create the mapping between a radio bearer and an S1 bearer in both the UL and DL;
• An S-GW stores a one-to-one mapping between an S1 bearer and an S5/S8 bearer to create the mapping between an S1 bearer and an S5/S8 bearer in both the UL and DL.
QoS parameters

The bearer level (i.e. per bearer or per bearer aggregate) QoS parameters are QCI, ARP, GBR, MBR and AMBR, described in this section. Each EPS bearer (GBR and Non-GBR) is associated with the following bearer level QoS parameters:
• QoS Class Identifier (QCI);
• Allocation and Retention Priority (ARP).
A QCI is a scalar that is used as a reference to access node-specific parameters that control bearer level packet forwarding treatment (e.g. scheduling weights, admission thresholds, queue management thresholds, link layer protocol configuration, etc.), and that have been pre-configured by the operator owning the access node (e.g. the eNB).

The primary purpose of ARP is to decide whether a bearer establishment/modification request can be accepted or needs to be rejected in case of resource limitations (typically the available radio capacity in case of GBR bearers). In addition, the ARP can be used (e.g. by the eNB) to decide which bearer(s) to drop during exceptional resource limitations (e.g. at handover). Once successfully established, a bearer's ARP has no impact on the bearer level packet forwarding treatment (e.g. scheduling and rate control). Such packet forwarding treatment should be solely determined by the other bearer level QoS parameters: QCI, GBR, MBR and AMBR.

Video telephony is one use case where it may be beneficial to use EPS bearers with different ARP values for the same UE. In this use case an operator could map voice to one bearer with a higher ARP, and video to another bearer with a lower ARP. In a congestion situation (e.g. at the cell edge) the eNB can then drop the ‘video bearer’ without affecting the ‘voice bearer’. This would improve service continuity.
Each GBR bearer is additionally associated with the following bearer level QoS parameters:
• Guaranteed Bit Rate (GBR);
• Maximum Bit Rate (MBR).
The GBR denotes the bit rate that can be expected to be provided by a GBR bearer. The MBR limits the bit rate that can be expected to be provided by a GBR bearer (e.g. excess traffic may get discarded by a rate shaping function).

Each APN is associated with the ‘per APN Aggregate Maximum Bit Rate (APN-AMBR)’ IP-CAN session level QoS parameter. The APN-AMBR is a subscription parameter stored per APN in the HSS. It limits the aggregate bit rate that can be expected to be provided across all Non-GBR bearers and across all PDN connections of the same APN (e.g. excess traffic may get discarded by a rate shaping function). Each of those Non-GBR bearers could potentially utilise the entire APN-AMBR, e.g. when the other Non-GBR bearers do not carry any traffic. GBR bearers are outside the scope of APN-AMBR. The P-GW enforces the APN-AMBR in downlink. Enforcement of the APN-AMBR in uplink is done in the UE and additionally in the P-GW.

Each UE is associated with the ‘per UE Aggregate Maximum Bit Rate (UE-AMBR)’ bearer level QoS parameter. The UE-AMBR is limited by a subscription parameter stored in the HSS. The MME sets the used UE-AMBR to the sum of the APN-AMBR of all active APNs, up to the value of the subscribed UE-AMBR. The UE-AMBR limits the aggregate bit rate that can be expected to be provided across all Non-GBR bearers of a UE (e.g. excess traffic may get discarded by a rate shaping function). Each of those Non-GBR bearers could potentially utilise the entire UE-AMBR, e.g. when the other Non-GBR bearers do not carry any traffic. GBR bearers are outside the scope of UE-AMBR. The E-UTRAN enforces the UE-AMBR in uplink and downlink.

The GBR and MBR denote bit rates of traffic per bearer, while UE-AMBR/APN-AMBR denote bit rates of traffic per group of bearers. Each of those QoS parameters has an uplink and a downlink component. On S1_MME the values of the GBR, MBR and AMBR refer to the bit stream excluding the GTP-U/IP header overhead of the tunnel on S1_U. One ‘EPS subscribed QoS profile’ is defined for each APN permitted for the subscriber. It contains the bearer level QoS parameter values for that APN's default bearer (QCI and ARP) and the APN-AMBR.
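The UE-AMBR rule quoted above (sum of the APN-AMBRs of all active APNs, capped by the subscribed UE-AMBR) is easy to express directly; the values below are hypothetical.

```python
def used_ue_ambr(active_apn_ambrs_mbps: list[int], subscribed_ue_ambr_mbps: int) -> int:
    """MME rule: UE-AMBR actually used = min(sum of active APN-AMBRs, subscribed UE-AMBR)."""
    return min(sum(active_apn_ambrs_mbps), subscribed_ue_ambr_mbps)

# Two active APNs with APN-AMBR 50 and 100 Mbit/s; the subscription allows at most 120 Mbit/s.
print(used_ue_ambr([50, 100], 120))   # 120 -> capped by the subscribed UE-AMBR
print(used_ue_ambr([50, 30], 120))    # 80  -> sum of the APN-AMBRs
```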
Figure 5-33 EPS bearer related QoS parameters (GBR bearer: QCI, ARP, GBR, MBR; non-GBR bearer: QCI, ARP, with UE-AMBR and APN-AMBR applying at aggregate level)
Mapping between QCI and UMTS QoS parameters

A recommended mapping of the QoS Class Identifier to/from the UMTS QoS parameters is shown in Fig. 5-34.

QCI   traffic class    THP   signalling indication   source statistics descriptor
1     conversational   -     -                       speech
2     conversational   -     -                       unknown
3     streaming        -     -                       speech
4     streaming        -     -                       unknown
5     interactive      1     yes                     -
6     interactive      1     no                      -
7     interactive      2     no                      -
8     interactive      3     no                      -
9     background       -     -                       -

Figure 5-34 QCI to UMTS QoS parameters mapping