CommsDay Magazine Summer 2014/15 edition


December 2014 • Published by Decisive • A CommsDay publication

“OTT video killed the cable TV star?” The rise and rise of Netflix
Telcos face 4K revolution

Apple bites into mobile payments

Copper: The shape of things to come

A quantum leap in telecoms - literally?

The road to 5G




COMMSDAY MAGAZINE

ABOUT COMMSDAY MAGAZINE
Mail: PO Box A191 Sydney South NSW 1235 AUSTRALIA. Fax: +612 9261 5434. Internet: www.commsday.com
COMPLIMENTARY FOR ALL COMMSDAY SUBSCRIBERS AND CUSTOMERS. Published several times annually. CONTRIBUTIONS ARE WELCOME.
GROUP EDITOR: Petroc Wilton
FOUNDER: Grahame Lynch
COVER DESIGN: Peter Darby
WRITERS: Geoff Long, Richard van der Draay, William Van Hefner, Grahame Lynch, Tony Chan
ADVERTISING INQUIRIES: Sally Lloyd at sally@commsdaymail.com
EVENT SPONSORSHIP: Veronica Kennedy-Good at veronica@mindsharecomms.com.au
ALL CONTENTS OF THIS PUBLICATION ARE COPYRIGHT. ALL RIGHTS RESERVED.
CommsDay is published by Decisive Publishing, 4/276 Pitt St, Sydney, Australia 2000. ACN 13 065 084 960

Contents: The rise and rise of Netflix • The decline of cable TV • Telcos’ 4K challenge • Latest developments in copper • The path to 5G • Apple moves into mobile payments • Datacentres and telcos • The quantum shift in telcos



The cord-cutters Netflix has finally announced that it is officially coming to the Asia Pacific region in 2015. Geoff Long looks at what it means for traditional pay TV providers and telcos alike.

They're known as the cord-cutters: consumers who ditch their traditional pay-TV service in favour of a generous broadband package, coupled with an over-the-top content provider such as Netflix, Amazon Video, Hulu and the like. While the trend has been most notable in the US market, some of those same OTT players now have their sights set on new markets in the Asia Pacific, while local service providers are also launching their own offerings. If the region follows the trends in the US and Europe, a new generation of consumers will soon be looking to cut the cord – with implications for broadband traffic. According to the latest data from Sandvine, Netflix alone now accounts for 34.9% of downstream traffic in peak evening hours in the US market. And in Australia – where users have been accessing Netflix via VPNs ahead of its launch in

March next year – Sandvine suggests it could already account for 4% of downstream traffic. Amazon Instant Video is now the second largest paid streaming video service in North America, accounting for 2.6% of downstream traffic. While still relatively small, its share of traffic has more than doubled in the last 18 months. HBO GO accounts for just 1% of downstream traffic in the US, but given that it plans to start offering a standalone streaming service in the US, it's also likely to be a factor in future. Users in the Asia Pacific are already heavy users of real-time entertainment services – accounting for 47.5% of peak downstream traffic in the latest Sandvine report – so the arrival of the OTT video services is likely to put further demands on telco networks. SNL Kagan analyst Wangxing Zhao says that OTT providers are taking on traditional multichannel provid-

ers with diversified revenue models including advertising, streaming video on demand premium rental, and download-to-own. Based on indicators including market size and device/service penetration, he nominates the top-ranking countries in Asia Pacific for OTT viability as South Korea, Japan, China, Australia and Taiwan. Other factors cited that would boost OTT viability include telecom infrastructure, an open regulatory environment, diverse international content, strong local broadcaster presence, and residential purchasing power. NETFLIX IS COMING: The most significant OTT announcement for the region is that Netflix has finally confirmed that it is arriving, firstly in Australia and New Zealand but with plans to also tackle other markets including China. The impact of Netflix is already evident in the UK and Ireland, where just two years after its




launch it was the second largest driver of traffic on fixed access networks, accounting for over 17.8% of downstream traffic in the evening. The company itself says that it’s been obtaining the rights to content in Australia and other places in Asia, making it easier to expand within the region. “A lot of our content choices have proven to be extremely global, starting with all of our original series – Orange Is the New Black and House of Cards have been huge successes in not just in Australia but in China, I mean, all over the world. So these buys bode well, I think, for future expansion in all territories,” says Netflix chief content officer Ted Sarandos. Netflix director of content delivery architecture David Fullagar is already telling the Australian service provider community that the arrival of the service is likely to see a major surge in downstream traffic. “The interesting thing is after a few years in every market we've operated in, we start to become a dominant form of traffic,” he says, noting that Netflix typically accounts for between 30-50% of downstream traffic in a range of markets. In its home market in the US, Netflix makes up around 33% of

downstream net traffic in peak hours. When the service does make its way Down Under, Fullagar and his team will use the same ‘Open Connect’ model for large-scale content delivery that it uses in other markets. In a briefing at Swinburne University, he told the audience that Netflix had originally used a combination of content delivery networks from Level 3, Akamai and Limelight to deliver its services. It was the largest traffic source on each of those networks but has since moved to its own delivery system. “It made a lot of sense from a scale point of view to bring that back in house. So from this (Northern) summer we actually now have all our

traffic on our own network,” he said. Before joining Netflix in 2011, Fullagar was the principal network architect for Level 3's CDN; he also

jointly holds a number of patents for load balancing and media storage. He told the audience at Swinburne that the Netflix service now runs on “at least a couple of thousand different types of products,” from mobile phones and set top boxes, to game consoles, tablets and TVs. He said one of the keys to serving up reliable content was to pre-load content in caches and use analytics to determine the likely content and the device being used. The Netflix client runs in-house developed adaptive bitrate heuristics, which choose the right bitrate for that particular connection. Client metrics then offer information about what sort of user interface they need to see, what sort of recommendations they're going to see and which servers on its delivery network will be used. The service also uses a cloud -based control plane connection that is hosted in three different locations on Amazon's web services platform. Netflix benefits from having a content delivery network that serves a single purpose: delivering video from a fixed library of content. It can utilise off-peak times when the networks are less busy to fill its caches in advance. Those caches are located both in fixed exchanges that Netflix operates itself as well as cache appliances that it can place within the service provider's infrastructure. “We developed this cache that would be very effective for service providers that wanted to have Netflix content within their network and reduce that middle mile traffic,” explains Fullagar. “For service providers that don't want to peer with us at the locations we have, they can take caches to augment [service] and further reduce costs.” He also suggests that the move to its own CDN infrastructure had made negotiations within new markets easier; he notes that third-party CDNs are not necessarily totally aligned with either Netflix's or ISPs’ bests interests. “So by having our own CDN we can have those conversations, we can


make compromises and make changes in the way we do design to the ISP's benefit and we can execute on it from the beginning because we're in control of the infrastructure," he says.

COMPETITION RISES: Of course, Netflix will have to battle the locals when it does arrive in Australia and New Zealand next year. In Australia both Foxtel and Telstra have recently introduced significant price drops for basic services. New offering Stan – a joint venture between the Nine Network and Fairfax – is also slated to launch just ahead of Netflix. Over in New Zealand, Spark has recently launched its Lightbox video streaming service, while rival Sky is launching a service that will be bundled with Vodafone broadband at the end of the year. And West Australian-based Quickflix operates a similar service to Netflix in both Australia and New Zealand. The reaction from New Zealand to Netflix's announcement has been largely positive. Slingshot GM Taryn Hamilton welcomes the news and looks forward to more similar ser-

vices entering the field, while InternetNZ CEO Jordan Carter says his organisation is “thrilled” that Netflix will be launching locally. “It’s becoming clearer that the future of broadcasting is online. Netflix has shown with its huge success overseas that it is one of the very best at this game. Competition like this will lead to better choices and more content for New Zealanders,” he says. “We’ve had Quickflix for a while, Lightbox is all go and Sky is also moving to an online model. With this level of competition we expect prices to drop and quality to improve.” Carter also suggests that the arrival of Netflix is likely to help drive the uptake of fibre.“This is exactly the sort of content that the UFB was designed for. It wasn’t long ago that Youtube videos were all in grainy 240p. Now we’re talking about streaming movies in HD. Getting the UFB out to New Zealanders like we are will ensure that we get the best of what the internet can offer.” FetchTV CEO Scott Lorson, meanwhile, is similarly unruffled by the news. FetchTV partners with compa-

nies like iiNet and Optus and competes directly with Telstra/Foxtel, but Lorson has long argued that the real pay-TV money in Australia is in that platform space – the battle for the main living room TV – rather than OTT streaming. “The courting is now coming to a close, and we’re entering a new phase in Australian media. We think it’s going to be a very exciting next six months,” says Lorson. “We believe the platform players, ourselves and Foxtel, are incredibly well-positioned – and that one, and possibly two winners, will emerge in the SVOD space. But there’s no denying that will be a bloody battle with very high stakes.” Fetch has previously publicly expressed a willingness to partner with players such as Netflix. “We aspire to be a platform ecosystem, and the ability to integrate third-party products is a core competency,” says Lorson. However it plays out, service providers can be guaranteed one thing: the amount of video traffic on their networks will continue increasing.

The decline of cable television The cord-cutting phenomenon might be driving broadband traffic, but it poses some very serious challenges for cable operators. William Van Hefner reports on how the situation is playing out in the US.

A perfect storm of sorts has been building in the cable industry for a number of years now. As most cable operators are increasingly pinning their hopes on broadband sales to ensure their survival, a long-running war with cable TV networks has made the business of selling cable television programming an increasingly difficult affair.

At the same time, cable TV viewers are rebuffing rate increases triggered by increased programming costs. A growing number of internet users are becoming ‘cord-cutters’, ditching cable TV subscriptions entirely in favour of streaming video services such as Netflix and Hulu. Programmers and cable networks,

on the other hand, have seen a sharp decline in viewers and advertising revenues in recent years and are increasingly looking to subscription fees for cash to create new and original content. In the United States, this three-way battle has put the industry on the brink of a watershed moment. Pro-


grammers and content providers, which face increasingly stiff resistance from cable and satellite providers to steep rate increases, are slowly coming to the conclusion that eliminating the middle-man may be the only way to retain their audiences. A number of high-profile disputes in late 2014 have resulted in cable and satellite providers in the U.S. completely dropping the network programming of channels which have insisted on raising rates, at a time when advertising revenues and viewership for many of these channels are on the decline. Programmers, which once had the upper hand in these negotiations, are suddenly finding themselves frozen out of entire markets – resulting in literally millions of lost viewers and with them, monthly subscription fees. In a move viewed by many as long overdue, and by others as a desperate gamble, content providers are slowly beginning to dismantle the wall that separates them from their viewers. On the heels of other successful streaming services, a number of content providers have made announcements of their direct entry into video streaming delivery. HBO, which briefly experimented with delivering such a service to Europe in 2012, has announced a new, cable-free streaming platform to be introduced within a year. The CBS Network has already launched its own streaming service and promises to add its premium channel Showtime to the lineup in 2015. Cable and satellite providers themselves have fought back, launching their own streaming access to programming. Until now, these services have usually been tied to a tradition-

al cable TV subscription. However, in 2015 DISH Network will begin offering a low-cost streaming service featuring much of its traditional satellite TV lineup without the need for a cable connection or a satellite dish. Third-party content distributors are also set to jump on the streaming bandwagon in 2015. Sony plans to

offer its own streaming service directly to consumers via its PlayStation platform. The company has already negotiated content rights with dozens of programmers eyeing a worldwide audience. Although most of the moves currently being made by programmers to directly bypass cable and satellite providers are happening in the United States, the ripple effect will not take long to affect other countries. An increasingly internet-savvy generation have learned to bypass geographical restrictions by utilizing proxy servers, a service that has seen explosive growth over the past year. Simply put, geographical boundaries no longer exist for those who choose to bypass them. It is almost as easy for someone to watch a Netflix movie in Beijing as it is in the United States, whether content providers or governments approve of the practice or not. Statistics in the cable industry do not bode well for the future of traditional distribution models. The Wall

Street Journal recently reported that the number of cable television subscribers in the U.S. actually contracted in 2013, by over 200,000 customers. It is the first year in the history of cable television that the number of subscribers has actually declined. Another critical threshold was crossed this past year by two of the nation's largest cable providers, Comcast and Time Warner Cable. In the summer of 2014, both companies reported adding more new broadband subscribers than cable television subscribers for the first time ever. This trend seems to be accelerating, provoking some small cable providers to drop cable television programming entirely and concentrate instead on the wider popularity and increased margins that broadband services currently provide.

With an increasing number of agreements between cable television companies and programmers failing to be renewed, there is little doubt that the number of programmers offering streaming services directly to consumers will continue to increase. Third-party programming packagers such as Sony will also likely take their own share of the market. How many separate streaming services consumers are willing to pay for remains to be seen, though, especially in light of the public's aversion to most internet paywalls. With no end in sight for the broadband market, cable operators are increasingly unlikely to stay aboard what amounts to a sinking ship. With so little to be gained in the ever-dwindling cable television marketplace, the days of cable companies offering television content not under their direct control would seem to be numbered.




Telcos facing 4K video conundrum The emergence of 4K video could flood networks with video content that offers few upsides for telcos, while demanding heavy investment in infrastructure. Tony Chan looks at the potential impact of ultra high definition content on the global network infrastructure.

Akamai Technologies media business CTO John Bishop throws around a lot of big numbers; forecasts that, if they turn out to be accurate, could severely disrupt the internet in its present form. For example, his presentation recently at an industry event was literally off the chart when it came to projected bandwidth requirements to support global adoption of 4K video, the version of digital video with a resolution four times the pixel count of high definition television. Bishop's equation is probably too simplistic, but it is effective in sketching the challenge that the telecoms industry could face in the future. Taking the average primetime viewership globally of 2.5 billion, he multiplies that number by 10Mbps, or the estimated bandwidth needed to transmit a high definition and/or 4K video signal – depending on factors such as compression, encoding, and frame rate – and he comes up with a big number: 25,000Tbps. That's the estimated bandwidth to support global adoption of HD/4K primetime viewing. Then he takes the number of major core networks in the world today, 100, and multiplies that by their average capacity of 5Tbps. The result he comes up with is, again, pretty big – 500Tbps, which is more than all of the world's subsea cables put together today – but not big enough, since it represents just 2% of that 25,000Tbps overhead for global HD/4K.

Akamai's John Bishop

Obviously, the idea of globally ubiquitous HD and 4K video is a long way from reality today. And not all 2.5 billion primetime viewers will be streaming across the internet backbone; so those figures create an extreme, and slightly abstract, version of the real world bandwidth demand. On the other hand, Bishop highlights two real world events that have seen bandwidth consumption grow nearly tenfold in a matter of four years.
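For readers who want to check it, Bishop's back-of-envelope arithmetic can be reproduced in a few lines. The figures below are simply the ones he quotes above – 2.5 billion viewers, roughly 10Mbps per stream, and 100 core networks averaging 5Tbps each – not independent estimates.

```python
# Back-of-envelope rendering of Bishop's quoted figures (not new data).
PRIMETIME_VIEWERS = 2.5e9          # average global primetime audience
STREAM_BITRATE_BPS = 10e6          # ~10Mbps per HD/4K stream (compression-dependent)
CORE_NETWORKS = 100                # major core networks worldwide
AVG_CORE_CAPACITY_BPS = 5e12       # ~5Tbps average capacity each

demand_bps = PRIMETIME_VIEWERS * STREAM_BITRATE_BPS     # 2.5e16 bps
supply_bps = CORE_NETWORKS * AVG_CORE_CAPACITY_BPS      # 5.0e14 bps

print(f"Demand: {demand_bps / 1e12:,.0f} Tbps")         # 25,000 Tbps
print(f"Core capacity: {supply_bps / 1e12:,.0f} Tbps")  # 500 Tbps
print(f"Coverage: {supply_bps / demand_bps:.0%}")       # 2%
```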

First he points to the Winter Olympics of 2010 and 2014. In the four years between the games in Vancouver, Canada and Sochi, Russia, the traffic carried on Akamai's network grew from 12 petabytes to 81 petabytes. Secondly, he describes the massive traffic surge on Akamai's network generated by another once-every-four-year event – the FIFA World Cup. Between the 2010 tournament in South Africa and the 2014 event in Brazil, traffic surged from 29 petabytes to 222 petabytes. "That's a compounded annual growth rate of 63% for the two events," Bishop says, pointing out that traffic on Akamai's infrastructure is growing even faster, at a CAGR of 81% over the past 15 years, from 1Gbps of capacity on its network back in 1999, to 7Tbps in 2014. So even as network operators across the world voice their lament over the impact of Netflix's traffic on their networks, the challenge is only emerging. With the advent of ultra HD and 4K, the situation is set to get a lot more difficult – by a factor of four.
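Those growth rates can be sanity-checked the same way, using only the petabyte and capacity figures quoted above. The small differences from the quoted 63% and 81% come down to rounding; the 63% is roughly the average across the two events.

```python
# CAGR check on the event-traffic and network-capacity figures quoted above.
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

print(f"Winter Olympics 2010->2014: {cagr(12, 81, 4):.0%}")    # ~61%
print(f"FIFA World Cup 2010->2014:  {cagr(29, 222, 4):.0%}")   # ~66%  (article quotes ~63% across both events)
print(f"Akamai capacity 1999->2014: {cagr(1, 7000, 15):.0%}")  # ~80%, 1Gbps to 7Tbps (article quotes 81%)
```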


RESOLUTION REVOLUTION: While the growing number of users represents one dimension of the challenge, video quality is also a major contributor, adds Bishop. "If you turn your attention to what is happening on quality, there is a similar growth curve," he says. "If you go back, you are looking at postage stamp video back in the late 90s, early 2000s – really we [were] doing quarter VGA, so 160x120 resolution, and maybe in MPEG1, but with also a lot of proprietary codecs." From that beginning where streams required 550kbps, the industry has made strides forward, first to full VGA quality (640x480 pixels) on MPEG2-4 requiring 1.8Mbps circuits around the 2004 time frame, then to 720p HD and 1080p HD using AVC/H.264 codecs that needed 3.5Mbps and 7.5Mbps to work. "We have advanced quickly and we now have 4K. We have gone through the HD, the SD shift, we have gone HD at 720p at 24 frames, 30 frames and 60 frames, and now we have 1080p/24, 1080p/30, 1080p/60… so the revolution in data rates that we are seeing on the web is exactly what is put out on a traditional television today. There is no difference," says Bishop. "When we look at what we are starting to see, we are just at the tip of the iceberg with 4K. What

we are seeing with 4K is beyond what television is doing. There is no linear 4K being distributed out over normal television today, but there is 4K video-on-demand content going out over Akamai's network. We are seeing data rates there of around 15Mbps, we are seeing data rates right now on 1080p live streaming content in some countries pushing 8-10Mbps, so the numbers keep rising

up." Telcos have a lot to worry about when it comes to Bishop's numbers. First of all, 4K is coming whether they like it or not. A recent Ericsson study found that up to 60% of consumers see HD video as a "very important aspect" of their TV and video experience. Perhaps more importantly, 43% of those surveyed specifically pointed out they felt the same about ultra HD and 4K.

A more worrying point for telcos, perhaps, is that their networks could be the only ones capable of delivering beyond-HD content. According to Bishop, 4K exceeds the capabilities of traditional broadcast media like satellite. IP networks are probably the only networks that will be able to support 4K content at scale, he says. "4K video won't be simply internet video, but is likely to replace broadcast television. And what we can deliver over IP is bigger than what we can deliver over the traditional broadcast infrastructure," he says. "The challenge at Akamai media is 'how do we put out the ultimate video quality at scale?' When I say ultimate, I don't mean HD, but beyond HD, so 4K, which is either here today, or just around the corner, or is around the next corner after that. But it's coming. I've already had discussions around 8K and 12K, and around 3D 4K, and 3D 8K, there is always more. The ultimate video quality is a moving target."

FEW UPSIDES: But even as they face this pending avalanche of traffic, there isn't much upside for telcos, at least on the retail side. Unless telcos start charging specifically for carriage of 4K traffic and offer some kind of performance guarantee, there isn't any inherently obvious way for them to monetise 4K proportionate to the hugely increased data volume. According to Bishop, there are three classic monetising models for media content: sell-through such as video-on-demand, subscription-based services, and advertising. None of these are actually applicable to telecoms




operators unless they start their own over-the-top or content service. One way telcos might benefit is, ironically, to become like Akamai – by licensing Akamai’s technology and building their own CDN in their network. “We are licensing our technology to telcos across the globe. And we are making sure that our CDN technology can be used in various telco networks on an on-net capacity,” says Bishop. “When we look at the telco on-net delivery using Akamai technology, they can provide higher quality of service than an open internet site. So if someone is on-net, I might be able to deliver a better Netflix experience, or my own content if I have my own content. I think there are opportunities for telcos to differentiate services as an on-net delivery.” One environment where this is immediately applicable is when the service provider is actually a quad- or triple-play provider. 4K would immediately become a competitive differentiator not only because of the superior quality of the images, but in highlighting the quality of the operator’s network in supporting the higher resolution content. “When you look at the role of the triple play and consumers dollars spent on voice, data, and video, the dollar is now being spent on the data side. So if they can differentiate their services by providing higher tier data, this is what we are seeing across the globe, that the higher tier data services are now the things that are selling,” Bishop says.

F1 TRIAL: Another way for carriers to monetise 4K video is in the backhaul: delivering content not to consumers, but transporting 4K signals between their origination and production houses.

One example is Tata Communications' recent trial of a 4K broadcast system from the Singapore Grand Prix. While the trial was a proof-of-concept demonstration by Tata as the official telecoms partner for the Formula One races, it showed it was possible to technically deliver 4K live content from Singapore to a control room in London. In Tata's case, the operator used a 480Mbps IP pipe to carry the full 4K feed from Asia to Europe. The bandwidth was four times the typical amount that Tata Communications provides to F1 during each race event. According to Tata Communications F1 business MD Mehul Kapadia, the trial proved the commercial readiness of 4K. While obvious challenges remain, such as critical mass of 4K TVs in use, he says that he has been in discussions with a number of broadcasters regarding potential 4K deployments.

Tata Communications F1 business MD Mehul Kapadia

One low-hanging fruit for 4K broadcast is live sporting events such as F1, Kapadia points out. "Something like a 4K service can get them that additional value-edge over their competitors. And they can also command a little bit of a premium."

But even on the backhaul side, 4K is not without its challenges. Kapadia adds that the massive pipe at 480Mbps was necessary to ensure that there were no latency or packet loss issues during transmission. Even then, the traffic was carried over Tata's optimised video delivery infrastructure, Video Connect Network.

"Also, we wanted to do it at much higher throughput to see the impact. As you can imagine, you can do 4K at lower capacities if you wanted to. We chose 480Mbps to see, if we took a reasonable chunk of capacity,

and see its performance, and to be prepared in the future if we want to put more channels over it,” he says. “[And] from an international bandwidth perspective, we have the capacity.” MANAGING COSTS: Lastly, there are emerging solutions that should help telcos cope with the video deluge, as well as offer value-added services for content providers, Bishop adds. These include the high efficiency video coding, or HEVC, and emerging techniques for trickling and prepositioning content for off-peak delivery. “We know HEVC is really going to be the enabler for 4K, and HEVC is going to give about 50% savings from a video codec perspective,” he says. “We are going to have to manage the cost, so lower delivery costs will be a critical element with nonpeak delivery. Now I can start to look at things at off-peak hours, and get things delivered a lot cheaper. This could be an order of magnitude, it could be 10%, it could be 80% savings on their transit costs for content.” The primary benefits for these mechanisms will be to reduce costs, but their ability to accelerate content performance means that they can also be offered to content providers as value added services. All up, the migration to 4K video presents a dilemma for telcos. On the one hand, additional traffic on their networks logically adds to the value of their infrastructure. On the other, there are few opportunities for them to monetise that traffic, a situation made all the more dire because of the investment required to sustain performance. Embracing 4K means committing to infrastructure investment that might end up predominantly fuelling the revenue streams of OTT players. Not doing so risks disappointing customers, who may look elsewhere for their broadband connections. Either way, it’s going to be a tough choice for telcos.
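For a sense of scale only, the sketch below combines the figures quoted in this article: the roughly 15Mbps 4K rate and the ~50% HEVC saving Bishop cites, together with a purely hypothetical share of bytes shifted to off-peak prepositioning (the 60% figure is an assumption for illustration, not something Akamai quotes).

```python
# Illustrative cost arithmetic using the figures quoted above plus one assumed value.
STREAM_4K_MBPS = 15.0    # 4K VOD rate Akamai reports seeing today
HEVC_SAVING = 0.50       # ~50% codec saving Bishop cites for HEVC
OFFPEAK_SHARE = 0.60     # hypothetical fraction of bytes prepositioned off-peak

hevc_rate = STREAM_4K_MBPS * (1 - HEVC_SAVING)   # ~7.5Mbps per stream with HEVC
peak_share_remaining = 1 - OFFPEAK_SHARE         # bytes still delivered in peak hours

print(f"Per-stream rate with HEVC: {hevc_rate:.1f} Mbps")
print(f"Peak-hour delivery cut to: {peak_share_remaining:.0%} of streamed bytes")
```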


Copper: the shape of things to come Interest in vectoring and G.fast continues to surge – and several new developments in the pipeline promise even more ways to wring value out of legacy copper assets. Petroc Wilton reports.

Everything old is new again; or so it seems, at least, in the current deployments of fixed-line access networks. A year ago, CommsDay Magazine reported extensively on DSL vectoring: a crosstalk-cancelling technology set to breathe new life into operators' legacy copper assets, by massively increasing throughput speeds to around the 100Mbps mark on loop lengths of 300-500m. We also reported on G.fast: a more recent innovation that targets speeds of 500Mbps over a 100m copper loop – and up to 1Gbps – by using a much broader frequency range. Alongside these developments has come a subtly nuanced transformation in fixed-line broadband discussions around the world. Operators and some governments are still deploying FTTP networks where practicable; Google Fibre, of course, is frequently cited, and it's often commercially sensible to build fibre access nets in brand new

‘greenfields’ developments. But a glance at the news coming out of Broadband World Forum in Europe through the last two years shows that operators are also striving to squeeze more bandwidth from existing assets, in particular legacy copper infrastructure – avoiding the huge capital expense of ‘rip and replace’ campaigns to overbuild with fibre. And the vendors, of course, are flocking to provide solutions. If there’s a mantra that sums up the current state of the fixed-line market, perhaps it’s the phrase coined by Alcatel-Lucent: “fibre to the most economic point.” So where are some of these copper technologies now in practice – and what’s next on the horizon? Certainly, vectoring seems to be taking off in a big way. It’s become a key part of the roadmap, for example, in Australia’s national broadband network rollout – reconfigured, via a change of government, away from its original FTTP-heavy model to a

broader blend of access technologies – as well as at least one competing private FTTB rollout in the same country. Alcatel-Lucent, one of the most aggressive vendors in the vectoring space, says it’s shipped 9.4 million vectoring lines globally as at October this year; Fast Net News editor and industry veteran David Burstein estimated in September that Huawei, Keymile, Adtran and Alcatel-Lucent between them accounted for at least five million vectoring-capable ports shipped. It’s worth noting, however, that Burstein points out that actual commercial activations are lagging somewhat; or in some cases – as in Belgium’s Belgacom, one of the first telcos to deploy vectoring – are being speed-capped below their theoretical maxima for the sake of stability. And Assia CEO and ‘father of DSL’ Dr John Cioffi highlights the difference between numbers of vectoring-ready lines shipped, and systems actually in active service. “Along with George


Ginis, we have the original patent on this stuff, filed back in 2000 – so we’re big supporters of it, but G.vector [the International Telecommunications Union standard G.993.5, covering vectoring for use with VDSL2 transceivers] is not there yet,” he says. “It can get there, in terms of numbers – and we will see a lot of DSLs being upgraded to... vectored VDSL in future around the world, I have no doubt of that.... I think it’s going to be some time next year.” G.fast, for which the technical standard is expected to be completed by the end of December 2014, is also starting to attract significant attention. “It’s essentially a version of [VDSL2 with] G.vector that just uses a wider broader swathe of [signal processing] bandwidth on a shorter twisted pair – so let’s say [copper pairs used with] FTTB or fibre-to-thekerb,” says Cioffi. “Since the length of the copper is only a couple of hundred metres or less, you can use up to 100MHz, or in some cases even 200-300MHz, of bandwidth. G.fast expands the bandwidth of [VDSL2 with] G.vector, which is only up to 30MHz, and in most cases they only use 17MHz.... but if you triple that bandwidth or multiply it by six, you get G.fast; which requires more signal processing power in the equipment, because you’re running at higher speeds.” “You can go all the way up to 1Gbps with these technologies, and that is feasible as well... if you use enough [signal processing] bandwidth on a short twisted pair, like 100m, you could get all the way up to 25Gbps. But no-one tries to run that fast; nobody needs that kind of speed, and it is difficult, in signal processing, to press the limits like that at those higher speeds. You get a more expensive component the faster you go. The compromise has been G.fast; that’s sufficiently low power and high-speed to get to 1Gbps, it matches the fibre speed at that level.” Again, Cioffi warns that – while vendors are starting to ramp up the hype

around G.fast – broad-scale commercial deployments are still some way off. Nevertheless, Alcatel-Lucent has said its first G.fast solution will be available in the first quarter of 2015. Adtran EMEA and APAC CTO Ronan Kelly says the firm has already been seeing “substantial interest in the technology” across the operator base it’s been testing with, using commercially available chipsets, with

many "hugely impressed with the stability of G.fast" in its early days. In lab trials on that same commercial silicon, Adtran has recorded speeds of up to 600Mbps downstream and 100Mbps up over a single copper loop at 100m, using 106MHz G.fast; ultimately, the technology is set to use up to 212MHz.
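A crude way to see the proportionality Cioffi and Kelly are describing: on a short, clean loop, achievable throughput scales roughly with the spectrum used. The sketch below assumes a flat spectral efficiency purely for illustration (real rates depend heavily on loop length, noise and vectoring gain), yet it lands in the same ballpark as the 100Mbps, 200-300Mbps and 600Mbps-plus figures quoted in this article.

```python
# Rough illustration: copper rate scales roughly with spectrum on a short loop.
SPECTRAL_EFFICIENCY = 6.0   # assumed average bits/s/Hz on a short vectored loop (illustrative)

for name, bandwidth_mhz in [("17MHz VDSL2", 17), ("30MHz Vplus-style", 30),
                            ("106MHz G.fast", 106), ("212MHz G.fast", 212)]:
    rate_mbps = bandwidth_mhz * SPECTRAL_EFFICIENCY
    print(f"{name:>18}: ~{rate_mbps:,.0f} Mbps aggregate (same efficiency assumed)")
```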

Assia's John Cioffi

Kelly also notes some key advantages of the technology over conventional VDSL. "One of the main differences is that G.fast is a time division technology, while VDSL and ADSL... are frequency-division duplex," he says. "What that means is that G.fast allows a lot of flexibility in the types of service that you dimension off it; it's far easier to do symmetric services, but also, for the first time ever, you

can dimension services that are biased in the upstream dimension." Additionally, as both Kelly and a recent Analysys Mason whitepaper point out, the G.fast standard addresses the possibility of reverse powering: drawing on customer premise equipment to power active equipment deeper in the network, which might help to overcome the challenge of deploying larger numbers of higher-density nodes in order to get to the short loop lengths over which G.fast works best.

But G.fast and vectored VDSL are by no means the only game in town these days when it comes to next-gen copper technologies. Alcatel-Lucent, for instance, has been talking up its 'Vplus' vector tech – which extends the frequency range of vectored VDSL to 30MHz and, according to the vendor, is capable of delivering aggregate speeds of 200Mbps over copper loops up to 400m, and 300Mbps on loops shorter than 200m. Huawei's SuperVector offering runs along similar lines, extending the frequency band to 35MHz and bringing in new coding and signal spectrum optimization to push up to 400Mbps inside 300m or 100Mbps within 800m. Adtran's Kelly, on the other hand, describes another alternative – 'frequency-division vectoring' – as a means to better combine the benefits of both G.fast and vectored VDSL2 across a single subscriber line.

"If we look at G.fast performance... if we... 'chop off' the bottom 17MHz so we don't interfere with existing VDSL service... it has a hit on performance. The percentage hit on short loops is quite small, but as you get further out in the network that percentage gets much, much bigger," he says. "That co-existence is the big challenge; and that's where Adtran came to the fore with what we call frequency-division vectoring. Our view was 'what if we could leverage the existing VDSL signal, and combine that signal with a compatible G.fast signal, combine the two technologies together in one solution?'"




"It's actually in excess of what we'd have achieved with full-spectrum G.fast... and to avail of that solution, the CPE a vendor would use has to have a G.fast element in there. So even though you'd deploy that solution from a cabinet location, you're now seeding the network with G.fast-capable CPE. So at a point in the future where you want to move to the distribution point and offer 500-600Mbps-type services, you don't have to replace the customer premise equipment; you simply replace the active equipment in the field. So it makes that migration very, very easy."

Cioffi, meanwhile, also highlights a slightly different type of technology: G.Now. "G.Now is a marketing term that's been promoted by a company called Marvell, a chip company... but it's based on a different standard, called G.hn, for 'home networking'. It's used to transmit data rates up to about 200-500Mbps on in-home twisted pairs, in-home power line, or in-home coaxial network," he explains. "[It] was standardised within the last several years. It does not use vectoring for crosstalk cancellation, because it's in-home – and normally, you don't have that much crosstalk in-home because there's no other twisted pairs there. But otherwise, it's a nice viable standard, and uses a lot of the same technologies as VDSL and G.fast – just no vectoring."

"So what Marvell and some of their customers have decided to do is run it longer than just in the home – to run it to the fibre point, typically in the basement of a building. This is popular in Korea, Japan and generally in Asia... where you have FTTB systems already. They used to be VDSL, they'd run at 25-50Mbps, and what G.now allows them to do is go up to 250-500Mbps on a single twisted pair, so they get a speed increase," continues Cioffi. "That is viable, except you have crosstalk. And since it's not being eliminated by vectoring, Assia's contribution is to... use

our management system, not to completely eliminate crosstalk, but to minimise the effects of it between the different systems. And you can introduce stability that way as well, managing the in-home noises, which is important.” “G.now could in some cases do better than G.fast; in other cases it’ll do worse, depending on the situation. But it’s called G.now because the

home networking components are already available, and it can be deployed more quickly. It's got some technological limitations, but maybe also some advantages; so it's seeing some level of competition right now with the G.fast technologies."

Adtran's Ronan Kelly

It's clear that operators have an increasingly broad arsenal of FTTX access technologies that they can deploy to serve customers across a multitude of scenarios. Situations where it's practical to install nodes very close to premises, for example, might make the best use of G.fast or G.now; medium-length loops might be better served by vectored DSL using more spectrum; longer copper pairs could be better candidates for the more standard 17MHz vectored VDSL deployments. G.now could

offer more rapid deployment options than newer technologies. As Analysis Mason points out in its whitepaper, each will have quite different cost implications depending on applications. And, as Cioffi is careful to emphasise, increasingly high-speed copper technologies are more sensitive to noise generated by in-home appliances and indeed by each other, making the proper and proactive application of management systems (Assia’s own field of expertise) more important – particularly where multiple different technologies are being deployed in close proximity. That said, perhaps the most important factor in the newest generation of copper access technologies is which will reach a critical mass of deployment when. Cioffi, for his part, is hoping that some of the newer techs break the pattern of history. “We’ve jokingly called it ‘Cioffi’s 14year rule’, but it’s been accurate – if you go back even into the seventies, with ISDN which was really the first form of DSL – from the initial moment that something was proposed in standards, it [takes] fourteen years before you see one million deployed,” he says. “This is true in the fixed-line side, the mobile side is faster... if you look at ADSL, it was originally proposed in 1987 and in 2001 they had a million lines deployed. Look at VDSL, it was originally proposed in 1994 – I know, I did it, with some others! – and it was 2008 before we had a million VDSLs deployed. Vectored VDSL was proposed in 2001 – I know, I did that one [too!] – and so 2015 would be fourteen years! And we’re going to get close to a million lines. So it’s still working.” “G.fast was proposed in 2009; so if you add fourteen to that, we’ll have a long wait! So hopefully my rule is broken for G.fast... god bless the industry, we could use that, because it has been a sobering rule.”
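Cioffi's rule of thumb is simple enough to write down directly; the proposal dates below are the ones he gives in the passage above, and the rule just adds fourteen years to each.

```python
# Cioffi's "14-year rule" applied to the proposal dates he cites above.
PROPOSED = {"ADSL": 1987, "VDSL": 1994, "Vectored VDSL": 2001, "G.fast": 2009}
RULE_YEARS = 14  # from initial standards proposal to roughly one million lines deployed

for tech, year in PROPOSED.items():
    print(f"{tech:>13}: proposed {year}, ~1M lines expected around {year + RULE_YEARS}")
# ADSL -> 2001 and VDSL -> 2008 match the article; G.fast -> 2023 is the "long wait".
```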


On the road to 5G: new air interfaces, spectrum bands, and use cases Over the past 12 months, 5G has gained considerable momentum; attracting multiple technology proposals, complete with a full list of acronyms, for competing air interfaces, new use cases, and new traffic profiles. But the need to support a complex ecosystem of applications plus the need for official spectrum allocation means there’s a lot more work ahead, writes Tony Chan.

As recently as a year ago, the mention of 5G usually resulted in someone rolling their eyes in dismissal. After all, 4G was only beginning to get traction globally, with some markets still struggling with licensing and spectrum auctions. It didn't help that there was very little concrete detail at the time about 5G. Sure there were some vague notions, conjectures, inferences, proposals even; but there were no real technical specifications, let alone actual prototypes. Fast forward 12 months, though, and a solid story about what 5G should and could be has emerged. Both vendors and operators have announced specific trials. Huge research and development initiatives

have been launched by governments in Asia and Europe.

pre-commercial prototypes of key systems and components.

Today, there’s a plethora of information, white papers, corporate strategies – not to mention press releases – on the subject of 5G, complete with an emerging list of acronyms. There are even concrete timelines set by operators in Japan and Korea to launch some form of 5G by as early as 2018.

“If you look at the time line, a year ago, Samsung highlighted some millimetre wave work that they had been doing [which] they claimed that would be 5G,” says Alcatel-Lucent wireless business line CTO Michael Peeters. “Of course when the Koreans say something, in that region... everybody is very competitive; so the Chinese jumped on the boat, and started talking about tens of millions of connected things, and 10Gbps, and the Japanese came onboard, and then suddenly you have a huge amount of hype.”

So while everyone agrees 5G is years away from being an industry standard, the marketing machine behind the concept has already shifted into high gear. Vendors, operators, and regional governments are jostling to stake their claim in 5G, putting forth their own visions, proposing technical solutions, and even showing off

USE CASES: While the marketing machines of different vendors and governments have sounded their battle alarms and engaged weapons systems, there is surprisingly little dissension when it comes to many aspects of 5G. For an industry that has seen arguably the fiercest standards war in recent history when formulating 2G and then 3G wireless standards, it seems everyone has agreed not to disagree in defining what 5G should do, and what it should look like as a system. Even as Huawei and ZTE claimed industry firsts in specific areas such as massive multiple-in, multiple-out antenna prototypes, an overarching consensus has emerged that 5G should meet certain performance targets, that it should support certain types of application behaviours, and that it will probably need certain spectrum resources. The industry calls these use cases.

All the major vendors have their own white papers on these use cases, but they are surprisingly similar when it comes to terminology such as application requirements, and when it comes to defining specific network characteristics such as latency and cell density. Some of the use cases are pretty straightforward, calling for bigger capacities and lower latency performance. Some are less obvious, like the need to support healthcare applications, which presumably require traffic prioritisation and high reliability, or sensor networks, which require more efficient protocols to conserve energy on the device end.

Perhaps the most comprehensive exploration of 5G scenarios comes from NTT DOCOMO, which has managed to convince many of the major vendors to participate in arguably the most extensive trial of potential 5G systems to date. In addition to new ways to interact with the internet, like virtual reality, and new communications types like sensors, M2M and internet of things, DOCOMO envisions applications for healthcare and safety, education, transportation, home automation, all in a world where its network traffic "would be at least 1000-fold larger compared to 2010," and "services will be more diversified."

DOCOMO's vision, echoed by most vendors today, calls for a 5G system that will exceed today's networks by a wide margin in five areas. According to DOCOMO's target, 5G should achieve higher systems capacity (1000-fold per square kilometre), higher data rates (100x typical speeds, even at high mobility), support massive connectivity (100x more devices, even in dense areas), have reduced radio access network latency (less than a millisecond), and have a reduced cost structure with high energy efficiency and reliability.

BEYOND MOBILE: The industry has learned from 4G that defining the requirements of a network is perhaps more important for success than the underlying technology itself, according to Nokia Networks VP of research and technology Lauri Oksanen. "We feel it is very important to agree on a target for 5G, so the industry has a good idea what should be developed for 5G," he says. "We have good examples and bad examples from past generations. For 3G, it was not really clear what the main focus and the use cases would be and it took time to be successful... it wasn't successful until we added HSPA, and that was when it was able to provide the performance that people wanted to see – that was mobile broadband for accessing the internet. When 4G was developed, it was clear that that was the main use case, and everybody agreed that it had to be for mobile broadband and the internet, and we developed the standardisation for 4G with other vendors and it resulted in very good specifications… as you know it has been very successful in the market with good performance."

"So we would like the same for 5G as 4G, in the sense the industry has a good understanding of the performance requirements and the use cases and develops a good standard and good system, rather than just developing something for the sake of developing something new."

For DOCOMO, the path to achieving those 5G targets will consist of implementing a coverage layer made up of an evolved version of 4G, and a second layer using a completely new radio access technology for adding capacity.

DOCOMO is now in the process of testing a number of potential technologies with six vendors, including Alcatel-Lucent, Fujitsu, NEC, Ericsson, Samsung, and Nokia. Each vendor trial targets different aspects of DOCOMO’s vision: Alcatel-Lucent’s trial consists of experiments on candidate waveform technologies to support mobile broadband and M2M services; Fujitsu will experiment on “super dense base stations;” NEC on “time domain beamforming with very large number of antennas;” Ericsson on new radio interface concept and Massive MIMO; Samsung on super-wideband hybrid beamforming; and Nokia on a “superwideband single carrier transmission and beamforming.”

4G WILL BE A BIG PART OF 5G: While DOCOMO’s vision is just that – a vision – it is representative of a larger industry-wide accord for 5G, which sees an evolution of 4G networks with the inclusion of potentially new air interfaces to address specific 5G use cases. One reason for this, according to Nokia’s Oksanen, is that 4G is pretty hard to beat. “For the basic usage of mobile broadband and wide area coverage, LTE is a very good system. It’s not easy to make something that is clearly better


than LTE. There isn’t any magic technology out there that will significantly improve over what LTE can do over a large area network,” he says. “So we believe LTE will definitely be a big part of 5G. We don’t think there will be a new air interface that will surpass or make LTE obsolete. There’s a lot of work ongoing for LTE evolution and that will stay there for the foreseeable future.” At the same time, Oksanen sees the need to evolve LTE to meet low latency and low power profiles, including the need to include Wi-Fi and its evolved versions in the 5G standards. And, like DOCOMO, he also sees the need for a higher frequency layer for adding more capacity into networks. This layer is where a brand new air interface will be needed due to the propagation characteristics of high frequencies, as well as the availability of bigger blocks of spectrum. “Eventually, we believe capacity requirements on network will grow, so we will need more spectrum. The only place you can find significant amount of spectrum is above 6GHz, so we believe that we will have systems operating on spectrum up to 90GHz… eventually, that spectrum will be needed. Maybe not in the very beginning, but eventually,” he says. “There is no one single air interface like there was for LTE. This will likely be a more modular approach.”

After all, wireless networking environments today take similar approaches across 2G, 3G, 4G and Wi-Fi networks. Operators use 2G for legacy M2M applications and roaming, 3G for roaming and low speed data, 4G for mobile broadband, and Wi-Fi for handoff. "2G and 3G to some degree will also be part of 5G. We have spoken to

operators about whether they plan to turn off 2G, and we see that it will be present in most markets in 2020," Oksanen says. "Also we expect Wi-Fi to continue to be there; it is also evolving and also needs to be included in the future 5G landscape."

Nokia Networks VP of research and technology Lauri Oksanen

In these terms, 5G would be a standard that encompasses all existing technologies, plus new air interfaces in new spectrum bands to add extra capacity.

KITCHEN SINK APPROACH: There is obvious logic to the approach of including multiple technologies across multiple spectrum bands, but not everyone agrees it is the best approach – at least not at the beginning, and not for new higher frequency bands. "It's an 'everything but the kitchen

sink' approach. That is like saying that 'we actually don't have a specific view for what 5G is going to be'," says Alcatel-Lucent's Peeters. "So I disagree strongly that everything going together is going to make 5G, because then you are actually saying, 'well, you have to implement everything'... that's not true. There is a very clear, and I think defendable, case to go to 5G that doesn't require millimetre wave technologies. It does, however, depend on a new air interface."

In his opinion, 5G will be drastically different from 4G, which he says accomplished a single application – allowing users to use broadband away from their home or office. "Operators have swapped out entire networks, and invested huge amounts of money in new infrastructure, to allow people to do more broadband," he says. "5G is about things that you cannot do today; they are not normal extensions of what people are doing in the home today. And the number one case that can be made is for things that require M2M and very long battery life. If you think about things that require M2M and long battery life, you'd come to the realisation that it cannot use the neutral model for broadband take up."

And because people don't upgrade their home automation system or their cars every six to nine months just so they can have the latest wireless technology (unlike smartphones), 5G necessitates a gradual, incremental business case that adds new capabilities to existing 4G services. "For an incremental business case to work, we see two things... that will define 5G. One is an air interface, not an experimental one for millimetre wave, but... in the normal frequencies. For me, really below 2GHz, a coverage frequency, like a refarmed 3G frequency – not a 4G one because 4G is a perfect one for mobile broadband, and not a 2G one because there are going to be leg-



The second component of Peeters’ vision is that the new air interface will have four distinct network layers addressing different application requirements, ranging from sustained high bandwidth downloads and bursty traffic, to a signalling-optimised layer and a low-energy layer for sensors. NEW AIR INTERFACE: The challenge with Peeters’ plan is that it requires a completely new air interface, which highlights the only point of real contention on the road to 5G. According to Peeters, there are three major camps backing three different air interfaces in the lower spectrum bands below 6GHz. These include an evolved version of LTE’s orthogonal frequency division multiplexing (OFDM), a new technology called filter bank multicarrier (FBMC), and a third called universal filtered multicarrier (UFMC). Alcatel-Lucent is testing this last with DOCOMO – which, in its own white paper, has highlighted another method called non-orthogonal multiple access.
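For readers wondering what actually separates these waveform candidates, the rough numerical sketch below (Python/NumPy) builds one plain OFDM symbol and a crudely sub-band-filtered copy – loosely in the spirit of UFMC – and compares how much power each leaks outside its allocated band. All parameters are invented for illustration; this is not the Alcatel-Lucent/DOCOMO test bed, merely a pointer to why filtered waveforms appeal where neighbouring spectrum users must be protected.

# Illustrative only: one OFDM symbol versus a crudely filtered copy.
# Invented parameters; not any vendor's test bed code.
import numpy as np

n_fft = 256                                   # subcarriers in the symbol
half = 30                                     # 60 occupied subcarriers centred on DC
rng = np.random.default_rng(1)

freq = np.zeros(n_fft, dtype=complex)
idx = np.r_[1:half + 1, n_fft - half:n_fft]   # occupied bins (DC left empty)
freq[idx] = (rng.choice([-1, 1], idx.size) + 1j * rng.choice([-1, 1], idx.size)) / np.sqrt(2)

ofdm = np.fft.ifft(freq)                      # plain OFDM time-domain symbol

# Sub-band filter: a windowed-sinc lowpass slightly wider than the allocation.
taps = 73
n = np.arange(taps) - (taps - 1) / 2
cutoff = (half + 4) / n_fft                   # normalised cutoff, cycles per sample
h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(taps)
filtered = np.convolve(ofdm, h)               # filtered symbol with smoothed edges

def oob_fraction(sig, pad=8):
    # Fraction of the signal's power falling outside the allocated band.
    spec = np.abs(np.fft.fft(sig, pad * n_fft)) ** 2
    f = np.fft.fftfreq(pad * n_fft)
    in_band = np.abs(f) <= (half + 1) / n_fft
    return spec[~in_band].sum() / spec.sum()

print("out-of-band power fraction, plain OFDM: %.4f" % oob_fraction(ofdm))
print("out-of-band power fraction, filtered:   %.4f" % oob_fraction(filtered))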

At least one other proposal may be in the market. Huawei has discussed a technology called semi-orthogonal multiple access, but it is unclear whether it is a new air interface or just a variation in definition of existing standards. “We do have ideas, and we know that others have ideas. We don’t believe this will become a bottleneck. We have been working for a few years, and we believe we have a quite good concept and technology for these areas,” says Nokia’s Oksanen. “Of course there already are several different ideas around, and people have published [or] are thinking about new modulation methods. We think OFDM is pretty good actually, or some variation of it, but there are ideas around for new things.”

For Peeters and Alcatel-Lucent, the bet is on UFMC, but the company is not ignoring other approaches. “We are building a test bed for NTT DOCOMO which is in fact a real-time transmission system working at 2.6GHz, and it has a receiver part as well. We are using this and helping NTT DOCOMO to do their UFMC; we are also building it so it can be used to benchmark the FBMC and the traditional OFDM,” says Peeters. “The work is so we can quantify the performance of the different proposals on a single platform against a number of channel conditions defined by the 3GPP – so that there will be, not an apples-to-oranges comparison, but an apples-to-apples comparison of all of the air interface technologies.”

While Peeters admits there is obvious uncertainty regarding an air interface, he, like Nokia’s Oksanen, is nevertheless confident that 4G’s success will set an example. “What I’m expecting is that considering the success of 4G... people will have realised that there is an advantage in defining things in a single standard,” says Peeters. “I strongly believe that there will only be a single 5G standard, which will have 4G as a supplementary downlink, or possibility for carrier aggregation.”

“The fight won’t be specifically about technology within the 3GPP or cellular community, but it will be more about what is the most efficient way to use spectrum between cellular and mobile, what is the role of Wi-Fi, what is the role of WiGig, and what is the role of licensed and unlicensed spectrum. I think the battle will be for the unlicensed spectrum.”

TOO SOON FOR MILLIMETRE? Peeters adds that this new air interface will effectively meet all the parameters of 5G. Specifically, it will alleviate the need to go to millimetre wave technologies, which he feels won’t become necessary until current spectrum resources run out around 2022 or 2023. “If you look at spectrum exhaustion profiles around the world, if you take 4G, LTE and add LTE-Advanced, you add small cells, what you see… is that mobile broadband will probably... take us to 2022,” he says. “And this includes the current technology and essential evolutions, which includes some higher order MIMOs and other things we are working on today.” “So mobile broadband by itself is not sufficient as a business case for an operator to say ‘I’m going to start building a new technology’.”

The other argument is that, because millimetre wave technologies offer basically more mobile broadband, their inclusion at the onset of 5G standardisation risks bringing in additional distractions. Those distractions include the need for yet more air interfaces. According to Peeters and Oksanen, the higher frequencies will require a different approach to radio design because of millimetre wave’s propagation characteristics and the availability of large blocks of spectrum – up to 1GHz, for example. (For a sense of scale, filling a 1GHz block at LTE-style 15kHz subcarrier spacing would take roughly 66,000 subcarriers.) “Up to about 10GHz, maybe 20GHz, you can use the same air interface, maybe with slightly higher carrier spacing. But if you go higher, it doesn’t make sense to have pushed-out carrier spacing because you will need way too many carriers – which will drive up complexity,” says Peeters.



“Secondly, narrow carriers require very accurate and stable frequency and carrier synchronisation, and the higher you go in frequency, the harder it is to build systems that actually are that accurate. And in fact, those are not necessary because you have so much more spectrum.” CRITICAL CORES: Just as important as the work on new radio interfaces will be work on the core network, says Oksanen. “We will need to somehow enable a system with multiple radios that is manageable for the operators and also provides good and simple service. We have from the beginning put a lot of effort into what we call architecture research: how to manage mobility, how to manage deployment of these networks, how to manage connections, how to manage security, so that it will be as easy as possible for operators,” he says. “A few interesting things that are happening already... in the core network... before 5G [include] network virtualisation and software-defined networking.” “We are now doing research on RAN virtualisation, which is not quite so straightforward because radio access networks are by definition distributed rather than centralised.” Ultimately, Oksanen and Peeters believe that virtualising the core network will be the key enabler for 5G.

On the one hand, 5G needs to support different application profiles, which necessitates a virtualised network core to handle the different traffic types. On the other, 5G will sooner or later incorporate new frequency bands – and, if those frequencies are in the millimetre wave spectrum, a new air interface as well – which a virtualised infrastructure will be needed to support efficiently.

“What I see is a software defined network, where you have the control and data plane,” says Peeters. “We can extend that same idea to the over-the-air interface, where you have your control, your exchange, your low latency packet exchange on the 5G air interface below 6GHz. Then you can augment it with supplementary uplinks and downlinks with millimetre wave... purely data-plane oriented for high speed, super high capacity downloads.” SPECTRUM SCHEDULES: The last component of the puzzle is spectrum. Nokia’s Oksanen points out that even if 5G gets standardised, the industry still needs to find spectrum to deploy the new technology in.


“There is new spectrum below 6GHz, for example around 3.5GHz, which many countries are thinking of allocating... [but] it may be that these will already be used for LTE, so there [may not be] more spectrum available under 6GHz for 5G,” says Oksanen. “That means we will need a new band to be identified for mobile by the ITU, so we will have to rely on the world radio conference, which happens only every three years. Next year, 2015, there will be a world radio conference; but because of the relatively complex process, we will not be likely to be able to discuss spectrum above 6GHz. [We] will only be able to prepare the discussion for the next WRC conference, which will be in 2018, which hopefully will be able to develop new spectrum in the higher frequencies.” Peeters is less concerned about spectrum, because he believes 3G spectrum can be refarmed for initial 5G services. “There are a number of places where you can find spectrum... when you are talking about pure coverage, there is some spectrum that is used for legacy technologies today. 3G spectrum is a perfect example of spectrum that is not being used very efficiently today.” At the end of the day, Peeters believes it is possible to meet some of the ambitious timelines outlined by operators like NTT DOCOMO and SK Telecom. “I think 2018, 2019 may be reachable if we focus on the strategy,” he says. “It is possible to have pre-standard 5G trials if the community focuses on the right things. If we don’t try to push massive MIMO and millimetre wave and all these things into the standard all at once, and only focus on what we can add to 4G, I think it is possible.”


Apple bites into mobile payments Apple wants a slice of the lucrative mobile payments market, but will its plans come at the expense of carriers? Geoff Long investigates the Apple system and some carrier alternatives

The idea of paying for things through a mobile phone has been around for quite a few years now but – a few developing markets aside – has never really taken off. Enter Apple, which has a history of disrupting markets and popularising existing technologies by making them easier to use. It did it with the iPod, it did it with the iPhone and now it wants to do the same for mobile payments with its Apple Pay system. The leading questions are: will it succeed, and what will it mean for carriers? According to Capgemini director and industry practice lead for banking and payments Phil Gomm, it will likely succeed. And for carriers, that could provide a boost in terms of carriage services – but could also bypass them when it comes to transaction fees. The Apple Pay mobile payments system was announced as part of the iPhone 6 and has now launched in the US. However, Apple has yet to announce its strategy for other markets around Asia. Gomm likens Apple’s service to that of other ‘over-the-top’ services that have frustrated carrier efforts to capture more value.

He says that while Apple Pay needs connectivity, it doesn’t need a commercial arrangement with the telco beyond the provision of data carriage. “It means the telco has a role to play in terms of network and carriage of the transaction, but it’s quite difficult for them to participate in a clip on the ticket in terms of participation in the value chain,” Gomm says. He adds that previous attempts by telcos to get into the mobile payments space have been frustrated by commercial challenges and banking issues, and as a result the payment system providers have largely worked around the telcos and devised alternative approaches. Gomm, who is also involved with the Capgemini Payments Centre of Excellence in Europe and a contributor to its World Payments Report, says the consultancy firm has been tipping Apple’s move into NFC and mobile payments for some time.

In the US market, Apple has got key banks and card providers on-side. It has negotiated a small percentage of transaction fees, but in general Apple Pay is seen as complementary to the financial services companies. According to Gomm, the actual payment transaction is not the battleground on which the success or otherwise of mobile payments will be won. Rather, the key will be the value-added services overlaid at the point of sale to give customers an incentive to use a phone for payment. And key to that could be data and analytics. “It’s all about customer analytics, it’s about knowing intimately your customer, having a customer-centric approach and being able to present in real time at that point of payment, a value proposition that’s attractive and encourages the consumer to be able to use your service. And I think we’ll find that’s the way Apple is thinking also,” Gomm says. He expects Apple will be working on a commercial basis with major retailers and their participating schemes and banks in order to provide an attractive value proposition back to iPhone users.


“In order to encourage widespread take-up, it’s the overlay service that you put on top that encourages your customer to use that transaction,” he continues. “So you’ll be happy to take out your phone if you know you’ll be getting a discount on the transaction, if you know you’re getting a few points for a loyalty programme or you know you’re getting a free cup of coffee or some other incentive.” Gomm also suggests that Apple’s entry into the mobile payments market will likely mean that NFC-based systems will become a more widespread standard. “Apple is never the first, but it comes when the market is ready and it creates the opportunity. What the direction statement does give us is confidence that NFC is the preferred protocol. There have been some barriers, but those barriers are going away,” he notes. COMPETITION: How telcos fare in the mobile payments space varies from market to market: in some cases they have proved a dominant force, while in others they are being left behind. Interestingly, telcos seem to do best in mobile payments in countries where other financial infrastructure is lagging. One of the most successful carrier implementations of mobile payments is the M-Pesa system created by Kenyan mobile operator Safaricom, with help from Vodafone. Its service provides an e-wallet on a mobile phone that can perform many of the same functions as a traditional bank: transfers between users, transfers between businesses and consumers, and even cash withdrawals at designated locations. M-Pesa boasts more than 17 million users in Kenya and has also expanded into Tanzania, Afghanistan, South Africa and India with varying degrees of success.

SingTel Group has also made significant strides in the payments space of late. In its home market of Singapore in June this year, SingTel teamed up with Standard Chartered bank to launch the “Dash” mobile money system, which has an app on both Android and Apple devices and allows consumers to load cash onto their mobile phone and use it to make payments. More recently in Australia, SingTel subsidiary Optus has announced a system that will compete more directly with Apple Pay – the first move by an Australian telco into the mobile payments arena. Optus teamed up with Visa and Heritage Bank for “Cash by Optus”, which will allow its customers to pay for everyday purchases below $100 with their Android-based smartphones.

The system uses near-field communication and Visa payWave technology via a payment app on the phone, and will be aimed at consumers wanting to make small purchases such as lunch, petrol and groceries. Optus mobile marketing VP Ben White points out that there are already nearly one million Optus postpaid customers with compatible devices who could download the app, get the SIM and make purchases using Cash by Optus. “We’re the first Australian telco to launch a mobile payments app, and because it’s compatible with many of the latest Android devices and can be linked to any Australian bank account, we’ve got a huge opportunity to bring this technology to a lot of people,” White says. Optus has not revealed how much of a cut it gets from transaction fees, but according to White, the main reason the carrier is offering the service is to differentiate itself from other carriers and to encourage more use of smartphones by its customers.

He points out that Optus had been working on the mobile payments system for a number of years before its introduction. It works like a Visa prepaid debit card: customers can load up to $500 at any one time and make contactless purchases under $100 anywhere that accepts Visa payWave. Users will also have to get an NFC-enabled SIM, which Optus will send on request. White casts the move as just the first step in Optus’ foray into mobile payments, with future plans likely to look at any type of account mechanism that stores value. “As technology and communications converge, Cash by Optus is a natural evolution for Optus. This is our first step towards launching future contactless applications in areas like public transport. Australians never leave home without their mobiles, so it makes sense to build this technology into smartphones now,” White says. Heritage Bank is responsible for the banking aspect of Cash by Optus. John Minz, CEO of Heritage Bank, says the launch of Cash was good for both the bank and the carrier. “Heritage Bank is a relatively small financial institution, but we’re already recognised for our ability to deliver really creative and effective payment solutions for our corporate partners, particularly in the area of prepaid transactions,” Minz says. Visa head of emerging products and innovation George Lawson says the rapid growth of contactless payments in Australia has set the right conditions to enable a mass shift to mobile payments. There were over 64 million Visa payWave transactions in September 2014, making Australians among the leading users of contactless payment systems.
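The stored-value rules described above – a $500 load ceiling and a $100 cap per contactless purchase – map onto a couple of simple checks. The toy sketch below is purely illustrative and bears no relation to the actual Optus, Visa or Heritage Bank implementation.

# Toy model of the stored-value limits described in the article.
# Names and logic are invented for illustration only.
class PrepaidWallet:
    MAX_BALANCE = 500.00        # maximum stored value at any one time
    MAX_CONTACTLESS = 100.00    # per-transaction contactless limit

    def __init__(self):
        self.balance = 0.0

    def load(self, amount):
        if self.balance + amount > self.MAX_BALANCE:
            raise ValueError("load would exceed the $500 stored-value limit")
        self.balance += amount

    def pay_contactless(self, amount):
        if amount > self.MAX_CONTACTLESS:
            raise ValueError("contactless purchases are capped at $100")
        if amount > self.balance:
            raise ValueError("insufficient balance")
        self.balance -= amount

wallet = PrepaidWallet()
wallet.load(200.00)
wallet.pay_contactless(35.50)   # e.g. lunch or petrol
print("remaining balance: $%.2f" % wallet.balance)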



Telcos and datacentres: where to from here? Both the telecommunications and datacentre industries are going through some transformative changes. Richard van der Draay reports on some of the key opportunities.

Telcos and datacentres have been inextricably linked for years. Many large telcos own and operate their own facilities; others co-locate racks of networking equipment in third-party datacentres, a significant industry in its own right; plenty adopt both strategies. But now, both the respective industries and their areas of overlap are in the throes of rapid transformation – with associated opportunities for new revenue streams. Beyond general advances in technology, specific trends around cloud, mobility, software-based applications and connectivity have prompted the various players to reinvent their business models to varying degrees. For instance, many telcos are reworking their datacentre strategy – whether that applies to co-located hardware, or the facilities they own and operate themselves – in a tactical shift towards offering cloud hosting services. Some commentators argue that the next logical step for these enterprises would be to deliver software-as-a-service and thus add value to their existing offerings. But others contend that telcos must beware of overreaching – and build in safeguards to ensure they retain a differentiated business portfolio, without losing focus on their existing core strengths. And several key industry players see opportunities for telcos in the datacentre space that go beyond hosted service offerings. The datacentre market itself has had its share of ups and downs. In the 1990s, datacentre operators built excess capacity because they predicted massive demand for datacentre space – but this never eventuated, and subsequently many were bankrupted.

One such firm was Exodus Communications, which at that time was hosting the websites of internet giants such as Google, Microsoft and Yahoo. Yet in September 2001 the firm filed for bankruptcy, having suffered from a combination of overexpansion – which had left it with a raft of empty facilities – and a collection of unpaid bills, courtesy of some of its less successful dotcom customers. However, by 2007, Equinix found that global demand for colocation services had jumped 12.5% over the previous year – set against a supply increase of just 4.2%. At that time, the firm expected growth rates for the colocation business to hit 20% each year until 2010. Today, the sustained growth in the sheer volume of digital data and the proliferation of online applications has led to strong and increasing demand for storage options. As a result, many enterprises have elected to invest in their own datacentres. But building and operating these facilities, particularly in light of the rapidly growing demands placed on them, calls for significant investment in resources. And modern datacentre builds are subject to strict guidelines such as those laid out by the Telecommunications Industry Association in TIA-942, a US national standard specifying the minimum requirements for the telecommunications infrastructure of datacentres and computer rooms, including single-tenant enterprise facilities and multi-tenant internet hosting facilities. However, the next dramatic shift for the datacentre industry, where telcos are concerned, is likely to centre on ownership of the fabric of connectivity itself, according to Pacnet ANZ CEO Nigel Stitt.

“The next big leap would be in building an interconnected ecosystem and owning the underlying fabric of the connectivity between different cloud platforms,” he says. “I see [that] telecommunications companies will become the fabric that interconnects hybrid, private and public cloud platforms... therefore, telcos need to be an ‘on net’ provider to all datacentre locations in their [core] marketplace or build and operate their own facilities.” Stitt goes on to suggest that telcos will need to adopt innovative bandwidth models that allow clients to interconnect ‘on demand’, adding that “these platforms will be built and driven by technologies based on software-defined networking.” Bell Labs president Marcus Weldon similarly sees cloud connectivity as presenting key new business opportunities linked to the datacentre for telcos. He argues that, from a purely technical point of view, datacentres are to a large degree evolving into facilities approximating IP networks, in the sense that they are starting to use, for part of their internal operations, the same internet protocols that have always been at the core of service providers’ main business propositions. “IP is a new opportunity for telcos to extend their business because the techniques that they use in their networks are the ones that are [now] entering the datacentre, which means that they can now couple to those clouds seamlessly [themselves],” notes Weldon, adding that as a result, telcos could essentially offer automated cloud connectivity and VPN services – which would effectively present them with untapped new business opportunities.



On the other side of the coin, meanwhile, datacentre operators themselves are looking for new points of differentiation in an increasingly crowded field. For Stitt, particularly as power and cooling become increasingly commoditised, the answer lies in the ecosystem of players connected within the datacentre environment, and in monetising the movement of data. But he argues the big question hinges on who is better placed to capitalise on these opportunities — datacentre players building their own network capabilities, or telcos building datacentres and combining that focal shift with their already-proven network capability. “We would argue that it is the telco who can build a more sustainable model,” he says. For the moment, Stitt emphasises that traditional sales in the datacentre space continue to be the leading revenue areas. “However, the type of customer we typically attract also drives good network revenues,” he adds. “As clients start to understand that the network layer and the ability to use this securely and ‘on demand’ is essential to any cloud strategy adoption, we will see this portion becoming a larger part of the revenue piece.” Of course, there’s also the aforementioned opportunity for telcos to offer their own cloud or cloud-based services. “Some telcos may see their strategy in the cloud world as going further up the OSI stack, offering applications and infrastructure ‘as a service’,” notes Stitt. But he warns that this model needs to be viewed with caution, saying it pushes telcos into a highly non-traditional space and moves them away from the focus on their core service of owning the data flows between the various cloud and application providers. “With the competition having well-established business models such as Amazon Web Services, Microsoft’s Windows Azure and Google Cloud Platform, a better model could be to focus on the connectivity piece and innovate in how bandwidth can be used, rather than building cloud platforms,” says the Pacnet ANZ CEO, adding that good industry partnerships could also be an option for telcos keen to productise applications and cloud services.

VIRTUALISATION: Stitt also highlights the advent of virtualisation of network devices as a key disruptor in the field, both for dedicated datacentre players and telcos, enabling CIOs to take a much more dynamic approach to managing infrastructure. “Coupled with a portfolio approach to applications, we are going to see lots of challenges for operators as we try to manage very flexible loads in the datacentre and on the network.”

Virtualisation is certainly a hot topic for Macquarie Telecom, a telco that owns and operates its own datacentres. “Networking is a rapidly changing area of IT, and much of the progress in network infrastructure has moved from Ethernet, cable, and fibre optic networks to virtualisation,” says the firm’s principal technical consultant Dayle Wilson. “Network virtualisation is the idea of shifting customer segregation from a virtual local area network and hardware network devices into software; this is why the term software-defined networking is often used to describe network virtualisation.” Wilson says one of the benefits of virtualisation is the ability to easily move servers elsewhere in the case of incidents such as underlying hardware failure. “Because everything is stored in a database, functionality has shifted from a hard-coded configuration on a switch to a piece of software connected to a server,” he adds.

“This speeds up the failover process from an administrative perspective, while also driving the trend of single tenancy.” “Of course, virtual networks still live on underlying physical hardware,” notes Wilson. “Therefore, the decision to implement a virtual network will depend on scale, and the desired amount of throughput (and the underlying hardware) will ultimately limit how many virtual customers can live on that piece of software.” Wilson says that this limit is generally either 1Gbps or 10Gbps, and that once that threshold is reached it may be necessary to purchase physical equipment and use dedicated cabling for each customer. The Brocade paper ‘Industry Trends and Vision: Evolution toward Datacentre Virtualization and Private Clouds’ builds on some of these themes. The document tackles some issues involving design considerations around network convergence, which is often positioned as the network architecture for server virtualisation. Brocade says that network convergence was originally applied to the convergence of voice, video, and data on the same telco network, but that when network convergence is applied to the datacentre, it refers to transporting IP network and block-level storage traffic on the same physical network. “Although this is a separate topic from virtualisation and cloud computing, some have positioned it as essential for virtualisation to succeed,” the authors note. “Of course virtualisation and cloud computing cannot exist without datacentres and the physical hardware they house,” adds Brocade, echoing the point made by Macquarie Telecom’s Wilson. “To support virtualisation, the datacentre architect has to harden the network against failures and make it adaptable and flexible without disrupting traffic – and do this while continuing to support existing datacentre assets.”
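Wilson’s point boils down to two ideas: tenant placement is just data that software can re-evaluate, and the throughput of the underlying hardware still bounds how many tenants fit before dedicated equipment is needed. The sketch below models that logic with invented names and numbers; it is not anything Macquarie Telecom actually runs.

# Toy model: tenant segregation and failover live in software, but each
# physical host's throughput still caps how many tenants it can carry.
# All names and numbers are illustrative.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity_gbps: float          # e.g. the 1Gbps or 10Gbps ceiling Wilson cites
    tenants: list = field(default_factory=list)

    def load(self):
        return sum(t.demand_gbps for t in self.tenants)

@dataclass
class Tenant:
    name: str
    demand_gbps: float

def place(tenant, hosts):
    # Put a tenant on the first host with spare throughput, or report that
    # dedicated hardware and cabling are needed (Wilson's threshold case).
    for host in hosts:
        if host.load() + tenant.demand_gbps <= host.capacity_gbps:
            host.tenants.append(tenant)
            return host.name
    return "no capacity - provision dedicated hardware/cabling"

def fail_over(failed, hosts):
    # Because placement is just data, recovering from a hardware failure is
    # a re-run of the placement logic rather than re-cabling switches.
    stranded, failed.tenants = failed.tenants, []
    return {t.name: place(t, [h for h in hosts if h is not failed]) for t in stranded}

hosts = [Host("host-a", 10.0), Host("host-b", 10.0)]
for t in [Tenant("tenant-1", 4.0), Tenant("tenant-2", 3.0), Tenant("tenant-3", 6.0)]:
    print(t.name, "->", place(t, hosts))
print("after host-a failure:", fail_over(hosts[0], hosts))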


WE’RE BETTER WHEN WE WORK TOGETHER At Telstra Wholesale, we’re focused on the quality of our working relationships and the service we provide. Our number one priority is improving the way we work with our customers. By putting our customers at the centre of everything we do, we’re determined to create positive and rewarding customer experiences. telstrawholesale.com.au


A quantum leap Unimaginable amounts of data are being transported securely across the world’s networks on a daily basis. But William van Hefner reports that few seem aware of a coming quantum leap in technology that promises to render all modern forms of secure communication obsolete.

Advances in physics over the past ten years have not only led to a greater understanding of how our universe works, but have also yielded practical breakthroughs in fields such as quantum mechanics. In the quantum world, sub-atomic particles exhibit properties that simply defy the physics that governs objects larger than a single atom. Quantum states can be correlated instantaneously across vast distances. Particles can appear to exist in multiple states and locations simultaneously. Superconductors can transport energy with 100% efficiency. And what may sound to most like science fiction has already moved from the realm of purely theoretical physics to the labs of semiconductor fabricators in Silicon Valley.

Much like the race that propelled us from atomic theory to working atomic reactors in mere decades, today’s race to capitalize on bleeding-edge science will almost assuredly see practical outcomes within most of our lifetimes. Laboratories in the United States, China, Switzerland and elsewhere are regularly upstaging one another with demonstrations of practical applications of quantum technology, including the use of fibre optic cables to transport quantum information over distances exceeding 100km.

QUANTUM TRANSPORT: The first stage of applying quantum technology to existing telecommunications networks has already begun. Indeed, the technology is in use today by a small number of financial institutions and governments, which are leveraging its ability to provide what is, in principle, unbreakable encryption. A hybrid quantum network – using a point-to-point fibre optic link to transfer traditional data packets as well as quantum information – is the basis for the most common form of quantum transport in use today. Companies such as IDQ in Switzerland are already manufacturing off-the-shelf hardware to build these private and completely secure telecommunications systems. Quantum bits (known as qubits) travel along the same fibre optic lines and provide the encryption keys with which the normal data packets are encoded. Unlike with traditional encryption, the laws of physics make it impossible to intercept the quantum bits without physically altering them – and so revealing the intrusion. The idea that the simple act of observation by a third party could alter matter was a concept that even Albert Einstein struggled to accept; he famously dismissed such seemingly implausible behaviour as “spooky action at a distance”. Today, not only have these “spooky” actions proven to be very real, observable and practical to re-create, but they are destined to revolutionize both the telecommunications and computing industries.
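Why interception is detectable comes down to the eavesdropper having to guess how each qubit will be measured. The toy simulation below sketches the logic of a BB84-style key exchange, the protocol family behind most commercial quantum key distribution. It is a purely classical illustration with invented names and parameters – not code for IDQ’s or anyone else’s hardware – showing that an interceptor leaves an error rate in the sifted key that the legitimate parties can check for.

# Classical toy simulation of a BB84-style key exchange.
# Illustrative only; names and parameters are invented.
import random

def random_bits(n):
    return [random.randint(0, 1) for _ in range(n)]

def bb84_error_rate(n_qubits=2000, eavesdrop=False):
    # Alice picks random bits and random measurement bases
    # (0 = rectilinear, 1 = diagonal).
    alice_bits = random_bits(n_qubits)
    alice_bases = random_bits(n_qubits)
    channel = list(zip(alice_bits, alice_bases))

    # An eavesdropper must measure in a guessed basis; a wrong guess
    # randomises the bit, which is what makes interception detectable.
    if eavesdrop:
        tapped = []
        for bit, basis in channel:
            eve_basis = random.randint(0, 1)
            eve_bit = bit if eve_basis == basis else random.randint(0, 1)
            tapped.append((eve_bit, eve_basis))
        channel = tapped

    # Bob measures each arriving qubit in his own random basis.
    bob_bases = random_bits(n_qubits)
    bob_bits = [bit if bob_basis == basis else random.randint(0, 1)
                for (bit, basis), bob_basis in zip(channel, bob_bases)]

    # Keep only positions where Alice's and Bob's bases matched (sifting),
    # then compare the kept bits (in practice only a sample is revealed).
    sifted = [(a, b) for a, b, x, y in zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
    return sum(1 for a, b in sifted if a != b) / len(sifted)

print("sifted-key error rate, quiet channel:     %.3f" % bb84_error_rate())
print("sifted-key error rate, with eavesdropper: %.3f" % bb84_error_rate(eavesdrop=True))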


QUANTUM COMPUTING: Another practical breakthrough made possible by the study of quantum mechanics is the rise of quantum computers. A handful of laboratories already claim to have built them, and at least one company is already selling them. At $10 million each, D-Wave Systems in Canada offers what it claims is the world’s first commercially available quantum computer. Google has purchased one of the systems, as have a handful of U.S. government agencies. The few operating quantum computers today are perhaps comparable to the earliest electronic computers of the 1950s: they are incredibly bulky, extremely expensive, need huge resources to keep them cooled and can only be operated by a team of scientists. However, just as advances in manufacturing saw those room-sized behemoths of the 50s scaled down to hand-held calculators less than 30 years later, the size and price of quantum computers can be expected to diminish greatly over time. The computing capabilities of quantum computers promise to be revolutionary in scope.

While many breakthroughs have been made in the actual manufacturing process, the biggest hurdle to practical use at this point seems to be designing a type of operating system that makes the vast amounts of data these machines can process usable for humans. Once this hurdle has been overcome, though, the capabilities of quantum computing will far surpass those of the traditional supercomputers that currently occupy entire floors of office space. Without a doubt, the earliest users of quantum computing will put it to work at what it does best: breaking codes that are almost impossible to crack with today’s computers. Once this is accomplished – which could be in a very short time by human standards – all current forms of electronic encryption will become as transparent to these devices as plain text.

Encryption keys that would theoretically take all of the planet’s computers combined thousands of years to break could be cracked in mere seconds. In fact, any data encrypted using traditional cryptography today can theoretically be captured, stored and decoded at a later date once quantum computing becomes more widespread. A GLOBAL QUANTUM NETWORK: Transferring quantum information from one site to another (teleportation) has been accomplished not only via fibre optics but also wirelessly – and, experimentally thus far, via satellite. China has already announced plans to launch the world’s first satellite with quantum teleportation capabilities in 2016. Much as with the early internet, government agencies, military contractors, labs and financial institutions will most likely form the earliest links of a global quantum network.

However, with advances in quantum computing running neck-and-neck with the practical deployment of quantum-hybrid networks, rapid conversion of existing network capacity will be vital, both for reasons of national and international security and for the integrity of the world’s banking and financial institutions. Simply put, any entity that combines eavesdropping or wiretapping capabilities with the power of a fully functional quantum computer will be able to intercept and decode any form of encryption in use today – except for traffic from the few entities already utilizing quantum encryption. AHEAD OF THE CURVE: Although quantum teleportation of data over distances matching today’s largest networks is not yet possible, existing fibre optic networks should eventually be able to be retrofitted as quantum-hybrid networks, most likely within the coming decade. The demand for these transport services will almost certainly far outstrip supply once it begins. Eventually, all e-commerce will necessarily depend upon it. As quantum computers move from well-funded laboratories to the desktop, quantum-hybrid networks will begin to disappear, replaced by purely quantum computing networks. However, existing fibre optic lines may still be able to make that transition as well. Given the huge investment that most governments and private entities have made in traditional fibre optics, most labs developing teleportation technology today are focusing their efforts on leveraging existing infrastructure. While the electronics of existing networks will need to be upgraded for quantum teleportation, existing undersea and terrestrial fibre optic cables should continue to carry traffic through the quantum revolution and beyond.


With over 30 years’ experience helping our clients serve their customers, CSG International supports the majority of the top 100 global communications service providers, including leaders in fixed, mobile and next‐generation networks such as AT&T, Comcast, Orange, Reliance, SingTel Optus, Spark New Zealand, Telefonica, Time Warner Cable, T‐Mobile, Verizon, Vivo and Vodafone. More than ever, innovation is needed to support the digital businesses that CSPs are becoming. Trust CSG to provide the infrastructure, applications and BSS operations that will make your ideas a reality.

www.csgi.com

