CommsDay magazine Nov 2012

Page 1

October/November 2012 • Published by Decisive • A CommsDay publication


Oracle Communications

100 of the 100

Top Telcos

Get Better Results With Oracle

oracle.com/goto/communications

Copyright © 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


COMMSDAY MAGAZINE

ABOUT COMMSDAY MAGAZINE Mail: PO Box A191 Sydney South NSW 1235 AUSTRALIA. Fax: +612 9261 5434 Internet: www.commsday.com COMPLIMENTARY FOR ALL COMMSDAY SUBSCRIBERS AND CUSTOMERS.

First

Published up to 10 times annually. CONTRIBUTIONS ARE WELCOME

6 The new Oceania cable boom
7 Mobile payments on the march: why all eyes are on India

EDITOR: Tony Chan at Tony@commsdaymail.com GROUP EDITOR: Petroc Wilton FOUNDER: Grahame Lynch

10 What’s driving rural upstream traffic in the US

WRITERS: Geoff Long, David Edwards, William Van Hefner, Grahame Lynch, Dave Burstein, Bob Fonow

Cover Story

ADVERTISING INQUIRIES: Sally Lloyd at sally@commsdaymail.com

12 Made in China: Why the future of the mobile phone industry may come from Asia’s giant

EVENT SPONSORSHIP: Veronica Kennedy-Good at veronica@mindsharecomms.com.au ALL CONTENTS OF THIS PUBLICATION ARE COPYRIGHT. ALL RIGHTS RESERVED CommsDay is published by Decisive Publishing, 4/276 Pitt St, Sydney, Australia 2000 ACN 13 065 084 960

Features
16 An interview with Martin Geddes
20 How Superfast Cornwall is taking one of Britain’s most rural provinces into the fibre age
26 Why small cells are a big deal
29 Defining software defined networking: an extended report
33 A candid insight into Huawei from an internal change agent
36 Cashing in on white space


BOOK NOW: VICTORIA’S LEADING ANNUAL TELECOM CONFERENCE Langham Hotel, Melbourne, Australia Tuesday 9 October & Wednesday 10 October 2012 PLATINUM SPONSOR

Shadow comms minister Malcolm Turnbull (CONFIRMED)

KEYNOTE: ACMA deputy chair Richard Bean

Alcatel-Lucent researcher & author Allison Cerra

KEYNOTE: Optus MD Customer Vicki Brady

Telstra chief sustainability officer Tim O’Leary

NBN Co Head of Product Management & Industry Relations Jim Hassell

NEXTDC chief executive officer Craig Scroggie

Symbio Networks MD Rene Sugo

Telstra Wholesale executive director, sales Glenn Osborne

Comms Alliance CEO John Stanton

Optus MD Wholesale & Satellite Rob Parcell

iiNet chief technology officer John Lindsay

OzHub chairman Matt Healy on The Rise of the Cloud Network

Australia Post GM telecom Maha Krishnapillai

Vodafone GM industry strategy & public policy Matthew Lobb

GOLD SPONSORS

COCKTAIL SPONSOR

Also hear industry updates from: Telecommunications Industry Ombudsman : Independent Telecom Adjudicator : Institute for a Broadband Enabled Society : ACCAN : Australian Communications & Media Authority : Comms Alliance

Technology briefings from:

SESSION SPONSORS

Brocade : Qualcomm : Oracle : Overture Networks : FTTH Council Asia Pac : Cambium Networks : Polyfone : The Billing Bureau : Enex

The future of…
Voice by Ovum analyst David Kennedy
Telework by Cisco GM government affairs & policy Tim Fawcett
Convergence Regulation by Norton Rose partners Martyn Taylor & Nick Abrahams
The Subscription Economy by Servcorp COO Marcus Moufarrige
The NBN 1st rollout by Market Clarity CEO Shara Evans

The Great Debate

REFRESHMENT SPONSOR

With Internode founder Simon Hackett, Optus GM, government & corporate affairs Clare Gill, commentator Kevin Morgan, Alcatel-Lucent MD Sean O’Halloran & more

Great Networking Opportunities With two catered lunches, Tuesday night cocktails & refreshment breaks

40 expert speakers 2 days: 8 sessions Great knowledge & contacts

SUPPORTING SPONSORS


TUESDAY 9 OCTOBER

WEDNESDAY 10 OCTOBER

KEYNOTE SESSION 9am Shadow communications minister Malcolm Turnbull 9.25am Alcatel-Lucent researcher and author Allison Cerra

KEYNOTE SESSION 9am Market Clarity analyst Shara Evans “Demographics of the 1st NBN rollout areas”

"Identity Shift: Where Identity Meets Technology in the Networked-Community Age" 9.50am Telstra Wholesale exec director, sales Glenn Osborne 10.15am Optus MD Customer Vicki Brady 10.40 REFRESHMENTS sponsored by Overture Networks MORNING SESSION sponsored by Brocade 11.00 Telstra chief sustainability officer Tim O’Leary 11.25 NextDC CEO Craig Scroggie 11.50 Ovum analyst David Kennedy “The End of Voice?” 12.10 Servcorp COO Marcus Moufarrige

“The Subscription Economy” 12.30 Brocade principal systems engineer David White

“Accelerate network transformation to deliver new services with Ethernet Fabric and Software-Defined Networking “ 12.55 Lunch sponsored by Telstra Wholesale 1.55 FTTH Council AP VP & Senko Advanced Components R&D director Bernard Lee 2.15 OzHub chairman Matt Healy BILLING, OSS & CUSTOMER SERVICE FOCUS sponsored by Symbio Networks 2.35 Symbio CEO Rene Sugo 3.00 Oracle senior director Rob Gashi 3.20 Afternoon tea sponsored by Overture Networks REGULATORY & POLICY FOCUS 3.35 Norton Rose partners Nick Abrahams & Martyn Taylor

“The Impact of the Convergence Review” 3.55 Independent Telecom Adjudicator Rob Nicholls 4.15 Telecommunications Industry Ombudsman Simon Cohen 4.35 ACCAN CEO Teresa Corbin 4.55 THE GREAT DEBATE: Optus GM, Government and Corporate Affairs Clare Gill, Internode founder Simon Hackett, telecom analyst Kevin Morgan, Alcatel-Lucent Australia MD Sean O’Halloran and more 5.25 Cocktails sponsored by NEXTDC

9.25 Comms Alliance CEO John Stanton 9.50am ACMA deputy chairman Richard Bean 10.15 NBN Co head of product management and industry relations Jim Hassell 10.40 Overture Networks Asia Pacific MD Graeme Bellis 11.00 REFRESHMENTS sponsored by Overture Networks MORNING SESSION 11.20 Qualcomm VP SE Asia and Pacific John Stefanac 11.40 Optus MD Wholesale & Satellite Rob Parcell 12.00 IBES executive director Kate Cornick 12.20 Australia Post GM telecom products and services Maha Krishnapillai 12.40 iiNet CTO John Lindsay 1.05 Lunch sponsored by Qualcomm WIRELESS & MOBILE BROADBAND FOCUS sponsored by Broadcast Australia 1.45 Broadcast Australia strategy & corporate development director Brett Savill “Snapshot from the Big Apple: update on the

world’s largest DAS project in the NY Subway” 2.05 Cambium Networks VP Asia sales & marketing Roy Wittert

“Fixed Wireless: Competitor or Complement to Mobile Broadband?” 2.25 Polyfone CEO Paul Wallace "Ethernet over wireless" 2.45 The Billing Bureau MD David Werdiger 3.05 Afternoon tea sponsored by Overture CLOSING PLENARY: APPLYING POLICY TO PRACTICE 3.25 Vodafone GM Industry Strategy and Public Policy Matthew Lobb “Competition issues in the telecom sector” 3.45 Enex TestLab MD Matt Tett “Internet Regulation - a decade

of content filter testing” 4.05 Cisco GM government affairs & policy Tim Fawcett

“The Telework Revolution” 4.25 Close

Yes, I would like to attend the CommsDay Melbourne Congress, October 9 and 10 LOCATION: Langham Hotel, Southbank, Melbourne [ ] One registration at $1097 [ ] Three for the price of two $A2184 [ ] RSVP for 9 October 7.45 breakfast with Alcatel-Lucent’s Allison Cerra and Petroc Wilton (no extra charge)

Name ____________________________________ Company __________________________________ Phone No ________________________ Email _______________________________________________ Address _______________________________________________________________________________ ______________________________________________________________ Postcode _______________ Names of other delegates _________________________________________________________________ I want to pay by: [ ] Mastercard [ ] Visa [ ] Amex [ ] Diners [ ] Invoice me Name on card ______________________________________________________ Card Number ______________________________________________________ Expiration Date _______________________ Signature _____________________

TO REGISTER:
• Fax this form to +612 9261 5434
• Phone Sally Lloyd at +61 2 9261 5435
• Mail to PO Box A191 Sydney South NSW 1235
• Register online at http://tinyurl.com/7l2c9b3


FIRST

The new Oceania cable boom

While the African subsea cable boom seems to be over, a new one has welled up in Oceania, consisting of Australia, New Zealand and the Pacific Islands. In the past month, three new cable projects have been announced in the region – although in the same period an existing project has gone belly up, and another is rumoured to soon follow. Activity in the South Pacific picked up pace after the original challenger in the space, Pacific Fibre, abandoned its plan to build a state-of-the-art, ultra-low latency system between New Zealand and the US. Almost in the same week as PacFibre’s withdrawal, the Hawaiki cable – a new trans-Pacific system – came out of hiding, having been in the works for some months. The proposed Hawaiki cable appears to combine ideas from the failed Pacific Fibre and SPIN cable proposals. In fact, its leadership is the same team that was behind SPIN. Instead of connecting directly from New Zealand to the US as Pacific Fibre planned, Hawaiki (pictured) will aim to connect many of the nations of the South Pacific. But unlike SPIN, which only went as far as Tahiti, the system will connect into Hawaii, where it can be hooked up with other trans-Pacific systems. According to initial company information, Hawaiki will be a two fibre-pair system

with a design capacity of 8Tbps. The main landing stations will be Auckland, Sydney, and Hawaii, with branching units that can connect Norfolk Island; Noumea, New Caledonia; Port Vila, Vanuatu; Suva, Fiji; Wallis & Futuna; Apia, Samoa; and Pago Pago, American Samoa. There is also hope the Cook Islands may become involved. There has to date been less detail supplied on how the cable will be funded, an issue that ultimately defeated both Pacific Fibre and SPIN as well as a number of lesser inter-island cable system proposals. In order to save costs and maximise flexibility, Hawaiki will use OADM BUs, or optical add-drop multiplexer branching units, which give it a cost-effective and flexible platform to connect up branching destinations. OADM BUs

allow cable builders to put them in first and connect up additional destinations later.

AUSTRALIA-SG

Weeks later, on the other side of Australia, two prominent telecom entrepreneurs announced another new cable, this time linking Perth, Indonesia, and Singapore. The Ted Pretty and Bevan Slattery-backed Australia Indonesia Singapore Cable is launching into a crowded market already occupied by two proposed projects: one backed by Leightons, and the other by Huawei Marine. So far, there is little detail on the configuration of the new system, believed to have been conceived on the grounds that at least one of the two existing planned systems would not get built.


Australia to Singapore is traditionally a thin route served by a single link – SEA-ME-WE3. But there are good reasons to believe this will change. The giant Square Kilometre Array radio telescope planned for the Australian outback will generate tremendous amounts of data for transmission to international researchers, while both Singapore and Hong Kong are developing as significant regional hubs for content delivery and cloud computing and are set to rival the US West Coast as major global centres for traffic handoff. The US is also losing attraction as a landing place for new cables due to the possible introduction of a 15.7% universal service levy on all systems that traverse American territory.

SOLOMON ISLANDS

Lastly, news has emerged that a previously planned system to connect the Solomon Islands has advanced to the funding stage. The private cable system, being built by the Solomons Oceanic Cable Company, features a link out of Guadalcanal to the PPC-1 system for international connectivity to Sydney and Guam, and two domestic spurs linking Guadalcanal with Malaita, landing in Auki, and the Western Province, landing in Noro. According to reports in the Solomon Star, Solomon Islands minister for finance and treasury Rick Houenipwela told parliament that the government expects to receive some US$7.5 million in grant funding and US$10.5 million in loans from the ADB, which it will then lend to the cable company. Tony Chan

Mobile payments on the march

Apple’s snub of near-field communications shows that there is still no clear technology winner when it comes to mobile payments. But watch out for developments in India.

One of the most talked-about omissions in the feature list of the new iPhone 5 was direct support for near-field communications for mobile payments. A number of analysts suggested it was a sign that Apple was starting to lag behind in the smartphone arms race, but could the lack of NFC also signal that maybe Apple is ahead of the pack again? After all, the whole area of mobile payments is changing shape so rapidly that there is no clear technology standard. Perhaps what Apple is really doing is taking a punt that NFC will be bypassed in the mobile payment revolution? Paying for things on a mobile phone has been touted as the “next big thing” for a number of years now, and NFC was widely tipped as a technology leader – particularly when it was backed by Google. The search engine specialist has launched its Google Wallet, which uses NFC to turn a smartphone into a mobile payment device – an Android-based smartphone, that is. Apple’s decision not to put an NFC chip in the iPhone 5 has now signalled that it is less inclined to go the NFC route. And to be fair, there are a lot of options out there, while Apple is not the first to suggest

that customers are perhaps after something more flexible. Today’s consumer wants on-the-spot gratification, and NFC payment systems still require them to go to a counter and wave their mobile wallet in front of a terminal. Peter Williams, head of Deloitte’s Centre for the Edge and chair of the Deloitte Innovation Council, told CommsDay that any “slow” solution was unlikely to survive, including perhaps upcoming NFC technologies. “Today’s consumers are more mobile in their transactions and now have a wealth of options available regarding where, when and how they make purchasing decisions. The balance of power has shifted from the traditional retailer to the consumer, and the success of online retail trailblazed this,” Williams said. He noted that the benefits NFC brings are quite minor in the whole scheme of things, and consumers are more likely to prefer to use their smartphone to make purchases on the spot in locations away from a store counter. Already there are a number of newcomers in the payments market that are allowing them to do that.

PAYMENT PLAYERS

When it comes to mobile payments, there are a number of sectors of the market all clamouring for a piece of the pie. Device makers like Apple and Google have indicated they want a slice of the action, as do the traditional payment players such as banks and card providers. Telcos have also wanted in for some time, fearing that they could again be left to provide the pipe but earn little else from the billions in transactions that are expected. How telcos fare in the mobile payments space varies from market to market: in some cases they have proved a dominant force, while in others they are being left behind. Interestingly, it seems that the places where telcos do best in terms of mobile payments are countries where other financial infrastructure is lagging. One of the most successful carrier implementations of mobile payments is the M-Pesa system created by Kenyan mobile operator Safaricom with help from Vodafone. Its service provides an e-wallet on a mobile phone that can perform many of the same functions as a traditional bank: transfers between users, transfers between a business and consumers, and even cash withdrawals at designated locations. M-Pesa boasts more than 17 million users in Kenya and has also expanded into Tanzania, Afghanistan, South Africa and India with varying degrees of success. However, in many developed countries carrier m-payment systems have been stymied by regulators, with many financial regulators of the view that payment systems need to be delivered by banks. In many countries, telcos would first need to acquire a banking licence before they could offer the services that M-Pesa and the like provide. Mobile carriers don’t want to be left carrying the transaction on behalf of someone else, however. In the US the three dominant mobile players

– AT&T, Verizon Wireless and T-Mobile – have formed a joint venture called Isis to provide a mobile payment system. Isis is also based on NFC technology, and as well as the carrier involvement it has signed up payment specialists including JPMorgan Chase, Capital One, American Express and Barclaycard. However, the system has already been delayed – it was supposed to launch in the first half of 2012 – and it could be swamped before it starts by more nimble players that are making waves.

IT’S HIP TO BE SQUARE

One of the most talked-about new payment systems is Square – a credit card-based payment solution built around smartphones, tablets, and a small mag-stripe reader that plugs into an audio socket. The company was founded by some of the people who created Twitter and has already processed more than $5 billion in payments on an annual basis, according to some estimates. In September this year it raised a further US$200 million in funding, which would value the company at more than $3 billion. Also firmly on the “in” list is a mobile option from online payment specialist PayPal. PayPal is a powerhouse in the online world thanks in part to its involvement with parent company eBay. But more recently it has announced its mobile payment intentions through an alliance with Discover Financial Services, one of the biggest card issuers in the US. And yet another name being talked up in the mobile payments space is coupon specialist Groupon. It has created its own mobile payments system, called Groupon Payments, that specifically targets both PayPal and Square as competitors. And its main weapon will be substantially lower transaction costs. Whether any of these systems take off in Asia remains to be seen, although no doubt they would love the chance to operate in some of the most populated and technology-hungry markets in the world. Which is why an announcement by the Reserve Bank of India that it wants to roll out mobile payments as a way of encouraging financial inclusion will have the world’s would-be providers sitting up with interest. India is already trail-blazing in the area of mobile payments. The National Payments Corporation of India, which runs the country’s ATM network, has also been working with banks on its own mobile payments system. Dubbed the Inter-Bank Mobile Payment Service, it already offers a range of services in the B2B space, and in September this year announced its first IMPS P2P (person-to-person) service. Through the service, Indian customers can now use their mobile phones to pay for everything from railway tickets to bill payments and online shopping.


Are you a network operator looking for proven IP communications solutions? Symbio Networks is Australia’s largest supplier of VoIP Managed services and Wholesale carriage. We give you the ability to scale and evolve to make the most of this fast growing market, including: VoIP Managed Services

Wholesale Carriage

Call Termination for Voice & Fax (inc. T38) Call Origination with LNP Australian & International DID Hosting Special Numbers (13,1300, 1800) Hosted SIP End-points Hosted SIP Trunks Virtual Fax

Symbio also provides Access Services with IP Transit and is a certified Wholesale Service provider of NBN Co. Why Symbio?

SIZE CAPACITY RELIABILITY

Australia’s 5th largest voice interconnected network Carries over 2 Billion minutes of billed voice every year 99.99% uptime guarantee

Find out more Web: www.symbionetworks.com Email: wholesale@symbionetworks.com NBN, NBN Co, and Powered by the NBN are trade marks of NBN Co Limited and used under licence.


According to the Economist Intelligence Unit, only a tenth of India’s 630,000 villages have a bank branch. However, over the past two years, RBI has granted 17 wallet licences to mobile carriers. Perhaps if we want to see the future of mobile payments, India is the country to watch. Geoff Long

The rural upstream phenomenon

A new snapshot report of rural US internet traffic has shown a surprise surge in upstream traffic from business users, who accounted for the largest portion of traffic going from end-users to the internet. According to the second quarter 2012 report by Calix, business traffic now takes up 30% of all upstream traffic, beating out internet browsing (19%), telephony and communications (17%) and even video (14%). “The advent of high speed, reliable broadband connections has resulted in the home becoming a virtual workplace – with a rise in telecommuting as well as an extension of the work environment into the home after normal work hours,” said Calix, the company behind Flow Analyze, a software-as-a-service tool for monitoring network utilization. “This has resulted in a significant rise in business-related internet traffic.” According to Calix, this category of traffic, characterised by its unique security and authentication applications like virtual private networks, was also the third largest contributing category to downstream internet traffic (7%).

In total rural US users consumed an average of 7.1GB of upstream traffic for the quarter, while downstream traffic reached 50.3GB. Not surprisingly, video dominated on the downstream, accounting for 62% of downloads, with users with fibre connections outpacing their copper-based counterparts. “The factor most strongly associated with a high volume of video streaming was a fibre network,” the report said. “Service providers whose networks were entirely fibre saw nearly 69% of their downstream internet traffic composed of video streaming – not surprising when you consider that fibre’s available bandwidth typically translates into a superior video viewing experience.”

“As many as 20% of the users on fibre and 13% on copper networks rarely used the internet, consuming less than 1GB of bandwidth per month.”

FIBRE HOGS

Across all applications, users on fibre connections tended to eat up more bandwidth. For the second quarter of 2012, service providers that delivered broadband services exclusively over fibre saw their subscriber endpoints generate 87% more downstream traffic and nearly 10% more upstream traffic than copper-based subscribers. The percentage of heavy bandwidth users on both networks – those consuming more than 100GB of data per month – was fairly consistent across both network media.

While 13% of fibre connections qualified for this category, 12% of those on copper networks did so as well. However, beyond that 100GB point, fibre’s faster connections allowed users to download far more. As a result, fibre users in the over-100GB club accounted for a staggering 70% of overall fibre user downstream traffic, while their copper-based peers only managed to eat up 31% of downstream bandwidth on copper networks. For the quarter, 44% of fibre-based endpoints and 38% of copper-based subscribers consumed more than 20GB of traffic. These types of users accounted for 94% and 93% of traffic on their respective networks. Interestingly, as many as 20% of the users on fibre and 13% on copper networks rarely used the internet, consuming less than 1GB of bandwidth per month. Another surprise was the fact that what was once the biggest hog of internet bandwidth, file sharing, appeared to now be generating only negligible amounts of traffic from rural US residents. According to the report, applications like BitTorrent, LimeWire, WinMX and Kazaa accounted for 1% of the network traffic downstream and 5% upstream. Likewise, social media only accounted for 1% of download and 2% of upload traffic. “Despite the enormous popularity of such services as Facebook and Twitter, these services consume relatively little bandwidth (this may change rapidly as video becomes more integrated into social media),” Calix said. Tony Chan



China: the future of mobile?

It’s been a few years in the making, but the Chinese handset industry finally looks ready to assert itself, for better or for worse. While recent news of a blatant Chinese counterfeiter trying to get the jump on Apple’s design with a YouTube video is painting Chinese handsets in a bad light, the reality is that plenty of legitimate Chinese companies are now coming out of the woodwork with their own brands, platforms, innovations – and maybe even a Chinese version of Steve Jobs, Tony Chan writes.

Something strange happened on the way to the launch of the iPhone 5. A Hong Kong-based company named Goophone uploaded a video to YouTube in a bid to make a claim on the design of its i5 smartphone, which looked a lot like the then-unreleased iPhone 5. The video, which did the rounds online and through social networks, contained some veiled threats against Apple – claiming a first-to-market advantage with its design. The gist of the video was that, as the first company to show off the design, Goophone had “priority” to it. On top of that, Goophone reportedly claimed it had actually been granted a patent for the design in China, although the reports were unsubstantiated. As it turned out, the iPhone 5 did look exactly like the Android-powered Goophone i5. Obviously, the i5 is no match for the iPhone 5 in terms of features and functionality, but many will be fooled by its looks. If Apple is adored for its design, then Goophone has definitely grabbed some of the spotlight. The Goophone device even has an Android theme that precisely mimics the look of iOS.

So far, nothing has come of Goophone’s video, except lots of media coverage both in China and overseas of the product, and heated speculation on whether Goophone has the right to actually sue Apple, or block iPhone 5 sales in China. Reactions from the online gadget media community ranged from quiet amusement to blatant disbelief, but the story did receive at least one serious article in the San Francisco Chronicle, which highlighted Goophone’s patent claim as a growing form of trademark squatting in China.

FROM COUNTERFEITS

No one, it seems, is taking Goophone seriously as a challenger in the handset space, and probably no one should. Its credibility is suspect at best. A quick browse of its website reveals clones of all major handset brands and their flagship models, including copies of Samsung’s Galaxy SIII and Note, as well as the HTC One. On top of that, multiple videos on YouTube showing users unboxing parcels supposedly containing the Goophone Y5 – the company’s iPhone 4S clone – actually showed an exact replica of the Apple product, complete with Apple’s logo on the back. If those videos are taken as evidence, then Goophone appears to be actually counterfeiting iPhones, which would mean it is breaking the law – in any country.

And Goophone is far from the only clone phone maker around. A quick scan of an online retailer of Android phones yields dozens of clones of major brands running the Google operating system. One manufacturer, Drois, lists 30 models with names like Gooapple, Touch, Desire, and Sensation – all clones of popular brands. But while counterfeiting represents the shady side of China’s handset sector, what gets put inside the phones shows another side – namely the ability of Chinese manufacturers to produce some impressive products at very low price points.

For example, the i5 is equipped with a 4-inch display, a quad-core 1.4GHz Tegra 3 processor, 1GB of RAM, two cameras (8MP rear, 1.3MP front), Android 4.0, and quad-band GSM and dual-band 3G support – specs that can stand up against most mid-tier models on the market. More importantly, the Goophone i5 sells for about US$150 in China – anywhere between US$200 and US$300 including shipping worldwide. But counterfeit handsets are only half the story.

TO THE REAL DEAL

A mature manufacturing ecosystem, a dash of entrepreneurial spirit, and a healthy amount of creativity have given birth to a vibrant domestic handset market – and some players who are making international headlines. Yes, there are the established big names, like Huawei and ZTE, who have risen to the top of the global handset market, but there is also a growing list of other homegrown brands making their own waves. One of the hottest Chinese brands today is Xiaomi, a company founded only in 2010, whose founder Lei Jun was recently interviewed by Forbes. Xiaomi is by no means a big name yet, even inside its home market of China; as of September 2012, the company had sold a total of 3.5 million phones since it launched its first product in the fall of 2011 – not much considering Apple had 2 million orders for the iPhone 5 in the first 24 hours. But there is no denying that Xiaomi is on the up and up.

At a recent launch event for its second model, simply named the Xiaomi Phone 2, the company attracted more than 1,000 attendees, many of whom paid US$31 for a ticket, with all proceeds going to charity. Apparently, Apple has done the same for Steve Jobs’ presentations in the past. So it’s no surprise that both local and international media are drawing close comparisons between Lei and Jobs, no doubt fuelled by the fact that Lei likes to do his presentations wearing a black polo (not turtleneck) and jeans. But it is definitely more than his dress sense that is capturing fans – and investors, who have reportedly piled US$500 million into the company, valuing it at some US$4 billion.

XIAOMI

As a handset company, Xiaomi seems to have all the pieces in place. For starters, it is offering a compelling hardware package that features a 1.5GHz quad-core processor from Qualcomm together with a dedicated graphics chip, which the company claims delivers graphics capability equivalent to the original Xbox. It also claims to offer the highest pixel density on the market. In addition, all the usual features of a high-end model are there, including cameras, HD video recording and 3G. As a bonus, Xiaomi is now offering a choice of either a 2000mAh or 3000mAh battery. More importantly, Xiaomi has a solid software strategy in place, traditionally a weak spot for Chinese handset makers. While its phones are based on


Android, they use Xiaomi’s own MIUI user interface. MIUI is a heavily modified version of Android that some say feels like iOS or Samsung’s TouchWiz UI. For its latest model, it introduced multi-screen themes, including one built by Angry Birds developer Rovio. With this feature, the phone’s screen acts like a window onto a much wider scene, such as a real-life desktop – or in the case of Angry Birds, a semi-playable screen from the game. Users navigate to different sections of the larger scene to access different applications. In the case of the Angry Birds theme, there is even a launcher for launching birds. Another innovative feature of the interface is an option for making the icons bigger on the screen – a simple yet no doubt useful trick as the smartphone-using population begins to age. There is no guarantee of success for Xiaomi. Its principal competitive differentiation in the marketplace – besides the celebrity status of Lei, who has 4 million micro-blog followers – seems to be its price point. Despite its high-end hardware configuration, it sells its phones at about US$320, less than half the price of the iPhone 4S (the latest model available inside China). Lei even went as far as to tell Forbes that the company loses money on every phone it sells, claiming that its business model is based on getting users to pay for services. At this point, there is no indication it has managed to build an ecosystem like Apple’s, nor that anyone is making purchases

from their Xiaomi phones. But if anyone can do it, it might be Lei, who has already founded and sold several online companies – including an online retailer he sold to Amazon for a cool US$75 million.

DOTCOM GOES MOBILE

So far, as a wholly integrated handset firm with both hardware and software strengths, Xiaomi is almost unique among Chinese handset makers – aside, of course, from Huawei and ZTE, which have deep pockets for software development and global scale for distribution. But there is plenty of other activity in the mobile space in China. Like the dotcom era,

which gave birth to huge Chinese online brands like Baidu, Sina, Alibaba and Tencent, the mobile internet is increasingly becoming a hot investment area. Already, there are at least two domestically developed operating systems, from Baidu and Alibaba, both built on Android but each promoted as its own platform, promising optimised delivery of each company’s own set of mostly cloud-based services. The success of these nascent platforms is still very much up in the air. Already, Alibaba’s

OS, Aliyun, has experienced a major setback. Its first international partner, Acer, had to cancel the launch of an Aliyun-powered handset after Google threatened to pull its support for Acer’s other Android-powered devices. Google’s rationale was that Aliyun took the work done on Android by the Open Handset Alliance (OHA) and claimed it as its own; Acer, as a member of the OHA, should not ship non-compatible Android devices, Google argued. In this case, the argument can go both ways, but what is apparent is that money is going into developing the mobile internet in China. Aliyun might have fumbled this round, but there are plenty of opportunities with other domestic handset brands. Can Aliyun get it right the next time, or the time after that? Quite possibly. One thing is for sure though: if Aliyun can’t, then somebody else surely will.



Quality not quantity

Petroc Wilton talks to UK industry thought leader Martin Geddes on why the bulk bandwidth charging model won’t endure

Switching to a quality-centric view of networks, away from the traditional focus on larger and larger quantities of bandwidth, is the single biggest challenge the telco industry faces, according to British telecoms strategy consultant Martin Geddes. CommsDay spoke to Geddes in the UK about his argument that polyservice networks are a “mathematical inevitability” and what operators and regulators can do to prepare for the paradigm shift.

CommsDay: From reading some of your work, it seems that the traditional model of throwing more bandwidth at applications may not scale much further.

Martin Geddes: It suffers from a fundamental problem: reducing packet serialization time, the bandwidth model, ultimately can't dig you out of

contention effects. As you multiplex more applications per device, more devices per user, and more users per household, and you also architect your network so that you have less isolation between households… isolation between the flows is going down, and contention effects are going up. And the ability for any one subscriber, or subscribing business, to contend with other flows, is waning. And the very thing that TCP requires to work, which is space between loss and delay, is likely to erode. Some of my colleagues have been working inside Tier 1 operators, and the data very clearly shows that the outliers of loss and delay, which are what cause application failures, are not monotonically correlated with average loads over, say, fifteen-minute periods. In other words, even at low apparent loads, you get application failures occurring. Which means you can't over-provision your way out of network failure. And they're happening; you get these little flash-flood effects. The very nature of TCP makes the network unstable. TCP is designed to have the least possible level of stability,


because it creates a control loop that's as wide as possible. And not surprisingly, when you break the basics of control theory, bad things happen! As you saturate your network, you get increasingly rapid variation in loss and delay, which makes TCP start to oscillate, which makes the whole network start to oscillate, which makes it career off the cliff. And you see it happening in your networks.

CD: So if I'm an operator – particularly in these days when capex has become a very dirty word and throwing more bandwidth at problems is not free – what else can I do?

MG: To solve the problem, you need to step outside the current framing of how [operators] see their business. You've had this 20-30 year battle between the 'circuit Catholics' and the 'packet Protestants', where the circuit Catholics have said 'reserve capacity on the path, and you get a deterministic outcome', and the packet Protestants say 'yeah, but you don't get any of the benefits of multiplexing if you do that… multiplex everything together and damn the quality! Quality problems – add more capacity!' And the resolution to this is Zen – you have to step outside the whole thing to see what's inside, which is that both of those [points of view] are locked into a bandwidth

idea of the universe. But the real resource the network has is contention… what you have to manage in the network is not bandwidth, but contention. And you flip the whole problem on its head by saying 'rather than how do I allocate work to queues, how do I allocate the loss and delay that comes with contention?' And when you think about the network that way, you can say 'ok – in which network elements, or where along the path, is the contention actually occurring?' You isolate where the problem is. And then 'how can I manage the contention at that point using existing quality mechanisms, or new, clever maths, to do it better?' There's a very different way of thinking about networks that exists, and is possible, and I've been working with some mathematicians in the UK who've been using this stuff in Tier 1 fixed and mobile operators – to somewhat spectacular results. It's a different way of thinking about the problem. And what it takes you towards, ultimately, is from a mono-service to a polyservice network. Today, when people talk about multi-service networks, they're really multiple mono-service networks; you get voice, video and data, but all three of those come with a fixed quality. A polyservice network is able to allocate different statistical bounds on loss and delay to every flow, and trade across between all of them. That's the difference, and it's like the difference between black and white, or sepia, and colour. The internet's monochrome; it offers a single quality level. Which means that all the traffic has to carry the cost structure of the most demanding applications that you want to work – which ends up being bonkers.

“Today, when people talk about multi-service networks, they're really multiple mono-service networks; you get voice, video and data, but all three of those come with a fixed quality.”

CD: Can the techniques you're talking about be applied to existing infrastructure? Or is there an element of hardware refresh needed to make them work?

MG: There's two parts to the problem here, one of which is using the quality-type thinking to begin to reason about your costs and revenues – so it's being able to start to express that problem in a language that makes it practical. Then you start to isolate where the problem is today; is it in the backhaul, in the middle mile, in the device? What is causing the accumulation of loss and delay along the path? Secondly, how do you manage it better? We're talking fundamentally new algorithms which could easily be mistaken


for QoS, but it's actually a different category of managing traffic. QoS is allocating priority; this is about allocating the 'holes': how do I trade loss and delay between flows? You can take any subsequence, any path along the network, and pre-contend the traffic, so it doesn't self-contend downstream. It's very similar to the theory of constraints in traditional manufacturing.

CD: Could you engineer this thinking into new national broadband networks?

MG: Yes. But I think there's a more basic problem, which is: what is the nature of supply and demand on broadband networks, and how do you match the two together? Historically, the way the telecom industry has worked is that it went out and built supply, as with the PSTN, at a fixed quality, and then it set the marketing department free and said 'go out and hunt for demand'. Now, that works in the PSTN, because if you have too much demand, you'll get a busy signal – which gives immediate feedback that there is too much demand. But on the internet, the moment you start finding demand, it starts to destroy the properties of the supply; the moment you go from the empty network to the first customer and the second customer, the quality of delivery goes down monotonically with more customers.

And you get to a point where the quality degrades enough that you feel obliged to invest in more capacity. So the industry, as long as it has a single-service network, has a problem of declining quality. 2G, 3G, HSPA+, 4G… the time you've got to recoup that investment goes down. So the transition that needs to be made is from finding demand… to finding supply of the right quality, and quantity, to match it. Which means you need to start to understand what people actually want to achieve with networks, and what is fitness for purpose – what are the bounds in loss and delay that different applications will require, and what is the hierarchy of failure that the customers want to have implemented. There's a flip in the nature of the business – but the discussions around national broadband networks are entirely anchored in supply, not in what the contract between supply and demand is. The nature of the debate around these things goes wrong because it doesn't start with 'what do these users require?' Once you start with [the realization that] all any network does is deliver packets, but real networks create loss and delay… how much of it are we willing to accept? Stop talking about bandwidth, think loss and delay, and what is the appropriate amount that you need for the kind of applications that we foresee people having. And then you've got to reason about what kind of infrastructure you need to support that.

CD: Who, in your view, should take responsibility for shifting thinking in this way?

MG: One party… with a major stake… is the regulator. We've been talking to Ofcom, for example, in the UK, about the need to change the basis on which this market is measured. So just like the car market went from… emphasizing speed, and then it became about fuel economy… in telecoms, it's gone speed, bandwidth, bandwidth, speed, [but] the next phase is one of 'fitness for purpose', which is kind of like fuel economy. And there'll be something equivalent to safety down the road, and then pervasiveness, that'll be something else. The next thing is: what is the rate of variation of loss and delay and contention on different networks? Because you can have two networks that offer the same bandwidth, one of which is totally useless, and one of which is wonderful – because of different contention effects.
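Geddes’ closing point – that two networks offering the same bandwidth can differ wildly in usefulness – can be illustrated with a toy single-link FIFO simulation. The figures below are invented for illustration: both traffic patterns present the link with an identical average load, but the bursty one produces delay outliers an order of magnitude worse.

```python
def delays(arrivals, service=0.009):
    """Queueing + transmission delay (seconds) for each packet
    through a single FIFO link with a fixed per-packet service time."""
    free = 0.0  # time at which the link next becomes idle
    result = []
    for t in sorted(arrivals):
        start = max(t, free)   # wait if the link is still busy
        free = start + service
        result.append(free - t)
    return result

# Same average rate (100 packets/second, 90% utilisation) either way:
smooth = [i * 0.010 for i in range(100)]          # 1 packet every 10 ms
bursty = [(i // 10) * 0.100 for i in range(100)]  # 10 packets every 100 ms

print(max(delays(smooth)))  # ~0.009 s: no packet ever queues
print(max(delays(bursty)))  # ~0.090 s: the tail of each burst queues
```

Average load is identical in both runs, yet the worst-case delay differs by ten times – which is why averages over fifteen-minute windows say so little about application failure.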



Superfast Cornwall

Petroc Wilton reports on how a UK rural broadband rollout is beating the clock

Cornwall, a county at the southernmost tip of the UK, presents some serious headaches for mass broadband deployment. Rural and remote, the area is sparsely populated, and frequent rainfall can present major challenges for civil works and extensive copper networks. But via a public-private partnership under the banner of 'Superfast Cornwall', a new broadband network rollout is already underway – and, so far, running ahead of schedule. If it stays on track, Superfast Cornwall could become the poster child for successful partnerships between industry and government in broadband deployments. CommsDay editor Petroc Wilton travelled to the UK to find out how the commercial model for the partnership stacks up, how the stakeholders are driving and evaluating demand for the network, and how the access technologies are being deployed.

Superfast Cornwall is a partnership between the European Union, BT and Cornwall Development Company, the economic development arm of Cornwall Council. The European Regional Development Fund is investing up to £53.5 million to make the business case for the region workable, with BT putting in up to £78.5 million; the telco will ultimately own the network, with the public return measured in economic and social benefits, while a combination of BT's functional separation and national pricing will ensure equivalence of access at all levels of infrastructure. The end goal for the five-year project is to bring faster broadband to all of Cornwall's 250,000-plus premises.
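Those headline figures imply a rough per-premises budget. A back-of-envelope check, using the article’s upper-bound funding numbers and ignoring that the fibre footprint will reach only most (not all) premises:

```python
# Upper-bound contributions quoted for Superfast Cornwall (in £).
erdf_funding = 53.5e6   # European Regional Development Fund
bt_funding = 78.5e6     # BT's own investment
premises = 250_000      # "all of Cornwall's 250,000-plus premises"

per_premises = (erdf_funding + bt_funding) / premises
print(per_premises)  # 528.0 -> roughly £530 of combined funding per premises
```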

"We're aiming to get FTTX to – well, [we've] said 80% of premises, but it will be more than that,” explained BT's director for the Cornwall and Scilly superfast broadband program, Dr. Ranulf Scarbrough. “We're going to call it later; we like to say when we're sure! Lots of people put wild claims out, try to deliver on them, and then don't. So it will be more. And we will do that by 2014. We've got 105,000 passed now, so we're about 40%. And we should, by March, be about 200,000, give or take. We've exceeded every milestone so far.”

Because the partnership model sees the network, its revenues, and the initial funding remaining in BT's hands, the project isn't hostage to the political agenda of the day – or obliged to commit to a specific financial return to public sector shareholders. “The funding is committed anyway; I think they've been bold here,” said Scarbrough. “Some other areas have said 'ooh, there's lots of money in these networks, we want a slice of that because we're putting money in.' Well, there's issues with that. If we start to do it differently here, then we can't use our economies of scale... and it's a difficult business


case. If we shared revenues from it, then there'd be less we could do. The return on investment for the public sector is economic and social outcomes, and that's what we're delivering. The jobs, a network which is there for the long term – once it's in, we should have broken the market failure and it should be completely sustainable from the ongoing revenues.”

“We worked very hard on the consultations; I was involved with the previous project here, a thing called ActNow, which ran with first-generation broadband and pioneered the deployment in Europe. And we learned an awful lot of lessons about what public and private can't achieve on their own, and what they can achieve by working together,” he added. “The operating model here is relatively simple; because we've constructed it so that [BT and the Cornwall Development Company] both benefit from a good network, we both benefit from high takeup, the incentives are lined up from Day 1, rather than having an adversarial contractor-supplier relationship. And that's quite elegant, really.”

“This is absolutely our flagship… this model of how we work together.”

DRIVING DEMAND

Scarbrough cast the local expertise of the Cornwall Development Company as key to expediting the rollout – helping to prioritise key business areas for connection, smoothing local planning processes and consultations, and so on. But one of the areas where the partnership model is most crucial, he said, is in driving local demand.

“The marketing program is really big. We don't go out with the Cornwall Council logo, we don't go out with the BT logo – it's all about the Superfast Cornwall logo, which is the brand we've tried to get people to identify as local, relevant, informative, trusted, and that's what we go to market with pervasively,” he said. “It's tricky, because we're trying to drive takeup, but we can't sell

anything here – you have to buy from your service provider! But we're getting good results; the campaign's pretty pervasive.”

“I'm stunned by how little focus there is on it [in other rollouts]. There's so many projects working out how to get all this fibre out there, but not saying 'okay, but the key thing is, the more takeup you can drive, the better network you can build, because the economics look better. And the better outcomes you'll get.' Actually, driving the takeup is probably just as important as driving the network build.”

At the end of August, there were just over 12,000 connections through 27 service providers across 100,000 premises passed – most of which only went live earlier in 2012. “We have said, jointly, we want to get 50% takeup by 2015; the build runs to 2014. I don't know whether we'll get there.



But having targets you know you can get to is boring!” said Scarbrough. “Our business case doesn't assume 50% takeup; it's a lot less than that… and even if we only got halfway there, or three-quarters of the way there, I think that's a bold ambition: to say 'we're not just going to put this network out so we can say we've got fibre everywhere – we want people connected to it and benefiting'.”

The partnership is also running a series of supporting projects to drive not just takeup but outcomes, ensuring people and businesses start to use the new network productively. “There's a lot of business support, a lot of stuff with skills, quite a lot of stuff on digital inclusion – so not just about the early adopters, but those who have never been on the internet. We're innovating anyway on the network, but

we're doing a lot with how people use it, what the new products and services are, with the universities,” said Scarbrough. “And the council's also leading its own transformation program, to try and exploit the network and change the way it operates. It's very holistic; the network is where most of the money goes, but it's not just about the network on its own."

EVALUATION

The other side of driving demand, of course, is actually assessing the social and economic impact of the network – particularly since those returns are the basis of the public investment in the rollout. “For your tax dollars, you will get economic outcomes; how are you going to measure that, are you going to measure that, how do you know if you're successful? There's actually quite a big activity to measure the economic outcomes: how many new jobs, how much gross value add, how many new business startups,” said Scarbrough. “It's quite hard to measure, because you've got to get businesses to work with you… we're doing some work in the autumn looking at the first businesses that have been connected for a year, to see what's happened, to see what the results are.”

“We did it on the first-generation project, and we identified an annual GDP increase of over £150 million, for something like £2-3 million from the public purse. So ROI was pretty good. I think this may be a longer burn, just because we're a little bit earlier; lots of businesses are doing what they were doing faster, or with more people in parallel, but the new applications and services are still a


little bit more on the cusp.”

ACCESS TECHNOLOGIES

The 80-90% FTTX target is a mix of fibre to the cabinet and fibre to the premises, with FTTC first on the agenda and access equivalency a key part of the architecture. "In Cornwall, there's 100 exchanges, and running through the peninsula are two backhaul chains… you've got about 14 handover points that sit on the backhaul. So the architecture is that handover exchange and the children around it,” said Scarbrough. “You've got Openreach, a functionally separate division of BT, as the local access business, providing equivalence of inputs to everyone downstream. So BT Wholesale and others buy circuits from them on the same basis, and retail service providers buy from wholesale."

"From those children exchanges, you run fibre from the handover point, past the local exchange – so where you have an exchange you just have an aggregation node – out to where the copper junction box is. And in the case of FTTC, you're on fibre out to [new] cabinets, and then the service runs over the existing copper down to the end-user… and whether you're at the handover exchange, or one of the children, it's the same price, the same product, for everyone in the industry."

Handover points are set up in exchanges with at least four local loop unbundlers, and there are also passive products like duct and pole access available, plus sub-loop access. What it all boils down to is that any prospective service provider has a choice of whether to resell or use their own assets at each level of the network, from backhaul right through to access. “We make everything that we do available to others to do,” commented Scarbrough. “No-one else is doing it with their own capital.”

FTTH works as an extension of the model; where practicable, Superfast Cornwall is running fibre rather than copper out from splitters to distribution points located on existing poles – called manifolds – and then doing cable drops to order. But the joint venture is also looking at extending the FTTH footprint more broadly in response to demand. “Originally, what people said to me at the start of the project was 'if you're getting FTTC, does that eventually

mean you might get FTTP?'" said Scarbrough. "The answer, until about a year ago, was 'it's one or the other'… but actually we've debunked that a little bit; we've run a trial at St. Agnes of what is being called 'fibre on demand'. So if you're in an FTTC area and you want more bandwidth than we can provide – and FTTC does up to 80Mbps, distance-dependent – then we're developing a product where you can order… a fibre-based service. I think that's going to be more important, because it means there's a grow-on capability for businesses and so forth. The difficult bit is how you price it and how you order it.”

Outside the fibre footprint, Superfast Cornwall has been considering, and in some cases trialling, a range of technologies: from fixed wireless (following successful shared-infrastructure trials with one of the UK's mobile operators) and TV white spaces tech, through to the use of regenerators and SHDSL on existing copper infrastructure, and even satellite. But for the moment, it's holding fire on deciding where it will actually deploy these alternatives. “We've got a set of other technologies, and we've done trials on things, but we haven't deployed yet. And people say 'why aren't you getting on with it?' And the simple answer is we're trying to innovate and push fibre as far as we can,” said Scarbrough.
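The "distance-dependent" caveat is what makes fibre on demand attractive: rate falls off with the copper run from cabinet to premises. A step-function sketch makes the idea concrete – the breakpoints below are invented for illustration, not BT's published rate curve:

```python
# Illustrative (invented) downstream-rate steps for an FTTC line,
# keyed on the copper loop length from cabinet to premises.
RATE_STEPS = [(300, 80), (600, 60), (1000, 40), (1500, 20)]  # (metres, Mbps)

def fttc_estimate(loop_metres):
    """Crude downstream estimate; beyond the last step, FTTC adds
    little over plain exchange-based DSL."""
    for limit, mbps in RATE_STEPS:
        if loop_metres <= limit:
            return mbps
    return 5

print(fttc_estimate(250))   # 80 -> short loops see the headline rate
print(fttc_estimate(1200))  # 20 -> long loops are fibre-on-demand candidates
```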




Small cells are a big deal

The use of small cells in mobile networks holds the key to carriers coping with the coming explosion of data traffic, according to Alcatel-Lucent global head of network engineering Scott Nelson. He spoke to CommsDay's Geoff Long ahead of giving the opening keynote to the Institute of Electrical and Electronics Engineers' International Symposium on Personal, Indoor and Mobile Radio Communications in Sydney.

CommsDay: So what's your theme for the IEEE keynote you're giving?

Scott Nelson: Fundamentally, it's that the data explosion that happened in the wireline world 10 years ago – and that hasn't stopped – is now impacting wireless, and the consequences of that are quite dramatic. We're talking about nearly 100 percent growth every year for the next decade. People talk about 25 to 30 times growth over the last five years, and there will be another 25 to 30 times in the next five years, and it ain't gonna stop.

CD: And that's now impacting on the wireless network?

SN: Well, the bulk of the traffic still gets carried over the fixed network, that's the reality. But it's the growth and impact on the network that's the issue. There have been dramatic changes from 3G to

LTE, but that alone is not enough – you can't get there just by getting more spectrum. You can't get there just by improvements in modulation schemes. The cells just have to get smaller, and that's the big change. I'm not saying the other things won't have their impact – the new modulation schemes will help and getting more spectrum will help – but you can't get a lot more spectrum at the low frequencies where you want it.

CD: So spectrum and modulation and these things help, but small cells are the key?

SN: Yeah, to get a thousand times you're not going to get there by a 5 times or even 10 times improvement in modulation, or even 10 times more spectrum. So we're going to get there with small cells and Wi-Fi, and

the consequences of that on the network are pretty dramatic – it needs new techniques to manage mobility as you move from cell to cell, and interference issues between small cells and big cells, particularly where you can't get different frequencies. If you've got the same frequency at the macro and the same frequency


in the metro, there has to be pretty tight coordination.

CD: So where are we in terms of dealing with that?

SN: There are standards emerging; in fact, I think the standards are already written, it's just a matter of implementing them. The preferred method of course is to have different frequencies. So the macro cell has the lower frequencies and the small cells have the higher frequencies, and that way they don't interfere. That's where it's going to end up, but we're not going to be there in the short term.

CD: So for a carrier, would they be looking at getting the different frequencies?

SN: Yeah, that's the longer-term preferred model, but in the short term the carriers have only got what they've got. And that typically means they've got to use the same frequencies for the so-called heterogeneous networks, or HetNets. And that's simply about the coordination between the macro and metro layer, although it's not trivial.

CD: Telstra not long ago decided to turn off their public Wi-Fi network; is that going against the trend?

SN: The issue with Wi-Fi from that point of view is that it's not licensed. So at some point it becomes its own worst enemy, and it's hard to control.

Well, there is no control; it's unlicensed. So people can interfere with each other, and there's absolutely zero planning. Now, the Wi-Fi standards themselves overcome some of that if the cells are small enough. If it doesn't go outside the room, you're fine; as a public offering, it's fine, but only up to a point. The fundamental problem is that it's unlicensed. So it will be more personal than public

because it's not going to go away in the home. With the next versions of it, we're going to get 1Gbps from your TV to your Wi-Fi hotspot in the house. And interestingly, the technologies that we use to do that are exactly the same technologies that we use in other networks – the modulation schemes, the MIMO, all that sort of stuff.

CD: So Wi-Fi and mobile technologies are coming together?

SN: Well, actually, they're coming together with the fixed world as well, because when you go right back down to the bottom of the technology, they're all using OFDM.

CD: What can we expect to come out of the labs next in terms of small cells?

SN: The automatic management of the traffic and the parameters to get the network tuned properly. It's reasonably complex, and at the moment it's done a bit offline, and by humans. In the future it's self-organising – or self-optimising networks (SON). And that's fundamentally about reducing the opex. You've got to get some automation into it, so it's as much about automating the operation; you start to get a different set of requirements because of the complexity of how things have to interwork. It's no longer big cells but lots and lots of little ones. And then there's carrier aggregation: figuring out the technology to get a bit of spectrum here, here, and here, and make it look like one big chunk when it's not contiguous. And that work is still going on. People know how to do it, but you've actually got to know how to implement it.

CD: And finally, is there enough spectrum to meet the demand for data, assuming the uptake of small cells?

SN: Spectrum is a natural resource, but the good thing is that as the cells get smaller, you can go to higher and higher frequencies. And the higher the frequencies you go to, the more bandwidth you get.
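The carrier aggregation Nelson describes is easy to sketch: scattered, non-contiguous chunks of spectrum are pooled so the scheduler sees one wide carrier. The chunk sizes and the spectral-efficiency figure below are assumptions for illustration only:

```python
# Non-contiguous spectrum holdings (MHz) pooled by carrier aggregation.
chunks_mhz = [10, 15, 20]
bits_per_second_per_hz = 5  # assumed average spectral efficiency

# Aggregated, the three chunks deliver what one 45 MHz carrier would.
aggregate_mbps = sum(chunks_mhz) * bits_per_second_per_hz
print(aggregate_mbps)  # 225 -> "one big chunk" from three scattered ones
```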



Defining software defined networking

Software defined networking promises to revolutionise how networks are built and managed by decoupling the control layer of networks from the data transport layer. By taking all the intelligence of a network and putting it under software control, SDN introduces a new networking paradigm – one that offers greater control of network resources and enhanced flexibility in management, provisioning and service delivery. Tony Chan reports

When it comes to one of the hottest topics in the networking industry today, software defined networking, it is easy to get sidetracked and lost in the growing number of solutions and methodologies being proposed by all the major vendors – particularly for telecoms carriers facing increasingly broad technology choices and challenging network requirements. The idea behind SDN might seem simple at first – a network environment that is programmable via software – but the way it manifests itself in real service provider networks is still evolving. No one doubts the ultimate benefits of SDN. By handing control of the network to software, SDN not only logically centralises network control; it also allows for the creation of applications that automate network processes, simplify resource management, and support new services. What the industry can’t seem to agree on is a systematic methodology for bringing those features to market. There is plenty of work being done, most prominently by the Open Networking Foundation (ONF) in the form of the OpenFlow protocol standard, but the ONF is far from the only game in town. An increasing number of network equipment vendors, including Cisco and Juniper, are

enabling direct programmability of their platforms by opening up their network interfaces to third-party developers. And other standards bodies, such as the Metro Ethernet Forum (MEF), are now working on specific sets of APIs to enable SDN. “There are actually two implementations going on in the industry today. One is the proprietary router market, where the hardware and software are provided by one vendor,” said Margaret Chiosi, technical strategist at AT&T and leader of a technical committee on cloud and SDN at the MEF. “This is analogous to the server market where IBM in the 70s provided the hardware and software for that market; not only the software for the operating system, but the tools, the database, as well as the applications.”

“The second implementation is the ONF/OpenFlow standard, where the control and the data plane are separate and there is a standard interface defined between them. The controller is open-source software that can be provided by one vendor; the data plane is provided by another vendor, on merchant silicon, versus custom silicon by another vendor. The analogy to the server market, again, is the Unix environment – or Linux, which is an open-source operating system

based on Intel hardware.” It’s important to note here that neither AT&T nor the MEF is a member of the ONF, and neither is endorsing either model.

ONF AND OPENFLOW

As its steward, the ONF describes the OpenFlow protocol as “the first standard interface specifically designed for SDN” that “enables networks to evolve by giving logically centralised control software the power to modify the behaviour of network devices through a well-defined ‘forwarding instruction set’.” Put another way, OpenFlow asks equipment vendors to support a defined set of application programming interfaces (APIs) on their silicon, allowing third party software developers to write applications, such as network controllers, that run features on that silicon. In simpler terms still, OpenFlow allows software sitting in centralised servers to tell switches in the network what to do. When it first appeared in vendor presentations a little over a year ago, OpenFlow promised a whole new paradigm for network manageability. With heavy-hitting backers that include many of the major names in the telecoms and internet space, OpenFlow quickly became the de facto – some say the most hyped – standard for SDN development in the industry. Today, the ONF has more than 70 member companies, including all the major telecoms equipment vendors, the majority of global carriers, some of the biggest internet firms – Google, Facebook, Microsoft, and Yahoo – and a large number of leading IT solutions firms. But while OpenFlow is definitely the 800-pound gorilla in the SDN room, vendors are already highlighting its limitations. For starters, there’s a lot more going on inside service provider networks than just traffic: they have subscribers, they need to bill for services, and they interconnect with multiple other networks. Of course OpenFlow, currently at release 1.3, will continue to evolve, but vendors are already branching out with their own strategies.

GOOGLE’S OPENFLOW TRIUMPH

In an ideal network environment – or at least as ideal a network environment as Google could create for a backbone connecting 12 global data centres – OpenFlow is certainly proving its worth. By combining centralised control of the network, traffic engineering and traffic prioritisation, Google is tapping into many of the benefits of OpenFlow, embodied by the ability to dynamically control different types of traffic on its backbone. Those features now allow Google to run the backbone at close to 100% utilisation, compared to typical network utilisation rates of between 30% and 40%, Google principal engineer Amin Vahdat told NetworkWorld. The key to Google’s achievement is the ability to dynamically switch different types of traffic in the event of a network failure. While Google doesn’t have any extra capacity on its network to support the traffic on an impacted link – since all the links would be running at close to

maximum utilisation – the ability of OpenFlow to move and prioritise traffic on the fly means that important traffic rerouted from a downed link replaces only low priority traffic on the restoration link. “In other words, we can protect the high-priority traffic in the case of failures with elastic traffic that doesn't have any strict deadline for delivery,” Vahdat said. “We can also route around failed links using non-shortest path forwarding, again with the global view of network topology and dynamically changing communication characteristics.” Google’s feat is larger than life, if not jaw-droppingly out of this world – but can it be replicated in a service provider environment? Like all of Google’s technical feats, the solution had plenty of Google in its creation. For starters, Google built its own networking gear for the project – albeit because it started the project two years ago, when no OpenFlow-enabled

equipment was commercially available. It also architected its own software controllers for the network, again because it was ahead of its time. And Google was building a completely new network, so it didn’t have to worry about legacy equipment, existing customers and traffic – or, perhaps more importantly, the cost of migration. Few carriers today can afford to spend two years building a network, and few could build one so isolated from other services and network elements. Then again, if carriers really wanted to build a network like Google’s, they probably could today. A number of vendors – Brocade, Huawei, Juniper and Cisco, to name but a few – have commercialised, or have demonstrated, OpenFlow-enabled gear in their portfolios. And a number of OpenFlow controllers, such as NEC’s Trema and Big Switch’s Floodlight, are now available on the market. If they put in the time and money, carriers might be able to reap the benefits of Google’s example on parts of their infrastructure – but is that enough?

NICHE APPLICATIONS

Even staunch backers such as Verizon – an ONF board member and keen supporter of OpenFlow – are seeing limitations in the technology in its current form. At a recent conference in the US, Stuart Elby, chief technologist at Verizon Digital Media Services, highlighted several gaps between what OpenFlow offers today and where it needs to be to prove useful in carrier networks. These include the fact that OpenFlow today focuses on Layer 2/Layer 3, which fails to meet carriers’ requirement for multilayer network capabilities. OpenFlow needs “to include other transport technologies, including optical,” Elby said. At the same time, he argued that OpenFlow only solves part of the lower-layer networking problem, and doesn’t provide “protocol specifications for major aspects of a SDN ecosystem.” In his slides, Elby showed OpenFlow’s ability to control the data plane, but also highlighted the fact that carriers need much more to fully support services. In one use case, where Verizon is using OpenFlow to optimise video streams to its mobile users, the carrier still has to send traffic back and forth to other network elements, such as a transcoding/transrating box, which optimises the bit rate of streams to mobile devices based on the condition of the radio access network. Verizon also deployed a network cache as


part of the delivery model, as well as a real time traffic modelling unit to feed network data back into the OpenFlow controller. Another deployment model features the use of an OpenFlow controller for real time service assurance. In this case, OpenFlow’s functionality is limited to controlling the traffic flow; it doesn’t solve the complex issue of data collection and orchestration, or offer control of the analytics engine required to make sense of the data. While Elby notes that “OpenFlow is being applied to niche applications that can immediately benefit from its improved economics,” it is also clear from his deployment scenarios that it is not a cure-all for the complexity of service provider networks and service delivery environments.

SDN IN CARRIER NETWORKS

One industry body that recognises the potential of SDN in operator networks – though not necessarily in the form of OpenFlow – is the MEF, which oversees the standardisation work behind Carrier Ethernet, essentially a backbone and wide area networking technology. According to the MEF, which has not endorsed OpenFlow but acknowledges the initiative being led by the ONF, SDN is applicable to carrier WANs in three distinct use cases. One is resource bursting, where capacity can be turned up instantly to support changing demand from applications, particularly in support of cloud computing. Another is network slicing, where a carrier can create a network inside a shared infrastructure – much like virtual private networking – but with much greater control over specific paths, equipment location and even protocols. The third is related to the second: traffic engineering, which, as Google demonstrated, offers the ability to run networks at close to full utilisation. A lot of what the MEF describes sounds similar to OpenFlow, but the MEF is careful not to directly associate itself with the ONF’s effort. “Getting back to the relationship of MEF to SDN, going back to the original definition of open, programmable and application aware, the MEF Technical Committee is focused on the APIs

needed to dynamically configure, monitor, and manage Ethernet services more easily,” MEF’s Chiosi said.

BEYOND SDN

The reality is that few vendors in the service provider and carrier network space see OpenFlow – and by extension SDN – as an end in itself. While some of the biggest names, including Cisco and Juniper, have adopted support for OpenFlow as well as other SDN technologies, they see SDN as only a small sliver of broader strategies to bring network programmability to a much larger part of the carrier network. In addition to OpenFlow and SDN, Cisco’s solution now takes the form of its Open Network Environment, or Cisco ONE, initiative, which encompasses the entire solution stack from transport to management and orchestration. As part of Cisco ONE, the company has released its One Platform Kit (onePK), a software development kit delivering a set of APIs to all of Cisco’s operating systems and hardware platforms. So instead of just abstracting the switching layer with SDN,

Alcatel-Lucent’s top-down approach to network programmability

While the industry warms up to software defined networking and its ability to bring programmability to networks, Alcatel-Lucent is making a move of its own, from a completely different direction. The company has so far steered well clear of SDN and OpenFlow. Instead, Alcatel-Lucent is attempting to add programmability to service provider networks from within the networks themselves. Through its Open API Platform (OAP) initiative, described as “an end-to-end API Monetisation and Optimisation software solution” for service providers, the company is focusing on existing APIs already found in networks and helping service providers take advantage of them. What Alcatel-Lucent is talking about is not a set of network interfaces or schemes like SDN, but a set of services that help service providers identify, manage, and deploy the APIs already inside their networks. In this way, OAP assumes from the start that networks are programmable – its goal is to help service providers define what they should do with that programmability. “What Alcatel-Lucent is doing is saying, here’s an ability that you have in your network; what is interesting around that, and how do we think about making that programmable,” said Laura Merling, who heads up Alcatel-Lucent’s API Strategies and Solutions. In this sense, network programmability is not about giving software access to the underlying equipment, but giving services access to the network – the classic example being the ability for applications to self-provision resources from the network. “You want to give an enterprise the ability to say, ‘what’s the status of the network, and is it available for me to send this big data now?’ and then to be able to invoke burstable bandwidth to support that transport,” said Merling. In other words, Alcatel-Lucent’s approach centres on exposing a service provider’s network capability to applications.
So while OpenFlow and other SDN approaches focus on giving network builders programmability of the underlying hardware, OAP works on exposing that programmability to high-level applications.
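The two levels of programmability described in this sidebar can be sketched in a few lines of Python. This is a toy model only – the class names and the flow-rule format are invented for illustration, and neither Alcatel-Lucent’s OAP nor any real OpenFlow controller exposes these interfaces – but it shows the distinction: low-level code installs match/action rules directly on a switch, while a service-facing API lets an application simply ask for a capability such as burstable bandwidth.

```python
# Toy model of two levels of network programmability: flow-level
# (OpenFlow-style) versus service-level (capability API). All names
# here are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class FlowRule:
    """SDN-style rule: match packet fields, apply a forwarding action."""
    match: dict          # e.g. {"dst_ip": "10.0.0.5"}
    action: str          # e.g. "output:port2"
    priority: int = 0


@dataclass
class Switch:
    """Data plane: a flow table programmed by an external controller."""
    flow_table: list = field(default_factory=list)

    def install(self, rule: FlowRule):
        self.flow_table.append(rule)
        # Highest-priority rules are consulted first.
        self.flow_table.sort(key=lambda r: -r.priority)

    def forward(self, packet: dict) -> str:
        for rule in self.flow_table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "drop"  # table miss


class ServiceAPI:
    """OAP-style view: an application requests a capability, and the
    platform translates it into low-level flow rules on its behalf."""
    def __init__(self, switch: Switch):
        self.switch = switch

    def request_burst(self, dst_ip: str, mbps: int) -> bool:
        # A real platform would check capacity and bill for this;
        # here we just install a high-priority rule for the flow.
        self.switch.install(FlowRule(match={"dst_ip": dst_ip},
                                     action="output:burst-path",
                                     priority=100))
        return True


sw = Switch()
sw.install(FlowRule(match={"dst_ip": "10.0.0.5"}, action="output:port2"))
api = ServiceAPI(sw)
api.request_burst("10.0.0.5", mbps=500)
print(sw.forward({"dst_ip": "10.0.0.5"}))  # burst rule wins on priority
```

The point of the sketch is that the application never touches `FlowRule` at all; it states an intent, and the platform decides which rules realise it.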


Cisco ONE aims to also abstract higher service layers from the network into programmable interfaces, albeit Cisco-controlled ones. “My perspective is that service abstractions are a vital next step in the evolution of network programmability,” said David Ward, VP, service provider chief architect and CTO at Cisco. “Abstractions will allow for the definition of layered APIs and NPIs (network programming interfaces). Enabling multilayered APIs across all of the underlying network elements will be a critical first step to ensure integration with operator development environments.” A similar approach has been adopted by Juniper, which has had a software development kit for its JUNOS network operating system for some time. Late last year, Juniper added the source code for its implementation of OpenFlow to the SDK, along with support for other SDN technologies, such as BGP-TE (border gateway protocol-traffic engineering), PCE (path computation element), and ALTO (application layer traffic optimisation). It’s apparent from their roadmaps that industry stakeholders see OpenFlow, and SDN, as key components for networks going forward, but it is also clear

that it will be just another feature among a host of other programmable capabilities in service provider networks.

OPENFLOW NOW

That doesn’t mean service providers should wait, says Charles ‘Chuck’ Jones, vice president of worldwide systems engineering at Brocade. “The biggest risk is for service providers and carriers not to adopt an OpenFlow and SDN strategy. If they don’t do that, they are going to be left behind from a competitive perspective,” Jones said. “In that planned development, I think they will have to evaluate feedback from their vendors, and the industry, on the direction of OpenFlow and its adoption timing and so forth.” He does recognise, however, that the landscape will continue to change, especially over the next year as OpenFlow evolves. On the other hand, he also firmly believes that adoption is inevitable. “There’s a lot to be done in the standardisation, in the deployment. I think it will take more time than it should, because any time you roll out and deploy new and innovative ideas, there are some barriers to overcome with comfort with the technology, the reliability,” he said.

“I think what you’ll see in the next weeks and months are proofs of concept, small production networks that are being deployed now, and the expansion of the capabilities. I would predict that at the end of 2013, there’ll be wider adoption and acceptance of the capabilities, and customers will start to look for places to deploy it.” One development that might kickstart adoption is what Brocade dubs the hybrid mode model. “Brocade has announced, and I think no one else has announced this, a hybrid model. Right now, the OpenFlow protocol is written with the idea that network devices would be 100% OpenFlow enabled,” said Jones. “Brocade, with a little bit of pragmatism, has said that is probably not always going to be the case.” The company is now proposing a solution that carves off bandwidth for OpenFlow applications and OpenFlow functionality, while reserving part of the network’s resources for running traditional environments. “Effectively what it is creating is a combination of what we call hybrid mode. It creates comfort for early adopters that the experimentation, and the implementation, won’t compromise their existing customer base and so forth,” said Jones.
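A rough sketch of the hybrid mode idea described above, with invented names and numbers rather than Brocade’s actual design: the switch’s capacity is split into an OpenFlow partition and a legacy partition, and OpenFlow-controlled flows are admitted only against the carved-off share, so experiments can never starve existing traffic.

```python
# Toy model of "hybrid mode": resources carved into an OpenFlow
# partition and a traditional partition. Illustrative only -- not
# Brocade's actual implementation.
class HybridSwitch:
    def __init__(self, total_bandwidth_gbps: float, openflow_share: float):
        assert 0.0 <= openflow_share < 1.0
        self.of_capacity = total_bandwidth_gbps * openflow_share
        self.legacy_capacity = total_bandwidth_gbps - self.of_capacity
        self.of_used = 0.0

    def admit_openflow_flow(self, gbps: float) -> bool:
        """Admit an OpenFlow-controlled flow only if it fits in the
        carved-off partition; legacy capacity is never touched."""
        if self.of_used + gbps <= self.of_capacity:
            self.of_used += gbps
            return True
        return False


sw = HybridSwitch(total_bandwidth_gbps=100, openflow_share=0.2)
print(sw.admit_openflow_flow(15))   # fits in the 20 Gbps partition -> True
print(sw.admit_openflow_flow(10))   # would exceed the partition -> False
print(sw.legacy_capacity)           # the legacy partition stays reserved
```

The comfort for early adopters falls out of the admission check: a misbehaving OpenFlow application can exhaust only its own partition.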

Why Cisco is not afraid of OpenFlow

It is easy to understand why vendors such as Cisco might have reservations about OpenFlow and, by extension, software defined networking. After all, OpenFlow threatens to take a lot of the value out of existing vendor platforms by taking the intelligence out of hardware and putting it inside servers running applications. As Google’s Vahdat puts it: “What OpenFlow and software-defined networking really enables us to do is separate the evolution path for hardware and software. In other words, you can get the hardware that meets your needs and separate that from the software that meets your needs for a particular deployment. Historically, those two things have been wedded together.” That might be the case for Google, but the reality for telecoms operators extends far beyond the ability to move traffic around more efficiently at the transport layers of the network. “SDN, network virtualization and overlay networks (choose your favourite descriptor) are not going to commoditize the underlying networking infrastructure. These architectures actually place more demands on the core infrastructure to enable network virtualization securely, with high performance, at scale,” wrote Padmasree Warrior, chief technology officer at Cisco, in a blog post. “Why? Because customers expect their core infrastructure to be seamlessly integrated with servers and fabric interconnects. They want a common management framework across all switches (physical and virtual)… SDN no more minimizes the underlying infrastructure than a new steering wheel undermines the importance of a car engine.”


Internal affairs
How Huawei’s Phil Tarling is driving image change from the inside

As Huawei continues its rapid program of global expansion, the vendor is remaking its image in a bid to move on from the suspicion and controversy that have dogged some of its international operations. In recent weeks, the firm has even been petitioning governments in both the USA and Australia to end what it says is unfair discrimination based on national security concerns. But Huawei’s campaign to change its image also encompasses a major internal overhaul – restructuring its take on corporate governance and internal audit under the leadership of Phil Tarling, the chairman of the global Institute of Internal Auditors. CommsDay editor Petroc Wilton caught up with Tarling during a recent Sydney visit to dig

into his role in Huawei’s new transformation drive.

CommsDay: Can you outline, in general terms, what you’re doing at Huawei?

Phil Tarling: My official title is VP of the Centre of Excellence for Internal Audit. My role is to develop the internal audit function within Huawei globally, to bring it to what they want it to be: leading edge. They want everybody else to look at Huawei and say ‘this is what we need to do, this is the model that we need to follow’. That’s pretty exciting, both for Huawei and for me, because it does


push expectations, and becomes quite a challenge when at the same time the company is converting from being a Chinese company with international offices to a global company with, potentially, a Chinese office. The Centre of Excellence is based in the UK; it was deliberately based there because of the provenance that the UK has in good corporate governance, having led the world from the very beginning with the Cadbury reports and those sorts of things in corporate governance… there are two sides. There’s the internal audit side, which is really making certain that we’ve got the proper systems in place, the right controls that the business needs to achieve what it needs to achieve, and then you’ve got the investigators. One of my other roles, and probably one of the reasons why Huawei sought me out, is that I’m the [chairman] of the global Institute of Internal Auditors at the moment… so I’m sort of involved at the pinnacle of the profession, as it were. So I have some pretty good contacts around the world, and I’m hoping to use them! And that’s what we’re hoping to do going forward… rather than re-invent everything, to bring everything in line with best practices.

CD: So it’s not just a limited financial role; it covers broader business processes and corporate structure?

PT: It’s the whole thing: business process, corporate structure. At the moment, we’re doing the tough audits, the ones that haven’t been done. For example, we’ve just recently done an audit of foreign exchange; nobody had ever looked at Huawei's foreign exchange management. For a global firm, that’s very important! Whilst the management team had been aware of it, and the foreign exchange guys had been working on it, there’s never been this in-depth audit of ‘exactly what do we do, are the business processes correct, or have they just sort of come up over time?’ The guys that I’ve got have vast experience of internal audit. One of them is ex-Government Communications Headquarters in the UK, so he’s got the telecoms background as well as the audit background. The other guy is ex-risk management at Cisco, so again he’s got a telecoms background, but he’s also got this risk management audit side. We're building up people from across the globe. At the moment, we have about 170 people in internal audit and investigation around the world; we do have authority to go up to 300. We’ll work our way up to that, rather than try and do it in one big splash, because we want to make sure we get everything right first before we start getting lots of numbers. At the moment, the majority of them are Chinese nationals; we’ve got to build it up through time so that we get a better spread of who we need. To deal with, say, audit in Brazil… you need a bit of knowledge of the local culture.

CD: How much of a challenge


have the cultural barriers been so far?

PT: My Chinese director Zuo Chuan is basically my conduit into how to get things done. It would be silly for me to say ‘I’ve been there nine months and I actually understand Chinese culture’; that’s nonsense, I don’t. I’m getting to understand some bits of it, but Zuo Chuan makes sure that I’m sort of on there… so I’ll say to him ‘look, we need to do this’, and he’ll say ‘right, the best way to do that is this way’. Because the Anglo-Saxon way doesn’t work – it might work with my Anglo-Saxon colleagues but it certainly doesn’t work with the Chinese. So you have to be alive to that sort of difference. [But] we’re able to go anywhere we want in the company; I decide what audits the Centre of Excellence is going to do, nobody else tells me. And whilst at the moment the main audit team are doing the general program, the Centre is looking at those areas that aren’t usually audited, or haven’t been audited – all those sorts of areas. And I think the fact that we’re allowed to do that speaks a lot for the openness that we’re pushing.

CD: How far will your role, and that of your team, go towards improving Huawei’s profile globally? In particular, there’s been quite a lot of controversy in the United States around Huawei’s links [to the Chinese state]; even in Australia, we had an issue where it transpired Huawei had been excluded from NBN tenders.

PT: The Global Institute of Internal Auditors is actually based in Orlando in the US, so I’m going to be very much the face of internal audit in the States as well – and everybody will know that I’m a VP at Huawei! So it’s going to be interesting. I have spoken to my colleagues in the Institute about this, and they’re relaxed with the fact that I’m employed by this company which appears to be having a few problems with the US government! The interesting thing to me is that they seem to be forgetting all the things that we’ve done to try and provide a little bit of comfort to them. For example, we’ve set up a cyber-security centre in the UK; we’ve got John Suffolk, who’s an ex-government computing guy in the UK, working on cyber-security issues for us. That centre is totally independent; we just fund it. We

let them get on; they have full access to all our products. Now, if there are any backdoors or anything in anything that we’re issuing, they’ll find it – and they have the independence to be able to find it and then blow the whistle. We’ve got so many opponents out there – competitors, shall we call them – that if we had back doors, they’d find them, and they’d be shouting from the rooftops: ‘hey, look, we found the Huawei back door!’ In Europe, for example, Alcatel, Ericsson, Nokia; they’re all sort of saying ‘hey, this is stupid, we need competition; the only way we’re going to grow is the competitive edge’. And if you start pulling that competition out, the only losers, actually, will be the telecoms industry… because some of the patents that we hold are necessary for the US companies to go forward. And if they start restricting us, then they may lose out themselves on taking things forward. I think by looking at the transparency that we're offering through internal audit, through having an audit committee, all those sorts of things for good governance – I think, in the end, we’ll be able to persuade people that it’s not all bad… it’s a company doing business, and wanting to do business. [And] I think in the end, the force of change will actually make it happen.


Cashing in on white space

Mobile network operators across the globe are all coming to terms with the reality that spectrum, by its very nature, is a finite resource. And with the data explosion forcing service providers to think outside the box, TV white space technology is entering the conversation as a potential complement to existing spectrum assets. David Edwards reports

Television white space is the unoccupied bandwidth that exists between TV channels in the UHF spectrum. The technology has great propagation characteristics in terms of distance and its ability to penetrate buildings. So why isn’t it being more widely considered as a viable alternative to dedicated spectrum? Ericsson Australia head of strategic marketing Kursten Leins says that while mobile networks could theoretically be adapted to utilise TV white space, there remains a limited amount of usable white space spectrum “due to the need for guard-bands to protect against service interference.” He also points out the inverse relationship between population density and the volume of spectrum available. “In areas of low population (i.e. rural areas), there may be

larger amounts of TV white space available; however, due to the low population, demand for spectrum is correspondingly low also. By contrast, cities with high population densities also have many primary and secondary TV broadcasting sites, creating a much more heavily utilised TV spectrum band, and therefore far less spectrum is potentially available for serving high-capacity, high-density urban areas,” he explains. Indeed, TV white space has been strongly mooted as something of a rural broadband solution. Independent wireless technologist Simon Saunders, of Real Wireless, earmarks rural broadband and machine-to-machine applications as perhaps the best fit for TV white space technology. “With M2M, machines are potentially in very different places to people, so that’s helpful – the transducers, the grids you might want to monitor, the hospitals you might want to connect up to patients in rural areas – that has a very different profile to typical mobile traffic and population movement generally… and might well make use of white space,” he says. “It doesn’t actually need that much bandwidth compared to some of the other applications; most machines just need a reliable long range connection but not a lot of bandwidth. So that can be quite opportunistic – and that’s often not time-critical; if there’s a bit of interference for a while, it can hold onto that traffic and


transmit it later on. So it does seem that the M2M and the rural broadband applications are a better fit than some of the sorts of offload and mobile and LTE alternatives that have been vaunted for this.” Other players have suggested using TV white space for wireless broadband backhaul. In Australia, the Commonwealth Scientific and Industrial Research Organisation’s Ngara Access technology – originally designed as wireless access using TV white space to deliver broadband services over existing broadcasting infrastructure – can now also be used for backhaul. The CSIRO’s Jay Guo says that both access and backhaul “have their merits in different scenarios.” Alcatel-Lucent’s department head of autonomous networks and system research, Holger Claussen, also sees a real opportunity for backhaul via white space technology in Australia over the next five years. “To take advantage of all the equipment that’s out there already, using it for backhaul makes a lot of sense,” he said at a recent IEEE conference. According to Claussen, TV white space is robust enough for something as important as backhaul, especially with large numbers of small cells deployed in a network.

ANTENNA TECH

“I think with appropriate antenna technology… if you form the beam so that you really have a point-to-point link between the small cell and the point where you have the fibre backhaul, this way you can reject a lot of the interference, and the spectrum can be reused by others as well, so that could be a possibility,” he says. “It doesn’t solve the problem if the spectrum that is available changes dynamically. But I think the initial approach is using some database – which is not very exciting, I think – but maybe more dynamic approaches, of sensing when spectrum is used locally and then switching over, can be used later on.” There’s no escaping the fact that TV white space is unlicensed spectrum – like Wi-Fi – and that’s one of the main reasons mobile network operators are cagey about using it. Saunders explains that as soon as multiple applications pop up to use the spectrum, they have to contend with each other for the available bandwidth. “So the applications… [including M2M applications] will follow very different protocols and standards over the air – in fact most of them are entirely proprietary. So the way they will manage with respect to each other is a completely unknown quantity at this point,” he says. “There are some moves to establish conventions at least, but [if] one device is using some variant of LTE, another device is using the wireless protocol that people have proposed for M2M, and somebody else is adopting a kind of

variant of WiMAX that’s [to] be used for small cell backhaul – [those] protocols don’t know anything about each other, can’t detect each other [and] can’t be intelligent about coordinating among each other, so you run into some serious interference issues.” One thing that may go a long way towards addressing these concerns is across-the-board standardisation of frequencies. A new study from PolicyTracker, ‘Developing a Global Ecosystem for TV White Spaces’, concludes that over the next two years a number of major standardisation efforts will be completed, opening the gates for a substantial volume of TV white space devices to flood onto the market. However, TV white space technology remains, for the most part, unharmonised. Saunders says that the technology faces a tough challenge in terms of creating a mobile device that can tune across the whole frequency band. “It’s a very wide range, 470-862MHz. If you wanted a phone that could work anywhere in the world and make the most of any white space band, it’d have to be in that [spectrum bracket]… and that’s a huge challenge,” he says. When it comes to regulation, the US and UK – via the FCC and Ofcom, respectively – have led the way in moving towards utilising TV white space for wireless broadband, with other countries now beginning to follow suit. But the author of the aforementioned PolicyTracker study, Catherine Viola, maintains that regulation – not technology – remains the main barrier


to progressing with white spaces, with respondents citing the slow pace of introducing white space rules, uncertainties stemming from the World Radiocommunication Conference 2012 decision on a second digital dividend in International Telecommunication Union Region 1, unduly conservative protection requirements for incumbent services, and a lack of regulatory harmonisation as factors that still need to be addressed. Saunders agrees that this possible second digital dividend is looming as a potential hurdle to white space acceptance, adding that operators view white space either as an “add-on complement to doing things in other spectrum… or as a negative – something that is not helpful from their point of view and creates competition they’re not keen on. And it’s not reliable enough from their point of view to put their brand to it and offer it to customers.” “I don’t see it being at all positive for existing mobile network operators… [and] while some of them will investigate the potential, on the whole they see it as not helpful and a potential impediment to them getting a second digital dividend in a timely fashion – and they’re in no doubt they need that, as soon as it can possibly be made available,” he explains.
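One mechanism repeatedly raised in this debate is the geolocation database, where a white space device looks up which channels are free at its location. A minimal sketch of the idea follows – the transmitters, channels, protection radii and the single-distance protection rule are all invented for illustration; real FCC and Ofcom rules are far more elaborate.

```python
# Minimal sketch of a white space geolocation database: a device reports
# its position and gets back the channels with no protected broadcaster
# within that channel's protection radius. All data here is invented.
import math

# (channel, transmitter_lat, transmitter_lon, protection_radius_km)
PROTECTED = [
    (21, 51.50, -0.12, 80.0),   # hypothetical London transmitter
    (24, 51.50, -0.12, 80.0),
    (27, 53.48, -2.24, 60.0),   # hypothetical Manchester transmitter
]
ALL_CHANNELS = range(21, 31)


def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (haversine), Earth radius ~6371 km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))


def available_channels(lat, lon):
    """Channels a device may use at (lat, lon): every channel except
    those with a protected transmitter inside its protection radius."""
    blocked = {ch for ch, tlat, tlon, radius in PROTECTED
               if distance_km(lat, lon, tlat, tlon) <= radius}
    return [ch for ch in ALL_CHANNELS if ch not in blocked]


# A device in rural Cornwall (~50.4N, 4.8W) is far from both transmitters,
# so every channel is free; one in central London loses channels 21 and 24.
print(available_channels(50.4, -4.8))
print(available_channels(51.5, -0.1))
```

The sketch also shows why this approach favours rural broadband: the further a device sits from protected broadcasters, the more channels the database can release to it.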

2ND DIGITAL DIVIDEND “For example, the mobile industry as a whole – notably through the GSMA – has a very strong lobby now for a second digital dividend, and this white space uncertainty is one of the hottest topics in those groups around what they need to do to ensure that doesn’t derail the momentum behind this second digital dividend.”

But Viola says that despite the aforementioned regulatory barriers, consensus is building around using geolocation databases to manage white space devices, which in turn is acting as a catalyst for white space rule making. “As regulators start to align behind geolocation, PolicyTracker expects the pace of regulation to accelerate and a harmonised, multi-regional regulatory approach to TV white spaces to emerge. That, in turn, will provide the clarity and certainty that technology developers need to complete the various white space standards without delays and bring commercial solutions to market,” she says.

Regulation aside, Heavy Reading analyst Tim Kridel adds that operators’ mindsets will have to change before the technology becomes a commercial reality. Quoting a recent Heavy Reading report on TV white space, Kridel says that “positive experiences with Wi-Fi offload could make those sceptics receptive to using TV white space for aggregation, offload or both.”

“Sceptics also would want to see TV white space technologies in extensive commercial service without aggregation so they can scrutinise and validate vendor claims about their products’ ability to share spectrum and deliver a certain level of QoS.”

But while TV white space does present opportunities to network operators, Ericsson’s Leins says the fundamental business consideration for mobile broadband service providers remains “their ability to provide a consistent and predictable service to end users.” “Today, this is achieved through the use of dedicated spectrum allocation for individual operator services,” he says. “It is important to remember that network performance is a key differentiator between service providers, and while use of white space may, in the future, potentially provide additional secondary access capacity, it is not considered as an equivalent alternative to dedicated spectrum.”
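The geolocation-database approach Viola describes can be sketched in miniature: a device reports its position, and the database returns the channels not occupied by protected incumbents (TV transmitters, licensed wireless microphones) at that location. Everything in the sketch below – the in-memory incumbent list, the coordinates and the fixed protection radii – is invented for illustration; real databases operating under FCC or Ofcom rules apply far more detailed propagation and protection modelling:

```python
# Minimal sketch of a white space geolocation-database lookup.
# Incumbent data and protection radii here are hypothetical.

from math import radians, sin, cos, asin, sqrt

UHF_CHANNELS = range(21, 70)  # European channel numbering, 470-862 MHz

# (channel, latitude, longitude, protection radius in km) -- invented entries
INCUMBENTS = [
    (23, 51.51, -0.13, 80.0),  # e.g. a TV transmitter serving London
    (30, 51.51, -0.13, 80.0),
    (45, 53.48, -2.24, 60.0),  # e.g. one serving Manchester
]

def _distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(h))

def available_channels(lat, lon):
    """Channels a white space device may use at (lat, lon)."""
    blocked = {ch for ch, ilat, ilon, radius in INCUMBENTS
               if _distance_km(lat, lon, ilat, ilon) <= radius}
    return sorted(set(UHF_CHANNELS) - blocked)
```

The design point regulators are converging on is that the database, not the device, remains the authority: a device must query before transmitting and re-query whenever it moves or its response’s validity period expires, which is what lets incumbents stay protected without requiring every handset to sense the spectrum itself.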

