Edit
Can HP ‘Invent’ again?
I was sad to learn that HP, once a technology stalwart, is slashing 27,000 jobs by 2014 (and 9,000 this year). That's a clear indication of a sinking ship. No doubt, this move will slow down its fall. HP CEO Meg Whitman recently told analysts that HP is going to invest more in research, development and innovation. You may recall a past HP tagline that was simply, 'Invent'. And HP was once spoken of in the same breath as companies such as Xerox. HP missed out on many opportunities. Its successive leaders struggled to turn around the company. True, it excels in areas like printing, networking and servers, but these are the few aces left in HP's hand.

Over the years, HP acquired illustrious companies like Compaq, 3Com, Palm, and Tandem, but took a long time to integrate those companies' product lines and to produce top-selling solutions based on that IP. There were 'bad' acquisitions like Palm, and delayed decisions about producing tablets (HP Slate). And there were 'good' acquisitions like 3Com, Compaq and 3PAR that gave HP a stronger position in networking, compute, and storage. But now that HP is getting back to innovation & research, here are some areas it should focus on.

Traditionally, HP has been a research company that built innovative, hi-tech and useful products like pocket calculators and laser printers. It could have continued in this line and built industry-specific solutions such as label printers, card readers and bar code scanners. The Imaging and Printing Group at HP missed the digital photography bus. Yes, HP did acquire an online photo printing company, and tried to offer customized printing solutions. And HP made a recent effort with cloud printing services. Why didn't HP get into medical systems like GE, Siemens and Philips? With its clout in imaging, it could have produced diagnostic imaging systems for healthcare.

It's not too late for HP to pursue today's hottest technologies in the business world: cloud services, Big Data, and BI/analytics. Look at what EMC is doing. EMC was just a storage company that later created the networked storage category. Today, it manufactures Big Data analytics appliances (Greenplum). Oracle, once a maker of only RDBMS products, is doing likewise (after its acquisition of Sun). And then there's Dell, once a PC company that reinvented itself, and is now selling end-to-end enterprise solutions and also services.

HP has focused too much on hardware, and I think it needs to tone up its software muscles. Dell is quietly setting up a software division and I think HP should do likewise. Software is the key to smarter network management tools, BI/analytics and Big Data. And HP can embrace open frameworks like Hadoop (just as EMC did). Another possible area is hardware-based security (embedded in silicon), video surveillance, and biometrics (HP Imaging). HP could boost its consulting and global services business, an area long dominated by IBM. HP needs to have a vertical focus and make products for industries like banking & insurance, healthcare, retail & logistics, manufacturing, travel and government.

Clearly, the workforce at HP needs to tilt towards R&D. HP should think about setting up a separate research company on the lines of Xerox PARC. There's hope yet for HP.
Follow me on Twitter @brian9p
Brian Pereira is Editor of InformationWeek India. brian.pereira@ubm.com
Contents | Volume 1 | Issue 08 | June 2012
20 cover story | Indian Data centers go 'active-active'
Organizations are increasingly relocating their IT infrastructure to professionally managed data centers because of the cost advantage and the assurance of maximum uptime. This has suddenly created demand for data centers in India. The country's leading data centers are preparing to meet this demand with huge investments in state-of-the-art infrastructure
26 server | Why you should push your servers to the limit
The key server challenge in the data center today is the optimal utilization and management of servers with lower TCO. To address these challenges, technologies such as server virtualization, consolidation and converged infrastructure are gaining importance

29 Essel Propack's server virtualization journey
Zoeb Adenwala, Global CIO, Essel Propack takes us through the company's server virtualization journey and pens down five pitfalls that need to be addressed

30 storage | Storing more in less
How storage vendors are innovating to help data center managers cope with Big Data and the information explosion

33 Storage consolidation is rewarding
The fast diversifying Kalyani Group just consolidated its storage and moved to a common architecture — it can now offer new resources and utilities as a service, on-demand to users across the Group. This has improved efficiency and reduced costs and storage inventories

34 Power & Cooling | Mind your data center's PUE!
Increase in power tariffs and increasing operational costs of a data center have been forcing businesses to explore energy-efficient power management and cooling technologies for their data centers. Let's take a look at the recent innovations happening in this space and the kind of power management and cooling technologies being used by some of the most energy-efficient data centers in India

38 A peek inside Wipro's LEED GOLD Certified Data Center
39 Netmagic's approach to green data center
40 Tulip's unique data center design strives for lowest PUE
41 SAP's journey to energy-efficient data centers

Do you Twitter? Follow us at http://www.twitter.com/iweekindia
Cover Design: Deepjyoti Bhowmik
Find us on Facebook at http://www.facebook.com/informationweekindia
If you're on LinkedIn, reach us at http://www.linkedin.com/groups?gid=2249272
44 interview | 'Our network virtualization technology saves data centers thousands of dollars'
Martin Casado, Co-Founder & CTO, Nicira

46 interview | 'Less than 5 percent of data centers are in a professional environment'
Lt. Col. H S Bedi, Chairman and MD, Tulip Telecom
News
12 HP cuts 27,000 jobs
12 RIM launches BlackBerry 10 platform
12 EMC launches 42 products
12 Google officially acquires Motorola Mobility
14 Software: A profit generating business for IBM
15 Oracle to ship Java EE7 and Avatar by mid next year
15 Indian IT infra market to reach USD 2.05 bn
16 L&T Infotech goes social with Microsoft SharePoint
16 Essar group deploys SAP HANA on Cisco UCS

58 feature | Software-defined networking: A no-hype
Go beyond the sudden buzz about software-defined networking. Here's what it is, what it isn't, and what's ahead

64 feature | Are smarter buildings a reality in India?
More and more Indian companies and developers are adopting the smart building concept, which in turn is generating a huge revenue opportunity for vendors like IBM, Cisco and Capgemini

Departments
EDITORIAL.........................................................4
INDEX..................................................................8
NEWS ANALYSIS..............................................17
CASE STUDY................................................... 48
OPINION.......................................................... 52
FEATURE......................................................... 53
TECHNOLOGY & RISKS.................................67
GLOBAL CIO................................................... 68
PRACTICAL ANALYSIS................................. 69
DOWN TO BUSINESS..................................... 70
Imprint
VOLUME 1 No. 08 | June 2012
print online newsletters events research
Managing Director: Sanjeev Khaira
Printer & Publisher: Sajid Yusuf Desai
Director: Kailash Shirodkar
Associate Publisher & Director: Anees Ahmed
Editor-in-Chief: Brian Pereira
Executive Editor: Srikanth RP
Principal Correspondent: Vinita Gupta
Principal Correspondent: Ayushman Baruah (Bengaluru)
Senior Correspondent: Amrita Premrajan (New Delhi)
Copy Editor: Shweta Nanda

Design
Art Director: Deepjyoti Bhowmik
Senior Visualiser: Yogesh Naik
Senior Designer: Shailesh Vaidya
Designer: Jinal Chheda, Sameer Surve

Marketing
Deputy Manager: Sanket Karode
Deputy Manager—Management Service: Jagruti Kudalkar

Online
Manager—Product Dev. & Mktg.: Viraj Mehta
Deputy Manager—Online: Nilesh Mungekar
Web Designer: Nitin Lahare
Sr. User Interface Designer: Aditi Kanade

Operations
Head—Finance: Yogesh Mudras
Director—Operations & Administration: Satyendra Mehra

Sales
Mumbai | Manager—Sales: Marvin Dalmeida, marvin.dalmeida@ubm.com, (M) +91 8898022365
Bengaluru | Manager—Sales: Kangkan Mahanta, kangkan.mahanta@ubm.com, (M) +91 89712 32344
Delhi | Manager—Sales: Rajeev Chauhan, rajeev.chauhan@ubm.com, (M) +91 98118 20301
Head Office UBM India Pvt Ltd, 1st floor, 119, Sagar Tech Plaza A, Andheri-Kurla Road, Saki Naka Junction, Andheri (E), Mumbai 400072, India. Tel: 022 6769 2400; Fax: 022 6769 2426
Production
Production Manager: Prakash (Sanjay) Adsul

Circulation & Logistics
Deputy Manager: Bajrang Shinde

Subscriptions & Database
Senior Manager Database: Manoj Ambardekar, manoj.ambardekar@ubm.com
Assistant Manager: Deepanjali Chaurasia, deepa.chaurasia@ubm.com
Associate Office — Pune
Jagdish Khaladkar, Sahayog Apartment, 508 Narayan Peth, Patrya Maruti Chowk, Pune 411 030
Tel: 91 (020) 2445 1574, (M) 98230 38315, e-mail: jagdishk@vsnl.com

International Associate Offices
USA | Huson International Media
(West) Tiffany DeBie, Tiffany.debie@husonmedia.com, Tel: +1 408 879 6666, Fax: +1 408 879 6669
(East) Dan Manioci, dan.manioci@husonmedia.com, Tel: +1 212 268 3344, Fax: +1 212 268 3355
EMEA | Huson International Media
Gerry Rhoades Brown, gerry.rhoadesbrown@husonmedia.com, Tel: +44 19325 64999, Fax: +44 19325 64998
Japan | Pacific Business (PBI)
Shigenori Nagatomo, nagatomo-pbi@gol.com, Tel: +81 3366 16138, Fax: +81 3366 16139
South Korea | Young Media
Young Baek, ymedia@chol.com, Tel: +82 2227 34819, Fax: +82 2227 34866

Printed and Published by Sajid Yusuf Desai on behalf of UBM India Pvt Ltd, 6th floor, 615-617, Sagar Tech Plaza A, Andheri-Kurla Road, Saki Naka Junction, Andheri (E), Mumbai 400072, India. Editor: Brian Pereira. Printed at Indigo Press (India) Pvt Ltd, Plot No 1c/716, Off Dadaji Konddeo Cross Road, Byculla (E), Mumbai 400027.

RNI NO. MAH ENG/2011/39874

Editorial index — Person & Organization
Alexander Varghese, UST Global ..... 65
Anil Gaur, Oracle ..... 15
Anil Nama, CtrlS Data Center ..... 23
Anup Changaroth, Ciena Corporation ..... 52
Deepak Varma, EMC India ..... 31
Dinesh CS, Biodiversity Conservation India ..... 66
Harjyot Soni, Tulip Telecom ..... 40
Jayabalan Velayudhan, APC ..... 35
John Dunderdale, IBM Software Group ..... 14
Kiran Desai, Wipro Infotech ..... 38
Lt. Col. H S Bedi, Tulip Telecom ..... 46
Mahesh Trivedi, Netmagic Solutions ..... 39
Mandar Marulkar, KPIT Cummins ..... 48
Manpreet Singh Chadha, Wave ..... 66
Martin Casado, Nicira ..... 44
Mitesh Agarwal, Oracle India ..... 27
N Nataraj, Hexaware ..... 60
Naresh Singh, Gartner ..... 21
P.A. Sathyaseelan, Dell ..... 32
Perry Stoneman, Capgemini ..... 66
Pratik Chube, Emerson Network Power, India ..... 37
Raghavendra Rao, SAP Labs ..... 41
Rajesh Awasthi, NetApp India ..... 30
Rajesh Janey, EMC India and SAARC ..... 18
Rajesh Tapadia, ITI Data Center ..... 23
Rohan Parikh, Infosys ..... 64
Samir Bodas, iCertis ..... 17
Sanjay Motwani, Raritan India ..... 36
Santhosh D'Souza, Cisco India and SAARC ..... 26
Seepij Gupta, Forrester Research ..... 36
Sitaram Venkat, Dell India ..... 26
Srinivas Tadigadapa, Intel South Asia ..... 28
Suresh Menon, Fujitsu India ..... 27
Valsan Ponnachath, Cisco ..... 66
Vikram K, HP India ..... 28
Vinay Sinha, AMD India & South Asia ..... 28
VS Gopi Gopinath, Reliance Communications ..... 22
Yogesh Zope, Kalyani Infotech ..... 33
Zoeb Adenwala, Essel Propack ..... 29

ADVERTISERS' INDEX
Company name | Page No. | Website | Sales Contact
Juniper | 2 & 3 | www.juniper@dnbindia.in
Eaton | 5 | www.eaton.com/powerquality/india | EatonPowerQualityIndia@eaton.com
Zoho | 9 | www.zoho.com | india-sales@ManageEngien.com
Fluke Networks | 13 | www.flukenetworks.com | india-marketing@flukenetworks.com, Priya Sharma
Hitachi | 25
Cloud Connect | 42 & 43 | www.cloudconnectevent.in | surajit.bit@ubm.com
Interop | 50 & 51 | www.interop.in | salil.warior@ubm.com
ICSC | 57 | www.icse.in
FTS | 63 | http://fts.informationweek.in | anees.ahmed@ubm.com
IBM | 71 | www.ibm.com, ibm.com/puresystems/in
Microsoft | 72 | microsoft.in/readynow, www.microsoft.in | anees.ahmed@ubm.com
Important Every effort has been taken to avoid errors or omissions in this magazine. In spite of this, errors may creep in. Any mistake, error or discrepancy noted may be brought to our notice immediately. It is notified that neither the publisher, the editor or the seller will be responsible in respect of anything and the consequence of anything done or omitted to be done by any person in reliance upon the content herein. This disclaimer applies to all, whether subscriber to the magazine or not. For binding mistakes, misprints, missing pages, etc., the publisher’s liability is limited to replacement within one month of purchase. © All rights are reserved. No part of this magazine may be reproduced or copied in any form or by any means without the prior written permission of the publisher. All disputes are subject to the exclusive jurisdiction of competent courts and forums in Mumbai only. Whilst care is taken prior to acceptance of advertising copy, it is not possible to verify its contents. UBM India Pvt Ltd. cannot be held responsible for such contents, nor for any loss or damages incurred as a result of transactions with companies, associations or individuals advertising in its newspapers or publications. We therefore recommend that readers make necessary inquiries before sending any monies or entering into any agreements with advertisers or otherwise acting on an advertisement in any manner whatsoever.
'Data centers are becoming software defined'
Data centers around the world are increasingly being virtualized and organizations are restructuring business processes in line with infrastructure transformations. Raghu Raghuram, SVP & GM, Cloud Infrastructure and Management, VMware tells InformationWeek about the software defined data center, and how virtualized infrastructure will bring in more flexibility and efficiency http://bit.ly/JAVfYv
Social collaboration yields great benefits for Persistent
Pune-based mid-sized IT services vendor, Persistent Systems has significantly improved its collaboration process by deploying Cisco's enterprise collaboration platform, Quad http://bit.ly/KLhHO2
How Hexaware is using the cloud to improve its competitiveness
Hexaware's cloud journey has not only helped it in driving productivity, but has also resulted in a reduction of nearly 2.7 million pounds of carbon and a positive environmental impact http://bit.ly/LurMtC

How a pickpocket gets your ATM PIN
If your ATM pin is your birth date or a repeated number like 1111, then you need to change it right away http://bit.ly/Jt5kHi

Rasick Gowda @Rasick tweeted:
InformationWeek – Data Center - 'Data centers are becoming software defined' http://www.informationweek.in/Data_Center/12-05-18/Data_centers_are_becoming_software_defined.aspx via @iweekindia #dataman

Matt Stuart @mattstu555 tweeted:
InformationWeek – Software > Social collaboration yields great benefits for Persistent http://www.informationweek.in/Software/12-05-22/Social_collaboration_yields_great_benefits_for_Persistent.aspx via @iweekindia #Cisco

Leon Baranovsky @leonb tweeted:
'Data centers are becoming software defined' http://www.informationweek.in/Data_Center/12-05-18/Data_centers_are_becoming_software_defined.aspx

Mag Studios @MagStudios tweeted:
Data centers are becoming software defined. Take a look... http://bit.ly/JAVfYv

Matt Piercy @Cloud_Matt tweeted:
InformationWeek – Software > Social collaboration yields great benefits for Persistent http://www.informationweek.in/Software/12-05-22/Social_collaboration_yields_great_benefits_for_Persistent.aspx via @iweekindia

Hardik Shah @HardikShah81 tweeted:
How a pickpocket gets your ATM PIN: If your ATM pin is your birth date or a repeated number like 1111, then you need to change http://bit.ly/LgZOoG

Mark Boeder @markboeder tweeted:
InformationWeek – Security > How a pickpocket gets your ATM PIN http://www.informationweek.in/Security/12-05-22/How_a_pickpocket_gets_your_ATM_PIN.aspx?utm_medium=twitter&utm_source=twitterfeed via @iweekindia

Shaping Cloud @ShapingCloud tweeted:
How Hexaware is using the cloud to improve its competitiveness - #cloud http://bit.ly/LurMdp

IQinIT LTD @IQinIT tweeted:
Cloud Computing - How Hexaware is using the cloud to improve its competitiveness http://www.informationweek.in/Cloud_Computing/12-05-25/How_Hexaware_is_using_the_cloud_to_improve_its_competitiveness.aspx via @iweekindia #IQinIT
Follow us on Twitter
Follow us @iweekindia to get news on the latest trends and happenings in the world of IT & Technology with specific focus on India.
Social Sphere
Join us on facebook/informationweekindia

InformationWeek India: The special prize goes to Jayavardhan Katta. Jayavardhan has been working with IBM on Open Source technologies for the last six years. He is a RedHat Certified Engineer and actively participates in Open Source communities and forums. His areas of interest are Virtualization and Cloud Computing.

InformationWeek India: Which is the shortcut key you use the most? Ctrl+C, Ctrl+V, Alt+F, Ctrl+S or any other? Let us know!
Ravindra Anant Naik and 3 others like this.
Rahul Meher: Ctrl+X, Ctrl+A, Ctrl+Z (May 15 at 3:21pm)
Gt Kiv: Ctrl+C (May 15 at 3:46pm)

Fan of May
Ravindra Anant Naik completed his BBM degree from Alva's College, Moodbidri, Mangalore University in 2009. He is an avid follower of the InformationWeek Facebook page. He is the Fan of the Month for May.

Like, Tag and Share us on Facebook
Get links to the latest news and information from the world of IT & Technology with specific focus on India. Read analysis, technology trends, interviews, case studies, whitepapers, videos and blogs and much more…
Participate in quizzes and contests and win prizes!
News
The month in technology
HP cuts 27,000 jobs
Hewlett-Packard, in its second quarter results, revealed that it will be cutting 27,000 jobs — 8 percent of its workforce — during the next couple of years, amid slumping profits. As per the results, earnings per share of USD 0.98 came in above HP’s earlier guidance of USD 0.91. On the down side, revenue in HP’s printing and imaging group fell 10 percent from the year-earlier quarter, while PC unit revenue rose a paltry 0.4 percent. CEO Meg Whitman repeated a few times that HP’s strategic pillars are cloud, security, and information management. Revenues for other groups were down year over year but mostly up from last quarter. HP is reworking its services group, whose revenue was down 1 percent from the year-earlier quarter, so that it can better leverage its software assets. Revenue from enterprise servers, storage, and networking fell 5.5 percent from a year earlier. HP said its restructuring plans should generate annualized cost savings of between USD 3 billion and USD 3.5 billion by 2014, money that will be plowed back mainly into its cloud, information management, and security businesses and to restructure its services businesses around those areas.
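A back-of-the-envelope figure (our arithmetic, not HP's): if 27,000 jobs represent 8 percent of the workforce, the cuts imply a global headcount of roughly

\[ \frac{27{,}000}{0.08} \approx 337{,}500 \ \text{employees.} \]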
EMC launches 42 products

EMC Corporation launched 42 products focused on hybrid cloud at EMC World 2012. The new products and technologies span EMC's entire information storage, backup, virtualization and management portfolio. The announcements are aimed at extending EMC's leadership in two of the most transformative trends in IT history — cloud and Big Data — making it easier and faster for customers to benefit from the shift to hybrid cloud computing. Over a year ago, EMC had also announced a portfolio refresh with several storage systems, technologies and new capabilities. The announcements mark the company's most significant single-day launch of new capabilities for customers and partners.
RIM launches BlackBerry 10 platform

Research In Motion (RIM) recently unveiled its vision for the BlackBerry 10 platform at the BlackBerry World conference in Orlando, Florida and released the initial developer toolkit for native and HTML5 software development. The toolkit is available in beta as a free download. Christopher Smith, Vice President, Handheld Application Platform and Tools at Research In Motion said, "Developers can use this first beta of the tools to get started building apps for BlackBerry 10 and as the tools evolve over the coming months, developers will have access to a rich API set that will allow them to build even more integrated apps. The toolkit we are delivering today also meets developers on their own terms. Whether using the powerful Cascades framework, writing direct native code or developing in HTML5, BlackBerry 10 will empower developers to create attractive and compelling apps that excite customers." The toolkit includes the BlackBerry 10 Native SDK with Cascades, which allows developers to create graphically rich, high performance native applications in C/C++ using Qt. The toolkit also includes support for HTML5 applications with the BlackBerry 10 WebWorks SDK, allowing developers to create native-like applications using common web programming technologies.
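For a sense of what "native with Cascades" looks like in practice, here is a minimal hello-world sketch modeled on RIM's published Cascades samples. It is a sketch only: the header names and builder calls reflect the beta Native SDK as we understand it, and should be verified against the shipping toolkit.

```cpp
// Minimal BlackBerry 10 Cascades app (C++/Qt), in the style of RIM's samples.
// Assumes the BB10 Native SDK beta; verify headers and builder methods
// against the released toolkit.
#include <bb/cascades/Application>
#include <bb/cascades/Page>
#include <bb/cascades/Label>

using namespace bb::cascades;

int main(int argc, char **argv)
{
    Application app(argc, argv);   // Cascades wraps a Qt event loop

    // Cascades builder pattern: a page whose content is a single label
    Page *page = Page::create()
        .content(Label::create().text("Hello, BlackBerry 10"));

    app.setScene(page);            // make the page the root scene
    return Application::exec();    // enter the event loop
}
```

The same UI could equally be declared in QML and loaded through a QmlDocument; the C++ route shown here is what RIM means by "direct native code."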
Google officially acquires Motorola Mobility

Google recently announced that the acquisition of Motorola Mobility has closed, with the search giant acquiring it for USD 40 per share in cash. Google had initially announced plans to acquire Motorola Mobility in August. Google will run Motorola Mobility as a separate business, which will remain a licensee of Android, and Android will remain open. As part of the acquisition, Sanjay Jha has stepped down as CEO. In his place, Dennis Woodside, who has previously served as President of Google's Americas region, has become CEO of Motorola Mobility.
News | Software
Software: A profit generating business for IBM

IBM has significantly transformed its business over the last decade by shifting to higher value areas, improving business efficiency and investing in strategic long-term opportunities. Software has become a leading profit driver for IBM, and it is an important reason behind the company's ongoing transformation. IBM Software is increasingly shifting to higher value areas, improving business efficiency, and investing in long-term opportunities. The company plans to differentiate itself with new growth areas including smarter planet, business analytics, cloud computing, smarter commerce, business integration and social business, to name a few. The company is looking at acquisitions (in Q1 2012, IBM spent USD 1.3 billion) to add capabilities in analytics, cloud, and smarter planet segments. The model is not to buy a company just for the revenue it delivers, but to drive its value and intellectual capital across IBM and integrate its capabilities into IBM's go-to-market strategy. Each company IBM Software has acquired fills an important piece of the puzzle, as it continues to evolve toward high-value innovation segments, such as analytics, business process management and smarter commerce. Another way IBM is growing its business is by developing a wide range of emerging technologies that matter to clients. 'Unleash the Labs' is a concerted effort to bring IBM's research, scientific, engineering and developer talent closer to clients to work collaboratively on various real-world business problems. This goes far beyond lip service about listening to customers. IBM Software's development strategy involves actively collaborating with clients, incorporating their knowledge and requirements into the development process.
Evolution of IBM Software
IBM released its personal PCs in the 1980s, and in the 1990s came the revolution of mainframes and databases. IBM extended its presence in desktop by acquiring Lotus (in 1995), and a year later, IBM acquired Tivoli to bring centralized management to distributed environments. These acquisitions brought high growth in IBM's software business. In the last five to ten years, IBM has been focusing on other growth areas and acquiring companies for the same. For instance, the acquisition of SPSS in 2009 added predictive analytics to the organization's business analytics capability.
Analytics: a high focus area

To grow its analytics portfolio, IBM has spent billions of dollars and acquired companies like Cognos and SPSS. Last year, IBM acquired Q1 Labs, a provider of security intelligence software. The move aims to accelerate IBM's efforts to help clients more intelligently secure their enterprises by applying analytics to correlate information from key security domains and creating security dashboards for their organizations. "Middleware cannot align the business and hence we needed an application. Analytics and business applications are integrated to make business decisions. Today, most of the CEOs and CIOs are looking at analytics to make use of the huge information they have and grow the business. Analytics is a good portfolio for us. Fundamentally, analytics will become the foundation of what we do," said John Dunderdale, VP-Growth Markets, IBM Software Group.

Some facts about IBM Software:
• Contributes more than 44 percent of IBM's business profits
• In 2011, IBM's revenue for software was up 9 percent
• By 2015, about 50 percent of IBM's profit is expected to come from software
• IBM Software has made more than 75 acquisitions since 2003 and expects to spend USD 20 billion on acquisitions by 2015
• Since 2000, IBM Software has spent nearly USD 70 billion on R&D
• IBM Software has over 4,000 lab services employees

Bharti Airtel is one of the customers using IBM's analytics solution. Amrita Gangotra, Director - IT and CIO (India and South Asia), Bharti Airtel, said, "We are looking at building connectivity for rural reach. Our vision for 2015 is to enrich the life of millions and support the business growth. So, we are looking at new technologies like cloud and analytics. We started with MIS and today we are ready for predictive analytics." She asserted that most business innovation will ride on technology.

Eyeing the demand, IBM has taken aggressive steps to remix its business so that it is positioned for leadership in the high-growth business analytics space. The combination of its middleware and consulting expertise makes IBM well-suited to help organizations extract new value from their business information and respond at the same rate at which the data arrives.

— Vinita Gupta
Software
Oracle to ship Java EE7 and Avatar by mid next year

Java enthusiast and leader in database software, Oracle, plans to launch the Java EE (Enterprise Edition) 7 platform and project Avatar during the second quarter of 2013. The earlier version, Java EE 6, has had over 40 million downloads so far and has been the first choice for enterprise developers and the number one application development platform. According to Oracle, the Java EE 7 platform will further enhance the Java EE platform for cloud environments. As a result, Java EE 7-based applications and products will be able to operate more easily on private or public clouds and deliver their functionality as a service, with support for features such as multi-tenancy and elasticity (horizontal scaling). "While traditionally Java EE has offered services within platforms like messaging service and things like that, in EE 7 the platform itself can be offered as service in cloud," said Anil Gaur, VP, Software Technologies, Oracle, on the sidelines of the JavaOne event in Hyderabad held between May 3-4,
2012. “Developers are looking for a PaaS standard for the next generation of cloud-based applications, and the Java EE platform will be the PaaS standard.” Oracle’s project Avatar aims to enable hybrid applications where HTML5-based user interfaces (UIs) share content between Java clients and Java EE servers, both in data centers and in the cloud. “It is basically addressing the need where customers or developers want to write their application once and use the same applications and same UI on different kind of devices like your mobile devices, desktops, and laptops. They don’t want to write applications multiple times,” Gaur said. With 97 percent of enterprise desktops running on Java and 85 percent of the phones in the world Javaenabled, Oracle is aggressive on tapping this market. Although the company declined to share specific numbers on the revenue generated by Java and related products, Gaur said that Java is an integral part of Oracle’s strategy. “If you look at the fusion middleware,
applications or even in the database, they embed pieces of Java. So it is a core part of our strategy and it's in our best interest to move Java forward. And, we also understand that we can only move forward with Java with participation from other major vendors like IBM, Apple, and many other such players."
— Ayushman Baruah
Hardware
Indian IT infra market to reach USD 2.05 bn

The Indian IT infrastructure market, comprising servers, storage and networking equipment, will reach USD 2.05 billion in 2012, a 10.3 percent increase over 2011, according to Gartner. The IT infrastructure market is expected to reach USD 3.01 billion by 2016. Revenue growth will be primarily driven by ongoing data center modernization, as well as new data center build-outs. Servers form the largest segment of the Indian IT infrastructure market, with revenue forecast to reach USD 754.5 million in 2012 and grow to USD 967.2 million in 2016. The external controller-based storage disk market in India is expected to grow from USD 439.4 million in end-user spending to USD 842 million in 2016; this is the fastest growing segment within the IT infrastructure market. The enterprise network equipment market in India, which includes enterprise LAN and WAN equipment, is expected to grow from USD 861 million in 2012 to USD 1.2 billion in 2016.
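As a quick sanity check on those numbers (our arithmetic, not Gartner's), the overall forecast implies a compound annual growth rate of about 10 percent:

\[ \left(\frac{3.01}{2.05}\right)^{1/4} - 1 \approx 0.101, \]

which is consistent with the 10.3 percent growth projected for 2012 itself.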
"The key growth driver for the data center market is the ongoing investment in large captive data centers coupled with the capacity growth witnessed within the data center service provider space. Indian organizations are heavily focusing on optimizing their infrastructure capacity by implementing virtualization and incorporating newer ways of data center design," said Aman Munglani, Research Director at Gartner. "Though India is in the early stages of cloud adoption, cloud service providers will also be a key contributor to the infrastructure consumption, especially for commodity type, scalable technologies, such as scale-out systems and extreme low energy servers."

Naresh Singh, Principal Research Analyst, Gartner said, "Indian IT organizations are making a big shift from a distributed IT setup to a more manageable and efficient centralized model, leading to consolidation of branch and remote IT resources into fewer, but larger data centers. Data center site consolidations are happening, especially for in-house data centers owned by organizations."

— InformationWeek News Network
News | Software

L&T Infotech goes social with Microsoft SharePoint

L&T Infotech, a global IT services company, announced that it has used Microsoft SharePoint Server 2010 to create CliquePoint, an enterprise social collaboration platform for reducing e-mail usage and increasing business agility. This has increased active users' productivity by up to 20 percent. After evaluating many commercial and open source options, L&T Infotech eventually decided to build CliquePoint on Microsoft SharePoint due to its ability to leverage existing social features, metadata services and integration with backend systems. The entire solution was developed in four months by leveraging the .NET framework. "We wanted to provide a platform where everyone including enterprise business systems could come together and share information. To get there, it needed a technology platform that would facilitate collaboration across the organization in an easy, responsive, and accurate way," explained Abhay Chitnis, Vice President and Chief Technology Officer, L&T Infotech. "Microsoft SharePoint Server 2010 was the best fit for our needs and an excellent technology foundation on which we could build. CliquePoint was designed to connect easily to our existing IT environment. It offered many out-of-the-box features, as well as the flexibility and customizability to develop a comprehensive enterprise social collaboration. We also tried to make it very easy for users to jump right in and find the required information," said Chitnis.

The enterprise social collaboration tool, CliquePoint, is split into communities, personal collaboration and business events. Communities help people with common interests to exchange ideas and best practices. Personal collaboration features provide people with comprehensive tools such as micro-blogs, blogs, and wikis, enabling them to connect and follow others. The most important feature introduced is tracking business events and involving enterprise business systems in collaboration. The enterprise collaboration portal gives employees streamlined access to information, which enables them to know and understand each other better and ultimately work better together. This has helped to foster innovation through dynamic collaboration. CliquePoint has enabled teams to consolidate all information, making it readily available. Today, teams can access communities, initiate blogs, contribute to wikis, and share files, including documents, videos, and audios with other members. This has led to a significant increase in employee productivity by 20 percent. The turnaround time on RFPs has reduced considerably, as users collaborate using CliquePoint, which in turn is integrated with other applications, thus making information available through a single window. Integration of business events on the wall and ease of tracking initiatives has helped to improve business agility and resource fulfilment.
— InformationWeek News Network

Essar group deploys SAP HANA on Cisco UCS

Essar Group recently deployed Cisco UCS to support the implementation of SAP HANA (High Performance Analytic Appliance). Optimized on the Cisco UCS platform, HANA, along with SAP BusinessObjects BI solutions, brings data within the reach of decision makers in seconds, helps enable innovative new applications, and combines high-volume transactions with analytics. "We expect in-memory computing to be a game-changer and were looking for a server platform designed to achieve high availability and performance. The Cisco Unified Computing System has met our expectations and we believe that it serves as an ideal server platform for enterprise-critical applications like SAP HANA. With Cisco UCS, we are confident of achieving our ROI from our investment in SAP software," said N Jayantha Prabhu, CTO, Essar Group. With Cisco UCS as the preferred infrastructure choice, Essar plans to roll out SAP HANA across the organization in phases over the next 12 to 18 months. SAP solutions, combined with the Cisco UCS, provide significant value through standards-based, high-performance unified fabric, improved memory capacity, reduced power consumption and lower total cost of ownership. By building on the benefits of the Cisco UCS platform, Essar will now enjoy quick access to data. "SAP HANA optimized on Cisco UCS enables faster decision making and is fast becoming the platform of choice for deployment by a number of large enterprises," said Avinash Purwar, Senior VP, Cisco India and SAARC.
— InformationWeek News Network
News Analysis
iCertis bets big on cloud; set to double manpower in India
Microsoft Azure-focused cloud player iCertis is bullish on India, and is set to invest USD 10 million over the next two years in India
By Srikanth RP

iCertis, a firm focused on providing services and products on the Microsoft Azure cloud and Office 365, is extremely bullish on India, and is set to double its manpower from 150 people to 300 people by December 2012. The firm also plans to invest USD 10 million over the next two years in R&D efforts.

"Our pipeline in India is extremely robust, and we are extremely positive on the immense market potential for cloud-based solutions. We see BFSI, telecom and manufacturing as the high growth verticals," says Samir Bodas, Co-founder and CEO of iCertis. His firm and a host of other players are competing aggressively to get a first-mover advantage in capturing a market, which is estimated to reach a massive USD 16 billion by 2020, according to research by NASSCOM and Deloitte.

India's hot potential as a cloud computing market was also one of the key reasons why iCertis decided to first launch its Contract Lifecycle Management product (ICLM) on Microsoft's Windows Azure platform in India. The product is the world's first contract lifecycle management product on the Windows Azure platform. Using the Azure platform, the firm has been trying to target niche white spaces in the cloud. For example, the contract lifecycle management space is extremely fragmented, and does not have a clear leader. This is a USD 3.5 billion market globally, growing 20 percent on a year-on-year basis. With the growing importance given to compliance, improving efficiency and managing contractual risk is a high priority area for enterprises. However, implementing a traditional contract management solution takes months to deploy, and is typically expensive.

Bodas says that there is a huge market opportunity in India, as well as globally, as most established ERP vendors offer solutions which are hard to use, deploy and customize. With most firms using manual or Excel-based processes, there is a huge potential to automate the contract management process. A cloud-based solution is perfect, as it takes away the twin problems of high cost and long deployment time. iCertis has already bagged two high-profile clients — Microsoft and ICICI Bank. Being a corporation which manages and executes hundreds of thousands of contracts annually, Microsoft needed a solution that was scalable, easy to use and readily customizable to its unique processes. "We have deployed the product globally, including in India with great user adoption. This is one of the largest Azure-based implementations within Microsoft," says Michel Gahard, Associate General Counsel, Microsoft. Gahard says that if Microsoft had taken the traditional approach of implementing an onsite solution from a big ERP vendor, it would have taken close to a year to implement and deploy the product, while iCertis took just a month.

Similarly, India's second largest bank, ICICI Bank, deals with massive contract volumes, leading to huge contract administration costs. The Bank believes that the iCertis Contract Lifecycle Management product will enable significant cost reduction, as well as ensure compliance. In a nascent but fast growing market, cloud computing gives both established and relatively small players like iCertis a chance to move fast and gain a first-mover advantage. With the Indian cloud computing market on the threshold of exploding, iCertis has a sizeable business opportunity as it looks to target other niche spaces in the supply chain domain.

— Srikanth RP, srikanth.rp@ubm.com
News Analysis
EMC transforms to become USD 20 billion tech giant
Moves on from pure-play storage vendor to a provider of cloud and Big Data solutions; creates solutions for Indian service providers and government at its Center of Excellence in Bengaluru
By Brian Pereira

The story of Darwin's evolution was retold at the EMC offsite in the historical city of Mysore (May 3 - 4). Just as mankind and all species have evolved to survive, it is the same with business. The theme for its media offsite was "Transform: IT + Business + Yourself." In other words, businesses must transform to survive. EMC too has evolved to become one of the fastest growing companies (and the second most admired one) in the Fortune 500 list (EMC's current ranking is 152). From a USD 8.2 bn company in 2004, EMC has swelled into a USD 20 bn organization (Jan 2012) with 48,500 employees worldwide. Of course, it has gone the inorganic route to achieve this growth; EMC spent USD 14 bn acquiring 30 companies over the years — the bigger acquisitions being VMware, Iomega, Data Domain and RSA. It invested USD 10.5 bn in R&D in eight years. And in its 12-year tenure in India, the company has invested more than USD 500 million (with an incremental investment of USD 1.5 billion by 2014). Today, it wants to be a leading player in cloud and Big Data.

It is a less-known fact that EMC manufactured motherboards when it began its operations in 1979. In the following year it moved to storage, and was selling a single storage product (Symmetrix) in 1990. It later created the networked storage category. EMC continues to sell storage products and the Symmetrix line today. In fact, last year (January 2011) it launched 40 new storage products and technologies in a single month. Today, EMC has evolved to become a company that sells solutions for cloud computing and Big Data — the hottest areas in business computing today.

While EMC does not want to become a global services and consulting company (like IBM), it works closely with an ecosystem of service providers, and offers solutions through well-crafted partner programs. The latest example is its partnership with Tulip Telecom — EMC will provide its range of storage solutions to help Tulip launch cloud storage services at Tulip Data City (a data center) in Bengaluru. EMC also wants to be more relevant and offer vertical-specific solutions. For instance, it has been providing solutions to various institutions in the Indian Government (its latest being EMC Documentum xCP Saksham eGov Case Management). These solutions are conceptualized and developed at its Center of Excellence (CoE) in Bengaluru, one of its six global innovation hubs. EMC completed a decade in India in 2010 and in that year it integrated its entire portfolio from all acquisitions; it has more than 2,300 customers here and a presence across 135 locations. A big differentiator for EMC India is R&D and innovation at its CoE. Another differentiator is the tight collaboration between its sales and development teams. EMC India is the only global unit that has all its functions under one roof, in Bengaluru. The CoE is generating products such as Saksham eGov Case Management, which are relevant in the Indian context (made in India, for India). EMC technology also powers other projects such as Aadhar/UIDAI, Passport Seva, and Excise & Customs, among many others. Apart from all the innovation happening at its India CoE, it is also
helping customers in their journey to cloud computing. In the past one year, it hosted 270 customers at its executive briefing center at the CoE in Bengaluru. In Q1 this year it hosted 100 customers — this reflects the strong traction it is getting from customers and partners. EMC wants to be more relevant and help out in other areas, like technical skills. To address the shortage of skills in the industry, EMC initiated an academic program called EMC Academy in 2005. Rajesh Janey, President, EMC India and SAARC said, "Transformation is a definite part of growth. So (to survive) one has to evolve and change to address customer needs. That's why we have evolved to becoming a company that can help customers take advantage of cloud and Big Data. And we have a strong portfolio of products to help customers build for their business requirements."

According to the Digital Universe Study conducted by IDC and EMC in 2010, digital information in India will grow from 40,000 petabytes to 2.3 million petabytes over the next decade (2010 to 2020) — twice as fast as the worldwide rate. Enterprises will face an increasing challenge to store, protect, and manage the rapidly growing digital information and comply with backup requirements. This also escalates the cost of storage for organizations. EMC is responding to these business requirements with solutions such as Greenplum HD — a modular data computing appliance that includes a database, data integration accelerator, and Big Data analytics based on the Hadoop software framework.

— Brian Pereira, brian.pereira@ubm.com
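A back-of-the-envelope on those Digital Universe figures (our arithmetic, not IDC's): growing from 40,000 petabytes to 2.3 million petabytes in a decade implies

\[ \left(\frac{2{,}300{,}000}{40{,}000}\right)^{1/10} - 1 = 57.5^{0.1} - 1 \approx 0.50, \]

that is, roughly 50 percent compound growth per year, or a doubling of India's digital information about every 20 months.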
The Data Center Manual
Server: 26 to 29 | Storage: 30 to 33 | Power & Cooling: 34 to 41
Cover Story
Indian Data centers go 'active-active'
Organizations are increasingly relocating their IT infrastructure to professionally managed data centers because of the cost advantage and the assurance of maximum uptime. This has suddenly created demand for data centers in India. The country's leading data centers are preparing to meet this demand with huge investments in state-of-the-art infrastructure
By Brian Pereira
If you've been tracking data center trends in India, you'd know that there has been a lot of 'activity' in this space in the past 18 months. Companies in India have been building new data centers and pouring in truckloads of money for land, compute and building infrastructure.

While researching this story, the editorial team at InformationWeek visited three popular data centers in India, and spoke to data center heads and solutions providers. We saw state-of-the-art infrastructure and came within spitting distance of the latest technology. We traversed high-security areas, touched the racks, and sniffed the carefully conditioned air in the hot and cold aisles. There were two main observations made during our visits to these data centers.

Firstly, the people who plan and design data centers are obsessed with redundancy (at all levels). Can't blame them though, for if their data center is down, their customers' business comes to a grinding halt. It's unthinkable to have a situation where the ATM network of a top private sector bank is down, or a vast cell network goes silent, because a rack of servers in a data center went kaput. And hence we have n + 1 and n + n data center redundancy (see box story 'Obsessed with Redundancy').

Secondly, designing a data center comes close to building a submarine or a spaceship. All are mission-critical projects. Engineers and architects need to consider every little detail
while building a data center. Overlook something (small) and that could spell disaster. For instance, one data center in Mumbai was affected during the city’s mega floods one monsoon, simply because they overlooked a little detail. In the absence of electricity, you need a portable diesel generator (genset) to activate pumps at the fuel station; but if there’s no diesel for the portable genset, you can’t draw diesel from the sub-terrain fuel reservoirs. After that incident, the data center company went out and bought hand pumps (they also keep 48 hours supply of reserve fuel).
Investments
Let's talk about the investments in the data center space, and the demand. The latest example is Tulip Data City (TDC) in Bengaluru, commissioned in February 2012. TDC is considered Asia's largest data center, with an area of 0.9 million square feet. Tulip is investing ₹900 crore in TDC over a three-year period. Tulip earlier built five data centers across the country, and plans to update these with the latest technology. And that's an additional investment. Says a confident Lt. Col. H S Bedi, Chairman and MD of Tulip Telecom, "I do believe that the return on this investment should be good. We should have revenues of ₹1,000 crore per year and EBITDA of 40 percent per year — but this will happen after four years." (Read our interview with Col. Bedi in this issue.)

Another example is Reliance Communications. It has been hosting customers like HDFC Bank for the past 11 years. With prospective international customers and local companies knocking at its doors, it is fast utilizing its data center capacity. It has four data centers (IDCs) at the Dhirubhai Ambani Knowledge City in Mumbai, and will commission the fifth IDC in June 2012. Reliance Communications has already built nine data centers across four cities. Reliance did not give details about investments.

And then there's Netmagic Solutions, now partly owned by NTT Com (74 percent stake). Netmagic has seven data centers, including two in Mumbai. Since its data centers are filling up, it is building a third facility in Mumbai
to meet demand. When we asked a spokesperson at Netmagic about investments, he could only say that it is between ₹20,000 - ₹25,000 per square foot. Its Chennai data center, which has an area of 30,000 square feet, runs up a bill of ₹75 crore for one data center alone (and they have seven). So, building a data center requires massive investment. Tata Communications (TCL) estimates this business requires investments of ₹1,500 crore over five years. That's why TCL plans to hive off its data center business into a separate subsidiary. There are others who are building new data centers or upgrading existing facilities. For instance, Trimax-ITI will roll out its new data center in Mumbai this June. It is also planning to expand its facility (Phase 2) in Bengaluru this year.
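Those Netmagic figures are internally consistent (our arithmetic, not Netmagic's): at the top of the quoted range,

\[ 30{,}000 \ \text{sq ft} \times \text{₹}25{,}000 \ \text{per sq ft} = \text{₹}75 \ \text{crore}, \]

which is exactly the bill cited for the Chennai facility.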
Demand drivers
The demand for data center services in India has suddenly picked up. But why? The short answer is, businesses want to focus on core competency and their respective markets, and leave the IT stuff to someone else. The huge costs associated with real estate, IT infrastructure, and the manpower to manage it have been a worrisome overhead that eats into a company's profits. And this is a recurring investment due to technology obsolescence. When a business grows, it has to make additional investments in IT infrastructure. Add to that the huge cost of maintenance and the skills to manage the IT infrastructure (salaries for full-time staff are another huge overhead). It has been proven that hosting a company's IT infrastructure in a professionally managed data center (where investment-heavy resources like power are shared), and in a centralized rather than distributed manner, works out to be more cost-effective in the long run. Because of the scale of the business and the shared services model, data center companies are in a position to make huge investments in power and cooling infrastructure. Chiller units, monstrous generators, large banks of batteries and UPS units, Power Distribution Units (PDUs) and specialized computer room air conditioning units
(CRAC) require investments running into several crores. This is economically unviable for most businesses. Also, it is not practical to run a data center in a standard office building. The other driver is the demand for managed hosting services. This is attracting customers because of huge cost savings. The data center provider will lease an entire server to one customer, who has full control over his leased server. Administration (monitoring, updates, application management, etc.) can be offered as an add-on service. This works out much cheaper than deploying a server in-house, investing in power and cooling, and having a full-time in-house administrator — or an IT team. Another option is colocation (colo or coloc). A customer can place his own server in a data center and use shared and redundant resources like power and HVAC. Hence, the customer does not have to make upfront investments for such resources. Data centers also offer locked cages for customer servers, monitored by IP cameras at the customer's end. Customers can also visit the site and do server administration from a cubicle. According to Gartner, the key growth driver for the data center market is the ongoing investment in large captive data centers coupled with the
capacity growth witnessed within the data center provider space. “Indian organizations are making a big shift from a distributed IT setup to a more manageable and efficient centralized model, leading to consolidation of branch and remote IT resources into fewer, but larger data centers,” says Naresh Singh, Principal Research Analyst, Gartner. “Data center site consolidations are happening and relocations are happening, especially for the in-house data center owned by organizations.”
New Services
Data center providers are mainly offering infrastructure services at present. But the differentiator is specialized services (over the cloud) such as security, digital video surveillance and storage. To offer such services, data centers are increasingly partnering with vendors and solution providers. A very recent example is Tulip’s partnership with EMC for on-demand cloud storage and backup services. Last year, Tulip partnered with MindTree and Iomega (an EMC company) to offer digital video surveillance solution on the cloud. “We see a bit more managed services being offered in data centers. But we think that over time all the data centers will offer cloud services. While this is still some time away in India, (data
Obsessed with Redundancy To ensure maximum uptime in a data center, all components and at all levels must have good resilience. Meaning, if one component or system fails, there should be a switchover to a backup system in the least possible time, and with minimum or zero disruption to business. To ensure this high state of availability, data centers factor in n+1 redundancy where the ‘+1’ denotes a backup component. The ‘+1’ component is kept on standby and does not participate within the system during normal operation, hence this is also called ‘active/passive’. There is another kind of redundancy called 1+1 or ‘active-active’. Here the redundant component is kept on ‘hot standby’ (powered on). An example is dual power supplies in a server or mirrored hard drives in a RAID configuration. And then there is n+n redundancy for very critical and dependent systems like UPS and batteries. For example, there are multiple clusters of UPS units and associated banks of batteries. If one cluster were to fail, any of the other clusters will take over. Usually, two clusters are in operation, with two kept on standby. So each cluster is a backup for the other. Naturally, n+n offers the maximum availability and uptime, which is realistically 99.995 percent. Source: Wikipedia
Rajesh Tapadia, CTO & Business Head - Data Center & Cloud Services, Trimax Data Center Services says, "I see a lot of interest for cloud services in the government sector. We are preparing to offer these services in our data centers. We also believe the SME sector is underserved today, and we are building services for companies in tier-2 and tier-3 cities where there are many SMEs." ITI has a partnership with Trimax IT Infrastructure & Services Limited (Trimax). Last October, Trimax partnered with NEC India to offer SMEs a range of SaaS services using an integrated application aggregation platform. NEC will be leveraging Trimax's data center capabilities to provide business applications such as CRM, ERP, human resource management, inventory, and security to SME customers.
Key Trends
A key trend observed is data center-in-data center (DiD). A data center will host a major IT service provider like HP, IBM or Dell, and the service provider will in turn host infrastructure for its own customers. This works well for the service provider, as it can focus on its core expertise (applications and services) without making expensive upfront investments in power and cooling. For instance, Tulip is hosting a major IT vendor, and that vendor is hosting a telecom company. Tulip does not seem to mind, even though its core business is telecom services. Tulip is currently hosting HP, IBM, and NTT among other customers. "At Tulip Data City, we host other telcos and systems integrators, so we are multi-tenant. HP and IBM can bring in their own customers. We are service provider agnostic. I welcome other telcos," offers Col. Bedi. Another example is Netmagic's strategic partnership with Spectranet, an opportunity worth ₹ 200 crore. Spectranet will set up a multi-location commercial DiD across Netmagic's data centers in Mumbai, Noida,
Bangalore and Chennai. Spectranet will independently deliver service and support to its customers hosted at the DiDs — with Netmagic providing the core data center infrastructure support and services. This is clearly a win-win deal, as Netmagic will benefit from Spectranet's vast fibre and IP network and will be able to extend these services to its customers. Netmagic will also have the opportunity to host Spectranet's customers in its data centers. We also see data center designers going in for modular architectures, because these make it easier to plan proactively for future expansion and to deploy additional data centers. For instance, Tulip has four towers, with modular data centers on each floor in all four towers. Vendors like Dell and data center companies like CtrlS have also been offering something called data center-in-a-box. Essentially, this solution integrates storage, computing and networking. They say this integrated solution is not only easy to deploy, maintain and upgrade but is also very cost-effective and delivers exceptional value. This reminds us of unified computing solutions and integrated stacks such as Cisco UCS and the Oracle-Sun boxes.
Going green
While LEED certification and green data centers were only spoken about a few years ago, they are now the norm for new data centers. LEED stands for Leadership in Energy and Environmental Design, a global standard for green building design. Mahesh Trivedi, Vice President - DC Infrastructure, Netmagic Solutions, showed us around his Vikhroli data center in Mumbai. He told InformationWeek
about all the initiatives taken to make this data center green. Since cooling is costly, Netmagic made a lot of effort to trap the cold air in the aisles between server racks, and to properly direct the cold air flows over the warm server blades in the racks. "When we designed the Vikhroli data center we decided to make it green. So we used chemical-free paints and good insulation to increase the thermal efficiency of the building; we also used refrigerants that have less impact on the ozone layer. By the time this data center was complete, the technology had evolved a lot. We also gained a lot of knowledge and experience in making green data centers. So for the next data center in Chennai, we decided to go in for LEED certification."
Innovation
There are challenges that are typical to India, and data centers here have innovated to work around them. The biggest is continuous and stable power supply. Power is scarce and unstable in Noida, so Netmagic had to deploy express feeders at its Noida data center. An express feeder is a direct connection from the power company's substation to the data center, without any tee-off or branching, so other loads connected in parallel do not affect the supply. Another workaround was increasing the voltage of the electrical feed. "If you up the voltage you can get more stable power. So at Noida, we are at 33,000 volts, and it is a robust and reliable grid. In Mumbai, our Reliance power feed is at 11,000 volts and the Tata feed is at 22,000 volts," says Trivedi. (Trivedi shares more about innovations at Netmagic data centers in a separate story in this issue.)
Tulip has done likewise for its data center in Bengaluru. Once fully operational, the data center will consume up to 100 MW of power. To make power distribution on that scale possible, Tulip opted for a unique design after doing a lot of research: the entire power distribution system has been designed such that high tension is delivered directly to the floor plate, then to a low-tension (LT) system, and eventually distributed across the data center. Elaborating on this design, Harjyot Soni, CTO of Tulip Telecom says, "Since we are putting transformers on the floor plate, we have to take care of the Electro Magnetic Interference (EMI) and Electro Magnetic Compatibility (EMC) radiations. This is so because we usually have network cables running, and if you have such a large power source nearby it can affect the signals being carried by the network cable. So you need protection systems so that it doesn't impact the server house. And since we are carrying high tension to the floor plate, the design of the panel and the distribution is very different from any other standard data center." (Read more on this in Harjyot Soni's article later in this issue.)
Certification & Standards
While there is a string of ISO standards and certifications for data centers, the most talked about are ANSI/TIA-942 and the Tier certifications, the latter devised by the Uptime Institute. Tier IV is the highest level of this standard; most data centers in India are in the Tier III category, while some claim to be Tier III 'plus'. Only one data center claims to be Tier IV certified: the Hyderabad-based CtrlS. Achieving Tier IV requires massive investment, as one of the requirements is that all HVAC systems be independently dual-powered. Tier III 'plus' refers to meeting the Tier IV requirements for availability and redundancy (except power). Reliance Communications' Gopinath opines that it is difficult to justify the cost of Tier IV certified infrastructure and share it with customers. He says customers ask for it without fully understanding what it means.
"We are a Tier III-plus data center. We would not be able to justify the cost to our customers for getting to Tier IV. This does not add value to the customer and just adds cost to us. So we tell the customer if they need anything in addition to Tier III requirements, we can provide it to them. But we have 100 percent redundancy right from the grid level to the rack level — and this is a requirement for Tier IV. So for power we are Tier IV."

Anil Nama, CIO, CtrlS Data Center, says there were a lot of standards and much confusion in the early days, going back to 1998. Each vendor came out with its own set of standards, there were conflicts — and each standard had its own interpretation. Things began to change when the TIA standard came along in 2005, although this was mainly telecom related. In 2006 the ASHRAE 9.9 standard emerged, and then IEEE 493 in 2007. The combination of these gave data centers a standard for architecture and also set the engineering specifications to roll it out. "When building a data center, you need standards for the architecture, electrical systems, mechanical and for telecom. When we started building our data center, we found it impossible to use an existing building and fully adhere to these standards. For instance, as per the Tier IV standard, the minimum floor load required is 1,400 kg per square meter (the amount of weight you can place on the floor, in a square meter). In a commercial building the floor load is 50 kg per square meter. We could strengthen it to 600 or 700 kg at most. But that did not meet the requirement of the standard. So we had to build our data center from the ground up," said Nama.
With its keenness for adhering to standards, CtrlS made a series of changes to its data center design, and that helped it achieve Tier IV certification.
Conclusion
The demand for data centers is being driven by fast-growing large enterprises that want to avail of the cost benefits. SMEs will need the expertise and application services of data centers, notably SaaS-type services. And as more organizations adopt cloud computing, they will move their IT infrastructure out of commercial buildings and into professionally managed data centers. Within India, the demand for data centers will come mainly from three cities: Mumbai, Bangalore and Delhi. But Indian data centers are also moving overseas; Tata Communications, for instance, has an exchange data center in Singapore. With their expertise in data center services and state-of-the-art facilities, Indian data centers are an attractive proposition for global companies. They will either host infrastructure for international clients in India or do so overseas (through partnerships with local service providers). Reliance Communications, for instance, has more than 700 carrier contracts worldwide, through which it provides services to global customers in 230 countries and territories across six continents. All this suggests that the future is indeed bright for the Indian data center industry, and it paves the way for cloud computing and infrastructure management services.
Brian Pereira, brian.pereira@ubm.com
Data Center Facts

A glance at the data centers of the two most popular websites. Data centers consume a tremendous amount of electricity; some consume the equivalent of nearly 1.8 million Indian homes. Companies like Google and Facebook are beginning to lead the sector down a clean energy pathway through innovations in energy efficiency and by prioritizing renewable energy access. (Source: How Clean is Your Cloud?, Greenpeace International)

Google
• Google is the most popular search engine, receiving more than 1 billion searches a day plus numerous other downloads and queries
• Google uses 260 million watts continuously across the globe, equivalent to the power used by around 200,000 homes and roughly a quarter of the output of a standard nuclear power plant
• The search giant claims that its data centers use 50% less energy than the typical data center and are among the most efficient in the world
• 25% of the search giant's energy was supplied by renewable fuels, such as from wind farms, a share it soon plans to take to 30%
• Google has reduced non-computing or overhead energy (like cooling and power conversion) to 14%

Facebook
• Facebook had 901 million monthly active users at the end of March 2012; more than 300 million photos were uploaded to Facebook per day in the three months ended March 31, 2012
• Facebook operates 9 third-party data centers in the U.S.
• Facebook claims that its first company-built data center in Prineville, Oregon has a PUE (power usage effectiveness) of 1.07 at full load, compared with an average of 1.5 for its existing leased facilities
• With the Oregon data center, the social networking site has reduced energy usage (38% less energy is required to do the same work as in its existing leased facilities), lowered CAPEX by 45%, and reduced OPEX
Why you should push your servers to the limit

The key server challenge in the data center today is the optimal utilization and management of servers with lower TCO. To address this challenge, technologies such as server virtualization, consolidation and converged infrastructure are gaining importance. By Vinita Gupta
Optimal server utilization is the key to making your data center more efficient. With the increased need to provide on-demand services, today's data centers face the challenge of consolidating applications onto fewer resources to enable better utilization of the organization's existing resources. Servers are also expected to operate in multi-vendor technology environments, and to deliver the technical innovations that power an organization's infrastructure for cloud or traditional deployments. In this story, we discuss some of the data center server expectations and challenges faced by the CIO, and ways to overcome them.
Server expectations and challenges
The volumes of data that most organizations amass will grow exponentially with each passing year. Thus, the threat of running out of storage space, high power utilization, thermal energy consumption, and rising energy costs will emerge as the main concerns. From a CIO's perspective and from a compute standpoint, there are two
requirements — maximizing compute power in servers while keeping them cool, and minimizing the cost of power usage. CIOs will look at servers that can get more work done in a shorter amount of time, using less power. According to Mitesh Agarwal, CTO & Director – Systems, Oracle India, a few years back CIOs were not looking at the cost of power and cooling consumed by servers, but now they are calculating the Total Cost of Ownership (TCO). The change can be attributed to the fact that CIOs buy for peak, yet average utilization remains very low (about 15-20 percent). For example, in a bank the number of transactions is higher in the morning than in the afternoon, but CIOs need infrastructure that can manage transactions during peak periods, and this leads to high cost (CAPEX). To offset this and reduce TCO, CIOs are looking at servers that consume less space, cooling and power. Today, along with performance and power consumption, innovative designs are needed to balance processing power with memory and I/O (input/output) bandwidths, latencies and capacities. Sitaram Venkat, Director - Enterprise Solutions Business, Dell India asserts that expectations from data center servers are changing: earlier it was about how servers were deployed and managed, now it is also about how they are procured. CIOs are looking at higher performance and ease of manageability.
High server performance is usually equated with high-performance processors, but it is also important to have the right I/O options and memory. "As multi-core, multi-thread processors take hold in the data center, the biggest challenges today's workloads and users' demands place on servers are keeping the CPUs supplied with data and presenting the results to users and applications within acceptable time limits," says Santhosh D'Souza, Vice President – Technology, Data Center and Cloud Computing, Cisco India and SAARC. To overcome these challenges, vendors are innovating. For instance, the patented Extended Memory technology and Unified Fabric innovation in Cisco UCS balance the processor, memory and I/O resources provided to applications. The other challenge is introducing new technologies such as cloud and virtualization into the existing data center. Server virtualization, consolidation and converged infrastructure can help address these server challenges and expectations.
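The peak-versus-average gap Agarwal describes is also why consolidation pays off so quickly. A rough sizing sketch, with utilization and headroom figures that are purely illustrative assumptions of ours:

import math

def hosts_needed(n_servers: int, avg_util: float, target_util: float) -> int:
    """Pack the aggregate demand of underutilized servers onto fewer
    hosts driven at a higher target utilization."""
    total_demand = n_servers * avg_util  # in units of one host's capacity
    return math.ceil(total_demand / target_util)

# 100 legacy servers idling at ~18 percent utilization fit onto 30
# virtualized hosts run at a still-conservative 60 percent target.
print(hosts_needed(100, avg_util=0.18, target_util=0.60))  # 30

Real capacity planning must also respect memory, I/O and peak-hour coincidence, but the core arithmetic is this simple.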
Adopting virtualization and consolidation

Server virtualization and consolidation are no longer buzzwords. According to Forrester's Forrsights Hardware Survey, Q3 2011, consolidating IT infrastructure via server consolidation, data center consolidation, or server virtualization
initiatives is likely to be among an organization's top hardware/IT infrastructure priorities over the next 12 months. With server virtualization and consolidation in data centers, organizations can simplify the management of servers and applications, and move from fixed, silo-based environments to shared, scalable platforms. Furthermore, server virtualization and consolidation reduce costs and improve operations by scaling up the utilization levels of an organization's existing hardware. While consolidation plays an important role in reducing the complexity of resources spread across locations, virtualization helps enterprises consolidate applications onto fewer systems to increase resource utilization, improve security and lower licensing costs. Today, virtualization and consolidation are business initiatives, not IT initiatives. For instance, the ARPU (Average Revenue Per User) of a telco is very low, so the telecom industry looks for an IT solution that can reduce the cost per transaction; the answer in this case is consolidation. Also, CIOs are now consolidating data centers so that there is just one primary and one disaster recovery data center, which in turn reduces the number of vendors involved and ensures ease of management. Eyeing the immense opportunity, server vendors have started providing server virtualization solutions by
partnering with the virtualization vendors; Dell, for instance, has partnered with Microsoft and VMware. However, virtualization does not come without its own challenges, ranging from increased management complexity and unpredictable performance to complicated pathways to the cloud. In fact, the complexity involved in virtualization has emerged as the number one concern for a majority of IT decision-makers. Shedding light on the challenges of server virtualization, Suresh Menon, Head of Solutions, Fujitsu India, says, "The benefits of server consolidation and virtualization have been the same for many years, namely better utilization of resources, ease of deployment and overall lower costs of management and operations. However, the challenge is moving to a private cloud environment and also the need to work with non-virtualized servers in the environment." To overcome these challenges and make the most of the benefits of server consolidation, CIOs are looking at integrated compute, storage, power and networking solutions. Most of them are
also considering an end-to-end solution from a single vendor, for example, Dell’s data-center-in-a-box.
Converged infrastructure can help
Over the past two years, there has been considerable churn in converged infrastructure technology, and Forrester expects this pace to continue in 2012. Vendors will focus on improving the integration of storage and network virtualization and increasing the integration of systems-level virtualization capabilities with higher-level management software. The major system vendors with blade server offerings, such as Cisco Systems, HP and IBM, will lead the charge. Above and beyond the underlying hardware and system management tools, vendors will compete to extend the converged infrastructure concept to include composite application services, presented as abstractions including servers, network, storage, and security policies. HP, with its CloudSystem Matrix, is a prime example of this trend.
• Servers account for 40% of total data center energy consumption, but 30% of this is wasted by "dead" servers — servers consuming energy but performing no useful work (Source: Forrester Research)
• Of 1,240 hardware IT decision-makers surveyed globally, 52% consider consolidating IT infrastructure via server consolidation, data center consolidation, or server virtualization among their firm's top hardware/IT infrastructure priorities over the next 12 months, and 29% consider it a critical priority (Source: Forrester Research)
• 48% of the surveyed IT decision-makers said automating the management of virtualized servers to gain flexibility and resiliency is a high priority, while 15% said it is a critical priority (Source: Forrester Research)
• Virtualization can reduce server energy consumption by as much as 82% and floor space by as much as 86% (Source: Gartner)
Vikram K, Director, ISS, HP India, informs, "Infrastructure convergence is widely accepted as the optimal approach for overcoming IT and application sprawl to transform IT into a true business enabler. It does so by establishing a shared services data center environment to simplify and accelerate IT, significantly lower operational costs, shift resources to innovation, and drive business agility." Also, companies are coming together and forming alliances to offer converged infrastructure. For example, the Virtual Computing Environment (VCE) company, an alliance between Cisco and EMC with investments from VMware and Intel, aims to simplify and accelerate virtualization and help customers transition to private cloud infrastructures. The alliance's Vblock platform, designed to enable rapid virtualization deployment, also offers flexible processing, network performance and storage capacities, and supports incremental capabilities such as enhanced security and business continuity. The complexity of converged infrastructure, along with its inherent lack of standardization above the basic VM layer, tends to lead to vendor lock-in. This isn't necessarily bad as long as the
tradeoffs for lack of vendor choice are mitigated by tangible benefits from the solution.
Silicon Innovations
Apart from the server vendors, innovations and improvements are happening in the underlying heartbeat of server technology: semiconductors. An example is the introduction of next-generation servers based on Intel's Sandy Bridge architecture. "At Intel, we recognize that just increasing compute performance alone does not address all the challenges that prevent IT from achieving the scale that they need. That's why we not only focused on improving performance, but on improving all aspects. So that means supporting more platforms in the data center such as storage and
Server predictions for 2012
• The next major cycle of x86 server refresh will come early in the year. Semiconductor improvements will continue unabated in 2012; an example is next-generation servers based on Intel's Sandy Bridge architecture.
• Converged infrastructure offerings will continue to evolve and differentiate. Vendors will focus on improving the integration of storage and network virtualization and increasing the integration of systems-level virtualization capabilities with higher-level management software. The major system vendors with blade server offerings, such as Cisco Systems, HP and IBM, will lead the charge.
• ARM-based servers will appear from major vendors. In 2011, the industry got its first look at a new twist in non-x86 servers, ones centered on ARM CPU technology. HP announced Project Moonshot, with a new server architecture based on Calxeda's ARM-based SoC, as the first instantiation from a major vendor.
(Source: Forrester Research)
networking, addressing the security risks that IT faces every day, helping to reduce operating costs and energy use, and improving I/O to get data where it needs to be, when it needs to be there," says Srinivas Tadigadapa, Director – Enterprise Solutions, Intel South Asia. Likewise, AMD has invested significantly in its R&D centers in Bengaluru and Hyderabad, and the company's talent pool of chip designers and developers has been actively working with the global teams on some key projects. Its India Development Center played a crucial role in the design and development of the APUs (Brazos platform). "Our priorities this year are to expand our AMD Fusion family of processors, deliver our newest generation of server platforms and continue leadership in the graphics space," says Vinay Sinha, Director - Commercial Sales, AMD India & South Asia. Today's enterprise data centers are designed to deploy more powerful applications and manage increasing amounts of data, while still being expected to scale flexibly, control costs, and streamline server deployment. Additionally, to reduce carbon footprint, effective server management plays an important role in keeping fast-escalating power and cooling costs under control. With virtualization, consolidation and converged infrastructure solutions, organizations can significantly improve utilization and simplify management, achieving the flexibility and agility to address growing demands.
Vinita Gupta, vinita.gupta@ubm.com
CIO Voice
Essel Propack's server virtualization journey

Zoeb Adenwala, Global CIO, Essel Propack takes us through the company's server virtualization journey and pens down five pitfalls that need to be addressed
Consolidation is the name of the game in an economy fueled by M&As; companies today are relying on inorganic growth to gain an edge over their competition. As organizations spread across the globe, this command-and-conquer strategy leaves their CIOs scurrying to consolidate IT in the wake of a business merger. The case was no different with Essel Propack Ltd (EPL), one of the global leaders in the packaging and laminated tubes segment, which operates out of 21 locations spread across 13 countries. Having grown by leaps and bounds primarily through the inorganic route, it faced a major challenge in consolidating its IT infrastructure. Each organization that was acquired possessed its own IT infrastructure, consisting of multiple servers and varied architecture, with some geographies hosting more than one data center. The two challenges this dispersed infrastructure posed were standardization of IT delivery and the high cost of maintaining individual infrastructure at each location.
The consolidation process
The first step taken to resolve this was consolidation of the various data centers. We consolidated various ERP and mail servers into one centralized server farm, hosted by an Indian ISP. We also virtualized servers and storage to reduce the number of physical servers and storage boxes. For active directory and e-mail services, Essel Propack deployed blade servers with virtualization technology running on
5 pitfalls to address
• Complex configurations of data, applications and servers can be confusing for the average user to contend with
• Advanced skills are required to manage virtual servers
• Since creating VMs is easy, there is always a possibility of server sprawl, i.e. a large number of unnecessary servers; there is thus a need for strict control over server creation
• Debugging issues is also complex, and you will need different skill sets to look into each specialized area, e.g. virtualization software, operating system, network or storage boxes
• Security is still an issue in the virtual world — if any server is compromised, other servers can be affected
Windows 2008 Enterprise Server, and hosted the servers with an ISP. The rack space savings that accompany the deployment of blade servers resulted in a considerable reduction of costs. Secondly, the management of blade servers is easier in comparison to traditional servers. The process of consolidation was started by monitoring the existing IT usage in each country. This information was used to design a centralized data center, followed by server and storage virtualization. Next, a robust WAN design using MPLS was implemented to ensure reliable and secure connectivity to the central data center from all 21 locations. Once the design was finalized, the company created a centralized Active Directory structure and added locations in a phased manner to the central AD forest. The next step was to consolidate all e-mail users onto the centralized MS Exchange solution and move to SAP as the common ERP platform. These technologies helped us reduce costs on hardware and software licenses, reduce maintenance costs at every location, and leverage the cost advantage of hosting the data center in a low-cost country like India. In addition, the ancillary benefits of this exercise include standardization of IT practices across all the operating locations worldwide and ease of management of this global infrastructure.
Storing more in less

How storage vendors are innovating to help data center managers cope with Big Data and the information explosion. By Ayushman Baruah
As data continues to grow exponentially, data center managers, globally and in India, are facing huge challenges in efficiently managing their data and storage. According to a study by IDC in association with EMC, digital information in India will grow from 40,000 petabytes to 2.3 million petabytes over the next decade (2010 to 2020) — twice as fast as the worldwide rate. As a result, enterprises of all sizes — small, medium and large — will face an increasing challenge to store, protect
and manage the rapidly growing digital information, and to comply with backup requirements. This explosion of data, referred to as Big Data in contemporary lingo, has created both the need and the appetite for innovation in the storage arena. And so you have technologies such as multi-tiered storage, fluid data and storage virtualization. The other need for efficient data management stems from the very nature of data, which tends to lose its importance over time. There is a constant need to archive the old
and keep the latest or most important data readily available. It is believed that data access follows the 80/20 rule: the most recent 20 percent of the data attracts 80 percent of the access. "One of the most serious concerns of data center administrators is how they can take care of the growing size of data, the rack space, floor space, and power & cooling requirements. The storage component, which has the maximum mechanical parts, requires maximum power and cooling," says Rajesh Awasthi, Director - Cloud and
Telco, NetApp India. According to most authorities InformationWeek spoke to, the average utilization rate of storage in most data centers is only about 40-45 percent, indicating that much of the storage capacity in data centers is underutilized. Awasthi of NetApp defines storage efficiency as the ability to store the maximum amount of data in the smallest possible space at the lowest possible cost. "It is nearly impossible to predict how long any data file will be retained on disk. All data begins its life on primary storage. Whether it's a database entry, user file, software source code file, or an e-mail attachment, this data consumes physical space on a disk drive somewhere within your primary storage environment. The creation of data on primary storage begins a chain of events that lead to storage inefficiencies," he says.
Vendor offerings
Traditionally, there has been a gap between the management of storage and the management of data. Data management, which includes the management of files, file systems and structured data, has often been a separate discipline from the management of the underlying storage infrastructure. Data administrators have historically concerned themselves with the redundancy, performance, persistence and availability of their data, while storage administrators have focused on delivering physical infrastructure that satisfies the data's requirements. Typically, the storage is configured first, and data management then takes place within the constraints of the configured storage. If the storage requirements of the data change, the data must be migrated to different storage or the underlying storage must be reconfigured. Either process is disruptive and requires multiple domain-specific administrators to work closely together. Database giant Oracle attempts to address such gaps with solutions like Oracle Enterprise Manager. "Offering a single console to manage
multiple server architectures and myriad operating systems, Oracle Enterprise Manager's capabilities include asset discovery, provisioning of firmware and operating systems, automated patch management, patch and configuration management, virtualization management, and comprehensive compliance reporting. An open, extensible system that can be integrated with existing data center management tools, Oracle Enterprise Manager manages across the entire infrastructure stack — from firmware, operating systems and virtual machines, to servers, storage, and network fabrics," reports an Oracle whitepaper. According to the whitepaper, Oracle Enterprise Manager allows data center staff to observe and take action against energy misuse, and supports viewing energy consumption in terms of real dollars. The performance of spinning-disk-based storage devices, which showed no significant improvement for many years, was another long-unaddressed challenge. That situation has changed dramatically thanks to Flash-based storage devices. For instance,
4 milliseconds can be considered a representative response time for small spinning-disk reads; Flash-based devices can deliver the same read in 0.4 milliseconds, an order-of-magnitude improvement in response time. Oracle's new Database Smart Flash Cache feature leverages this I/O breakthrough offered by Flash-based storage devices. "Flash cache is a technology available inside the servers and it's like bringing storage inside the servers," says Mitesh Agarwal, CTO & Director – Systems, Oracle India. EMC, a leading provider of storage solutions, has recognized that with the increase in the scale and scope of storage, administrators are compelled to reduce their labour requirements while maintaining complete control over complex, rapidly growing infrastructures. "Dynamic workloads make it difficult to provide predictable, consistent performance levels. What these administrators need is an underlying infrastructure that provides full visibility, is self-managing, and is able to quickly adapt to change in tiered environments, or automated storage tiering," says Deepak Varma, Regional Presales Head, South, EMC India.
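The order-of-magnitude claim is easy to sanity-check: at a queue depth of one, a device that answers a small read in t milliseconds can serve roughly 1000/t reads per second. A minimal sketch using the story's illustrative latencies, not benchmark data:

def reads_per_second(latency_ms: float) -> float:
    """Service rate with one outstanding request at a time."""
    return 1000.0 / latency_ms

print(reads_per_second(4.0))  # spinning disk: ~250 small reads/s
print(reads_per_second(0.4))  # flash: ~2,500 reads/s, a 10x improvement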
Tiered storage management

Key deterrents to implementing tiers of storage:
• High user downtime resulting from both tedious data migration procedures and long restore windows for archived data
• Significant administrative effort in restoring user access to migrated data
• Lack of solutions that can migrate data across heterogeneous storage platforms
• Lack of centralized management of distributed data to reduce administrative complexity
• Inability to organize data intelligently and present it to users logically

A data management solution needs to overcome these limitations to deliver a simple yet powerful way to implement tiers of storage across multiple storage systems, independent of their location or performance characteristics. A tiered storage solution for the data center should provide:
• A high level of automation
• Centralized management of heterogeneous storage
• Policy-based migration of data across storage tiers (see the sketch below)
• Non-disruptive data movement
• A business view of data independent of physical storage tiers
(Source: NetApp whitepaper)
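To make "policy-based migration" concrete, here is a minimal sketch of an age-based demotion scan. The share path, the 180-day cutoff and the reliance on last-access time are illustrative assumptions of ours, not any vendor's actual policy engine:

import os
import time

TIER_POLICY_DAYS = 180  # assumed policy: demote data idle for 6 months

def files_to_demote(root: str, days: int = TIER_POLICY_DAYS):
    """Yield paths whose last access is older than `days`."""
    cutoff = time.time() - days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # file vanished mid-scan; skip it

for path in files_to_demote("/primary/nas_share"):  # hypothetical share
    print("demote to SATA tier:", path)

A production engine would also move the data non-disruptively and leave a pointer behind; the scan above only identifies candidates.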
For organizations to take full advantage of these multi-tier environments, EMC created FAST (Fully Automated Storage Tiering), which automates the movement of data within a storage system, replacing a significant amount of manual storage administration. "This new kind of storage intelligence allows users to move content between various tiers of storage in a non-disruptive fashion, ensuring application availability at all times," says Varma. EMC recently introduced a new server Flash caching solution, VFCache. According to the company, together VFCache and EMC Flash-enabled storage systems dramatically improve application performance by leveraging intelligent software and PCIe Flash technology — testing resulted in up to 3X increased throughput while reducing latency by 60 percent. The company sees huge adoption of data de-duplication technology among Indian enterprises seeking to manage their information storage and backup more efficiently. Traditional backup solutions store data repeatedly and incrementally, expanding total managed storage by 5 to 10 times. "Data de-duplication helps address this and delivers an improved return on investment (ROI) by reducing storage infrastructure and management for both backup and archive. At the same time, it also addresses environmental issues by reducing the data center footprint for backup, along with attendant energy and power savings. Apart from immense cost savings, de-duplication further enhances the efficiency of a new-age data center," says Varma.
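Why do repeated backups dedupe so well? Because identical chunks are stored once, keyed by content hash, and later backups add only references. A toy content-addressed store that illustrates the idea (our sketch, not EMC's Avamar or Data Domain implementation):

import hashlib

class ChunkStore:
    def __init__(self):
        self.chunks = {}  # sha256 digest -> chunk bytes, stored once

    def put(self, data: bytes, chunk_size: int = 4096) -> list:
        """Store a stream as chunk references; return the recipe."""
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicate chunks cost nothing
            recipe.append(digest)
        return recipe

store = ChunkStore()
monday = b"A" * 40960            # Monday's full backup
tuesday = monday + b"B" * 4096   # Tuesday adds one block of changes
store.put(monday)
store.put(tuesday)
print(len(store.chunks))  # 2 unique chunks stored, not 21 chunks' worth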
Dell too has big commitments in the data center space, and according to P A Sathyaseelan, Executive Director - Enterprise Solutions, "Dell has moved from the edge of the data center to the core of the data center." Dell has been aggressive in its storage strategy, with a series of acquisitions: it acquired EqualLogic in 2007 and Compellent Technologies in 2011, both of which contribute significantly to the company's storage portfolio today. In 2005, Compellent introduced a technology innovation referred to as fluid data, which defied the rigid boundaries of conventional storage. "Part of Dell's fluid data architecture, the Dell Compellent Storage Center is an all-in-one storage array that always puts data in the right place at the right time for the right cost. With fluid data technology, you can optimize efficiency, agility and resiliency to slash storage costs up to 80 percent, scale on a single platform and secure data against downtime and disaster. Best of all, with fluid data, your storage dynamically adapts directly in line with your business needs," says Sathyaseelan.
Storage on demand
As data continues to increase, management of both data and storage is getting complicated and expensive, which is why many companies are outsourcing it to specialized players who can manage it in an OPEX-based model. For instance, in September 2009, Max Healthcare and Dell Services entered into a 10-year agreement under
Dell Compellent Storage Center

Built from the ground up to manage data differently, Dell Compellent Storage Center provides a fully virtualized storage platform that includes:
• Storage virtualization that abstracts and aggregates all resources across the entire array, providing a high-performance pool of shared storage
• Thin provisioning and automated tiered storage to deliver optimum disk utilization and intelligent data movement
• Space-efficient snapshots and thin replication for continuous data protection without wasted capacity
• Built-in automation and unified storage management to streamline storage provisioning, management, migration, monitoring and reporting
which Dell Services would manage all IT operations for Max Healthcare, including infrastructure management, data hosting, applications portfolio management, and the project management office. As per an IDC study, India will grow twice as fast in data creation as the rest of the world in this decade, though the staff managing this data will grow by only about 1 percent. This is a clear indication of the need to automate data centers. Apart from automation, co-hosting or co-locating data centers with external providers is a trend that is fast picking up. A case in point is EMC, which recently partnered with Tulip Telecom. As per the agreement, Tulip will provide end-to-end managed on-demand storage services and backup-as-a-service (BaaS) using EMC unified storage and backup and recovery technologies from the Tulip Data City in Bengaluru. Tulip's cloud-based storage and backup service offerings are powered by EMC VNX unified storage, EMC Avamar and EMC Data Domain technologies, enabling Tulip to offer enterprises managed services that address the growing complexities of storing and backing up data. With the increase in data volume and stringent government regulations, storage costs are mounting. Consequently, storage is increasingly becoming a challenge, forcing companies to rethink their existing approach to data storage. High-end storage comes at a high premium, and using top-shelf storage for all data assets is no longer cost-efficient. This is driving organizations to look at different approaches such as intelligence-based automated tiering, where management of the data in turn manages the storage. Some organizations are also looking at cloud-based storage services providing storage on demand.
Ayushman Baruah,
ayushman.baruah@ubm.com
Case Study
Storage consolidation is rewarding

The fast-diversifying Kalyani Group just consolidated its storage and moved to a common architecture — it can now offer new resources and utilities as a service, on demand, to users across the Group. This has improved efficiency and reduced costs and storage inventories
With the Kalyani Group fast diversifying, Yogesh Zope, CIO of the Group's flagship Bharat Forge, was looking at a centralized, dynamic shared service. Zope, who is also the CEO of IT Services at the Group's company Kalyani Infotech, feels it's important to ensure that IT isn't a bottleneck holding business back, and dynamic allocation of resources is high on his priority list. "Kalyani Group is now diversifying a lot, and last year the company decided that at least the technology part should be available as a shared service. We decided that the entire mail server and security architecture — firewalls, MPLS, point-to-point links, and so on — should be consolidated. We decided we will consolidate within the Group companies, all of which had their own infrastructure," informs Zope. Storage was fairly scattered: within Bharat Forge alone, there were three storage systems. According to Zope, one storage system was allocated for design, another for SAP, and a separate one for NAS. The company also had a separate storage system for its SQL and Oracle databases.
Consolidating
Zope and his team decided to move to a single architecture with a single MAS (managed application server), NAS and SAN. They also decided that, in the process, they should have a tiered architecture, with storage that supports it and at least some solution for file archiving. All important data would sit on SAS or Fibre Channel disks, and unused data would
move to SATA. Zope chose EMC's storage solution on parameters such as scalability, redundancy, disk sizes, RAID levels, power utilization and data compression. Zope started with human resources consolidation within the IT enterprise across the Group, then physical infrastructure consolidation of servers, storage, networks and so on. This was followed by process consolidation, from an IT services perspective. "Earlier we never had web mail; it was highly person-driven," Zope says. All that was moved to a consolidated environment. The obvious reason was that not all the Group companies could afford their own data center, and some didn't even need a full-fledged one.
Major, major benefit
Zope and his team took a three-pronged approach: improve efficiency, reduce risk due to IT, and reduce costs from IT. The overall benefits of the consolidation exercise were many. "From a storage perspective, obviously we had a lot many storage parts — at least 15-odd different storage systems. Now it is only two — one MAS and one SAN. This makes the system very simple to administer and manage, and amenable to thin provisioning; the storage utilization has improved drastically; and again, due
to file archival, the cost has come down significantly. Earlier, we didn't have that control. Archival without policies is practically impossible to deploy. Now in the MAS, more than 60 percent of the data has been archived. This is significant data; it is now on SATA," says Zope. Furthermore, the storage system gave Zope's team the basic building blocks for virtualization and consolidation of various other IT-related services. Also, with a reduction in the amount of equipment, power consumption fell. Cost reduction was another major benefit. "Earlier, I had distributed storage systems and the kind of asset-maintenance costs we were incurring were very high. With the new storage, I could negotiate a five-year contract and it was very easy to justify the investment," he asserts. Zope cites two examples of what this means to users. First, the entire process of manually managing Bharat Forge's old design files was eliminated by the tiered storage architecture. Second, with the NAS replicator, the process became seamless and human intervention was practically eliminated. "That was a major, major benefit," he says.

Ayushman Baruah,
ayushman.baruah@ubm.com
Mind your data center's PUE!

Increases in power tariffs and the rising operational costs of data centers have been forcing businesses to explore energy-efficient power management and cooling technologies. Let's take a look at the recent innovations happening in this space and at the kind of power management and cooling technologies being used by some of the most energy-efficient data centers in India. By Amrita Premrajan
A constant challenge businesses face today is the high energy cost of operating their power-guzzling data centers. According to Gartner, energy-related costs account for approximately 12 percent of overall data center expenditure and are the fastest-rising cost in the data center. Added to this, power tariffs in India have increased manifold in the past few years, as has the power density per rack. Both factors have caused data center operational expenses to skyrocket. The high cost of owning IT infrastructure is prompting businesses
to look closely at their data center infrastructure and find out where to cut unnecessary energy costs, which in turn is driving them to evaluate new power management and cooling technologies. In this story, we will trace recent innovations in power management and cooling technologies, and how these technologies are being used by some of the most energy-efficient data centers in India, which operate at impressive PUE (Power Usage Effectiveness) ratings.
Determining Energy Efficiency
With respect to the energy efficiency of a data center, and the kind of environmental impact it makes, the US Green Building Council's (USGBC) Leadership in Energy and Environmental Design (LEED) certification is a very relevant rating system. It provides independent, third-party verification that a building was designed and built using strategies aimed at achieving high performance in key areas of human and environmental health: sustainable site development, water savings, energy efficiency, materials selection and indoor environmental quality. The effective usage of power in a data center is measured using a popular metric called PUE. This
metric was proposed by The Green Grid (a non-profit trade organization of IT professionals that addresses power and cooling requirements for data centers). PUE is determined by dividing the Total Facility Power by the power drawn by IT equipment alone. Anything that isn't a computing device (lighting, cooling, sensors, biometrics, etc.) yet draws power falls under the 'Facility Power' category. The Green Grid says that a PUE value approaching 1.0 would indicate 100 percent efficiency (meaning all power is used by IT equipment only). Though achieving the ideal PUE of 1 is practically impossible, the metric's message is simple: the lower your data center's PUE, the more energy efficient it is. Measuring PUE is the first step toward a broad understanding of a data center's overall energy efficiency. Some businesses in India that have very efficient data centers include Aircel's Gurgaon data center (PUE of 1.8), Netmagic's Bangalore data center (PUE of 1.7-1.8), Wipro's Greater Noida data center (PUE of 1.6-1.75) and Tulip Data City, Bengaluru (PUE of 1.5). "PUE has a global average of 2.2. Cooler countries do not need so much power for cooling, so the PUE could be as low as 1.2 or 1.3. But in India the average PUE could be 3.0 to 3.5, as you need a lot of power for cooling in a warm country," says Lt. Col. H.S. Bedi, Chairman and Managing Director, Tulip Telecom.
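As a worked example of the ratio, with loads that are illustrative assumptions of ours rather than figures reported by any facility above:

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness; 1.0 would mean zero overhead."""
    return total_facility_kw / it_load_kw

# A 1,000 kW IT load plus 800 kW of cooling, lighting and power
# conversion overhead gives a PUE of 1.8; trimming the overhead
# to 500 kW gives the 1.5 that Tulip Data City reports.
print(pue(1800, 1000))  # 1.8
print(pue(1500, 1000))  # 1.5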
Innovations
IBM's solar-powered data center enables substantial energy savings

IBM launched the world's first solar array for high-voltage data centers in November 2011. The new array is spread over more than 6,000 square feet of rooftop covering IBM's India Software Lab in Bengaluru. The array is designed specifically to run high-voltage data centers, integrating AC- and DC-based servers, water-cooled computing systems and related electronics. This solar array is the first move to blend solar power, water cooling and power conditioning into one single package suitable for running massive configurations of electronic equipment. IBM plans for the Bangalore solar-power system to connect directly into the data center's water-cooling and high-voltage DC systems. The integrated solution can provide a compute power of 25 to 30 teraflops using a POWER 775 system on a 50 kW solar power supply. The efficiency benefit of directly utilizing the DC power generated by photovoltaics, instead of the usual AC-DC and DC-AC conversion, can cut the energy consumption of data centers by about 10 percent. By not utilizing grid power for up to 5 hours per day, the system generates additional savings of up to 20 percent per year.

In the past few years, quite a few innovations have been happening in the area of power distribution and management technologies. Talking about recent developments in the UPS market, Jayabalan Velayudhan, Director, Strategy and Business Development, IT Business, APC says, "Today, modular UPS come with hot-swappable power distribution breakers with in-built capability of monitoring the power. There are modular batteries, which have temperature-compensated charging and battery monitoring built into the UPS system. Such features help in monitoring the UPS as a system and in enhancing availability." Within a data center, power that leaves the UPS enters a Power Distribution Unit (PDU), which sends the power directly to the IT equipment in the racks.
"Our APC InRow cooling products have intelligent controls that actively adjust fan speed and cooling capacity to match the IT heat load," says Jayabalan Velayudhan, Director, Strategy and Business Development, IT Business, APC
On how traditional PDUs operate in data centers, Sanjay Motwani, Regional Manager, India and Middle East, Raritan India says, "When it comes to power, most data centers use a simple, or what is termed a 'dumb' PDU, in the racks to power up the devices. These PDUs are only good enough for providing power to the device — and of no use when it comes to planning capacity or other parameters." Today, a new class of intelligent PDUs (iPDUs) has entered the market with additional functionality. For example, Raritan's iPDUs go beyond power control and measurement and can connect sensors for temperature, humidity, airflow, air pressure, water leak detection, smoke and fire, and dry contact — making the iPDU itself a complete environmental solution. To make it easy for data center operators to view power activity, real-time information can be seen on the iPDU's LED display at the rack, and remotely on a browser-based user interface. User-configurable critical and non-critical power and environmental thresholds can also be set, and alerts can be sent so that data center managers can take corrective actions.
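The threshold-and-alert pattern Motwani describes reduces to a simple comparison loop. A minimal sketch with made-up limits and readings; real iPDUs expose these via SNMP or a web interface, and nothing here is Raritan's actual API:

THRESHOLDS = {
    # metric: (non_critical_max, critical_max), illustrative limits
    "rack_power_kw": (4.0, 5.0),
    "inlet_temp_c": (27.0, 32.0),
}

def evaluate(readings: dict) -> list:
    """Compare live readings against configured thresholds."""
    alerts = []
    for metric, value in readings.items():
        warn, crit = THRESHOLDS[metric]
        if value >= crit:
            alerts.append(("CRITICAL", metric, value))
        elif value >= warn:
            alerts.append(("WARNING", metric, value))
    return alerts

print(evaluate({"rack_power_kw": 4.6, "inlet_temp_c": 33.0}))
# [('WARNING', 'rack_power_kw', 4.6), ('CRITICAL', 'inlet_temp_c', 33.0)]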
Another interesting innovation is the incorporation of power management features at the processor level. Elaborating on this, Mitesh Agarwal, Director, Solution Consulting, Systems Business, Oracle India says, "There was no mechanism inside a server, across the industry, where one can reduce the power of the processor or the consumption that it draws from the power unit and so on. What we have done with the SPARC T4 processor — an extremely powerful processor with a multicore, multithreaded design — is incorporate unique power management features into the core and memory levels, enabling it to automatically drop down the thread and the core and reduce instruction rates whenever the processor or the memory sits idle."
Software
Another important piece in increasing the energy efficiency of a data center is Data Center Infrastructure Management (DCIM) software, a tool that enables energy monitoring and management within the data center. Schneider Electric's StruxureWare Data Center Operation Suite is one such DCIM solution. It enables monitoring and operating power, cooling, security and energy usage from the building through to IT systems, and provides complete visibility and control over
a data center's daily operations. The software includes features like maintenance scheduling and a live dashboard to monitor key performance indicators. It also includes a branch circuit monitoring feature that enables live measurements from the branch circuit for calculating capacity utilization if no metered PDUs are available. Emerson recently launched the Trellis Platform, a DCIM solution that is a single-pane, closed-loop control system bridging the gaps between IT and physical infrastructure, and providing real-time insight into PUE and the impact of virtualization. The solution gives data center managers real-time visibility into the entire data center's performance.
Early adopters of interesting cooling solutions

Aircel's LEED-certified Gurgaon data center utilized an Earth Air Tunnel (EAT) system for fresh air intake, and ground-source heat exchange was used instead of cooling towers. Four pipes, each 60 m in length, were buried sixteen feet below the ground, and fresh ambient air was drawn into these pipes for delivery to the AHUs. The advantages: about a 6-degree reduction in the temperature of the fresh air supplied, a 20 percent reduction in chiller capacity due to EAT, and a 20-25 percent improvement in chiller efficiency due to the geothermal heat exchange.

CtrlS's data center in Hyderabad was one of the first in India to deploy water-based cooling technology. The system used water-based heat transfer that converts water into cool air, which is then pushed into the data center through the indoor units. The system is also designed to ensure that water does not enter the data center at any point, eliminating any risk to the IT infrastructure: the water-cooled condenser units are placed outdoors, and the cooling units are placed near the indoor units for maximum cooling efficiency. Water is pumped from cooling towers, and a piping mechanism takes it to the condenser and back.
36
informationweek june 2012
Manager that provides data center management knowledge about the location of devices and equipment, relationship between these components, and resources that are being used by data center equipment; Site Manager that reports the health of the infrastructure including environmental conditions to data center personnel; Change Planner that works with Trellis Inventory Manager and assures that installs, moves and decommissions of equipment are planned, tracked and communicated to team members in a consistent manner; and Energy Insight that calculates total data center energy consumption, electrical costs and PUE/data center infrastructure efficiency (DCiE) value.
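The two figures Energy Insight reports boil down to simple ratios: PUE is total facility energy divided by IT energy, and DCiE is its reciprocal expressed as a percentage. A back-of-envelope sketch, with assumed meter totals and an assumed tariff, purely for illustration:

```python
# PUE, DCiE and energy cost from two meter totals.
# The kWh figures and tariff are assumptions for illustration.
total_kwh = 120_000   # whole-facility consumption for the month
it_kwh    = 75_000    # consumption measured at the IT load
tariff    = 8.0       # assumed cost per kWh

pue  = total_kwh / it_kwh          # Power Usage Effectiveness
dcie = 100.0 / pue                 # DCiE, as a percentage
cost = total_kwh * tariff

print(f"PUE  = {pue:.2f}")         # 1.60
print(f"DCiE = {dcie:.1f}%")       # 62.5%
print(f"Monthly energy cost = {cost:,.0f}")
```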
Innovative Cooling Technologies

The Manual on Best Practices in Indian Data Centers, published by BEE, says that the cooling systems in a data center typically consume around 35 to 40 percent of total data center energy, making cooling the second highest energy consumer after the IT load itself. This makes it extremely important to optimize cooling by exploring innovative solutions that can make it as energy efficient as possible. The traditional method of cooling the data center was the hot aisle/cold aisle arrangement, followed later by the hot aisle or cold aisle containment technique. But in the recent past, we have been seeing some innovative cooling technologies. Informing us about one such technology, Seepij Gupta, Senior Analyst, IT Service, India and APeJ, Forrester Research says, “There are data centers where they don’t have hot aisles and cold aisles. What they have is a chimney above each and every rack. These chimneys suck in the hot air, and the air is taken outside the data center through pipes, where it gets cooled and then comes back and is circulated again, inside. You still have air conditioners, but they are not running at 19-21 degrees (which is typically the temperature in Indian data centers). Now they would be running at around 23 degrees, and substantial cost savings are associated with every degree of increase in temperature.”

Early Adopters of Interesting Cooling Solutions
Aircel’s LEED-certified Gurgaon data center used the Earth Air Tunnel (EAT) system for fresh air intake, with ground source heat exchange in place of cooling towers. Four pipes, each 60 m in length, were buried sixteen feet below the ground, and fresh ambient air was drawn into these pipes for delivery to the AHUs. This resulted in about a 6-degree reduction in the temperature of the fresh air supplied and a 20 percent reduction in chiller capacity due to the EAT. It also led to a 20-25 percent improvement in the efficiency of the chillers due to the geothermal heat exchange.
CtrlS Datacenter in Hyderabad was one of the first data centers in India to deploy water-based cooling technology. The system used water-based heat transfer technology to produce cool air, which is then pushed into the data center through the indoor units. It is also designed to ensure that water does not enter the data center at any point, eliminating any risk to the IT infrastructure: the water-cooled condenser units are placed outdoors, while the cooling units are placed near the indoor units for maximum cooling efficiency. Water is pumped from the cooling towers, and a piping mechanism takes it to the condenser and back.

An important issue linked to cooling within data centers is the need for intelligent systems that understand the varying cooling needs of the data center as the load varies through the day, and automatically adjust the compressors. Highlighting this, Pratik Chube, GM – Product and Marketing, Emerson Network Power, India says, “What happens within a data center is, when the data center is working, the load is higher, more heat is generated and you need to cool that heat. Whereas when it is idle, you don’t need that kind of cooling. That means your compressor should be intelligent enough to understand how to modulate its engagement and disengagement as per the load, and it has to happen dynamically. Yes, there are compressors which have step-based modulation, but they cannot change dynamically. The need is for a compressor that is digitally synchronized with the load. Which means, say, if your load is 33 percent, then your compressor should be engaged only for 33 percent of its capacity. And when the load goes down, the compressor should start disengaging itself as per the need.” Talking about the innovative solution Emerson Network Power has in this respect, Chube elaborates, “This is what we have incorporated in the Digital Scroll Technology, which senses the load through an Intelligent Communicator (iCOM), which is an intelligence developed inside the cooling equipment in the form of a chip and display, and dynamically varies the compression and decompression cycle of the compressor. We are able to save more than 30–35 percent if we use the Digital Scroll Technology in conjunction with Electronically Commutated Fans.”

On the cooling innovations happening at a more granular level, new design approaches are emerging that focus on row- or rack-based cooling. Velayudhan of APC informs, “In these approaches, the air conditioning systems are specifically integrated with rows of racks or individual racks. This provides better predictability, higher density and higher efficiency. The best choice for rack-based IT loads is a row cooling approach due to its energy-efficient advantage. APC’s InRow cooling products have intelligent controls that actively adjust fan speed and cooling capacity to match the IT heat load to maximize efficiency and address the dynamic demands caused by virtualization in today’s IT environments. Removing the heat at the source and eliminating mixing of hot and cold air streams allows the system to predictably control IT inlet temperatures. These products offer a 30 percent increase in efficiency over traditional cooling architectures.” Velayudhan highlights another interesting solution — Schneider Electric’s EcoBreeze, a modular indirect evaporative and air-to-air heat exchanger cooling solution with the unique ability to switch automatically between air-to-air mode and indirect evaporative heat exchange mode, so that it consistently provides cooling to the data center in the most efficient way. The design of EcoBreeze reduces energy consumption by leveraging the temperature difference between outside ambient air and IT return air to provide economized cooling to the data center. Its modular design also allows the unit to adapt to the future cooling needs of the data center.
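The automatic mode switching described for EcoBreeze is, at heart, a comparison between outside ambient air and IT return air. The decision rule below is a deliberately simplified sketch with an assumed approach margin, not Schneider Electric's actual control logic:

```python
# Simplified sketch of automatic cooling-mode selection, loosely
# modelled on the EcoBreeze behaviour described above. The 5-degree
# approach margin is an assumed setpoint.

def select_mode(outside_c, return_air_c, approach_c=5.0):
    """Pick the more efficient of the two heat-exchange modes."""
    if outside_c <= return_air_c - approach_c:
        # Outside air is cold enough for dry air-to-air exchange
        return "air-to-air"
    # Otherwise use indirect evaporative assistance
    return "indirect-evaporative"

for outside in (18.0, 30.0):
    print(f"{outside} C outside -> {select_mode(outside, return_air_c=32.0)}")
```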
Conclusion
Along with ensuring the high availability of the data center, rising power tariffs are pushing CIOs to closely monitor energy efficiency: scrutinizing the power management, power distribution and cooling technologies currently deployed, and then working out exactly which new technologies can be applied at which level within the data center to increase its efficiency. It is quite evident that a lot of development is happening in the data center power management and cooling technologies market, and this is an opportune moment for CIOs to invest judiciously in the various elements that contribute to an energy-efficient data center, and thereby considerably bring down their data center energy bills.
u Amrita Premrajan
amrita.premrajan@ubm.com
A peek inside Wipro’s LEED GOLD Certified Data Center
Kiran Desai, VP and Business Head, Managed Services Business, Wipro Infotech, talks about the power management and cooling technologies deployed within Wipro’s LEED GOLD-certified Data Center in Greater Noida
Wipro Enterprise Data Center was accredited with the LEED GOLD Certificate from the Indian Green Building Council (IGBC) in July 2011. To receive this recognition, Wipro achieved 42 points against the minimum requirement of 39. The areas Wipro focused on to achieve the certification were energy efficiency, cooling efficiency, reduction in carbon footprint, minimal environmental impact, waste recycling and rain water harvesting. Though the initial investment in building a green data center is approximately 30 percent higher than that of a data center not built to green standards, it provides savings in the long run. Our data center results in energy bill savings of approximately ` 100 million annually for some of our large customers.
Energy-efficient data center
Our Power Usage Effectiveness (PUE) ranges between 1.6 and 1.75, and we have been measuring it since the day the data center started operating. We measure PUE by dividing the total data center load by the total IT load.

We have used a UPS of the IGBT type (Insulated Gate Bipolar Transistor), with harmonics up to 3 percent. This UPS delivers an upstream power factor of 0.99, which in turn leads to energy savings. We have used K13-rated isolation transformers to reduce electrical losses, and Bus Bar Trunking (BBT) to reduce distribution/heat loss. Ours is the first data center in India to use EC fans in precision air handling systems to improve energy efficiency. We have used VFD (Variable Frequency Drive)-based water-cooled chillers, which deliver high energy efficiency at part loads, and have deployed a variable speed pumping system to optimize the energy consumption of the HVAC (Heating, Ventilation and Air Conditioning) systems. We have specially designed the return air path to minimize short cycling of cold air inside the data center server room, and have implemented special thermal insulation to reduce transmission heat gain. Apart from this, we have used Cooling Technology Institute-certified cooling towers to reduce water and energy consumption.

We have used an Intelligent Building Management System specially modeled to integrate all data center components, enabling central monitoring and control and resulting in overall energy efficiency. Also, by taking advantage of the low ambient temperatures in the winter season, we managed to provide free cooling for utility and common areas. In addition, we used energy modeling software for design optimization of the data center facility. The building material was specially selected, and the building orientation specially designed, to provide natural thermal insulation.

The viewpoints and claims expressed by the author in this article have not been verified by InformationWeek India.
Netmagic’s approach to green data center
Mahesh Trivedi, VP – DC Infrastructure, Netmagic Solutions tells us about the energy-efficient measures he took and the innovative solutions he deployed while building the data center in Chennai
Netmagic’s Chennai data center received Gold LEED certification from USGBC in July 2010. We have adopted many green measures to conserve energy and reduce environmental impact. The PUE of our Chennai data center is in the range of 1.7 to 1.8. We measure PUE in a very basic manner — total power input to the facility divided by total power going out to the IT infrastructure. We measure the power going out to IT at the output of the UPS, so our PUE is total incoming power divided by total output at the UPS. This calculation is done both manually and online through our Building Management System (BMS); typically we take these readings every two hours and average them over a period of one month.
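Trivedi's measurement routine (a reading every two hours, averaged over the month) is easy to express in code. In the sketch below the meter pairs are invented; in Netmagic's case they would come from the BMS:

```python
# Netmagic-style monthly PUE: facility input power divided by power
# at the UPS output, sampled every two hours and averaged over the
# month. The sample pairs below are invented for illustration.

samples = [
    # (facility_kw, ups_output_kw) -- one pair per two-hour reading
    (1750.0, 1000.0),
    (1820.0, 1040.0),
    (1700.0,  980.0),
    # ... a 30-day month would contribute ~360 such pairs
]

pue_readings = [facility / it for facility, it in samples]
monthly_pue = sum(pue_readings) / len(pue_readings)
print(f"Average PUE for the month: {monthly_pue:.2f}")  # in the 1.7-1.8 band
```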
Building the Data center
The broad approach of LEED is based on 3 Rs — Reduce, Recycle and Reuse. What they promote is not only improvement of energy efficiency but also reducing the environmental impact. Data centers, being highly energy intensive, can score good points purely by implementing energy-efficiency measures. We implemented measures right from the data center building stage to site selection and
using certified recycled material for construction. We selected a non-agricultural area, an identified industrial site, to build the data center, so that we would not impact the local natural ecosystem. We also used local labour and certified recycled materials as far as possible, and leveraged rain-water harvesting. In our data center, we have used LEED-certified glass, which has a high reflective index and high thermal reflection, for the glass sections. We have also used certified reflective paints that help reflect direct sun rays. Apart from this, we invested in insulating the core and the shell of the building using nitrile rubber and similar materials, so it is thermally efficient.
Power and Cooling technologies
For our data center, we have selected a high-efficiency UPS, without filters and with low harmonics, which enables higher efficiency. We have a power monitoring module at every panel that checks all power parameters at the panel level. We monitor power at each and every step, right from the main incoming panel to last-mile usage at the rack level, on a real-time basis through our BMS, to see how we can save power at every stage. For the back office, what we call the human occupancy areas of the data center, we use Variable Refrigerant Volume (VRV) air-conditioning systems, which are very energy-efficient. Within the data center, we use Electronically Commutated fans with Variable Frequency Drives (self-regulating fans), which modulate themselves according to the load, thus saving energy. The refrigerants used are ozone-friendly. We also have occupancy sensors in the data center, which switch the lights on and off automatically based on human presence, enabling further energy savings. Apart from this, we have adopted a hot-aisle/cold-aisle configuration for better efficiency of the AC units, and used copper cables wherever required so that transmission losses are minimal. All these measures contribute significantly, each in its own way, to maintaining the excellent PUE of the premises.
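The energy case for such self-regulating fans rests on the fan affinity laws: power scales roughly with the cube of speed, so even modest speed reductions compound into large savings. A quick, idealized illustration that ignores motor and drive losses:

```python
# Fan affinity law: power varies (approximately) with speed cubed.
# Idealized -- real EC fan and VFD curves deviate somewhat.
def fan_power_fraction(speed_fraction):
    return speed_fraction ** 3

for speed in (1.0, 0.8, 0.6):
    print(f"{speed:.0%} speed -> {fan_power_fraction(speed):.0%} power")
# 100% speed -> 100% power
# 80%  speed -> 51%  power
# 60%  speed -> 22%  power
```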
Investments for Green data center
It is a prevalent misconception that a green data center is expensive. If you plan well, select the right techniques judiciously, and choose the right products, the increase in investment will not be more than 20 percent compared to any other data center. If one actually calculates the ROI, you should be able to break even in less than 18 months. For a data center whose life span is considered to be a minimum of 7 to 10 years, 18 months is a short break-even period.
The viewpoints and claims expressed by the author in this article have not been verified by InformationWeek India.
Tulip’s unique data center design strives for lowest PUE
Harjyot Soni, CTO of Tulip Telecom elaborates on the unique design principles that make Tulip Data City highly energy-efficient, with a design PUE of 1.5
Tulip Data Center Services’ Bengaluru facility — Tulip Data City (TDC) — is a multi-tiered facility built to Tier-4 and Tier-3 standards. When the entire facility becomes fully operational, it will consume up to 100 MW of power. To make distribution of that kind of power possible, we came up with a unique design after doing a lot of research. The entire power distribution system has been designed such that high tension power is delivered to the floor plate, stepped down to a low tension (LT) system, and eventually distributed across the data center. Since we are putting transformers on the floor plate, we have to take care of Electromagnetic Interference (EMI) and Electromagnetic Compatibility (EMC). We usually have network cables running nearby, and such a large power source can affect the signals they carry, so we are using protection systems to avoid any impact on the server house. And since we are carrying high tension to the floor plate, the design of the panel and distribution is very different from any other standard data center. We partnered with Honeywell for the Enterprise Building Integrated Solution, which is basically a Building Management System. This BMS is the master control unit for the entire building, monitoring all parameter readings — ranging from temperature and humidity to CO2 — on a real-time basis. The BMS takes all these inputs and
keeps firing the right commands to the various mechanical systems. The key was to automate the entire system to a level where monitoring is absolutely real-time. For energy saving, we also deployed automated light control and LED-based lighting within the data center. Almost 28 percent of the data center power goes into cooling, so we selected the most efficient air-cooled chiller system available in the market, which in turn reduced the power bill. Along with this, we have deployed the Facility Plant Manager System, which continuously monitors the utilization of the chilled water circulating in the pipes and automatically schedules the activation of the chillers based on requirements, thus saving energy. The key element that enabled us to achieve our current low PUE of 1.5 is measuring each and every parameter closely, and as granularly as possible. When the facility becomes fully operational, it will have more than 12,000 racks and about 3.5 lakh servers, so it becomes all the more important to understand exactly how power is being consumed. To do this, we are evaluating solutions from a couple of vendors that will enable monitoring power and cooling at the rack level. We are currently evaluating data center infrastructure management tools and other such tools from various vendors; once deployed, these tools will give us much better control over the data center.
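Soni's emphasis on granular measurement can be grounded with a quick sanity check built from the figures quoted in this article; the arithmetic below treats the 100 MW draw and the 1.5 design PUE as given:

```python
# Back-of-envelope check using the figures quoted for Tulip Data City:
# 100 MW facility draw, design PUE of 1.5, 12,000 racks at build-out.
facility_mw = 100.0
design_pue = 1.5
racks = 12_000

it_mw = facility_mw / design_pue        # ~66.7 MW reaching the IT load
kw_per_rack = it_mw * 1000 / racks      # ~5.6 kW average per rack
print(f"IT load ~{it_mw:.1f} MW, ~{kw_per_rack:.1f} kW per rack on average")
```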
The viewpoints and claims expressed by the author in this article have not been verified by InformationWeek India.
SAP’s journey to energy-efficient data centers
Raghavendra Rao, Vice President – IT, SAP Labs, Bangalore, shares insights on the measures the company has taken to improve the efficiency of its data centers
Green IT today is much more than a cost saving ploy for organizations. A growing sense of awareness and the desire to contribute to the betterment of the environment are pushing organizations to embrace green IT. As a company dedicated to providing innovative software solutions, SAP has made Green IT a priority of its overall sustainability efforts. Our approach spans internal operations and the solutions we deliver to our customers. The focus is on three core areas: reducing energy use, managing electronic waste (e-waste) responsibly, and fostering dematerialization — or substituting high-carbon activities with lower carbon ones. Reducing energy consumption is a key focus area for us. Given that the amount of electricity consumed by data centers worldwide grew 56 percent between 2005 and 2011, we have continuously worked towards improving data center efficiency. We have been working to achieve 80 percent virtualization of our entire IT landscape and have remained on track to reach this goal.
At SAP, we have invested significantly in server virtualization and cooling systems to improve efficiency. In 2011, we increased the share of our virtual servers to 65 percent, a drastic jump from 37 percent in 2009. This shift translates into extensively greater computing power, but only a minimal addition to the number of our servers.

Besides this, SAP is also dedicated to environment-friendly energy sourcing. The biggest SAP data center, located in St. Leon-Rot, Germany, runs on an energy mix of 60 percent clean energy, 12 percent nuclear power and just 28 percent fossil-based energy. For the second largest data center, located in Newtown Square, USA, SAP procures 100 percent clean energy. These two data centers together account for 70 percent of SAP’s global data center energy demand. We are also working towards drawing more power from solar energy. Solar panels produce DC power directly, and this DC power does not have to go through an inverter; such a system can considerably reduce the power consumed by the data center.

While we have many internal initiatives to create energy-efficient systems within our organization, we are also working towards providing software and services that help our customers better manage their operations. We have started calculating the CO2 footprint of our software products, because we believe that efficient software and its deployment have the same impact on IT energy efficiency as a state-of-the-art data center or highly efficient hardware. We have invested heavily in optimizing the energy efficiency of our products, including efficient coding, virtualization, cloud and in-memory technology. Our efforts seem to be paying off: in 2011 and the year before, SAP won two awards for Green IT — the ‘Best Green Idea Award for 2011’, instituted by Deutsches Institut für Betriebswirtschaft, and the ‘Best Green Idea and Green IT Award 2010’, instituted by the German government, large corporations and a technical university.

In 2012, we will continue to implement this broader Green IT strategy, which will help us navigate a range of challenges. Among these is the fact that we will need to maintain more servers in our data centers as we continue to offer more solutions in the cloud. SAP’s goal is to make IT more sustainable in the broadest sense. Although making data centers more sustainable by reducing their energy consumption is a major focus, the rest of the IT infrastructure is included as well.

The viewpoints and claims expressed by the author in this article have not been verified by InformationWeek India.
Interview
‘Our network virtualization technology saves data centers thousands of dollars’

Software Defined Networking (SDN) and the OpenFlow protocol make for the next wave of virtualization in data centers. Nicira, whose co-founders are university professors, is offering networking virtualization technology that will bring down the cost of networking equipment (both CAPEX and OPEX), and also make it more efficient and easier to configure. The technology is totally independent of the hardware, so customers can use equipment from any vendor. Customers, which include Rackspace, eBay, AT&T and Japan’s NTT, hail this technology as a “game changer”. In an e-mail interview with InformationWeek India, Martin Casado, Co-Founder and Chief Technology Officer, Nicira explains how enterprise users will benefit from Nicira’s SDN platform. Incidentally, Casado’s research as a student at Stanford University helped in the creation of OpenFlow and Software-Defined Networking. Casado began working on network virtualization in 2002, when he worked as a contractor for a US intelligence agency. Excerpts from the interview:

What are some of the transformations you see in the data center today? What are CIOs, CTOs and data center managers asking for?
I think the key transformations in the data center are virtualization, enterprise enablement, and inter-DC mobility. Virtualization is driving IT execs to look at infrastructure as a pool of resources/capability vs. a series of siloed elements. Network virtualization is allowing enterprise networking to be delivered to the cloud, supporting enterprises to public cloud and hybrid environments. Inter-DC mobility (combined with virtualization) supports free movement of workloads, thus increasing utilization.

With regards to the technology and cost aspects, what value proposition is Nicira offering and how does that differ from that of Cisco, Juniper, and HP?
Nicira customers gain three primary benefits: business velocity, operational efficiency, and CAPEX reduction. Our customers are already realizing service delivery acceleration from weeks to minutes, and dramatic cost reductions in data centers of tens of millions of dollars. Nicira’s Network Virtualization Platform (NVP) software works with hardware from any vendor — including Cisco, Juniper and HP — but that is our primary difference: we are totally independent from hardware so our customers can use any gear (and any hypervisor) they choose. I will elaborate on the three benefits mentioned above:
Business Velocity - NVP allows cloud service providers to eliminate the network bottlenecks that delay configuration and deployment of services. This occurs by making the network programmable and abstracting network services from the physical network, effectively creating a “click to compute” cloud model. Network virtualization fosters both faster time to revenue for public cloud providers (who can now differentiate at the infrastructure level) and competitive advantage for enterprise private clouds by accelerating new product and service introduction. We have seen customers reduce provisioning time for new services from weeks/days to minutes/seconds, which can translate into significant time-to-revenue gains.
Operational Efficiency - Traditional networking approaches are highly manual and frequently require reconfiguring multiple network elements (one at a time) to provision a new service onto the network or even simple VM mobility. Network reconfigurations are fragile and prone to human error, when a change to one node affects the other nodes in the network. Finally, 80 percent of the security vulnerabilities in a network occur from a mistaken keystroke during reconfiguration. NVP eliminates these issues by providing a programmatic interface for network services, which operate above the physical network fabric. This allows the physical network configuration and management to be far simpler and free from human intervention. Reducing “human touch” in cloud data centers reduces both operational costs and downtime, from both the network and server administration perspectives.
CAPEX Reduction - In addition to reduced operational expenses, network deployments become more capital efficient from both the network and the server perspectives. As emerging Layer 3 Fabric architectures are adopted broadly, companies will benefit from lower cost, high performance network hardware, as well as vendor independence. Nicira is not dependent on any specific network hardware, allowing customers to choose the network architecture and vendors that provide the best price performance solution for their business. As servers become unlocked from network roadblocks, they can be fully loaded, eliminating the need for backup capacity in racks for future workload growth.

Can you explain a bit about Nicira’s Network Virtualization Platform and how enterprises will benefit?
Nicira’s NVP is a software-based system that creates a distributed virtual network infrastructure in cloud data centers that is completely decoupled and independent from physical network hardware, thus removing the limitations once placed on network mobility, and better utilizing server resources. The system is implemented at the network edge and managed by a distributed clustered controller architecture. It forms a thin software layer that treats the physical network as an IP backplane, and this approach allows the creation of virtual networks that have the same properties and services as physical networks — so customers still get the same security and QoS policies, L2 reachability, and higher-level service capabilities, like stateful firewalling, as they would with physical networks. These virtual networks can be created to support VM mobility anywhere within or between data centers without service disruption. Another key feature of our NVP platform is its compatibility with any data center network hardware. It leverages Open vSwitch and can be deployed non-disruptively on any existing network. Customers can make changes to the network hardware in the future without disruption to the virtual network platform’s operations. Our customers call our NVP a “game changer.” Any workload can be placed anywhere. Data centers are saving thousands, even millions of dollars in infrastructure costs. And with our services in place, our customers can offer their customers more services because their networks are more flexible and agile, and their data center’s resources are being better utilized.

Tell us a bit about your company, its founders and promoters. How are you raising capital?
Nicira was publicly launched in February 2012 and we’ve raised USD 50 million in funding from a number of investors, including Andreessen Horowitz, Lightspeed Venture Partners and New Enterprise Associates, VMware Co-founder Diane Greene and Benchmark Capital Co-founder Andy Rachleff. Nicira was founded by Nick McKeown, Scott Shenker and myself, and its technology is based on much of the research I did at Stanford while working toward my Ph.D. (The research led to the creation of OpenFlow and Software-Defined Networking.) Nick is a professor in the Electrical Engineering and Computer Science Department at Stanford University, and Scott is a professor in the Electrical Engineering and Computer Science Department at the University of California at Berkeley. Both have worked with Cisco in the past. Steve Mullaney, Nicira’s CEO, has more than 25 years of experience in marketing and has an extensive past in networking, formerly holding positions at Blue Coat, Force10, Cisco, Growth Networks, ShoreTel, Bay Networks and SynOptics.

What are the current markets that Nicira operates in, and in the future, which markets will you tap? What about Asia and India?
Nicira is targeting a global set of serious cloud providers, which includes both service providers and enterprises — as our named customers indicate: AT&T, NTT, eBay, Fidelity Investments, Rackspace and DreamHost. We will be expanding our global footprint in the coming months. We are currently adding customers in Europe and expect to have customers in India shortly.

Martin Casado received his Ph.D from Stanford University in 2007, where his dissertation work led to the technology on which Nicira is based. He received his Master’s from Stanford University in 2005. While at Stanford, Martin co-founded Illuminics Systems, an IP analytics company, which was acquired by Quova Inc. in 2006. Prior to attending Stanford, Martin held a research position at Lawrence Livermore National Laboratory, where he worked on network security in the information operations assurance center (IOAC).

Must Read: Turn to page 58 to get answers to eight most frequently asked questions about software-defined networking
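Casado's "programmatic interface for network services" is the heart of the pitch. The interview does not show NVP's actual API, so the sketch below invents an endpoint and payload purely to convey what provisioning a virtual network programmatically, rather than switch by switch, might look like:

```python
# Hypothetical illustration only: the endpoint, payload fields and
# token are invented, not Nicira's documented API. The point is that
# one programmatic call replaces manual, switch-by-switch
# reconfiguration of the physical network.
import json
import urllib.request

API = "https://nvp-controller.example.com/api/virtual-networks"

payload = {
    "name": "tenant-42-web-tier",
    "subnet": "10.42.0.0/24",
    "qos_policy": "gold",        # same QoS semantics as a physical net
    "stateful_firewall": True,   # higher-level service capability
}

request = urllib.request.Request(
    API,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},
    method="POST",
)
# urllib.request.urlopen(request)  # disabled: example endpoint only
print("Would POST:", json.dumps(payload, indent=2))
```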
u Brian Pereira brian.pereira@ubm.com
Interview
‘Less than 5 percent of data centers are in a professional environment’

Tulip Telecom sees huge demand for data center services and has already invested ` 450 crore in a world-class data center in Bengaluru. It will invest another ` 450 crore over three years and is expecting data center revenue of ` 1,000 crore per year after four years. InformationWeek visited Tulip’s newest data center in Bengaluru and examined its facilities. Hailed as the largest data center in Asia and the third-largest in the world, Tulip Data City spans 0.9 million square feet and is already winning the attention and acclaim of prospective international customers. Lt. Col. H S Bedi, Chairman and MD, Tulip Telecom tells InformationWeek Editor Brian Pereira what is driving demand for professionally managed data center services and how Tulip is responding with state-of-the-art facilities.

What trends do you observe in the data center space, especially in India? Do you see a lot of corporates moving IT infrastructure to professionally managed data centers?
The reality is that, of all the (corporate) data centers existing today, less than five percent are present in a professional data center environment. Most data centers are ad hoc and run in commercial buildings. I met a CIO who had three data centers and he boasted that these were Tier III certified. So I asked him why this effort, when he does not run a data center company? There is a
need to consolidate and move into professionally managed data centers. And there are certain aspects that should be considered. For instance, Power Usage Effectiveness (PUE), which has a global average of 2.2. Cooler countries do not need so much power for cooling, so the PUE could be as low as 1.2 or 1.3 (1 for servers and 0.2 for cooling). But in India the average PUE could be 3.0 to 3.5, as you need a lot of power for cooling in a warm country. Most corporates, though, are running data centers in an ad hoc manner in commercial buildings where they have their offices, and they do not
understand the importance of this. I believe most of these corporates are serious about moving (IT infrastructure) to a professionally managed data center.

You have already set up five data centers in India, the most recent one being Tulip Data City in Bengaluru. So how much will Tulip invest in its new data centers? How are you raising funds? What kind of ROI are you expecting and after how long?
We will invest ` 900 crore over a three-year period; so far we have invested ` 450 crore. I had invested ` 230
crore, and got an approval of ` 150 crore from IDFC. We got ` 100 crore from StanChart Bank and ` 250 crore from ICICI Bank. The rest comes from internal accruals. I do believe that the returns on this investment should be good. We should have revenues of ` 1,000 crore per year and EBITDA of 40 percent per year — but this will happen after four years. And this is just for Tulip Data City.

What are the USPs of your data center? What are the service and product differentiators?
Tulip Data City is almost 13 times larger than the largest data center in India. Scale brings in tremendous efficiency. Secondly, this is the most recent data center (commissioned in Feb 2012) and therefore it uses the very latest technologies. It is going to be LEED Gold-certified. This is actually 22 data centers in one data center. It is a multi-tenant facility; most telcos will host their own customers in their data center, but not those of other telcos. Typically, they will not allow fibre from other telcos to enter their data center.
But at Tulip Data City we host other telcos and systems integrators, and that’s why we are multi-tenant. For instance, HP and IBM can also bring in their own customers (read: telcos who may be competing with Tulip Telecom). We are service provider agnostic; I welcome other telcos to avail of our services. Another distinguishing factor is that this data center will consume 100 MW (megawatts) of power; no other data center is at this scale of power consumption. Ours is also an efficient data center in terms of power usage effectiveness. I expect to achieve a PUE of 1.6 to 1.7, and compared with an average data center that will save at least 30 percent of power. By consolidating corporate data centers, we will actually be saving 30 percent of overall power consumption. Consolidation of data centers is also a trend in the U.S. and other countries, because large data centers are more efficient than ad hoc data centers.

What’s your vision for the Tulip data center business?
We are currently looking at assimilating the data centers that we have. The ones in Delhi and Mumbai will reach full capacity quickly, and thereafter we need to evaluate demand and respond accordingly. As we go forward and get into cloud services, we will also partner with IT vendors. We are also open to partnerships with other telcos and data centers. (Tulip recently partnered with EMC to offer managed backup and storage services. See: http://bit.ly/Kb4QSg)
Case Study
Unified architecture helps KPIT Cummins improve data center efficiencies
By choosing a unified architecture of networking, storage and computing, KPIT Cummins has significantly enhanced the efficiency of its IT infrastructure
By Srikanth RP
An end-to-end product design, engineering and IT services company with a workforce of around 7,000 globally, KPIT Cummins is one of the fastest growing mid-sized firms in India. Typical of a fast growing services company, it faced issues in scaling up its IT infrastructure in line with business needs. In a competitive business environment, KPIT wanted to speed up the provisioning of project infrastructure, as this is crucial to getting a project executed quickly. “Today, the business demands provisioning of IT infrastructure quickly, and wants the required desktop to be provisioned immediately. However, as the provisioning of IT infrastructure depends on the project, our IT team faced a challenging task in provisioning infrastructure according to dynamic needs,” explains Mandar Marulkar, Head – Technical Infrastructure Management Services and CISO, KPIT Cummins. As projects can be short in duration, the IT team has less time to provision infrastructure for these dynamic needs. Any delay in provisioning the required hardware affects project delivery; similarly, ineffective utilization of hardware infrastructure affects project margins. In addition, the challenges from a desktop perspective included the need to upgrade physical desktops regularly and provide users with machines with good memory and compute power, to add applications in accordance with the changing business environment. Upgrading so many physical machines was a
daunting task and required a lot of time and manpower. From an IT perspective, there were other compliance and security-related challenges with respect to the provisioning of software tools during employee onboarding and offboarding and changing project requirements. Most of the security tools were device based and had limitations for effective implementation of information security policies. As the user data, applications and special privileges given to users for effective execution of a project were configured on the local desktop, users could not access the data and applications stored on their physical machines when out of office. This in turn prevented users from working from home or from other devices. And as a large percentage of users were either travelling or at customer sites, physical infrastructure such as workstations and PCs was not being fully used. There was an urgent need to ensure better utilization of workplace resources and cut down costs. One way to address the problem was to detach physical infrastructure from data, applications and compute through virtualization. KPIT was looking for a solution that
would help manage their data center/ desktops better, while also reducing the number and cost of physical machines. The company was looking to scale its infrastructure and enable employees to easily access corporate data from anywhere and over a broad range of end-user devices.
Virtualization at the desktop level
Given its issues with infrastructure manageability, KPIT decided to implement virtualization at the desktop level instead of refreshing PCs in the traditional way, as was the norm once every four years. To do this, KPIT chose the data center architecture and the UCS framework from Cisco, along with the VCE Vblock platform. KPIT deployed the Vblock platform for a virtual desktop infrastructure (VDI) implementation to serve 1,200 users across India. The Vblock 1, which is designed for large virtual machines, was deployed in a compact footprint and provided a mid-sized configuration to deliver a broad range of IT capabilities to the organization. Explaining the rationale for choosing the Vblock infrastructure, Marulkar says, “Though Cisco was a new entrant in this space, we believed the value of a pre-configured architecture which was tested thoroughly. In the Vblock platform, we found not only a certified solution but a convincing solution that could be deployed right out of the box, saving us time and cost in planning and architecting.” More importantly, Marulkar was impressed with the proposition of a single vendor who could be approached for any issue — whether it was related to the server, storage, network or virtualization. The single point of contact for support, which was either Cisco or its partner, added to the comfort of the customer and helped in reducing partner conflict. Post implementation, KPIT has managed to obtain optimum performance and right-size its infrastructure in tandem with changing business needs. The advantages of the solution design enabled KPIT to move a number of core applications onto the VDI platform in a short period of time. The implementation helped KPIT simplify and accelerate virtualization and enable the transition to a private cloud infrastructure. The Vblock platform also offers processing, network performance and storage capacities to support incremental capabilities, such as enhanced security and business continuity.

Benefits of a unified architecture: KPIT has been able to reduce the need for IT manpower for supporting end users by 75 percent for 1,200 users. The provisioning and de-provisioning of IT infrastructure for employees has reduced from a couple of days earlier to 15 minutes now. The use of thin clients will save about 100 watts of power per user.
Provisioning at the click of a button
Today, KPIT can add virtual desktops with ease, while also reducing costs, carbon footprint and the need for administrative personnel. Desktop provisioning can now be done at the click of a button. “The implementation has not only helped us right-size our infrastructure and improve its utility, but has also enabled much faster provisioning and de-provisioning of desktops and applications, irrespective of the number of users,” says Marulkar. The firm estimates that its data center footprint will reduce by 60 percent over a two-year period. Post implementation, KPIT has been able to reduce the need for IT manpower for supporting end users by 75 percent for 1,200 users. Since most of the application platforms reside in the cloud, onboarding and offboarding of employees is a lot easier in the virtual environment. The provisioning and de-provisioning of IT resources for employees has reduced from a couple of days earlier to 15 minutes now. In terms of energy savings, the use of thin clients rather than typical desktops will bring power consumption down to approximately 55 watts from 150 watts, a saving of almost 100 watts per user. This is significant when extrapolated to a large user base: at 1,200 users, it amounts to roughly 114 kW of continuous load. Pleased with the results, KPIT Cummins has deployed the Vblock platform for all core production applications, including messaging and unified communication using Microsoft Exchange 2010 and Lync 2010, the SAP ERP solution, and configuration management and project management systems. In the future, Marulkar plans to set up disaster recovery for the existing VDI setup. Simultaneously, KPIT is extending Vblock to accommodate 400 additional VDIs and setting up a self-service portal for more than 400 test and development environments on the new Vblock platform.

u Srikanth RP
srikanth.rp@ubm.com
Opinion
Towards the data center without walls
The data center is changing again — enterprises are evolving their data centers through consolidation, virtualization, data protection, and cloud services. A key business driver for enterprises is the need to manage growth at lower cost by being more efficient. Moving expenses from less flexible capital expenses to on-demand operating expenses is now a new option for data center infrastructure-based services. The data center is also changing for carriers and data center service providers. Services are increasingly evolving from static physical capacity co-location to managed services and now to Infrastructure as a Service (IaaS), turning computing and storage into on-demand, pay-as-you-go services. There are over 7 million data centers worldwide, in small and medium-sized companies, large enterprises and service providers, ranging from closet size to huge 500,000-plus-square-foot uber-sized facilities. As businesses move some of their data processing to the cloud, enterprise data center capacity growth stabilizes while cloud provider data center demand rises. For example, in 2008, Animoto was one of the initial success stories for Amazon Web Services: its Facebook application experienced such viral growth that 750,000 users signed up in three days. There was no possible way a startup company could install sufficient data processing capability to meet this instant demand, but Amazon was able to scale cloud-based server capacity to meet the business growth. Since then, the cloud use cases have continued to expand from consumer-driven demand to enterprise and government-driven demand, offering on-demand access to elastic pooled resources. The evolution of cloud towards expanded enterprise IT utility services,
and the service provider’s need to efficiently deliver cloud infrastructure services, drives the creation of a virtual data center architecture interconnected by a cloud backbone network. Multiple provider and enterprise customer data centers are connected to enable workload orchestration, traffic generation and flow. The physical walls of individual data centers are effectively broken down to create a virtual data center capacity encompassing multiple physical ones — a ‘Data Center Without Walls’. We use the term ‘Data Center Without Walls’ to describe an architecture that creates a multi-data center and hybrid cloud environment, and functions as one virtual data center able to address any magnitude of workload demand. A Data Center Without Walls benefits both the cloud service provider or carrier and enterprise IT customers, by creating seamless workflow movement and greater resource efficiencies. It enables effective asset pooling among data centers to deliver resource efficiencies of up to 33 percent for service providers, as well as increased resiliency and performance gains over isolated provider data center architectures. This new hybrid cloud architecture delivers improvements in enterprise economics, helping IT to achieve the 25 percent reduction in IT services and hardware expenses promised by cloud adoption. In order to benefit from these operational efficiencies, data center providers will need to pay particular attention to inter- and intra-data center connectivity needs, specifically in support of new cloud applications that have dependencies on network bandwidth, scalability, latency and security. The traditional approach to these network needs has been bulk capacity additions onto the network, but this leads to inefficient utilization and
higher costs. A more strategic approach is required to dynamically and intelligently adapt to the rapidly changing needs of cloud-based infrastructure services. Approaches such as a flatter, switching-based architecture and dynamically scalable bandwidth that offers the lowest latency possible should be explored. Apart from this, another approach is to adopt a network hypervisor-driven architecture that responds to application demands, where network resources ultimately behave the same way as cloud compute processing power, memory and storage, and can be billed as such. As the Data Center Without Walls tools for workload orchestration continue to mature, the distinction between the enterprise and cloud data centers will continue to blur. The data center of the future will be federated and intimately interconnected, with an intelligent network operating as the central offices of the future.
u Anup Changaroth is Director – Portfolio Marketing, Asia Pacific, Ciena Corporation
Feature
Data center automation: Why it’s time to move
Cloud environments are driving data center automation and making it more difficult to do. But if you want to free up resources for other priorities, it’s a must
By Michael Biddick
Shifting spending from IT operations to innovation and meeting business needs more quickly should be key goals for every CIO. One way to do this is with a highly virtualized data center. But IT won’t get the staff efficiency gains and cost savings from virtualization unless it automates more of the work IT does. The message is starting to get through: 36 percent of the 345 business technology professionals we surveyed are already using process automation tools, while 32 percent have an implementation project under way. Faster response time to service requests is the top process automation impact, cited by 25 percent of tech pros surveyed who have fully deployed process automation tools. Others cited reduced errors and shifting staff to new activities — which means freeing IT to focus on higher-value projects. Cash savings, though, can be elusive: just 11 percent cut staff, and 5 percent cite cost reduction as a key impact of process automation. Despite the lack of hard savings, private clouds are driving a new interest in automation; IT just can’t deliver highly scalable and more flexible virtualized data centers without it. Public cloud environments, in contrast, add complexity to process automation, as workloads tend to be set up in silos, but emerging cloud-broker technology provides a potential solution. The difficulty IT faces with a process automation project will largely depend on its systems: the more standardization, the easier
automation will be. Lots of homegrown applications and complicated workflows will increase the difficulty. Yet it’s still worth pursuing. IT leaders must get a better grip on strategic challenges like better governance and cloud integration, but tech teams at the same time can identify small-scale projects for quick wins in 90 days or fewer, like integrating hiring and firing processes with Active Directory.
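That Active Directory quick win is small enough to sketch. The fragment below shows the offboarding half using the open-source ldap3 library; the domain controller, service account and DN are placeholders, and a production version would be driven by events from the HR system:

```python
# Minimal offboarding sketch: disable a leaver's Active Directory
# account as part of an automated HR-to-IT workflow.
# Server, credentials and DN are placeholders.
from ldap3 import Server, Connection, MODIFY_REPLACE

ACCOUNT_DISABLED = 514   # userAccountControl: NORMAL_ACCOUNT + DISABLE

def offboard(user_dn):
    server = Server("ldaps://dc.example.com")
    conn = Connection(server, user="EXAMPLE\\svc-hr-sync",
                      password="<secret>", auto_bind=True)
    conn.modify(user_dn,
                {"userAccountControl": [(MODIFY_REPLACE, [ACCOUNT_DISABLED])]})
    conn.unbind()

# offboard("CN=Jane Doe,OU=Staff,DC=example,DC=com")
```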
The Governance Mandate
Automation must start with governance — a process for aligning IT and business priorities and then ensuring that execution follows a well-defined process. It’s discouraging that only 45 percent of IT pros who use or plan to deploy IT automation tools say governance is extremely important in process automation. That number should be higher. Governance frameworks such as Six Sigma, ISO 9001, ITIL, and CMMI can provide a significant head start in
helping to standardize and formalize processes. These frameworks won’t meet all of IT’s needs, but they can significantly reduce the time to establish core process areas. Two great candidates for automation are configuration and change management. Automating these can help reduce errors in the infrastructure and eliminate a lot of manual effort to keep systems up and running. Forty-two percent of our survey respondents say they’re automating or plan to automate a combination of operational and business processes. Automation is no longer relegated just to operational and data center processes but is being extended to ones that directly impact the business and customers, such as using an order management system to track customers’ requests for new IT services or changes to existing services. We thought that was good news until, later in the survey, we asked
how critical it is to automate 10 specific process areas. Topping that list are mostly hard-core IT operations processes: backup and restoration, disaster recovery, service fulfillment, incident management, and data movement. Toward the bottom were higher-value areas, including configuration management and provisioning. Many companies also have a lot of partially automated processes that require some human intervention and don’t provide the full cost savings promised by automation. Businesses must focus their automation efforts on finishing what they’ve started and “not simply focus on the newest problem area or concern,” says Phillip Arsenault, VP of process automation with Navy Federal Credit Union. “The tools available make it easy to leap forward, but they’re simply tools; we need plans and dedication to finish what we started.”

Are you using any tools that automate IT workflow or process? Yes, and it’s fully deployed: 36 percent. No, but a project is under way: 32 percent. No, and we have no plans to use them: 32 percent. (Source: InformationWeek 2011 IT Process Automation Survey of 345 business technology professionals, August 2011)

What IT processes are these systems automating or will they automate? A combination of operational and business processes: 42 percent. Operational and data center processes: 31 percent. Business and customer processes: 24 percent. Don’t know: 3 percent. (Source: InformationWeek 2011 IT Process Automation Survey of 234 business technology professionals at companies that have deployed or plan to deploy IT automation tools, August 2011)
The Cloud’s Impact
Growing use of public cloud computing services has had a significant impact on IT process automation. SaaS-based process automation tools have made it possible to automate elements of the integration and governance framework. For example, integrating premises and cloud-based applications and managing workflow between applications just wasn’t possible before without costly outsourcing arrangements. Our survey found some resistance to using SaaS-based automation tools — 59 percent of those using or planning to use automation are going with on-premises software. That’s most likely because of security concerns in integrating public cloud and on-premises applications. Still, 71 percent are open to a SaaS or hybrid automation approach. Companies using SaaS for critical applications face a number of process automation challenges, though, particularly when they’re from more than one provider. These applications are typically tougher to integrate into the enterprise environment, especially
areas like identity management and backup, which present vexing data silos outside the management domain. Identity and access orchestration usually top the list of challenges of automating processes that involve cloud applications. Large companies need automation tools to rapidly search, identify, and verify who’s accessing their systems. With even a few SaaS applications in the mix, there’s a much higher risk of not removing people from systems after they leave, and of failing to modify permissions as people’s roles change. Why? In large organizations, adding and removing access is typically centralized and automated, using access control systems that have well-established ties into enterprise applications — but often not with SaaS apps. Michael Wash, CIO at the National Archives and Records Administration, is developing a cloud e-mail pilot and using a cloud-based system to streamline the agency’s security-clearance tracking system. Wash is
also moving to a cloud service to use an automated system to concurrently manage and track cases for mediation of Freedom of Information Act requests. While these approaches may cut some costs involved in data center infrastructure operations and maintenance, the cloud likely also will add new wrinkles and barriers to process automation as systems are integrated across different security boundaries. The very nature of the SaaS architecture makes process automation harder than it otherwise would be. Several vendors offer APIs to integrate applications, but they often require a mini-development effort that most companies don’t want to deal with. If you’re considering SaaS-based e-mail, like Microsoft Exchange, you may not be able to integrate your Active Directory, so your single sign-on strategies will be less effective. If for cost or other reasons you’re using another provider for SharePoint, you’ll likely have another set of credentials, with policies around passwords that
What IT processes are these systems automating or will they automate? 3%
Don’t know
Operational and data center processes
31% 42% A combination of operational and business processes
24%
Business and customer processes
Source: InformationWeek 2011 IT Process Automation Survey of 234 business technology professionals at companies that have deployed or plan to deploy IT automation tools, August 2011
www.informationweek.in
Data Center
might not comply with your company's policies. Technologies like Security Assertion Markup Language, an XML-based open standard for exchanging authentication and authorization data, may help, but they don't solve the policy problem and may be more restrictive than the multifactor authentication mechanisms that many companies use. As more applications move to the cloud, IT teams are considering cloud broker technology that can enable, in part, process and workflow automation within both cloud and enterprise applications. This "EAI in the cloud" must use web services connections and navigate the security barriers of enterprise firewalls. The approach requires a greater degree of trust in the multiple vendors and providers companies are using than some of them are comfortable with. However, it may provide the answer to enterprise integration and workflow automation in the cloud. Cloud broker software connects into the target systems and applications, and manages the business logic for a wide range of process automation. DBSync, Dell Boomi, Hubspan, IBM Cast Iron, Jitterbit, SnapLogic, and Vordel are a few of the cloud broker vendors. The intelligence and workflow automation that these services can provide will be what really drives value in terms of automation and data exchange within a cloud environment. The SaaS approach to integration and workflow automation is also appealing from a price perspective. Boomi's small- and medium-business offering, called Starter, integrates two applications for USD 550 per month. Companies can integrate as many as seven applications and get additional features for USD 5,500 per month. These costs are substantially lower than those of a traditional enterprise application integration suite, even over several years. But you can only take advantage of these tools if your enterprise application supports the vendor's adapters. If you have custom and unsupported applications, cloud-based brokers won't be a viable alternative.
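Under the hood, most broker integrations reduce to the classic extract-transform-load pattern carried out over web-service APIs. Here's a minimal hand-rolled sketch of the kind of logic a broker's adapters package up for you; the endpoints, field names and API-key scheme are hypothetical, not any vendor's actual interface:

import requests

# Hypothetical REST endpoints standing in for two SaaS applications. A
# commercial broker such as Boomi or Cast Iron ships prebuilt adapters
# and a visual designer in place of hand-written calls like these.
SOURCE = "https://crm.example.com/api/contacts"
TARGET = "https://erp.example.com/api/customers"

def sync_contacts(updated_since, api_keys):
    """Copy recently changed CRM contacts into the ERP system."""
    # Extract: pull every contact changed since the last run
    contacts = requests.get(SOURCE,
                            params={"updated_since": updated_since},
                            headers={"X-Api-Key": api_keys["crm"]}).json()
    for c in contacts:
        # Transform: map the source schema onto the target's field names
        record = {"name": c["full_name"],
                  "email": c["email"],
                  "external_id": c["id"]}
        # Load: push the record into the target application
        requests.post(TARGET, json=record,
                      headers={"X-Api-Key": api_keys["erp"]})

sync_contacts("2012-06-01T00:00:00Z", {"crm": "CRM_KEY", "erp": "ERP_KEY"})

What the brokers add on top of this skeleton is the unglamorous part: credential management, retries, schema-mapping tools, and adapters for applications you can't modify, which is exactly why unsupported custom apps put them out of reach.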
The Private Cloud Challenge
The biggest new challenge — and new cost driver — for most companies is the automation of private clouds. These architectures try to mimic the efficiency and flexibility of public clouds inside a company's own data center; that means the only way to
Top IT process automation limitations
Integration with the existing environment is technically too difficult: 46%
We've partially automated some IT process areas, but they still require human intervention: 42%
Tools are too complicated and costly to implement: 42%
Tools are too expensive to purchase: 39%
Tools don't provide the functionality we need: 24%
Our processes aren't mature enough: 24%
Our organizational structure limits our ability to automate: 16%
No limitations; we're automating everything we can think of: 7%
Source: InformationWeek 2011 IT Process Automation Survey of 234 business technology professionals at companies that have deployed or plan to deploy IT automation tools, August 2011
achieve cost savings is to get as many applications and processes as possible using the private cloud. Public cloud services like Amazon, Salesforce.com and Google have been successful because they can tap into a massive user community, unmatched in even the largest enterprises. These providers have invested millions in data center process automation, including service requests, provisioning, and alert detection. Unfortunately, many companies are still burdened with legacy applications that aren't adaptable to virtual private clouds, and IT often lacks a full appreciation of the investment required to automate processes related to those apps. CIOs must wield an iron fist and push for standardization wherever possible — especially in companies where decentralized budgets and power bases prevent the deployment of unified business processes. Business units will often cite their unique business or mission requirements as reasons for not conforming to a standard. Most of this is just resistance to change. One goal of private clouds is to make it easier for users to request services, and it's hard to make things look simple. IT needs a disciplined process from service request through provisioning and managing services in order to automate these technical tasks and reap the benefits of scale, cost savings, and rapid on-demand provisioning. The best server provisioning, orchestration, and management tools won't compensate for a lack of process. Process automation in the cloud requires a different paradigm and often a different set of tools. Unfortunately, the management software is often a silo application specifically for the virtualized data center. This means that architects need to determine a way to integrate those applications into the rest of their enterprise management environments if they hope to gain a complete process picture. As resources are provisioned, it's easy to lose track of them unless IT has solid control over its inventory. Trust
us — virtualization sprawl will be even worse in a private cloud.
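Keeping that sprawl in check starts with routinely reconciling what the hypervisor is actually running against what the inventory says should exist. A minimal sketch of that check, with hard-coded stand-ins in place of real hypervisor and CMDB API calls:

# Reconcile live VMs against CMDB records. Both data sets below are
# illustrative stand-ins for calls to the hypervisor management API
# and the configuration management database.
def find_sprawl(running_vms, cmdb_records):
    """Return VMs nobody owns, and inventory records with no live VM."""
    registered = set(r["vm_name"] for r in cmdb_records)
    running = set(running_vms)
    orphans = running - registered   # running, but not on the books
    ghosts = registered - running    # on the books, but gone
    return orphans, ghosts

running = ["web01", "web02", "test-tmp-47"]
cmdb = [{"vm_name": "web01", "owner": "ecommerce"},
        {"vm_name": "web02", "owner": "ecommerce"},
        {"vm_name": "db01",  "owner": "finance"}]

orphans, ghosts = find_sprawl(running, cmdb)
print("unregistered VMs: %s" % sorted(orphans))    # candidates for reclamation
print("stale CMDB entries: %s" % sorted(ghosts))   # inventory needs cleanup

Run nightly from the provisioning system, a report like this is a cheap first line of defense before anyone invests in heavier lifecycle-management tooling.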
Overcoming Challenges
Integration difficulties top the list of IT process automation challenges, with 46 percent of those deploying or planning to deploy automation tools saying that integration with existing environments is technically too difficult. This is likely a result of companies having a slate of disparate legacy applications that can't communicate and exchange data with one another. Respondents also say automation tools are too expensive to purchase (39 percent), are too complicated and costly to implement (42 percent), and don't provide needed functionality (24 percent). They also say that their processes aren't mature enough for automation (24 percent). New delivery platforms are making it easier and less expensive to automate processes across large companies. IT shops need to assess those tools with a clear understanding of how much they're spending on manual processes such as provisioning, order entry, service, and new user activation and deletion. These, along with many other processes that touch dozens of IT systems and people, are probably costing IT far more every year than most shops realize. Maybe it's time for a new look at the cost-benefit picture.
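That new look can start with arithmetic this simple. The sketch below is a back-of-the-envelope model only; every figure in it, including the default 80 percent time saving, is an assumption your own team would have to replace with measured numbers:

def automation_roi(runs_per_year, hours_per_run, hourly_rate,
                   first_year_tool_cost, time_saved_fraction=0.8):
    """Rough first-year payoff from automating one manual IT process.
    All inputs are estimates the team must supply; the 80 percent
    time-saving default is an assumption, not a benchmark."""
    manual_cost = runs_per_year * hours_per_run * hourly_rate
    savings = manual_cost * time_saved_fraction
    return savings - first_year_tool_cost

# Example: 600 provisioning requests a year at 2 hours each and USD 40
# per loaded hour, against USD 25,000 in first-year tooling and setup.
net = automation_roi(600, 2, 40, 25000)
print("first-year net benefit: USD %.0f" % net)   # USD 13,400 here

Even a crude model like this forces the two questions that matter: how often the process really runs, and what a run really costs in people's time.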
Automation Success Factors
Before leaping into process automation, you must set realistic expectations with your team, management, peers, and other stakeholders. You’ll need defined requirements, as well as configuration and customization plans. The automation tools aren’t plug and play, and the effort requires a long-term commitment. One of your top priorities needs to be getting everyone to understand that change is expected and necessary. Start the project by considering some key performance indicators that you’ll use to track success. Get baseline metrics on the time and effort needed to complete certain components;
that can help identify some quick hits that you can automate first. Having visibility into key metrics will also help build a case for more automation, as management sees the impact of the first round of changes. A frustration we often see is executive leaders who are only partially committed to change. This is often a result of the team not providing sufficient information on all aspects of the project. For example, the CIO may not have been told that a configuration management database or service catalog will be required for successful automation of a process, or that there’s a gap between existing tools and the desired end state. It’s important to arm leadership with all the costs, risks, and benefits, and then
seek a clear directive to move forward. Only this level of commitment will lead to long-term results. If you're looking for a quick win, consider focusing on change or configuration management — two areas that are often in need of automation. Rapidly move through the planning stages to keep up energy and excitement levels, and make sure to provide the flexibility to adjust as the organization grows and changes. When considering IT process automation applications, we recommend automating one process or process area that will best demonstrate the value of the tool. Do a quick ROI study: determine how much time you spend on the manual process and see what the impact would be if it were automated. Here are some places where we've seen quick wins:
• Integration with Active Directory for new hire and termination processes;
• Updating network and server configurations;
• Creation of backups and restarting of failed jobs;
• Data center failover;
• Creation of trouble tickets from network management systems;
• Server and application compliance scanning;
• IT problem identification, diagnosis, and remediation;
• Prioritization of IT response to outages based on service impact.
Security is another way to make the case for automation. The complexity of complying with multiple laws, regulations, and best practices, and the increasing volume of vulnerabilities, make automating IT processes a must. Beyond a configuration management database, process
automation repositories can be used to centrally design, report on, and call execution commands to external systems to enforce policies. This will help IT manage policies across desktops, servers, mobile devices, and even networks. It will also let IT create regulatory compliance repositories — very useful when audit time comes around. Aside from audits and meeting best-practice guidelines, process automation can help control end-user devices, reducing downtime and user complaints about poor performance. Malware infections can be radically reduced by ensuring that the right security software is installed and up-to-date. Every organization wants to make IT less complex, automate processes, and spend less time on routine tasks. While the full promise of autonomic computing is still far off on the horizon, IT can make considerable progress with the tools available today.
Source: InformationWeek USA
Feature
Software-defined networking: A no-hype primer
Go beyond the sudden buzz about software-defined networking. Here's what it is, what it isn't, and what's ahead
By Kurt Marko
The biggest buzzword at this year's Interop conference in Las Vegas was software-defined networking (SDN). Not only did NEC's ProgrammableFlow PF6800 Controller win the best of show award, but SDN in general and OpenFlow in particular also caused near-constant debate in the convention center. IT analysts spread the fervor last week, with IDC estimating SDN will be a USD 2 billion market within four years. With all of this hoopla, it's easy to forget that just a couple of years ago, OpenFlow was a Stanford research project and SDN was an unchristened buzzword. But at this early stage in the hype cycle, many IT practitioners are still wondering what all the excitement (and yes, some disdain) is about. Let's examine the key facts about SDN.
What is SDN?
SDN is nothing more than the separation of network data traffic processing from the logic and rules controlling the flow, inspection, and modification of that data. Traditional network hardware, i.e., switches and routers, implements these functions in proprietary firmware, partitioned respectively into what are known as the data and control planes. SDN, the OpenFlow project being the most famous example, pulls these apart, such that the control logic is executed as a distinct software application. The packet processing, i.e., data movement
and forwarding, is still handled in hardware, but SDN-optimized switches can be relatively simple and are often built from commodity ASICs (so-called merchant silicon), not proprietary designs. Some call this "virtualizing the network," in the sense that each individual hardware switch may be part of multiple Layer 2 and Layer 3 networks and have its configuration and traffic management policies dynamically changed by the master network controller.
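To make "the control plane as software" concrete, here's what programming a switch can look like through a controller's northbound interface. This sketch posts a static flow entry to a Floodlight controller's REST API; the endpoint and field names follow Floodlight's documented static flow pusher of the era, but treat the controller address, switch DPID and port numbers as placeholders:

import json
import requests

# Push a static flow entry to a Floodlight OpenFlow controller.
CONTROLLER = "http://192.168.1.10:8080/wm/staticflowentrypusher/json"

flow = {
    "switch": "00:00:00:00:00:00:00:01",   # datapath ID of the target switch
    "name": "port1-to-port2",              # a label for later edits or deletes
    "priority": "32768",
    "ingress-port": "1",                   # match traffic arriving on port 1
    "active": "true",
    "actions": "output=2",                 # forward matching traffic out port 2
}

resp = requests.post(CONTROLLER, data=json.dumps(flow))
print("%s %s" % (resp.status_code, resp.text))

The switch itself only ever sees OpenFlow protocol messages; everything above runs as ordinary server-side software, which is the whole point of the architecture.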
Why all the excitement now?
Part of the buzz is just the natural tendency of the IT community to create and then jump on technology bandwagons. Just like server virtualization, network fabrics and clouds, SDN is a new approach to solving real problems. However, like all emerging technologies, SDN is immature, and so amorphous as to serve as a convenient panacea for all that ails network engineers these days. Whether it's VM proliferation and the accompanying rise of largely opaque virtual NICs and switches, the increase in server-to-server (so-called "east-west") network traffic and the resultant need for flat, multipath edge networks, or the consolidation of data and storage traffic onto a common Ethernet: you name it, SDN is the (latest) answer. Add in the fact that major equipment vendors from Arista to VMware have been amping up the SDN public relations volume, and you have a combustible mix.
How does SDN change my network? What's different?
From a topological standpoint, SDN needn't change your network at all, although it can make wiring up very wide, non-blocking, flat, two-tier "fat tree" networks replete with VMs and virtual NICs much easier, since you don't have to worry about the alphabet soup of multipath networking standards like SPB, TRILL, MC-LAG, VEPA, or EVB. SDN networks look more like FAA-controlled airline traffic than autonomous cars and trucks on the Interstate. As such, SDN networks are completely dependent on the controller; if it goes down, traffic can still flow over previously established paths (switches will remember their prior instructions), but new clients or link failures will wreak havoc.
Is SDN just a switching technology, or is there more to it?
The initial focus of OpenFlow has been on software-controlled switching, because the network controller is essentially a server-based application; for example, the Big Switch Floodlight controller is a Java application that runs on Linux or Mac OS X. But SDN enables other forms of application-controlled network traffic. In one example cited in the original OpenFlow research paper, the controller is used to define and enforce network-wide application usage and
client admission policies, acting as a sort of combination application firewall and NAC appliance. Sketching out one usage scenario, the authors write that the controller could check each "new flow against a set of rules, such as 'Guests can communicate using HTTP, but only via a web proxy,' or 'VoIP phones are not allowed to communicate with laptops.'" Thus, the controller not only makes decisions about packet flows based on source and destination port and address, but can also modify flow behavior by user (or group) and application type. Another example, which NEC demonstrated at its Interop booth, was detecting video requests and automatically redirecting clients from a remote video server to a local caching proxy.
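Rules like those quoted above boil down to a per-flow decision the controller makes in ordinary code. A framework-agnostic sketch of that decision step follows; in a real deployment the device classifications would come from an inventory or NAC system, so the hard-coded mapping here is purely a stand-in:

# The policy step an SDN controller runs when a switch reports a new flow.
DEVICE_TYPE = {
    "10.0.0.5": "voip_phone",
    "10.0.0.9": "laptop",
    "10.0.0.7": "guest",
    "10.0.1.1": "web_proxy",
}

def allow_flow(src_ip, dst_ip, dst_port):
    src = DEVICE_TYPE.get(src_ip, "unknown")
    dst = DEVICE_TYPE.get(dst_ip, "unknown")
    # 'VoIP phones are not allowed to communicate with laptops'
    if {src, dst} == {"voip_phone", "laptop"}:
        return False
    # 'Guests can communicate using HTTP, but only via a web proxy'
    if src == "guest" and dst_port == 80 and dst != "web_proxy":
        return False
    return True   # default-allow; a stricter site might default-deny

# The controller installs a forward rule or a drop rule based on the verdict.
print(allow_flow("10.0.0.5", "10.0.0.9", 5060))   # False: phone to laptop
print(allow_flow("10.0.0.7", "10.0.1.1", 80))     # True: guest via the proxy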
What vendors are pushing SDN?
The most visible SDN advocates are members of the Open Networking Foundation (ONF), a consortium of organizations founded in March 2011 and chartered with the development of SDN standards based on OpenFlow. There are currently more than 60 companies in the ONF, everyone from cloud operators like Facebook and Google to networking heavyweights like Cisco and Juniper. Of course, OpenFlow, which can use relatively dumb switches built from commodity components, seems to threaten the incumbent network providers' fat profit margins, so their membership in the ONF could presage their taking a page from Microsoft's playbook to "embrace, extend and extinguish." In fact, Cisco CTO Padmasree Warrior hinted at Cisco's long-term SDN vision in her Interop keynote, which described the company's notion of an "Open, Programmable Environment" that brings APIs, programmability, and control to multiple network devices, applications, and layers, not just switches and flow control. That strategy could be the heart of what Cisco's super-secret spin-in company, Insieme, is developing.
Are there any standards yet?
Yes, for OpenFlow. The ONF has two "standards" (the ONF is an industry consortium, much like the Wi-Fi Alliance, not an independent, internationally recognized standards body like the IEEE or ITU): the OpenFlow Switch Specification 1.2 and OF-Config 1.0. The former defines the capabilities switches must support to operate correctly in an OpenFlow-controlled network, while the latter describes a configuration and management protocol for those OpenFlow switches.
What do critics say about SDN?
OpenFlow skeptics point out that we're too early in the technology's lifecycle to make any useful assessments of its value, impact, or longevity. A substantive critique comes from my colleague at Network Computing, Mike Fratto, who predicts that OpenFlow will be dead by 2014. His fundamental argument: enterprise network operators are inherently conservative and risk averse, and thus very unlikely to ditch sizable investments in proprietary Cisco gear, regardless of the potential feature benefits or cost savings of an OpenFlow network. Fratto ultimately believes SDN technology will be absorbed into more sophisticated network configuration and management tools, a notion that's consistent with Warrior's call for a holistic view of SDN as something beyond just network flow control.
Where are we in the SDN technology lifecycle?
Although SDN is in its infancy, little known outside the research community even a couple of years ago, there were plenty of OpenFlow-compatible products on display at Interop. Of course, NEC's controller won best of show, but InteropNet
(a cornerstone of the Interop event, where innovative vendors and volunteer engineers come together to create a completely interoperable network using the industry's most cutting-edge technology) had an OpenFlow test lab with switches from Arista, Brocade, Dell/Force10, Extreme, Huawei, HP, IBM, Juniper, NEC, and Pronto Networks. In addition, both Spirent and Ixia have OpenFlow-compatible test equipment, while Big Switch and NEC have commercial controllers, so building a working enterprise-grade OpenFlow network is entirely feasible.
Source: InformationWeek USA
CIO Life
‘My inspirations from life’
I am who I am, and where I am, because of the decisions I made in a journey of 42 years. Those decisions were influenced by my experiences and by lessons learnt from those who inspired me or those I followed. What you learn in B-school and professional colleges could be subject knowledge or dope on strategy, planning tools, and conceptual/application-oriented understanding of people, financials, markets and technology. Being a learner, what I learn from life — through experiencing the moments and drawing inspiration from people — forms the school that I choose to never graduate from.
My Early Years
I come from a lower middle class family. Shuttling with my father across the Delhi-Haryana belt through my childhood taught me lessons on hard work — that there are no shortcuts in life, and that one has to earn and deserve every bit of what one gets. I still remember how my father used to travel 20 km to office by bicycle, even at the peak of Delhi's summers and winters, to save bus fare. I saw my father as a standing example of hard work, discipline, perseverance and commitment. My elder brother and I didn't have the luxury of being sent to the best schools and colleges, or of formal training in our hobbies; however, we
were happy — a complete family of four, each of us unique and strong individuals. As a growing boy, analyzing differences in people's thinking and behaviour was an early lesson I learnt. I was serious about academics and aimed at a successful career. I worked hard, scored ranks and qualified in the entrance tests of reputed engineering institutes; I believed that my high scores would take me to where I am today, professionally. But life had something else in store — we couldn't afford the fees. I put up my case to my dream institute, stating, "I am an outstanding student!" The response I got was, "Every student here is outstanding." How true! I asked myself,
A fast-rising performer, N Nataraj, Global CIO and Head (SVP) of the IMS business at Hexaware, has been one of the rare CIOs to leverage IT to create business opportunities for his firm. He incubated the IMS service line within Hexaware, and in a short span of time it has come to contribute approximately 5 percent of revenues. He has also been instrumental in developing Hexaware's private cloud platform, which is opening doors for his firm into several businesses. President of the CIO KLUB's Bangalore Chapter, and winner of several prestigious awards such as the Global CIO award from InformationWeek, N Nataraj has a career and life that are inspirational for many IT professionals. He shares his key inspirations from life, and the lessons he has learnt from every individual, with Srikanth RP. Here is Nataraj in his own words
what is my differentiator? I couldn’t find an answer then.
Games teach you more than sports and fitness
I pursued normal education, focused on gaining subject knowledge, and found ways to earn as I learnt. More importantly, I pursued interests that didn't cost much. Chess, the game that teaches you the strategy of making the right moves and how important it is not to make the wrong ones, is one such interest, passed down to me by my grandfather. My grandfather — a government employee, a man of principles and an ace chess player — has been a great source of inspiration. He also pursued varied interests in homeopathy, gardening, tailoring, music, electronics and mechanics, with passion. From him I learnt the passion that he personified, and patience. I was college champion in Chess and represented my team
in the Khurukshetra University Championship. Golf is another game that has intrigued me over the last decade or so. I played the game of the halves for the first time at the Cupertino (California) golf course in 2005. Till last year, I was playing almost every weekend at the Karnataka Golf Association (KGA) and the Army Golf Course (AGC) in Bengaluru. I have also participated in a couple of championships since 2005 — the Cisco Golf tournament in Eagleton, CRY at KGA, ICICI in Eagleton, etc. I didn't gain enough proficiency to win a tournament; however, I managed to win a couple of prizes for the longest drive and for putting. I learnt that it's not the power but the precision, timing and tact that matter in every stroke. It's the same ball, the hole being the goal; however, one needs to choose the club, and that can make the difference. I learnt that the rules of games are not the same — sometimes the lowest score means winning. And most importantly, I learnt that it is not talent alone that makes a player the best in a sport. Consider the case of Eldrick Tont Woods: is it the player, or the player plus his coaches, that makes him Tiger Woods, the giant of a man? I realized the importance of a coach in one's life in becoming the best. Opportunity is not always obvious, and success is hugely dependent on what you do next — this is the big lesson I have learnt while playing Chess and Golf.

Know your Roots

History and mythology have interested me over the years, though I am not a religious-ritualistic person, nor into any cult or formal followership. If I had a boon to be granted, Lord Krishna is a role I wish to play. Is it the epic Mahabharata, or the Bhagavad Gita, or Krishna himself that mesmerizes me more? I don't know. What I do know is that a portion of my mind and being is engulfed and guided by Him — call it "the concept" or "the being." We say the means take you to the end, but to me it has been the means to the means; the end is Karmanye Vadhikaraste Ma Phaleshu Kadachana (do your duty and be detached from its outcome). You have very little say in the end, but complete say over everything to do with the means; all that one has in one's circle of influence and control is what and how one thinks, does and communicates. Chanakya is another guru from history whom I look up to. He is an institution of governance, political science and management. He is an example to emulate on how professors can play a role in nation building and how managers as coaches can enable high performance in organizations. Sometimes television can be your teacher; I thank Dr BR Chopra and Dr Chandraprakash Dwivedi for enabling my deeper interest in Krishna and Vishnugupta through their ventures on Indian television, directed so simply that every Indian child can relate to them. And I will always be envious of Nitish Bharadwaj and Dr Dwivedi, amongst a few others, for enacting these roles.

Movies influence lives; my cinema icon

No one can be spared from being influenced by Indian tinsel town and its larger-than-life heroes. Is there anyone living who is larger than life? Well, I have had only one image in my mind since adolescence — the silver screen legend, Amitabh Bachchan! The man behind this actor is another hero of my life. More than the characters he has played on reel, I admire his real character. He, to me, is many virtues personified; the fighter and pioneer that he has been in so many areas has inspired me over the years. How can a hero standing in front of a mike enact a five-minute song with no change in costumes, scenes or dance movements? Isn't this breaking the rules? This legend has been my inspiration through life's struggles, and I feel child-like excitement even today when my sister Malavika (a cousin, but more than a sister) refers to me as AB or Big B (Ajay Bhaiya or Big Brother). Ajay, my name at home (meaning unconquered), was christened by my father and has been a source of subconscious positivity and affirmation.
In reflection, I believe this name has been a source of tremendous resilience through my life's struggles. Nothing in life has ever come easy; however, everything that matters the most has been the best I could have asked for, including my family, friends and professional acquaintances.
Four important women in my life
Did my name (Ajay) have anything to do with my decision to marry Jay, my life partner? Jayasree, my better half, is the other woman behind me — the first being my mother, a bold lady with a can-do attitude, whose leadership traits are the genes that run in me. Jayasree, my companion, is the source of my inner strength; she provides me unconditional support and makes me understand the meaning of life partnership. Living with her, I have strengthened my lessons in humility, equanimity and selflessness. The third lady of my life, our little angel Kritika, helps me re-live my childhood and rejoice in the bliss of sheer innocence. While traveling, when I tell Jay that I miss the family, she comments that I miss home food more! Oh yes, idlis are my only weakness, one that I am bereft of on my trips to the West. Last but not least, the fourth lady — my little sister Malavika — has taught me the power of resilience.
Learning from my brother
During my early years, I was a complete introvert and focused more on studies, while my brother gave equal importance to extracurricular activities. To me, anything other than studies was a waste of time. However, over time, I learnt from my brother the importance of focusing on other aspects of life. I still remember his words: "McDonald's sells more because it is packaged well." How true! Content alone is not good enough unless it is presented well.
Professional influence
I have learnt from the many individuals I have interacted closely with: my teachers, colleagues and
• Nataraj's name at home is Ajay, which means unconquered
• He was college champion in Chess and represented his team in the Khurukshetra University Championship
• Nataraj is an ardent fan of Amitabh Bachchan and regards the man behind the actor as another hero of his life
• While travelling, Nataraj misses home-cooked idlis the most
seniors — the likes of Ashok Soota (Founder and Executive Chairman, MindTree); Chandra (VC; former President, Wipro; CEO, Mastek, e4e, Aztec); Govi (founder member, Aztec; CEO, Perfios); Chandra (KBC; Chairman, e4e); Subroto Bagchi; RV Ramanan; Samir Bodas (CEO, iCertis); and Atul and Sekar (my current Chairman and CEO). They took a risk and gave me the opportunity to head the IMS business apart from my CIO role, and today that is the fastest-growing horizontal in Hexaware. Achievers like Gandhi ji, Steve Jobs, Bill Gates and Randy Pausch have made a huge impact on my life — several soft aspects of these gentlemen have moved me. Slumdog Millionaire was a story about how every experience in your life can make you a millionaire. My experiences have helped me become a richer human being, and with all humility I want to state that I certainly have not arrived; I see the journey as the reward. And that's what keeps me going. The key to life, I believe, is to
discover oneself and to serve, and this has been my journey ever since "the question" appeared in my mind — Who am I? I feel blessed that I can see and connect with people beyond their degrees, titles, popularity and lifestyles. I am not the bunch of interests and hobbies that I pursue, or what I think, say and do (or often don't); I am an explorer enjoying the journey called life. This is the first time I have been asked to share my personal experiences (I am not sure if they are of interest to others). Just as Randy Pausch saw his last lecture as an opportunity to leave a message for his children, perhaps this is my opportunity to thank those (many of whom I have mentioned here) who have been catalysts in making me learn the lessons of life. For the person that I am today, I want to thank my many teachers. Here's a heartfelt "thank you!" (For comments on this article see: http://bit.ly/IQW0rs)
u Srikanth RP srikanth.rp@ubm.com
Feature
Are smarter buildings a reality in India?
More and more Indian companies and developers are adopting the smart building concept, which in turn is generating a huge revenue opportunity for vendors like IBM, Cisco and Capgemini
By Vinita Gupta
Energy shortage is an acute problem in India, one that affects the country's economy. Water and energy savings are much discussed, but are we really doing anything about them? Some Indian companies have actually gone beyond discussion and are doing their bit to address the issue through initiatives such as smart and green (or sustainable) buildings. Smart building technologies are believed to save up to 30 percent of water usage and 40 percent of energy usage. They also help reduce building maintenance costs by 10 to 30 percent and ensure better health and safety. The smart building concept covers not just buildings but also campuses, towns and cities. Today, many companies, especially IT firms like UST Global, Infosys, and EMC, are adopting the concept.
Some real adoption in India
UST Global is building a smart and intelligent campus (about 36 acres) in Trivandrum, Kerala. The company engaged Cisco during the architectural design phase of the campus to put in place a solution that can support its existing and projected business requirements. The first building of the campus (about 800,000 square feet) is to be completed by November 2012. The scope of the project includes integrated energy management systems delivered over a converged IP network. This covers centralized monitoring and control of elevators and of heating, ventilating and air conditioning; a Network Operations Centre (NOC) and a Security and Facilities Operation Centre (SFOC); and operations for physical security, building operations and
maintenance. Alexander Varghese, Vice President - Corporate Services & Country Head, UST Global, asserts that the smart building concept should be implemented at the design stage itself, so that the equipment and materials purchased are compliant with the concept. He informs, "The smart building solution will help us save on energy (about 15 percent by November 2012) and maintenance, as it is IP-enabled, and will also provide security and safety for our employees and customers." Likewise, Smart Building Systems (SBS) have provided Infosys with valuable operational data on a building's energy consumption, energy demand and comfort, which has helped the company verify the impact of its design choices. "SBS helps us monitor our
equipment and allows us to take predictive maintenance action and control energy loss. In addition to energy, we are also applying SBS to monitor water consumption. We believe that this will provide us useful insights in conserving water in our buildings, as it has done for energy. We believe that we can achieve our energy efficiency goal with disruptive designs that use SBS, with the support of employee behavior changes," states Rohan Parikh, Head of Green Initiatives at Infosys. Infosys is working towards implementing SBS in all its campuses. A few examples are Mysore (1.1 million sq ft), Trivandrum (350,000 sq ft), Mangalore (500,000 sq ft), Chennai (1.8 million sq ft), Hyderabad (1 million sq ft), Pune (1.2 million sq ft), and Jaipur (300,000 sq ft). Similarly, Godrej Bhavan in
Mumbai is a high-performance green building that maximizes operational efficiencies while minimizing environmental impact and lowering energy and operating costs. EMC is another company that has put together an emergency response system to control the operating costs of building maintenance. The solutions EMC has implemented from Cisco include an automated car park system, a personal virtual office, and office resource management that automatically turns on lights and air conditioning when required.
Developers building smarter homes
Not just companies, but real estate developers like Wave, Sobha Developers and Mantri Developers are looking at the smart building concept, as many people today are buying smarter homes. For example, Wave City, a 4,500-acre township in Ghaziabad (Uttar Pradesh), is installing smart systems to improve the quality of life for residents while keeping operating costs low. Wave has engaged IBM
to provide a roadmap for its green-field township. The city will be a township where all amenities, such as clean water, energy, transportation, public safety, education and healthcare, will be integrated and managed by smart devices using sensors and other intelligent communication tools. With all these devices connected to a central command center, the city will be able to record and respond to events much faster and in a more coordinated manner. "In the Wave Township, the users will be able to use smart meters that would help them track their usage pattern. The users will be able to
measure their water, gas and energy usage," informs Manpreet Singh Chadha, Joint Managing Director of Wave. Likewise, realty major Sobha Developers recently launched its first smart home project, Sobha Habitech, in Bengaluru. All apartments in this residential community will be equipped with a patented smart home automation technology. These homes aim to provide a safer, more energy-efficient and more convenient environment for residents. Along similar lines, Mantri Developers announced its collaboration with Cisco to develop new ICT-enabled real estate models,
5 reasons to go for 'smarter buildings'
• Smarter buildings can reduce energy usage by up to 40 percent
• They can save up to 30 percent of water usage
• They can help reduce building maintenance costs by 10 to 30 percent
• Smarter buildings reduce carbon (CO2) footprints and operational costs
• They guarantee better health and safety
'Mantri Connected Communities and Smart Homes'. This collaboration aims to provide connected and sustainable communities and an enriched living experience to residents. The smart building concept will not just help developers generate revenues; it is also a big opportunity for the vendors helping companies and developers implement the concept.
Big opportunity for vendors
According to ABI Research, the market for IT-enabled building automation will reach more than USD 30 billion by 2015. Vendors are tapping this opportunity by building capabilities and developing partnerships. Smart building is easily a USD 3 billion extension for IBM's hardware, software and services. IBM recently announced a partnership with Ingersoll Rand to provide remote energy and asset management solutions for the Indian market. Under this partnership, IBM will bring analytics and other solutions to Ingersoll Rand clients like Godrej and Le Meridien Hotel. Cisco has around 6-10 customers in India to whom it provides different smart building offerings. Valsan Ponnachath, Senior Vice President, Cisco Services India & SAARC, says, "The fragmented real estate industry in India is a major challenge. There are only a few big players, and hence there is a need to create awareness about the concept. We have good and large projects, including government projects, that will help us create awareness among small players."
Similarly, Capgemini is planning to invest in and develop new projects in India. The company has done a pilot project with Tata Power on smart metering in Mumbai that covered 1,500 commercial and industrial (C&I) customers of Tata Power. The company will participate in the Smart Grid pilot projects that are coming up in India. "We started our Smart Energy Services initiative in 2005, and since then the company has added about 50 projects worldwide. We have seen huge growth of about 55 percent in the smart building initiative. Globally, we have about 8,600 people in the smart building project team," shares Perry Stoneman, Vice President and Global Leader, Smart Energy Services, Capgemini. Vendors are offering, and companies and developers are building, smart buildings, campuses, cities and homes. But does this concept cost the end-user a bomb?
What about the cost?
It is true that there is increasing awareness about sustainability in India, but the question is how cost-effective these buildings are. Companies and developers believe that the initial investment for green
buildings is not higher than for standard buildings, and that there is a huge cost saving in the long run. "We have not made high investments compared to the returns. We have made about USD 50 million of investments for the first phase, which includes the construction charges, solutions from Cisco, etc.," says Varghese of UST Global. Similarly, Godrej's project, which cost approximately USD 125,000, has resulted in a 15 percent reduction in annual energy costs, representing a simple payback of less than four years. According to Dinesh CS, Director - Technologies at Biodiversity Conservation India, one can get efficient systems for just 5-7 percent of the base cost. He says, "We put in a sensor for less than a thousand dollars that controls the temperature, water, etc. There are many data centers that are over-designed and add cost instead of saving it. Inefficiency in the design is the root cause of the problem. It is important to blend natural resources (solar energy) and technology together." Though the cost of building Wave City is around INR 150-200 crore, the cost of the flats is not too high. "Usually automated devices (which help residents control devices like ACs from anywhere) cost more, but in the Wave Township we have used smarter devices instead of automated devices," Chadha explains. Along with the immense environmental benefits that smart buildings offer, they are a win-win concept for all — companies, developers, vendors and end users.
u Vinita Gupta vinita.gupta@ubm.com
Technology & Risks
Can hackers target a pacemaker?
Avinash Kadam
Serious investigations and experiments have revealed that medical implants like pacemakers and insulin pumps are vulnerable to cyber attacks that could endanger their users’ lives
It seems there are no devices that are hacker-safe any longer — from ATMs, cars and smartphones to medical devices. Though it may sound like the plot of a crime thriller, serious investigations and experiments have revealed that medical implants like pacemakers and insulin pumps are vulnerable to cyber attacks that could endanger their users' lives. As per the findings published in the paper Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses, implantable medical devices (IMDs) — devices implanted inside the human body — use embedded computers and wireless radios, making them vulnerable to hacking. These IMDs include pacemakers, Implantable Cardioverter Defibrillators (ICDs), neurostimulators, and implantable drug pumps, which use embedded computers to monitor chronic disorders and treat patients with automatic therapies. For instance, an ICD that senses a rapid heartbeat can administer an electrical shock to restore a normal heart rhythm, and later report this event to a healthcare practitioner, who uses an external device with wireless capabilities to extract data from the ICD or modify its settings. Although the advantages of wireless technology in medical devices are manifold, the recent findings raise serious concerns about the safety of patients using such devices. The possible attack scenarios identified affect both the privacy of the patient and the patient's safety and security. Scenarios attacking the privacy of a patient include determining whether the patient has an ICD, the type of ICD, and the serial number of the ICD. The hacker can also identify the patient's name and obtain his history. Finally, and most dangerously, the hacker can change device settings, change or disable therapies and deliver a command
shock. So there is an actual possibility not only of obtaining private information about the patient, but also of killing the patient. Another attack scenario is to run down the battery and thereby carry out a power 'denial of service' attack. Other weaknesses, like insecure software updates or buffer overflow vulnerabilities, can also be exploited by hackers. The research team has suggested various zero-power defenses against such attacks. Zero-power notification provides audible alerts of security-sensitive events at no cost to the battery. Zero-power authentication uses a cryptographically strong protocol that authenticates requests from an external device programmer. And sensible key exchange combines techniques from both zero-power notification and zero-power authentication for vibration-based key distribution that a patient can sense through audible and tactile feedback. Against the backdrop of this article, I read another news item, which talked about how doctors are using iPads to reprogram implanted cardiac devices and for real-time monitoring. Obviously, the instructions will travel over wireless, Internet or even mobile networks. I assume enough care has been taken to ensure that there is no possibility of either a passive attack or an active attack on the network. The journey of wireless IMDs from the research labs to commercial use has not taken very long. I only hope that the journey from a security research paper discussing the possibilities of an attack to an actual attack does not happen too soon. The designers of wireless IMDs should keep the security of patients as their primary goal, not the convenience of doctors. u Avinash Kadam is an Information
Security Trainer, Writer and Consultant. He can be contacted via e-mail awkadam@mielesecurity.com
Global CIO
Can software help Infosys out of its financial rut?
Chris Murphy
A digital marketing platform called BrandEdge is the company’s newest attempt to sell software, not just IT and BPO services. It’s got its work cut out for it
Infosys derives only about 6 percent of its almost USD 7 billion in annual revenue from proprietary software products. Its goal is to pump that up to one-third of its revenue. The move comes as Infosys's traditional IT services business faces slower growth. To understand how Infosys hopes to get to this new future in software, look at its new software product. Infosys is selling an online, subscription software product called BrandEdge, aimed at helping companies conduct digital marketing campaigns and measure the results. Infosys is pitching the software to chief marketing officers, whose tech budgets are rising as their work moves increasingly to websites, e-mail, mobile apps, and social networks, and as CMOs do more analytics to measure performance. Infosys has been downgraded by several equity analysts after a disappointing fourth-quarter earnings report. Most of Infosys's business today is in providing IT and business process outsourcing services, tapping India's large and relatively inexpensive labor pool. But every big outsourcer uses that model now, and bigger companies like Accenture, by hiring their own huge global staffs, are seen as matching Indian firms' costs. Infosys is thus looking for more diversified revenue streams and eyeing the high profit margins that come with software. This software push is "at the core of our strategy," says Sanjay Purohit, who as Infosys Senior VP of products, platforms and solutions is in charge of getting to that 33 percent revenue goal for software. That percentage is a long-term goal, with no set target date. Infosys has about two dozen software offerings; about 500 engineers work on that software, and Purohit expects that number to grow to 1,000 this year. Software to drive digital marketing is a hot growth segment, but it's also
teeming with established competition. IBM plunged in over the past two years, buying Unica, Coremetrics, DemandTec, and Sterling Commerce to manage digital marketing and support e-commerce transactions. Recently, marketing automation specialist Marketo bought startup Crowd Factory to give it more tools for managing marketing on social networks. Infosys will try to make the case for having a comprehensive online marketing suite. It talks about having components for the four key processes: building digital assets, listening to customer reaction, understanding performance through analytics, and engaging with customers through myriad channels. Purohit declined to say what Infosys is charging for BrandEdge, but it won't be a simple per-user, per-month fee. Instead, pricing will be negotiated depending on a mix of factors, such as the number of marketing campaigns. Infosys has about USD 350 million in software contracts. Most of that comes from Finacle, which is used by about 150 banks. Infosys's other software products and platforms have only about 40 customers. Infosys's marquee customer for BrandEdge is GlaxoSmithKline, a global pharmaceutical and consumer healthcare company. Infosys certainly brings IT operations expertise and credibility to a cloud software model. But can it sell cloud software head-to-head against software veterans such as IBM? Can it build easy-to-use software that people outside IT organizations will embrace? Can it build the new relationships beyond IT needed to sell this software? Purohit makes no bones about the fact that he's selling BrandEdge foremost to CMOs, not to the CIOs who know Infosys much better. This is new territory for Infosys in more ways than one. u Chris Murphy is Editor of
InformationWeek. Write to Chris at cjmurphy@techweb.com
Practical Analysis
Does Microsoft know best?
Art Wittmann
Microsoft, RIM and HP are among the vendors increasingly taking a “works best with our stuff” approach for new products. Consider what that means for your IT strategy
Microsoft and other vendors are increasingly taking a "works best with our stuff" approach with their new products. It's time to re-examine what that means for your IT strategy. As the forces of mobility, software as a service, and Big Data reset everything about enterprise IT, many industry behemoths find that the value proposition of their core products is being shaken (sort of like shaking an Etch-A-Sketch). For instance, Windows on desktops and laptops has been worth countless billions of dollars to Microsoft as it has offered applications and other products that rely on the Windows underpinning. But in a world where most end-user computing will happen on handhelds, where there's virtually no Windows presence, it's no longer clear just how much of a fundamental advantage Windows is — or whether it's one at all. History is replete with examples of very large companies that had good products but saw them marginalized by a tie to some core product that lost its mojo. Novell's NDS/eDirectory suffered from a reliance on NetWare; when NetWare tanked, so, for the most part, did NDS, despite the fact that it had a good head start as an enterprise directory system. More recently, there's RIM's BlackBerry Mobile Fusion, a mobile device manager (MDM) that has some promising features but takes a "better with BlackBerry" approach. BlackBerry Mobile Fusion is not exactly what IT pros are looking for — an MDM system that best manages the devices that users now want least. You can certainly see why RIM would arrive at that approach, but it's likely to please only the most dedicated BlackBerry shops. Microsoft is extending its "first and best with Windows" approach beyond its on-premises applications to those that run in the cloud. One can only imagine the arguments in Redmond about that strategy. On one side, you have the managers of those cloud-based apps, who must worry that Windows on handhelds will never gain
the market share of iOS and Android. On the other side, you have the folks who believe that if Microsoft keeps up the good fight and brings the best apps in the best way to all versions of Windows first, the company will trounce yet more competitors, just as it has numerous others. HP is creating its own "better with HP" approach as it acquires software assets to augment its enterprise hardware products. When HP bought ArcSight, Fortify, and TippingPoint, it also set out to build other products that manage projects and help IT execs monitor the health of their systems. The catch with these systems is that they can take a lot of work to set up and run, but of course that work is lessened if you're using all HP products. HP never puts "better with HP" in the headline, but it will usually mention it when the issue of product TCO comes up. What's less clear is whether HP has changed the development priorities of the market-leading products it acquired. Vendors provide a lot of excuses for why they support their own products first and competitors' later. It isn't a way to rope customers into buying more of their products, they say, but the natural result of their constrained resources and need to set priorities. And yet the hallmark of an enterprise security management product like ArcSight is that it works with everything. It's an odd artifact that smaller companies manage to offer broader support than large companies with vested interests. If Microsoft didn't have a vested interest in Windows, don't you think it would support every major end-user system in common use? For buyers, it comes down to best of breed versus the painfully easy choice of going with the vendor you've gone with in the past. But then there's that Etch-A-Sketch-like reset that begs you not to take the easy path. u Art Wittmann is Director of
InformationWeek Analytics, a portfolio of decision-support tools and analyst reports. You can write to him at awittmann@techweb.com.
Down to Business
Google in decline? Critics protest too much
Rob Preston
Google remains well positioned despite its missteps and setbacks
A Google executive, James Whittaker, recently left the company for his second tour at Microsoft, and in the aftermath he posted a blog on Microsoft's site explaining why he left the search company. While promising "no drama" and "no tell-all," he nonetheless comes down pretty hard on his former employer. The crux of his disenchantment? "The Google I was passionate about was a technology company that empowered its employees to innovate," he writes. "The Google I left was an advertising company with a single corporate-mandated focus." That single focus? Facebook. As Whittaker describes it, since Co-founder Larry Page took over from CEO Eric Schmidt a year ago, all company divisions have been on notice to put Google+ and social networking front and center. Whittaker writes: "Search had to be social. Android had to be social. YouTube, once joyous in their independence, had to be … well, you get the point. Even worse was that innovation had to be social. Ideas that failed to put Google+ at the center of the universe were a distraction." Under Schmidt, he pines, "ads were always in the background. Google was run like an innovation factory, empowering employees to be entrepreneurial through founder's awards, peer bonuses, and 20 percent time. Our advertising revenue gave us the headroom to think, innovate, and create." In other words, the worker bees generated the advertising revenues, giving the intellectuals the freedom to dream about self-driving cars and Google Goggles, "innovations" that had no material benefit to the company. Rob Enderle, prince of pundits, piles on to Whittaker's post in his column "Is Google Facing The Beginning Of The End?" and urges Google to move past its Facebook envy. Unlike Whittaker, however, he
doesn't turn his nose up at advertising. "All of its focus should be on finding ways to make ads on the Internet more valuable and being the primary source for managing ad revenue for everyone," Enderle says. "Its winning formula was monetizing the web, which is actually a superset of ads, but it is clear, institutionally, that Google is in denial about the real source of its success." I'm not so sure Google is in denial about anything, though I do agree with Enderle and Whittaker that customers aren't clamoring for another social network. Where I disagree with them is on the depth of Google's problems. For all the concerns about Google's social awkwardness, it's still well positioned in three of the biggest markets of the day: search, mobile, and Web 2.0 collaboration. The reality is that while all tech giants make mistakes, some of them make huge mistakes, and the smartest ones figure things out. Think Apple, IBM, and Cisco rather than DEC, Netscape, and Kodak. Google and its leaders have earned enough credibility here, killing off high-profile products and projects (Wave, Buzz, Gears, Knol, etc.). No question, Google is vulnerable. The prices it fetches per paid search click are down, and rival Facebook, as Whittaker suggests, "knows so much more" about its users than Google does, an attribute that appeals to advertisers and media companies. It's still not clear how Google will monetize its Android franchise, especially after its USD 12.5 billion acquisition of device maker Motorola Mobility. And evidence that Google plays fast and loose with users' personal data has led to regulatory probes in the U.S. and Europe. But the beginning of the end? Let's give Google's leaders a little more credit. u Rob Preston is VP and Editor in Chief of InformationWeek. You can write to Rob at rpreston@techweb.com.