CIO June 1 2009 Issue


Business Technology Leadership

Vol/04 | Issue/14

plus Six Other Things to Organize for Storage Virtualization Success Page 25

Low Bandwidth Back Ups Page 29

WNS Global had a great idea: virtualize storage and bump up efficiency. But what about the security concerns of its clients? Page 20

How Not To Secure Your Information Page 32

Dangers that lurk in the Cloud Page 38

Dealing with the Added Complexities of Storage Virtualization Page 40

What You Think You Know About Data In Transit Page 44

When It’s Time to Throw Data Away Page 51

June 1, 2009 | Rs 100.00 | www.cio.in


From The Editor-in-Chief

Targeting Failure
Be very careful when setting goals.

Cut through clutter. Focus. Set a goal. Work toward it. Victory. That's typically the way all successful human endeavors pan out. From getting into an Indian Institute of Management to winning even the Battle for Tiger Hill, goal-setting has been a keystone of achievement. Numerous studies across many widely varying contexts have proved that setting specific (and quite often challenging) goals can spur us to go beyond our self-imposed limitations. And it's no different in a corporate setting. All you need to do is pan across the wide body of management literature — Goals Are Good!

But tarry a while: recent research by a group of management boffins catalogues how goal-setting by corporates can often do more harm than good. The research, published in the February issue of the Academy of Management Perspectives, details case after case where setting ambitious goals had an overly negative and often catastrophic impact on organizations. One instance that stands out is General Motors' goal of recapturing 29 percent of the American market. Six years after this was put down as a target, GM is on the doorstep of bankruptcy, partly because this overwhelming emphasis on market share didn't consider the impact it had on profitability.

Among other issues, the authors of Goals Gone Wild: The Systematic Side Effects of Over-Prescribing Goal-Setting point out that goals that are too specific lead to a debilitatingly narrow focus, higher-than-required risk-taking, and plain unethical behavior. A lot of what is mentioned in the paper rings true. About three decades ago, the Union Government set out to improve the milk yield of cows in the Khajihar district of Orissa by the simple expedient of cross-breeding the hardy native cattle with bulls from Jersey. Overly enthusiastic bureaucrats arranged for the neutering of all 'local' bulls to ensure that nothing got in the way of the program. The result was a bunch of sickly cows across the district, unable to cope with the arid environment, with milk yields lower than before and no possibility of the local cattle strain being revived.

While stating that the practice is "overused" and that many endeavors are ill-suited to it, the authors observe that goal-setting is apt only in those activities where tasks are "routine, easy to monitor and very easy to measure". What do you feel about goal-setting? Write in and let me know.

Vijay Ramachandran Editor-in-Chief vijay_r@cio.in



Content | June 1, 2009 | Vol/4 | Issue/14

Cover Story

20 | Pooling It Together
Storage Virtualization | Faced with the spiraling cost of running separate storage for each of its 200-plus clients, WNS Global decided to turn to virtualization. But convincing its clients that virtualized storage was secure would call on all of Sanjay Jain's skills of persuasion.
Feature by Kanika Goswami


Features

25 | Virtualization Symphony
Strategy | Storage virtualization is going mainstream, but you'll want to avoid the common pitfalls and ask the right questions before rolling out your own. Here are six key issues to consider as you prepare to get all the pieces together.
Feature by Jon Brodkin & Thomas Hoffman

29 | Double Trouble
Backup | Twice is nice for most things — except storage. How smart backup solutions that operate on narrow bandwidth can save your organization time, money and effort.
Feature by Omair Siddiqui

38 | Hole in Your Cloud
Cloud Computing | Amazon, Nirvanix, and others promise easy data storage and infinite scalability for Web companies, but perils such as service outages and security breaches can't be overlooked.
Feature by Jon Brodkin

32 | Minding Your Data
Security | Blindsided! These companies thought they had their stored data locked tight, but they were wrong. You can avoid a similar fate with these steps.
Feature by Mary Brandel


40 | Virtual Headaches
Virtualization | Storage virtualization is hot, and for good reason. But its benefits bring added layers of complexity.
Feature by Gary Anthes

44 | Six Moving Data Myths
IT Management | Storage needs are growing by the day and organizations — with movable media storage — are keen to make data available anytime, anywhere. Debunking these six myths will ensure that your organization's in-transit data is secure.
Feature by Gary Anthes

51 | Shredding Data
Compliance | Data explosion has taken organizations by surprise and, with less and less storage space, IT leaders are contemplating eliminating unwanted data. But when exactly is the right time to purge?
Feature by Mary Brandel



Content (cont.)

Departments

Trendlines | 9
Survey | Recession or Not, Storage Thrives
Data Loss | No Data, No Business
Internet | Say Cheese on Facebook
System Management | Money Crunch? Go Green
Strategy | IT Budget: Quick Fixes
Components | Mainframe Folks Are a Step Ahead
Datacenter | Virtualization Threatens the 'Pizza Box'
Devices | In High-Density Mode

Essential Technology | 56
Hardware | Wipe Fast, Wipe Clean
Feature by Robert L. Mitchell
Pundit | Your Storage Security Game Plan
Column by Jim Damoulakis

From the Editor-in-Chief | 2 Targeting Failure

By Vijay Ramachandran


NOW ONLINE For more opinions, features, analyses and updates, log on to our companion website and discover content designed to help you and your organization deploy IT strategically. Go to www.cio.in


Applied Insight Bridging Land and the Cloud | 16 The days of all-local or all-cloud data storage are numbered. A new breed of providers are offering storage products and services as a local-online hybrid. The aim is to get around the fact that some people don’t yet trust Internet data storage.


Column by James E. Gaskin



Governing BOARD

Advertiser Index

Alok Kumar

Publisher Louis D’Mello

Global Head - Internal IT, TCS

Associate Publisher Alok Anand

Anil Khopkar

Editorial
Editor-in-Chief Vijay Ramachandran

Correspondents Sneha Jha,

Anjan Choudhury

CTO, BSE

Chief COPY EDITOR Sunil Shah Copy Editors Deepti Balani,

Shardha Subramanian
Product Manager Online Sreekant Sastry
Design & Production
Creative Director Jayan K Narayanan
Lead Visualizer Binesh Sreedharan
Lead Designers Vikas Kapoor, Anil V K

Vinoj K N, Suresh Nair Girish A V (Multimedia) Unnikrishnan A V Sani Mani (Multimedia) Designers M M Shanith, Anil T

President & CIO, IT Applications, Reliance Industries Atul Jayawant President Corporate IT & Group CIO, Aditya Birla Group

Photography Srivatsa Shandilya

Production Manager T K Karunakaran

Dy. Production Manager T K Jayadeep
Marketing and Sales
VP Sales Sudhir Kamath
General Manager Nitin Walia
Senior Manager Siddharth Singh
Assistant Manager Sukanya Saikia
Bangalore Kumarjeet Bhattacharjee, Arun Kumar, Manoj D.
Delhi Aveek Bhose, Gagandeep Kaiser, Punit Mishra
Mumbai Parul Singh, Hafeez Shaikh, Suresh Balaji, Dipti Mahendra Modi
Japan Tomoko Fujikawa
USA Larry Arthur; Jo Ben-Atar
Events
VP Rupesh Sreedharan
Senior Manager Chetan Acharya
Managers Ajay Adhikari, Pooja Chhabra

Canon

BC

EMC

14 & 15

IBM

12

Krone

35

Microsoft

IBC

Donald Patra CIO, HSBC India Dr. Jai Menon

Molex SAS

1 IFC

Director Technology & Customer Service, Bharti Airtel & Group CIO, Bharti Enterprises Gopal Shukla

P C Anoop, Prasanth T R

18 & 19

Ashish Chauhan

SENIOR Designers Jinan K Vijayan, Jithesh C C

Brocade

3

GM (MIS) & CIO, Bajaj Auto

assistant editors Gunjan Trivedi, Kanika Goswami

APC

Symantec Wipro

6&7 11& 13

VP - Business Systems, Hindustan Coca Cola Manish Choksi Chief Corporate Strategy & CIO, Asian Paints Manish Gupta

This index is provided as an additional service. The publisher does not assume any liabilities for errors or omissions.

Director-IT, Pepsi Foods Muralikrishna K. Head - CCD, Infosys Technologies Navin Chadha CIO, Vodafone Pravir Vohra Group CTO, ICICI Bank Rajesh Uppal Chief General Manager IT & Distribution, Maruti Udyog Sanjay Jain CIO, WNS Global Services Shreekant Mokashi Chief-IT, Tata Steel Sunil Mehta

All rights reserved. No part of this publication may be reproduced by any means without prior written permission from the publisher. Address requests for customized reprints to IDG Media Private Limited, Geetha Building, 49, 3rd Cross, Mission Road, Bangalore - 560 027, India. IDG Media Private Limited is an IDG (International Data Group) company.

Printed and Published by Louis D’Mello on behalf of IDG Media Private Limited, Geetha Building, 49, 3rd Cross, Mission Road, Bangalore - 560 027. Editor: Louis D’Mello Printed at Manipal Press Ltd., Press Corner, Tile Factory Road, Manipal, Udupi, Karnataka - 576 104.

Sr. VP & Area Systems Director (Central Asia), JWT T.K. Subramanian Div. VP-IS, UB Group V. K Magapu Director, Larsen & Toubro V.V.R Babu Group CIO, ITC



new * hot * unexpected

Recession or Not, Storage Thrives

Survey | The storage market is thriving despite a tough economy, analysts say, as exploding digital data is forcing customers to add more capacity and upgrade to newer storage systems that are faster and more efficient. "It's a lot easier for customers to put off the purchases of servers, some software and apps, and even desktops than it is to put off storage purchases," says Charles King of the Pund-IT analyst firm. "Once a company moves into the digital world ... information just piles up. You've got to have some place to put it."

Worldwide disk storage shipments will double in capacity every two years through 2012, IDC predicts. Spending on disk storage is expected to top $34 billion (about Rs 170,000 crore) by 2012. The most important force behind this growth is "the emergence of content-rich business applications in areas such as telecommunications, media and entertainment, and Web 2.0," IDC says. Customers are finding new storage systems hard to resist partly because read/write performance has improved significantly more than microprocessor performance for several years, King says. "Customers can get great bang for their buck with new storage systems," he says. There are also other factors fueling growth in the storage market, notes IDC analyst Richard Villars. Customers are shifting from tape to disk-based backup for faster recovery times. Hospitals are storing X-rays digitally, while media companies convert music and movies to digital format, he notes. The need for increasingly powerful data analytics is helping storage vendors too, he says. "People have increasingly been buying storage not just to store data, but also because they want to parse out components [for analysis]," Villars says. "People are buying storage for lots of different reasons."

The lagging economy still has some impact on storage buying, however. Customers are realizing they don't need maximum performance for every type of data, so many businesses are purchasing cheap, high-capacity disks to store information that's not mission-critical. The goal is to provide tiers of storage options, some optimized for performance, others optimized by price.
—By Jon Brodkin

No Data, No Business

Data Loss | Disasters, by definition, strike with little or no warning. Whether it's an extended power outage, a devastating storm, or some other unforeseen disruption, the most nerve-wracking part of owning a business is the unknown. A solid DR plan can mean the difference between a business bouncing back from a catastrophe or closing for good. A US National Archives and Records Administration study found that 25 percent of companies experiencing an IT outage of two to six days went bankrupt immediately, with even more following in the longer term. The core of almost all DR plans is data replication in some form — duplication and storage of vital data in a safe, secure place where you can retrieve it if some catastrophe destroys or damages the primary location. There are essentially two different data replication strategies: host-based and controller-based. If your organization has not committed to either yet, keep in mind that it's very difficult to switch from host-based solutions to controller-based solutions because the two aren't


compatible. Each is handled differently and uses different components. Host-based solutions are usually recommended for small businesses as they are the most cost-effective and easiest systems to adopt. This type of implementation occurs at an organization's operating level by pairing two separate servers that will each save data, ensuring redundancy. Servers in a host-based system can be paired at a one-to-one level, or with multiple servers to one location, depending on the needs and capabilities of the organization. Controller-based data replication, by comparison, is typically used by larger organizations and involves replicating data at the byte level onto a storage-area network (SAN) that connects remote storage devices to servers while appearing locally attached. When planning for your worst-case scenario, think about your most critical data needs and how data could save you from suffering the consequences of an emergency.
—By Mike Lapetino
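To make the host-based option concrete, here is a minimal sketch of that approach: a script running on one of the paired servers pushes its critical data to its partner on a schedule. The hostnames, paths and interval are invented for the example, and it assumes rsync and SSH are available; real replication products add journaling, verification and automated failover that this sketch leaves out.

```python
#!/usr/bin/env python3
"""Toy host-based replication agent.

Host-based replication runs on the servers themselves: each member of a
server pair periodically pushes its critical data to its partner, so a
copy survives if the primary is lost. Hostnames and paths are hypothetical.
"""
import subprocess
import time

SOURCE_DIR = "/srv/critical-data/"                           # data to protect
REPLICA = "backup-admin@dr-site.example.com:/srv/replica/"   # paired server
INTERVAL_SECONDS = 15 * 60                                   # every 15 minutes


def replicate_once() -> bool:
    """Mirror the source directory to the partner host using rsync over SSH."""
    result = subprocess.run(
        ["rsync", "-az", "--delete", SOURCE_DIR, REPLICA],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print("replication failed:", result.stderr.strip())
        return False
    return True


if __name__ == "__main__":
    while True:
        ok = replicate_once()
        print("replication", "succeeded" if ok else "FAILED")
        time.sleep(INTERVAL_SECONDS)
```

Controller-based replication, by contrast, needs no such agent on the servers: the copying happens inside the storage array or SAN fabric itself.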



Money Crunch? Go Green

System Management | Information technology leaders are hoping that moves to expand the use of virtual technologies and so-called green products can help them better cope with decreasing IT budgets. The budget crunch is by far the top challenge facing IT storage pros today, at least according to half of the 70 attendees who responded to a poll at the Storage Networking World (SNW) conference. Dave Burhop, CIO at the Virginia Department of Motor Vehicles, said the agency expects that deploying virtualization technology will let it reduce its server total from 3,000 to 1,000, significantly cutting its power needs. The agency also cut the number of contractors it uses and asked its IT workers to "suck it up" and work 12-hour days to offset a 3 percent IT budget cut, Burhop added. Herbalife International of America has already rolled out VMware software to its 700 servers, said M. Andy Hansen, principal engineer for IT and server operations. The company's 2009 IT budget is flat. "We're doing a lot of virtualization, and I know we're going to have a positive power impact because of that," Hansen said. Of the 91 SNW attendees who responded to another poll question, 41 percent said they have started or completed storage virtualization projects, while 25 percent said they expect to begin substantial rollouts of the technology over the next year. Green computing initiatives also figure prominently in cost-saving plans, according to SNW polls. A quarter of 91 attendees said they plan to roll out an energy-saving project this year, while 30 percent said they have already started such initiatives. Companies are also looking beyond technology and personnel to cut IT costs. For example, Herbalife is saving $70,000 (about Rs 35 lakh) annually just by eliminating the purchase of Styrofoam coffee cups for IT workers, said Hansen.
—By Lucas Mearian

Say Cheese on Facebook

Internet | Needing to deal better with 50 billion files worth of photos, engineers at Facebook are installing a new photo storage system they say is 50 percent faster than traditional systems. The storage system, dubbed Haystack, has been under development in-house for the past couple of years, and Facebook has been rolling it out in limited test versions to parts of the network for the past few months. Haystack is more than 50 percent faster than traditional photo storage systems, says Jonathan Heiliger, VP of technical operations at Facebook. The company expects to use Haystack to store all Facebook photos soon, according to Bobby Johnson, director of engineering at Facebook. "In terms of cost, if it's twice as efficient, we can have 50 percent less hardware," said Johnson. "With 50 billion files on disk, the cost adds up. It essentially gives us some [financial] headroom." Johnson and Heiliger said they began building the new storage system to better handle the growing number of photos Facebook has to store. Many of their 175 million to 200 million users share photos of everything from their pets to vacations, weddings to days at the beach. That means that users are posting and calling up their own photos, as well as their friends' and family members' photos. Keeping those trains running efficiently and on time was a growing challenge. Johnson noted that Facebook deals with 15 billion photos — not including all of the replications. Its user data grows by 500GB a day. And it has 50 million requests per second to its backend servers. Johnson said the new system is so much faster than the previous one because of changes made to its setup. Haystack is tailored for small files that don't change very often, instead of a small number of large files that are changing all the time. Traditional file directories also need file names, and a lot of resource cost goes to just finding the files. With the new system, however, he said they don't have to deal with directory structures or file names. They use ID numbers instead of names. That mapping is very small.
—By Sharon Gaudin
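Facebook has not published Haystack's internals here, so the sketch below is only an illustration of the idea Johnson describes: photos are appended to one large file and found again through a small in-memory map from photo ID to offset, with no directory walk or per-image filename lookup.

```python
"""Toy append-only blob store in the spirit of the Haystack idea.

Illustrative only: a real system adds replication, on-disk indexes,
checksums and compaction. Many small photos live in one large file, and a
lookup needs only an ID -> (offset, size) map plus a single seek.
"""
import os
from typing import Dict, Tuple


class HaystackStore:
    def __init__(self, path: str) -> None:
        self.path = path
        self.index: Dict[int, Tuple[int, int]] = {}  # photo_id -> (offset, size)
        open(self.path, "ab").close()                 # create the backing file

    def put(self, photo_id: int, data: bytes) -> None:
        """Append the blob and remember where it landed."""
        offset = os.path.getsize(self.path)
        with open(self.path, "ab") as f:
            f.write(data)
        self.index[photo_id] = (offset, len(data))

    def get(self, photo_id: int) -> bytes:
        """One seek plus one read: no directory traversal, no filename."""
        offset, size = self.index[photo_id]
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(size)


if __name__ == "__main__":
    store = HaystackStore("photos.haystack")
    store.put(42, b"\x89PNG...pretend image bytes...")
    assert store.get(42).startswith(b"\x89PNG")
    print("stored", len(store.index), "photo(s) in",
          os.path.getsize(store.path), "bytes")
```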



Storage Budget: Quick Fixes

Strategy | Servers and storage services take up a significant portion of most IT department budgets, according to Info-Tech Research Group. The London-based firm released a report outlining specific techniques IT executives can use to reduce server and storage costs and improve performance of technologies already in place. The report, Reducing Cost-to-Serve: Server & Storage Services, has recommendations organized according to short-, medium- and long-term goals. Immediate tactics include wrapping up in-progress server consolidation projects, applying quick fixes to cooling costs and extending the life of servers. "For organizations that have not started to virtualize, do not replace old servers before their useful life is over. If network bandwidth or storage throughput is bottlenecked, but the CPU operates at less than 80 percent utilization, then this server does not need to be replaced," states the report. Another short-term suggestion is "removing extended warranties and maintenance agreements on non-critical servers," which can save US$577 to $800 per server (about Rs 28,800 to Rs 40,000), according to Info-Tech. Mid-term tactics focus on hardware purchases and vendor contracts. Recommendations include purchasing equipment that is good enough for business needs and opting for refurbished hardware that can save "up to 80 percent of the price of new gear."

"A SAN extends the useful life of servers as it allows for rapid migration of data and applications to new servers and reduces the risk of data loss from failed disk drives on aging servers," states the report. Some tactics are also presented that, unlike the other techniques, involve upfront costs. The tactic most often overlooked by IT departments is purchasing refurbished equipment for servers, networks and workstations, said Jennifer Perrier-Knox, senior research analyst at Info-Tech. According to a recent Info-Tech survey, 68 percent of IT respondents have not considered refurbished gear as a cost-cutting opportunity. "That was a surprise because there are excellent deals to be had from vendors who sell refurbished equipment. Some stuff hasn't even come out of the box and these things are at a 50 percent discount," said Perrier-Knox. Another technique enterprises tend to ignore is downsizing the application portfolio, she pointed out. Enterprises have so many apps, and a lot are redundant and some aren't used very often, she said.

The most popular cost-reduction techniques tend to be those which IT can change behind the scenes without entering extensive negotiations with the business, while those requiring interaction with individuals outside of IT seemed to be less popular, she said. "Areas that are not being explored very much carry what we would call a 'high hassle factor,'" said Perrier-Knox. According to the report, IT leaders should avoid redeploying the wrong servers, taking an ad-hoc approach to virtualization and harboring false hopes for virtualization — such as expecting to save on licensing costs. Harboring false hopes for virtualization is a definite concern, according to Dave Pearson, senior analyst, storage, IDC Canada. "A lot of people hear virtualization and immediately think that's a cost reduction technique." But in the short term, virtualization generally requires more resources in terms of people and dollars, he explained. Companies that are able to either retire or no longer replenish a lot of equipment because they are able to virtualize those servers can certainly see immediate cost savings, said Pearson. Another technique is the adoption of lower-cost technologies, Pearson continued. "A lot of people are putting off Fibre Channel SAN purchases because they can't invest in the overall infrastructure required to support it. Maybe in the short term they're going to invest in a NAS or an iSCSI solution," he said.
—By Jennifer Kavur
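As a rough illustration of two of the short-term tactics above, the sketch below applies the report's rule of thumb (an I/O-bound server running under 80 percent CPU does not yet need replacing) and totals the quoted $577-to-$800-per-server warranty saving. The server inventory is invented; only the threshold and dollar figures come from the article.

```python
"""Back-of-the-envelope look at two short-term tactics from the report.

The fleet below is made up for illustration; the 80 percent CPU rule and
the $577-$800 per-server warranty saving are the figures quoted above.
"""
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    peak_cpu_utilization: float  # 0.0 - 1.0
    io_bottlenecked: bool        # network or storage is the limiting factor
    critical: bool               # runs a mission-critical workload?


def replacement_can_wait(s: Server) -> bool:
    """Info-Tech rule: an I/O-bound server under 80% CPU need not be replaced."""
    return s.io_bottlenecked and s.peak_cpu_utilization < 0.80


fleet = [
    Server("file-01", 0.45, io_bottlenecked=True, critical=False),
    Server("db-02", 0.92, io_bottlenecked=False, critical=True),
    Server("web-03", 0.60, io_bottlenecked=True, critical=False),
]

deferrable = [s.name for s in fleet if replacement_can_wait(s)]
non_critical = sum(1 for s in fleet if not s.critical)

# Dropping extended warranties on non-critical servers saves $577-$800 each.
low, high = 577 * non_critical, 800 * non_critical

print("replacement can wait for:", deferrable)
print(f"warranty savings on non-critical boxes: ${low} to ${high}")
```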



Mainframe Folk Know More Than You Think


Components | There is often a stark contrast between how mainframe storage systems are managed versus how storage connected to distributed systems platforms such as UNIX and Windows is managed. In the older mainframe environments, everything was always planned for in advance. Capacity planners forecast storage growth based on current trends and performance. Storage engineers and architects understood the new storage systems that they currently used or brought in, and it was just a given that everyone understood how applications would utilize the storage. Contrast this with how distributed systems storage networks are often managed. Companies have a hard time even keeping track of what servers they have attached to the storage network, much less quantifying what applications they have running on them or how they are using their assigned storage. In spite of these broken configurations, storage vendors are now actively pushing new protocols such as iSCSI, Fibre Channel over Ethernet (FCoE) and even InfiniBand as a means to grant more LAN-attached servers that use direct-attached storage (DAS) access to storage via low-cost corporate storage networks. This is done under the guise that they will save money by implementing these new storage networks. It is true that companies will resolve some of their storage connectivity and data migration costs. However, companies may find they are ill-equipped to manage the new complexities that this wide-scale implementation of storage networks introduces. Companies are never going to replace their distributed systems storage networks with mainframe storage networks. But the sooner companies figure out that their mainframe folks can help with the discipline and planning aspects of storage networking, the sooner they will realize the benefits of today's storage networks and minimize many of their problems.
—By Jerome Wendt

Virtualization Threatens the 'Pizza Box'

Datacenter | Virtualization may spell doom for the 1U 'pizza box' server, as the ability to pack multiple virtual machines onto physical hosts has customers choosing larger standard servers and blades. The iconic pizza box servers can't provide the VM density of blades, nor do they offer the levels of memory and CPU power found in larger 2U and 4U form factor servers, Nemertes Research analyst John Burke said. That doesn't mean pizza box servers are being thrown by the wayside. But in IT shops that make extensive use of hypervisors, "the 1Us are losing," Burke said. "We expect that will continue." Nemertes surveys show most customers will not re-purpose old servers for virtualization; they buy new boxes that are better suited to the partitioning technology. Burke shared several other findings from Nemertes Research benchmark surveys of IT pros from a mix of small, medium and large enterprises. Ninety-three percent of surveyed IT pros are already using virtualization, though not necessarily in production. Nearly four out of five have virtual servers hosting customer-facing applications, while on the whole 38 percent of workloads are virtualized. About half the IT shops have seen quantifiable benefits from virtualization. Disaster recovery is one of the main benefits. After virtualizing, 70 percent of respondents can fail systems over in less than an hour, while 26 percent can do so in five minutes. A third of respondents are able to fail over from a primary to a backup datacenter in less than an hour, while 10 percent are able to do so in less than five minutes. At first glance, virtualization seems to improve manageability. On average, nine workloads are assigned to each administrator prior to virtualization, while after virtualizing more than 60 workloads can be handled by a single IT pro, Burke's research indicates. But the management tools IT is accustomed to aren't designed for virtual systems. "A lot of this is happening without robust support from the management tools they're used to relying on," Burke said. Virtualization makes spinning up new applications seem so easy that many IT shops go overboard, until they learn their lesson. "Virtualization is like crack and people go crazy with it — for a while," Burke said.
—By Jon Brodkin



In High-Density Mode

Devices | The first storage media — paper tape and punched cards — were inefficient, slow and bulky. These gave way to magnetic storage: core memory, drums and, finally, hard drives. For backup, there were removable media: magnetic tape reels and cartridges, floppy disks and removable hard drives. Then optics (CD-ROM and DVD drives) supplanted magnetism for archival uses. Today's computers need to store more data than ever. The most recent storage generation replaces moving parts with solid-state electronics. Through all this evolution runs a constant thread: storage got faster and it got smaller, packing more data into less space. This storage density (also called areal density) is measured in units of bits per square inch (bit/in2). The increase in density over time, particularly with magnetic media, has been remarkable; the cost-effectiveness is astronomical. A hard drive with a density of 329Gbit/in2 was just announced by Seagate Technology. For perspective, researchers believe that 1Tbit/in2 represents the theoretical limit for current magnetic storage, and we may approach that limit in just a few years. What happens when we hit the wall? Where do we turn for more storage? A number of technologies that could help are under development.

In longitudinal recording, magnetic data bits are aligned parallel to the disk surface, following concentric tracks. This limits storage density to 100 to 200Gbit/in2 or so. Perpendicular recording, introduced commercially in 2005, puts data bits in a vertical magnetic alignment that is perpendicular to the disk surface; it's as if the data bits are standing up rather than lying down. Normally, the amount of magnetic material used to record a bit must be sufficiently large to retain its polarity so that it can't be accidentally reversed. Perpendicular recording allows the use of finer-grained material in which it's more difficult to reverse the magnetic orientation. Thus, perpendicular recording permits physically smaller bits; theoretically a density of 1Tbit/in2 would be possible.

Still in the research stages, heat-assisted magnetic recording, or HAMR, uses a laser to heat the storage medium while writing to it. It uses a different type of recording medium than conventional magnetic technology; that new medium is often an iron-platinum alloy. This allows much higher storage densities (potentially up to 50Tbit/in2) but requires the application of heat to change the magnetic polarity in the area that delineates each bit.

Holographic storage has been just around the corner for several years, but in 2009 it may finally come to market from InPhase Technologies. Three-dimensional holographic images store more information in less space by recording not only on the surface of the storage medium, as do other technologies, but also within an entire volume of space in the medium. (Technically speaking, we should properly measure holographic storage density in units of bits per cubic inch, but that's not yet a common usage.) For holographic storage, holograms are created by splitting a single laser light into two beams: one is a reference and the other carries the signal. Where the reference beam and the data-carrying signal intersect, the interference patterns are recorded in a light-sensitive storage medium. (Initially, this physical storage device will be a DVD-size disk.) Because multiple beams can be used at different angles, hundreds of unique holograms can be recorded in the same volume of material. In one sense, this is similar to a dual-layer DVD, except that it contains hundreds of layers, each recorded at a different angle so that they are not parallel to one another. The stored information is reconstructed by deflecting the reference beam off the hologram and projecting it onto a detector that reads an entire data page (more than 1 million bits) at once. The first commercial units are expected to use disks with a capacity of 300GB. A real advantage of holographic storage is that its transfer speed (160MB/sec.) is far higher than the speeds other optical media can deliver.
—By Russell Kay
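For a sense of scale, here is some back-of-the-envelope arithmetic with the figures quoted above. The usable platter area is an assumption made for the example (real drives also lose space to formatting and servo data); the 329Gbit/in2 and 1Tbit/in2 densities and the 300GB / 160MB/sec holographic figures are the ones in the text.

```python
"""Rough arithmetic around the storage figures quoted in the article.

Assumption: a 3.5-inch platter with roughly 8 square inches of usable
recording area per surface and two surfaces, ignoring formatting overhead.
"""

USABLE_AREA_IN2 = 8.0        # assumed usable area per platter surface
SURFACES_PER_PLATTER = 2


def platter_capacity_gb(areal_density_gbit_per_in2: float) -> float:
    """Convert an areal density (Gbit/in^2) into decimal gigabytes per platter."""
    bits = areal_density_gbit_per_in2 * 1e9 * USABLE_AREA_IN2 * SURFACES_PER_PLATTER
    return bits / 8 / 1e9


print(f"one platter at 329 Gbit/in^2 : ~{platter_capacity_gb(329):.0f} GB")
print(f"one platter at 1 Tbit/in^2   : ~{platter_capacity_gb(1000):.0f} GB")

# Holographic disk: 300 GB read at 160 MB/sec.
seconds = 300_000 / 160      # 300 GB is roughly 300,000 MB
print(f"reading a 300GB holographic disk at 160MB/sec: ~{seconds / 60:.0f} minutes")
```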





James E. Gaskin

Applied Insight

Bridging Land and the Cloud
The days of all-local or all-cloud data storage are numbered.

The early adopter 'cloud crowd' makes the most headlines, but they're only the tip of the small business iceberg. Looking at various data storage vendor customer numbers has convinced me 90 percent of small businesses are still mostly land (or LAN) based. Some people don't yet trust Internet data storage, and some like to wear data storage suspenders with their data storage belts. The good news for both? Options for combo cloud and local storage hybrids continue to grow. What do you need to do with your files? Create them, share them, change them, back them up, and archive them in regulated industries. You can do all these things locally or online. Some early adopter small companies do everything online, but most small businesses still do everything locally. Smart business now requires a mixture of local and online data storage to add offsite backup in case of a disaster, and easy data sharing with remote co-workers and business partners.

The Best of Both Worlds

The best local and online combination I've seen, for the last three years, is FileEngine, a small company in Indianapolis. Custom hardware, in fire engine red (the owner jokes it's "file engine" red) provides local user file storage much like a Windows Server without the Web and e-mail server cost and complexity. Local users store their files on the FileEngine rather than on their own computers, which reduces file loss risk (remember you can't trust users to protect their files). The FileEngine saves two weeks of file changes locally on the




server, like a network 'undelete' key for your files. Thirteen months of data files, sent automatically every night, are stored on the big FileEngine in the sky (their dedicated backup servers). Many small business owners still prefer data storage they can touch, and they can touch the FileEngine. They rarely need to, because FileEngine handles management remotely, but it makes people feel better that their files are local for faster restore. And it makes them feel better again that more than a year of file changes are safely backed up online. All this hardware, software, and service starts at $1 (about Rs 50) per hour, or $8 (about Rs 400) per day. I'm amazed FileEngine's great idea hasn't been copied for a couple of years, but one new company is coming from the cloud down to local rather than from local up to the cloud. Egnyte offers an "on demand file server" that's strictly cloud based. You treat the Egnyte service like a local file server, but it's all online. While Egnyte wrongly calls itself the first hybrid cloud/on-premise file server solution (FileEngine beat them by three years), it may be the first online storage service to provide a way to tie local hardware to your online backup files. Rather than provide the hardware as does FileEngine, Egnyte provides software to coordinate online files with your own off-the-shelf storage device (Maxtor, Simpletech, Western Digital, Toshiba, etcetera). You can also use your local computer hard drive. Think of this as a local copy of your online backup files, so you can access them if you can't reach the Internet. People want that, as evidenced by all the trouble Google has gone to so some users can keep many of their Google Mail files on their own hard disk.

While we're still talking backup, another new company has joined the local-appliance-connected-to-hosted-online-backup lineup. Axcient offers local hardware that pulls data from personal computers and servers without needing to load software on them. Then the files are organized and sent to Axcient's data storage facilities for online backup. You get local backup and remote backup. The hardware/software service starts at about $100 (about Rs 5,000) per month. SonicWall brought out something similar a couple of years ago with their SonicWall CDP Appliance Series. CDP stands for Continuous Data Protection, and SonicWall (or LassoLogic, the company they acquired) puts software on each client computer so they can grab every change to every file almost as soon as it happens. The software agent watches for changes to the file system, then sends just the changes to the appliance, which forwards the changes up to safe online storage. You have to add software to every client, but you save every file change immediately, rather than waiting for a scheduled backup sweep of changed files.
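This is neither SonicWall's nor Axcient's software, but a minimal sketch of the continuous-data-protection pattern just described: notice a changed file almost as soon as it changes and push that one file to the backup target, instead of waiting for a nightly sweep. The paths are hypothetical, and a simple modification-time poll stands in for the kernel change notifications a real agent would use.

```python
"""Minimal sketch of the continuous-data-protection idea described above."""
import os
import shutil
import time

WATCHED_DIR = "/home/user/documents"   # files to protect (assumed path)
BACKUP_DIR = "/mnt/backup-appliance"   # stands in for the local appliance
POLL_SECONDS = 5


def snapshot(root: str) -> dict:
    """Map each file path under root to its last-modified time."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            state[path] = os.path.getmtime(path)
    return state


def push(path: str) -> None:
    """Copy one changed file to the backup target, preserving its layout."""
    rel = os.path.relpath(path, WATCHED_DIR)
    dest = os.path.join(BACKUP_DIR, rel)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.copy2(path, dest)
    print("backed up", rel)


if __name__ == "__main__":
    seen = snapshot(WATCHED_DIR)
    while True:
        time.sleep(POLL_SECONDS)
        current = snapshot(WATCHED_DIR)
        for path, mtime in current.items():
            if seen.get(path) != mtime:   # new or modified file
                push(path)
        seen = current
```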

Finally, you can now synchronize files between multiple computers and back them up at the same time. BeInSync, now owned by Phoenix Technologies, added online backup to their successful folder synchronization service. I don't accept folder synchronization as a backup method, because if you delete a file by accident, you also delete it in the synchronized folder. Oops. Backup means an accidental deletion of a file can be restored from a safe copy elsewhere. That said, when BeInSync added their online backup component, you got the best of both worlds. Folder synchronization works great for many reasons, especially for scattered teams that must keep all documents current. You can choose which folders are shared, and which are also backed up. Remote sales teams can always have the latest product information, contracts, and reports.



Committees no longer have to e-mail versions of amended documents back and forth. Your important files are all synchronized, and better than that, they're automatically backed up. Win-win. These local and online hybrid products and services erase the boundaries so many people have come to expect, but that's good. You no longer have the "local vs. online" argument complicating your data storage plans. Working together always beats confrontation, and now your local file storage fanatics and your online file storage fanatics can both be happy. CIO

James E. Gaskin is an author, writer and speaker. Send feedback on this feature to editor@cio.in



Cover Story | Storage virtualization

Faced with the spiraling cost of running separate storage for each of its 200-plus clients, WNS Global decided to turn to virtualization. But convincing its clients that virtualized storage was secure would call on all of Sanjay Jain's skills of persuasion. By Kanika Goswami

A slowdown in over half the world's economies should have sounded the death knell for organizations like BPOs, whose existence depends on them. But as if to prove a point, the Mumbai-based WNS Global Services posted healthy revenues in the fourth quarter of 2008 and for that financial year — and it seems prepared to go on. At least part of the reason for its ability to keep the slowdown at bay is the company's technological competence. The Rs 2,700-crore company's virtualization initiative is a prime example of that IT prowess.

Reader ROI:

Why storage virtualization can help meet the slowdown
How to get stakeholder buy-in
Ways around storage virtualization's security challenges


The way WNS handled its diverse global client base, all two hundred of them, was fertile ground for financial and technological risks. Because each client insisted on the security of having separate file servers, WNS had plenty of under-utilized servers on its hands — and faced a sea of technology complexities. While this approach gave WNS an edge in its market, it also created a continuous stream of challenges, especially when WNS took on new clients. There was the perennial need, for example, to reduce IT's total cost of ownership (TCO), manage operational efficiency, meet SLAs and still manage uptime for customer-centric technology infrastructure, ensure compliance and uphold security standards. And then there was scalability, which would, in time, become its biggest storage challenge. As its business grew, WNS' information databases took increasing effort to process, store and protect. Besides, the storage requirements for 200-plus customers were difficult to maintain: data needed to be secured in repositories across segregated networks and active directories with varied types of storage requirements. It was also proving to be financially unviable.


Too Many to Handle




Arvind Sood, CTO, WNS, says WNS faced two challenges. "As we grew, we essentially set up fairly customer-centric infrastructure in that we replicated a lot of kits across multiple customers. This led to two situations: a potentially large replacement cost as infrastructure aged. Second, there was the problem of managing diverse infrastructure. The whole environment was slightly cumbersome," he says. The obvious answer was a centralized storage architecture. But WNS wasn't sure its customers would be comfortable. Typically, clients ask for separate boxes to ensure that no one can access their data. If WNS wanted to virtualize customer data, it would need to play a balancing act between security and accessibility to maintain zero visibility between customer environments.

But WNS's IT team was confident that technologically it could be done. Convincing their clients, however, was another matter. Change management would prove to be the biggest challenge in front of them, says Sanjay Jain, CIO, WNS Global Services. Along with Sood, Jain and a hand-picked team for the storage virtualization project knew they had a herculean task ahead. With the obvious constraints, it could have been extremely difficult to maintain their security standards and still meet the objective of being cost effective. But the advantages were too many and too obvious, and Jain decided to take up the project. "We started in January 2008," Jain recalls. "We went through a thorough vendor evaluation. The deciding factors were the corporate relationship WNS already had

with the vendors we chose, their price and the capability of their products,” he says. The core operation team had 10 server administrators responsible for deploying boxes and helping users migrate. Then there was a 130-man team headed by Sood at the desktop and front end that dealt with customer management and field-side deployment support. These staffers dealt with day-to-day service issues, including the deployment and other niggles that could arise. A group of service delivery managers interfaced with business units and customers and ensured smooth communications and got the necessary authorizations in place. A two-pronged approach was adopted for the storage virtualization project: one for the company’s internal applications, and the other for client applications.

Inside Outside

The internal corporate systems were moved to a SAN and the file servers were moved to a NAS which could provide a virtualized environment. The SAN provided about 10 TB of space. It hosts WNS MS SQL clusters and Oracle databases which can be used by various internal and customer applications, and affords not only a cost advantage, but also availability and scalability. On the client side, the process started with individual file servers of each customer being moved to centralized boxes using a combination of SAN and NAS, running virtualization. It would then appear as a distinct network in specific VLANs within their respective business environments. As the transition progressed, the existing active directory environments were reduced due to the migration of user objects, computer accounts and security groups, and regrouped into client-specific organization units within a single global active directory. These are LANs (in three locations: Mumbai, Pune and Gurgaon) with about 21 terabytes of storage each. They were used for file storage, replacing the earlier client file storage

"My operational expenses last year were lower than the years before because of virtualization — despite the growth of the company."
— Sanjay Jain, CIO, WNS Global Services



servers, and allocated to specific customer projects and business processes. "We have been able to provide security by creating a dedicated storage area and connecting them with extended VLANs, so for every client we run a dedicated VLAN," Jain says. Among other things, this solution ensures more manageability and improved efficiency. It also helps reduce TCO. Having a storage solution at all three locations ensured a standardized implementation, and also allowed for snap mirroring. The needs of each client's group security policy helped push them into specific directories. Each location's directory was configured for replication with other locations to ensure business continuity in the event of a disaster. This process was fully supported by the connectivity between locations on a fully redundant MPLS L2 WAN. Inter-VLAN route blocking on the core switches ensured data and customer segregation and also helped secure a client's firewalls. Anti-virus, patch management and other ancillary services were also moved to a centralized environment. To ensure secure access to client data, each solution was moved to an isolated VLAN, and kept in virtual folders which can be accessed only by specified users, whose identities are authenticated by the active directory. Plus, NAS de-duplication was rolled out to ensure optimum utilization of space on file shares by identifying redundant data.
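WNS has not published its actual configuration, so the sketch below only models the segregation rule the article describes: every client gets a dedicated VLAN and a dedicated share on the shared platform, and an automated check flags any overlap before it can break the zero-visibility promise. Client names, VLAN IDs and paths are invented for illustration.

```python
"""Toy model of the per-client segregation described above."""
from collections import Counter

# client -> (dedicated VLAN, dedicated share on the central NAS/SAN)
CLIENT_STORAGE = {
    "client-a": (110, "/vol/clients/client-a"),
    "client-b": (120, "/vol/clients/client-b"),
    "client-c": (130, "/vol/clients/client-c"),
}


def check_zero_visibility(mapping: dict) -> list:
    """Return violations of the one-client-per-VLAN and one-client-per-share rule."""
    problems = []
    vlans = Counter(vlan for vlan, _ in mapping.values())
    shares = Counter(path for _, path in mapping.values())
    for client, (vlan, path) in mapping.items():
        if vlans[vlan] > 1:
            problems.append(f"{client}: VLAN {vlan} is shared with another client")
        if shares[path] > 1:
            problems.append(f"{client}: share {path} is shared with another client")
    return problems


if __name__ == "__main__":
    issues = check_zero_visibility(CLIENT_STORAGE)
    print("zero-visibility check:", "OK" if not issues else issues)
```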

Winning Stakeholders Over

Jain insists that he met with no technological issues during the migration. The only challenge, he says, was change management. "The biggest challenge we faced was the fact that our operation team and our clients were used to having their own servers. Before we moved client data to central storage we had to take their approval. The idea that their data was being virtualized and centralized with others was hard to swallow," he says. "Obviously, the foremost question was the safety and security of data, which we were able to demonstrate adequately by the extension of the VLAN we created." However, Sood realized that there was another issue that was just as important as security: most of the data they had was process-related. "Performance matrix,


If WNS wanted to virtualize customer data, it would need to play a balancing act between security and accessibility to maintain zero visibility between customer environments.

reporting, documentation manuals, etcetera… we had a huge amount of data more specific to WNS' processes than end-customer data, and this resided on customer-managed servers," he says. It was only after the team had got customer buy-in that the second step of migrating the huge volumes of information from the legacy systems to the centralized storage could be taken. "The process went through multiple cycles," Sood recalls. "Generally, I had to get through to the operations group, the customs operations group, then the customer technology and security group. This was the three-step process that we followed in every migration." Adds Jain, "The biggest potential pitfall was bad planning and poor change management. We needed to make sure that we identified effective user groups and stakeholders to ensure that we communicated with them regularly as changes were taking place. Also, at the end of the project, we needed to make sure that they saw the value of the project." There were change management requests at the client end, because clients needed to justify WNS' move to their own business users and IT departments. It forced Jain and team to spend a lot of time on the phone. "It was a lengthy process, since many of our clients are large companies who work at their own speed," Jain recalls. "After we went to them, there were processes they needed to go through at their own end. They were concerned about the safety of their data." It took more than convincing clients that only people with appropriate rights would be able to access their data. "Documentation of the processes was required. We were subjected to our client's security audit. We were ready for that, but it was a time consuming process," Jain says.

Sood, as the hands-on head of the project, had another challenge. "A significant issue was trying to keep various processes up and running. For example, if we had some documentation on our system, we could not leave that information out in the cold during the migration. Timing was everything. A lot of these processes we migrated over weekends or in very narrow windows of time. That itself added a lot of complexity." Then there was the need to create an adequate level of comfort among users. "We used multiple modes of communication to make sure people understood the minimally invasive nature of the project. We've virtualized storage across various customer networks without really sacrificing security. So each customer's data is still segmented and secure, it's just that physically, it is a single platform." Sood's responsibility as team lead did not end with client buy-in. Internal change management was also needed. "What was required was a slight and subtle change in behavior: things like what users looked at and how they accessed documentation changed. These changed in a very minor way. In many cases, change management centered more around ensuring that the customer's security team understood the implementation. We operated a group of engagement managers who deal specifically with the customer. Those people had to be equipped with the right information to essentially sell this proposition," Sood says. "And we needed to ensure that after the implementation, they knew how to access the new sources of information." To someone on the outside, there was hardly any internal change. "The users did not even realize there was a move. Their dedicated networks were mapped to a



central storage. The move occurred at night and when users came in the next morning, they did not even notice the change." And even the jobs of those who administered the legacy system were not affected. "We are a growing company. There is always enough work. Most of the people were re-allocated to other jobs." Apart from being CIO, Jain also heads the program management team, so he plays an active part in adopting new technologies. "The organization was really quick to adopt the project management aspect of the change and the change management aspect of the new implementation," he says.

Combined Benefits There were multiple advantages to the storage virtualization project, but the biggest was the increased ease in manageability

and the number of storage boxes that it released. "We are effectively able to manage approximately 75 TB of information which should be able to cater to growth for the next five years," Jain says. "We freed up a number of servers and that actually helped us to bring down our costs. That, in turn, brought down all kinds of costs. Archival, retrieval and backup costs are all in control, thanks to the SAN infrastructure. That has resulted in a significant drop in opex, both from a manpower and AMC point of view. We have gotten significant cost savings and about 100 servers are in inventory." Although Jain cannot share figures, he maintains that the project is paying for itself. "I had specific ROIs to achieve. My operational expenses last year were lower than the years before, because of the virtualization and despite the growth of the

company. People costs have not changed much since we were already effectively staffed, but the cost of AMC was a big thing. We didn’t have to pay AMC for so many servers and this was the biggest, most direct cost savings we managed. It paid a significant dividend,” he says. Being able to keep up with the company’s expansion quickly and cost effectively is another huge advantage. “Today, we are able to provide really fast response time to business growth,” Jain says. “And instead of buying new servers every time there's a need, we can grow fairly quickly. We may not be able to identify all the benefits in numbers but these certainly are sizeable benefits.” “We’ve achieved all we wanted to from the program,” adds Sood. “I’ve had very little pushback from customers; an almost zerocomplaint situation. We actually touched more than 15,000 people with the program but had, maybe, one odd incident and even that was resolved quickly.” Sood agrees that WNS accrued savings in multiple areas, mostly in terms of recurring support and infrastructure management costs. “The largest saving that we have is that future growth can be more effectively planned for since it’s cheaper for us to add to this platform than to the earlier ones.” WNS also has a business rationale behind the implementation. “We got benefits at lower costs,” Jain says. “While we did go ahead with the capital expenditure of buying infrastructure, it actually resulted in significant opex reductions that has helped me build the business.” At a time when organizations are designing their corporate policies to allow for falling business revenues, WNS has harnessed the power of virtualized storage to expand its business. It will also help the company retain its 19th position in the Global Outsourcing 100, which will help attract more business — like the 50 customers it added last year. CIO

“We’ve virtualized storage across various customer networks without sacrificing security. Each customer’s data is still segmented and secure, it’s just that physically, it’s a single platform. ” — Arvind Sood, CTO, WNS Global Services

Kanika Goswami is assistant editor. Send feedback on this feature to editor@cio.in



Storage virtualization is going mainstream, but you'll want to avoid the common pitfalls and ask the right questions before rolling out your own. Here are six key issues to consider as you prepare to get all the pieces together.


By Jon Brodkin and Thomas Hoffman

If you're an IT executive, chances are you're already thinking about storage virtualization. Nearly one-quarter of companies with at least 500 employees have deployed storage virtualization products already, and another 55 percent plan to do so within two years, according to a Gartner survey. Storage virtualization is an abstraction that presents servers and applications with a view of storage that is different from that of the actual physical storage, typically by aggregating multiple storage devices and allowing them to be managed from one administrative console.
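No vendor's product is being quoted here; the sketch below is just a toy model of that definition: several physical arrays are pooled behind one interface, and a volume can be carved out of the pool without the consuming application knowing which box actually holds the capacity. The device names and sizes are invented.

```python
"""Toy model of the storage-virtualization abstraction defined above."""


class PhysicalArray:
    def __init__(self, name: str, capacity_gb: int) -> None:
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb


class VirtualPool:
    """One administrative view over many physical arrays."""

    def __init__(self, arrays: list) -> None:
        self.arrays = arrays
        self.volumes = {}   # volume name -> list of (array name, size_gb) extents

    @property
    def free_gb(self) -> int:
        return sum(a.free_gb for a in self.arrays)

    def create_volume(self, name: str, size_gb: int) -> None:
        """Satisfy a request from whichever arrays have room, possibly several."""
        if size_gb > self.free_gb:
            raise ValueError("pool exhausted")
        remaining, extents = size_gb, []
        for array in self.arrays:
            take = min(array.free_gb, remaining)
            if take:
                array.used_gb += take
                extents.append((array.name, take))
                remaining -= take
            if remaining == 0:
                break
        self.volumes[name] = extents


if __name__ == "__main__":
    pool = VirtualPool([PhysicalArray("array-1", 500), PhysicalArray("array-2", 300)])
    pool.create_volume("erp-data", 600)   # spans both boxes transparently
    print(pool.volumes["erp-data"], "free:", pool.free_gb, "GB")
```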

Reader ROI:

Virtual storage's staffing challenges
What to watch out for
Creating a holistic view



CIO VIEW
Do the benefits of storage virtualization outweigh its inherent challenges?

"Though I see a lot of value in storage virtualization, I feel several challenges, such as lack of the right skillsets, are problems in its adoption."
— Sanjeev Goel, CIO, Hindalco

"The challenges are mostly associated with the implementation. If deployed properly, enterprises can reap numerous benefits, which definitely outweigh the challenges."
— Rajesh Munjal, Head-IT, Carzonrent

"The benefits surely outweigh the challenges of storage virtualization, as it allows organizations to harbor larger volumes of data inexpensively and scale seamlessly on demand."
— Rajesh Chopra, VP-IS, Samsung India

The technology is emerging fast onto the enterprise scene for good reasons: in many cases, it can reduce the management burdens associated with storage and offer better models for datacenter migrations, backup and disaster recovery. Enterasys Networks reaped these benefits recently when it moved a datacenter from Boston into its headquarters in Andover, Massachusetts. "In days gone by, before storage virtualization, that might have been an all-day, if not an all-week kind of process," says Trent Waterhouse, vice president of marketing, Enterasys. "Because of the storage virtualization technologies, the entire move happened in less than 30 minutes." There are still common pitfalls that storage administrators should ponder, as well as questions they should ask, before they roll out a storage virtualization project. Here's a look at some of the top issues.

Managing Capacity

"Make sure you size it correctly and really understand how much horsepower [your applications need]," Smith says. These concerns are especially true when it comes to thin provisioning, a component of virtualization technology that lets an IT administrator present an application with more storage capacity than is physically allocated to it. This eliminates the problem of storage over-provisioning, in which storage capacity is pre-allocated to applications but never used. With thin provisioning, more than 100 percent of storage capacity can be allocated to applications, but capacity remains available because it won't be consumed all at once. You can play it safe by allocating small volumes that never exceed the physical storage, or allocate as much as you want to each application, then monitor your systems closely, says Themis Tokkaris, systems engineer at Truly Nolen Pest Control. It's best if you can find a happy balance between those two extremes. "You have to monitor your pool so you don't run out of space, because that would really crash everything," Tokkaris says.

With storage virtualization, allocating storage is easy — perhaps too easy. "You have the ability to affect more systems in the whole fitting in server virtualization forest if you do something," says Jonathan Smith, CEO of A common question is whether it makes sense to virtualize ITonCommand, who cautions fellow IT shops to pay close storage if you're not also using server virtualization. The short attention to both the storage and performance needs of each answer is yes — though it's true you won't get as much flexibility application. "You just didn't have that power before. Now all of as IT shops that virtualize both servers and storage. a sudden you can do whatever you want." Nevertheless, there are benefits to just virtualizing storage. Smith, who is using LeftHand Improved disaster recovery, Networks virtualization on availability and data migrations HP storage, says an IT pro can all be gained without might see a lot of empty space having virtual servers, says in a given storage volume product marketing manager and be tempted to fill it up. Augie Gonzalez of storage To manage the increasing volume of data Overusing a resource, however, virtualization vendor DataCore can decrease performance if Software. In addition, storage To manage complex and fragmented storage the storage is allocated to a virtualization by itself can infrastructure database or some other I/Oprovide thin provisioning, intensive application. as well as the simplified

Why CIOs Turn to Storage Virtualization



management structure that comes with pooling storage devices and managing them from a central console. On the flip side, virtualizing servers without virtualizing storage is problematic. It doesn't make sense to have multiple virtual servers on a physical machine that aren't able to share data, says Enterprise Strategy Group (ESG) analyst Mark Peters. "You can gain tremendous benefits from storage virtualization, even without server virtualization. It's harder the other way around," Peters says.
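Tokkaris' "monitor your pool" advice from the Managing Capacity section is easy to prototype even before you invest in vendor tooling. The sketch below is illustrative only: the pool names, capacities and thresholds are made up, and in practice the figures would come from your array's or virtualization layer's own reporting interface.

```python
#!/usr/bin/env python3
"""Minimal sketch: watch thin-provisioned pools so over-allocation never
turns into an out-of-space crash. All names and numbers are hypothetical."""

# Thin provisioning routinely allows allocated capacity > physical capacity.
POOLS = {
    "pool-finance": {"physical_tb": 20.0, "allocated_tb": 32.0, "used_tb": 15.5},
    "pool-web":     {"physical_tb": 10.0, "allocated_tb": 11.0, "used_tb": 4.2},
}

WARN_AT = 0.80      # warn when real usage reaches 80% of physical capacity
CRITICAL_AT = 0.90  # escalate well before the pool is actually full

def check_pool(name, pool):
    utilization = pool["used_tb"] / pool["physical_tb"]
    overcommit = pool["allocated_tb"] / pool["physical_tb"]
    if utilization >= CRITICAL_AT:
        level = "CRITICAL"
    elif utilization >= WARN_AT:
        level = "WARNING"
    else:
        level = "ok"
    print(f"{name}: {utilization:.0%} of physical used, "
          f"{overcommit:.1f}x overcommitted [{level}]")

for name, pool in POOLS.items():
    check_pool(name, pool)
```

Run on a schedule, a check like this gives you the early warning Tokkaris describes while still letting you enjoy the flexibility of over-allocating.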

Virtualization in a Heterogeneous Environment

Given that virtualization is designed to combine multiple storage devices, it's not immediately obvious why it makes sense to virtualize your storage if it all comes from a single vendor. There are compelling reasons, however, says storage analyst Arun Taneja. "A lot of people think storage virtualization has a prerequisite of heterogeneity, that it only comes into play when storage from three companies is involved," he says. "I say, forget it, it has value even if you are stuck with a single vendor." The storage market is more proprietary than just about any other IT space, and this creates problems even if you have just one storage vendor, Taneja says. Say you're an EMC customer with two Symmetrix DMX boxes, and "you just want to combine the power of those two boxes and manage it as one," Taneja says. "[Without storage virtualization] you can't do it. That's how ridiculous the world of storage is." This ridiculous level of exclusivity in the storage market obviously takes on a new dimension when you're managing storage from multiple vendors. That leads to the next issue.

Choosing a Vendor

Enterprises' primary procurement dilemma is whether to purchase storage-virtualization products from a storage vendor or a third party. If your true objective is flexibility, especially if you're planning major data migrations, a third party is the way to go, Taneja says. Such vendors as FalconStor Software and DataCore are capable of managing storage from multiple vendors simultaneously, whether they are EMC, HP, IBM or Hitachi. Truly Nolen chose a third party even though the company uses only HP storage. The company evaluated virtualization vendors including HP, EMC, and Dell EqualLogic, but settled on their vendor because it was less expensive and offers the flexibility of using whichever hardware vendor it likes, Tokkaris says. The major storage vendors promise to be able to manage a heterogeneous environment. Examples include IBM's SAN Volume Controller, NetApp's V-Series, and EMC's Invista. As a general rule, though, vendors support their own storage products first and others second, if at all. "They always support their own systems first," Taneja says. "That means EMC's Invista supports DMXs and Clariions, and they might support some other foreign devices; but the support for foreign devices always lags, and support for foreign devices


is always incomplete. The whole idea is don't support your enemies' boxes." Peters predicts that as storage virtualization becomes more common, market pressure will force vendors to do a better job supporting their rivals' technology. If you get storage from just one vendor, however, the solution is simple. "I say to the IT people I talk to, if you're a Hitachi Data Systems customer and you like working with them and you're stuck with them, just buy their virtualization product to make life more manageable within Hitachi," Taneja says.

Analyze Skills in Four Steps

Step 1: Clearly understand what you're trying to achieve with storage virtualization. Is this project part of a broader virtualization deployment strategy? Or is it designed for a specific use, such as tiered storage, disaster recovery or basic resource management? Make sure you fully understand what you want to achieve (or overcome) by deploying storage virtualization.

Step 2: Assess your current skills and identify gaps. Look across your IT staff for relevant and related skills. Is your storage specialist virtualization-savvy? Do you have IT workers with years of relevant mainframe or system virtualization experience? Look at your virtualization project to see whether any specific platform integration will be required (for example, hypervisors, clustering or data sharing).

Step 3: Evaluate independent training and certification. Before conducting vendor analysis, make sure you've addressed potential skills gaps in order to make an assessment of different approaches to virtualization and how they might fit into your organization's existing infrastructure.

Step 4: Consider vendor-specific training. Storage virtualization approaches vary from vendor to vendor, so if you have selected a new vendor or are expanding work with an existing vendor, you will likely need some custom training to ensure that you're taking advantage of all the features the vendor provides.

Source: Storage Networking Industry Association, San Francisco




Sifting Through the Hype

By most accounts, storage virtualization is a no-brainer. Who wouldn't want to manage multiple storage devices from a single console, and gain data mobility that makes disaster recovery a breeze? Storage virtualization will be about as common as automatic transmissions in automobiles within a couple of years, ESG's Peters thinks. "There are certain technologies that are just smarter and better than people doing it manually," he says. Even storage virtualization vendors, however, can admit there are instances when the technology isn't a fit. Storage virtualization is not for everyone, says Kyle Fitze, an HP director of storage marketing. Virtualization actually adds a layer of complexity, he argues. You have to manage the individual storage devices, as well as the virtualization layer, he notes. Despite virtualization, you still have to perform such tasks as re-configuring devices after adding physical disks to storage arrays, he adds. As a general rule of thumb, the more complicated your storage environment, the more benefit virtualization brings. "There's a complexity/benefit tradeoff," Fitze says. "If their current environment is difficult to manage and complex . . . adding a virtualization layer can simplify that complexity. If it's a small, efficiently managed environment without data-protection challenges, then virtualization just for virtualization's sake is probably not a good idea."

Finding the Right Staff

As companies dive deeper into virtualized storage projects, IT leaders are getting a better understanding of the staff skills they need to make those projects succeed. The exact talents required depend on the type of storage implementation, but most employers say they're in the market for two kinds of IT worker: technicians with vendor-specific SAN or NAS knowledge, and systems administrators and IT architects who understand the complexities and interdependencies among applications, operating systems and I/O, all of which affect storage requirements. But the different approaches to storage virtualization demand different skills. For example, IT organizations that have created virtual server farms have typically relied on storage professionals who are knowledgeable about the types of platform being used and how best to allocate storage for those configurations, says Vincent Franceschini, chairman of the Storage Networking Industry Association (SNIA).


That's one reason why IT leaders and industry observers say systems administrators and IT architects have skills that can help organizations manage storage virtualization efforts. Workers with such backgrounds are typically adept at configuration management and understand how storage, or "block," virtualization interrelates with disciplines such as disaster recovery planning and server clustering, says Irwin Teodoro, director of engineering, systems integration at Laurus Technologies, a systems integrator. What's needed is targeted instruction in how virtualization works. For example, IT professionals who want to get involved with storage virtualization "need to know how the operating systems treat disk or what the disk limitations are to be successful in this environment," Teodoro says. Plus, systems administrators "are familiar with some form of data storage layout, and what you find is that 80 percent to 90 percent of storage administrators have backgrounds in systems administration," he adds. The importance of those technical and process interrelationships in storage virtualization efforts also helps explain why there's strong demand for IT professionals who have ITIL process-transformation experience, says Brian Brouillett, vice president of data center services at HP. Meanwhile, IT organizations crafting their own virtualized storage environments often use their existing SAN or NAS technologies and draw on IT staffers who are experienced with them, says Rick Villars, an analyst at IDC. Employees who are adept at tuning system performance and optimizing system utilization can help make those technologies more cost-effective in a virtualized environment, he says. That might also be a more financially prudent approach, according to David Foote, chief research officer at management consultancy Foote Partners LLC. Foote says employers prefer to develop their own IT staffers with virtualization skills — including those with sought-after security and networking acumen — instead of hiring contractors. That's partly because contract workers with such skills "don't come cheap," he notes. Even though some IT organizations have been involved with various types of virtualization for a few years, storage virtualization is still a brave new world. And there's still a lot to learn. CIO

Send feedback on this feature to editor@cio.in



Back Up

Twice is nice for most things — except storage. How smart backup solutions — that operate on narrow bandwidth — can save your organization time, money and effort. By Omair Siddiqui

Your business is only as redundant as the integrity of the data that you have stored on your servers. For companies that service customers in the cloud, if you can't offer 99.9999 percent uptime and absolutely ensure data backup and restoration, you might as well not be in business. There are a few issues at hand here. Not only must you ensure that the data is accurately and securely backed up, whereby every packet and byte is accounted for, but you must also ensure that when the time comes, the data is 'clean' enough to be plugged back into the system without a hiccup. It's the hiccup that companies need to avoid, which is why they look for ways to back up their data in the first place; however, they aren't always as proactive as the results they expect demand. There has to be a process for acquiring backups. Recent advancements in network and backup technologies have improved performance, making it easier to back up data over the network.


Reader ROI:
How data de-duplication makes backups easier
Why it can help with disaster recovery
The financial and security benefits



CIO VIEW
How can CIOs cut costs with smart backups?

"We use the right devices, the right software and a connected network to store data at the right locations to manage data backups smartly."
C. Subramanya, Sr. VP & Global CTO, HTMT

"The best way is to identify critical and non-critical data. Non-critical data can then be maintained on low-cost storage. CIOs should adopt a tiered storage approach."
Rajiv Seoni, CIO, Ernst & Young

"I think virtualization and de-duplication help doing this effectively and efficiently. We have virtualized all our existing servers and it has helped cut costs."
Yogesh Zope, VP-IT, Bharat Forge

The traditional process involves using tape drives at branch servers, where an end-of-day tape backup is usually made and physically sent to the head office. There are obvious problems with physically moving data — the unavailability of qualified technical resources to properly handle backups, verification of the backed-up data before shipping it to the head office, the risk of damaging the tape, data loss or theft during transportation; the list is quite endless. More importantly, because backup in the current scenario is often done on an ad hoc basis, when an IT administrator tries to sync the data into the datacenter, they find that it was not a successful backup to begin with. But most of these issues are usually revealed beyond the point of no return — when the restoration or a DR (disaster recovery) drill is performed. Thanks to technological advancements, there are now solutions available which support a wide variety of operating systems and applications and which can take optimized backups over WAN links.

10%
The amount storage equipment and management software represents in the average IT hardware budget.
Source: Forrester

Fast and Lean

The solution in this case is actually a paradigm shift, making use of modern data protection technologies to cope with the ever-increasing backup data vis-à-vis bandwidth. It uses fingerprint technology to distinguish unique file segments and maintain a check on all the redundant data in the remote sites. The solution that addresses these requirements includes a storage pool at each remote location, which then replicates the data over the WAN. These agents are thin software deployed in the remote servers. The storage software makes sure to send only new, unique segments of a file to a local storage pool, which automatically reduces the size of the transfer. It then becomes the job of the pool to check the uniqueness of the file across all local agents. It only replicates unique file segments to the main storage located in the centralized datacenter. This minimizes WAN bandwidth requirements and allows scalability because of the reduced storage capacity requirements. A storage pool on the remote site optimizes the data over all the branch's clients by identifying unique contents and storing backup data locally. This shortens backup and restore tasks, and enables synchronization with central locations. Instead of shipping tapes, the files can be backed up over the WAN to a central storage.

The solution is also able to deal with one of the most important aspects of data security over the WAN by encrypting every file segment that it sends to the storage. The data in transit, as discussed earlier, comes to life in this segment of the solution whereby the data is encrypted before it is sent over the WAN to the storage, ensuring that the data is secure during the communication. This architecture eliminates the risk of accidental loss of tapes and unauthorized access of data, both in transit and at rest.

The solution has a very tangible return on investment. It is cost effective because the alternative would be to deploy separate tape drives, media, backup software, and technical support at each distinct site. There are then additional costs involved in the administration of each site and management of the off-site storage of the tape media. Since the solution removes the need for on-site tapes, the customers who have deployed this have been able to justify their investment in a relatively short span of time.
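To make the segment-fingerprinting idea above concrete, here is a minimal, vendor-neutral sketch: a file is cut into fixed-size segments, each segment is identified by a SHA-256 hash, and only segments the central pool has not seen before are shipped. Real products typically use proprietary, variable-size (content-defined) chunking and their own transfer protocols; the segment size, function names and callable below are illustrative assumptions, not any vendor's API.

```python
import hashlib

SEGMENT_SIZE = 4 * 1024 * 1024  # 4 MB segments; commercial products vary

def backup_file(path, known_fingerprints, send_segment):
    """Ship only segments whose fingerprint the central pool hasn't seen.

    known_fingerprints: set of SHA-256 hex digests already stored centrally.
    send_segment: callable that ships (fingerprint, bytes) over the WAN.
    Returns the file's 'recipe', the ordered list of fingerprints needed
    to reconstruct it at restore time.
    """
    recipe = []
    with open(path, "rb") as f:
        while True:
            segment = f.read(SEGMENT_SIZE)
            if not segment:
                break
            fingerprint = hashlib.sha256(segment).hexdigest()
            if fingerprint not in known_fingerprints:
                send_segment(fingerprint, segment)   # new data: send it
                known_fingerprints.add(fingerprint)
            recipe.append(fingerprint)               # duplicate: reference only
    return recipe
```

A restore walks the recipe and fetches each segment from the central pool, which is why identical data never has to cross the WAN twice.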

What to Look Out For in Data De-duplication

Data redundancy is a primary contributor to the explosive growth in data. Initially, de-duplication focused on eliminating data redundancy in specific cases like full backups, e-mail attachments, and VMware images. Over time, however, customers have noticed the pervasiveness of duplicated data. Test and development data multiplies across an organization: replication, backup, and archiving create multiple data copies scattered across your enterprise, and sometimes users simply copy data to multiple locations for their own convenience. Studies estimate that multiple copies of data require organizations to buy and administer two- to 50-times more storage than they need. Given that impact on the bottom line, organizations are recognizing that de-duplication needs to become an integrated and mandatory element in their overall IT strategy. But how do you know it's time to invest in it?

Candidates for de-duplication
The best candidates for de-duplication solutions are mid-size or enterprise customers experiencing issues with:
- Exponential growth of data, resulting in out-of-control storage costs.
- Shrinking or inadequate backup windows.
- Longer recovery times, especially for older data not on the primary backup media.
- Cost, risk and complexity of sending tapes to disaster recovery sites.
- Slow throughput on both backup and archiving systems.
- e-Discovery, compliance and SLA requirements.
- Bottlenecks in expensive LANs and WANs.

What to look for in a de-duplication solution
- The ability to scale without expensive hardware upgrades.
- More recovery points, with shorter recovery times.
- Point-and-click de-duplication management.
- Built-in reporting of de-duplication across vendors, data types, sources and platforms.
- Tight integration with all necessary apps to minimize downtime.
- Single-solution simplicity for ease of deployment and administration.
- Ability to rapidly and securely recover business-critical data across all locations, applications, storage media and points-in-time.
- D2D2T-optimized for backup performance and reliable data recovery.
- Fast, comprehensive search to aid in recovery.
- Data integrity and security features.
- Built-in DR capabilities.
- Data classification.
- Cost-effective and timely eDiscovery.
- Use of a common technology platform.
- Single point of management.
— Paul McClure

Cashing On It

Innovative Integration recently deployed the solution at Bank Islami, ensuring that it had a scalable, high-performance data protection architecture for the bank's Linux environment. The bank's environment includes more than 100 servers and a centralized pool of Network Appliance storage. The problem the bank had was consolidating the data from its more than 100 branches across the major cities, located all over Pakistan. The institution was relying on a local data protection solution, built on an Open Source backup software. Almost all of the PCs and servers operated by the bank run a SuSE Linux environment, which extends to the branch network as well. "Each Bank Islami branch contains a file server which holds files from the specific branch users," says Asad Alim, head-IT at Bank Islami. "At that time, there was no option other than to place a tape drive in each of the branches and use conventional scripts to perform the requisite backup." Using the traditional method, it could take up to a day to manage a recovery. As long as the tape was readable, it could always restore successfully, provided the tapes were acquired from the remote location, the right tape containing the right block of information was located, and the restoration was completed before the tape had to be sent back to the site. "Now none of this hassle is involved," explains Alim.

Talking about bandwidth constraints, Alim says, "We were concerned about the pressure the backup would put on the network, but despite the 256Kbps bandwidth connectivity, the performance has proven to be stable." Changes in data are first compressed at source and then sent across the WAN to the central storage, where they are decompressed and then stored in a duplicated fashion. The solution employs an intelligent algorithm for data transfer: in case a backup fails during the process due to link failure or connectivity loss, the backup will resume from the point it was interrupted. "The cost savings are a great endorsement but what is most important is that the branch data is now secure. You have to remember that in the past, we could never be 100 percent certain that we could restore a lost file. Now we are. Today, we are reducing our reliance on tapes for disaster recovery with secure replication of the data we backup," says Alim. CIO
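The resume-after-interruption behaviour Alim describes can be approximated with a simple offset-tracking loop. This is an illustrative sketch, not Bank Islami's actual software: the state file, chunk size and the `send_chunk` callable are all assumptions standing in for a real transfer agent.

```python
import json
import os

STATE_FILE = "backup_resume_state.json"  # remembers how far each file got
CHUNK_SIZE = 256 * 1024                  # modest chunks suit a 256Kbps link

def resumable_backup(path, send_chunk):
    """Stream a file in chunks, resuming after a dropped WAN link.

    send_chunk(offset, data) must raise on failure (e.g. ConnectionError);
    the committed offset is persisted only after a chunk is acknowledged,
    so a retry picks up exactly where the last successful chunk ended.
    """
    state = {}
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            state = json.load(f)
    offset = state.get(path, 0)

    with open(path, "rb") as f:
        f.seek(offset)
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            send_chunk(offset, data)        # may raise if the link drops
            offset += len(data)             # commit progress only on success
            state[path] = offset
            with open(STATE_FILE, "w") as out:
                json.dump(state, out)

    state.pop(path, None)                   # finished: clear the resume point
    with open(STATE_FILE, "w") as out:
        json.dump(state, out)
```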

Send feedback on this feature to editor@cio.in



Blindsided! These companies thought they had their stored data locked tight, but they were wrong. Here's how you can avoid a similar fate. By Mary Brandel


Think you can guess the No. 1 threat to the security of your stored data? If you said hackers, or even trouble-making insiders, you'd be wrong. While malicious threats are an ongoing concern, it's your well-meaning employees who are more likely to unknowingly expose your company's stored data through, say, a file-sharing network or a misplaced laptop. In fact, a recent Ponemon Institute study found that negligent insiders are by far the biggest threat to data security, accounting for 78 percent of all breaches.


Reader ROI:
Why you should beware of negligent insiders
How to avoid common storage mishaps
What is at stake



Data breaches, unfortunately, have become a way of life for corporate America. According to the Identity Theft Resource Center (ITRC), 2008 saw a 47 percent increase in documented data breaches from the year before. And those are just the ones that made the news, says Craig Muller, an identity theft expert and founder of Identity Doctor. "I get e-mails constantly telling me of breaches," he says.

The public is definitely feeling the pain. In a 2008 study by the Ponemon Institute, over half (55 percent) of 1,795 adult respondents across the US said they'd been notified of two or more data breaches in the previous 24 months, and 8 percent said that they'd received four or more notifications. But companies are still not sure how to protect themselves. In a Ponemon survey released in January this year, only 16 percent of the 577 security professionals who responded said that they were confident or very confident that current security practices could prevent the loss or theft of customer or employee data. One way to gain confidence is to examine actual breaches and learn from them. Here's a look at five common types of breaches, with advice about how to avoid similar mishaps.

Who's Stealing Your Data?
41% of data breaches in 2008 occurred as a result of hacking, insider theft and subcontractor breaches.
Source: Identity Theft Resource Center

CIO VIEW
Which secures data better from insider threat: encryption or role-based access?

"Role-based access is a sure-fire way of managing data security. Defining a role and mapping data into that role is the ultimate security measure."
Vishnu Gupta, CIO, Calcutta Medical Research Institute

"Encryption is most crucial for ensuring security of critical data because it prevents hacking. Even if encrypted data is hacked it is very difficult to decipher."
Chandan Sinha, CIO, GHCL

"Role-based access is crucial because it helps define authority and authorization. It ensures that the right person gets access to the right data."
Atul Luthra, Head-IT, PVR

1. Stolen Equipment

In May 2006, personal data on 26.5 million veterans was compromised when a laptop and a storage disk were stolen from the home of a subcontractor working for the US Department of Veterans Affairs. Both items were recovered, and arrests were made. The FBI claimed that no data had been stolen, but the incident prompted sweeping reform at the department. However, in January 2007, another breach occurred when a laptop was stolen from an Alabama medical facility, exposing personal data on 535,000 veterans and more than 1.3 million physicians.

Costs: By June 2006, the VA was burning through $200,000 (about Rs 1 crore) a day to operate a call center to answer questions about the breach. It also spent $1 million (about Rs 5 crore) to print and mail notification letters. It was given permission to re-allocate up to $25 million (about Rs 125 crore) to pay for those costs. Class-action lawsuits were also filed, including one demanding $1,000 (about Rs 50,000) in damages for each person affected. After the 2007 breach, the VA set aside an additional $20 million (about Rs 100 crore) for breach-related costs. And the department recently agreed to pay $20 million (about Rs 100 crore) to current and former military personnel to settle a class-action lawsuit.

Blinders: Lost or stolen equipment accounts for the largest portion of breaches — about 20 percent in 2008, says the ITRC. According to Bart Lazar, a partner in the Chicago office of law firm Seyfarth Shaw LLP, incidents involving lost or stolen laptops make up the majority of data-breach cases he works on.

Eye-openers: Lazar recommends restricting the placement of personal identifying information on laptops. For instance, don't tie customer or employee names to other identifiers, such as Social Security or credit card numbers; alternatively, you can truncate those numbers. Also, consider creating your own unique identifiers by, for example, combining letters from an individual's last name with the last four digits of his Social Security number. Second, require personal information on laptops to be encrypted, despite the potential cost of $50 to $100 per laptop (about Rs 2,500 to Rs 5,000 per laptop), says Lazar. This needs to be accompanied by consciousness-raising, says Blair Semple, storage security evangelist at NetApp and vice chairman at the Storage Networking Industry Association's Storage Security Industry Forum. "I've seen situations where people had the capability to encrypt but didn't," he says. "Scrambling the bits is the easy part; it's the management and deployment that's hard." Third, Lazar recommends policies requiring very strong passwords to protect data on stolen devices.
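Lazar's two suggestions (truncation and composite identifiers) can be sketched in a few lines. The keyed hash below is an extra safeguard beyond what he literally describes, and the key handling and example values are purely illustrative.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"   # illustrative only; keep real keys in a secrets store

def truncated(number: str, keep: int = 4) -> str:
    """Keep only the last few digits of an SSN or card number (e.g. *****1120)."""
    digits = number.replace("-", "")
    return "*" * (len(digits) - keep) + digits[-keep:]

def surrogate_id(last_name: str, ssn: str) -> str:
    """Combine letters from the last name with the last four SSN digits,
    then run the result through a keyed hash so the raw pair never has to
    sit on a laptop in the clear."""
    seed = f"{last_name[:3].upper()}{ssn[-4:]}"
    return hmac.new(SECRET_KEY, seed.encode(), hashlib.sha256).hexdigest()[:12]

print(truncated("078-05-1120"))                      # *****1120
print(surrogate_id("Ramachandran", "078-05-1120"))   # stable, non-reversible ID
```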

2. Insider Threat

In November 2007, a senior database administrator at Certegy Check Services, a subsidiary of Fidelity National Information Services, used his privileged access to steal records belonging to more than 8.5 million customers. He then sold the data to a broker for $500,000 (about Rs 2.5 crore), and the broker resold it to direct marketers. The employee was sentenced to over four years in jail and fined $3.2 million (about Rs 16 crore). According to company officials, no identity theft occurred, although affected consumers received marketing solicitations from the companies that bought the data.

In another high-profile case, a 10-year veteran scientist at DuPont downloaded trade secrets valued at $400 million (about Rs 2,000 crore) before leaving the company in late 2005 to join a competitor in Asia. According to court records, he used his privileged access to download about 22,000 document abstracts and view about 16,700 full-text PDF files. The documents covered most of DuPont's major product lines, including some emerging technologies. The scientist did this while in discussions with the competitor and for two months after accepting the job. He was sentenced to 18 months in federal prison, fined $30,000 (about Rs 15 lakh) and ordered to pay $14,500 (about Rs 7.2 lakh) in restitution.

Costs: In DuPont's case, the estimated value of the trade secrets was more than $400 million (about Rs 2,000 crore), although the government pegged the company's loss at about $180,500 (about Rs 90 lakh) in out-of-pocket expenses. There was no evidence that the confidential information was transferred to the competitor, which cooperated in the case. According to Semple, theft of customer information is nearly always more costly than theft of intellectual property. In Certegy's case, a 2008 settlement provided compensation of up to $20,000 (about Rs 10 lakh) for certain unreimbursed identity theft losses for all class-action plaintiffs whose personal or financial information was stolen.

Blinders: Nearly 16 percent of documented breaches in 2008 were attributed to insiders, says the ITRC; that's double the rate of the year before. One reason for this increase is that employees are being recruited by outsiders with ties to crime — a trend that accounts for half the insider crimes committed between 1996 and
2007, according to the CERT Coordination Center at Carnegie Mellon University. Insiders commit crimes for two reasons, CERT says: financial gain (as in the Certegy case) and business advantage (as in the DuPont case). In the latter, criminal activities usually start when the employee resigns, CERT says, but the thefts typically occur after they depart, having left secret access paths to the data they want. Insider threats are among the hardest to manage, Semple says, especially when the workers use privileged access. Eye-openers: A good precaution is to monitor database and network access for unusual activity and set thresholds representing acceptable use for different users, CERT says. That makes it easier to detect when an employee with a particular job designation does something beyond his normal duties. For instance, DuPont discovered the illegal activity because of the scientist's unusually heavy usage of its electronic data library server. If you suspect that a breach has occurred, CERT says it's important to act quickly in order to minimize the chance of information being disseminated and to give law enforcement agencies a chance to start investigating the case. Companies should also implement role-based access-control tools to maintain a high level of accountability over who is accessing valuable assets, Lazar says. Databases containing customer or employee information should allow very limited access. "How many people, on a daily basis, need to review Social Security numbers and addresses without permission?" he says. "Personal information should be protected at the same level as trade secrets." Muller recommends using data loss prevention tools to restrict personal data from being e-mailed, printed or copied onto laptops or external storage devices. Some of these tools provide alerts that inform administrators when someone tries to copy personal data and create a log file of such an event. "In a lot of cases, companies don't have proper audit trails in place," he says. It's also important to strengthen internal controls and audit measures by, for example, implementing iterative checks on network and database activity logs, Semple says. It's not enough to keep detailed logs; you also need audit measures in place to see if anyone has modified a log or illegally accessed it. "Unless there's some way to verify the log information wasn't tampered with, it's hard to know it's of value," he says. But in the end, technology isn't enough. "You need to find a way to ensure users you trust are worthy of that trust," Semple says.
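CERT's "thresholds representing acceptable use" idea is straightforward to prototype against whatever access log your database already produces. The log format, role ceilings and user names below are hypothetical; a production system would read real audit records and feed alerts into your monitoring stack.

```python
from collections import Counter

# Hypothetical per-role ceilings: how many sensitive-record reads per day
# count as "normal" for that job designation.
DAILY_LIMITS = {"support_rep": 200, "dba": 50, "scientist": 500}

def flag_unusual_activity(access_log, roles):
    """access_log: iterable of (user, records_read) events for one day.
    roles: dict mapping user -> role. Returns users worth a closer look."""
    totals = Counter()
    for user, records_read in access_log:
        totals[user] += records_read

    flagged = []
    for user, total in totals.items():
        limit = DAILY_LIMITS.get(roles.get(user, ""), 100)  # default ceiling
        if total > limit:
            flagged.append((user, total, limit))
    return flagged

# Example: DuPont spotted its case through unusually heavy library usage.
log = [("user_a", 40), ("user_a", 30), ("user_b", 900), ("user_b", 2100)]
print(flag_unusual_activity(log, {"user_a": "support_rep", "user_b": "scientist"}))
```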




3. External Intrusion

Out in the Open
Since 2006, the number of documented data breaches* has risen by over 40 percent annually.

Years    Documented Breaches    Records Exposed
2006     315                    20 million
2007     446                    128 million
2008     656                    36 million

*To qualify, breaches must include personal identifying information that could lead to identity theft, especially the loss of social security numbers. Five categories of data-loss methods were tracked, including breaches of data on the move, accidental data exposure, insider theft, subcontractor breaches and hacking.
Source: Identity Theft Resource Center

In January 2007, retailer The TJX Companies reported that its customer transaction systems had been hacked. The intrusions — which occurred between 2003 and December 2006 — gave hackers access to 94 million customer accounts. Stolen information was found to have been used in an $8 million (about Rs 40 crore) gift-card scheme and in a counterfeit credit card scheme. In the summer of 2008, 11 people were indicted on charges related to the incident, which was the largest hacking and identity theft case the US Department of Justice has ever prosecuted.

Costs: TJX has estimated the cost of the breach at $256 million (about Rs 1,280 crore). That includes the cost of fixing computer systems and dealing with litigation, investigations, fines and more. It also includes payments to Visa and MasterCard for losses they incurred. The Federal Trade Commission has mandated that the company undergo independent third-party security audits every other year for the next 20 years. However, others expect that costs may rise to $1 billion (about Rs 5,000 crore), which would include the costs of legal settlements and lost customers. According to an April 2008 Ponemon study, 31 percent of a company's customer base and revenue source terminates its relationship with an organization following a data breach. And in its recently released annual Cost of a Data Breach study, Ponemon found that breaches cost companies $202 (about Rs
10,100) per compromised customer record last year, compared with $197 (Rs 9,850) in 2007. Costs associated with lost business opportunities represented the most significant component of the increase. The average cost of a data breach in 2008 was $6.6 million, compared with $6.3 million (Rs 31.5 crore) in 2007. Blinders: According to a 2008 Ponemon study, data breaches by hackers rank a distant fifth in terms of security threats. Indeed, about 14 percent of documented breaches in 2008 involved hacking, according to the ITRC. That doesn't mean companies shouldn't be wary, however. In TJX's case, hackers infiltrated the system by 'war driving' and hacking into the company's wireless network. TJX was using subpar encryption, and it had failed to install firewalls and data encryption on computers using the wireless network. This enabled the thieves to install software on the network to access older customer data stored on the system and intercept data streaming between handheld price-checking devices, cash registers and the store's computers. Eye-openers: According to Muller, the WEP encryption that TJX used on its wireless network was insufficient — weaker even than what many home users have. "If from the parking lot you can gain access to the database, you need a higher level of data security and data encryption," he says. TJX had also stored old account information instead of permanently deleting it, Muller says.

4. Negligent Employees

The spouse of a telecommuting Pfizer employee installed unauthorized file-sharing software on the worker's company laptop, enabling outsiders to gain access to files containing the names, Social Security numbers, addresses and bonus information of about 17,000 current and former Pfizer employees. An investigation revealed that about 15,700 people had their data accessed and copied by people on a peer-to-peer network, and another 1,250 may have had their data exposed. Because the system was being used to access the Internet from outside of Pfizer's network, no other data was compromised. Costs: Pfizer contracted for a support and protection package from a credit-reporting agency, which includes a year's worth of free credit-monitoring service for those affected and a $25,000 (about Rs 12.5 lakh) insurance policy covering costs that individuals might incur as a result of the breach. Blinders: Careless insiders — not malicious ones — are the No. 1 threat to data security, according to a recent Ponemon study, in which IT professionals said 88 percent of all breaches involved negligent insiders. "If there were more employee awareness about security, the number of breaches would come way down," Muller says. In Pfizer's case, the employee's spouse had configured the software so that other users of the file-sharing network could access files the spouse had stored on the laptop, but that gave people access to Pfizer files, too. Combine negligent users and file-sharing software, and you've got a dangerous mix. Although most companies have outlawed P2P file sharing on their corporate networks, according to a 2007 study by Dartmouth College, many employees install it on their remote and home PCs.



Eye-openers: First off, IT needs to either ban P2P software entirely or set policies for P2P usage and implement tools to enforce those policies. "[Pfizer] should have done a better audit of their systems to stop employees from loading any software," Muller says. "You can take away their admin rights so they can't install anything." Also important is training, he says, so users understand the dangers of P2P, what makes a good password and other standard security practices. "There's a huge need for education so employees understand we're not trying to make things difficult but that bad things could happen," Semple notes. "It's having them understand, 'I can't do this, and here's why.' "

Clueless About Security

A survey revealed that SMB organizations are not clued in on basic security and storage measures.

Findings from Symantec's 2009 Global SMB Security and Storage survey indicated that organizations in the Asia Pacific and Japan (APJ) region lack basic security and storage measures. The report noted the low adoption of basic security measures in the region, with 56 percent of SMBs not having an end-point protection solution and 53 percent without a desktop backup and recovery solution. The report also indicated that 70 percent of these SMBs are extremely concerned with backup and recovery of data, followed by disaster recovery planning and strategy (64 percent), and archiving data and e-mails (56 percent). Yet, 53 percent of SMBs have not deployed desktop backup and recovery solutions, and 45 percent perform backups on a weekly or less frequent basis. These gaps in basic levels of security, despite an awareness of the current internal and external threats among the region's SMBs, are driving an increase in security breaches, with the most common causes being system breakdowns and hardware failure, human error and improper or out-of-date security solutions. Lack of budget (41 percent) and employee skills (40 percent) were cited as the main barriers to securing the SMB environment. The report also indicated that 52 percent of SMBs have previously suffered a security breach and that the risk of suffering a recurring security breach is higher than in other regions around the world. The survey revealed that 84 percent of IT spending by SMBs in APJ will increase (57 percent) or remain the same (27 percent) this year despite the current economic uncertainty. The survey also revealed that the top three concerns of SMBs are viruses, data breaches, and loss of confidential or proprietary information through USB and other devices.

"Small and medium businesses usually have limited time, money and expertise to secure and manage their information from external and internal threats. Often, more pressing business needs will take precedence over security, backup and recovery for computer and network systems, leaving businesses vulnerable to data and system losses and causing serious damage and business interruption," said Bernard Kwok, Symantec's senior vice president for Asia Pacific and Japan. "By automating key processes such as backup and recovery, SMBs can improve cost efficiencies and streamline manageability that allow them more resources and time to focus on their core businesses," he said. The survey covered 10 APJ countries including Japan, South Korea, India, Australia, New Zealand, China, Taiwan, Hong Kong, Singapore and Malaysia.
— By Jack Loo

5. Subcontractor Breaches

In November 2008, the Arizona Department of Economic Security had to notify families of about 40,000 children that their personal data may have been compromised following the theft of several hard drives from a commercial storage facility. The drives were password-protected
but not encrypted. The agency says no information was used to commit fraud. Costs: Subcontractor breaches are more costly than internal incidents, averaging $231 (about Rs 11,550) per record compared with $171 (about Rs 8,550), according to Ponemon. Blinders: According to Ponemon's annual cost study, breaches by outsourcers, contractors, consultants and business partners are on the rise, accounting for 44 percent of all cases reported by respondents last year. That's up from 40 percent in 2007. In the ITRC study, 10 percent of breaches were associated with subcontractors in 2008. Eye-openers: Companies need to create service-level agreements that are airtight and specific, and then ensure that subcontractors are in compliance and penalize them if they aren't. In cases that involve the use of back up tapes or disks, Semple says, insist on encryption and password protection. CIO

Send feedback on this feature to editor@cio.in



Cloud Computing

Amazon, Nirvanix, and others promise easy data storage and infinite scalability for Web companies, but perils such as service outages and security breaches can’t be overlooked. By Jon Brodkin

Michael Witz, founder of online file-sharing site FreeDrive, knows the horror of that proverbial middle-of-the-night call: "The site is down." He lived the nightmare last fall when a fiber link between a Web server and 6TB of stored customer data went poof. With no access to customer data, Witz knew the California company needed a better storage strategy. "Building your own storage system — it's not easy. It requires special knowledge. It's not in our core competency," Witz says. "We had a choice. We could build out our storage infrastructure, outsource it to a datacenter, make a capital investment in our hard disks, or we could outsource it to [a cloud service provider] like Nirvanix or Amazon." Deciding on the latter option was "really easy," Witz says. Indeed, cloud storage — or, as he calls it, "an Internet hard drive for companies" — is emerging as an attractive storage option for an increasing number of companies that depend on delivering services over the Web. That's because with cloud storage, data resides on the Web, too, located across storage systems rather than at a designated corporate or hosting site. Cloud-storage providers balance server loads and move data among various datacenters to ensure that information is stored close — and so delivered quickly — to where it is used. Cloud-storage users typically don't know where their data is stored at any given time.

Big Clouds

The best-known cloud-storage service is Amazon's two-year-old Simple Storage Service (S3). Cloud storage also is available from start-up Nirvanix, which launched in October 2007; and now through Mosso, a Rackspace company that unveiled its offering recently. Amazon remains tight-lipped about its cloud infrastructure, but Nirvanix says it uses custom-developed software and file-system technologies running on Intel-based storage servers at six locations


Reader ROI:
Why cloud storage isn't mature yet
How moving to the cloud can expose your data
What it takes to trust cloud storage providers



CIO VIEW
Would you trust your information to the cloud?

"No. I don't think I would be comfortable using someone else's ISP and opening up my company's sensitive data to the cloud. It can be very risky."
Hanuman Jayaram, Head-IT, Manipal Cure & Care

"We deal with confidential data and would not like to expose it to the cloud. Also, we would have to put in place more barriers and this would be a huge expense."
Mohan Shenoy, Chief Group Manufacturing IT, Reliance Industries

"Trusting storage to the cloud is putting it in a demilitarized zone. Security should be policy-based so that its implementation is tight."
Sumangal J. Bhatwadekar, Head-Technical, Unilog Content Solutions

on the United States' East and West coasts, as well as in Asia and Europe. By year-end, the company expects to expand that number to more than 20. Mosso initially is delivering its storage cloud from Rackspace's Dallas datacenter, with another site in the United Kingdom likely to be added soon, the company says. Geoff Tudor, Nirvanix co-founder, compares cloud storage to electrical service. After all, he says, when you turn on a light switch, you don't know exactly from where each individual electron originates. The same applies to stored data in the cloud. FreeDrive's Witz has been using the Nirvanix cloud storage since November. He says he likes that Nirvanix can convert videos to flash format automatically and send data directly from the cloud to a FreeDrive customer. Without cloud storage, all data would have to be relayed through FreeDrive's own Web server. After he lived through numerous outages with hosted provider Media Temple, scalability is what drew Theron Parlin, CTO of personal-finance social-networking site Geezeo, to cloud storage. "We can store 50 times our current 251GB, and never have problems. And, the price will stay low," he says of the company's reliance on Amazon S3 (as well as on Amazon's Elastic Compute Cloud). Amazon and Nirvanix are the leading cloud-storage providers, but many of the technology world's biggest names are focused keenly on cloud storage and cloud computing in general. Google appears ready to launch an online storage service informally known as GDrive. EMC reportedly is preparing a massive storage cloud with technologies code-named Hulk and Maui. IBM, meanwhile, has several cloud-computing initiatives under the Blue Cloud umbrella.

Caution in the Clouds

All this activity has turned cloud storage into something of a buzzword; if used too broadly the term could end up referring to any type of Internet-accessible storage, analysts caution. Enterprises should think of cloud computing as massively scalable IT capabilities delivered to external customers


using Internet technologies, and cloud storage as that which is allocated to cloud-computing applications, says Stan Zaffos, a Gartner analyst. In addition, enterprises have to remember that as compelling as cloud storage is, it isn't without potential problems. Amazon S3, for example, suffered an outage for several hours in February that resulted in numerous customer Web applications going offline. Amazon attributes the outage to elevated numbers of authentication requests, and has addressed the incident by adding significant amounts of capacity to the authentication service and improving the system that monitors the proportion of requests that are authenticated, says Adam Selipsky, a vice president for Amazon Web Services. Amazon hopes to avoid a recurrence, but notes that no data was lost as a result of the outage because Amazon stores multiple copies of every object in multiple locations. Enterprises also must consider the possibility that data could be stolen or viewed by people who are not authorized to see it. "Any time you let the data out of your computer room you're asking for trouble from a security point of view," says Joel Snyder, senior partner with Opus One. Potentially the biggest worry is that a company's data could be stored next to a competitor's, says Andrew Reichman, an analyst with Forrester Research. "The storage industry isn't fully there yet with really secure multi-tenancy offerings," he says, adding that data on which many enterprise applications depend probably shouldn't be allocated to cloud storage. "Cloud-storage providers recognize they need to do this, but it's going to take some time for the technology to catch up," he adds. In the meantime, analysts say, cloud-storage users need to be proactive about security, encrypting sensitive data, and securing data in transit with such technologies as SSL. CIO

45% of CIOs say security is their greatest concern in moving to the cloud.
Source: CIO Research
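One way to act on the advice above is to encrypt objects on your own servers before they ever reach a provider. The sketch below uses the `cryptography` package's Fernet recipe; the `upload` function is a placeholder standing in for whatever cloud client you use, and the bucket name, file name and key handling are illustrative assumptions rather than any provider's actual API.

```python
from cryptography.fernet import Fernet

def upload(bucket: str, name: str, blob: bytes) -> None:
    """Placeholder for your cloud client's put/upload call."""
    print(f"uploading {len(blob)} encrypted bytes to {bucket}/{name}")

def encrypt_and_upload(path: str, bucket: str, key: bytes) -> None:
    """Encrypt a file locally, then ship only ciphertext to the provider.
    The provider never sees the key, so a breach or a nosy co-tenant on
    its side exposes nothing readable."""
    fernet = Fernet(key)
    with open(path, "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    upload(bucket, path + ".enc", ciphertext)

# Example invocation; assumes the file exists locally. Keep the key in your
# own key-management system, never alongside the data in the cloud.
key = Fernet.generate_key()
encrypt_and_upload("customer_report.csv", "example-backup-bucket", key)
```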

Send feedback on this feature to editor@cio.in




Storage virtualization is hot, and for good reason. But its benefits bring added layers of complexity. By Gary Anthes


There's an age-old choice in IT — whether to adopt a best-of-breed strategy for the power and flexibility it can bring, or go with a single vendor for accountability and simplicity. J. Craig Venter Institute (JCVI) believes in best of breed. The genomic research company runs Linux, Unix, Windows and Mac OS in its datacenter. For storage, it draws on technology from EMC, NetApp, Isilon, DataDomain and Symantec.


Reader ROI:
The pitfalls of storage virtualization
Ways around the challenges



"It's quite a heterogeneous environment," says computer systems manager Eddy Navarro. "Thankfully, we have a very talented staff here." And a talented staff was just what was needed to master the many flavors of storage virtualization, which can make multiple physical disks look like one big storage pool. Like JCVI, many organizations are enjoying the lower costs and added flexibility of storage virtualization. But the benefits can come with some headaches. Five IT leaders who have led successful storage virtualization projects offer advice for relieving the pain.

29% of Indian CIOs say that they will spend the bulk of their storage budget on storage virtualization.
Source: CIO India Research

Headache 1: Managing Multiple Vendors

How to cope: Study the documentation, do your homework, and ensure that your approach has been tried before and is certified by the vendors, says Navarro. And if you don't have experienced technical staff, he adds, be prepared to hire some outside professional help.

Headache 2: Dealing with Extra Technology Layers

Even organizations with not-so-complex technology environments report that although virtualization can ultimately simplify storage administration, putting it in place and tuning it is a demanding job. Lifestyle Family Fitness, a rapidly growing chain of 60 health clubs is a Microsoft shop built around SQL Server and .Net development of Web applications. For storage virtualization, it uses IBM's SAN Volume Controller (SVC), disk arrays from IBM and EMC, and IBM Brocade SAN switches. IBM DS4700 disks provide 4Gbit/sec. Fibre Channel connections for the company's online transaction processing applications, while the Clariion drives handle less-demanding jobs like backups. The IBM SVC was brought in to resolve an I/O bottleneck. The high-speed Fibre Channel drives and cache on the SVC appliance opened up the bottlenecks almost like an I/O engine would, says Mike Geis, director of IS operations. Moreover, the setup allowed Lifestyle Family Fitness to use its new IBM-based SAN while continuing to use its old EMC SAN. "In the past," he says, "you'd bring in a new SAN and have to unload the old one." Geis says the SVC architecture promises vendor independence. He says he has a great relationship with IBM, but if that ever changed, he could easily bring in drives from another supplier and quickly attach them directly to his storage network. "We aren't held hostage by the vendor," he adds. But the advantages come with some difficulties, Geis notes. "You are adding complexity to your environment. You add

CIO VIEW
Is vendor lock-in a barrier to storage virtualization?

"No, I don't think so. The performance and quality of the delivery of data across platforms can be impediments to the successful utilization of storage virtualization."
Amit Gupta, AVP & CIO, K Raheja

"No. In fact, when a storage virtualization solution kicks in, the first thing it does is to control the vendor lock-in enterprises might have."
Subodh Dubey, VP-IT, Fidelity Business Services

"Yes, vendor lock-in is a barrier to storage virtualization. This is due to the lack of standardization and the inability of vendors to produce a long-term, robust roadmap of solutions."
Sebastian Joseph, VP-IT, Mudra Communications



overhead, man-hours of labor, points of failure and so on. You have to decide if it's worth it."
How to cope: "Pick strong partners — both vendors and implementation partners — and make sure you are not their guinea pig," Geis advises.

Headache 3: Scheduling Maintenance and Backups

Ron Rose, CIO at travel services company Priceline.com, takes a holistic view of virtualization. In fact, he speaks of a "virtualization sandwich" consisting of integrated technologies for server virtualization, storage virtualization and server provisioning. He uses 3PAR InServ S400 and E200 tiered disk arrays for storage, BladeLogic tools for provisioning, and 3PAR Thin Provisioning and other products for virtualization.

Monitoring It

One of the ideas behind virtualization is to 'abstract' the physical layer in IT from the software layer, to in essence mask hardware boundaries from the application and the application's users. But the benefits of hiding the physical resources — greater flexibility, better utilization and potentially easier administration — come at a price, says Eddy Navarro, computer systems manager at J. Craig Venter Institute. "So you have this abstracted area of storage, and you have a performance issue," he says. "In the traditional model, it's a straightforward deduction to say that this area maps to these disks so that must be where the hot spot is. But with virtualization, if things are running slowly and there's this amorphous pool of storage, where exactly is the problem? You want to make sure you have the proper tools to tell you where the problems are." Storage virtualization vendors have tools for performance monitoring and troubleshooting. "But with these enterprise tools, it's a matter of installing agents everywhere, and it can balloon out of control," Navarro warns. "The agents themselves can cause this giant admin task. So is it really worth it to have this huge application monitoring things, or do you want a little bit of smarts and do some in-house work to write some custom scripts to tell you what's going on?" JCVI has chosen to apply carefully targeted smarts via some homegrown software. "Fortunately," Navarro says, "we have the technical expertise to do that. If you don't, it's not easy to set that up."
— Gary Anthes
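The kind of homegrown script Navarro alludes to can be very small. This sketch answers his "where exactly is the problem?" question by mapping a slow virtual volume back to the physical disks behind it; the mapping table, latency figures and threshold are made-up placeholders for numbers your array or virtualization layer would report.

```python
# Illustrative only: trace a sluggish virtual volume to its backing disks.
VOLUME_TO_DISKS = {
    "vol_research": ["disk03", "disk07", "disk11"],
    "vol_homedirs": ["disk01", "disk02"],
}
DISK_LATENCY_MS = {"disk01": 4, "disk02": 5, "disk03": 6, "disk07": 38, "disk11": 7}

def find_hot_spots(volume, threshold_ms=20):
    """Return the physical disks behind a virtual volume whose average
    latency exceeds the threshold, i.e. the likely hot spots."""
    return [disk for disk in VOLUME_TO_DISKS.get(volume, [])
            if DISK_LATENCY_MS.get(disk, 0) > threshold_ms]

print(find_hot_spots("vol_research"))   # ['disk07']
```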

42

J U N E 1 , 2 0 0 9 | REAL CIO WORLD

Feature_5_Storage Virtualization.indd 42

Rose says most companies could reduce their server and storage footprints by 20 percent to 40 percent using a virtualization sandwich. "And not only are there cost savings; there are green benefits. It's good for the planet," he says. But like most practitioners of storage virtualization, Rose says there is no free lunch. "You have to plan your architecture more thoroughly and look at all your applications. The more systems you have running on the box, for example, the more challenging it is to schedule maintenance. If you have 10 applications running on that chunk of infrastructure that you are going to do maintenance on, you have to schedule it and move the apps to other machines in an orderly manner." He says 3PAR has powerful tools that can hide much of the complexity of virtualization, but the kind of maintenance scheduling needed "is not a system or tool issue; it's a process and discipline issue." Similarly, ensuring reliability requires extra care, Rose says. "As with maintenance, you don't want to get too many eggs in each basket," he explains. Priceline keeps critical files on three machines — what it calls "tri-dundancy." How to cope: "Think of your entire virtual environment, not just storage," Rose advises. "You will get better ROI in aggregate if you think through all three layers of the virtual sandwich. And getting a little consulting from real experts early in the process will help you anticipate the entire environment."

HeadacHe 4

setting up management tools Like Rose, Jon Smith takes a very broad view of virtualization. "For me, a server is no different from a hunk of data storage, and I can move it wherever I want," says the CEO of ITonCommand, a hosted IT services provider. "Whether it's running the operating system or it's just data, it's all storage." Smith says that eventually virtualization technology will enable any data to go anywhere — on direct-attached storage when high performance is needed, or somewhere on a SAN when speed is less critical and a higher level of redundancy is required. ITonCommand uses HP BladeSystem c3000 disks for directattached storage, and LeftHand Networks Virtual SAN Appliances and LeftHand's SAN/iQ software on an HP StorageWorks array for storage virtualization on its iSCSI SANs. The company is now standardizing on Microsoft's Hyper-V hypervisor, part of Windows Server 2008, for server virtualization and on Microsoft's System Center Virtual Machine Manager for administration. The glue that holds everything together, Smith says, is Microsoft's new Virtual Machine Manager for provisioning and managing physical and virtual computers. "With VMM on a display, a system admin can look at all the virtual servers' hypervisors across my whole environment, all in one spot, and adjust them," he says. "It's pretty cool stuff." It's cool when it's set up, but getting there isn't so easy, he acknowledges. "System Center is new, and so is [Hyper-V]. It

Vol/4 | ISSUE/14


Storage Virtualization took us a while to figure out how to connect all our old virtual machines into the hypervisor. It's not the easiest setup out of the box." Smith says continued virtualization at ITonCommand will result in a true "utility computing" model for his clients. "It will take a while, but people will stop thinking of physical boxes running one operating system. Hardware will be non-existent to the end user. It's just going to be, 'How much horsepower and storage do you want?'" How to cope: "Find an expert who knows virtual technology and knows Microsoft System Center," says Smith.

HeadacHe 5

getting the right gear Babu Kudaravalli, senior director of business technology operations at National Medical Health Card Systems, gives this definition of storage virtualization: "The ability to take storage and present it to any host, of any size, from any storage vendor." He's pursuing those goals with three tiers of storage, each supported by a different HP StorageWorks product. The technology used in each tier is chosen for the mix of cost, performance and availability it offers. Kudaravalli uses high-end HP XP24000 disk arrays for the most demanding and mission-critical applications, lower-cost Enterprise Virtual Array 8000s for second-tier applications, and Modular Smart Array 1500s for archiving, test systems and the like. His five SANs hold 70TB of data, of which about 35TB in the EVA and MSA tiers is virtualized, he says. Kudaravalli says there are several things to be careful about when buying storage virtualization products. First, be aware that vendors typically certify their products to work with the latest versions of other vendors' products. If you don't have those exact versions, your interfaces might not work. He says this is a good reason to think about replacing your old gear when you go to a heterogeneous storage environment — or at least to keep current on the latest releases. Second, Kudaravalli says that although virtualization should ultimately simplify storage management, setting up a virtual system is complex. Careful planning and an understanding of the limitations of products are crucial. A few years ago, vendors had very different definitions and standards for virtualization, says Kudaravalli. "But now they seem to be coming together," he observes. "They are trying to offer similar features and capabilities, but it is not completely mature." How to cope: Although storage virtualization is often undertaken to better utilize existing resources, it may have a perverse impact, says Rick Villars, a storage analyst at IDC. "The whole point of virtualization is to make it easier to provision or move a resource, to create a new volume or another snapshot, or to migrate data from one system to another," he says. "But when you make something easy to do, people are induced to do it more often." According to Villars, volumes, snapshots, data sets and even applications can needlessly proliferate. "You can go from being

Vol/4 | ISSUE/14

Feature_5_Storage Virtualization.indd 43

Virtualization p pain relieVers 1 be realistic: this is going to be complicated. 2 assign someone on staff who really knows the technology, or hire a consultant, at least at the beginning. 3 Do your homework. read the documentation and understand the pieces and their interfaces. 4 be sure that your gear and their interfaces are certified by your vendors for the versions/releases that you have. 5 Consider upgrading your old storage gear when you go to storage virtualization. 6 Make sure you have thoughtful policies and procedures for maintenance and backups. 7 Guard against virtual sprawl of both storage and servers. 8 ask yourself: Do I really need this?

more efficient to more wasteful. It's just what can happen with virtual server sprawl." Preventing that is a matter of policies, procedures and good business practices, not technology, he says. Users agree that there are many technical details to master when pursuing storage virtualization. But Navarro suggests starting with a basic question: why am I doing this? "Virtualization is a hot word, a big thing. But is it really necessary? There are benefits, but ask yourself if you are doing it for the right reasons, or just because you want to be on the cutting edge. It's very easy to get swept up in these groupthink movements." CIO

Send feedback on this feature to editor@cio.in

REAL CIO WORLD | J U N E 1 , 2 0 0 9

43


IT Management

Storage needs are growing by the day and organizations – with movable media storage — are keen to make data available anytime, anywhere. Debunking these six myths will ensure that your organization’s in-transit data is secure. By Gary Anthes Every few months, there's another horror story about lost tapes or

stolen laptops, and we're left wondering if the information stored on the missing media will be put to some nefarious use, thereby adding personal injury to a public relations insult. The importance of protecting these media has become a no-brainer. But managers are often hampered in their efforts because they buy into one or more of the following six myths of movable media:

Myth 1

Tapes are obsolete The humble magnetic tape, a seeming relic of the mainframe and batch-processing era, has given way in some instances to disk-to-disk back ups to remote sites over networks. But for rapid and efficient back up, archiving and restoration of large quantities of data, there's no beating tape. Iron Mountain offers both data back up over a network connection and tape storage at its sites. "In a disaster scenario, when time is of the essence, there is nothing more efficient than putting a collection of tapes in a vehicle and driving it to a recovery site," says Ken Rubin, a senior vice 44

J U N E 1 , 2 0 0 9 | REAL CIO WORLD

Feature_6_IT Management.indd 44

Reader ROI:

Separating the facts from the myths of movable storage How technology can prevent data loss Why encryption is not the best technical solution

Vol/4 | ISSUE/14

6/1/2009 4:10:52 PM


IT Management CIO VIEW

Should security policies be a priority, given that they are difficult to enforce?

“It is the most important concern, especially for SMBs, because they focus on educating people in information security rather than technology itself. ” subhasish saha

“Creating policies addresses the issues of encryption and trusting people with tapes. Once there is a policy in place the tools and mechanisms to ensure data security will follow.”

CTO, Appejay Surrendra

s.V. konkar

VP-IT, Reliance Industries

president at the information protection and storage company. "And the bandwidth limitations on transporting terabytes or petabytes of data over the line make that impractical." Still, some users want to move on. "We are trying to get out of the tape business because of the threat of physical loss," says Christopher Leach, chief information security officer at Affiliated Computer Services. He says ACS is setting up a service to send encrypted data back ups to clients via a Web browser if the files aren't too big.

“CIOs over-estimate policies because they safeguard them from audit. But these policies should be effectively implemented, communicated and adhered to by users.” ajay kumar meher

VP- IT, Set India/ Sony Entertainment Network

team, a business conduct team, internal auditors, external auditors and an information protection law group — all working together," she says. Leach says keeping up with state and federal regulations on data protection and retention demands human expertise, but it's such a daunting task that he gets automated help via risk and compliance management software from Relational Security.

Myth 2

proTecTing Tapes anD lapTops is a job for Technical people

Myth 3

losing a Tape is primarily a securiTy problem

It can be a security disaster, to be sure, and it will certainly be a PR nightmare if the public finds out. But there are other equally harmful, if less dramatic, possibilities. The protection of information technology is, of course, a job for "I don't think so much about losing employee information [such IT. But there is a big and often overlooked role for others in the as social security numbers], although that is certainly important," organization as well. says Brian Lurie, IT vice president at medical products maker New York State CIO Melodie Mayberry-Stewart draws on a Stryker, "What keeps me up nights is the possibility of losing a 12-person legal team to research best security practices, especially tape and then having to produce data for the FDA for a lawsuit. I in the financial industry. Some of those people specialize in areas worry about liability to the company from losing information that such as encryption and telecommunications, she says. we, by law, must retain." In addition, she has a separate team of technologists who While the law requires that some information be kept for seven specialize in security and risk management. Mayberry-Stewart years, Stryker must retain data on customers who have Stryker says the lawyers negotiate "painstakingly detailed" contracts products in their bodies for as long as they and "memoranda of understanding on live, Lurie says. service levels" with companies such as Iron Although the company mirrors its disks Mountain that transport and store the state's Are They Listening? at a remote disaster recovery center, after a tapes — some 4,000 per month — from four certain amount of time, some data will exist mainframe datacenters. only on tape transported and stored remotely At Sun Microsystems, tapes are created at by Iron Mountain. seven datacenters around the world. While of indian cios say 75 to 99 Lurie periodically sends auditors to Iron each center manages its own data-retention percent of their users are Mountain's facility to inventory Stryker's processes, "they don't get to make up all their compliant with is policies. tapes. He says regular audits are part of a threeown rules," says Leslie Lambert, Sun's chief part tape-protection program that also includes information security officer. So where do the Source: CIO Research carefully crafted contracts and working with a rules, policies and procedures come from? reputable tape-storage vendor. "We have a very vigilant legal team, a privacy

39%

Vol/4 | ISSUE/14

Feature_6_IT Management.indd 45

REAL CIO WORLD | J U N E 1 , 2 0 0 9

45


IT Management

Procedures And conTroLs ThAT Are weLL ThoughT ouT, T Au T, A TomATed where PossibLe, and tested are the best way to limit losses from wayward tapes and laptops, experts say. but technology can be a big help.

Experts say thefts of tapes followed by illegal usage are so rare as to be almost a non-issue. Loss of tapes through simple human error, causing processing disruptions down the line, is by far the most common problem.

Myth 4

There are no Technology soluTions; iT's all abouT TighT conTrols Procedures and controls that are well thought out, automated where possible and tested are the best way to limit losses from wayward tapes and laptops, experts say. But technology can be a big help. The primary tool remains data encryption. While the technology doesn't address Lurie's concerns about lawsuits over unrecoverable data, it's nice to be able to tell lawyers, reporters and the police that the bad guys can't do much with that laptop because the hard disk is encrypted, or with those tapes because they are unreadable. All employee desktops and laptops at ACS are required to be "whole-disk encrypted," Leach says. "Once the disk is encrypted, we monitor it and track it, and if you try to decrypt your hard drive, we know it and we notify your manager." ACS has more than 1 million tapes at its tape library in Dallas, and its standard practice is to encrypt their content. But, Leach says, some clients don't want to incur the cost and effort of decrypting the back up tapes they receive from ACS, so they request that the content be kept in the clear. "For those tapes, we have very strict packaging, signing and tracking at every step, almost like a chain of custody in a legal case," he says. "Tapes go into turtle boxes that are locked and unlocked at each end." In addition, he says, "we insure them for a high amount, not because the tapes or CDs are worth a lot of money, but because that triggers tighter processes and closer scrutiny by the shipper." Users report that they are studying new technologies to supplement or substitute for encryption. The state of New York is looking at thumbprint scans to protect laptops and tape cases. And ACS is examining prototypes of three magnetic devices that will erase the contents of tapes inside a locked case if it is broken open. Iron Mountain says the best automated help of all may come from a tape inventory-control system to help eliminate the No. 1 cause of lost tapes — human error inside the company.

Myth 5

encrypTion is a silVer bulleT While encryption is often considered the best technical solution, it has drawbacks. For example, if you retrieve a tape but have lost the keys to decrypt it, you might be out of luck. Also, encrypting data before writing it to tape, a laptop hard drive or removable media can take copious computer resources. Finally, at many companies, encryption is optional or a requirement that can be circumvented. For these reasons, Stryker doesn't encrypt laptop hard drives unless there's sensitive data on them. Sensitive information that remote users may need stays on protected servers, where it is accessed only when needed and not retained locally. Lurie acknowledges that this isn't perfect because it requires voluntary user compliance. Lurie says his chores will be eased when Stryker moves to Windows Vista, because the operating system offers options for automatically encrypting data. "But it's a burden — you need additional memory, and it slows down the machine," he adds.

Myth 6

if you proTecT your Tapes anD lapTops, you can feel secure News stories have focused attention on lost tapes and laptops, but there are a number of other devices walking out your company's door every night. Lurie says mobile devices such as BlackBerries are protected at Stryker. "I have the ability to remotely wipe them out," he explains. "If lost, we send a signal to it immediately to clear the memory." But flash drives, CDs and DVDs are more problematic, he says. Lurie's solution: "If it's not encrypted, we just discourage the downloading of sensitive information to them." Lurie says he even worries about the humble cell phone. "We don't allow cameras in our building, but there are lots of people who have them on their phones," he says. "If someone takes a photo of someone or something and posts it on the Internet, we've got a potential liability. I'm not sure how to deal with that yet, but I've been giving it a lot of thought." CIO

Send feedback on this feature to editor@cio.in

46

J U N E 1 , 2 0 0 9 | REAL CIO WORLD

Vol/4 | ISSUE/14


EvEnT REPORT

Presenting Partner

Data that SeepS out

unnoticeD IT leaders discuss how security gaps can impact the stability of organizations — and the best practices to combat them.

L

urking hackers looking for security loopholes, an endless list of databases, and external and internal threats: enough and more reasons to give CIOs sleepless nights. With the slowdown inviting itself to the IT world and layoffs becoming the order of the day, organizations are more vulnerable than ever before to insider threat and external intrusion. But there are ways, and plenty of them, to contain security leaks. Some conscious IT leaders gathered to discuss how their organizations can be made immune to information leaks. Here are their thoughts.

threatS anD chaLLengeS Pointing at the threats that the current slump has brought, Satish Das, CISO & director (ERM), Cognizant, said that the hacker community is releasing bugs on the net and in the last few

Event Report_ WEBSENCE.indd 39

6/1/2009 6:08:34 PM


EvEnT REPORT

FROM LEFT: J. Vanchinathan, Head-MIS, ITC Infotech, ViJay Mehra, Group CIO, Essar Group, alok kuMar, VP & CIO, TCS, S. hariharan, Sr. VP-Infrastructure Solutions & Services Group, Oracle Financial, c.V.G. PraSad, CIO, ING Vysya Bank, n. GaJaPathy, Sr. VP-Technology & CIO-Asia, Aditya Birla Minacs.

months the number of attacks has increased significantly. Drawing attention to internal threats, n. nataraj, CIO, Hexaware Technologies, said, “networking outside the organization may prove to be a big threat as information sharing can happen and that may increase in the current economic condition.” Throwing some light on the IT needs of an organization staying on a growth path, despite the slowdown, Sudhir K. Reddy, CIO, MindTree, said, “The name of the game is innovation; users demand mobilization, and you have to innovate and keep your company secure.” n. Kailasnathan, vP, CIO & head business excellence & knowledge management, Titan Industries, said that an organization's culture is a bigger challenge. Through regular audits, he said, one will find that policies are not implemented properly. Anil Punjwani, head-IT, Philips Innovation Campus, said that apart from the easily detectable thefts there are others that are not immediately noticeable. For example, when data is copied, he said, its detection is impossible as the source remains intact. Subramanyam narayanan, head-infrastructure & security, Hindustan Unilever, said that an organization can have pockets from where data can leak, so he pointed out that it’s important to identify the points of seepage.

StartS at Zenith Giving a holistic view, Gene Hedges, CEO, Websense, said that security is a board-level issue and requires the involvement of the business. He said it’s a give-and-take policy where

Event Report_ WEBSENCE.indd 40

everyone learns as they go forward. He said companies must not be afraid to give information and take it. When someone violates policy — fire the person, he said, and the message will get across. He also pointed out that technologies have advanced in terms of identifying information coming in from outside and scanning it in real-time.

MoDuS operanDi

vijay Mehra, group CIO, Essar Group, put forth his approach to security when he said, “It's best to isolate anything that is confidential and has to be strictly secured because no measure can provide complete security.” Talking about the essentials of data security, C. Prakash Lohiya, DGM - IT, Tata Motors, said, “Even if prevention is not completely possible, deterrence has to be there 100 percent.” Discussing his approach to security Alok Kumar, vP & CIO, TCS, said, “We need to make our employees, partners and vendors realize that everything happening in the company is being monitored.” Tarun Pandey, vP-IT, Aditya Birla Financial Services, said, “One can’t prevent people from passing on information. To secure information you have to have a formal process which is continuously reviewed, while providing continuous education to users.” “Security is a board-level issue Manish Choksi, chief corporate and requires the involvement of strategy & CIO, Asian Paints, said, the business as well.” “We prefer to give access on a needto-know basis. For us, our users come Gene hedGeS, CEO, WEbsEnsE first and we sensitize them about the compulsions and the hierarchy of

6/1/2009 6:08:46 PM


EvEnT REPORT

accessing information.” C.v.G. Prasad, CIO, InG v vysya Bank, shared the approach his bank uses to secure sensitive data. “We rely on technology and policies to secure our data," he said. "But this alone won’t solve the problem completely. Organizations need to have strong sanitation techniques that render data useless to anybody outside the organization." Sandeep Phanasgaonkar, president & CTO, Reliance Capital, explained that in the financial sector, organizations need to share data with agents, intermediaries and third-party agents, so the whole ecosystem has to be disciplined to ensure the secure movement of data.

Sorting Data Arun Chatterjee, CISO, WnS Global Services, called attention to the importance of data classification and said, “Security of unstructured data is a troublesome issue, so unless information is classified, one can’t do much about security.” J. vanchinathan, head-MIS, ITC Infotech, said, “We make our employees aware of the policies, and try to know their requirements to access data and only then do we classify it.” He said that it’s important to identify all the elements you need to control. S. Hariharan, Sr. vP Infrastructure Solutions & Services Group, Oracle Financial, said, “While classifying data we realized that it was static and was to be made dynamic. We classified data such that it gave us an idea on what to focus on and where the risks lie. Apart from classification, we also rely on technology and automation to secure data.”

i.t.’S Bearing on Security Sharing his data classification experience, n. Gajapathy, Sr. vP technology & CIO Asia, Aditya Birla Minacs, said that a clash between IT and business users over restricted data usage was resolved by involving business leaders who helped sort the information required by different sets of users. Sharing the same opinion, Alok Kumar, Sr.

“Policies to secure data will not work unless they are linked to risk reward." SatiSh daS CISO & Director (ERM), Cognizant

“There is a very thin line between security and denial." alok kuMar Sr. VP-IT, Reliance Industries

“Security flows from the top, and going down it holds enterprisewide ownership." r.k. uPadhyay DGM-IT, BSNL

“To secure information one must have a formal process in place which is continuously reviewed." ttarun Pandey VP-IT, Aditya Birla Financial Services

FROM LEFT:arun chatterJee, CISO, WNS Global Services, nadeeM QuraiShi, CISO, Tata Motors , aJay kuMar Meher, VP-IT, Sony Entertainment Television India, c. PrakaSh lohiya, DGM-IT, Tata Motors, P.V. raMadaS, VP-Technology, HCL Technologies, Sharad dole, Practice Manager-Systems Integration & Networking, Tata Technologies.

Event Report_ WEBSENCE.indd 41

6/1/2009 6:09:00 PM


EvEnT REPORT

FROM LEFT: Jayanta lahiri, Head-IT, Accenture, n. kailaSnathan, VP, CIO & Head Business Excellence & Knowledge Management, Titan Industries, nandita J. MahaJan, Chief Privacy Officer & VP-Security & Compliance, IBM Daksh, SandeeP PhanaSGaonkar, President & CTO, Reliance Capital, n. nataraJ, CIO, Hexaware Technologies.

“Users demand mobilization, and you have to do that and keep your company secure." Sudhir k. reddy CIO, MindTree

“We need to sensitize users to the compulsions and the hierarchy of accessing information." ManiSh chokSi Chief-Corporate Strategy & CIO, Asian Paints

“There are various points at which data can leak and it's important to identify them." SubraManyaM narayanan Head Infrastructure & Security, Hindustan Unilever

“There are special kinds of thefts that go unnoticed, as the source remains intact." anil PunJwani Head-IT, Philips Innovation Campus

Event Report_ WEBSENCE.indd 42

vP-IT, Reliance Industries, said, "There's a very thin line between security and denial. Some IT guys may not know what to restrict. So, data categorization is something that has to happen at the top level.”Expressing his views on involving IT in data classification, R.K. Upadhyay, DGM-IT, BSnL, said, “Security flows from the top, and going down it holds enterprisewide ownership. We must follow the enterprise security framework rather than relying on an IT security framework." P.v. Ramadas, vP-technology, HCL Technologies, said that in order to make a system secure, traceability should be made strong. “There may be a lot of loopholes in a system and by removing some controls one can know who's done it,” he said. Ajay Kumar Meher, vP - IT, Sony Entertainment Television India, agreed with the idea and said, “Too much security can shackle a company. Some levels of risk should be acceptable.”

iMpLicationS of aMenDMentS to the it act 2000 Refering to the amendments, Sharad Dole, practice manager systems integration & networking, Tata Technologies, said, “Breaches in security are an act of terrorism, and I think it should be treated as such in order to practice any penalties attached to it, and I hope the recent amendments made in IT Act will take care of this.” Sharing his views on the same issue, nadeem Quraishi, CISO, Tata Motors said, “It’s the organization's responsibility to prevent a breach, and make its users understand the technology and the legalities attached to it. With the changes made in the IT Act, going forward, things are going to be different. So, we make sure our users know about the legalities and implications of the changes made in the IT Act and the privacy Act.” To this Chatterjee of WnS Global, added, “The amendments will make the world perceive Indian organizations as a more secure stake.”

6/1/2009 6:09:12 PM


Data explosion has taken organizations by surprise and with less and less storage space, I.T. leaders are

Il lustratio n by unn ikrishnan AV

contemplating eliminating unwanted data. But when exactly is the right time to purge? By Mary Brandel

Vol/4 | ISSUE/14

Feature_7_Compliance.indd 51

A funny thing happened

on East Carolina University's journey to creating a data-retention strategy. As part of a compliance project launched one and a half years ago, Brent Zimmer, systems specialist at the university, was working with attorneys and archivists to determine which data was most important to keep and for how long. But it soon became clear that it was just as important to identify which data should be thrown away. Zimmer was aware of the importance of being able to quickly produce required information during litigation, "but the thing we never thought about was keeping data too long," he says. The risk is

Reader ROI:

When is the right time to let go of data What you need during e-discovery The importance of content valuation

REAL CIO WORLD | J U N E 1 , 2 0 0 9

51

6/1/2009 4:12:07 PM


Compliance CIO VIEW

Who should drive an e-discovery program: IT or legal?

“IT should drive e-discovery because it understands systems well and can put in controls, traps and alerts to monitor it. Hence, it can also take care of the legal aspect.”

“It’s a shared responsibility. They should jointly develop a well-defined process to understand and manage e-discovery plans and policies.”

n. nataraj

Sr. VP & Head-IT, Reliance Gas Transportation Infrastructure

CIO, Hexaware

sudhir Bahuguna

“The overall responsibility lies with the business owners but the in-house legal department can be an excellent source for the CIO to build IT strategies around data access and retention.” shailesh Joshi

AVP-IT, Godrej Properties

(about Rs 6 crore) for five years of storage. And considering keeping data that you wouldn't otherwise be required to produce, but that many companies maintain multiple copies of data, thanks as long as it's discoverable, it could be used as evidence against you. to test data, operational data and disaster recovery copies, Like many organizations, East Carolina had its share of data to not to mention back ups, "there's an explosion of data in most purge. "We never made anyone throw away anything unless they companies," Merryman says. ran out of space on their quota," Zimmer says. Some users, he says, Aside from the costs, keeping all those records indefinitely is a had e-mail dating back to 1996. gold mine for attorneys looking for evidence, he adds. East Carolina is not unusual; many organizations hang on to more One way to address this problem is to set retention policies that data than they need, for much longer than they should, according reduce exposure to legal problems. But don't try to boil the ocean, to John Merryman, services director at GlassHouse Technologies, a Merryman advises. Instead, create policies from the application storage services provider. One reason is fear. "Companies are really or business level down, rather than looking across the whole data sensitive because there's a perceived underhandedness to purging landscape and letting policy bubble up. Also, create black-anddata," he says. "People might wonder, 'Why aren't you keeping all white rules that are easy to deal with. your records?'" For instance, roll all data types — such as e-mail, application and Another is the low cost of storage. Organizations have historically file data — into 10 to 30 categories of big-picture policies rather preferred to buy more disks than spend time and resources sorting than hundreds of granular ones. "You need broader rules like through what they do and don't need. "Many people would prefer 'accounting data needs to be retained six years,' not 'this annual to throw technology at the problem than address it at a business report needs to be retained [for] five years,'" he says. level by making changes in policies and processes," says Kevin According to research from Enterprise Strategy Group, the Beaver, founder of Principle Logic. average required retention period for files, e-mails and databases But thanks to e-discovery risk and burgeoning data volumes — is on the rise. Most companies retain data for four to 10 years, says 20 percent to 50 percent compound annual growth rate for some Brian Babineau, a senior analyst at ESG. companies — the tide is starting to turn, according to Merryman. East Carolina University started with the low-hanging fruit, The average cost companies incur for electronic data discovery setting retention and purging policies for ranges from $1 million to $3 million (about e-mail, medical records and security video. Rs 5 crore to Rs 15 crore) per terabyte of data, It archived that data on a new system based according to GlassHouse. on Symantec's Enterprise Vault storage A recent report from Gartner concurs. It management software and EMC's Centera states that the current explosion of data is the average cost content-addressed storage (CAS) array. outpacing the decline in storage prices, even companies incur for E-mails from the chancellor or dean are before the resource costs for maintaining data e-discovery per terabyte saved for seven years, Zimmer says, while are taken into account. 
of data is between rs 5 faculty and staff e-mail gets purged after Estimating that the average employee crore and rs 15 crore. three years. might generate 10GB per year, at a cost of Meanwhile, security video is archived for $5 (about Rs 250) per gigabyte to back it Source: GlassHouse Technologies 30 days — a good thing, since university police up, Gartner says a 5,000-worker company collect a terabyte per day. Patient records from would face annual costs of $1.25 million

What’s at Stake?

52

J U N E 1 , 2 0 0 9 | REAL CIO WORLD

Vol/4 | Issu E/14


Compliance the medical school need to be kept for 20 years after the patient is deceased. Beyond that, the job will get more difficult, Zimmer acknowledges. "There's a lot of other stuff that we don't know the retention [requirements] for, so that will be trickier," he says. The key to reducing data volumes, Gartner says, is a process called content valuation, which involves examining factors such as authorship authority, usage patterns, nature of content and business purpose. According to Gartner, there are many ways to approach content valuation, including electronic records management, content management, enterprise search to identify what's a record and what's not, legal preservation software and policy management.

archiving on the rise Partly because of increased data retention activity, companies are increasingly implementing disk-based archiving tiers in their storage architectures. This is a better place to retain data than tape back up systems, Babineau says, because the data is indexed, searchable and stored in single-instance format, all of which makes it easier to find what you need during e-discovery. According to Robert Stevenson, managing director of storage research at The InfoPro in New York, archiving tiers have seen a 54 percent annual growth rate among users surveyed vs. 20 percent for tier 1 monolithic storage and 40 percent growth for tier 2 modular storage. In the past three years, e-mail archiving has grown, with 48 percent of survey respondents saying they use it today vs. 39 percent two-and-a-half years ago. Database archiving is also up, with 36 percent using it vs. 21 percent two-and-a-half years ago. Another reason for archiving growth is that companies are relying less on back up tapes for retention and more on disk-based storage. "Discovery is a difficult task, and if you have multiple copies in the back up environment, it's extremely expensive to retrieve, index, search and take it through the pre-production process of culling and narrowing down results," Merryman says. "It can turn discovery into a multi-million-dollar project."

the urge to purge The seemingly simplest way to reduce data volumes is to delete the data you don't need. But this is much more easily said than done. The fact is, according to Merryman, outside of e-mail, the status quo is to do nothing. "Most legacy applications have never purged data, and new applications are rarely designed to accommodate purging," he says. Not to mention, he says, deleting production data is complicated. In addition, the issues associated with legal, compliance and operational risks are often ambiguous, and few organizations have a process to accommodate a web of requirements for data retention. "If you look at legacy data outside the application world, a lot of people have no idea what it is, but they're scared of getting rid of it," he says. At one large bank in New York, Merryman says, he ran across hundreds of file extensions that no one knew about, as well as data inaccessible by currently maintained applications or interfaces.

Vol/4 | Issu E/14

Feature_7_Compliance.indd 53

The important thing is to start setting purging policies now rather than trying to apply them to old data. "If you address high-risk, high-volume applications and databases, you'll address 90 percent of the risk," he says. "If you target all 700 applications in your environment, you'll never get it done." In fact, in a tiered storage environment, Merryman says, the business case is much better when you purge data rather than simply archiving it on lower cost disk. "The cost of perpetually managing and refreshing huge amounts of data that's never been culled or purged is extremely high," he says. "So if you come up with a strategy to tier 70 percent of your data to cheap storage, and then you factor in the cost of managing, backing up and protecting it for disaster recovery, it's expensive."

The issues associated with legal and compliance riSkS are ofTen ambiguouS, and few organizations have a process to accommodate a web of requirements for data retention.

Unfortunately, he says, most companies that develop tiering strategies figure they'll purge at some time in the future. "But that's the problem with purge," he says. "It's always 'later,' like cleaning out the basement." Another difficulty with purging is the lack of a guarantee that you've deleted all instances of the data set. You might think you deleted all your old e-mail, but it may be stored on tape from two years ago, so it still exists. "Some companies figure if you can't delete it consistently, don't delete it at all because it's probably somewhere that no one knows about," Babineau says. Still, he says, "if you invest in technology that helps you retain data, why not invest in technology that helps expire data when you don't need it anymore?" For instance, all archiving systems have a 'delete' function, Merryman says, but no single product can purge data across all data types, such as messaging, unstructured and structured data. Merryman's advice: First identify vendors with proven technologies, and then look at emerging vendors. Second, he says, see if the vendors support or plan to support SNIA Archiving Standards being developed by the 100-Year Archive Task Force. "This body of standards is young," he says, "but it's the only industrywide effort to standardize archiving methods." CIO

Send feedback on this feature to editor@cio.in

REAL CIO WORLD | J U N E 1 , 2 0 0 9

53


EVEnT REPORT

Presenting Partner

Trim CosT and

ProsPer IT leaders share their experiences and concerns over tactfully working around resources to get the maximum out of what's available.

W

ith the downturn settling in, it's time managements turn to technology and save the day by adapting to today’s new reality and fueling business in the rebound that will follow. CIOs discussed how the slowdown has affected their company's strategic plans and driven them towards cost-effective and flexible solutions that help them manage downturn and still stay ready for recovery. Pointing at the change that the recession has brought, Arun Gupta, customer care associate & CTO, solutions & technology team, Shopper's Stop, said that now the retail giant is opting for short-term projects that promise quick ROI. Sharing his positive approach towards the downturn, Gopal Rangaraj, VP-IT, Reliance Life Sciences, said, "We are being extra cautious with investments in the slowdown, we are

convinced that each rupee that we are investing now is going to pay back." Shashi Kumar Ravulapaty, CTO, Reliance Consumer Finance, said, “We don’t refrain from taking up long-term projects but this doesn’t mean we will start spending right away. We would rather stagger the entire spending plan. This way there's no dearth for funds and process efficiency.” nirmalendu Jajodia, chief, technology & operations, nCDEX Spot Exchange said, “I don’t think we are trying to cut any costs right now. What we are trying to do is get more value by doing things that a normal manager would do any day. But the major cost issue lies with the linear cost model that big software vendors propose and this has come up as a challenge as the license cost is an issue. Provided that almost everything is negotiable in the current times the linearity of cost for software is not

justified. “It is not all about cost cutting... it is about how to derive maximum value from all the initiatives we have put to accomplish and support organizational goals in a cost effective way,” said V. Subramaniam, CIO, Otis Elevator Company. Speaking about IT-business alignment, Sunil Mehta, Sr. VP & area systems director (Central Asia), JWT, said CIOs should not look at a process from a technology point-of-view but, “start thinking technocommercially rather than technically." “Innovation is a continuous process and no matter what happens it cannot stop,” said n. Muralidharan, CEO, nSE Infotech Services. He said that despite the slowdown, innovation continues and the project rollouts cannot be stopped, all one can do is induce speed, quality and reliability in operations to keep things under control. Regarding his experience with Open

FROM LEFT: ARun GupTA, Customer Care Associate & CTO, Solutions & Technology Team, Shopper's Stop, GOpAL RAnGARAj, VP-IT, Reliance Life Sciences, ShAShi KuMAR RAvu A LApATy, CTO, Reliance Consumer Finance, SuniL MEhTA, Sr. VP & Area Systems Director (Central Asia), JWT, SuMiT DuTTA ChOwDhuRy, CIO, Reliance Avu Communications, pRADEEp KALRA, EVP & CTO, Yes Bank, n. MuRALiDhARAn, CEO, NSE Infotech Services

Event Report_RED HAT.indd 50

6/1/2009 5:14:12 PM


EVEnT REPORT

Source use in his organization, Muralidhran said "My risk management system is running on Linux, which is handling millions of transactions and the good thing about this is that you can have a complete control of what you want to do, once you get on to it. Having the same opinion, Rajesh Doshi, director-IT, national Securities Depository, said that it is fairly secure to work on Linux and it makes them feel confident. Another reason to adopt Open Source is because it is cost-effective and its variable cost is low. Sharing his experience of using Open Source, Jajodia of nCDEX Spot Exchange, said "The driver for us to adopt Open Source was that you get to choose what you want. In an Open Source kind of environment you get the chance to mix and match the elements of your stack." Giving his stance on outsourcing today, Pradeep Kalra, EVP & CTO, Yes Bank, said, “The outsourcing vendors do the leg work and we concentrate on building a strategy and designing the way forward.” Sharing the same view, Shirish Gariba, CIO, Elbee Express, said, “It gives us enough time to

FROM LEFT: BihAG LALAji, VP-IT, Ambuja Cements, RAjESh DOShi,Director-IT, National Securities, Depository, niRMALEnDu jAjODiA, Chief, Technology & Operations, NCDEX Spot Exchange, v. SuBRAMAniAM, CIO, Otis Elevator Company, ShiRiSh GARiBA, CIO, Elbee Express

think about the business and help internal and external customers, and the more time we get to support our customers, the more we are making our organization scalable and efficient.” Sumit Dutta Chowdhury, CIO, Reliance Communications, shared the benefits of rolling out Open Source to his customers. He said, “Removing all the applications that we don’t need, would reduce maintenance hassles and the overall cost of operations and risk.” Regarding this, Bihag Lalaji, VP-IT, Ambuja Cements, said, “While using

Open Source, we had a help desk to assist us in all the problems. We trained our users and they learnt to manage and use it, so it worked out pretty well for us.” Doshi of national Securities Depository, said, “We largely use Open Source in our organization and this gives us multiple project advantage, so we try to bring projects on a common platform and leverage them for the peak time.” Jajodia of nCDEX Spot Exchange, said, “Open Source comes at a cost in terms of developing your own capability and managing your standard stack.”

Bring innovation to Business open source software presents an opportunity to escape sunken investments be great if you have driven the cost to the lowest it can go. The idea People started using Open Source for one simple reason and of linearity is that of perfect scalability. The ability to provision the that was when they realized that they were running servers that way you want is a great idea. Linearity means high predictability were way too expensive and if they switch to software that is and it is one of the biggest advantages that Open Source can much cheaper, then they could actually save. offer. Another advantage of using Open Source software is that it Joel presented a case where, some upcoming projects in Intel presents an opportunity to escape sunken investments. Open required them to procure more hardware which would need more Source software lets one jump to new technologies without space to accommodate it. Intel figured out if they could cut down having to deal with the sunk cost. Migrating is not an issue as the manpower to do the new job, they could delay or defer buying one doesn’t have to pay much for the licenses and can cancel more space to accommodate the hardware. They started the subscriptions whenever one wants. You just pay for using blades which weren’t cost effective then and tried ongoing subscriptions, updates, support, documentation to save power by shifting jobs on blades and managing and integration with vendors. the frequency and all this boils down to operating system. Joel Berman, Sr. director, global field marketing, Linux helped them break that curve and save. Red Hat, explained that integration of software with Open Source brings innovation to business, whether hardware turns out to be much easier with Linux as it one uses that innovation to save costs for itself in the offers support to a large choice of network adaptors, jOEL BERMAn thus making it more flexible. Also, most people have an Sr. Director, Global Field period of recession or develop innovative products to beat Marketing, Red Hat its competition. opinion that linear cost is a bad thing. But linearity can

Event Report_RED HAT.indd 51

6/1/2009 5:14:22 PM


Essential

technology Illustration by MM Shanith

From Inception to Implementation — I.T. That Matters

Solid-state disks, especially those used in the military, allow for fast erases and could be useful to secure laptops — but they are also harder to restore.

56

Essentisl Tec.indd 56

J U N E 1 , 2 0 0 9 | REAL CIO WORLD

By Robert L. Mitchell

| As the pilot ejects inside enemy territory, the fighter jet triggers an automatic data-destruction sequence. Within 15 seconds, the highly classified mission data on the solid-state disk has been wiped out. The storage device in this scenario didn't just burn up like the voice recorder in Mission Impossible. Instead, the system's manufacturers simply took advantage of a key property of the flash memory chips that make up solid-state disks: Data can be erased much more quickly and thoroughly than it can with a magnetic, spinning hard disk. Solid-state disks, or SSDs, don't require six or seven passes to erase all traces of the bits on every track and sector. Once the bits have been reset in every flash memory cell, that data is gone forever, although meeting the most stringent government disk-sanitization requirements may still involve two or more passes. The process is quick and efficient. "You're talking about seconds," says Gary Drossel, vice president of marketing at SiliconSystems, a manufacturer of SSDs used in government systems. With a typical hard disk, just the process of getting every block on a drive of that size to spin under the read/write head would take almost an hour and

hardware

Vol/4 | ISSUE/14

6/1/2009 4:18:01 PM


essential technology

a half, and the entire process could take three to four hours on a fast eSATA drive, according to experts at Texas Memory Systems and Kroll Ontrack.

How Clean is Clean? Hard disks can leave behind a magnetic residue that, theoretically — because of something called the hysteresis effect — could be used to reconstruct the hard disk data. Whether in fact that's practical with today's disk drives is a matter of debate. But does this same potential weakness apply to solid-state drives? No, says Jamon Bowen, enterprise architect at enterprise

can be missed during the process. SSDs have extra memory cells beyond what is allocated to the file system. These are used by a ‘wear-leveling’ feature that distributes data across that larger area to extend the life of the SSD. Those cells may be swapped in and out of the area used by the file system. "Just using software that was designed for hard disk drives to overwrite data is not a viable data-destruction solution for flash-enabled storage," says Barry. Any secure erase that meets government disk-sanitization requirements must be executed at the controller level.

Manufacturers hope that SSD’s‘fast erase’features will help the technology catch on.For example,computers with sensitive data need to be scrubbed before they can be disposed of or taken out of service for maintenance. SSD maker Texas Memory Systems. If normal disposal processes are followed, your SSD data will be irretrievably erased, he says. "With NAND [flash], you're storing a full amount of electrons on a floating gate, so there's no real way of telling what the value of that transistor used to be. Once you fully erase the drive, there is no ability to re-create the data," says Drossel. "To my knowledge, a re-programmed cell does not contain any previous data," says Sean Barry, senior data recovery engineer at Kroll Ontrack. But not everyone agrees. BitMicro Networks claims that a hysteresis effect can also affect SSDs and has created a hardwarelevel secure-erasure function to deal with that. For all practical purposes, however, if data-recovery firms can't get the data back after a single secure-erase pass, most non-government users needn't worry about it. Assuming, that is, that the erasure pass got everything. Unfortunately, some cells

Vol/4 | ISSUE/14

Essentisl Tec.indd 57

For government users, disk sanitization is about policy as much as it is about technology. While some government standards call for six passes, SSDs really only require two, says Marius Tudor, director of business development at BitMicro. Other vendors say one is sufficient. No matter: government standards for SSDs are still based on what it takes to sanitize a hard disk drive, so four or five extra passes may be required just to satisfy the specification, Tudor says.

Instant Erasure While ‘fast erase’ features are available today for military use, SSD manufacturers hope that the technologies will catch on for business applications such as backend SSD storage and executive laptops. For example, computers containing sensitive data need to be scrubbed before they can be disposed of or taken out of service for maintenance. "With SSD, you can do that very quickly with little power,"

60% The rate at

which the price of solidstate disks have been falling year over year. Source: In-Stat

says Patrick Wilkison, vice president of marketing and business development at STEC. While SSDs can typically be erased more quickly than magnetic media, the devices designed to meet government standards have been optimized to further speed up erasure. "We've created internal circuitry so that the host can send one command — either in software or a push button — and the drive will erase multiple chips in parallel," says Drossel. For example, it takes about 15 seconds to clear all of the chips on a 16GB SSD, he says. Vendors have also created other schemes to meet government security requirements. BitMicro Networks offers a removable SSD with backup power that allows it to be erased up to six hours after removal from the host system.

Magic Bullet for Laptop Security? Could so-called fast-erase technology someday appear in notebooks? Intel's Anti-Theft PC Protection already provides hardware-level protection that disables a computer when certain conditions are met, such as after a series of failed password attempts. By combining that with a third-party service such as REAL CIO WORLD | J U N E 1 , 2 0 0 9

57

6/1/2009 4:18:01 PM


essential technology

Absolute Software's CompuTrace, some Lenovo laptops can be reported as stolen and then remotely disabled the next time the computer connects to the Internet. So why not allow secure erasure as well by adding rapid-erase or data hardwaredestruct features used in SSDs built for secure government applications? "If your laptop went missing, you could take it to any Internet hot spot to report it, and it would disable that at the first point when it was connected to the Internet," says Jim Handy, an analyst at semiconductor market research firm Objective Analysis. But that's unlikely to appear in commercial applications, according to Intel. A spokesperson says vendors would probably add full disk encryption to protect the drive itself, since that renders the data inaccessible, even if the SSD is moved to another computer. In contrast, SiliconSystems' fast-erase

sensitive customer or sales data. "The data may be gone, but at least it's not in the wrong hands," Drossel says.

More Costly Recovery The flip side of the level of security SSDs offer is the fact that recovering data from them can be more difficult and expensive than for other media. Each SSD vendor has its own proprietary method for mapping data from the file system to individual memory cells. "If you don't have the mapping table that records where everything is kept, you have random data distributed throughout the chips," says Bowen. "Everyone follows their own data-placement schemes. Without knowing the details of that, it would be next to impossible to piece all of that together." That may be true for a hacker, but not for data recovery specialists, who can pull data even when an SSD has sustained

The flip side of the level of security SSDs offer is that recovering data from them can be more difficult and expensive than for other media in the event of a physical failure,such as a broken circuit. feature requires power, but disconnecting the drive won't kill the process: Erasure continues the instant that power is restored. "There's no way to stop it," says Handy. The technology can be applied to the whole drive or a pre-configured secure zone on the SSD that's also protected by a password. SiliconSystems also offers an SSD self-destruct feature that applies an ‘overvoltage’ to each of the flash chips, physically destroying them. The destruction can be triggered via software or a physical switch, says Drossel. SSDs can also be designed to self-destruct or erase if they are stolen and inserted into any unauthorized machine. In the private sector, rapid-erasure techniques could be used in point-ofsale systems or kiosks that might contain 58

Essentisl Tec.indd 58

J U N E 1 , 2 0 0 9 | REAL CIO WORLD

physical damage. "Kroll Ontrack has developed methods to recover data without the controller chip available," says Barry. "We've been successful in discovering a number of data layouts for different manufacturers." Another drawback is that data on SSDs can be far more costly to recover in the event of a physical failure, such as a broken circuit. "When an SSD becomes damaged, it's more difficult to get the data off the raw chips. We've had jobs go as long as three or four months," Barry says. Costs go up if the data is needed quickly and additional staffers are assigned to the project. "That jumps up the service level," he says, "and they pay accordingly." In the future, changes in how the Windows file system interacts with

SSDs may improve both security and performance for end users. Today, when a user deletes a file on a Windows computer, the file system removes the pointers to the locations on disk where the data that makes up the file resides, but the data itself remains until the space is allocated to another file. Only then is it overwritten. That's why erased files can be recovered on hard disk drives, and SSDs operating under Windows are no different. But that may be about to change. Writing data to an SSD is a two-step process. Every flash-memory cell must be erased first, before the file system can write to it again, and that slows down write performance. To remedy that, Intel is working with Microsoft. to come up with a way to erase the cells associated with a deleted file in the background, as the system has processor cycles available. That will improve the performance for subsequent writes, but it has security benefits as well, since the data associated with deleted files will be overwritten sooner, says Troy Winslow, director of marketing at Intel's NAND Solutions Group.

Encryption: SSD's Missing Link
As with hard disk drives, hardware-based full disk encryption hasn't gained much traction with SSDs. Most manufacturers don't offer it yet, although several, including Intel, say they plan to at some point in the future. That may seem a bit strange, since the government market is focused so tightly on security. Indeed, the need for fast-erase and self-destruct schemes would diminish with the introduction of full disk encryption using strong algorithms such as 256-bit AES. "If the disk does have encryption and the person who has stolen the disk doesn't have the key, then it might as well have been erased," says Handy. Samsung Electronics has taken the lead in this area, with the introduction of a 128-bit AES option for its latest SSD offerings. In this scheme, the SSD controller encrypts the key, which is stored in the flash chips.
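Handy's point is essentially a key-wrapping argument. As a rough illustration, the sketch below uses Python's third-party cryptography package; the flow and names are assumptions made for the example, not Samsung's or any controller vendor's actual design. Bulk data is encrypted under a random media key, and only a wrapped copy of that key, sealed under a password-derived key, ever sits in flash; without the password, the stored bytes are unreadable.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# Hypothetical flow, loosely modeled on how a self-encrypting drive might work.
media_key = AESGCM.generate_key(bit_length=256)   # encrypts all user data
salt = os.urandom(16)

def _kek(password: bytes) -> bytes:
    # Key-encryption key derived from the user's password.
    return PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                      salt=salt, iterations=200_000).derive(password)

def wrap_key(password: bytes) -> bytes:
    """Seal the media key; only this wrapped blob is stored in flash."""
    nonce = os.urandom(12)
    return nonce + AESGCM(_kek(password)).encrypt(nonce, media_key, None)

def unwrap_key(password: bytes, blob: bytes) -> bytes:
    return AESGCM(_kek(password)).decrypt(blob[:12], blob[12:], None)

# What actually lands in the flash chips: ciphertext plus the wrapped key.
nonce = os.urandom(12)
stored_data = nonce + AESGCM(media_key).encrypt(nonce, b"payroll records", None)
stored_wrapped_key = wrap_key(b"correct horse battery staple")

# With the password, the drive unwraps the media key and decrypts normally.
key = unwrap_key(b"correct horse battery staple", stored_wrapped_key)
print(AESGCM(key).decrypt(stored_data[:12], stored_data[12:], None))
# Without it, the stored bytes "might as well have been erased".
```

One design consequence of this layout: changing the password only means re-wrapping the 32-byte media key, and destroying that key is enough to render the whole drive unreadable.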

One reason more manufacturers don't offer full disk encryption on their SSDs is that, while the encryption algorithms themselves are standardized, there are no standard implementation methodologies for full disk encryption, so each vendor must roll its own proprietary solution. That adds to the cost. Drossel estimates that encryption would add 10 percent to the cost of the company's SSD products today. The standards situation could change with the rollout of the Trusted Computing Group's Opal specification, which was finalized in January. However, that specification may not be stringent enough to meet the requirements of government users, the customer base most interested in security, says Matt Bryson, an analyst at Avian Securities.

Another issue is that most original equipment manufacturers (vendors that integrate SSDs into their systems) don't take advantage of the feature even when it's offered to them. "I don't know any large OEMs who have implemented SSDs with encryption. The functionality is there, but nobody is using it yet," says Wilkison. That's because, outside of government, most users aren't demanding it yet. "It's still a limited interest. People would like to have it, but they're not willing to pay for it," says Tudor. Wilkison says that in the enterprise-class market, full disk encryption isn't always a requirement, even when sensitive data is involved. "Hardware-level encryption may be unnecessary because data is being encrypted upstream," he says.

On the end-user side, most people simply don't care about full disk encryption, whether on hard disk drives or SSDs. "On the consumer side, it's about reliability and ruggedness. On the military side, they want to encrypt data or destroy it immediately. There's no middle ground," says Bowen.


SSDs: Whirring Down
Do all solid-state disks (SSDs) slow down with use over time? The answer is yes — and every drive manufacturer knows it. One thing you can be sure of is that the shiny new SSD you just bought isn't likely to continue performing at the same level it did when you first pulled it out of the box.

That's important to know, given the speed with which SSDs have proliferated in the marketplace amid claims that they're faster, use less power and can be more reliable — especially in laptops — since there are no moving parts. They also remain more expensive than their spinning-disk hard drive counterparts.

Here's the SSD rub: drive performance and longevity are inherently connected, meaning drive manufacturers work to come up with the best balance between blazing speed and endurance. And since SSDs are fairly new to the market, users are finding that while they do offer better speed in some ways than hard disk drives, questions remain about how much of that speed they deliver for the long haul.

"An empty [SSD] drive will perform better than one written to. We all know that," said Alvin Cox, co-chairman of the Joint Electron Device Engineering Council's (JEDEC) JC-64.8 subcommittee for SSDs, which expects to publish standards this year for measuring drive endurance. Cox, a senior staff engineer at Seagate, said a quality SSD should last between five and 10 years.

The good news is that after an initial dip in performance, SSDs tend to level off, according to Eden Kim, chairman of the Solid State Storage Initiative's Consumer SSD Market Development Task Force. Even if they do drop in performance over time — undercutting a manufacturer's claims — consumer flash drives are still vastly faster than traditional hard drives, because they can perform two to five times the input/output operations (I/Os) per second of a hard drive, he said.

— By Lucas Mearian

Eventually, full disk encryption may be offered with all SSDs, but it won't sell at a big premium, Wilkison predicts. Rather, as competition intensifies, manufacturers will add it as just another differentiating feature. Don't expect it across the board any time soon, though: while some vendors say the first implementations will arrive in 2009, others don't expect to see systems with SSDs that offer full disk encryption until 2010 at the earliest.

Shredding the Evidence
While a fully overwritten drive is unrecoverable, the best way to ensure complete data destruction when disposing of SSDs is to physically destroy them, says Barry. "If you ran that through a grinder and completely chopped that up in quarter-inch chunks ... that is by far the best way to make sure the device is unrecoverable." But every flash chip must be destroyed, and existing shredders may not be up to the job. "Shredders for disk drives might not be adequate for SSDs because the chips are so much smaller [than disk drive platters]," says Bowen. SSDs carry arrays of tiny flash chips, anywhere from eight to 30 per device, and any chips missed by the shredder would still be readable by data-recovery specialists such as Barry. CIO

Send feedback on this feature to editor@cio.in



Pundit


Data security is a zero-sum challenge — which is why there's no such thing as a short-term win. CIOs need to create an overall strategy to persevere over the long term.
By Jim Damoulakis

Security | How much progress is really being made in securing storage? For several years now, pundits have sounded the alarm about a range of security risks associated with storage, everything from a lack of fundamental network security practices for SANs to the ever-familiar problems of handling off-site media. Regarding the latter, hardly a week goes by without some organization reporting the loss or theft of laptops or tapes containing confidential information. Yet, aside from those corporate victims that have been forced to make improvements, the state of storage security has been advancing very slowly.

Furthermore, many so-called storage security initiatives would be more accurately labeled off-site tape security initiatives. In other words, the focus isn't on a strategic approach to securing the overall storage infrastructure, but on the pain point du jour — in this case, the desire to avoid being the next organization to make headlines for the wrong reason. Certainly, the desire to close this particular security hole is understandable, but without an overall game plan, there is a strong likelihood that efforts will be duplicated and other risks overlooked.

A widely reported study from the Identity Theft Resource Center found a 47 percent increase in data breaches in 2008 compared with 2007. Of these breaches, 20.7 percent involved 'data on the move' — on laptops or tapes, for example. However, twice as many incidents (41 percent) occurred through a combination of hacking, insider theft and subcontractor breaches. Yet even the narrower goal of securing off-site media hasn't been successfully addressed. Consider, for example, the lack of wide-scale adoption of encryption: only 2.4 percent of the lost media in the study was encrypted. Why? In the case of tape, it's not because of a lack of awareness or a misunderstanding of the problem — that's painfully obvious. Nor is it because of a lack of technology; encryption products for every level can be obtained from mainstream vendors. It's easy to point to the challenges of key management as the primary roadblock to more widespread adoption of media encryption, and this is certainly a contributing cause. However, the problems of key management point to a larger issue: the lack of a comprehensive security strategy that truly encompasses storage. As long as storage sits at the periphery of organizations' security focus, there will continue to be risks, and obstacles to addressing those risks.

What's required is an understanding that different entities within an enterprise access, manage, control and own responsibility for data. An effective strategy considers the security needs of all of these constituents. A strategic approach to storage security would not only weigh additional risks beyond off-site media encryption, but would also consider which data needs to be encrypted and at what level. Perhaps if data is encrypted at the application level to protect against unauthorized access, it might not need to be re-encrypted at the tape level. And if a centralized key-management function, with associated policies and processes, were instituted to manage all data-security access, the prospect of off-site tape encryption wouldn't be as daunting.
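As a sketch of what "encrypt once, at the right layer" might look like, the Python example below uses the cryptography package; the key-service interface is invented for illustration and is not any particular product's API. The application encrypts a sensitive field before it is stored, so the same ciphertext can flow to disk, replica or off-site tape without a second encryption pass, while a central registry owns the keys and the policy around them.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ToyKeyService:
    """Stand-in for a centralized key-management function with its own policies."""
    def __init__(self):
        self._keys = {}

    def key_for(self, key_id: str) -> bytes:
        # Policy checks (who may use which key, rotation, audit) would live here.
        return self._keys.setdefault(key_id, AESGCM.generate_key(bit_length=256))

kms = ToyKeyService()

def encrypt_field(key_id: str, plaintext: bytes) -> bytes:
    # Application-level encryption: done once, before the data leaves the app.
    nonce = os.urandom(12)
    sealed = AESGCM(kms.key_for(key_id)).encrypt(nonce, plaintext, key_id.encode())
    return key_id.encode() + b"|" + nonce + sealed

def decrypt_field(blob: bytes) -> bytes:
    key_id, rest = blob.split(b"|", 1)
    nonce, sealed = rest[:12], rest[12:]
    return AESGCM(kms.key_for(key_id.decode())).decrypt(nonce, sealed, key_id)

# Database, backup and off-site tape all receive the same ciphertext record,
# so there is no separate tape-level encryption pass to manage.
record = encrypt_field("customer-pii-2009", b"card=4111111111111111")
print(decrypt_field(record))
```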

Given the current economy, it's improbable that many organizations will undertake this type of program in the near future. However, it's important to begin bridging the gap between storage and security and to build a rational framework on which to improve incrementally. Otherwise, the breach tally is certain to climb even higher in 2009. CIO

James Damoulakis is CTO at GlassHouse Technologies. Send feedback on this column to editor@cio.in


