MOBILE BACKUP • QUALITY AWARDS: ENTERPRISE ARRAYS
Managing the information that drives the enterprise
STORAGE Vol. 10 No. 1 March 2011
The best storage for virtual desktops
There’s a lot more to know about storage for virtual desktop infrastructures than boot storms and linked clones.

ALSO INSIDE
Networks: What’s old is new again
Storage virtualization doesn’t have to be so hard
Time to take the cloud seriously
Want more capacity … or performance?
Striving for storage efficiency
STORAGE inside | March 2011
For networks, what’s old is what’s new
EDITORIAL As Ethernet continues to navigate its roadmap on the way to 100 Gbps, it looks like it might take over all networking chores in the data center. by RICH CASTAGNA

Three vendors still stalking storage virtualization
STORWARS Are you still confused about server virtualization? There are some good alternatives—with solid benefits—to virtualizing storage systems. by TONY ASARO

Managing storage for virtual desktops
Watch out for boot storms! Configuring storage for virtual desktop infrastructures can be tricky, but there are creative ways to help ensure that your virtual desktops get the performance they require. by ERIC SIEBERT

Backup for remote and mobile devices
Backing up remote sites and mobile devices has always been a challenge. But with a workforce that’s getting more mobile, it’s time to get a handle on remote backups. by W. CURTIS PRESTON

Quality Awards VI: NetApp tops enterprise arrays field again
NetApp once again leads a very strong field of enterprise array vendors that racked up some of the highest scores we’ve seen to date. by RICH CASTAGNA

Cloud projects climbing IT priority lists
HOT SPOTS A recent ESG survey indicates that investments in cloud services and infrastructure will increase in 2011, meaning the much-hyped technology may start to hit its stride in the real world. by TERRI MCCLURE

Let’s focus on storage performance in 2011
READ/WRITE For the last few years we’ve been preoccupied with storage capacity and astronomical data growth rates. In the process, we’ve overlooked storage performance. by JEFF BOLES

Still a struggle to achieve storage efficiency
SNAPSHOT Our latest survey finds most respondents feel their companies use storage pretty efficiently, but disk capacity is still wasted because administrators don’t have the right tools. by RICH CASTAGNA

From our sponsors
Useful links from our advertisers.
Illustration by Enrico Varrasso
editorial | rich castagna
For networks, what’s old is what’s new
As Ethernet continues to navigate its roadmap on the way to 100 Gbps, it looks like it might take over all networking chores in the data center.
THE NETWORK IS still the Rodney Dangerfield of enterprise storage infrastructure—it just can’t seem to get the respect it needs, and sometimes doesn’t even get the attention it deserves. Sure, you care about the network when it gets too congested to carry data around efficiently or when you run out of places to plug things in, but for the most part it’s just there.

But a couple of developments in the last few years have generated more interest in the neglected network. Fibre Channel over Ethernet (FCoE), touted as the great storage uniter, offers a way to knit together otherwise disparate networks, making Fibre Channel and Ethernet networks function as one (at least in theory).

FC has been the de facto standard connective tissue for enterprise-class storage environments, and with its storage-centric protocol design it’s generally accepted as the performance leader. But Ethernet’s stake in the storage shop has grown significantly over the last five or so years as NAS systems have proliferated in response to the ever-rising tide of file storage. Ethernet’s reach has also been extended as iSCSI continues to gain a share of the block storage market. iSCSI didn’t exactly take the data center by storm, but based on our research, its incremental growth puts it in more than 40% of the country’s data storage shops.

That makes Ethernet-based storage a player for both file and block. And it almost makes it the default candidate for multiprotocol storage, where block and file share the same system, which is quite possibly the fastest growing segment of the storage system market. Every IT shop has years of Ethernet experience, and the hardware it hangs on is tried, true and pretty cheap.

Fibre Channel aficionados will say that’s all true, but you don’t get the performance of a network specifically designed for storage. Well, now you do get that performance (or nearly so) as 10 Gigabit Ethernet (10 GbE) gear begins to replace the old 1 GbE stuff in most shops. Even at a slightly slower 8 Gbps, Fibre Channel might still have the performance edge because of its storage pedigree. But it’s not that much of an edge, and the Ethernet roadmap, bolstered by the ratification of the IEEE 802.3ba standard last year, has 40 GbE on the near horizon with products already being demonstrated, and 100 GbE to follow hard on its heels.

Add the emergence of SAS disks as a replacement for FC disks on the back end, and the ease with which SAS can combine with SATA drives for tiered multiprotocol storage, and it looks like the NAS/iSCSI/Ethernet combo will be pretty tough to top.

Where does that leave Fibre Channel over Ethernet? If Fibre Channel is getting squeezed out of the storage picture and Ethernet is destined to be the data center-wide network of choice, why should we even bother with FCoE? FCoE appears to be a bridge technology (pardon the pun) that does a good job of linking FC and Ethernet. The question is how long we’ll need that bridge, and how many companies need to connect those two environments. Is it worth investing in CNAs and special switches and new interfaces? We’re not seeing a ton of FCoE storage systems, so maybe storage vendors don’t see a lot of potential for a big payback on that investment. Of course, there are some shops that can use FCoE now, use it effectively and save money in doing so. But I don’t expect they’re in the majority.

Ultimately, it looks like it’ll be an Ethernet world, which will certainly make things simpler. And as I/O virtualization catches up with server and storage virtualization, working with a single network architecture will make it even easier to virtualize and share network resources, and to use them more efficiently.

Copyright 2011, TechTarget. No part of this publication may be transmitted or reproduced in any form, or by any means, without permission in writing from the publisher. For permissions or reprint information, please contact Mike Kelly, VP and Group Publisher (mkelly@techtarget.com).
Rich Castagna (rcastagna@storagemagazine.com) is editorial director of the Storage Media Group.
StorWars | tony asaro
Three vendors still stalking storage virtualization
Overwhelmed by all the buzz around server virtualization? There are still solid alternatives—with real benefits— to virtualizing storage systems.
EXTERNAL STORAGE VIRTUALIZATION (ESV) is the use of intelligent storage controllers that provide volume management, data management and protection features by creating a virtual storage system using external hardware and capacity. The goal of ESV is to consolidate management and intelligent features, and to enable tiering and heterogeneous replication. If you implement a high-end intelligent ESV system, you can scale with lots of dumber and less-expensive systems behind it, creating a great balance of high-end capabilities with lower cost hardware.

This all looks good on paper, but if the value proposition is so apparent, why hasn’t ESV become the dominant storage infrastructure in our data centers? There are several reasons why ESV isn’t pervasive. The first is that there are only a handful of leading vendors providing ESV solutions, including Hitachi Data Systems, IBM and NetApp. One could argue that if EMC Corp. had decided five years ago that ESV was the new vision for storage, it would probably be the dominant architecture embraced by IT professionals today.

Hitachi was the first to combine its leading enterprise-class storage system with support for ESV technology back in 2003. The company has done a great job of implementing ESV technology and, in many cases, has taken business away from the likes of EMC because of this unique capability. The Hitachi ESV story has been getting better over time because of new technology that clearly quantifies its value. It’s able to provide thin provisioning, reclaim unused storage capacity and provide sub-LUN tiering to external storage systems. Sub-LUN tiering could be a “killer app” for external storage virtualization, but users are cautious of an idea that scatters their data across virtual volumes spanning multiple storage systems. If Hitachi can validate that sub-LUN tiering is highly reliable when combined with ESV, it could save customers millions and enable them to further leverage its unique architecture.
However, while ESV technology has helped to keep Hitachi in the enterprise-class storage game, it hasn’t been enough to make the company dominant. The enterprise-class storage system market is still a toe-to-toe battle with gains and losses measured in inches, not miles.

IBM has offered its external SAN Volume Controller (SVC) appliance for a number of years, and it’s been adopted by a sizable number of IBM users. The most frequent complaints I hear from IBM SVC users center around its cost, to the point where price has been the key deterrent to taking advantage of some of SVC’s coolest features, like heterogeneous replication. One IT professional asked me for advice about performing data replication between an IBM DS8000 and a DS4000, even though his company has an SVC deployed. The IT pro said it was too expensive to use the SVC, so they were seeking alternatives. I rarely run into non-IBM storage users who have implemented SVC, so it doesn’t look like IBM has done a particularly good job increasing its footprint beyond its existing customer base.

NetApp’s V-Series has been a popular product, enabling companies to leverage the intelligence of NetApp FAS using SAN-attached storage on the back end. It’s actually a more popular approach than you might think. Again, it’s not NetApp’s primary strategy, but it provides a smart alternative for users who have already invested a ton of money in their current SAN infrastructure.

Those three products from major vendors aren’t all that competitive with each other. The Hitachi USP V and VSP are high-end enterprise-class systems focused on SAN storage, providing a storage system that also enables external storage virtualization. IBM’s SVC seems to have been designed as a complement to IBM’s DS8000 and DS4000 storage systems. And NetApp’s V-Series is more of a midrange appliance that provides both NAS and SAN external storage virtualization. I doubt these three vendors ever run into each other in competitive situations.
External storage virtualization has made an impact and will continue to do so, but it has failed to establish any sort of dominance in the data center. Cost, complexity and the risk of the unknown are still major impediments to its adoption.

Another reason ESV isn’t more pervasive is that only Hitachi has made this technology a core part of its strategy. IBM and NetApp have it in their portfolios, but it’s not a top priority in terms of growth or vision. The other major storage vendors, including Dell Inc., EMC, Hewlett-Packard (HP) Co. and Oracle Corp., don’t offer ESV solutions in any real way. HP does OEM Hitachi’s USP V, but has never truly focused on its virtualization capabilities.

External storage virtualization is real and widely adopted, with thousands of companies and organizations using it courtesy of these three vendors. These solutions have been on the market for years and have developed a great deal of field traction, so the risk factor seems to be much less of a concern. ESV offers a great deal of benefit, and should be considered when analyzing your storage strategies and roadmaps.
Tony Asaro is senior analyst and founder of Voices of IT (www.VoicesofIT.com).
Managing storage for virtual desktops
Virtual desktops promise savings and consolidation similar to what server virtualization delivers, but once again, storage is a big issue.
BY ERIC SIEBERT
Implementing a virtual desktop infrastructure (VDI) involves many critical considerations, but storage may be the most vital. User experience can often determine the success of a VDI implementation, and storage is perhaps the one area that has the most impact on the user experience. If you don’t design, implement and manage your VDI storage properly, you’re asking for trouble.
VDI’S IMPACT ON STORAGE
The biggest challenge for storage in VDI environments is accommodating the periods of peak usage when storage I/O is at its highest. The most common event that can cause an I/O spike is the “boot storm” that occurs when a large group of users boots up and loads applications simultaneously. Initial startup of a desktop is a very resource-intensive activity, with the operating system and applications doing a lot of reading from disk. Multiplied by hundreds of desktops, the amount of storage I/O generated can easily bring a storage array to its knees. Boot storms aren’t just momentary occurrences—they can last from 30 minutes to two hours and can have significant impact. After users boot up, log in and load applications, storage I/O typically settles down; however, events like patching desktops, antivirus updates/scans and the end-of-day user log off can also cause high I/O. Having a data storage infrastructure that can handle these peak periods is therefore critical.

Cost is another concern. The ROI with VDI isn’t the same as with server virtualization, so getting adequate funding can be a challenge. A proper storage infrastructure for VDI can be very costly, and to get the required I/O operations per second (IOPS) you may have to purchase more data storage capacity than you’ll need.

Expect to spend more time on administration, too. Hundreds or thousands of virtual disks for the virtual desktops will have to be created and maintained, which can be a difficult and time-consuming task.
DETERMINING STORAGE REQUIREMENTS

To properly design a VDI infrastructure you need to understand the resources your virtual desktop users require. Don’t make assumptions; to properly calculate resource requirements you need actual statistics from the users whose desktops will be virtualized. Profiling the users and measuring their resource usage is the key to determining storage requirements. Products from vendors like Lakeside Software Inc. and Liquidware Labs Inc. can collect data from users’ desktops so you can perform an assessment of your environment and determine your needs. The longer you collect data, the less likely it will be affected by unusual or periodic activities.
The key measurement for storage is IOPS. A number of factors can affect IOPS (caching, block size), but the base calculation is derived from hard drive mechanics: rotational speed (rpm), latency and seek time. A typical 7,200 rpm drive might be capable of 75 IOPS, a 10K drive 125 IOPS, a 15K drive 175 IOPS and a solid-state drive 5,000 IOPS. Spread across a RAID group, you can multiply the number of drives in the RAID group by the IOPS of the drive to get the total IOPS the RAID group is capable of (e.g., six 15K drives x 175 IOPS = 1,050 IOPS). There are other factors, such as caching, that can increase IOPS, while RAID overhead and latency in network storage protocols can decrease it.

You should always measure actual user resource usage, but there are accepted averages you can use as a starting point. The averages are based on the characteristics of certain types of users:
                          Task workers   Productivity workers   Knowledge workers
I/O patterning
  (reads/writes)          60/40          80/20                  40/60
IOPS range                2 to 13        2 to 21                7 to 59
Avg. IOPS                 6              9                      22
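The per-drive figures and per-worker averages above lend themselves to a quick back-of-envelope sizing sketch. This is a rough model only (the function names are ours, the numbers are the illustrative ones from the text, and it ignores cache gains and RAID write penalties):

```python
# Illustrative per-drive IOPS figures from the text (not vendor specs).
DRIVE_IOPS = {"7200rpm": 75, "10k": 125, "15k": 175, "ssd": 5000}

def raid_group_iops(drive_type: str, drive_count: int) -> int:
    """Raw IOPS of a RAID group: drives x per-drive IOPS.
    Ignores caching (which raises IOPS) and RAID/protocol overhead
    (which lowers it)."""
    return DRIVE_IOPS[drive_type] * drive_count

def desktops_supported(raw_iops: int, avg_iops_per_desktop: int) -> int:
    """How many desktops a group could serve at a given average load."""
    return raw_iops // avg_iops_per_desktop

group = raid_group_iops("15k", 6)     # six 15K drives x 175 = 1,050 IOPS
print(group)                          # 1050
print(desktops_supported(group, 22))  # 47 desktops at the knowledge-worker average
```

As the text goes on to note, size against peak loads, not these averages.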
Don’t design your VDI storage to handle just average I/O loads; it has to accommodate peak I/O loads to provide a good user experience. Having enough storage capacity is obviously important, but how it performs is more important. Because the number of spindles plays a big part in a storage array’s performance, you may end up with more capacity than you need just to get the required IOPS.
FIBRE CHANNEL vs. iSCSI vs. NAS

The type of storage is often dictated by budgets and available existing storage infrastructure. A Fibre Channel (FC) SAN would provide ample performance, but acquiring it may make VDI too expensive to implement. iSCSI and NAS (NFS) are attractive alternatives, but you need to ensure they can meet I/O requirements. Using 10 Gb Ethernet (10 GbE) can dramatically increase the throughput to iSCSI and NAS devices, but if you haven’t implemented 10 GbE yet it could be just as expensive as implementing FC.

Peak IOPS loads may exceed the number of IOPS an iSCSI or NAS (NFS) device can handle, but adding cache or an accelerator in front of the storage device may improve performance sufficiently. Both iSCSI and NFS add CPU overhead to the host server; for iSCSI this can be offset with hardware initiators. Accelerator solutions typically won’t work with NAS, but there are other caching solutions available for NAS (NFS).
USE LINKED CLONES TO SAVE STORAGE

LINKED CLONES can be an invaluable feature to use in a virtual desktop infrastructure (VDI) environment. Linked clones work by having a single master virtual machine (VM) that holds an image of the base operating system the desktops will use. All virtual desktops read from this image, with any writes captured in a separate delta file created for each VM. Delta files are typically small, although they can grow if every disk block is written to—but that’s unlikely to happen. Linked clones can be periodically refreshed to include patches and operating system and application updates. Linked clones offer clear advantages, but they can be more complicated to maintain than full disk images.
LUN SIZES AND RAID
When sizing LUNs/volumes for VDI, focus on performance rather than just capacity to ensure that your LUNs can provide the required IOPS. There truly is no magic number for LUN sizes, as many factors come into play. Generally, the more spindles you have in the RAID group that makes up your LUN, the better. You also shouldn’t size your LUNs too small for the number of virtual desktops you’ll have on them. Whether you’re using full virtual disks or linked clones will also influence sizing, as the latter requires much less disk space.

You have a range of RAID options to achieve either better protection or better performance. A key factor that will influence your RAID choice is the read/write ratio of your virtual desktops. When reading data from a RAID group there’s no I/O penalty associated with the RAID overhead, but there is an I/O penalty when writing. The more protection you want, the more it will cost you in I/O penalties. For example, RAID 1 has an I/O penalty of two, as writes have to be written to both drives; with RAID 5 it increases to four and for RAID 6 it’s six. If your I/O workloads will involve more writing than reading, you want to use a RAID level that has less of a penalty when writing. Having a larger write cache in your array controller or using a custom RAID level like NetApp’s RAID-DP can also help.
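The write-penalty arithmetic above is easy to model. A minimal sketch (the helper is ours, using the penalty factors from the text; real arrays soften the effect with write cache, and schemes like RAID-DP behave differently):

```python
# RAID write penalties from the text: each host write costs this many back-end I/Os.
RAID_WRITE_PENALTY = {"raid1": 2, "raid5": 4, "raid6": 6}

def effective_iops(raw_iops: float, write_fraction: float, raid_level: str) -> float:
    """Host-visible IOPS once the write penalty is applied.
    Reads pass through 1:1; writes are divided by the penalty factor."""
    penalty = RAID_WRITE_PENALTY[raid_level]
    reads = raw_iops * (1 - write_fraction)
    writes = (raw_iops * write_fraction) / penalty
    return reads + writes

# A write-heavy 40/60 read/write workload on a 1,050 IOPS RAID group:
print(effective_iops(1050, 0.60, "raid5"))   # 420 + 157.5 = 577.5
print(effective_iops(1050, 0.60, "raid1"))   # 420 + 315.0 = 735.0
```

The gap between the two results is why a lower-penalty RAID level pays off for write-heavy desktop workloads.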
DRIVE TYPES
SAS drives offer better performance, but SATA drives can lower storage costs. Fast 15K drives can speed things up, but at an increased cost compared to 10K drives. Solid-state drives (SSDs) offer blazing performance but have a hefty price tag. Choosing drives to handle virtual desktop infrastructure workloads usually comes down to buying the best drives you can afford. Slower performing SATA drives typically aren’t desirable for most VDI workloads, so SAS drives are a better choice. The platters of a 15K drive read and write data faster, and overall latency is reduced, but the head actuator that moves across the drive to access data doesn’t get any faster. So even if the drive is spinning 50% faster, overall performance increases by approximately 30%, which results in higher IOPS.

You can mix and match drive types to provide faster storage where needed and use cheaper, slower storage for less demanding workloads. You might store the master disks for linked clones on fast SSD storage and the delta disks on SAS storage. You could take this a step further and use an automated tiering application to automatically balance workloads based on demand.
CACHING AND SAN ACCELERATORS
Using a caching device or a SAN accelerator can make up for slower performing storage devices and provide more IOPS to deal with boot storms and other periodic I/O peaks. It can also save money, because you may be able to use less-expensive storage devices and still handle your VDI I/O workloads. A caching device like NetApp’s Flash Cache can make a huge difference and can greatly increase the number of IOPS your storage is capable of. Configure your caching for the appropriate areas; events like boot storms generally are very read intensive, so a larger read cache will make a big difference.

SAN accelerators are a great way to add a high-performance caching layer in front of your existing storage device. FalconStor’s Network Storage Server (NSS) SAN Accelerator for VMware View is an easy-to-deploy appliance that can improve a storage system’s performance. It may even let you use low-cost SATA drives for your VDI storage and still get adequate performance.

VIRTUAL MACHINE RAM AND PAGING

THE AMOUNT OF RAM assigned to a virtual machine can have a big impact on its performance. If you don’t assign enough RAM, the operating system will start paging to disk, which can greatly increase the amount of disk I/O—a situation you want to avoid, as the needless storage I/O can degrade performance. Assigning too much RAM can cause swapping at the virtualization layer if a host has overcommitted memory, which can also degrade storage performance. It’s OK to overcommit host memory, and it’s commonly done with virtual desktop infrastructure (VDI); just make sure you don’t completely exhaust your host memory.
OTHER HELPFUL STORAGE FEATURES
Storage arrays come bundled with many features that can help offload methods and processes that might normally be done elsewhere. Allowing the storage array to handle the things it does best can increase efficiency and performance. Here are some storage array features that can be beneficial in a VDI environment.

Data protection. Features like Microsoft Volume Shadow Copy Service (VSS), which saves previous versions of changed files, can make it easier for users to restore their own files. But implementing this feature on all user desktops can cause undesirable overhead and increase storage array I/O.
AVOIDING I/O SPIKES

EVENTS THAT CAUSE I/O spikes, like boot storms, can’t be avoided, but other operations that cause I/O spikes can be. Use staggered schedules when performing antivirus scans/updates, as well as when patching and updating operating systems and applications. By spreading the load across a longer period of time, you can avoid concentrated I/O on your storage system. And you can offload antivirus processing from the guest OS layer and move it to the virtualization layer where it can run more efficiently. VMware Inc.’s vShield Endpoint can offload antivirus scanning to a dedicated virtual appliance, eliminating the need to run A/V software inside the guest OS. This greatly reduces the number of instances of antivirus you have to run on your hosts and, because it’s centralized, it’s easier to manage and resource usage is greatly reduced.
With FalconStor’s NSS SAN Accelerator, you can load an agent into the VDI gold master desktop template that allows the virtual desktop to communicate with the NSS SAN Accelerator appliance, so any file changes that occur inside the guest OS are backed up by the appliance. Files can be recovered by users, who can browse through previous versions and restore files to their desktop without involving the back-end storage device.

Data deduplication. Data deduplication can greatly reduce the amount of storage you’ll need for your virtual desktops, especially if you’re using full image virtual machines (VMs) instead of linked clones. If you have 100 desktops, each with a 20 GB disk, you’d need approximately 2 TB of desktop space. But VDI users typically run the same OS and use many of the same applications, so there’s lots of duplicate data. Data deduplication can reduce the amount of disk space needed when using full image virtual desktops by as much as 90%, cutting that 2 TB to 200 GB. With linked clones, a single master disk is shared, with all writes saved to a delta file that may be only 2 GB to 5 GB. But if you plan to use full images, data deduplication is a must.

Thin provisioning. Linked clones are already space efficient, so thin provisioning won’t provide much of a benefit. But when using full image virtual desktops, thin provisioning can be a huge space saver, allowing you to overallocate storage. Thin provisioning coupled with data dedupe can provide tremendous space savings when using full images. Thin provisioning can be done at the storage array layer or the virtualization layer. While you can implement it at both layers in a dense VDI environment, it might make more sense to offload it to the storage array so there’s less overhead on the virtualization layer. It also simplifies management by having to monitor and manage thin disks in just one area.

VMware vStorage APIs for Array Integration.
VMware Inc.’s vStorage APIs for Array Integration (VAAI) allow storage-related tasks normally performed by the virtualization layer to be offloaded to the storage system, including data copy operations (cloning, Storage vMotion), disk block zeroing and vmdk file locking. Leveraging VAAI in a VDI environment can provide benefits, as disk operations can be completed quicker and more efficiently than can normally be done by the hypervisor. VAAI is still rather new, and adoption and integration by storage vendors is still a work in progress, but storage arrays that support VAAI can provide some good benefits today and probably even more as the technology matures.
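The space-savings arithmetic above can be sketched in a few lines. The desktop count, image size, dedupe ratio and delta size are the article's illustrative figures, not sizing guidance for any particular array:

```python
# Back-of-the-envelope VDI capacity sizing, using the article's example
# figures: 100 full-image desktops at 20 GB each, ~90% reduction from
# deduplication, and 2-5 GB delta files for linked clones. All numbers
# are illustrative assumptions; substitute your own assessment data.

def vdi_capacity_gb(desktops: int, image_gb: float, dedupe_savings: float = 0.0) -> float:
    """Capacity needed, after an assumed fractional dedupe savings."""
    raw = desktops * image_gb
    return raw * (1.0 - dedupe_savings)

full_images = vdi_capacity_gb(100, 20)          # 2,000 GB (~2 TB) raw
with_dedupe = vdi_capacity_gb(100, 20, 0.90)    # ~200 GB after 90% dedupe

# Linked clones: one shared master disk plus a small delta file per desktop.
master_gb, delta_gb = 20, 5                     # 5 GB is the high end of the 2-5 GB range
linked_clones = master_gb + 100 * delta_gb      # 520 GB worst case

print(full_images, with_dedupe, linked_clones)
```

Even the worst-case linked-clone figure is a fraction of the raw full-image number, which is why dedupe matters so much more for full images.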
Storage March 2011
STORAGE
KNOW YOUR NEEDS
There are many things to consider when designing storage to support a virtual desktop infrastructure environment. While budgets may limit some of your options, there are a number of creative solutions available that can help you get the performance your virtual desktops will require. But the first step is to know your requirements: a proper assessment will help you define storage requirements that will, in turn, help you implement a properly sized storage solution. With a right-sized storage system in place, you can enjoy the benefits of VDI without worrying that your storage system will become a bottleneck for your users.

Eric Siebert is an IT industry veteran with more than 25 years of experience who now focuses on server administration and virtualization. He's the author of VMware VI3 Implementation and Administration (Prentice Hall, 2009) and Maximum vSphere (Prentice Hall, 2010).
Backup for remote and mobile devices
The problem of properly backing up remote site servers and mobile computing devices has been with us a long time. But with a workforce that’s getting more mobile, it’s time to get a handle on remote backups.
BY W. CURTIS PRESTON

REMOTE DATA CENTERS and mobile users represent the last frontier of backup and recovery. And that frontier spirit is often reflected in the way many companies rein in backup and recovery of remote and mobile data. Remote data centers, as well as users of laptops or other mobile devices, are often left on their own to make do with inferior methods (or none at all), while the "big" data center enjoys a modern-day backup and recovery environment. But with so much data being created and carried around outside the main data center, it's time for a change.
THE ROOT OF THE PROBLEM
Remote data centers often use standalone backup systems with limited connections to the corporate backup system. And because they typically deal with smaller data sets, remote centers often use less-expensive software and hardware. So, while the central data center may be running an enterprise-class backup product backing up to a large target data deduplication system or tape library, remote data centers often have workgroup-class backup products feeding backups to small autoloaders or even individual tape drives.

Likewise, the corporate data center is likely to have a contract with a media vaulting company to ensure that backups are taken off-site every day. Even better, the data center may be using a deduplication system that replicates backups off-site immediately. Remote data centers, on the other hand, often have backup systems that may go unmonitored, with backups that may end up in the backseat of someone's car if they leave the premises at all.

Mobile data backup is in even worse shape. Many companies don't have a policy for backing up mobile data at all, other than instructing mobile users to copy important data to a file server. That's more about ignoring the problem than having a viable backup policy in place.
PLANTING THE BACKUP SEED

THE FIRST BACKUP from a remote computer, referred to as the "seed," must be taken into consideration when designing your backup plan for remote data. Unless you're backing up extremely small amounts of data (a few gigabytes), you need to figure out a way to transfer the seed to your central site. Typically, this is done by backing up to a portable device of some sort that's then physically transferred to the central site and copied to the backup server. Make sure to discuss the options your backup vendor can offer in this area.
The typical mobile computer user simply doesn’t think about backing up their data on a regular basis. And requiring mobile users to synchronize their important data to a file server also ignores one basic fact—they’re mobile and there’s a good chance they don’t have the bandwidth to synchronize large files or lots of smaller files. Given the increased mobility of today’s workforce, a significant amount of what your company considers its intellectual property may reside solely on unprotected remote devices.
WHY MOBILE BACKUP IS SO HARD

Unfortunately, there are reasons why remote and mobile backup data sets have typically been handled so haphazardly, and it's important to understand them before attempting to fix the problem. The main reason both remote and mobile data sets aren't treated the same way as data in the corporate data center is the most obvious one: they're not in the corporate data center. Slow connections between remote sites or users and the central data facility mean the remote systems can't use the same backup software used in the data center. Those backup applications expect quick connections to servers in the data center and tend to perform very poorly when trying to speak to remote servers. Bandwidth limitations prevent the software from transferring large amounts of data, and latency creates delays that cause chatty backup apps to make a lot of roundtrips between the backup server and client.

Another challenge is that the computers being backed up can't be counted on to be powered on at all times the way servers in a data center are. Most laptop users (and users of other types of remote devices) power down their devices or put them to sleep when they're not in use. Less obvious, perhaps, is that users in remote data centers often do the same thing with their servers and desktop PCs. It's not a monumental issue, but it's one that must be addressed.

The next challenge is at the other end of the spectrum: some users leave their computers on, with apps open, 24 hours a day. So any viable remote backup system must address the issue of open (and possibly changing) files.

Finally, there's the requirement for bare-metal recovery. In the corporate data center, there are plenty of alternatives when a piece of hardware fails, such as a quick swap of an already-imaged drive. The best alternative a remote user may have is a WAN connection with a decent download speed and the hope that someone from corporate IT is available. If your remote servers or laptops have on-site service, the vendor can replace the hard drive or any other broken components. But then you'll need some type of automatic recovery that requires only the most basic steps (e.g., inserting a CD and rebooting).
REMOTE AND MOBILE BACKUP SOLVED
The typical way the remote bandwidth challenge is solved today is by using a block-level incremental-forever backup technology. The key to backing up over slow links is to never again transfer data that has already been transferred. Full backups are out, and even traditional incremental backups transfer too much data; you must back up only new, unique blocks.

Latency is a separate issue. Just because a product does block-level incremental backups doesn't mean it was designed for use as a remote application. You need to ensure that the backup software understands it's communicating over a remote connection and avoids "roundtrips" whenever possible. Even if you have a remote connection with enough bandwidth, the latency of the connection can severely hamper your backup performance if your backup software isn't prepared for it.

DEVICES GETTING MORE MOBILE

iPad users come in two varieties: those who use the iPad to view data and those who use it to create or modify data. You don't have to worry about those in the first category. But the second group, those who are actually creating or altering information on the go, needs to be instructed on how to back up their devices. The easiest way to do this is to make sure users sync their iPad with their laptop or desktop PC and then ensure that device gets backed up. It's not a perfect solution, but it's probably the best we have right now given the architecture of the iPad. The main challenge is that each application is given its own file space. Even if there's an application that can back up data remotely over the Internet, it wouldn't necessarily have access to the file spaces where data is being created or modified.
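The block-level incremental-forever idea can be sketched in a few lines: remember a hash of every block sent previously and ship only blocks whose hashes changed. The names and block size here are illustrative, not any vendor's actual product API:

```python
# Minimal sketch of block-level incremental-forever backup over a slow
# link: split the data into fixed-size blocks, remember the hash of each
# block already sent, and transfer only blocks that are new or changed.
# To avoid latency roundtrips, the whole changed set would be shipped in
# one batched transfer rather than block by block.
import hashlib

BLOCK = 4096  # illustrative block size

def changed_blocks(data: bytes, last_hashes: dict) -> dict:
    """Return {block_index: block_bytes} for blocks new or changed since
    the previous backup; updates last_hashes in place."""
    to_send = {}
    for offset in range(0, len(data), BLOCK):
        block = data[offset:offset + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        idx = offset // BLOCK
        if last_hashes.get(idx) != digest:
            to_send[idx] = block
            last_hashes[idx] = digest
    return to_send

state: dict = {}
first = changed_blocks(b"A" * 8192, state)                  # first run: both blocks are new
second = changed_blocks(b"A" * 4096 + b"B" * 4096, state)   # only the changed block resends
print(len(first), len(second))                              # 2, then 1
```

The second call transfers a single block even though the file is the same size, which is the whole point: after the seed, traffic is proportional to change, not to data.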
DEDUPE DOES IT ALL
The technology most people have adopted to solve many of these problems is data deduplication, which significantly reduces the number of bytes that must be transferred. A dedupe system that's aware of multiple locations will only back up bytes that are new to the entire system, not just bytes that are new to a particular remote or mobile location. So if a file has already been backed up from one laptop and the same file resides on another laptop, the second instance of the file won't be backed up.

There are two basic types of deduplication: target deduplication (appliance) and source deduplication (software). Target deduplication appliances are designed to replace the tape or standard disk drives in your existing backup system; your backup software sends backup data to the appliance, which dedupes the backups and stores only the new, unique blocks. Using a dedupe appliance has an added benefit: switching from tape to disk as your initial backup target will likely increase the reliability of remote-site backups. To use target deduplication, you'll have to install an appliance at each remote site and direct backups to the appliance. After the appliance dedupes the remote site's backup, it can be replicated back to a central site. Because it requires an appliance of some sort, target deduplication isn't appropriate for mobile data.

Source deduplication is backup software that dedupes the data at the very beginning of the backup process. The server or mobile device being backed up communicates with the source deduplication server and "describes" the segments of data it has found that need to be backed up. If the source deduplication server sees that a segment has already been backed up, the segment isn't transferred across the network. This saves disk space on the server and reduces the amount of bandwidth the backup process uses. Source deduplication can be used to back up both remote sites and mobile users.
All you need to do is install source deduplication software on the computer to be backed up and initiate the backup. (This is a bit of an oversimplification, of course, and ignores the challenge of completing the initial full backup.)
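The source-dedupe exchange described above can be sketched as follows. The client "describes" its segments by hash, the server replies with the hashes it has never seen, and only those segments cross the network. The class and function names are illustrative stand-ins, not a real product's API, and both ends run in one process here for simplicity:

```python
# Sketch of source deduplication: the client hashes its data segments,
# asks the backup server (in one batched query, avoiding per-segment
# roundtrips) which hashes are unknown, and transfers only those.
import hashlib

class DedupeServer:
    def __init__(self):
        self.store = {}                      # hash -> segment bytes

    def missing(self, hashes):
        """One batched lookup: which of these segments are new to the system?"""
        return [h for h in hashes if h not in self.store]

    def ingest(self, segments):
        """Store the new, unique segments ({hash: bytes})."""
        self.store.update(segments)

def backup(server, data, seg_size=4096):
    """Back up `data`; return the number of segments actually transferred."""
    segs = {hashlib.sha256(data[i:i + seg_size]).hexdigest(): data[i:i + seg_size]
            for i in range(0, len(data), seg_size)}
    need = server.missing(list(segs))
    server.ingest({h: segs[h] for h in need})
    return len(need)

srv = DedupeServer()
laptop_a = backup(srv, b"Q3 forecast " * 1000)   # first laptop: every segment is new
laptop_b = backup(srv, b"Q3 forecast " * 1000)   # same file on a second laptop: nothing sent
print(laptop_a, laptop_b)
```

The second laptop's backup transfers zero segments because its data is already known to the entire system, which is exactly the location-aware behavior the article describes.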
CONTINUOUS BACKUP OF REMOTE DATA
Another technology that should be considered for remote site and mobile user backup is continuous data protection (CDP). Think of CDP as replication with a “back” button. Like replication, it’s a continuous process that runs throughout the day, incrementally transferring new blocks to a remote backup server. But unlike standard replication products, CDP systems also store a log of changes so that a protected system can be restored to any point in time within its retention period in a few seconds or less. While a traditional backup system (including one using deduplication) can restore a client to the last time a backup ran, a CDP system can restore a client to only seconds ago, since the backup is continuously occurring. A CDP product can be used to back up both remote sites and mobile users because it’s also a block-level incremental-forever technology.
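The "replication with a back button" idea can be sketched with a toy journal that records every block write with a timestamp, so the volume can be rebuilt as of any moment within the retention window. This is a single-process illustration; real CDP products journal at the driver or array level:

```python
# Toy CDP journal: continuous, incremental capture of block writes plus
# a change log, so a protected volume can be restored to any point in
# time, not just to the last scheduled backup.

class CdpJournal:
    def __init__(self):
        self.log = []                          # (timestamp, block_no, new_bytes)

    def write(self, t, block_no, data):
        """Record a block write as it happens (the continuous part)."""
        self.log.append((t, block_no, data))

    def restore(self, as_of):
        """Rebuild the volume as it looked at time `as_of` (the back button)."""
        volume = {}
        for t, block_no, data in self.log:
            if t <= as_of:
                volume[block_no] = data        # later writes are undone by omission
        return volume

j = CdpJournal()
j.write(t=10, block_no=0, data=b"v1")
j.write(t=20, block_no=0, data=b"v2")          # block 0 overwritten at t=20
j.write(t=25, block_no=1, data=b"new")
print(j.restore(as_of=15))                     # {0: b'v1'} - just before the overwrite
print(j.restore(as_of=30))                     # {0: b'v2', 1: b'new'} - current state
```

Restoring to t=15 recovers the pre-overwrite contents, something a nightly backup could only do if it happened to run between the two writes.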
INTEGRATED DATA PROTECTION Remote sites may have another option, using what’s sometimes referred to as self-healing storage. This broad term refers to storage that has backup and recovery integrated as core features. Typically, it’s used to describe storage arrays that use redirect-on-write snapshot technology to provide historical versions of blocks and files within the volume being protected. The snapshots are then replicated to another volume (typically located in an alternate location), providing both history and relocation of data without using traditional backup methodologies. To use one of these products to back up a remote site would, of course, require installing a storage array at each remote site that would replicate to another larger array in a central site.
WHAT ABOUT THE CLOUD?

A cloud backup service is simply another method of delivering one of the above options. Some cloud backup services use source dedupe, while others use CDP. And some services provide an on-site target appliance that then replicates to the cloud or acts as a target for the replicated backups from your deduplication appliance. Some self-healing storage arrays know how to replicate to the cloud as well.

The bare-metal recovery issue can only be addressed with a backup software product or service that has the feature built in. Give careful consideration to the importance of this feature for your environment. And like everything else in IT, don't just believe what the vendors say; test the product or service to see if it does exactly what you need it to do.

You should also ask how a vendor's products handle backing up systems that aren't always turned on or connected to the WAN. While most products and services can accommodate these occurrences, the way they do it can significantly impact the user experience. Suppose, for example, that a laptop hadn't been connected to the Internet for a long time and, when it finally did connect, the backup product started the long-overdue backup. That might seem like a good idea, but it may also consume all of the laptop's available resources. That could prompt a help desk call or cause a user to stop the backup process when it interferes with other work. Make sure you understand the load the backup application places on the system it's backing up under various conditions.

W. Curtis Preston is an independent consultant, writer and speaker. He is the webmaster at BackupCentral.com and the founder of Truth in IT Inc.
Quality Awards VI: NetApp tops enterprise arrays field again

A very strong group of enterprise array vendors racked up some of the highest scores we've seen to date on the sixth edition of the Quality Awards for enterprise arrays. BY RICH CASTAGNA
EVERY COMPETITION has its winners and losers, and the latest Storage magazine/SearchStorage.com Quality Awards for enterprise arrays is no exception. NetApp Inc. is the clear winner in a very competitive field, but it doesn’t seem appropriate to call the other five vendors losers as scores across the board were impressively high—only 0.34 points separated NetApp’s high overall rating and the lowest score.
OVERALL RANKINGS: ENTERPRISE ARRAYS
NetApp 6.71 | Hitachi 6.61 | EMC 6.53 | IBM 6.53 | 3PAR 6.45 | HP 6.37

PRODUCTS IN THE SURVEY AND KEY TO GRAPHS
These products were included in the sixth Quality Awards for enterprise arrays survey (number of responses in parentheses):
• 3PAR InServ Storage Server T400/T800 or S400/S800 (32)
• EMC Symmetrix, DMX/DMX-3/DMX-4, V-Max (240)
• Hewlett-Packard XP Series or StorageWorks P9000 Series (126)
• Hitachi Data Systems USP/USP V/VSP Series (95)
• IBM DS8000 Series or XIV Storage System (106)
• NetApp FAS6000 Series or V6000 (123)
• Fujitsu Eternus DX8400/DX8700*
*Did not receive enough responses to be included in the final results
This is the second consecutive time NetApp has finished first, after sharing top honors with EMC Corp. on the fourth edition of the Quality Awards. Such a solid showing should convince any remaining skeptics that NetApp isn't just network-attached storage (NAS) anymore. NetApp cruised into the lead with an overall 6.71 by taking top honors in four of the five rating categories, including a couple of narrow victories for sales-force competence and initial product quality. Hitachi Data Systems finished a strong second with a score of 6.61, while EMC and IBM were deadlocked in third at 6.53.

Users' ratings were overwhelmingly positive. For the first time, all vendors and their product lines racked up scores of 6.00-plus in every category. Average category scores were also the highest we've seen in six iterations of the enterprise array survey, and the overall average rating was the highest to date. Clearly, enterprise storage system vendors are doing a good job of meeting the needs of their users.
SALES-FORCE COMPETENCE

Category scores: NetApp 6.55 | Hitachi 6.51 | EMC 6.41 | IBM 6.33 | 3PAR 6.33 | HP 6.09

NetApp just barely nudged out Hitachi in the sales-force competence rating category. The six statements in the category all relate to how well a vendor's sales and sales support teams respond to the needs of their prospective customers. NetApp finished first for two statements: "My sales rep is knowledgeable about my industry" (6.66) and "My sales rep understands my business" (6.56). Hitachi ranked highest for "My sales rep is flexible" (6.61) and "My sales rep is easy to negotiate with" (6.45). EMC's 6.78 led a bevy of strong scores for the statement "The vendor's sales support team is knowledgeable."

Praise was spread around fairly liberally in the sales-force competence category, with four vendors leading on at least one statement. Overall, the average for all six products is the highest sales satisfaction rating registered to date on the enterprise array Quality Awards.
INITIAL PRODUCT QUALITY

Category scores: NetApp 6.75 | 3PAR 6.74 | Hitachi 6.58 | IBM 6.53 | HP 6.49 | EMC 6.33

A sales team may pave the way for its company's products, but the true test doesn't happen until the system is uncrated, racked and declared ready to run. NetApp prevailed again in this category, but by such a slim margin (0.01 point) over 3PAR that it would be more accurate to call it a statistical dead heat. 3PAR flirted with the 7.00 mark with a score of 6.97 for "This product was easy to get up and running," which helps to validate its claim as being among the easiest-to-use arrays. Hitachi's 6.78 and 3PAR's 6.77 led for the statement "I am satisfied with the level of professional services this product requires," and NetApp ranked highest for three statements relating to defect-free installations, little need for vendor intervention and ease of use. Hitachi garnered the crown for the key statement "This product delivers good value for the money" with a 6.73; NetApp was second for the statement with a 6.62.

Scores were high in the initial product quality rating category, and winning margins were barely measurable. With strength across the board, it appears enterprise data storage vendors are making good first impressions by getting their products up and running as quickly and painlessly as possible.

ABOUT THE QUALITY AWARDS
The Storage magazine/SearchStorage.com Quality Awards are designed to identify and recognize products that have proven their quality and reliability in actual use. The results are derived from a survey of qualified readers who assess products in five main categories: sales-force competence, initial product quality, product features, product reliability and technical support. Our methodology incorporates statistically valid polling that eliminates market share as a factor. Our objective is to identify the most reliable products on the market regardless of vendor name, reputation or size. Products were rated on a scale of 1.00 to 8.00, where 8.00 is the best score. A total of 441 respondents provided 727 system evaluations.
PRODUCT FEATURES
Category scores: NetApp 6.86 | EMC 6.67 | IBM 6.56 | Hitachi 6.54 | HP 6.42 | 3PAR 6.31

Even when a good sales experience is reinforced by solid initial operations, the acid test for an enterprise storage system ultimately comes down to whether or not it does all the things you need it to do. Once again, all products fared extremely well in the product features category, with NetApp leading the pack with a 6.86 score, highlighted by one of only two 7.00-plus scores: a 7.11 for "This product's snapshot features meet my needs." That's probably not a huge surprise, as NetApp is known for its snapshot prowess and has done well in that area on previous Quality Awards. NetApp nearly swept the product features category, getting scores close to 7.00 for two other statements: "This product's mirroring features meet my needs" (6.96) and "Overall, this product's features meet my needs" (6.91). For the lone statement NetApp lost out on, EMC flexed its scalability muscles but fell just shy of the 7.00 mark with a still-striking 6.98 for "This product's capacity scales to meet my needs."
PRODUCT RELIABILITY

Category scores: Hitachi 6.85 | EMC 6.78 | IBM 6.72 | NetApp 6.69 | HP 6.57 | 3PAR 6.41

With many enterprise array purchases carrying six- or seven-digit price tags, the tipping point for an investment of that size is likely to pivot on how reliably the product performs over a period of time. Here, too, enterprise storage vendors can pat themselves on the back a little, as the product reliability category netted the highest average score among the five rating categories. Four of the six products in the current survey turned in their best scores in this category, with Hitachi beating all comers with a very solid 6.85 rating. EMC (6.78) and IBM (6.72) had their best category results for product reliability and finished second and third, respectively. Hewlett-Packard (HP) Co. also netted its best point total, finishing with a 6.57 that placed it fifth in this strong field.

Hitachi earned high scores for four of the five statements in this category and missed snagging the fifth by a mere 0.03 points. The true test of reliability is uninterrupted service, and all products in this survey were rated at 6.50 or better for the critical statement "This product experiences very little down time." Hitachi and EMC share the top spot for that statement, both within a hairsbreadth of 7.00 with identical 6.97 scores. Meeting expectations is also a key gauge of reliability. Again, all six products came through with scores of 6.50 or higher for "The product meets my service-level requirement," highlighted by Hitachi's 6.98 and EMC's nearly as dazzling 6.96.
TECHNICAL SUPPORT

Stuff happens. And sometimes that stuff happens to a very large, very expensive storage array. No storage manager wants to be the one who has to pick up the phone and make "the call," but when it's unavoidable, a quick and effective reply is the best way a vendor can soothe jagged nerves. When it comes to delivering support as promised, our vendors fared well, with all six netting their highest scores in the technical support category for "Vendor supplies support as contractually specified." Hitachi's 7.00 nosed out NetApp (6.97) for that statement and racked up only the third 7.00 statement score in the survey along the way. There truly weren't any weak performances when it comes to meeting users' expectations—HP's 6.26 might have trailed the pack, but it's a sturdy score nonetheless. Ultimately, NetApp prevailed in the technical support category by topping the ratings on five of the eight statements; second-place Hitachi ranked highest on the other three. In the six enterprise arrays surveys we've fielded, NetApp's 6.70 and Hitachi's 6.56 technical support category scores are the highest recorded to date.

NetApp . . . . . . . . . . . . . . . 6.70
Hitachi . . . . . . . . . . . . . . . 6.56
IBM . . . . . . . . . . . . . . . . . 6.50
EMC . . . . . . . . . . . . . . . . . 6.46
3PAR . . . . . . . . . . . . . . . . 6.45
HP . . . . . . . . . . . . . . . . . . 6.26

ENTERPRISE ARRAY HEAVY LIFTERS
Average installed capacities of the enterprise arrays as reported by survey respondents.

VENDOR (TB)
Hitachi . . . . . . . . . . . . . . . 143
EMC . . . . . . . . . . . . . . . . . 142
IBM . . . . . . . . . . . . . . . . . 122
3PAR . . . . . . . . . . . . . . . . 120
NetApp . . . . . . . . . . . . . . . 106
HP . . . . . . . . . . . . . . . . . . . 84

WOULD YOU BUY AGAIN?

The strongest indicator of product satisfaction might be a customer's willingness to make the same purchase all over again. When we asked that question on this survey, the results mapped very closely with the overall rankings for service and reliability. That's great news for vendors like NetApp and Hitachi, whose prospects of repeat customers look very promising with buy-again numbers north of 90%. But the overall picture has to be pretty pleasing to data storage vendors and users alike as 87% of all users in our survey said that with the benefit of hindsight, they'd still buy the same storage systems again.

ALL THINGS CONSIDERED, WOULD YOU BUY THIS PRODUCT AGAIN?
NetApp . . . . . . . . . . . . . . . 92%
Hitachi . . . . . . . . . . . . . . . 90%
EMC . . . . . . . . . . . . . . . . . 88%
IBM . . . . . . . . . . . . . . . . . 88%
HP . . . . . . . . . . . . . . . . . . 86%
3PAR . . . . . . . . . . . . . . . . 81%
Rich Castagna (rcastagna@storagemagazine.com) is editorial director of the Storage Media Group.
hot spots | terri mcclure
Cloud projects climbing IT priority lists
A recent ESG survey indicates that investments in cloud services and related infrastructure will increase in 2011, meaning the much-hyped technology may start to hit its stride in the real world.
SPENDING ON CLOUD infrastructure and services is expected to increase this
year, according to ESG’s recent 2011 spending intentions survey. In late 2009, when we asked survey respondents about their organization’s IT priorities for 2010 and the first half of 2011, cloud computing and software-as-a-service (SaaS) ranked 22 and 24, respectively, out of the 24 priorities listed. In late 2010, when asked about IT priorities in 2011 and the first half of 2012, respondents ranked cloud computing in the top half of the priority list (number 12) and placed SaaS in spot 14. That bodes well for cloud infrastructure and service providers, and indicates that the cloud investigations users conducted in 2010 could translate to real investments in 2011.
BACKGROUND
To assess IT spending priorities over the next 12 to 18 months, ESG recently surveyed 611 North American and Western European senior IT professionals representing midmarket (100 to 999 employees) and enterprise-class (1,000 employees or more) organizations. As part of the survey, we asked questions pertaining to both SaaS and infrastructure-as-a-service (IaaS). The survey defined those two cloud-related terms as follows:

SaaS: A software distribution model in which applications are hosted by a vendor or service provider and made available to customers over a network, typically the Internet.

IaaS: A computing model in which the equipment, including servers, storage and networking components, used to support an organization's operations is hosted by a service provider and made available to customers over a network, typically the Internet. The service provider owns the equipment and is responsible for housing, running and maintaining it, with the client typically paying on a per-use basis.
ADOPTION TRENDS
While 34% of respondents currently use SaaS, a surprising 91% report they'll spend at least some of their 2011 budget dollars on SaaS-based applications. But even at those organizations using SaaS today, usage is pretty light; a large majority (70%) use SaaS to deliver fewer than 20% of their applications, and only 15% of respondent firms deliver 31% or more of their applications via SaaS.

That picture changed significantly when we asked users to consider how they'll deliver applications three years from now: the percentage of those survey takers using SaaS for 20% or fewer of their applications shrinks to 39%, while the percentage of those respondents delivering 31% or more of their applications via SaaS rises to 30%. The apps respondents deliver via SaaS run the gamut of enterprise apps, from accounting and financial applications to security, but the ones most likely to be delivered via SaaS are customer relationship management (CRM) and email, which isn't surprising given the amount of data they generate.

Among ESG survey takers, IaaS usage is lower than that of SaaS, with 17% currently using IaaS to meet infrastructure requirements. But things are looking up on the IaaS side as well, with 81% of respondents reporting they'll spend at least some of their 2011 budget on the technology.
THE BIGGER TRUTH

Few technologies have been able to maintain the level of hype that cloud has for IT users and vendors alike. Typically, new technologies just can't live up to the expectations and promises vendors set. Cloud is bucking the trend for multiple reasons, not the least of which is that no one can seem to agree on exactly what it is. ESG believes cloud is ultimately a service delivery model where applications (and/or infrastructure) are delivered as a service to a consumer. The jury is still out in end-user and vendor circles as to whether or not "private cloud" counts as a cloud initiative or is just IT with a services-oriented architecture.
The spending survey indicates that reducing costs, especially operational costs, is still one of the top drivers of IT decision making in 2011, and the cloud certainly seems to be gaining popularity as a means to help contain overall IT costs. Because the value propositions for IaaS and SaaS are related to cost reduction, ESG specifically asked users about cost-cutting measures and saw cloud computing start to emerge as a viable option. Organizations in cost-reduction/containment mode indicate a significant increase in their willingness to consider cloud computing services (23% in 2011 vs. 17% in 2010 and 13% in 2009) as a way to control IT costs this year.

Will cloud strategies continue their momentum in 2011? ESG research indicates they will—at least from the end-user side. Because 2010 was mostly about cloud talk and hype, and end users now indicate a willingness to invest in this area in 2011, it looks like vendors will have to "put up or shut up" and deliver on the cloud promises they've been making to deliver truly cost-effective, secure and available IT services.

Terri McClure is a senior storage analyst at Enterprise Strategy Group, Milford, Mass.
read/write | jeff boles
Let’s focus on storage performance in 2011
For the last few years our focus has been on storage capacity and dealing with astronomical data growth rates. In the process, we’ve overlooked storage performance, but promising developments are afoot.
AS WE APPROACH the end of the first quarter of the new year, it looks like we're
going to see a much different data storage industry in 2011. That's a good thing because 2010 was a minefield for strategic planning—a roller coaster of excitement with the unpredictable future that goes with it. Game-changing technologies made their first real forays into the market, including cloud storage service providers, cloud on-ramps, cloud backup, cloud disaster recovery (DR) and others. In a recent emerging market forecast, Taneja Group identified storage products associated with the cloud as growing into a $10 billion market by 2014. As we see it, we're out of the gate with new product innovations and now engaged in a steady march.

In 2010, there was turmoil among the vendor ranks like we've never seen, with major vendor strategy shifts, a number of small vendor failures and mind-boggling acquisitions. Whew! Surely, all that vendor turmoil must be tapering off. We need a breather and we need to look at what's coming.

There's definitely one thing that needs more attention: performance. For too long we've been fixated on capacity, and that preoccupation has sapped some of the innovation energy that should have been directed toward solving performance problems. And because of that neglect, many of us are finding we need better performance than we can get from today's standard approaches. It seems most of the activity in 2008 and 2009 was centered on capacity, and in 2010 that capacity discussion shifted to the cloud. Lately, it seems performance only comes up when the subject is solid-state storage, an alternative ill-suited to many of the storage systems most of us have in our data centers. That might be the future, but performance needs to be addressed now.

But I'm beginning to see performance solutions that have some real promise. Discussions around solid-state technology have made us acutely aware of the massive limitations of many traditional storage systems. And that seems to have inspired some innovators to do something entirely different. Despite what some vendors are peddling, it won't be the next generation of ridiculously complex, application-specific stacks of hardware that will speed up your database or your number crunching. And don't expect bigger and cheaper appliances to deliver on that promise either. Those and similar solutions have been anathema for strategic planning. How are you supposed to build a performance strategy from a proliferation of one-trick ponies that can't adapt to changing demands or the massive shared access required within today's consolidating data centers?

My requirements for doing better aren't that stringent. Fundamentally, a solution should give you real performance and promise to scale well beyond what you need today. Simultaneously, it shouldn't look entirely different than your existing storage. It should still store data in a permanent, reliable way and be just as serviceable. We're seeing vendors like Alacritech, Avere, Kaminario and others think hard about how to deliver performance that fits into today's storage infrastructure without a massive integration effort, and with the right capabilities to deliver future scalable performance without compromises in storage capabilities. One result of these emerging, improved architectures for storage performance is a dramatic change in what we pay for I/O—when assessed on the basis of dollars per I/O, such systems have massively better costs than rotational disk. But those figures only tell part of the story.
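The dollars-per-I/O comparison above is simple division; here is a minimal sketch of the arithmetic, where the device prices and IOPS figures are rough, era-appropriate assumptions chosen for illustration, not quoted benchmarks or vendor figures:

```python
def cost_per_iops(price_usd: float, iops: float) -> float:
    """Dollars paid per I/O-per-second a device can deliver."""
    return price_usd / iops

# Hypothetical 2011-era ballpark figures (assumptions, not vendor quotes):
hdd = cost_per_iops(300.0, 180)      # 15K rpm disk: ~$1.67 per IOPS
ssd = cost_per_iops(1500.0, 20000)   # enterprise SSD: ~$0.08 per IOPS

# Even at 5x the purchase price, the SSD is roughly 20x cheaper per I/O.
print(f"HDD: ${hdd:.2f}/IOPS  SSD: ${ssd:.3f}/IOPS")
```

The same division run against capacity instead of IOPS inverts the result, which is why capacity-driven and performance-driven purchases tend to favor different media.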
Bad architectures come at a price way beyond the difference in upfront costs. The upfront costs of any performance solution with a bad architecture may well be compounded by the operational costs incurred in the course of coping with and offsetting the bad architecture, such as:

• The cost of integration. Integrating solutions that address a subset of data can require enormous effort.

• The cost of data protection. Few solutions are built for primary long-term storage, so the cost of data protection can increase significantly. Worse yet are the considerations for DR.
• The cost of replacement. Solutions built without long service life as a key design criterion may reach a premature end of life. The cost here isn't just about replacement, but dealing with the effects of potential failure, often with strategies such as maintaining idle spare equipment.

• The cost of service. Some products aren't built for in-place service, so they may require specialized expertise for service or disruptive total replacements if a failure occurs.

• The cost of scale. A product that's attractively priced for a single app may easily be a mismatch for other demands. This can result in apps requiring multiple storage systems, capacity underutilization or compromising on performance.

• The cost of management. Inadequate scaling can lead to sprawl and result in having to deploy multiple devices to satisfy different application needs. The cost of managing isolated systems can become enormous.
And the bigger things get, the more costs add up. I'm not suggesting your primary storage vendor doesn't have answers that address these issues, as some are thinking pretty innovatively about how to extend controller capabilities. But you need to make sure you have the right primary data storage vendor if you're buying into their answer for performance. For some, their current investments or intended new investments may never go far enough. There may well be a significant deficiency in both the value you're getting per dollar and in the competitive business capabilities of your resulting IT systems.

It may be a year of only incremental improvement, but when it comes to performance, even small improvements can transform how we do storage. With a few innovators on the horizon, it looks like 2011 could see even greater strides.

Jeff Boles is a senior analyst at Taneja Group. He can be reached at jeff@tanejagroup.com.
snapshot
Still a struggle to achieve storage efficiency
Everybody's talking about storage efficiency, but how are we really doing? Judging from our latest survey, there seems to be a sense of accomplishment tempered by a bit of frustration. While most respondents feel their companies use storage pretty efficiently, there's still plenty of disk capacity being wasted because admins just don't have the right tools to stay on top of the situation. More than two-thirds (68%) say they'd do a better job if they had better tools, and nearly half are looking for a little more cooperation from users (45%) and management (44%).

Most storage pros (66%) rely on the tools that came with their storage systems to manage capacity use, while 30% have third-party apps to help them out. Among the newer storage efficiency technologies, storage tiering (49%) and archiving (48%) are most commonly used to keep the lid on capacity. And while everybody complains about storing old, useless files, only 31% have a process or program in place for data deletion. —Rich Castagna
How would you rate your company's use of disk storage capacity?
Excellent, we're very efficient . . . . . . . . . 7%
Good . . . . . . . . . . . . . . . . . . . . . . 27%
So-So, there's room for some improvement . . . 51%
Poor . . . . . . . . . . . . . . . . . . . . . . 12%
Terrible, we're wasting lots of disk capacity . . 3%

How much disk capacity would you estimate is wasted in your company because you lack the tools to manage and use disk capacity efficiently?
More than 75% of installed capacity . . . . . . . 3%
50% to 75% of installed capacity . . . . . . . . 13%
25% to 50% of installed capacity . . . . . . . . 13%
10% to 25% of installed capacity . . . . . . . . 40%
Less than 10% of installed capacity . . . . . . 31%

Which of these capacity management tools do you currently use at your company?*
Storage tiering (automated or manual) . . . . . 49%
Data archiving . . . . . . . . . . . . . . . . . 48%
Thin provisioning . . . . . . . . . . . . . . . 38%
Primary/nearline data compression or deduplication . . . 33%
None . . . . . . . . . . . . . . . . . . . . . . 5%
*Multiple selections permitted

69% don't have a regular program or process for data deletion.
"Although there's no shortage of vendor-supplied tools and third-party software to assist today's storage managers, it remains nearly impossible to alter storage consumers' behavior." —Survey respondent
TechTarget Storage Media Group
STORAGE Vice President of Editorial Mark Schlack
Editorial Director Rich Castagna
Senior Managing Editor Kim Hefner
Executive Editor Ellen O'Brien
Creative Director Maureen Joyce
Contributing Editors Tony Asaro, James Damoulakis, Steve Duplessie, Jacob Gsoedl, W. Curtis Preston

Executive Editor Ellen O'Brien
Senior News Director Dave Raffo
Senior News Writer Sonia Lelii
Features Writer Carol Sliwa
Senior Managing Editor Kim Hefner
Associate Site Editor Megan Kellett
Editorial Assistant Allison Ehrhart
COMING IN APRIL
Virtual Disaster Recovery (DR) One of the most touted benefits of virtualization is that the hardware is untethered from the OSes and apps using it. We cover the best ways to leverage virtual machines and virtualized storage for more reliable DR.
Thin Provisioning In Depth
Senior Site Editor Andrew Burton
Managing Editor Heather Darcy
Features Writer Todd Erickson
Senior Site Editor Sue Troy
Thin provisioning effectively allows overprovisioning of storage systems by only allocating disk capacity to apps when they write to disk. But vendors implement thin provisioning differently, so users need to know how an array’s thin provisioning process matches their applications.
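The allocate-on-write behavior described above can be sketched in a few lines of Python. This is a toy model with a hypothetical `ThinVolume` class and made-up sizes, not any vendor's implementation:

```python
class ThinVolume:
    """Toy thin-provisioned volume: advertises a large logical size but
    consumes backing capacity only when an extent is first written."""

    def __init__(self, logical_gb: int):
        self.logical_gb = logical_gb   # capacity the host sees
        self.allocated = set()         # 1 GB extents actually backed by disk

    def write(self, extent: int) -> None:
        if extent >= self.logical_gb:
            raise ValueError("write beyond logical size")
        self.allocated.add(extent)     # allocate on first write only

    @property
    def used_gb(self) -> int:
        return len(self.allocated)

vol = ThinVolume(logical_gb=1000)      # host sees a 1 TB volume
for extent in range(50):               # but the app writes only 50 GB
    vol.write(extent)
print(vol.used_gb)                     # prints 50, not 1000
```

The host believes it owns 1 TB while the array has committed only 50 GB, which is exactly why vendors' differing allocation granularity and reclamation behavior matter when matching an array's thin provisioning to an application.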
UK Bureau Chief Antony Adshead
TechTarget Conferences
Director of Editorial Events Lindsay Jeanloz
Editorial Events Associate Jacquelyn Hinds
Storage magazine Subscriptions: www.SearchStorage.com
Storage magazine 275 Grove Street, Newton, MA 02466 editor@storagemagazine.com
Exchange 2010 and Storage Systems With a new version of Microsoft’s Exchange Server came new issues related to the storage that supports the mail server system. Will new features such as built-in archiving and e-discovery capabilities replace purpose-built third-party products? And will improved replication make it easier to protect Exchange mailstores? We provide some best practices.
And don’t miss our monthly columns and commentary, or the results of our Snapshot reader survey.