About NCE

Over 30 years of experience in IT has served the privately-owned NCE business well and established us as one of the leading names in providing, integrating and maintaining technology in the Corporate Data Centre. Although this book is focused on Storage, an area in which NCE continue to enjoy great success, our service skills go way beyond that aspect alone. We maintain a wide variety of Data Centre products, with servers and switches amongst our service portfolio.
Our engineers fall into two camps: skilled personnel located at our Dedicated Repair Centres in both Europe and North America, and multi-skilled personnel providing field service (on-site) capabilities through our regional support hubs in both of those territories. We work with a variety of “layers” of the IT industry too, providing service delivery for manufacturers & OEMs (as well as selling their technology), distribution, resellers, system integrators, service providers, third-party maintenance companies and end-users. Our customers tell us that they like our technical strengths along with our impartiality and independence, and we hope that if you aren’t already an NCE customer you will become one once you’ve read through this book! A special thanks to everyone who has played a part in putting this book together: Maddy for proof-reading it, David Jerram for laying it out, the Solutions Sales Team and all at NCE for keeping the wheels turning whilst I shut myself away in a room to put this onto paper, and my family who have the patience (and provide me with coffee & refreshments) to allow me to do this in what would typically be “family time”. Not to mention those of you in the industry who continue to educate me about storage on a daily basis and provide the content for this publication! Incidentally, a message for my son who says that being an author and writing a book makes you rich - with knowledge, yes, that’s the truth of it...
2
Contents

About NCE ..... 2
Understanding I/O ..... 6
SSD: What’s the difference? ..... 8
SSD: Reliability ..... 10
SSD Summary ..... 12
The SSD race: The runners and riders ..... 14
A quick (affordable) fix in a Flash ..... 16
Hybrids ..... 18
Vendor Feature: Tegile ..... 20
Vendor Feature: Nexsan ..... 21
Software Virtualisation ..... 22
Vendor Feature: DataCore ..... 23
Auto-Tiering ..... 24
Vendor Feature: Dot Hill ..... 25
NCE Professional Services ..... 26
Storage Connectivity ..... 28
Who’s Who of Connectivity ..... 29
Vendor Feature: QLogic ..... 30
QLogic Customer Story: Bolton NHS Foundation Trust ..... 31
Storage Interface Guide ..... 32
Hard Disk Drives (HDD) ..... 34
RAID: The Levels ..... 36
RAID: Who’s Who? ..... 37
Vendor Feature: DotHill ..... 38
Roadmaps for HDD ..... 40
HDD Manufacturers ..... 41
Software Virtualisation: VDI ..... 42
Vendor Feature: Tintri ..... 43
Vendor Feature: Fujitsu ..... 44
Vendor Feature: Simplivity ..... 46
Tape: The Storage Airbag ..... 48
Tape Storage: Who’s Who? ..... 49
Vendor Feature: Quantum Corporation ..... 50
Tape and Tape Automation: I used to have one of those! ..... 51
Storage Media Guide ..... 52
Vendor Feature: Sony ..... 53
Data Protection: Definitions ..... 54
Vendor Feature: Veeam Software ..... 56
Vendor Feature: FalconStor ..... 57
Deduplication ..... 58
Vendor Feature: Arcserve LLC ..... 60
Data Protection Software: Who’s Who? ..... 62
Vendor Feature: Cloudbyte ..... 63
Public Cloud Storage: Risk Management ..... 64
Glossary of Terms ..... 66
NCE Computer Group recognises all trademarks and company logos. All products and their respective specifications adhere to those outlined by the manufacturer/developer and are correct at the time of publication. NCE will not accept any responsibility for any of the deliverables that fail to meet with the manufacturer/developer specification.
3
Welcome

Welcome to the Little Book of Data Storage. I find myself writing and preparing the content for this edition slightly sooner than I’d anticipated, owing to the popularity of its predecessor and the demand to produce an updated and reworked variant as a result. One year on, and what’s changed? In all honesty, quite a lot - and that was another motive behind rewriting the book as opposed to simply reprinting more copies of the last one. One constant does remain, however: the ongoing challenge that you, the reader and the person tasked with responsibility for storage within your business, are faced with on a daily basis - delivering the required storage capacity & performance within the budgetary and technical constraints that are in place. This is a virtually impossible task. I believe that your challenge is less about managing the storage infrastructure and more about managing the expectation of the business & users - as we are well aware, the assumption in any area of Information Technology & Computing is that there are no boundaries or limitations (just ask my wife or the kids when they can’t access Facebook because the Wi-Fi in our house is slow).
“I believe that your challenge is less about managing the storage infrastructure and more about managing the expectation of the business & users...” We live in the real world, and the objective of this book, from the first edition to now, is to keep your feet firmly on the ground, set realistic expectations and step away from the datasheets and websites that provide “lab-tested” statistics designed to lure us into thinking we too can recreate such performance. Yes, a Bugatti Veyron 16.4 Grand Sport Vitesse can achieve a speed of 254.04mph, however that fact seems of very little relevance if you are driving it on the M25 in a wet Friday rush hour - cynical, but fair.
4
For those of you that have read (and hopefully retained!) previous copies of this publication - thanks for your loyalty, and for those of you that are reading this for the first time - welcome aboard! I am always keen to hear any feedback on the content and any mistakes (yes, they have been made along the way…) that you find. Even after 15 years of writing this publication (the sharp-eyed amongst you will note that I started writing the first edition as I was leaving school - ahem), the book continues to evolve to incorporate the information that you want to know. The pocket-sized format works in many ways, although being informed that it was a perfect replacement for the missing foot on a server rack isn’t a use that I initially intended... I hope that you find the Book of use and that it encourages you to contact the impartial and independent company behind this publication - NCE.
John Greenwood Solution Sales Director - NCE Author of the Little Book of Data Storage Winner of the Storage Magazine “Contribution to Industry” Award 2010 @bookofstorage
5
Understanding I/O

An alarming trend has started to emerge in the storage industry as the traditional approach of simply buying storage to match the capacity required is no longer sufficient. The truth is that not all storage is the same and, arguably, capacity is no longer the most significant factor in the equation. However, when it comes to scoping your requirement, the capacity point is something that you can make a realistic and educated guess at, largely because the tools to capture and simplify this aspect are in place and always have been. If only the same could be said for I/O - the term used to measure performance. I/O (Input/Output) has parachuted in and is suddenly something that every storage vendor dazzles you with when you are looking to buy storage. The irony is that not many of the aforementioned vendors have the tools or ability to actually provide you with an idea of what your I/O profile is today, let alone what it will be tomorrow. They are therefore effectively shifting the risk into your corner and banking their “Get Out of Jail Free” card in case what you buy doesn’t deliver the required performance (I/O operations per second, or “IOPS”). We have seen this happen all too frequently in the past year or two and we are often called in after the event to rescue the situation. So, who can provide a clear and accurate representation of your I/O profile? That’s not an easy question to answer as there are many different layers from which the information can be extracted. You could say the storage is an obvious place to start. If you take a disk shelf and populate it with drives, there’s a good chance that you can associate an I/O expectation with each drive on that shelf. However, you then apply a RAID level to the disk shelf to provide resilience. Depending on the vendor and product, you may then find that some of the disk is reserved for snapshots or to store the file system. You then have connectivity to the outside world, which (in certain cases) can limit the performance of the disk that sits behind it. It may have some cache on the controller that boosts the I/O. It may have a file system that presents the storage to the outside world through its unique gateway.
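To make that drive-level starting point concrete, here is a minimal, hypothetical back-of-the-envelope sketch (in Python) of how a per-drive IOPS expectation and a RAID write penalty combine into a rough front-end figure. The per-drive numbers and penalties below are generic rules of thumb, not any vendor’s specification, and real results will vary with cache, connectivity and workload.

# Rough, illustrative IOPS estimate for a shelf of drives behind RAID.
# The per-drive figures and RAID write penalties are typical rule-of-thumb
# values only, not vendor specifications.

TYPICAL_DRIVE_IOPS = {"7.2k_sata": 80, "10k_sas": 140, "15k_sas": 180, "ssd": 5000}
RAID_WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def estimate_shelf_iops(drive_type, drive_count, raid_level, read_fraction=0.7):
    """Very rough front-end IOPS estimate: raw drive IOPS, reduced by the
    extra back-end writes that the chosen RAID level generates."""
    raw = TYPICAL_DRIVE_IOPS[drive_type] * drive_count
    penalty = RAID_WRITE_PENALTY[raid_level]
    write_fraction = 1 - read_fraction
    # Effective IOPS = raw / (reads at cost 1 + writes at cost 'penalty')
    return raw / (read_fraction + write_fraction * penalty)

if __name__ == "__main__":
    # e.g. 24 x 10k SAS drives in RAID-5 with a 70/30 read/write mix
    print(round(estimate_shelf_iops("10k_sas", 24, "raid5")))

Running it for 24 10k drives in RAID-5 with a 70/30 read/write mix shows how quickly parity overhead erodes the raw number - before the network, the hosts or the users are even in the picture.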
6
Then there’s the network through which the data travels. The bandwidth may be throttled at certain times of the day or night, there may be spikes in demand when applications put a surge of load onto the storage, or it may simply be a network that is already being pushed to its limit. And then there’s the server environment, with poorly scoped or overloaded virtualisation hosts, ill-configured databases and resource-hungry applications. Oh, and let’s not forget the users - those with high expectations and little patience (I count myself as one of them!). Take all of these factors into the equation and you can see the challenge. In truth, although the storage vendors want to achieve the “single pane of glass” that captures and presents all of this useful information in a consolidated and easy to understand format, it isn’t something that they have yet achieved. Consequently, scoping storage with I/O has become aligned with specific areas - VDI (given that it is known to offload the performance demands onto the storage) and the demands of Database Administrators (DBAs) being perhaps the most apparent of the set. Essentially, having an appreciation of the potential performance capability of your existing storage, its utilisation and any latency in it is a good starting point when you are looking to refresh the environment. That information alone will provide a good foundation on which to build and, ultimately, a starting point from which you can counter the vendor’s “how many IOPS do you need?” challenge with an educated answer. It probably won’t surprise you when I state that this is where NCE can help, with a vendor-independent assessment of your I/O profile showing the peaks and troughs of your environment without loading the information to suit a specific product/technology.
7
SSD: What’s the difference?

Our industry loves an acronym and the aptly named SSD (Solid State Drive) market hasn’t let us down, as it has raced in with SLC (Single-Level Cell; 1 bit per cell), MLC (Multi-Level Cell; 2 bits per cell), eMLC (enhanced Multi-Level Cell; 2 bits per cell) and TLC (Triple-Level Cell; 3 bits per cell). I’d hazard a guess that many readers of this book hadn’t known what these stood for until now!
8
SLC
SLC is the luxury item in the SSD portfolio, offering the highest performance with the highest reliability but (unsurprisingly) accompanied by the highest cost.
MLC
MLC sits somewhere in the middle with regard to performance and reliability (when compared to its peers) but is a far more affordable variant and at the time of this book being written it is the market leading SSD “flavour”.
eMLC

eMLC is essentially a variant of MLC that has been optimised for “endurance”: according to the manufacturers, it uses the best/premium 10% of the silicon chips to make up a solid state drive. This means that more Program/Erase (PE) cycles can be tolerated, making it more robust than the MLC offering, but with a cost premium associated with it.

TLC

TLC is viewed as the new kid on the block currently and promises to offer higher capacity at a lower cost. However, the reliability of this is a question that remains unanswered until TLC has a customer base using it for Enterprise Storage purposes which can give a “real world” picture of the true situation. TLC has the ability to store 3 bits of information (8 possible values) per cell instead of the 2 bits (4 possible values) per cell provided by the aforementioned MLC and eMLC. However, in turn, this means that the cells are used more and there is less voltage fault tolerance. Applying voltage to the entire cell multiple times, even though just one bit of information is being changed, can slow down the write speed and causes more wear in general - so there is a trade-off for the increased cell capability.
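As a quick aside, the bits-per-cell arithmetic above falls out of a single formula: a cell storing n bits has to distinguish 2^n voltage states. A tiny illustrative Python snippet (not tied to any particular product) makes the point:

# Bits per cell -> voltage states the NAND must reliably distinguish.
# More states per cell means higher density, but smaller margins between
# states, which is why endurance drops as you move from SLC towards TLC.
for name, bits in [("SLC", 1), ("MLC", 2), ("eMLC", 2), ("TLC", 3)]:
    print(f"{name}: {bits} bit(s) per cell -> {2 ** bits} voltage states")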
9
SSD: Reliability

In some ways it is unfair to compare HDD to SSD but, looking to the future, it is inevitable that SSD represents the biggest threat to the HDD market and, whilst the two complement each other currently, the boundary between them (be it drawn by cost, capacity or performance) will become less defined. However, there are two specific areas where comparisons can be made, and it appears that these are the key areas that make prospective buyers of SSD technology nervous - namely reliability and longevity/lifecycle. Given that end-user surveys continue to tell us that the most important factor in any purchase of storage is reliability (typically followed by price and performance), this aspect has to be taken seriously; simply glossing over it isn’t an option and, in view of our strengths in servicing storage at NCE, it’s a factor that we are very wary of too.
“Longevity/lifecycle of SSD was a question that every SSD vendor wanted to answer even before the question was asked...” When we started to look into SSD as a storage technology it became very apparent, very quickly, from the slides that we were seeing and the order in which they were being presented, that reliability and longevity/lifecycle of SSD was a question that every SSD vendor wanted to answer even before the question was asked. Ironically it was a question that we weren’t that concerned about, but as they all offered us an answer it was one which became more intriguing. Essentially the benchmark is the established hard disk drive (HDD) technology, and the emphasis is on the “mechanical” and “spinning” aspects of this when referred to by the SSD vendors. SSD is, in contrast, a fixed (non-moving) storage medium based on either “enterprise-grade” or “consumer-grade” NAND flash memory that was traditionally designed for digital cameras - and is without doubt used by us all. Does that make SSD more reliable than an HDD? In our opinion, no. HDDs can be repaired; SSDs are replaced - a very expensive approach by comparison.
10
We were astonished that some of the SSD vendors would not offer a support contract with their products, stating that they were so reliable the need for a support contract was simply irrelevant. That tune has changed as prospective customers have responded to say that, even though the vendor may be confident and committed to the “never fails” approach, the customers are not. They have to provide an SLA for their business, and the vendor should, in turn, be prepared to provide this peace of mind. From a hard disk drive perspective, bit error rates are consistent and errors are typically adjacent to each other on a disk platter; in the case of SSD degradation, bit errors are more random across the cells and are typically associated with usage and time. Bit error correction is addressed in a very different fashion when comparing an SSD with an HDD, and this can be a key factor in the longevity of the SSD solution. “Wear levelling” is a term that has been adopted by the SSD market; it attempts to track wear and movement of data across segments by arranging data so that erasures and re-writes are distributed evenly. Be wary of any solution featuring NAND flash memory that doesn’t include wear levelling - it is a fairly critical factor in the reliability equation. There are two key types of wear levelling. The first is pretty much the de facto standard in SSD, namely “static”, which provides longer life expectancy but slower performance. The alternative is “dynamic”, which flips this formula and provides better performance but shorter life expectancy, and is typically found in USB flash based drives. Having talked to someone far more technical than I am on this subject, and whom I trust implicitly, he suggests that the reliability of SSD itself is not so much the issue in the world of Enterprise Storage - it is more a case of how the wrapper (Array) around the SSD itself manages the SSD technology, with a complementary tool/engine (such as the use of NV-RAM) typically playing a big part in this equation.
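To illustrate the idea (and only the idea - this is a deliberately simplified, hypothetical model, not how any particular controller implements it), a wear-levelled write simply steers each new write at the least-worn available block, so that no single block accumulates erase cycles far ahead of the rest:

# Toy wear-levelling sketch: direct each write to the least-worn block,
# so Program/Erase cycles accumulate evenly across the whole device.
class ToyFlash:
    def __init__(self, blocks=8):
        self.erase_counts = [0] * blocks   # P/E cycles seen by each block

    def write(self, data):
        target = min(range(len(self.erase_counts)),
                     key=lambda b: self.erase_counts[b])   # least-worn block
        self.erase_counts[target] += 1     # programming implies an erase cycle
        return target

flash = ToyFlash()
for i in range(20):
    flash.write(f"payload-{i}")
print(flash.erase_counts)   # wear spread evenly, e.g. [3, 3, 3, 3, 2, 2, 2, 2]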
11
SSD Summary

The storage market has continually strived to increase capacity and density while reducing footprint and power consumption, but the challenge in SSD is that both performance and reliability (or “endurance” as it tends to be tagged in the SSD arena) degrade as you increase the number of bits per cell. On the one hand, TLC appears to provide SSD with the opportunity to be seen as a more price-competitive animal when compared to the HDD market in which it will, inevitably, compete. On the other hand, the concerns over reliability may limit its adoption.
SLC - Capacity points (GB): 100, 200, 350, 400, 500, 700, 1000. Cost: High. Performance: Fast. Approximate cycles: 100,000.

MLC - Capacity points (GB): 30, 60, 80, 100, 128, 150, 180, 200, 240, 250, 256, 300, 400, 480, 512, 600, 800, 960, 1200, 1600, 2000. Cost: Middle. Performance: Middle. Approximate cycles: 1,000.

eMLC - Capacity points (GB): 50, 75, 100, 150, 200, 300, 400, 500, 550, 800, 1100, 2000, 2200, 4000, 4800. Cost: Low. Performance: Middle. Approximate cycles: 10,000.

TLC - Capacity points (GB): 120, 250, 500, 750, 1000. Cost: Lower. Performance: Middle. Approximate cycles: 1,000.
The evolution of storage technology continues to amaze us all. The truth is that what was once stored (and in some cases still is) on a hard drive the size of your hand can now be stored on a flash memory card the size of your fingernail. That is the direction which the consumer market is taking: how quickly the business market will move to this model remains to be seen. Regardless of the type of SSD, it is fair to say that the technology in an environment requiring random read performance (databases such as Oracle or SQL, with their demanding I/O profiles, being good examples of this) will blow HDD out of the water. This is where developing a complete understanding of your environment and the storage demands it has can help to justify investment in SSD - but only if you have the ability to then associate the SSD investment with the specific application or area that demands it.
12
Vendor Feature:
Pure Storage

Founded: 2009
Headquarters: Mountain View, California
Portfolio: All Flash Arrays (AFA)

If you are looking for a well-funded emerging company with disruptive technology in the form of Flash based Arrays, then look no further. Recognition in our industry is often measured using the Gartner Magic Quadrant® and, unsurprisingly, the inaugural Magic Quadrant for Solid State Arrays recognised Pure Storage as a leader in this field in August 2014 - notably ahead of two other globally recognised brands with three-letter acronyms in their names, something that without doubt prompted questions behind closed doors. Pure Storage produce All Flash Array (AFA) solutions based on the Multi-Level Cell (MLC) variant of SSD that benefit from inline data reduction (combining compression, de-duplication, pattern removal and thin-provisioning) to deliver performance and capacity at a more competitive price point than others in this market. Connectivity is offered through 8Gb Fibre Channel and/or 10GbE iSCSI. The product, and the Purity Software that manages it, has been designed solely for SSD (and the Flash technology contained within the SSD), meaning that it is at the start of the SSD technology curve rather than being in the middle or near to the end of it.
Pure Storage All-Flash Enterprise Array range.
13
The SSD race: The runners and riders

Flash/Solid State Disk (SSD) technology is turning a few heads in IT, and the cost per GB, with increasing capacity points now emerging, is starting to reach a point where it is being considered as an alternative to the more traditional Hard Disk Drive (HDD) based storage systems. There is no question that SSD is ridiculously faster than HDD technology - that’s an impossible argument for the HDD market to win - but the performance versus capacity criteria and the associated cost premium are why many of the SSD-only warriors are yet to capture the imagination (or budgets) of the mainstream IT estates. Environments and projects that are based upon performance-hungry applications, where I/O is paramount and the associated cost premiums are expected, remain the heartland for flash based storage, but there are signs that if the budget is available (something that the economic recovery has fuelled), the luxury item is preferred. Bringing things back to layman’s terms, there are two specific categories that you can group the SSD Array vendors into - those who focus on performance and those who focus on achieving a balance between cost and performance. So, who are these vendors?
“ There is no question that SSD is ridiculously faster than HDD technology...”
14
The SSD Runners and Riders 2014:
15
A quick (affordable) fix in a Flash

A recent conversation with a customer (who had just standardised on putting SSD technology into the latest investment in laptops for their business) prompted them to state: “The performance that it (SSD) brings makes it worth every penny in my opinion”. It’s true to say that we all appreciate the performance that flash offers, but perhaps the more difficult question is where and how to use it. Simply throwing it into the storage pool and hoping that it will, somehow, find the files, blocks or applications that need it is a very brave game to play. With the right tools (or partner - think NCE at this juncture!) you can identify the files, blocks or applications that are suffering the effects of slow storage, and once they are identified and isolated you can work to provide a solution. This in itself is of value to the business, as all too often the assumption is that the solution is to rip and replace or to upgrade the storage in its entirety. Based on our experience it is often a specific application that is hammering the storage and subsequently obscuring and monopolising the available I/O. Once this culprit has been identified (typically it is in the form of an Oracle or MS SQL database or a VDI environment), the question is what you do with it! You still need resilience and you still need performance. In parallel to this, you may not have been afforded the riches to go and procure a pair of Flash based Arrays - the issue is big, but not big enough to justify that sort of spend. This is where Flash based cards provide a solution.
“ Simply throwing [flash] into the storage pool and hoping that it will, somehow, find the files, blocks or applications that you need is a very brave game to play...”
16
Vendor Feature:
HGST
NASDAQ: WDC
Founded: 2006; acquired by HGST (Hitachi Global Storage Technologies, a Western Digital company) in October 2013
Headquarters: San Jose, California
Portfolio: Flash based performance acceleration PCIe cards

When it comes to product names, the FlashMAX II is up there with the best of them! This Server Side Flash Storage Solution is the flagship product in the Virident / HGST portfolio: a small half-height, half-length PCIe card offering compatibility with all servers and providing high performance & high capacity (with the supported Flash SSD capacities ranging from 550GB to 4.8TB). Complemented by the vFAS (Virident Flash management with Adaptive Scheduler) Software suite, delivering memory-like access to data with low latency and performance consistency, it is easy to deploy and perfectly suited to tackle the aforementioned performance-sapping applications that are such a thorn in the side of Storage Administrators. In parallel to the product, you have the name Virident - now part of the Hitachi Global Storage Technologies (HGST) division of Western Digital (WD), with their $15bn empire in the equation. In keeping with our horse racing theme on these SSD pages, it is safe to say that the Virident / HGST solution is a very safe bet!
17
Hybrids

Different vendors approach flash/SSD technology in different ways and it is essential to understand how they use the technology and where it sits before you can start comparing their respective solutions. Putting flash/SSD technology behind a storage controller that has been designed to push and pull data from a spinning disk can introduce a bottleneck that limits the true performance of flash/SSD, especially when the controller is shared with a spinning disk environment and not dedicated to flash/SSD. However, if you have a complementary software layer that benefits from Automated Storage Tiering (AST), it is possible that the differentiation of data and the performance it demands is a good fit for the underlying Hybrid storage hardware. The added benefit of this model is in the cost v capacity argument, with hybrids able to offer the capacity of “traditional” hard disk drives combined with the blistering speed of solid state disk at a more realistic price point (albeit with a performance trade-off) when compared to the All Flash Array (AFA). However, a few words of caution: some of the products offered in this arena are not as flexible as you perceive them to be and are limited in the configurations that they support. For example, you may find that caveats and small print only crop up after you’ve invested in the technology, with some of the solutions unable to offer support for the variations or configurations of SSD / SFF HDD / LFF HDD (15k/10k/7.2k) that the hybrid vendor datasheet or website may represent. A word with NCE on this subject is advised!
“ You may find that caveats and small print will only crop up after you’ve invested in the technology... ”
18
19
Vendor Feature:
Tegile
Founded: 2009
Headquarters: Newark, California, USA
Portfolio: Intelligent Flash Storage & Hybrid Arrays

Let’s face it, if you were to turn the topic of conversation at a dinner party to that of “Managing Metadata efficiently” you’d struggle to keep the audience for any longer than the time it takes to write data to flash. I have promised from the outset to keep writing this book in “real world terms”, so let’s try and tackle this one together, on the basis that it is a key differentiator in the Tegile value proposition.
“ She may not realise it, but Mrs G represents the Tegile IntelliFlash Metadata Acceleration technology in this example...”
Traditionally in storage, metadata and data are stored collectively, and as a result they are distributed (or “scattered”) throughout the available storage. The lifecycle of the data sees it deleted, amended or over-written. Reclamation and reorganisation of the data and metadata is typically ignored or overlooked because of this aggregated approach. Consequently, the metadata is left in a rather unstructured and disorganised state.
I’d liken it to my (almost) teenage son’s bedroom floor. Now introduce my wife/his mum into the equation (stick with me on this) who comes in and tidies things up, organising them (indexing them) so they can be easily found. She may not realise it, but Mrs G represents the Tegile IntelliFlash Metadata Acceleration technology in this example, and my almost-teenage son represents all of the other storage vendors. In an environment where performance is key (no - not his bedroom, we’re back in the SSD arena again!), this feature alone provides Tegile with a competitive edge. Tegile offer a transparent, feature-rich hybrid or flash-only storage target with a choice of Fibre Channel or iSCSI (block) and NFS, CIFS or SMB (file) presentation, data deduplication & compression (across both SSD and HDD), thin-provisioning, snapshots and remote replication.
20
Vendor Feature:
Nexsan
NASDAQ: IMN
Founded: 1999 (in Derby, UK)
Headquarters: Oakdale, Minnesota
Portfolio: High density storage technology

When hybrid storage first emerged as a solution it was inevitable that Nexsan, a company with an established name and reputation for producing reliable and efficient storage subsystems, would be active in this market. Having scored huge success with their “Beast” in the Noughties, the Nexsan brand name was not unfamiliar to many in the data centre. This brand recognition was further complemented by the Imation name that appeared above the door when the $1.5bn giant acquired Nexsan late in 2012. However, whilst continuing to strengthen their portfolio of DAS products with the E-Series (again offering excellent storage density), in parallel the NST range has been capturing increased attention and business for Nexsan. The most recent addition to the NST family is the NST4000, a unified block (SAN) and file (NAS) solution offering iSCSI, NFS, CIFS, FTP and Fibre Channel presentation. The NestOS management software leverages the FASTier (“secret sauce”) DRAM caching technology, SSD and processing power within the NST hardware to provide optimum performance. Each NST4000 can non-disruptively scale to a capacity of 2.1PB, which may sound a ridiculous capacity point, but let’s not forget that when we started producing this book 12 years ago a Terabyte seemed out of reach to many of us! Inline compression (which can be configured on a per-volume basis) takes place “on the fly”, where the data (file or block) is compressed prior to being written to the RAID volume, shrinking the total storage capacity requirements and improving storage system performance. For more information on the NST4000, or any products from the Nexsan portfolio (new or old!), please contact NCE in the first instance.
21
Software Virtualisation

Hands up! Who knows what Virtualisation is? Hmmm, yes, it seems that a lot more of you have become familiar with this since we first made mention of it back in a previous edition of the Little Book! The new generation has great confidence in something that seemed like black magic a decade ago. It appears that the terminology that virtualisation has brought to the party is something that we’ve all become accustomed to. Thin-provisioning, failover, snapshot, virtual machines, hypervisor, replication, mirroring etc. are all terms on which we are rarely challenged when using them in face to face meetings or on conference calls. Therefore the bar has been raised and additional (new) features are what the captive audience is waiting for. No doubt you will be aware that we currently have three dominant players at the server hypervisor layer - namely Citrix (with XenServer), Microsoft (with Hyper-V) and VMware (with vSphere), all with different strengths and stories to tell. We’ll skate over the detail on this as this book is about storage as opposed to server virtualisation. From a storage hypervisor perspective, the choice seems limited. The challenge is that most vendors, largely to save themselves a lot of work, offer a self-certified hardware appliance which takes away the dynamic (and, they would argue, the certification) of using the wide variety of storage hardware that is available today (something this publication has hopefully helped to increase your awareness of!). This appliance has a software engine pre-installed (typically running on a Linux kernel) which can offer the storage hypervisor capabilities in association with the underlying storage hardware. In some cases, this is the perfect solution for the customer (and we recognise this) as, by being a pre-configured and singly supported product, it can meet the specific deliverables that you are looking to achieve. In other cases, a solution that protects the storage investment that has already been made, but provides the flexibility to add capacity, resilience and performance into the equation, can be preferential. It’s not a case of one size fitting all requirements, and this is where NCE can help to position the options that are available.
22
Vendor Feature
DataCore
Founded: 1998
Headquarters: Fort Lauderdale, Florida
Portfolio: SANsymphony-V Storage Virtualisation Software, the Storage Hypervisor

The key differentiator for DataCore when compared to the appliance-based alternatives is that the company are focused purely on producing software. Critically and uniquely, it is a software platform that is vendor independent and storage agnostic. This business platform offers huge flexibility and total scalability, and one striking factor of the DataCore customer base is the longevity of their use of the software - many of them have been customers for many years and have scaled the product as and when their environment has demanded the increase in capacity or performance. One NCE NHS Trust customer describes DataCore as the “Lifeblood to every IT decision they take.”

The latest generation, SANsymphony-V, was launched in 2011 and represented a complete overhaul of the DataCore software stack. It runs on standard x86 based server hardware. Perhaps the jewel in the crown of SANsymphony-V is the ability that it has to monitor I/O behaviour, determining frequency of use, and then dynamically moving storage blocks to the appropriate Storage Tier - be that SSD, SAS HDD, SATA HDD or even out to the Cloud: true Auto-Tiering. The product also has one of the best storage reporting and monitoring engines in the business, producing real-time performance charts & graphs, heat-maps and trends - something that represents huge value to anyone tasked with managing storage. Features including Thin-provisioning, Replication, Load balancing, Advanced Site Recovery and Centralised Management are integral to the product. Overall, a solution that continues to capture the attention of those tasked with Storage Consolidation and Cost Efficiency.
23
Auto-Tiering
Auto-Tiering or Caching? That is the question

It is fair to say that vendor salesmen and marketeers alike are guilty of labelling things incorrectly to suit their agenda, and this is one of the “grey areas” that all too often falls victim to this statement. Perhaps defining what each is will help you, the reader, to differentiate one from the other and then come to your own conclusions on whether the vendor can, or cannot, achieve what you require. Auto-Tiering (AKA Automated Storage Tiering or Automated Tiered Storage) is the dynamic movement of data blocks up (“hot” blocks) or down (“cold” blocks) across different storage tiers (types), taking into account the profile of the data and matching it against specific criteria (such as type, age/access frequency, size, owner). It is a process that requires intelligence to match the criteria to the available storage type and manage the use and availability of the underlying storage dynamically. The block is actively moved between storage tiers and does not reside on multiple storage tiers. Auto-Tiering allows you to scale to add performance or capacity and benefit from the true performance of the underlying investment. Caching is an intermediate store (“buffer”) of files or blocks typically used to reduce latency and provide a performance boost for the front-end application. In the majority of cases a solution that comprises SSD and HDD technology uses a caching process that leverages the performance of SSD (capacity permitting) to mask the fundamental performance shortcomings of disk-based storage. Most provide a solution where the data lives on the hard disk drive and is temporarily mirrored to the SSD cache as needed to accelerate the underlying disk for read or write purposes. This is effectively like putting a turbocharger (and go-faster stripes!) onto your car. Is there a place for both Auto-Tiering and Caching? Absolutely - however, the two should not be confused, and some vendors have a habit of implying that they are exactly the same thing, typically because they can’t offer one or the other.
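To make the distinction concrete, here is a deliberately simplified, hypothetical sketch in Python (not modelled on any vendor’s implementation): the tiered store moves a block so that it lives on exactly one tier, while the cached store keeps the authoritative copy on HDD and merely copies hot blocks into an SSD buffer.

# Illustrative contrast between auto-tiering and caching (toy model only).

class TieredStore:
    """Auto-tiering: a block lives on exactly one tier and is MOVED
    between tiers based on how 'hot' it is."""
    def __init__(self):
        self.tiers = {"ssd": set(), "sas": set(), "sata": set()}

    def place(self, block, heat):
        tier = "ssd" if heat > 100 else "sas" if heat > 10 else "sata"
        for t in self.tiers.values():      # remove from wherever it lives now
            t.discard(block)
        self.tiers[tier].add(block)        # ...and move it to the right tier

class CachedStore:
    """Caching: every block lives on HDD; hot blocks are additionally
    COPIED into an SSD buffer to hide HDD latency."""
    def __init__(self, cache_size=2):
        self.hdd, self.ssd_cache, self.cache_size = set(), [], cache_size

    def read(self, block):
        self.hdd.add(block)                # the authoritative copy stays on disk
        if block in self.ssd_cache:
            return "hit (served from SSD cache)"
        self.ssd_cache.append(block)       # populate the cache, evicting oldest
        if len(self.ssd_cache) > self.cache_size:
            self.ssd_cache.pop(0)
        return "miss (served from HDD, now cached)"

The practical consequence is the one described above: a cache only ever accelerates what happens to fit in the buffer, whereas tiering changes where the data actually resides.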
24
Vendor Feature:
Dot Hill
Founded: 1984
NASDAQ: HILL
Headquarters: Longmont, Colorado
Portfolio: Storage Array Developer & Manufacturer

When a long-standing name with over 500,000 systems installed worldwide announces a new product in their portfolio, you hope that it is going to have been well thought out and a potential market-leader. Dot Hill didn’t disappoint on that expectation when they announced the AssuredSAN Pro 5000, an intelligent hybrid solution combining SSD with disk in their robust and proven architecture. But the true attention-grabber was the RealStor software that accompanied this technology, a suite that not only matched the features that other hybrids proclaimed to offer, but raised the bar to a new level. This had been developed from the ground up and, rather than trying to clone or recreate others’ approaches to storage efficiency and management, Dot Hill came to the party with a new approach to the challenges. There was one specific feature within RealStor that was a game changer - RealTier, with real-time storage tiering. Nobody had challenged those that offered “Auto-Tiering” on the frequency of the movement of data in their Auto-Tiered solutions until now. Essentially the RealTier approach involved scoring, scanning and sorting each block (or page) in real time. The assumption was that all of the solutions on the market offering Auto-Tiering features did this, but the reality was that most of them conducted the tiering process (moving hot blocks “up” and cold blocks “down”) in an automated fashion but one that was anything up to 24 hours after the data blocks arrived at the storage. It was (and still is in most cases) a well-masked back-end batch processing operation. Running Auto-Tiering in real time means that the storage tiers are used efficiently and the load that is typically placed onto a landing area in other solutions is negated. Features including thin-provisioning, remote replication and snapshotting are also included as standard on the AssuredSAN Pro 5000, and with support for a whole host of SSD and HDD capacity and spin speed variations, there’s a (real-time auto-tiered) solution to meet your needs in the Dot Hill range.
25
NCE Professional Services

I have had many a conversation with a vendor where their frustration is that their business doesn’t own the Intellectual Property (IP) that they promote and sell. Their destiny is, largely, out of their control. They are reliant on others. One of the key differentiators of NCE is that our IP is very much our own. Fundamentally this is because we don’t sell a product; our business is based on providing solutions and the services that customers require to accompany them. Customer demand has been the catalyst for a wide variety of the services that we now offer, and this continues to be the case - thankfully, because we are not a “no” company and we have always been open to ideas, suggestions and opportunities. And yes, it is fair to say that some of those more obscure ideas, suggestions and opportunities have evolved into services that now feature in our portfolio. There are typically two key limitations in most IT environments that drive the need for Professional Services - having the time, and having the right tools or skills, to do what is required. Subsequently it is our objective to ensure that we have the resource available that you require, with the right skills and tools to meet your business need. That said, we work in an industry where no two customers have the same environment and as a result we have to ensure that our personnel are multi-skilled and can access the information and support required whilst on-site working with you. Thankfully our pedigree as a support organisation offering “follow the sun” helpdesk services means that our skillset is a global one rather than just localised resource.

Contact NCE:
Call: +44 (0)1249 813666
info@nceeurope.com
www.nceeurope.com
26
So, what can we do? Many of the Professional Services that we offer result from customer requests and engagements, so please don’t let us take the credit for any that you see and think “Oh, that’s a good idea…”

Holiday/Absence cover - On-site
People with the right storage skills can be hard to find, and it can be a risk to have someone monitoring and managing your storage environment without the right skills, even if it is only for a week or two. NCE can provide that peace of mind, with the correct skills to meet the business need.
On-site Storage Assessment & Health Check
There’s a lot to be said for an independent pair of eyes looking at your storage infrastructure. NCE can provide skilled personnel to come to site and identify storage bottlenecks or hot spots, signs of impending failures, firmware mismatches, or any vendor recommendations or modifications in line with their best practices. A summary report of our findings is also an optional part of this service.

Remote Storage Assessment & Health Check
This is one of our more popular services. Subject to the relevant access privileges being provided, we can remotely dial in to access and monitor your storage estate (typically on a monthly or quarterly basis) and provide a summary email outlining our findings and any areas of concern.

Storage Capacity and/or Performance Assessment
The holy grail for the majority of customers that we talk to is to gain some appreciation and understanding of their capacity growth and performance. We have the tools to monitor this and, over a period of time, provide an accurate representation of this information.

Storage Migration & Decommissioning Service
It’s no secret that storage vendors don’t like you to move away from their technology, especially if you are moving to their competitor’s product, and on this basis they can make the transition and migration a very difficult process.
Thankfully this is where NCE, as a vendor-neutral organisation, can help. We can migrate the data, conduct a secure and certified data destruction service on the legacy storage, and we may even offer you a buy-back value for your old equipment too!

Storage Installation & Configuration Service
We have been engaged on numerous occasions where the partner or vendor has sold some hardware &/or software and subsequently not had the ability to configure or install it. If you find yourself in this situation then please contact NCE to see if we can help.

Storage Disaster Recovery/Failover Verification Test
Another service born from a customer request! If you have a DR site or setup that needs to be tested and proven then NCE can assist and independently document/verify that the Disaster Recovery environment is correctly configured and can/will act as a failover if required.

Solution Training & Overview Service
Time is something that you and your team aren’t always given, especially when training is required. We have been told that vendor training courses have a habit of spending 75-80% of the time teaching people what they already know or information that is irrelevant to their environment. We can tailor a bespoke, on-site training session that covers the relevant information and skills required by your team.
27
Storage Connectivity

Data can travel to storage devices in different ways. Probably the best comparison is that of the road network that we travel upon. Some would say that the toll road of storage is Fibre Channel as, typically, it carries a premium but (as with most toll roads!) there’s less traffic on it and you have a smoother journey. The main alternative is iSCSI, which travels along the established Ethernet highway - perhaps more representative of the road that most people use, with occasional traffic congestion and bottlenecks, but not costing anywhere near as much to use as the toll road. One consideration in the analogy above is that your means of transport needs to be allowed on the road in order to travel upon it. Some storage supports only one or the other protocol, so in some cases you have to travel on the toll road and you don’t have the choice to take the more affordable option. Other storage offers you the choice. And then there’s a new concept that those stuck in the queue on the Ethernet highway are watching and wondering if they should invest in it: Fibre Channel over Ethernet (FCoE). Perhaps this should be seen as the car pool lane as it travels on the Ethernet highway, but you have to have the qualifying vehicle to use it. Which is the best for you? This is largely dependent on which storage you will be using to achieve this connectivity; the last thing you want
28
to do is to introduce a performance bottleneck into the equation. At the back-end (behind the scenes, as it were) of the storage infrastructure you’ll also find that, in addition to the aforementioned protocols, a few others come into the equation. SAS (Serial Attached SCSI), with 6Gbit/s interfaces now featuring on storage technology, has aided performance to the device. High Performance Computing (HPC) has also seen a surge in the use of InfiniBand, with serial links operating at one of five data rates: single data rate (SDR), double data rate (DDR), quad data rate (QDR), fourteen data rate (FDR) and enhanced data rate (EDR). These provide our industry with the reason to adopt five more acronyms!
Common Storage Interfaces and Gigabit per second ratings:

SAS: 3Gbit/s, 6Gbit/s, 12Gbit/s*
iSCSI/Ethernet: 1Gbit/s, 10Gbit/s, 40Gbit/s*, 100Gbit/s*
Fibre Channel: 4Gbit/s, 8Gbit/s, 16Gbit/s, 32Gbit/s

*Denotes future release of technology at time of Little Book publication
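As a rough guide to what those line rates mean in practice, dividing the Gbit/s figure by eight gives a theoretical ceiling in bytes per second; the short hypothetical Python snippet below illustrates it (real-world throughput is lower once protocol and encoding overheads are taken into account).

# Theoretical ceiling only: line rate in Gbit/s divided by 8 bits per byte.
# Encoding and protocol overheads mean real-world throughput is lower.
for gbit in (1, 4, 6, 8, 10, 16):
    print(f"{gbit} Gbit/s ~= {gbit / 8:.2f} GB/s ({gbit * 1000 / 8:.0f} MB/s) before overheads")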
Who’s Who of Connectivity
29
Vendor Feature:
QLogic
NASDAQ: QLGC
Founded: 1992
Headquarters: Aliso Viejo, California
Portfolio: High performance server & storage networking connectivity products

NCE are a long-standing partner of QLogic, having provided four generations of their Fibre Channel Host Bus Adapter (HBA) and scalable switch (SANbox) technology as an integral part of the connectivity solutions that we have provided. A more recent addition to the QLogic portfolio is the 10000 Series “FabricCache” Adapter. This provides a quick-fix solution to those of you suffering from storage latency and performance problems by introducing a transparent cache into the fabric. So what is it exactly? It’s an acceleration tool (using a combination of SSD, DDR3 and nvSRAM) which amalgamates the QLogic strengths and pedigree in the Host Bus Adapter (HBA) and Storage Router markets. It appears as a normal QLogic 8Gb Fibre Channel HBA (meaning that no additional drivers are required), albeit commanding 2 x PCIe ports within the server as opposed to the one that an HBA would typically need, and provides an aggregated read booster (that’s my way of expressing it rather than using any official terminology!). On the occasions when you implement multiple FabricCache Adapters you can cluster them to provide a transparent cache with heartbeat verification across the estate. Other approaches to this (I/O cards) typically shift the processing demands back onto the CPU in the server, but the FabricCache solution doesn’t add overhead to the server into which it is integrated as it addresses the workload itself. I/O-intense applications like VDI, Oracle RAC, SQL and Exchange will visibly benefit from the use of FabricCache technology. The Case Study on the opposite page represents one such example of this.
30
QLogic Customer Story:
Bolton NHS Foundation Trust The Challenge: An aging SQL cluster spread across slow physical systems was in desperate need of a performance increase to accelerate Royal Bolton Hospital’s largest SQL data warehousing operations. The SQL systems are critical to Royal Bolton; they provide accurate and ongoing financial reporting used for calculating payments through results based on imported patient data. The Solution: The QLogic FabricCache 10000 Series Adapter was used as a simple slot-in solution, delivering Royal Bolton’s critical hot application data closer to the point of processing and thereby dramatically reducing the time to data through high performance clustered caching. The Results: The existing solution was accelerated with an over 60 percent reduction in SQL transaction time. Reports that previously took 5.5 hours were reduced to a little over two hours with no licensing or administration costs, and no additional hardware or management overhead. In addition, the FabricCache solution that NCE provided was totally agnostic to the storage hardware. Customer quote: “We now have both FabricCache cards running on a Dual node Windows 2008 R2 Clustered SQL 2008 environment. It’s incredibly easy to use and manage. We didn’t do anything apart from inserting the cards. No changes were made to native drivers or to the configuration. Once in situ, we simply specified the problem LUNs to target through a few simple clicks. The management GUI remains the same as all of the other QLogic adapters and is therefore incredibly familiar to us.”
31
Storage Interface Guide

SCSI connections
50 Pin Centronics (SCSI 1)
50 Pin High Density (SCSI 2)
68 Pin High Density (SCSI 3)
68 Pin Very High Density VHDCI (SCSI 5)
80 Pin SCA
“Serial Attachment Interface” for SAS & SATA Connectivity
USB (Universal Serial Bus) connections
32
USB Type-A connection
Mini USB-A
USB Type-B connection
Mini USB-B
Micro-A
Micro-B
Fibre connection SC Connector
ESCON Connector
ST Connector
FDDI Connector
LC Connector
SFP/SFP+ Connector
Commonly used transceivers SFP
850nm (short range) 150m at 4.25 Gbit/s (FC)
1310nm (long range) 40km at 1.25Gbit/s (FC)
SFP+
850nm (short range) 300m at 10.0 Gbit/s (FC)
1310nm (long range) 10km at 8.0 Gbit/s (FC)
For SCSI, SAS or Fibre Channel cables, terminators, GBICs or any other consumables please don’t hesitate to contact NCE: +44 (0)1249 813666
33
Hard Disk Drives (HDD)

Contrary to the rumours, customers are still buying HDDs and it is still very much the storage medium of choice, representing an excellent cost/capacity formula. It dominates the (largest) middle tier of the storage environment in any datacentre and certainly isn’t about to give up that title anytime soon.
“ I’m trying not to show my age here but the simple comparison to make is with that of a traditional record player...”
Let’s not forget that the technology has been around for over fifty years, so it’s fair to say that it doesn’t need to prove its pedigree in the storage arena. This $30bn market encompasses a wide variety of data storage purposes, with many of us using HDDs on a daily basis when we record or play back TV, save our progress on a games console or key in our destination on our satellite navigation system.
Unquestionably the aforementioned flash technology has encroached on the HDD market, with portable devices such as MP3 players and phones embracing its smaller size and lack of moving parts - attractions perfectly suited to the mobile arena. But economies of scale with mass production and consumer demand, along with the capacity v cost formula, have, at this point, led to the Hard Disk Drive remaining the dominant player. If you open up a hard disk drive - and believe me, we do this a lot at NCE - the first thing that you’ll see is the circular “platter” onto which the data is written and from which it is read; this is supported by a spindle at the centre onto which the platter is loaded. The platter will spin at a speed measured in revolutions per minute (rpm) - typically between 5,400rpm and 15,000rpm on current drive technology.
34
In a Hard Disk Drive there is a spindle motor and an actuator. The actuator (consisting of a permanent magnet and a coil) passes current through the coil to create movement, depending on the direction of the current. Power connectors, jumper blocks and connectivity interfaces then feature on the back of the drive. I’m trying not to show my age here, but the simple comparison to make is with that of a traditional record player (I know that this will isolate some of our readers, who will have to Google what I am referring to at this point!), with the vinyl record being the platter and the stylus being the head - although the HDD’s heads do not touch the surface; they fly just above it. Also, it isn’t one continuous track like a record, but thousands of circular tracks on each surface, made up of sectors (blocks). That comparison massively dumbs down a Hard Disk Drive, but it works for me… So how exactly is our data stored on this mechanical contraption!? I’ll try and keep it simple (and physics wasn’t something I excelled at, believe me!): essentially it’s a magnetic recording process. Changes in the direction of magnetic patterns and sequences are encoded to form binary data bits. The head plays a key role in this process by translating what is written to and read from the platters that are being spun at high speeds. Platters are usually made from a non-magnetic material, for example aluminium or glass, and are coated in a thin layer of magnetic material. All of this sounds very meticulous with little margin for error, especially when you consider the size of a Hard Disk Drive. And, thankfully, the technology incorporates exactly that - margin for error, in the form of the Error Correction Codes (ECC) that feature to allow for bad sectors on a platter. The use of this information can be vital in foreseeing any potential drive failures that can become apparent through excessive wear, drive contamination or simply poor manufacturing and quality.
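Purely as an illustration of the tracks-and-sectors picture above (with made-up, hypothetical geometry figures rather than any real drive’s specification), total capacity is simply the product of surfaces, tracks per surface, sectors per track and bytes per sector:

# Hypothetical geometry, purely to illustrate how tracks and sectors add up.
surfaces = 8                  # platter surfaces with a read/write head
tracks_per_surface = 250_000  # concentric tracks on each surface
sectors_per_track = 1_500     # average sectors (blocks) per track
bytes_per_sector = 512        # the traditional sector size

capacity_bytes = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
print(f"{capacity_bytes / 1e12:.2f} TB")   # ~1.54 TB for these made-up numbers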
35
RAID: The Levels

I’ve faced some challenges in my life, but trying to make RAID levels an interesting subject has to be up there with the best of them. So, ride with me on this one and we’ll get through it together! You never know, between us we may find this knowledge useful somewhere down the line…. So, let’s focus on the term itself: RAID, meaning Redundant Array of Independent (or, previously, Inexpensive) Disks. The key word in the whole phrase is “Redundant”, as this implies that a failure can occur and the disk array will still remain operational. Although we know that RAID 50 is a RAID level offered in the storage industry, I am thankful to say that there aren’t fifty RAID levels to be covered in this section. In truth there are only a few that are typically used or offered by RAID manufacturers today, and some manufacturers have their own exclusive RAID levels (NetApp, for example, in the HDD environment with RAID-DP, or Pure Storage in the SSD environment with RAID-3D). Here’s a snapshot of what each conventional and non-exclusive RAID level provides:
36
RAID Level      Main Feature                        Parity
RAID-0          Block-Level striping                No
RAID-1          Mirroring                           No
RAID-2          Bit-Level striping                  Dedicated Parity (on a single drive)
RAID-3          Byte-Level striping                 Dedicated Parity (on a single drive)
RAID-4          Block-Level striping                Dedicated Parity (on a single drive)
RAID-5          Block-Level striping                Distributed Parity (can tolerate one drive failure)
RAID-6          Block-Level striping                Double Distributed Parity (can tolerate two drive failures)
RAID-10 (1+0)   Mirroring + Block-Level striping    No
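As a rough companion to the table, the trade-off each level makes between protection and usable space can be reduced to a few lines of arithmetic. This is an illustrative sketch only (a hypothetical function and figures; real arrays also reserve space for hot spares, metadata and so on):

# Illustrative usable-capacity arithmetic for common RAID levels.
# Real arrays reserve additional space for hot spares, metadata, etc.

def usable_capacity_tb(raid_level, drive_count, drive_size_tb):
    if raid_level == "RAID-0":
        data_drives = drive_count            # striping only, no redundancy
    elif raid_level in ("RAID-1", "RAID-10"):
        data_drives = drive_count / 2        # every block is mirrored
    elif raid_level in ("RAID-3", "RAID-4", "RAID-5"):
        data_drives = drive_count - 1        # one drive's worth of parity
    elif raid_level == "RAID-6":
        data_drives = drive_count - 2        # two drives' worth of parity
    else:
        raise ValueError("level not covered in this sketch")
    return data_drives * drive_size_tb

# e.g. a 12-bay shelf of 4TB drives
for level in ("RAID-0", "RAID-5", "RAID-6", "RAID-10"):
    print(level, usable_capacity_tb(level, 12, 4), "TB usable")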
RAID: Who’s Who?
37
Vendor Feature:
DotHill
NASDAQ: HILL Founded: 1984 Headquarters: Longmont, Colorado, USA Portfolio: Storage Array Developer & Manufacturer Achieving storage density is a well-trodden path, with the challenges far bigger than simply how many drives you can fit in the physical space available. There are other considerations; cooling of the drives (with operating temperatures paramount to reliability) and vibration being the most obvious of the set. Tiered storage solutions with a unified and scalable architecture have also reset the rulebook, especially when applying the performance drive (2.5” Small Form Factor) and the capacity drive (3.5” Large Form Factor) to this formula. Dot Hill already had a scalable solution in their 24 bay, 2U SFF architecture and their 12 bay, 2U LFF architecture, but although this long-standing and well-established footprint served its purpose, the innovative minds behind these pioneering designs had ambitions that would once again set the standard for others to follow.
Early in 2014 Dot Hill unveiled the Ultra 48 - part of the 4004 Series, providing incredible storage density with support for 48 of the SFF (2.5”) drives in a mere 2U of rack space. This succeeded in “turning heads” in the storage industry; offering resilience (with dual “active-active” controllers), a wide variety of drive options - with support for SSD, 15k, 10k and 7.2k rpm drives - and flexible connectivity through the interchangeable ports of the controllers (allowing Fibre Channel, iSCSI and SAS presentation to the host or fabric).
38
This has been followed by the Ultra 56, again part of the 4004 Series, supporting 56 LFF (3.5”) drives in a 4U rack architecture. By combining the Ultra 48 & Ultra 56 (interconnected through SAS ports) you have a solution that supports 104 drives in 6U, marrying performance with capacity - a Unified Solution for Tiered Storage requirements.
The 4004 Series, as with the other members of the AssuredSAN portfolio, also has the capability to provide Snapshots (supporting up to 1000 snapshots with AssuredSnap), Mirrors (supporting up to 1024 volume copies with AssuredCopy) and Remote Replication (with the AssuredRemote feature), all at the Controller layer and without the need for any additional hardware.
39
Roadmaps for HDD Gaining access to this information (and more importantly the accuracy of what you are then told) is perhaps one of the biggest challenges in the storage industry. We tend to believe what we’ve seen and what is actually published openly rather than the vapourware that can feature heavily on corporate slide decks. On that basis, here’s what we can categorically say exists and will exist from a HDD perspective:
Drive Capacity roadmap (2.5” Small Form Factor and 3.5” Large Form Factor drives, in SATA and SAS variants): capacity points run from 146GB, 300GB, 450GB, 500GB, 600GB, 900GB and 1TB through 1.2TB, 1.5TB, 2TB, 3TB, 4TB, 6TB and 8TB up to 10TB, with spin speeds of 5,400, 7,200, 10,000 and 15,000 rpm depending on the capacity point, interface and form factor.
You will note that there are two key variants of Hard Disk Drive - the 2.5” Small Form Factor (SFF) drive and the 3.5” Large Form Factor (LFF) drive. There has been a significant shift towards the SFF drive in the past few years as capacities have increased and server and storage array manufacturers have integrated the smaller drives into their portfolios.
40
The LFF drive continues to have the capacity edge and subsequently we have customers that are using a mix of the two, with SFF drives/arrays serving their mid-tier performance requirements (typically with the real SAS drives - see the “Sheep in Wolf’s Clothing” article for details) and the LFF drives/arrays serving their low-tier capacity requirements (typically with the SATA drives). It is also worth mentioning that the spin speed (rpm) of a drive does not translate directly into an equivalent performance advantage when comparing SFF with LFF technology - because the Large Form Factor drive has more real estate to cover, a 15,000 rpm LFF drive (at the 600GB capacity point, for example) can be rivalled for performance by the Small Form Factor 10,000 rpm variant, which has a smaller platter to address. A notable arrival and addition to this section is the “Helium-filled” drive technology (I can’t confirm that this has the ability to make your Hard Disk Drive float into the sky or make sounds like it is talking with a squeaky voice!) where the air inside the drive is replaced with helium, resulting in the drive’s power consumption and operating temperature being lowered and, in turn, allowing for higher density storage.
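A quick back-of-the-envelope calculation shows where the rpm figure does and doesn’t help: average rotational latency is simply half a revolution, so it falls straight out of the spin speed, but it is only one ingredient of overall access time - seek distance (which favours the smaller platter) and the drive’s caching and queueing tricks matter just as much, which is why rpm alone doesn’t settle the argument. A small sketch of the rotational-latency part:

```python
def avg_rotational_latency_ms(rpm):
    # On average the head waits half a revolution for the right sector to come around
    return (60_000 / rpm) / 2

for rpm in (5_400, 7_200, 10_000, 15_000):
    print(rpm, "rpm ->", round(avg_rotational_latency_ms(rpm), 2), "ms")
# 5,400 rpm -> 5.56 ms, 7,200 rpm -> 4.17 ms, 10,000 rpm -> 3.0 ms, 15,000 rpm -> 2.0 ms
```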
HDD Manufacturers
41
Software Virtualisation: VDI Users; wonderful things aren’t they? Without them life would be so much easier, but without them it is highly likely that the roles we serve for the organisation would no longer be required. And, let’s be honest here, we are users too, so perhaps we should be a little more accommodating and appreciative of what they want. One challenge that they bring to the party is that they now require access to their “desktop” anywhere, everywhere and on a bewildering array of devices. The traditional desktop PC model and approach is dated and now way behind the technology curve. This, along with the visionary “marketecture” of some of the leaders in virtualisation, has fuelled the fire of VDI. The current generation of user expects an infrastructure that allows them to Bring Your Own Device (BYOD) - something that has been associated with the rise of smartphone and tablet computing and the “data anywhere” expectation that they enable. We equally have the evolution and maturity of the Working from Home culture, with the benefit of increased productivity that comes from adopting this model. The savings in travel time and office space (and associated costs such as heating, power etc), and overcoming the negativity bred by marginal illness and times of bad weather where travel is impossible (or deemed to be impossible), all play a part in this formula for the business. Equally there is the Mobile Workforce - again bringing business benefits by maintaining user productivity whilst out of the office, enabling users to interact with systems in real time and synchronise data efficiently. In truth the challenge for many organisations has been less one of creating a business case for VDI and more one of providing the funding, manpower and underlying infrastructure to run such an environment. One aspect that can easily be overlooked when being attracted like a moth to the VDI light bulb is the horsepower required to run a VDI infrastructure. From an NCE perspective we appreciate the importance of this, especially when Storage is brought into the conversation. In all fairness this has triggered interest in many of the hybrid (SSD/HDD) solutions that have emerged in the industry, and a number of the vendors in this arena focus their messaging around integration and delivering performance for VDI. Our alliances and partnerships allow us to design, implement and support VDI solutions - please contact us if you are looking for a Virtual Desktop Infrastructure partner.
42
Vendor Feature:
Tintri
Founded: 2008 Headquarters: Mountain View, California Portfolio: Hybrid Arrays for Virtual Environments Founded by a former Executive VP of Engineering at VMware, it is no surprise that the Tintri message is aimed at the Virtual Server and Desktop audience, with the tagline “VM-aware Smart Storage” coined on many of their corporate slide decks. The Tintri technology stack moves away from the more traditional approach to storage (think LUNs and volumes), using VMs and Virtual Disks to mask and manage the underlying storage hardware. The application-aware storage is fine-tuned to respond to application behaviour - seeing, learning, and adapting to the incremental and unique demands made upon it. In keeping with the comment on the previous page, where I made mention of some vendors “focus(ing) their messaging around integration and delivering performance for VDI”, Tintri can be included in this envelope, with certification and customer case studies in the Citrix XenDesktop, Microsoft Hyper-V, Red Hat Enterprise Virtualization (RHEV), VMware vSphere and VMware View environments representing that it is a proven and established virtualisation storage platform. So what is “under the hood” from a storage perspective? Tintri combine Solid State Drive (SSD) technology (MLC drives) with 3.5” Large Form Factor (LFF) Hard Disk Drives in the VMstore appliance. The FlashFirst feature offers inline deduplication and compression, along with automatic block alignment, meaning that 99% of the IO is delivered from flash. Perhaps a snappy term to position this is “dynamic speed-dating” between the application and storage appliance. The VMstore also supports snapshots, clones and replication at the VM level by using a redirect-on-write architecture. This means that space-efficient VM snapshots have no impact on system performance.
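Redirect-on-write is worth a moment’s explanation, because it is why such snapshots are cheap: instead of copying a block’s old contents before overwriting it (copy-on-write), new writes simply land in fresh blocks and the live block map is repointed, while the snapshot keeps the old map. The class below is my own toy illustration of that general idea, not Tintri’s implementation:

```python
class RedirectOnWriteVolume:
    """Toy illustration of redirect-on-write snapshots (not any vendor's code)."""

    def __init__(self):
        self.blocks = {}       # physical block id -> data actually stored
        self.live_map = {}     # logical block address -> physical block id
        self.snapshots = []    # each snapshot is just a frozen copy of the map
        self._next_id = 0

    def write(self, lba, data):
        # New data always lands in a fresh physical block...
        self.blocks[self._next_id] = data
        # ...and the live map is redirected to point at it. Nothing is copied.
        self.live_map[lba] = self._next_id
        self._next_id += 1

    def take_snapshot(self):
        # A snapshot is only a copy of the (small) block map, not of the data itself
        self.snapshots.append(dict(self.live_map))

    def read(self, lba, snapshot=None):
        mapping = self.snapshots[snapshot] if snapshot is not None else self.live_map
        return self.blocks[mapping[lba]]

vol = RedirectOnWriteVolume()
vol.write(0, b"v1")
vol.take_snapshot()
vol.write(0, b"v2")
assert vol.read(0) == b"v2" and vol.read(0, snapshot=0) == b"v1"
```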
43
Vendor Feature:
Fujitsu
Tokyo Stock Exchange Listing: TYO: 6702 Founded: 1935 Headquarters: Tokyo, Japan Portfolio: Disk Arrays, All Flash Arrays, Tape Systems and Data Protection Appliances As a leading Japanese Information and Communication Technology (ICT) company offering a wide range of technology products, solutions and services, it may come as no surprise that Fujitsu have an established brand, portfolio and reputation in the data storage sector. As a company in the world’s top five providers of servers, many of you may already have Fujitsu hardware in your data centre. As a result of our partnership with Fujitsu, NCE offer, integrate and support both their server and storage hardware solutions. For those of you unfamiliar with the current generation of server technology from Fujitsu, this is represented by the Primergy family of products, available in Blade, Rack and Tower configurations. From a storage perspective Fujitsu offer a family of “Business-centric Storage” products. The Eternus branding encompasses all of the storage technology, and the two-letter suffix which travels with each of the Eternus products typically identifies the specific range in which it features. The Eternus DX family represents the storage architecture with which we are all (hopefully!) familiar. The entry-level model (the 2U Rackmount DX60 S2) supports up to 24 SAS and/or Nearline SAS drives in mixed configuration and is equipped with 2 to 4 Fibre Channel, iSCSI or SAS host interfaces. The big brothers to this are represented by the DX100 S3 and DX200 S3 which offer increased scalability through the addition of expansion chassis and support SSDs, SAS and Nearline SAS drives in a mixed configuration with up to 4 or 8
44
host interfaces. The DX200F is a pre-configured All Flash Array available with between 5 and 24 SSDs in the 2U architecture. In keeping with the family tree approach, next in line are the Mummies of the Eternus DX range - in the form of the DX500 S3 and DX600 S3. The DX500 S3 supports up to 528 SSDs, SAS and Nearline SAS drives in mixed configuration and is equipped with 4 to 16 host interfaces, whilst the DX600 S3 supports up to 1,056 SSDs, SAS and Nearline SAS drives in mixed configuration and is equipped with 4 to 32 host interfaces. Finally, we have the Daddy; the Eternus DX8700 S2 with support for up to 3,072 SSDs, SAS and Nearline SAS drives in mixed configuration and equipped with 4 to 128 host interfaces. The Eternus JX range are effectively JBODs where the RAID engine resides at the host, and this features two 6Gbps SAS attach offerings; the 24 Bay 2U SFF JX40, supporting up to 144 SAS, SATA and SSD 2.5” drives mixable across up to 6 shelves, and the 60 Bay 4U LFF JX60, supporting 240 SAS 3.5” drives mixable across up to 4 shelves.
The Eternus DX family represents the storage architecture with which we are all (hopefully!) familiar...
45
Vendor Feature
SimpliVity
Founded: 2009 Headquarters: Westborough, Massachusetts, USA Portfolio: Hyper Converged Infrastructure Platform The objective of this book is to simplify the terminology and messaging that vendors use to position their technology. Here’s an example of the challenge this objective presents, as SimpliVity have come to market with a Hyper Converged Infrastructure platform. Perhaps that phrase will mean something to you and you will have an immediate appreciation of what this is, but it’s time for me to confess - it brought more questions than answers to my head when I first heard the phrase… Ironically the SimpliVity stance is one where they state that they “have a mission to simplify IT” (which explains the name of the company, I’d infer). So, having dodged the bullet of trying to decipher word for word what the Hyper Converged Infrastructure platform is, I’m going to explain what SimpliVity do and then hopefully this will dovetail beautifully into answering that question. Delivering enterprise performance, data efficiency and protection, the ‘OmniCube’ is a building block of infrastructure that realises web economies by utilising commodity x86, memory and storage assets, giving the ‘Best of Both Worlds’. These OmniCube building blocks are used to scale capacity & performance in one or multiple locations, communicating together as a “Federation” delivering scale-out infrastructure with Enterprise Data Protection, managed from a single point using VMware vCenter.
46
By leveraging the aforementioned OmniCube technology SimpliVity delivers: ■ Consolidation of multiple point Datacentre solutions into one elegant building block of infrastructure ■ De-duplication, optimisation and compression of all data at ingest, in real time for all i/o sizes. No post-processing, re-processing or compromises. ■ Commodity building blocks of compute and storage that scale-out delivering enterprise performance and data management capabilities ■ Inbuilt Virtual Machine centric management including efficient mobility and data management (backup, clones & moves) ■ Globally unified management of Federations with multiple physical locations ■ Open architecture that incorporates existing external server assets and cloud into a Federation ■ 3x TCO savings on CAPEX and OPEX Intrigued? Hopefully by this point you have a far better appreciation of what this Hyper Converged Infrastructure Platform solution offers and it will prompt you to contact NCE to ask more questions. We look forward to hearing from you!
47
Tape: The Storage Airbag I always have a wry smile on my face when I reach this section of the book, having been told that “tape is dead” on many occasions by various vendors and individuals in our industry over the past 10 years, only to see the technology and market continue to serve a significant role (while some of those that come to mind have since moved on to other sectors where they can predict the future!). Here’s an analogy for the role that tape plays in the typical IT estate today - think of tape as an airbag in your car; you only need it if you crash, but should that moment come it has a massive role to play and, in the instance of tape, could mean survival in an employment sense! Nevertheless, this role that tape plays has evolved and the frequency of, and dependency on, tape has shifted down the food chain. The role it serves really is last-resort territory. The day-to-day dependency is now based on disk, with rollbacks and snapshots underpinning the modern era of recovery. The cost to implement this sort of solution has also reduced significantly, as the consumer-fuelled expectation to be able to “rewind” to a point in time has brought the economies of scale to adopt such a solution. But using disk as the storage platform isn’t a true “endpoint” solution; that’s where tape continues to serve a purpose. Compliance and data retention don’t sit so well with spinning disk (accepting that some vendors offer spin-down and park features on their Arrays to fight this argument). Equally, retaining this long-term data on-site doesn’t always sit well with the auditors. Tape meets this challenge head-on (probably a good thing considering I classed it as an airbag earlier in this page), and this has (without question) played a part in ensuring that the $1billion tape drive and media market is every bit alive and kicking.
48
“...using disk as the storage platform isn’t a true “endpoint” solution, that’s where tape continues to serve a purpose...”
Tape Storage: Who’s Who?
NCE continue to provide and support a wide variety of Tape (typically LTO) based solutions. Many of the automated offerings on the market currently are of a unified specification and architecture, and it is on this basis that we have constructed a Tape Automation matrix to represent what is available at the time of publishing this edition of the Little Book. We hope that this is of help to you, please contact NCE for more detail and a quote on any of those products listed below:

Maximum Number of LTO drives supported | Maximum Number of LTO Cartridges supported | Size (Rack Units)
1 drive | 8 slots | 1U
1 drive | 9 slots | 1U
2 drives | 16 slots | 2U
2 drives | 24 slots | 2U
2 drives | 30 slots | 4U
2 drives | 40 slots | 3U
4 drives | 40 slots | 4U
2 drives | 41 slots | 5U
2 drives | 48 slots | 4U
3 drives | 50 slots | 6U
4 drives | 60 slots | 8U
5 drives | 80 slots | 6U
6 drives | 80 slots | 6U
6 drives | 114 slots | 10U
6 drives | 133 slots | 14U
8 drives | 170 slots | 16U
49
Vendor Feature
Quantum Corporation New York Stock Exchange: QTM Founded: 1980 Headquarters: San Jose, California Portfolio: Tape Drives & Automation, Deduplication Appliances, File System and Archive Solutions Here’s a well-established name in the storage industry. The back catalogue for Quantum includes many industry firsts in both Hard Drive and Tape hardware technology; indeed they featured in the very first Little Book with the DLT tape drive that took the market by storm back at the turn of the century. With such pedigree and a great reputation, it comes as no surprise that Quantum continue to develop revolutionary products. The Scalar brand name, which underpins their Enterprise-class Tape Libraries, came as part of the acquisition of Advanced Digital Information Corporation (ADIC) in 2006. Based on the Linear Tape-Open (LTO) format, which is developed by a consortium that includes Quantum, the Scalar portfolio encompasses the i40, i80, i500 and i6000. The “i” in the naming denotes the Intelligence which features as standard on the libraries, with the iLayer providing proactive management and monitoring of the drives and robotics. Quantum’s Scalar i40 represents the entry-point of the portfolio, supporting two LTO drives and up to 40 cartridges in the 3U architecture, with the 6U i80 supporting up to five LTO drives and up to 80 cartridges. The i500 meanwhile offers increased scalability, ranging from two to eighteen drives and 41 to 409 cartridges. At the top of the family tree you’ll find the i6000, with support for a colossal 12,006 LTO cartridges and between one and one hundred and ninety-two LTO drives. How’s that for scalability?
50
Tape and Tape Automation:
I used to have one of those! Given that NCE continue to maintain legacy brands and products that you may have forgotten about, we thought it would be time to do a spot of reminiscing about some of the brands and products that some of us had but have, conveniently, erased from our memory. So, here are a few random blasts from the past…. The Digital Equipment Corporation (DEC) TZ887: A seven-cartridge desktop DLT autoloader that supported the DLT2000, DLT2000XT, DLT4000 and DLT7000 drive technology with a standard SCSI interface. The Exabyte 8200: This 8mm (Helical Scan) SCSI-based tape drive was offered either as a bare drive or mounted in a 5.25” (full-height) enclosure. The 112m tape in the drive allowed you to achieve the maximum 2.5GB backup capacity. The DDS1 DAT (4mm): Remember those tiny 4mm DAT (Digital Audio Tapes)? This represented the first generation “DDS-1” (Digital Data Storage) of the technology that was found in PCs and Servers more than a few years ago! The ATL P1000 Tape Library: Supporting up to four DLT7000, DLT8000 or Super DLT tape drives and 16 or 30 cartridges in a single library, this was a popular choice for data centre backup requirements. It was also offered through OEM agreements by Sun (as the StorEdge L1000) and IBM. If you have a product that the manufacturer is unable to offer support for (perhaps not as old as some of those listed above!) then please don’t hesitate to contact NCE to see if we can provide you with support for your equipment.
51
Storage Media Guide

Format | Specification | Sony Part Number
LTO1 | 100GB Native Capacity, 15MB/sec (54GB/hr) uncompressed data throughput | LTX100G
LTO2 | 200GB Native Capacity, 35MB/sec (126GB/hr) uncompressed data throughput | LTX200G
LTO3 | 400GB Native Capacity, 80MB/sec (288GB/hr) uncompressed data throughput | LTX400G
LTO3 | WORM variant available - same specification as above | LTX400W
LTO4 | 800GB Native Capacity, 120MB/sec (432GB/hr) uncompressed data throughput | LTX800G
LTO4 | WORM variant available - same specification as above | LTX800W
LTO5 | 1.5TB Native Capacity, 140MB/sec (504GB/hr) uncompressed data throughput with LTFS | LTX1500G
LTO5 | WORM variant available - same specification as above | LTX1500W
LTO6 | 2.5TB Native Capacity, 160MB/sec (576GB/hr) uncompressed data throughput with LTFS | LTX2500G
LTO7 | Up to 6.4TB Native Capacity, 315MB/sec (1.134TB/hr) uncompressed data throughput with LTFS | TBC
LTO (all generations) | Universal Cleaning Cartridge for LTO Drives | LTXCLN

52
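The GB/hr figures in the guide are simply the native MB/sec rating multiplied out, which is also a handy way to estimate how long a full, uncompressed cartridge would take to stream. A quick sketch of the arithmetic, using the LTO6 figures above:

```python
def gb_per_hour(mb_per_sec):
    # 1 hour = 3,600 seconds; 1,000 MB = 1 GB (decimal units, as media vendors quote them)
    return mb_per_sec * 3_600 / 1_000

def hours_to_fill(native_capacity_gb, mb_per_sec):
    return native_capacity_gb / gb_per_hour(mb_per_sec)

print(gb_per_hour(160))            # 576.0 GB/hr for LTO6
print(hours_to_fill(2_500, 160))   # roughly 4.3 hours to stream a full LTO6 cartridge
```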
Vendor Feature:
Sony
New York Stock Exchange: SNE Tokyo Stock Exchange: 6758 Founded: 1946 Headquarters: Tokyo, Japan Portfolio (Storage): Storage Media (LTO) & Hard Disk Drives (HDD) It is hardly surprising that Sony are such a strong name in storage when you take into account their incredible history and realise their influence in the mainstream data storage market. Did you know that Sony built Japan’s first tape recorder, called the Type-G? Sony were the name that brought the Video8 format and Hi8 format into the consumer camcorder market. Sony introduced 3.5” Floppy Disk Drives (or 90 mm micro diskettes as they were originally known) to us all in 1983, and Sony were the name that brought the 4mm DAT (Digital Audio Tape) into the storage industry in 1987. Bringing it into my front room at least, Sony were behind the launch of DVD technology and Blu-Ray technology. This underpins why the Sony brand name is such a key differentiator for many of our customers when it comes to their Storage Media purchase. It is a name that can be trusted - something that you need when we are talking about protecting business critical data. Please use the Storage Media Guide on the preceding page to identify the Sony Media that you require and then contact NCE for pricing and availability. Recent additions to the Sony product family are the Portable Storage range, including a 500GB, 1TB and 2TB Hard Drive and a 256GB Solid State Drive. With USB 3.0 and Firewire 800 connectivity, these drives are encased in a special silicon cover to protect the ports from dust and water and have been robustly designed for ruggedized environments, taking shocks and falls of up to 2 metres and conforming to military standard 810-G(3). These drives are a welcome addition to the NCE portfolio, and further enhance our longstanding relationship with Sony.
53
Data Protection: Definitions The expectations of Data Protection from both the users and the business have changed in the past few years, and I’d like to say that those responsible for meeting and matching these expectations have been able to keep up with them but, let’s be honest, that typically isn’t the case. Unfortunately the underlying architecture that is expected to deliver a one-size-fits-all Data Protection solution simply can’t provide everything that is demanded. Subsequently we are uncovering, on a daily basis, the well-guarded secrets that can be labelled as legacy “shortfalls” in the technology stack. I’m hoping that we can help you to establish a way of matching your existing data protection strategy (and the technology that you use to deliver it) with the expectations that the users, along with their data and associated applications, have. Sometimes it is simply a matter of explaining that the limitations are technology or budgetary constraints, and of trying to negotiate a compromise based on the technology and/or budget that you have to work with!
“ ...the underlying architecture that is expected to deliver a one size fits all Data Protection solution simply can’t provide everything that is demanded...”
The criteria that can be applied to Data Protection are typically aligned with two aspects: a Recovery Point Objective (RPO) and a Recovery Time Objective (RTO), terms that you may have stumbled across before. A Recovery Time Objective stipulates “a target time set for resumption of product, service or activity delivery after an incident”. Effectively the clock starts ticking on the RTO the minute data becomes unavailable. Many solutions to this challenge work on the basis that the most recently accessed data is likely to be the data demanded first should recovery be required, so that is what gets restored first. It was once explained to me as the swan smoothly sailing upon the water: the recovery process appears smooth on the surface, but the frantic paddling and recovery of the data is taking place beneath it.
54
The Recovery Point Objective is something that overlaps with the frequency of the rather aptly named “point in time” snapshots or backups. In this instance, the approach is to look back to the point from which you want to recover the data rather than forward to see how quickly you can recover it. Online RPO definitions stipulate that this is “the maximum tolerable period in which data might be lost”. I would be nervous at the term “data might be lost” - let’s hope that this means unavailable for online access as opposed to actually “lost”! Once you have clarified when users expect you to be able to recover their data from (for example “10 minutes ago”, “yesterday” or “whenever you can”) you then have a clear definition of the RPO. Unfortunately achieving some sort of consistency and parity across the RTO and RPO is perhaps more of a challenge than understanding them. Different users, applications and business units have different objectives. Unifying them all to make your life easier isn’t an option. This is where providing a one-size-fits-all Data Protection solution falls down.
“...the recovery process appears smooth on the surface - but the frantic paddling and recovery of the data is taking place below...”
Our advice is to create some sort of chargeback or categorised model, leveraging something like a CDP (snapshot/rollback) solution at one end of the scale and a traditional backup model at the other. Those requiring the “premium” service with optimum data protection are aware of the overhead it carries in terms of cost (typically in the underlying server and storage hardware as opposed to the actual software that provides this feature), management and technology to meet their objectives, and ultimately they have to justify that their data is worthy of such investment. You’ll notice that I have avoided mentioning any specific technology in this segment; that represents our eagerness at NCE to avoid generalising on what is the best solution - this varies from customer to customer, and hopefully you’ll contact us to discuss your own unique and specific environment.
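If it helps to make those two objectives concrete when building such a tiered model, the worst-case RPO of a tier is essentially its snapshot or backup interval, and the RTO is bounded by how long a restore from that tier actually takes. A minimal sketch of that sanity check (the tier figures below are invented purely for illustration):

```python
from datetime import timedelta

def meets_objectives(backup_interval, restore_estimate, rpo_target, rto_target):
    # Worst case, you lose everything written since the last backup/snapshot,
    # and recovery takes at least as long as the restore itself.
    return backup_interval <= rpo_target and restore_estimate <= rto_target

# Invented example tiers - a CDP/snapshot-style tier and a nightly-backup tier
print(meets_objectives(timedelta(minutes=15), timedelta(minutes=30),
                       rpo_target=timedelta(hours=1), rto_target=timedelta(hours=1)))  # True
print(meets_objectives(timedelta(hours=24), timedelta(hours=6),
                       rpo_target=timedelta(hours=1), rto_target=timedelta(hours=1)))  # False
```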
55
Vendor Feature:
Veeam Software Founded: 2006 Headquarters: Baar, Switzerland Portfolio: Backup & Disaster Recovery Software for VMware and Hyper-V environments A very shrewd way to gain market share and brand awareness is to offer a free software utility that makes people’s lives a lot easier and is quickly accepted by all in that community as the best way to do something. Veeam Software did exactly that with their FastSCP offering, an efficient file transfer utility for VMware ESX(i). Subsequently, when Veeam launched their Backup & Replication software (which wasn’t free) for VMware, the brand familiarity cleared the first hurdle - acceptance & recognition. This sideswiped a number of established Backup software vendors who could see the virtual server market emerging (with VMware leading the way) but had been slow to react. Veeam quickly became the de facto Backup software solution for VMware and grew from 10 employees in 2008 to 1200 in 2014, growth that NCE have seen first-hand having initiated conversations and a partnership with the Veeam Founders at VMworld Europe back in 2008! Veeam haven’t stood still and continue to broaden the portfolio, with tighter integration with vendors including Cisco, HP, Microsoft and NetApp added into the feature set, cloud support, and Veeam ONE - a monitoring, reporting and capacity planning tool for vSphere and Hyper-V - all now part of the family. Perhaps the ultimate industry recognition came in 2013 when Veeam were identified as one of the “Visionaries” in the Gartner Magic Quadrant for Enterprise Backup/Recovery Software - an accolade that is earned and not bought.
56
Vendor Feature:
FalconStor Founded: 2000 Headquarters: Melville, New York Portfolio: Data Protection and Storage Virtualisation Software Transparency and flexibility are two key attributes that are integral to a good CDP solution. Some applications and storage appliances offer the ability to snapshot, mirror/replicate and rollback from within their technology stack, but most are littered with caveats when offering this. Using an independent software solution to deliver this feature set has cemented FalconStor as a leading name in CDP. Bringing it back to basics, what this technology offers is the ability to rewind your data to a point in time (Recovery Point) - the perfect solution if a corruption has occurred or someone has inadvertently over-written or deleted a file. It is a heterogeneous software application that works with blocks of data and as a result doesn’t care whether the data is on a physical or virtual machine, resides on a specific hardware platform or storage type, or is connected over a specific protocol. It delivers business continuity. Supporting up to 1,000 snapshots per LUN, the frequency and granularity provide you with the ability to offer the users complete peace of mind, knowing that you can deliver point-in-time recovery. The out-of-band approach to snapshotting means that this process does not interfere with the primary storage path and overcomes the bandwidth overhead challenge that the competition typically put in the datasheet small print. Complemented by the patented (and industry respected) RecoverTrac technology, the CDP solution from FalconStor also provides an automated disaster recovery tool that offers P2P, P2V, V2V or V2P server and application failover capability to complement the localised granular data rollback expected from a CDP solution. Like what you read? Contact NCE for further information on FalconStor CDP.
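The mechanics of “rewinding” block data are easier to picture with a toy example: if every write is journalled along with the data it overwrote, then rolling back to a point in time is just a matter of undoing the journal entries newer than that point. This is my own simplified illustration of the general CDP idea, not FalconStor’s implementation:

```python
class CdpJournal:
    """Toy continuous-data-protection journal (illustrative only)."""

    def __init__(self):
        self.entries = []  # (timestamp, block_address, previous_contents), in time order

    def record_write(self, timestamp, lba, volume, new_data):
        # Journal what the block used to contain before applying the write
        self.entries.append((timestamp, lba, volume.get(lba)))
        volume[lba] = new_data

    def rollback(self, volume, target_time):
        # Undo writes newest-first until we are back at the requested point in time
        while self.entries and self.entries[-1][0] > target_time:
            _, lba, old_data = self.entries.pop()
            if old_data is None:
                volume.pop(lba, None)
            else:
                volume[lba] = old_data

volume = {}
journal = CdpJournal()
journal.record_write(100, 7, volume, b"good data")
journal.record_write(200, 7, volume, b"corrupted!")
journal.rollback(volume, target_time=150)   # rewind to before the corruption
assert volume[7] == b"good data"
```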
57
Deduplication Having worked in the storage industry for over 20 years, occasionally something will be developed and released which makes you stop and question “why didn’t anyone think of that before?”. In the case of Deduplication it was easy to understand why the idea had been stifled along the way. The idea of reducing your storage footprint wasn’t one that those who had based their business on selling lots of storage hardware were about to embrace. The explosion of data and the storage of it was putting some very nice cars on some executives’ driveways, thank you very much (please don’t include me in that bracket!). The obvious place that Deduplication would be of benefit was Backup. Here we had a somewhat archaic process that would simply back up all of the data, or the data that had changed. The concept of looking at the data and establishing whether the same files or blocks of data were actually being backed up (consuming the available bandwidth, storage and time) every single time had quite simply been dismissed as an unachievable target. But thankfully a generation of Appliances were released that had the intelligence to do exactly that and interrogate the data, analysing byte patterns and, as a result, greatly reducing the capacity demands made on the back-end storage. The names behind these Appliances had no vested interest in quashing such challenging technology, as it hadn’t emerged from within a business built on selling storage; these were either new names or companies with pedigree in integrating with and complementing backup software, with solutions based around Virtual Tape Library (VTL) technology, for example. Initially the audience was cautious and somewhat cynical, fuelled by the Deduplication ratios that some of the Appliance evangelists were touting (50:1 being the headline grabber). Nevertheless market share was being gained and the concept was swiftly becoming a reality. Naturally there were frantic steps taken by some of the Backup Software vendors - you could almost hear the words echoing in their corridors of power as they demanded answers to “why can’t we do this?”, with the outcome being reactive
58
“Deduplication options” being added to their range.
“ Initially the audience was cautious and somewhat cynical, fuelled by the Deduplication ratios that some of the Appliance evangelists were touting...”
Those that opted for the appliance approach had the luxury of being able to protect that investment if their data protection software needed to change, knowing that they could still deliver Data Deduplication regardless of the software engine that they chose to move forward with.
But the Deduplication process has also evolved, and the appliance (with a post-process or inline Deduplication approach) has had to overcome challenges with the emergence of client-side deduplication. Ultimately this means that data (blocks or files) doesn’t even need to traverse the bandwidth if the same data is already held at the destination. Client-based Deduplication agents can be the communication vehicle to manage this process, but this is where the Backup Software engines have the edge as they (traditionally) already have Client(s) active and in place across the backup estate. One of the leading names in Data Deduplication Appliances features elsewhere in this book, namely Quantum. It is fair to say that Quantum were one of the pioneers in the Deduplication Appliance market and are, as a result, recognised as a leading name in this field. The latest generation of Quantum DXi technology provides patented variable-length deduplication which reduces disk usage and enables efficient data movement across the WAN to other sites. The DXi has a solution to fit all capacity requirements, from the entry-level DXi 4700 scaling from 5TB to 135TB of usable capacity, through to the DXi 6900 scaling from 17TB to 510TB of usable capacity.
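Stripped of the clever engineering (variable-length chunking, deduplication across sites, and so on), the core trick is simply to fingerprint each chunk of data and store any given fingerprint only once. A fixed-block sketch of that idea - purely illustrative, and nothing like as sophisticated as a real appliance:

```python
import hashlib

def dedupe(data: bytes, block_size: int = 4096):
    store = {}    # fingerprint -> the single stored copy of that block
    recipe = []   # ordered fingerprints needed to rebuild the original stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fingerprint = hashlib.sha256(block).hexdigest()
        store.setdefault(fingerprint, block)   # duplicate blocks are stored only once
        recipe.append(fingerprint)
    return store, recipe

def rehydrate(store, recipe):
    return b"".join(store[f] for f in recipe)

backup = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # repeated content, as backups tend to have
store, recipe = dedupe(backup)
print(len(store), "unique blocks for", len(recipe), "logical blocks")   # 2 unique, 4 logical
assert rehydrate(store, recipe) == backup
```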
59
Vendor Feature:
Arcserve LLC Founded: (as Arcserve LLC) in August 2014 Headquarters: Minneapolis, Minnesota, USA Portfolio: Unified Data Protection Software It would be an insult to label Arcserve as a new name in the storage industry, a product used by 43,000 end users worldwide across more than 50 different countries certainly isn’t one I would class as “new”. Nevertheless there have been some hugely significant and strategic changes both in the product and the company that signify that Arcserve and the Unified Data Protection (UDP) software are here to stay.
“Arcserve UDP is far more than what it started life as, offering replication, high availability and source side global de-duplication technologies within one solution...”
Some of you may remember the Arcserve (or perhaps the Arcsolo - the baby brother of the product) name from the days when it was the de facto backup platform for Novell Netware under the stewardship of Cheyenne Software, or perhaps know of the product that flew under the Computer Associates (or CA Technologies as they became) banner from 1996 until recently, when Marlin Equity Partners acquired the business and formed Arcserve LLC. With the new company came the new platform in the form of Unified Data Protection (UDP), providing Assured Recovery for both virtual and physical environments. This has proved to be a key differentiator when compared to the competition, as the word “unified” bridging the virtual and physical estate is a rare, and somewhat unique, attribute to have. Arcserve UDP is far more than what it started life as (a backup product), offering replication, high availability and source-side global de-duplication technologies (and of course, backup technology!) within one solution. Easily configured data protection plans through the intuitive interface make the user experience far better than ever before.
60
The licensing model has also won many accolades with those considering the UDP solution. At the core sit four editions - Standard, Advanced, Premium and Premium Plus. The Standard edition provides image-based protection for file servers with tape migration whilst the key differentiator between this and the Advanced edition is that the Advanced Edition also includes image-based protection for Application Servers. The Premium edition provides all of the Advanced edition features along with file-based protection capabilities to both disk and tape, in addition to image based protection. Finally there’s the Premium Plus edition again including all of those great features that come with the Premium edition and also High Availability and application level replication capabilities.
“The licensing model has also won many accolades with those considering the UDP solution...”
Once you’ve decided on the UDP Edition that suits you best it’s then simply a case of deciding whether you’d like to opt for the per CPU Socket or per Terabyte licensing. Far easier than the incremental client, option and agent licenses that have been a thorn in our side for so long.
61
Data Protection Software: Who’s Who?
62
Vendor Feature
CloudByte Founded: 2013 Headquarters: Cupertino, California, USA Portfolio: Cloud Storage Solution Ask anyone at board level about saving the business money and the response will be that this is a good thing. Ask anyone responsible for managing and provisioning storage about delivering this without spending a lot of money and the response will be that this is difficult to achieve. CloudByte have the objective of satisfying both parties, producing a feature-rich cloud storage platform at an acceptable price point. The company have crafted both virtualisation technology and software-defined intelligence to create a game-changing solution. The product that provides this is called ElastiStor, and it is capable of handling thousands of applications and disparate workloads on its storage platform, accommodating their varying performance demands and thus cutting down the storage footprint and associated costs. In a sentence, CloudByte offer guaranteed performance and QoS (Quality of Service) in a secure, multi-app, shared storage solution. Each VSM (Virtual Storage Machine) can be configured with specific IOPS, bandwidth and latency to ensure that each tenant, or application, can run optimally, without interference from other workloads. Linear scaling allows storage resources to be allocated elastically and on demand. Although a company still in its infancy, CloudByte have been identified by many recognised industry authorities as an “upcoming” and “one to watch” company and they have quickly secured market share with success stories in the ERP (Enterprise Resource Planning) and e-commerce sectors, where linear performance and capacity silos are required along with centralised management.
63
Public Cloud Storage: Risk Management The previous edition of the book made mention of “The demise of a British Based Managed Service Provider providing Cloud Services to the corporate market in early 2013” and this line prompted many questions from customers asking us to elaborate a little more; something I had interpreted to be a viral news story (or so it appeared at my end!) wasn’t something that the mass audience was aware of. There is no question that this is something that lessons can be learnt from and I would encourage anyone looking at a Public Cloud based solution to not only read this section of the book but to go online and google (other search engines are available) “2e2 collapse” for additional background and information. If we rewind the clock to around the 8th Edition of this publication (so let’s set the date on our modified DeLorean back to 2008), we will discover that the marketing departments of several multi-national conglomerates have uncovered the terms Cloud Computing and Cloud Storage and they are about to weave these words into the outside world. The audience at boardroom level are intrigued, especially as the global economic downturn is demanding that both Capital Expenditure and Operational Expenditure be reduced. On the surface this appears to represent a viable solution by shifting the costs of the resource (which includes the hardware, software, power & cooling, floor space and the salaries of the people to manage it) out of the business. Those in our industry that rode this wave were redressing the shop window to label themselves as Storage Service Providers (SSP). Legal departments (where they had them) were hastily drafting contracts and agreements to represent what could and would be delivered, and the cost per TB became a gradient by which each SSP was judged. Perhaps more key considerations, including company stability, success stories/customer base and security of data, almost took a back seat. The storage vendors adapted to fit this model too, with new programs in place specifically for those active as Cloud, Hosting or Managed Service Providers (MSP).
64
The momentum was there for all to see: in the domestic market Apple had implemented iCloud, Dropbox had become an accepted tool and even your TV, Broadband and Phone provider was (and still is) using the cloud tagline at every opportunity. However in January 2013 UK-based Business Cloud Provider 2e2 collapsed under the weight of more than £270m in debt. Word was very quick to spread within the industry, primarily because of the rather dramatic “turning off the lights” way this happened. Administrators have a job to do, and the communication sent to those with the vast majority of their assets (data) in the 2e2 infrastructure was interpreted by some as a ransom note, demanding that collectively they all provide around £1m if they wanted to see their data again. This situation firmly underlined the risk and dependency of outsourcing to the public cloud. Looking at the detail, the downfall of 2e2 appears to have been triggered by a number of factors. In 2011 the 2e2 Group turned over £404m - with published operating profit of £20m - however the restructuring charge and interest repayments left a net loss of £8.4m (more of a sign that the garden wasn’t as rosy as perhaps it was perceived). Financial due diligence is essential if you are considering putting the lifeblood of your business - the data - outside of the organisation. Those 2e2 customers that received the frightening ransom note from the administrators will stress that point more than any other.
65
Glossary of Terms
66
AIT - Advanced Intelligent Tape
AFA - All Flash Array
API - Application Programming Interface
ATA - Advanced Technology Attachment
BYOD - Bring Your Own Device
CAS - Content Addressed Storage
CDP - Continuous Data Protection
CIFS - Common Internet File System
CNA - Converged Network Adapter
CoD - Capacity on Demand
CPU - Central Processing Unit
D2D2T - Disk to Disk to Tape
DAS - Direct Attached Storage
DAT - Digital Audio Tape
DBA - Database Administrator
DLT - Digital Linear Tape
DR - Disaster Recovery
DSD - Dynamically Shared Devices
ECC - Error Correcting Code
eMLC - enhanced Multi-Level Cell (SSD)
FCoE - Fibre Channel over Ethernet
FTP - File Transfer Protocol
GBE - Gigabit Ethernet
GBIC - Gigabit Interface Converter
HBA - Host Bus Adapter
HDD - Hard Disk Drive
IDE - Integrated Drive Electronics
IP - Internet Protocol
IPO - Initial Public Offering (Share issue)
ISV - Independent Software Vendor
JBOD - Just a Bunch of Disks
LRM - Library Resource Module
LTO - Linear Tape Open
LUN - Logical Unit Number
LVD - Low Voltage Differential
MEM - Memory Expansion Module
MLC - Multi Level Cell (SSD)
NAND - Negated AND (Flash)
NAS - Network Attached Storage
NCE - National Customer Engineering
NFS - Network File System
NIC - Network Interface Card
nm - Nanometer (Fibre Channel)
OEM - Original Equipment Manufacturer
P2V - Physical to Virtual
PEP - Part Exchange Program
RAID - Redundant Array of Independent Disks
ROI - Return on Investment
RPM - Revolutions per minute
RPO - Recovery Point Objective
RTO - Recovery Time Objective
SaaS - Storage as a Service
SAN - Storage Area Network
SAS - Serial Attached SCSI
SATA - Serial Advanced Technology Attachment
SCSI - Small Computer Systems Interface
SFP - Small Form-Factor Pluggable
SLA - Service Level Agreement
SLC - Single Level Cell (SSD)
SLS - Shared Library Services
SMB - Server Message Block
SSD - Solid State Drive
TB - Terabyte
TLC - Triple Level Cell
UDO - Ultra Density Optical
VADP - vStorage API for Data Protection
VCB - VMware Consolidated Backup
VCP - VMware Certified Professional
VSS - Volume Snapshot Service
VTL - Virtual Tape Library
WAFL - Write Anywhere File Layout
WEEE - Waste Electrical and Electronic Equipment
WORM - Write Once Read Many
Printed by Park Lane Press on FSC certified paper, using fully sustainable, vegetable oil-based inks, power from 100% renewable resources and waterless printing technology. Print production systems registered to ISO 14001: 2004, ISO 9001: 2008 and EMAS standards and over 95% of waste is recycled.
67
Download and share the Little Book of Data Storage by visiting www.nceeurope.com
Stay informed with NCE: @nceeurope www.linkedin.com/company/nce-computer-group
NCE Computer Group
Europe: United Kingdom, 6 Stanier Road, Calne, Wiltshire, SN11 9PX
t: +44 (0)1249 813666 f: +44 (0)1249 813777 e: info@nceeurope.com
www.nceeurope.com
USA: 1866 Friendship Drive, El Cajon, California CA 92020
t: +1 619 212 3000 f: +1 619 596 2881 e: 4info@ncegroup.com
www.ncegroup.com
ISO 9001 AND 14001 REGISTERED FIRM
68