Little Book of Data Storage 12th Edition


About NCE

Over 30 years of experience in IT has served the privately-owned NCE business well and established us as one of the leading names in providing, integrating and maintaining technology in the Corporate Data Centre. Although this book is focused on Storage, an area in which NCE continue to enjoy great success, our service skills go way beyond that aspect alone. We maintain a wide variety of Data Centre products, with servers and networking topology amongst our service portfolio.

As you will see throughout this publication, NCE carry partner status and accreditation from a wide variety of leading names in IT, ensuring that we are recognised, endorsed and skilled to support and offer their technology. Our BS ISO 9001 & BS EN ISO 14001 certification also helps to differentiate us and represents the high standards of quality that we maintain within the business. Our engineers fall into two camps: multi-skilled personnel located at our dedicated repair centres in both Europe and North America, complemented by those providing field service (on-site) capabilities through our regional support hubs in both of the aforementioned territories. We have also developed the "NCE Live" portal, providing our customers with a real-time view of any services that NCE are providing and ensuring that they receive an accurate representation of the active status of the calls and contracts that they have with us.

We work with a variety of "layers" of the IT industry too. Service delivery is provided for manufacturers and OEMs (as well as selling their technology), distribution, resellers, system integrators, service providers, third-party maintenance companies and end-users. Our customers tell us that they like our technical strengths along with our impartiality and independence; we hope that if you are not already an NCE customer you will become one once you've read through this book!

NCE Computer Group recognises all trademarks and company logos. All products and their respective specifications adhere to those outlined by the manufacturer/developer and are correct at the time of publication. NCE will not accept any responsibility for any of the deliverables that fail to meet with the manufacturer/developer specification.



NCE Computer Group

Contents

About NCE ................................................... 2
IO, IO it's off to work we go ............................... 6
SAN or NAS? That is the question ............................ 7
SSD: What's the difference? ................................. 8
SSD Summary ................................................ 10
The SSD race: The runners and riders ....................... 12
Under the hood of... a Solid State Drive ................... 14
Hybrids .................................................... 16
Vendor Focus: Tegile ....................................... 18
VMware vSphere - Vvols (Virtual Volumes) ................... 20
Vendor Focus: Veeam Software ............................... 21
SDS: Software Defined Storage .............................. 22
Customer Story: Kirklees College ........................... 24
NCE Professional Services .................................. 26
Hyper-Convergence .......................................... 28
Vendor Focus: Cisco UCS .................................... 29
Vendor Focus: Maxta ........................................ 30
Hyper-convergence: Who's Who? .............................. 31
Storage Interface Guide .................................... 32
SMR Shingled Magnetic Recording ............................ 34
Vendor Focus: Quantum ...................................... 36
Roadmaps for HDD ........................................... 37
Vendor Focus: Nexsan by Imation ............................ 38
RAID: The Levels ........................................... 39
Under the hood of... a Hard Disk Drive ..................... 40
RAID: Who's Who? ........................................... 41
Object Storage ............................................. 42
Vendor Focus: Quantum ...................................... 43
Data Protection: Back to the Future! ....................... 44
Data Protection Software: Who's Who? ....................... 45
Vendor Focus: Arcserve ..................................... 46
Customer Story: The Open University MK:Smart Research Initiative ... 48
Deduplication .............................................. 50
Vendor Focus: ExaGrid Systems, Inc. ........................ 51
LTO-7 ...................................................... 52
Tape Storage: Who's Who? ................................... 53
Vendor Focus: Quantum ...................................... 54
Tape Storage Media Guide ................................... 55
Vendor Focus: Nexsan by Imation ............................ 56
What Could Possibly Go Wrong? .............................. 57
Vendor Focus: Barracuda Networks ........................... 58
Customer Case Study: Park Resorts .......................... 60
The Future... of Storage ................................... 64
Sales Buzzword Bingo: The Customer Sales Game! ............. 65
Glossary of Terms .......................................... 66

A huge thank you to all those that have put in the hours and energy to help me to put this publication together once again. Maddy for proof-reading it, and Chloe & David for making it all (somehow) fit on the pages and finding the hi-res pictures and logos. Not forgetting the technical input from Alex, Steve & Mark. It's very much appreciated. John


Welcome

Welcome to the Little Book of Data Storage. For those of you who have never seen this pocket-sized publication before - welcome aboard, and for those of you who have - welcome back! The purpose of this independent guide is to try to assist you in identifying what will, and what won't, overcome the challenges that face you with regard to the storage of data. Many of the vendors in our industry profess to have a solution that can meet all of your objectives, but typically few can actually deliver on the promises that they make. The NCE objective is to keep your feet on the ground - managing your expectations based on real-world customer experience and proven technology, as opposed to the speeds and feeds that grab the headlines and have been engineered in a finely-tuned laboratory environment worlds away from the one you manage. We have previously been labelled as the cynics of the industry, given that we look for the limitations of the technology along with the benefits, but our customers (we're told) appreciate the honesty and realism that we provide by adopting this approach.

The list of vendors that have come to market stating that they can solve your storage problems is endless, and whilst most can ease the pain for a while they can't take the problem away... This, the 12th edition, features the usual high-level surface skim of vendors, buzzwords, acronyms and customer case studies that have provided the foundation to this publication over the past 15 years. Hyper-convergence, object storage, deduplication, all flash arrays, hybrids, shingled drive technology and cloud storage all feature in the book, with vendor listings, interface guides and drive capacity points that may just be of help to you in your hour of need!


My promise to you, the reader, is that I try to cross the bridge between the (rather brash) assumption from some in our sector that you already know the phrases/terminology/acronyms/buzzwords, and the reality - that you turn to Google the minute we leave the room to find out what on earth we were talking about and try to rewind the conversation to make sense of it. This book will almost act as a dictionary or translator to overcome that hurdle by simplifying matters into wordage that (hopefully) makes it easier to understand. Storage is a constant challenge for those tasked with managing an IT environment. As we, NCE, approach our 35th year in this industry (I appreciate that for some of our younger readers this will be difficult to comprehend, as we may have been "founded" before you were!) we can safely say that this is nothing new. The list of vendors that have come to market stating that they can solve your storage problems is endless, and whilst most can ease the pain for a while they can't take the problem away - the challenge is one that will not go away. Thankfully, our longevity, reputation, experience and pedigree mean that NCE are here to help. Hopefully this book represents our independence and focus, and encourages you to contact us when anything that is covered in this publication features at the top of your priority list.

John Greenwood, Solution Sales Director - NCE
Author of the Little Book of Data Storage
Winner of the Storage Magazine "Contribution to Industry" Award 2010
Twitter: @bookofstorage



IO, IO it's off to work we go...

It's somewhat ironic that you, the user, can find it a challenge to establish the expected performance (measured in IOPS - Input/Output operations Per Second) of a storage device. Some in our industry class this as the "secret sauce", something that is only appreciated by and visible to the privileged few. Well, on that basis, I owe those individuals an apology, as I'm more than happy to share the "true" performance that we typically see in the field: if you apply the adjacent formula to whatever you are looking at then you won't be too far off the mark for the actual performance of the storage. External factors can skew these figures, but as a general "rule of thumb" this provides you with a template to work with.
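The book's own formula appears as a graphic in the print edition and is not reproduced here. As a stand-in, here is a commonly cited industry rule of thumb for estimating random IOPS on spinning disk, sketched in Python; the drive figures are illustrative assumptions, not measurements or NCE's exact formula:

```python
def estimate_hdd_iops(avg_seek_ms: float, rpm: int) -> float:
    """Rule-of-thumb random IOPS for a spinning disk:
    1 / (average seek time + average rotational latency),
    where rotational latency averages half a revolution."""
    avg_rotational_ms = (60_000 / rpm) / 2  # half a revolution, in ms
    return 1000.0 / (avg_seek_ms + avg_rotational_ms)

# Illustrative figures for common drive classes (assumed, not quoted):
print(round(estimate_hdd_iops(avg_seek_ms=8.5, rpm=7200)))   # ~79  IOPS (7.2k)
print(round(estimate_hdd_iops(avg_seek_ms=4.5, rpm=10000)))  # ~133 IOPS (10k)
print(round(estimate_hdd_iops(avg_seek_ms=3.5, rpm=15000)))  # ~182 IOPS (15k)
```

These land in the ranges most practitioners quote for 7.2k, 10k and 15k drives, which is exactly the point: the physics, not the datasheet, sets the ceiling.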


SAN or NAS? That is the question

It surprises me how often confusion arises around SAN and NAS technology, and the difference between the two. The key differentiator is how the storage is presented to the outside world (application, OS or machine on the network). In the case of NAS (Network Attached Storage) the storage target appears through a file system. The most common file systems used in NAS solutions are:

- CIFS (Common Internet File System) or SMB (Server Message Block), typically used in MS Windows environments;
- NFS (Network File System), typically used with UNIX;
- NCP (NetWare Core Protocol), typically used with Novell NetWare and supported by specific flavours of Linux;
- ...and AFP (Apple Filing Protocol) - you guessed it, typically used in the Mac OS X environment.

The easy grab for a NAS solution is that it provides a File-Level Storage presentation. From a connectivity perspective, Network Attached Storage (and the clue is in the name) travels through the tried and tested Ethernet network fabric - a big tick in the box for many, as it uses a familiar and established protocol that is prevalent in pretty much every data centre worldwide. In contrast to this, the SAN (Storage Area Network) appears as a locally attached storage device to the outside world (application, OS or machine on the network) and provides Block-Level access to the storage. Those who remember the birth of the SAN concept will recall that early incarnations of the technology gravitated towards Fibre Channel as the default protocol, but a combination of cost and performance has seen iSCSI emerge as a suitable alternative, with the blocks travelling across (typically existing) Ethernet network links. Given that Ethernet represents a road on which either blocks (SAN) or files (NAS) can travel, this is often where we see some confusion as to exactly which is which. Couple that factor with vendors that now offer technology providing both SAN and NAS support, and it can get all the more confusing!
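The file-versus-block distinction can be sketched with local Python primitives. This is only an analogy - a real NAS speaks NFS/SMB over the network and a real SAN presents blocks over iSCSI or Fibre Channel - but it shows what the client actually addresses in each case (the file names are hypothetical):

```python
import os
import tempfile

tmp = tempfile.mkdtemp()

# File-level (NAS-style): the client names a file; the file system
# (which lives on the storage side, for NAS) resolves where bytes go.
path = os.path.join(tmp, "report.txt")  # hypothetical file name
with open(path, "w") as f:
    f.write("quarterly figures")
with open(path) as f:
    data = f.read()

# Block-level (SAN-style): the client addresses raw byte offsets; any
# file system lives on the client side. A plain file stands in for a LUN.
BLOCK = 512
fd = os.open(os.path.join(tmp, "lun.img"), os.O_RDWR | os.O_CREAT)
os.pwrite(fd, b"x" * BLOCK, 0 * BLOCK)   # write block 0 by offset
block0 = os.pread(fd, BLOCK, 0 * BLOCK)  # read block 0 back
os.close(fd)
```

The NAS client never sees offsets; the SAN client never sees names. That, in one screen, is the whole argument.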
So which one is best for you? I wish it was that easy to answer! My advice is to contact NCE in the first instance and we will work with you to qualify exactly what technology is best for your environment.


SSD: What's the difference?

Our industry loves an acronym and the aptly named SSD (Solid State Drive) market hasn't let us down, as it has raced in with SLC (Single-Level Cell; 1 bit per cell), MLC (Multi-Level Cell; 2 bits per cell), eMLC (enhanced Multi-Level Cell; 2 bits per cell) and TLC (Triple-Level Cell; 3 bits per cell). I'd hazard a guess that many readers of this book hadn't known what these stood for until now!

SLC

SLC is the luxury item in the SSD portfolio, offering the highest performance with the highest reliability but (unsurprisingly) accompanied by the highest cost. SLC allows for the storage of one bit of information per NAND memory cell. The lifespan of an SLC SSD drive is around 100,000 writes.

MLC

MLC sits somewhere in the middle with regard to performance and reliability (when compared to its peers) but is a far more affordable variant. It is seen as the consumer-grade variant of SSD, and there are numerous variations of the MLC drive available in the market, typically labelled with the reason for the optimisation (e.g. Read Optimised). MLC allows for the storage of two bits of information per NAND memory cell. The lifespan of an MLC SSD drive is expected to be between 3,000 and 5,000 writes.


eMLC

eMLC is essentially a variant of MLC that has been optimised for "endurance". According to the manufacturers, it uses the best/premium 10% of the silicon chips to make up a solid state drive. This means that the wear from PE (Program/Erase) cycles is reduced, making it more robust than the standard MLC offering, but with an associated cost premium. eMLC allows for the storage of two bits of information per NAND memory cell and achieves endurance levels of around 30,000 writes.

TLC

TLC is still an emerging technology, and it has the ability to store 3 bits of information (8 possible values) per cell. However, in turn, this means that the cells are used more and there is less voltage fault tolerance. Applying voltage to the entire cell multiple times even though just one bit of information is encoded (depending on the bit being changed) can slow down the write speed and causes more wear in general - so there is a trade-off for the increased cell capability.
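The pattern running through SLC, MLC, eMLC and TLC can be summarised programmatically: an n-bit cell must distinguish 2^n voltage states, which is why density rises while voltage margin and endurance fall. A small Python sketch using the approximate write-cycle figures quoted above (ballpark numbers, not vendor guarantees):

```python
# Approximate P/E (Program/Erase) figures as quoted in the text;
# treat them as illustrative rather than as datasheet values.
CELL_TYPES = {
    "SLC":  {"bits_per_cell": 1, "approx_pe_cycles": 100_000},
    "MLC":  {"bits_per_cell": 2, "approx_pe_cycles": 3_000},
    "eMLC": {"bits_per_cell": 2, "approx_pe_cycles": 30_000},
    "TLC":  {"bits_per_cell": 3, "approx_pe_cycles": 1_000},
}

for name, cell in CELL_TYPES.items():
    levels = 2 ** cell["bits_per_cell"]  # distinct voltage states required
    print(f"{name}: {cell['bits_per_cell']} bit(s)/cell -> "
          f"{levels} levels, ~{cell['approx_pe_cycles']:,} P/E cycles")
```

Note that eMLC breaks the simple "more bits, less endurance" curve only through chip binning, not through a different cell design.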


SSD Summary

The storage market has continually strived to increase capacity whilst reducing footprint and power consumption. But the challenge in SSD is that both performance and reliability (or "endurance" as it tends to be tagged in the SSD arena) degrade as you increase the number of bits per cell. We are seeing the term "DWPD" (Drive Writes Per Day) being associated with some of the SSD drives that are commercially available, effectively providing the vendor with a "get out of jail free" card on the warranty validity should the usage of the drive exceed the DWPD specified at point of sale.
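As a worked example of how a DWPD ("Drive Writes Per Day") rating turns into a total write budget, the conventional conversion to TBW (terabytes written over the warranty) is capacity x DWPD x warranty days; the drive figures below are hypothetical:

```python
def warranty_tbw(capacity_gb: float, dwpd: float, warranty_years: float) -> float:
    """Total terabytes that can be written within the endurance rating:
    capacity (GB) x DWPD x days of warranty, converted to TB."""
    return capacity_gb * dwpd * warranty_years * 365 / 1000

# e.g. a hypothetical 800 GB drive rated at 3 DWPD over a 5-year warranty:
print(warranty_tbw(800, 3, 5))  # 4380.0 TB written before the rating is exceeded
```

Run the same sum against your own write workload before you buy: if your daily writes exceed the budget, the "get out of jail free" card gets played.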

SLC
  Capacity Points (GB): 32, 64, 100, 128, 200, 350, 400, 500, 700, 1000
  Cost: Highest
  Performance: Fast
  Approximate Cycles: 100,000

MLC
  Capacity Points (GB): 30, 60, 80, 100, 128, 150, 160, 180, 200, 240, 250, 256, 300, 320, 400, 480, 512, 600, 800, 960, 1200, 1600, 2000
  Cost: Middle
  Performance: Middle
  Approximate Cycles: 3,000-5,000

eMLC
  Capacity Points (GB): 50, 75, 100, 150, 200, 300, 400, 500, 550, 800, 1100, 2000, 2200, 4000, 4800
  Cost: High
  Performance: Middle
  Approximate Cycles: 30,000

TLC
  Capacity Points (GB): 120, 250, 500, 750, 1000
  Cost: Lower
  Performance: Middle
  Approximate Cycles: 1,000

Nevertheless, the evolution of storage technology continues to amaze us all. The truth is that what was once stored (and in some cases still is) on a hard drive the size of your hand can now be stored on a flash memory card the size of your fingernail. Over the past 15 years, NAND Flash memory cell structure has gone from 120nm scale to 19nm scale whilst the capacity in this arena has grown by a factor of 100x. With the announcement of 3D NAND and V-NAND (the "V" denoting Vertical) SSD drives, with mooted capacity points of up to 16TB per drive, it is clear that these much improved devices will be able to meet both the capacity and performance demands. Stacking the MLC and TLC layers will yield greater efficiency, but both reliability and cost are areas of uncertainty at this point. As this matures we will give it increased coverage in the Little Book, but it is very early in the lifecycle of the technology at this point in time. SSD and HDD do, and will continue to, co-exist and complement each other; this will not change. They both have their place in the market.


Performance Monitoring

Having covered this topic in previous editions of the Little Book, it appears that I trod on a nerve! Highlighting this blind spot of the storage industry triggered us to delve a little deeper into non-proprietary software that could help our customers who manage and maintain mixed storage environments. The fact that we, as an established name in this sector, had to go looking for what was required perhaps highlights the absence of technology and vendors in this space. What we quickly established is that the storage performance monitoring tools that are available are typically used as a business enabler by specific vendors to justify why you need their solution; the results tend to appear alongside a quotation that, rather conveniently, happens to fit exactly what the tool identifies you need. However, there are some that can monitor performance: some are "virtual only" and major on complementing the hypervisors - publishing information that the hypervisor vendors either can't or won't share - and others have established technology in areas parallel to storage but have realised that there is an opportunity here. It is by no means a saturated market (from a vendor perspective) but here are a few names that fall into this category:


The SSD race: The runners and riders

Those of you that own or read the two previous editions of this book will know that we have preserved the same (horse racing) theme as we did in those copies. Ironically, those in the All Flash Array (AFA) sector have cleared a few hurdles since we produced those editions, so what may have appeared to be suited to the flat race has since become the steeplechase in horse racing terms. At this point the fallers have been few and far between. However, there are signs on the horizon that a few of the finance houses and backers behind those names that see this as their sole business model are looking to realise their (sizeable) investment. This represents a major hurdle for some, and the pressure is on their highly paid, flash (excuse the pun) sales operations to turn the potential into actual customers. Before you place your bet, you may want to do more research into the form of the name you are looking to back...

"There are signs on the horizon that a few of the finance houses and backers are looking to realise their (sizeable) investment..."

The SSD Runners and Riders 2015:


Under the hood of... a Solid State Drive

There's very little to say about the exterior of a Solid State Drive (SSD). It has an interface to connect it to the host and that's pretty much it. All the magic is on the inside! Perhaps the most important role that the controller chip (the "brain") plays is to manage something called "wear levelling". The bank of memory chips in an SSD stores data as ones and zeros; each physical chip requires an individual component within it to hold this state (on or off) to achieve this storage. If the data is written continually to the same physical chips, the result is that they will physically wear out before others, leading to a failed SSD. To overcome this, the controller chip applies "wear levelling": it keeps the data moving around all the available chips while keeping track of the location of the data. This way all the chips get equal use and equal wear, resulting in a prolonged SSD life. The controller chip is also tasked with managing free (available) and used memory, along with identifying whether the data needs correcting (known as ECC - Error Correcting Code). Additional chips that may feature within an SSD are bridge chips (to control communication between the host and SSD) and a high speed buffer memory (to hold the data to be sent to and from the host); increasingly we find that the high speed buffer memory is included in the controller chip itself. You will have noticed that there is a huge emphasis on the controller chip and the role it plays within an SSD; should the controller chip fail, the SSD will become unresponsive. Our engineering team have established that it is possible to simulate the controller with the correct hardware and software and thus recover data; although replacing the controller chip is also possible, this would not enable data recovery.
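A toy sketch of the wear-levelling idea described above, assuming a simplistic "least-worn block wins" policy (real controllers use far more sophisticated algorithms, plus garbage collection and bad-block management):

```python
class WearLevellingController:
    """Redirects each logical write to the least-worn physical block
    and remembers the logical -> physical mapping."""

    def __init__(self, physical_blocks: int):
        self.wear = [0] * physical_blocks  # erase/write count per block
        self.mapping = {}                  # logical block -> physical block

    def write(self, logical_block: int) -> int:
        # Pick the physical block with the lowest wear so far.
        target = min(range(len(self.wear)), key=lambda b: self.wear[b])
        self.wear[target] += 1
        self.mapping[logical_block] = target
        return target

ctrl = WearLevellingController(physical_blocks=8)
for _ in range(80):          # hammer one logical block repeatedly...
    ctrl.write(logical_block=0)
print(ctrl.wear)             # ...yet the wear spreads evenly across all chips
```

Even though the host rewrites the same logical block 80 times, every physical block ends up with exactly 10 writes; this is the mechanism that stops a busy file killing one corner of the flash.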

Vendor Focus: HGST

NASDAQ: WDC
Founded: 2003; Hitachi Global Storage Technologies (HGST) acquired by WD in October 2013
Headquarters: San Jose, California
Portfolio: Flash based performance acceleration cards

The development and popularity of Flash based cards that simply drop into a PCIe slot in the host to meet the required performance boost has been notable in the past year or two. One of the leading names in this field is HGST, and the Ultrastar SN100 Series sits at the forefront of this technology. The product comes in a choice of form factors: available as a low-profile HH-HL expansion card and as a highly-serviceable SFF 2.5" drive. The NVMe (Non-Volatile Memory express) standard is an interface specification that was created to deliver the full potential of non-volatile memory in PCIe based SSD devices to meet the needs of enterprise and client platforms. The Ultrastar SN100 Series adheres to this standard, effectively enabling the use of the high speed PCIe interconnect with a standard OS driver. The Ultrastar SN100 Series PCIe SSDs are available in capacity points of 1.6TB and 3.2TB and offer 310,000 mixed random IOPS; perfect for applications including Online Transaction Processing (OLTP), Online Analytical Processing (OLAP) and High Frequency Trading (HFT).


Hybrids

Significant growth has been experienced by the leading names in this sector. Customers can see the benefit of adding flash to their environment but equally appreciate that with disk technology they have complete peace of mind with how it behaves, what it costs and (perhaps most significantly) the reliability of the technology. Therefore implementing a product that combines the best of both worlds is an excellent solution. Having decided that a hybrid is the best approach, the question then turns to which one is suited to your needs. Some hybrid solutions treat the SSD layer as a "cache/buffer" to deliver fast front-end performance, masking the slower back-end disk. This is a good fit as long as you have an appreciation of the IO requirements of your own environment; some allow you to add more performance to this layer incrementally, others don't. The alternative is to adopt a true "tiered" solution that dynamically allocates each block of data to the respective storage tier. Then there's the NAS or SAN aspect (covered earlier in the book); perhaps you want both? Oh, and then there's the connectivity puzzle; perhaps you want iSCSI (in which case 1GbE or 10GbE?) or perhaps you want Fibre Channel presentation (in which case 8Gb or 16Gb?). How much SSD? How much 15k disk? How much 10k? How much 7.2k? What about expansion... Hopefully all of these points have emphasised the importance of engaging NCE. We can help to establish the best hybrid solution(s) to meet your needs.

“... in disk technology, customers have complete peace of mind with how it behaves, what it costs and (perhaps most significantly) the reliability of the technology.”


Vendor Focus: Tegile

Founded: 2009
Headquarters: Newark, California, USA
Portfolio: Intelligent Flash Storage & Hybrid Arrays

"Tick box" technology is something to be wary of. A number of vendors in the storage industry claim to offer features that essentially mean that they can tick the box when responding to RFI, RFP or RFQ (Request for Information/Proposal/Quotation) documentation, and by doing so are counted in rather than ruled out as a candidate for the project. The truth is that typically minimal effort is applied to achieve that tick in the box, and when you look under the hood, this becomes more than apparent. Of what relevance is this to Tegile, you may ask? Tegile provide a hugely scalable, feature-rich storage architecture and, in my opinion, the features that come with the technology have not been added for "tick box" purposes. They do what they claim to do properly; no corners have been cut to develop and include them. And from a commercial perspective, they are all included as standard - there aren't any financial "gotchas" that emerge after the initial conversation. What are the features I allude to? Let me explain what the foundation of the Tegile portfolio is exactly. Tegile offer both Hybrid Arrays and All Flash Arrays. Hybrid in both the sense of a mixed storage architecture - a combination of DRAM, Flash and HDD - and also in the sense of presentation, as the Arrays provide both NAS (NFS, SMB 3.0/CIFS) and SAN (iSCSI and Fibre Channel) capabilities. This means that both File and Block level storage provisioning can be provided from a single system. That is a flexibility that some of the Tegile competition claim to offer (typically on their website or datasheets) but few, if any, can actually achieve. It's normally one or the other. Thus, the competition can tick all of the boxes with their entire portfolio but, as their products are proposed on an incremental basis, they retract that statement - unlike Tegile.
The IntelliFlash Operating System is essentially the brain behind the box. In the previous edition of the Little Book I covered the IntelliFlash Metadata Acceleration Technology, so I refer you to that copy if you'd like the simplified explanation of that particular feature. But the key to IntelliFlash is how it masks multiple grades of storage transparently to deliver optimal performance - efficient media management. Inline deduplication and compression (what I have heard termed as "Nodupe as opposed to Dedupe" in conversations with customers when comparing Tegile to other dedupe approaches) ensure that efficiency through data reduction is delivered across the hardware stack. Integration for virtual environments is provided through the VMware vCenter web client and desktop client plug-in, and through Microsoft System Center Virtual Machine Manager (SCVMM) for Microsoft Hyper-V virtual machines. Point-in-time VM-consistent and application-consistent snapshots can be taken by Tegile, with support for replication of this data in a co-ordinated fashion using aggregated volumes in a consistency group for Data Protection purposes. In summary, Tegile offer a solution that doesn't gravitate towards a specific business need or objective; perhaps the most appropriate word on the wish list would be "flexibility". Let me revisit that tick box approach, as perhaps it will help you to compare the technology that you are looking at (or have even invested in) against that of Tegile (remember to base it on a single device rather than a combination of their portfolio):

Feature                            Tegile
NAS (File) storage presentation    Yes (NFS, SMB 3.0/CIFS)
SAN (Block) storage presentation   Yes (iSCSI & Fibre Channel)
Supports/Includes DRAM & Flash     Yes
Supports/Includes HDD              Yes
Inline deduplication               Yes (at storage pool level, per LUN or file share level)
Inline compression                 Yes
Thin provisioning                  Yes (fully supported by VMware APIs for Array Integration - VAAI)
Metadata acceleration              Yes
Application-Aware Provisioning     Yes
Data encryption                    Yes (Inline 256-bit AES encryption for data at rest)
Management plug-ins                Yes (VMware vCenter, Microsoft System Center, REST API, Web)
Capacity cost - Price per GB       Contact NCE for details
Performance cost - Price per IO    Contact NCE for details


VMware vSphere - Vvols (Virtual Volumes)

When VMware say that they are changing the way they talk to storage, we all have to sit up and listen. Ignore it at your peril! Vvols appeared with the arrival of VMware's vSphere 6 technology and have been labelled as "an integration and management framework for external storage in virtualised environments". Essentially it means that there are new, more granular, guidelines (rules) in the way that VMs (virtual machines) in VMware are stored and managed. Traditionally, storage provided to VMware in a Block (SAN) or File (NAS) manner would allow VMs to be stored in a datastore. The datastore wouldn't need to apply any sort of file system to the NAS-presented storage (using NFS as the gateway), whereas in the case of the SAN-presented storage VMFS (Virtual Machine File System) would be used. Without going into the technical jargon in too much depth, limitations were apparent in this approach, as the relationship between a LUN or Volume (presented by the storage layer) and the VM was aggregated. This meant that it was difficult to provide the granularity required on a VM-by-VM basis - something that haemorrhaged the flexibility of VMware with ESXi (the hypervisor in vSphere). The arrival of Vvols now means that policy-based metrics are applied to storage for an individual VM, as opposed to residing at the datastore level. Effectively, Vvols are storage containers or instructions that align with each VM. As you can imagine, this has been a high-priority subject for pretty much every vendor that proclaims to have VMware integration, and equally there will be a knock-on effect for you, at the coal face, if and when you move to vSphere 6. Arguably, Vvols take a level of the "intelligence" from the storage and move it into the control and ownership of VMware. If you have encountered the limitation mentioned above then this is hopefully the news that you have been waiting for.

The only caveat you need to be aware of before diving in is the need for your storage platform/provider to support Vvols within VMware vSphere 6 - something that is associated with those that are signed up to the vStorage APIs for Storage Awareness (VASA) standard.

Vendor Focus: Veeam Software

Founded: 2006
Headquarters: Baar, Switzerland
Portfolio: Backup & Disaster Recovery Software for VMware and Hyper-V environments

A very shrewd way to gain market share and brand awareness is to offer a free software utility that makes people's lives a lot easier and is quickly accepted by all in that community as the best way to do something. Veeam Software did exactly that with their FastSCP offering, an efficient file transfer utility for VMware ESX(i). Subsequently, when Veeam launched their Backup & Replication software (which wasn't free) for VMware, the brand familiarity cleared the first hurdle: acceptance and recognition. This sideswiped a number of established backup software vendors who could see the virtual server market emerging (with VMware leading the way) but had been slow to react. Veeam quickly became the de facto backup software solution for VMware and grew from 10 employees in 2008 to 1,200 in 2014, a growth that NCE have seen first-hand, having initiated conversations and a partnership with the Veeam founders at VMworld Europe back in 2008! Veeam haven't stood still and continue to broaden the portfolio, with tighter integration with vendors including Cisco, HP, Microsoft and NetApp added into the feature set, cloud support, and Veeam ONE - a monitoring, reporting and capacity planning tool for vSphere and Hyper-V - all now part of the family. Perhaps the ultimate industry recognition came in 2013 when Veeam were identified as one of the "Visionaries" in the Gartner Magic Quadrant for Enterprise Backup/Recovery Software - an accolade that is earned and not bought.




SDS: Software Defined Storage
Our industry is notorious for producing not only ground-breaking storage technology but also buzzwords and catchphrases that cause mass confusion for their audience. In some cases, when the euphoria kicks in, some of the more cynical of us look at the marketing hype and think "hold on a second, didn't that already exist?" ...Ladies and Gentlemen, I give you: "Software-defined Storage"! If I wrote a dictionary (which isn't on this aspiring author's list - rest assured) then I would reference other terms in the description: Software-defined Storage; see Storage Virtualisation or Storage Hypervisor ...these being the two most obvious phrases with which many of the names in the Software-defined Storage category have previously associated themselves. Some of the online "go-to" outlets summarise SDS as an evolving concept, and I wouldn't argue with that. No matter what you call it, the capability of the software with respect to how it can manage, aggregate and load-balance the storage is, as these outlets say, "evolving". Using a combination of disparate, distributed, new and legacy (featureless but cost-effective) hardware is an attractive concept. The idea of an abstraction layer is one that isn't alien to those in IT. After all, there are options to remove pretty much every ounce of intelligence from the underlying hardware (be it servers, switching, desktops etc.), so why not storage too? Please don't feel that my cynicism relates to the technology - far from it, and given the success that NCE have enjoyed to date in this (call it what you will) sector, we know that the benefits to you, the customer, are plentiful - both commercially and technically. The emergence and evolution of software that can optimise a distributed commodity storage resource is clear to see, creating a transparent layer that stores multiple copies of each data block, driven by protection policies.
The role that the software plays in this architecture is integral and greatly reduces the role (and associated features) that the storage needs to provide. It represents a perfect solution for businesses that are focused on driving down Operating Expenditure (OpEx) costs and maximising the efficiency of their existing hardware inventory.
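The "multiple copies driven by protection policies" idea above can be sketched in a few lines. This is a toy illustration of the placement logic an SDS layer performs, not any vendor's implementation; the device names are invented.

```python
import itertools

# Toy sketch of the software layer described above: place multiple copies of
# each data block across disparate commodity devices according to a protection
# policy (here, simply "keep N copies on N distinct devices").

def place_copies(block_id, devices, copies=2):
    """Pick `copies` distinct devices for a block, spread deterministically."""
    if copies > len(devices):
        raise ValueError("not enough devices to satisfy the protection policy")
    start = sum(block_id.encode()) % len(devices)   # stable starting point
    ring = itertools.islice(itertools.cycle(devices), start, start + copies)
    return list(ring)

devices = ["legacy-raid", "new-jbod", "cloud-tier"]
placement = place_copies("block-0042", devices, copies=2)
print(placement)  # two distinct devices chosen from the pool
```

Because the placement is driven by policy (the `copies` parameter) rather than by the hardware, old and new devices can be mixed freely - which is exactly the commercial attraction described in the text.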




Vendor Focus: DataCore Software (Gold Partner)
Founded: 1998
Headquarters: Fort Lauderdale, Florida
Portfolio: SANsymphony-V Storage Virtualisation Software, the Storage Hypervisor

Having worked with DataCore for over 10 years, we at NCE have seen their portfolio strengthen and their market share grow significantly as a result - to the point where the company now have over 10,000 customers worldwide. The key differentiator for DataCore when compared to the appliance-based alternatives is that the company are focused on producing software that is vendor independent and storage agnostic. Don't get me wrong, DataCore have technology alliances with vendors including Dell, Fujitsu and Huawei to name but a few, allowing pre-certified solutions that leverage the DataCore feature set to be offered by the respective vendors as a bundle with their hardware. Nevertheless, the core market remains the one where the software is provided by accredited partners (a select group that includes NCE) independent of any specific, proprietary hardware brand or product. SANsymphony-V runs on standard x86-based server hardware. Perhaps the jewel in the crown of the software is its ability to monitor I/O behaviour, determining frequency of use, and then dynamically move storage blocks to the appropriate storage tier - be that SSD, SAS HDD, SATA HDD or even out to the Cloud: true Auto-Tiering. The product also has one of the best storage reporting and monitoring engines in the business - producing real-time performance charts and graphs, heatmaps and trends that represent huge value to anyone tasked with managing storage. Features including Thin Provisioning, Replication, Load Balancing, Advanced Site Recovery and Centralised Management are integral to the product - a product that continues to capture the attention of those tasked with storage consolidation and cost efficiency.
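The auto-tiering idea described above - watch I/O frequency, then promote hot blocks and demote cold ones - can be sketched as follows. The thresholds and tier names are invented for illustration; the real SANsymphony-V logic is, of course, far more sophisticated.

```python
# Minimal sketch of heat-based auto-tiering: count how often each block is
# touched, then map that heat to a storage tier. All numbers are hypothetical.

TIERS = ["ssd", "sas_hdd", "sata_hdd", "cloud"]  # fastest to slowest

def choose_tier(io_count_per_hour):
    """Map an access frequency to a tier (illustrative thresholds only)."""
    if io_count_per_hour >= 1000:
        return "ssd"
    if io_count_per_hour >= 100:
        return "sas_hdd"
    if io_count_per_hour >= 10:
        return "sata_hdd"
    return "cloud"

# Observed heat per block over the last hour (made-up sample data):
heat = {"blk-a": 5000, "blk-b": 250, "blk-c": 3}
placement = {blk: choose_tier(n) for blk, n in heat.items()}
print(placement)  # {'blk-a': 'ssd', 'blk-b': 'sas_hdd', 'blk-c': 'cloud'}
```

Run periodically, a loop like this is what lets frequently read blocks migrate onto flash while dormant data drifts down to cheap capacity - the behaviour the text calls "true Auto-Tiering".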



Customer Story:

Kirklees College in Huddersfield is a leading further education college, offering its 20,000 full and part-time students one of the largest nationwide selections of education and vocational courses from its new purpose-built campus in the heart of Yorkshire. Part of Kirklees College's success in being at the forefront of the education sector lies with the flexible, always-on and ready-to-expand, software-based IT infrastructure delivered through a fully functioning Storage Area Network (SAN). The decision to move towards software defined storage actually occurred many years ago - well ahead of the mainstream rush - to combat incidents of outages of their virtual servers; this was achieved through the deployment of DataCore software. Then, as the College expanded and joined forces with neighbouring Dewsbury College, the storage virtualisation platform continued to provide scalability and a robust platform for expansion and resilience. The move to a new campus provided not only a modern data centre featuring the latest advances in data centre innovation, but also allowed for a refresh of the storage infrastructure under the guidance of the College's trusted storage partner, NCE. As a DataCore Gold Partner, NCE recommended an upgrade of the original SAN infrastructure to new storage hardware combining the Nexsan by Imation storage family together with an upgrade to DataCore's SANsymphony-V software platform.




“Essentially, we were looking to build upon what had already been achieved using DataCore as a platform” commented Jonathan Wilkinson, Head of IT, Kirklees College. “The pace of change facing education is nothing short of a revolution in the way we facilitate our students. What needs to underpin this change is a watertight, expandable system that allows us to grow capacity as needed and keep applications highly performant on a continuous basis. SANsymphony-V delivers the high-end storage services we need today and provides the flexibility for further growth in the future.” The resultant solution comprises onsite mirrored DataCore SANsymphony-V10 nodes based on Dell PowerEdge R710 servers in the data centre, with a third synchronous node residing off-site, approximately 10km away at the Dewsbury campus, for Disaster Recovery purposes.


The College also utilise DataCore's Auto-Tiering feature to ensure that the most appropriate and efficient element of the Nexsan by Imation storage is served to each application. For example, files from FileShare are allocated to the lower-performing legacy SATAbeast architecture. In tandem with this, the College's critical accounting data (interrogated from the SQL Server database) is auto-tiered and promoted to a higher-performing storage tier housed in the Nexsan by Imation E60. The result is an architecture delivering high-performance transactional processing across the College. Effective and automated allocation of data to the most appropriate store continues to lower the cost per TB at the College. Jonathan summarises the result: “It is true that lots of solutions claim to offer resilience, but there are few solutions on the market that could offer us the total peace of mind that the DataCore based solution offers, and has done so for many years. Once installed, SANsymphony-V is a transparent software layer that you can fine tune as you go along without fear of failure, downtime or a dramatic overhaul to the infrastructure or College budget. We wouldn’t face the future without it.”




NCE Professional Services
I have had many a conversation with a vendor where their frustration is that their business doesn't own the Intellectual Property (IP) that they promote and sell. Their destiny is, largely, out of their control. They are reliant on others. One of the key differentiators of NCE is that our IP is very much our own. Fundamentally this is because we don't sell a product; our business is based on providing solutions and the services that customers require to accompany them. Customer demand has been the catalyst for a wide variety of the services that we now offer, and this continues to be the case - thankfully, because we are not a "no" company and have always been open to ideas, suggestions and opportunities. And yes, it is fair to say that some of those more obscure ideas, suggestions and opportunities have evolved into services that now feature in our portfolio. There are typically two key limitations in most IT environments that drive the need for Professional Services: having the time, and having the right tools or skills to do what is required. Subsequently it is our objective to ensure that we have the resource available that you require, with the right skills and tools to meet your business need. That said, we work in an industry where no two customers have the same environment, and as a result we have to ensure that our personnel are multi-skilled and can access the information and support required whilst on-site working with you. Thankfully, our pedigree as a support organisation offering "follow the sun" helpdesk services means that our skillset is a global one rather than just localised resource.

Contact NCE:
Call: +44 (0)1249 813666
info@nceeurope.com
www.nceeurope.com


So, what can we do? Many of the Professional Services that we offer result from customer requests and engagements, so please don't let us take the credit for any that you see and think "Oh, that's a good idea..."

Holiday/Absence cover - On-site
People with the right storage skills can be hard to find, and it can be a risk to have someone without those skills monitoring and managing your storage environment, even if it is only for a week or two. NCE can provide that peace of mind, with the correct skills to meet the business need.


On-site Storage Assessment & Health check
There's a lot to be said for an independent pair of eyes looking at your storage infrastructure. NCE can provide skilled personnel to come to site and identify storage bottlenecks or hot spots, signs of impending failures, firmware mismatches, or any vendor recommendations or modifications in line with their best practices. In addition, a summary report of our findings is an optional part of this service.

Remote Storage Assessment & Health check
This is one of our more popular services. Subject to the relevant access privileges being provided, we can remotely dial in to access and monitor your storage estate (typically on a monthly or quarterly basis) and provide a summary email outlining our findings and any areas of concern.

Storage Capacity and/or Performance Assessment
The holy grail for the majority of customers that we talk to is to gain some sort of appreciation and understanding of their capacity growth and performance. We have the tools to monitor this and, over a period of time, provide an accurate representation of this information.

Storage Migration & Decommissioning Service
It's no secret that storage vendors don't like you to move away from their technology, especially if you are moving to a competitor's product, and on this basis they can make the transition and migration a very difficult process.

Thankfully, this is where NCE, as a vendor-neutral organisation, can help. We can migrate the data, conduct a secure and certified data destruction service on the legacy storage, and we may even offer you a buy-back value for your old equipment too!

Storage Installation & Configuration Service
We have been engaged on numerous occasions where the partner or vendor has sold some hardware and/or software and subsequently not had the ability to configure or install it. If you find yourself in this situation then please contact NCE to see if we can help.

Storage Disaster Recovery/Failover Verification Test
Another service born from a customer request! If you have a DR site or setup that needs to be tested and proven, NCE can assist and independently document/verify that the Disaster Recovery environment is correctly configured and can/will act as a failover if required.

Solution Training & Overview Service
Time is something that you and your team aren't always given, especially when training is required. We have been told that vendor training courses have a habit of spending 75-80% of the time teaching people what they already know or information that is irrelevant to their environment. We can deliver a bespoke, on-site training session that covers the relevant information and skills required by your team.




Hyper-Convergence
No two customers are the same. However, it is fair to say that we have customers who like to have someone who owns their entire environment and infrastructure - a single contact point or vendor. In contrast, we have those customers who like to work with specialists (such as NCE) in their respective fields and combine these independent strengths to meet the business requirements. If you count yourself in the first category then Hyper-convergence is something that will have caught your attention. The term represents a software-centric architecture that tightly integrates compute, storage, networking and virtualisation resources across a commodity hardware platform supported by a single vendor. The objective of a Hyper-converged solution is to offer a modular scale-out capability, simplifying the process and introducing uniformity.


From a vendor perspective, some established names have jumped onto the Hyper-converged bandwagon, redefining their messaging to encompass the latest industry buzzword. Others are new names gathering momentum and customers, representing the emergence of the concept. The challenge as we see it is that it is rare to find a customer looking for a complete replacement of their entire compute, storage, networking and virtualisation infrastructure. Traditionally, each of these categories is addressed independently and on a project basis. The fear of trying to address every challenge in a single sweep often outweighs the desire to do it. Greenfield sites, start-ups and infrastructures associated with takeovers are the exception to this rule - with more flexibility, and less significance attached to the associated risk, in these environments. Perhaps Hyper-convergence will become the solution for many in the future? Our role at NCE is to remain independent and keep an eye on such emerging platforms. Rest assured we will continue to monitor the uptake - who knows, a Little Book of Hyper-convergence may be required in the future!




Vendor Focus: Cisco Systems
NASDAQ: CSCO
Founded: 1984
Headquarters: San Jose, California
Portfolio: Networking & Security technology

Hardware in the Data Centre has traditionally been layered, carved into areas of speciality - Servers, Networking and Storage; there has almost been an unwritten law in place that each aspect will be seen as an independent entity. However, the demand for rack space in the data centre, and the improved hardware density required to meet it, has seen the emergence of firstly the "blade server" chassis and then the evolution of the unified compute platform. One of the biggest and most respected names in one of the aforementioned hardware "layers" (Networking) was first through the door on the idea of consolidating the layers, as Cisco introduced the Unified Computing System (UCS). UCS optimises the horsepower of the hardware and provides a scalable platform that is simplified into a compact design. The UCS customer base, and portfolio, has grown significantly since it was first launched in 2009. Within the UCS family you will find the B-Series (Blade Servers), C-Series (Rack Servers), M-Series (Modular Servers) and the UCS Mini. The E-Series are targeted at the branch office, complemented by NCEs... hang on a minute, that's us? Not in this case - in UCS terms it means "Network Compute Engines"! Cisco UCS Manager software is fundamental to the ease of use of the technology. Each compute node has no set configuration. As an example, MAC addresses, UUIDs, firmware and BIOS settings are all configured in UCS Manager in a Service Profile and applied to the servers. The result is that the resource can be swiftly provisioned and allocated without the somewhat cumbersome approach of a legacy, fragmented architecture. NCE are a Cisco Select Partner and we would welcome the opportunity to discuss and present (both technically and commercially) UCS to you should you have an interest in the technology.
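The stateless Service Profile idea described above - identity living in the profile rather than in the blade - can be sketched as follows. The field names and values are invented for illustration and are not the actual UCS Manager schema.

```python
# Toy illustration of a stateless compute node: the identity (MACs, UUID,
# firmware/BIOS settings) lives entirely in the Service Profile, so it can be
# applied to any free blade. All field names and values are hypothetical.

service_profile = {
    "name": "web-tier-01",
    "uuid": "0f6f4b2e-0000-0000-0000-000000000001",
    "macs": ["00:25:B5:00:00:1A", "00:25:B5:00:00:1B"],
    "bios": {"turbo": True, "hyperthreading": True},
    "firmware": "4.0(4b)",
}

def apply_profile(profile, blade):
    """Associate a profile with a bare blade, returning the provisioned node."""
    node = dict(blade)      # the blade itself carries no fixed identity
    node.update(profile)    # identity comes entirely from the profile
    node["state"] = "associated"
    return node

blade = {"slot": 3, "model": "B200 M4"}
node = apply_profile(service_profile, blade)
print(node["state"])  # associated
```

Because the blade contributes nothing but raw compute, the same profile could be re-applied to a different slot after a hardware failure - which is the rapid re-provisioning benefit the text describes.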




Maxta

Vendor Focus: Maxta
Founded: 2009
Headquarters: Sunnyvale, California
Portfolio: MxSP hyper-convergence software

Such is the desire to leverage the sale of "tin" in the world of IT, it is rare to find a feature-rich complementary software solution that isn't tied to a proprietary hardware platform. However, in Maxta we have exactly that, and perhaps it comes as no surprise that this potential has been recognised and supported by some of the most respected top-tier venture capital firms. Maxta MxSP software provides organisations with the flexibility to hyper-converge any x86 server, any combination of storage devices, and any compute abstraction layer, eliminating the need for complex, proprietary and premium ("expensive") hardware investment. The simplicity of Maxta's VM-centric solutions reduces IT management overhead, lowering operational and capital costs whilst delivering hyper-scale, enterprise-level availability services and capacity optimisation. Perhaps the easiest way to position the distinguishing Maxta MxSP features is to summarise them in a few bullet points:

■ Use any x86 server, whether it is a brand name or a "white box";

■ Run on any server model, up to and including the latest and greatest generation;

■ Run on any hypervisor;

■ Scale compute and storage independently;

■ Use mixed drive types such as Flash, SSD and spinning disk in any configuration;

■ Global name-space;

■ Provides local mirroring and local replication;

■ Offers Metro Cluster support;

■ Includes inline compression and deduplication;

■ Includes space reclamation;

■ Features highly efficient snapshots and clones;

■ Ability to co-locate VMs and their associated data.

Curious? As a Maxta partner, NCE would welcome the opportunity to demonstrate and discuss the MxSP solution in more detail. Please contact us to arrange a visit or Webex.




Hyper-convergence: Who's Who?





Storage Interface Guide

SATA (Serial Advanced Technology Attachment)
SATA 1.0: 1.5 Gbit/s
SATA 2.0: 3 Gbit/s
SATA 3.0: 6 Gbit/s
SATA 3.2 (SATA Express): 16 Gbit/s

SAS (Serial Attached SCSI/Small Computer Systems Interface)
SAS-1: 3 Gbit/s
SAS-2: 6 Gbit/s
SAS-3: 12 Gbit/s
SAS-4: 24 Gbit/s

iSCSI (Internet Small Computer Systems Interface)
iSCSI traffic traditionally travels over an Ethernet network, meaning that the following apply:
1GbE: 1 Gbit/s
10GbE: 10 Gbit/s
40GbE: 40 Gbit/s


FCoE (Fibre Channel over Ethernet)
As the name suggests, this traffic travels over an Ethernet network, with most of the offerings on the market currently based upon:
10GbE: 10 Gbit/s

Fibre Channel
1Gb: 0.85 Gbit/s
2Gb: 1.7 Gbit/s
4Gb: 3.4 Gbit/s
8Gb: 6.8 Gbit/s
16Gb: 13.6 Gbit/s
32Gb: 27.2 Gbit/s
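The usable Fibre Channel rates in the table above follow directly from the line rate multiplied by the encoding efficiency: 8b/10b encoding (80% efficient) up to 8Gb FC, and 64b/66b encoding (~97% efficient) for 16Gb and 32Gb FC. A short worked calculation reproduces every figure:

```python
# Reproduce the Fibre Channel table: usable rate = line rate x encoding
# efficiency. 8b/10b encoding carries 8 data bits in every 10 line bits;
# 64b/66b carries 64 in every 66.

ENCODING = {"8b/10b": 8 / 10, "64b/66b": 64 / 66}

# Generation -> (line rate in Gbaud, encoding scheme)
FC_GENERATIONS = {
    "1Gb":  (1.0625, "8b/10b"),
    "2Gb":  (2.125,  "8b/10b"),
    "4Gb":  (4.25,   "8b/10b"),
    "8Gb":  (8.5,    "8b/10b"),
    "16Gb": (14.025, "64b/66b"),
    "32Gb": (28.05,  "64b/66b"),
}

def usable_gbit(gen):
    rate, enc = FC_GENERATIONS[gen]
    return round(rate * ENCODING[enc], 2)

for gen in FC_GENERATIONS:
    print(gen, usable_gbit(gen))
# 1Gb 0.85, 2Gb 1.7, 4Gb 3.4, 8Gb 6.8, 16Gb 13.6, 32Gb 27.2
```

This is also why 16Gb FC delivers slightly more than double 8Gb FC: the jump to 64b/66b encoding reduced the overhead as well as raising the line rate.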

Commonly used Fibre Channel transceivers

SFP:
850nm (short range): 150m at 4.25 Gbit/s (FC)
1310nm (long range): 40km at 1.25 Gbit/s (FC)

SFP+:
850nm (short range): 300m at 10.0 Gbit/s (FC)
1310nm (long range): 10km at 8.0 Gbit/s (FC)

For SCSI, SAS or Fibre Channel cables, terminators, GBICs or any other consumables, please don't hesitate to contact NCE: +44 (0)1249 813666




SMR: Shingled Magnetic Recording
I have attended many a meeting where I have sat alongside what I term a "flash evangelist" as they've questioned the future of the Hard Disk Drive (HDD), banging the "SSD is the future" drum as loud as they can. However, those in the HDD Research and Development team at Seagate had an ace up their sleeve in the form of Shingled Magnetic Recording (SMR) technology, which has added a new chapter to the HDD storybook. It is worth outlining that SMR does NOT displace the performance play of SSD; what it does provide is a high-capacity, low-cost storage medium that is perfectly suited to environments that require a bulk storage target, or as some have termed it, "digital preservation".

In truth, I wasn't aware of the term "shingled" until recently. In fact, if someone had told me a year ago that they'd got a shingled hard drive I'd have sent them to the doctors. Thankfully, my knowledge on this subject has progressed to the point where it has merited inclusion in the Little Book. Shingled is a term that represents overlapping, traditionally in the context of roof tiles. It apparently derives from the Latin word "scindula" (this is probably the only time you will ever find me referencing Latin in the Little Book, I promise!), meaning "a split piece of wood". Hopefully you now have an image in your head of the way that roof tiles overlap (think of a church spire - that works for me), and if you can apply this image to the layout of the data tracks written to a Hard Disk Drive then you are well on the way to understanding how SMR technology works. By overlapping the data tracks, SMR increases the areal density through an improved Tracks Per Inch (TPI) ratio when writing data to the drive. The key benefit of SMR: increased capacity on an already aggressively priced (cost per GB) medium.




Helium Drives and what's next for HDD?
We touched on this topic briefly in the last edition of the Book, and since then the market for this technology has (excuse the pun) taken off. The concept is that the enclosure surrounding the drive mechanism is filled with helium instead of air. This reduces the friction and vibration on the disk platters, resulting in a lower power requirement for the drive. Less vibration means the platter density can be improved, and an increased number of platters provides an increased capacity. If you've read the previous section on SMR technology then you may be thinking "why don't they simply combine the two?" The answer is that they are, and the arrival of the 10TB SMR helium drive represents the first in this new generation. The next breakthrough in HDD technology looks as though it will come in the form of Heat Assisted Magnetic Recording (HAMR), using a laser to change the magnetic properties of the media for a short period, allowing writing to take place in a far more efficient manner. Western Digital have already demonstrated HAMR, albeit not in a mass-production sense. However, we should also factor in that other mooted "breakthroughs" in HDD technology, such as Two Dimensional Magnetic Recording (TDMR) and Thermally Assisted Magnetic Recording (TAMR), haven't materialised as yet - so let's focus on the here and now rather than the crystal ball of storage.




Vendor Focus: Quantum Corporation
New York Stock Exchange: QTM
Founded: 1980
Headquarters: San Jose, California
Portfolio: Object Storage Solutions, Tape Drives & Automation, Deduplication Appliances, File System and Archive Solutions

Achieving storage density is a well-trodden path, with the challenges far bigger than simply how many drives you can fit in the physical space available. There are other considerations: cooling of the drives (with operating temperatures paramount to reliability) and vibration being the most obvious of the set. Tiered storage solutions with a unified and scalable architecture have also reset the rulebook, especially when applying the performance drive (2.5" Small Form Factor) and the capacity drive (3.5" Large Form Factor) to this formula. The Quantum QXS Primary Storage portfolio features the Ultra 48 - part of the 4004 Series - providing incredible storage density with support for 48 SFF (2.5") drives in a mere 2U of rack space. This succeeded in "turning heads" in the storage industry, offering resilience (with dual "active-active" controllers), a wide variety of drive options - with support for SSD, 15k, 10k and 7.2k rpm drives - and flexible connectivity through the interchangeable ports of the controllers (allowing Fibre Channel, iSCSI or SAS presentation to the host or fabric). This has been followed by the Ultra 56, again part of the 4004 Series, supporting 56 LFF (3.5") drives in a 4U rack architecture. By combining the Ultra 48 and Ultra 56 (interconnected through SAS ports) you have a solution that supports 104 drives in 6U, combining performance and capacity: a Unified Solution for Tiered Storage requirements.
The 4004 Series, as with the other members of the AssuredSAN portfolio, also has the capability to provide Snapshots (up to 1,000 snapshots with AssuredSnap), Mirrors (up to 1,024 volume copies with AssuredCopy) and Remote Replication (with the AssuredRemote feature), all at the controller layer without the need for any additional hardware.




Roadmaps for HDD As mentioned in previous editions of the Book, gaining access to this information (and more importantly the accuracy of what you are then told) is perhaps one of the biggest challenges in the storage industry. We tend to believe what we’ve seen and what is actually published openly rather than the vapourware that can feature heavily on corporate slide decks. On that basis, here’s what we can categorically say exists and will exist from an HDD perspective (not a huge change from what we shared in the previous edition but with Helium drives and SMR drives now available we expect those to have an effect on the information provided below before too long...): Drive Capacity

2.5" Small Form Factor (SFF)
SAS: 146GB (15,000 rpm); 300GB (10,000/15,000 rpm); 450GB (10,000 rpm); 600GB (10,000/15,000 rpm); 900GB (10,000 rpm); 1.2TB (10,000 rpm); 1.8TB (10,000 rpm)
SATA/NL SAS: 500GB (7,200 rpm); 1TB (7,200 rpm); 2TB (5,400 rpm)

3.5" Large Form Factor (LFF)
SAS: 600GB (15,000 rpm)
SATA/NL SAS: 750GB, 1TB, 1.5TB, 2TB, 3TB, 4TB, 6TB, 8TB and 10TB (all 7,200 rpm)




Vendor Focus: Nexsan by Imation
NASDAQ: IMN
Founded: 1999 (in Derby, UK)
Headquarters: Oakdale, Minnesota, USA
Portfolio: Manufacturer & Developer of Hybrid, High Density and Archive Storage

The long-standing relationship between NCE and Nexsan (now part of Imation), and the associated growth of both businesses during that period, isn't a coincidence. We have both established that quality, reliability, density, scalability and continuity are fundamental to what you, the customer, seek in a storage solution. By meeting these objectives we have been able to challenge the more recognised brands who don't always meet the criteria when you look beyond the badge. Initially Nexsan came to market with catchy product names like the ATAboy (which became the SATAboy), the ATAbeast (which became the SATAbeast), the SATAblade and ATAbaby. These were all notable for their storage density and also for the savings on power and cooling provided through the ability to spin down the drives using AutoMAID technology. Commercially, the cost per GB was also extremely aggressive when compared to those aforementioned "recognised brands". This template enabled Nexsan to build a brand name, reputation and customer base that remains loyal to this day. Fundamentally, the same principles still apply to the E-Series from Nexsan. Within the E-Series V/VT family you will find the following:

E18V: 18 x LFF (3.5") drives in 2U of rack space
E32V: 32 x SFF (2.5") drives in 2U of rack space
E48VT: 48 x LFF (3.5") drives in 4U of rack space
E60VT: 60 x LFF (3.5") drives in 4U of rack space

A choice of presentation including 1Gb iSCSI, 10Gb iSCSI, 16Gb Fibre Channel and 6Gb SAS coupled with support for either all or a mixture of drive types (SSD/15k/10k/7.2k) positions the E-Series as an ideal storage platform.




RAID: The Levels
I've faced some challenges in my life, but trying to make RAID levels an interesting subject has to be up there with the best of them. So roll with me on this one and we'll get through it together! You never know - between us we may find it knowledge that proves useful somewhere down the line... So, let's focus on the term itself: RAID, meaning Redundant Array of Independent (or previously Inexpensive) Disks. The key word in the whole phrase is Redundant, as this implies that a failure can occur and the Disk Array will still remain operational. Although RAID 50 is a RAID level offered in the storage industry, I am thankful to say that there aren't 50 RAID levels to be covered in this section. In truth there are only a few that are typically used or offered by RAID manufacturers today, and some manufacturers (let's use NetApp as an example) have their own exclusive RAID level - RAID-DP - just to be different! Here's a snapshot of what each RAID level provides:

RAID-0: Block-level striping; no parity
RAID-1: Mirroring; no parity
RAID-2: Bit-level striping; dedicated parity (on a single drive)
RAID-3: Byte-level striping; dedicated parity (on a single drive)
RAID-4: Block-level striping; dedicated parity (on a single drive)
RAID-5: Block-level striping; distributed parity (can tolerate one drive failure)
RAID-6: Block-level striping; double distributed parity (can tolerate two drive failures)
RAID-10 (1+0): Mirroring + block-level striping; no parity
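The practical trade-off behind the commonly used levels - usable capacity versus tolerated drive failures - can be sketched with the standard textbook formulas, assuming n drives of equal size:

```python
# Usable capacity and guaranteed fault tolerance for the commonly used RAID
# levels, given n equal-sized drives. These are the standard formulas; real
# arrays also reserve space for metadata and hot spares.

def raid_usable(level, n, size_tb):
    """Return (usable TB, drive failures always tolerated)."""
    if level == "RAID-0":
        return n * size_tb, 0                  # striping only: no redundancy
    if level == "RAID-1":
        return size_tb, n - 1                  # every drive holds a full copy
    if level == "RAID-5":
        return (n - 1) * size_tb, 1            # one drive's worth of parity
    if level == "RAID-6":
        return (n - 2) * size_tb, 2            # two drives' worth of parity
    if level == "RAID-10":
        return (n // 2) * size_tb, 1           # guaranteed one; more if the
    raise ValueError(level)                    # failures hit different mirrors

# Eight 4TB drives under each level:
for level in ["RAID-0", "RAID-1", "RAID-5", "RAID-6", "RAID-10"]:
    usable, tolerated = raid_usable(level, 8, 4)
    print(level, usable, tolerated)
# RAID-5 -> 28TB usable, one failure; RAID-6 -> 24TB usable, two failures
```

The sketch makes the "Redundant" trade explicit: RAID-6 surrenders one extra drive of capacity compared to RAID-5 in exchange for surviving a second failure during a rebuild.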




Under the hood of... a Hard Disk Drive On the outside, an HDD appears relatively simple - with a case and a circuit board. However the technology has been described as one of the most complex electronic devices in use. The PCB (Printed Circuit Board) is the most visible part of the drive and this comprises of chips that ■■ Control the speed of the drive; ■■ Control the reading, writing and positioning; ■■ Talk to the host; ■■ Buffer for performance; ■■ and finally contain information such as; drive serial number, firmware etc. Once the enclosure is removed, the first thing that you will notice are the circular silver disk/s that take up the majority of the internal space in the drive. These are known as platters. Usually these are ceramic with a metal coating, although older drives used both glass and metal media. These platters hold all of the data written to the drive. The metal layer is magnetic and is formatted into microscopic servo tracks. The heads within the drive can read these tracks to know their precise location. In the centre of the platter is the Spindle Motor, responsible for spinning the platter at high speed. This is controlled by the aforementioned “speed control chip” on the PCB. Below these platters the “Head Block Assembly” (HBA) can be found. This is where the heads responsible for all the reading and writing, known as the armature, are located. When the drive is switched on, the platters spin up to their defined speed. Once achieved the PCB tells the armature to move the heads out onto the platter. This microscopic movement is all controlled via the voice coil. The heads move out across the platter, and due to the speed of the disks and the size of the heads, they literally fly above the surface. The distance between the platter and the heads is microscopic with the result that even a single speck of dust or a fingerprint on the platter will cause the heads to crash into the platter. 
The heads pick up the magnetic signals and convert them to electrical signals, which are passed to the operational amplifier; the amplifier translates the signals to a more usable level and sends them back to the PCB via a very slender ribbon cable.
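One consequence of the spindle speed set by that speed-control chip is worth quantifying: on average, a head waits half a revolution for the requested sector to arrive underneath it. A quick illustrative calculation (the speeds shown are common drive classes, not figures for any particular model):

```python
# Illustrative spindle speeds only; real drives vary by model and class.

def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: the head waits, on average, half a turn."""
    ms_per_revolution = 60_000 / rpm   # one full revolution, in milliseconds
    return ms_per_revolution / 2

for rpm in (5_400, 7_200, 10_000, 15_000):
    print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
```

This is one reason faster-spinning enterprise drives (and, ultimately, SSDs with no moving parts at all) deliver lower access times.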


NCE Computer Group

RAID: Who’s Who?


Object Storage

Data can be managed in different ways, and we have already covered File Storage and Block Storage earlier in this publication. Object Storage represents an alternative to both of the aforementioned approaches by:
■■ abstracting the storage from the administrators and applications;
■■ collating the data into (unsurprisingly, given the name!) an object;
■■ encapsulating three core components: the data, metadata and a unique global identifier.
Within the object store the data and metadata are separated, allowing increased scalability and flexibility in the underlying storage platform. This approach, coupled with the hardware transparency that object storage provides, means that it is perfectly suited to unstructured data sets. The names that have grown and secured market share in the Object Storage sector include some of the more recognised brands in the industry and some "new kids on the block" from a storage perspective - as you will see from the vendors listed below:
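The three core components described above map naturally onto a tiny sketch: data keyed by a generated identifier, with metadata held in a separate structure. Everything here (the class name and methods) is invented for illustration, and deliberately ignores the distribution, durability and access control a real object store provides:

```python
import uuid

class TinyObjectStore:
    """Toy sketch of an object store: each object couples data, metadata
    and a unique global identifier; data and metadata are held separately."""

    def __init__(self):
        self._data = {}       # object id -> raw bytes
        self._metadata = {}   # object id -> dict of metadata

    def put(self, data: bytes, **metadata) -> str:
        oid = str(uuid.uuid4())   # the unique global identifier
        self._data[oid] = data
        self._metadata[oid] = metadata
        return oid

    def get(self, oid: str) -> bytes:
        return self._data[oid]

    def head(self, oid: str) -> dict:
        return self._metadata[oid]

store = TinyObjectStore()
oid = store.put(b"sensor reading 42", source="weather-station", unit="celsius")
print(oid, store.head(oid))
```

Note that the caller never chooses a path or a block address - the flat identifier is exactly the abstraction that lets object stores scale out across hardware.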

Vendor Focus:

Quantum Corporation
New York Stock Exchange: QTM
Founded: 1980
Headquarters: San Jose, California
Portfolio: Object Storage Solutions, Tape Drives & Automation, Deduplication Appliances, File System and Archive Solutions

Quantum continue to be one of the most recognised brand names in storage, having evolved from a business traditionally associated with disk and tape technology into an organisation providing solutions to address a wide variety of challenges. One such solution, "Lattus", is a next-generation object storage platform. The scale-out architecture of Lattus consists of three core components, or nodes: the Access Node (providing file system CIFS/NFS presentation and access), the Controller Node (the "brains" of the object storage solution) and the Storage Node (providing the underlying disk storage for the object data). Tagged as the "Forever Disk Archive", Lattus can scale to store hundreds of petabytes via a flat object namespace. It also supports data spread across geographically dispersed sites, providing huge flexibility for distributed businesses with remote offices. Peace of mind comes from Forward Error Correction (FEC) to protect data integrity, while support for the HTTP REST interface (including Amazon S3 support) allows the technology to integrate easily into your environment. Intrigued? If you'd like to find out more about the Lattus solution from Quantum, please don't hesitate to contact NCE in the first instance.


Data Protection: Back to the Future! As someone who grew up in the eighties, I am all for the retro style that reminds us of how things used to be. Fashionable names have a habit of returning, but who would have thought that two of the previously forgotten brands in the world of backup would return again? Yes; both Arcserve and Veritas are back! Sad as it may seem, I have a copy of the first ever Little Book - the Little Book of Backup from back at the turn of the century - and pages 28 & 29 cover Backup and Data Management Software, with Veritas and their NetBackup and Backup Exec software taking pride of place alongside Computer Associates with a product called Arcserve. These were the two dominant players of that generation, and if you were looking to protect Novell NetWare, Linux or Microsoft Windows physical servers (the mainstream operating systems of the day), the chances were that you'd be using one brand or the other.

How ironic that 15 years later Symantec announced the split of their business, with "Veritas Technologies Corporation" the new name for the information management unit of the company. Shortly after this announcement, private equity firm The Carlyle Group agreed an $8BN deal for Veritas Technologies Corporation, meaning that the new company will be completely independent of Symantec. Within the portfolio of Veritas Technologies Corporation sit the Backup Exec and NetBackup technologies. Does this represent a new lease of life for them? Watch this space...


Data Protection Software: Who’s Who?

Vendor Focus:

Arcserve

Founded: 2014
Headquarters: Minneapolis, Minnesota, USA
Portfolio: Unified Data Protection Software

It would be an insult to label Arcserve a new name in the storage industry; a product used by 43,000 end users worldwide across more than 50 countries certainly isn't one I would class as "new". Nevertheless, there have been some hugely significant and strategic changes, both in the product and in the company, that signal that Arcserve and the Unified Data Protection (UDP) software are here to stay.

“ Arcserve UDP is far more than what it was when it started life as a backup product, offering replication, high availability and source side global deduplication technologies (and of course, backup technology!) within one solution...”

Some of you may remember the name Arcserve (or perhaps Arcsolo, the baby brother of the product) from the days when it was the de facto backup platform for Novell NetWare under the stewardship of Cheyenne Software. Or perhaps you know the product that flew under the Computer Associates (latterly CA Technologies) banner from 1996 until recently, when Marlin Equity Partners acquired the business and formed Arcserve LLC?

With the new company came the new platform, in the form of Unified Data Protection (UDP), providing Assured Recovery for both virtual and physical environments. This has proved to be a key differentiator when compared to the competition, as "unified" protection bridging the virtual and physical estate is a rare, somewhat unique, attribute to have. Arcserve UDP is far more than the backup product it started life as, offering replication, high availability and source-side global deduplication technologies (and of course, backup technology!) within one solution. Easily configured data protection plans through the intuitive interface make the user experience far better than ever before.


The licensing model has also won many accolades from those considering the UDP solution. At its core sit four editions - Standard, Advanced, Premium and Premium Plus. The Standard edition provides image-based protection for file servers with tape migration. The key differentiator between this and the Advanced edition is that Advanced also includes image-based protection for application servers. The Premium edition provides all of the Advanced features along with file-based protection to both disk and tape, in addition to image-based protection. Finally, there's the Premium Plus edition, which includes all of the features that come with Premium and adds High Availability and application-level replication capabilities. Once you've decided on the UDP edition that suits you best, it's then simply a case of deciding whether you'd like to opt for per-CPU-socket or per-terabyte licensing - far easier than the incremental client, option and agent licences that have been a thorn in our side for so long.
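The edition ladder above reduces to a simple cumulative feature matrix. The sketch below encodes it exactly as described in this chapter - the feature names are invented shorthand for this example, and this is not official Arcserve licensing data:

```python
# Cumulative edition/feature matrix, as described in the text above.
# Feature names are illustrative shorthand, not Arcserve terminology.
EDITIONS = {
    "Standard":     {"image_file_servers", "tape_migration"},
    "Advanced":     {"image_file_servers", "tape_migration",
                     "image_app_servers"},
    "Premium":      {"image_file_servers", "tape_migration",
                     "image_app_servers", "file_based_disk_tape"},
    "Premium Plus": {"image_file_servers", "tape_migration",
                     "image_app_servers", "file_based_disk_tape",
                     "high_availability", "app_level_replication"},
}

def lowest_edition(required: set) -> str:
    """Return the lowest edition covering every required feature."""
    for name, features in EDITIONS.items():   # dicts keep insertion order
        if required <= features:
            return name
    raise ValueError("no edition covers the requested features")

print(lowest_edition({"image_app_servers"}))   # Advanced
print(lowest_edition({"high_availability"}))   # Premium Plus
```

The point of the cumulative structure is that a subset check against each tier, lowest first, is all the "which edition do I need?" question amounts to.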

Customer Story:

The Open University MK:Smart Research Initiative

The Open University's mission is to be open to people, places, methods and ideas. It promotes educational opportunity and social justice by providing high-quality university education to all who wish to realise their ambitions and fulfil their potential. Through academic research, pedagogic innovation and collaborative partnership, it seeks to be a world leader in the design, content and delivery of supported open learning.

The MK:Smart research initiative, which aims to support economic growth in Milton Keynes, has been led by The Open University. "Milton Keynes is one of the UK's fastest growing cities," explains Paul Alexander, Technical Operations Lead for the MK:Smart project. "Our challenge is in minimising strain on the infrastructure as the city grows." The MK Data Hub is a sophisticated data management platform that can process vast amounts of data relevant to city systems. It was primarily designed by The Open University and BT and is hosted in the heart of the city by the University of Bedfordshire, at University Campus Milton Keynes (UCMK). The Data Hub provides an infrastructure for acquiring, managing and sharing multiple terabytes of data sourced from city systems. It will hold data about energy and water consumption, transport, weather and pollution, sourced from satellite technology, sensor networks, social and economic datasets, social media and specialised apps.

"We need to ensure the data is constantly available for any project using the information feeds in their own application, and for the people running those applications," Paul explains. "Our team needs historical data to be available so we can analyse trends." With the initiative continually evolving and uncovering new sources to draw data from, MK:Smart needed a heterogeneous and scalable recovery management solution. "We needed a solution that could both back up and recover data over a SAN," Paul adds.


After extensive research into the available backup and recovery solutions, Paul and his team decided to implement Arcserve Unified Data Protection (UDP) in partnership with trusted partner NCE. "Arcserve UDP was the only single solution able to offer the support for a heterogeneous environment that we needed." A successful proof of concept with UDP demonstrated compression rates of 40 per cent and deduplication rates of 30 per cent. "We expect deduplication rates to increase further as we take more data on board," Paul explains. "Rates this high mean that we can retain backups for longer and access data and logs for longer - which is crucial for a research initiative where you need to analyse all issues that transpire."

MK:Smart's technical operations team implemented the core Arcserve UDP in less than three days; they then fine-tuned it to optimise backup and recovery capabilities across the diverse data stores. The solution safeguards data across twelve virtual servers and six physical servers, protecting not only data sourced from the city's systems, but also Microsoft Exchange mailboxes and Active Directory accounts, data produced by applications, file stores and web servers. Every Friday, Arcserve UDP automatically backs up the entire MK Data Hub in less than 30 minutes. Incremental backups run on weekdays at multiple intervals throughout the day and last no more than 10 minutes. Backups are replicated to a disaster recovery site elsewhere in Milton Keynes.

The Arcserve solution is helping to maximise the availability of the MK Data Hub, safeguarding the reputation - and ultimately the success - of the MK:Smart initiative. The technical operations team can now ensure the timely backup and recovery of data across the initiative's diverse and ever-expanding SAN-connected data stores. "Our recovery is as quick as our backup.
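For a sense of what those proof-of-concept percentages mean for capacity: if the compression and deduplication savings apply one after the other (an assumption for illustration - published rates are sometimes measured jointly), the combined effect can be computed directly:

```python
def stored_fraction(compression_rate: float, dedup_rate: float) -> float:
    """Fraction of ingested data actually written to backup storage,
    assuming compression and deduplication savings apply sequentially."""
    return (1 - compression_rate) * (1 - dedup_rate)

# The 40% compression / 30% deduplication figures quoted above
frac = stored_fraction(0.40, 0.30)
print(f"{frac:.2f} of each TB stored -> roughly {1 / frac:.1f}x more retention")
```

On these assumptions only about 42% of each terabyte ingested hits disk, which is why higher reduction rates translate directly into longer retention on the same hardware.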
Arcserve UDP has a unique ability to restore data over a SAN, which means it can recover our virtual machines as fast as it can back them up – in less than 10 minutes” says Paul. “The interface is set out logically and is a pleasure to use. It tells you clearly where any issues are so you can address them straight away.” Paul concludes, “With Arcserve UDP we can ensure the MK:Smart data remains continuously available to developers and citizens – which is critical to protecting the reputation and success of the initiative.”


Deduplication

This topic has gathered momentum as the Little Book has matured, and consequently this isn't the first time that it has been covered. One thing that is apparent when we talk to customers on this subject is confusion over the terminology and over the importance of where exactly the deduplication takes place. With that in mind, I have tried to snapshot some of the phrases associated with deduplication and explain what each means:

Inline deduplication looks for duplicate blocks of data as the data is ingested by the target device. This method requires less disk space than post-process deduplication because duplicate data is removed as it enters the system. SSD technology provides the performance required to deliver inline deduplication (something spinning disk has struggled to handle, which is why disk-based deduplication has traditionally been scheduled post-process - see below), so you will see this form of deduplication used in flash and hybrid systems (using SSD for the inline deduplication) to increase the capacity they can store.

Post-process deduplication also looks for duplicated data blocks and replaces them with a pointer to the first iteration of that block. Unlike inline deduplication, post-process deduplication doesn't begin processing backup data until it has all arrived at the backup target.

Source-based deduplication is the removal of duplicated data blocks from data before transmission to the backup target, resulting in reduced bandwidth and storage demands. It can, however, increase demands on the servers (the "source"), as the deduplication workload is shifted out to that layer.

Target deduplication is the removal of duplicate blocks from a backup transmission as it passes through an appliance sitting between the source and the backup target.

Global deduplication is a method of preventing duplicate blocks when backing up data to multiple distributed deduplication devices.
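The core idea common to all of these variants - store a block once, and store pointers thereafter - can be sketched in a few lines. This is a toy illustration of the inline approach with fixed-size blocks and SHA-256 fingerprints; the class and names are invented for the example, and real appliances use far more sophisticated (often variable-length) chunking:

```python
import hashlib

class InlineDedupStore:
    """Sketch of inline, block-level deduplication: each fixed-size block
    is hashed on ingest, and only previously unseen blocks are stored."""

    BLOCK_SIZE = 4096

    def __init__(self):
        self.blocks = {}   # sha256 digest -> block bytes (stored once)
        self.files = {}    # name -> ordered list of digests (pointers)

    def ingest(self, name: str, data: bytes) -> None:
        pointers = []
        for i in range(0, len(data), self.BLOCK_SIZE):
            block = data[i:i + self.BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # duplicate -> pointer only
            pointers.append(digest)
        self.files[name] = pointers

    def restore(self, name: str) -> bytes:
        return b"".join(self.blocks[d] for d in self.files[name])

dstore = InlineDedupStore()
dstore.ingest("monday.bak", b"A" * 8192 + b"B" * 4096)
dstore.ingest("tuesday.bak", b"A" * 8192 + b"C" * 4096)  # mostly unchanged
print(len(dstore.blocks))   # 3 unique blocks stored, not 6
```

Move the hashing loop to the client and you have source-based deduplication; run it on data already landed on disk and you have post-process - the mechanism is the same, only the location and timing change.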

Vendor Focus:

ExaGrid Systems, Inc
Founded: 2002
Headquarters: Westborough, Massachusetts
Portfolio: Disk-based Backup Storage Appliances with data deduplication

Using the tagline "Stress-free backup storage" is a brave thing to do, especially if you spend your working life interacting with those tasked with undertaking backups; stress typically comes as standard with the responsibility of protecting company data on a daily basis. However, with over 7,000 deployments worldwide and a strong list of recognised customer names and associated case studies, ExaGrid have established themselves as one of the leading names in the deduplication appliance sector, suggesting that there is substance to the tagline. ExaGrid appliances present themselves as standard NAS shares (using CIFS or NFS); the hardware is based on Intel quad-core Xeon processors, with RAID 6 + hot spare storage using enterprise-class SATA or SAS drives. The appliances use zone-level data deduplication across all received backups, meaning that only the changed bytes from each backup are stored instead of full copies. A dedicated landing zone within the appliance keeps the most recent backups in their full form for fast and immediate tape copy - other solutions only store deduplicated/dehydrated data that must be reassembled for every tape copy. The appliances are fully supported and certified by the major traditional backup applications (such as Arcserve UDP, or Veritas using the OpenStorage - OST - feature), VMware and Hyper-V backup utilities (such as Veeam), direct-to-disk SQL dumps, Oracle RMAN backups, and specific UNIX utilities such as tar. Thus the ExaGrid appliances can reduce the backup window, improve local restores and deliver instant VM recoveries.
And let's not lose sight of the "Grid" element of ExaGrid: in a distributed (grid) architecture comprising multiple ExaGrid appliances, the processor, memory, bandwidth and disk are aggregated, meaning that if an appliance fails, the backup jobs continue to run on the remaining, active appliances.



LTO-7

Pull up a chair and make yourself comfortable, I want to tell you a story... Once upon a time in Silicon Valley, California, there were three ambitious vendors who set out on a road to develop a brand new tape format. They wanted to make it an open standard that would allow tapes to be interchanged between each other's drives and drive down the cost of tape storage forever more. When it was born, the three vendors were very proud of their new technology. They decided to name it LTO-1, as this was the first generation of the new family. Years later, they have seen the first four slower, lower-capacity generations move out, and until recently the two more recent additions to the family, the brothers LTO-5 and LTO-6, have ruled the roost. However, the three vendors (let's call them the consortium for ease) recently announced the arrival of the next generation of the family: LTO-7. LTO-7 boasts a whopping 6TB of native (uncompressed) storage capacity and streams at around 300MB/s (uncompressed), making the older members of the family extremely envious. LTO-7 also shares a distinguishing feature with its two immediate predecessors: LTFS (the Linear Tape File System), something that none of the earlier generations benefited from. The consortium are very proud of their new arrival, and this has prompted them to outline their intention to bring a further three members into the LTO family in the coming years. And, with an NCE maintenance contract to support them, they will all live happily ever after. The End. If you are interested in LTO-7 technology, please don't hesitate to contact NCE for pricing. For more information on the LTO family, please visit: http://www.lto.org/
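As a back-of-the-envelope check on those headline figures, the time to stream a full tape at native speed is a one-line calculation (decimal units, as tape vendors quote them; real-world times depend on keeping the drive streaming):

```python
def hours_to_fill(native_tb: float, mb_per_sec: float) -> float:
    """Time to write a full tape at native (uncompressed) speed."""
    seconds = (native_tb * 1_000_000) / mb_per_sec   # 1 TB = 1,000,000 MB
    return seconds / 3600

# LTO-7 headline figures from the story above: 6TB native at ~300MB/s
print(f"{hours_to_fill(6, 300):.1f} hours")   # ~5.6 hours end to end
```

In other words, despite the big jump in capacity, the matching jump in transfer rate keeps a full end-to-end pass under six hours.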


Tape Storage: Who’s Who?

NCE continue to provide and support a wide variety of tape (typically LTO) based solutions. Many of the automated offerings currently on the market share a unified specification and architecture, and it is on this basis that we have constructed a tape automation matrix to represent what is available at the time of publishing this edition of the Little Book. We hope that this is of help to you; please contact NCE for more detail and a quote on any of the products listed below:

Maximum LTO drives supported   Maximum LTO cartridges supported   Size (Rack Units)
1 drive                        8 slots                            1U
1 drive                        9 slots                            1U
2 drives                       16 slots                           2U
2 drives                       24 slots                           2U
2 drives                       30 slots                           4U
2 drives                       40 slots                           3U
4 drives                       40 slots                           4U
2 drives                       41 slots                           5U
2 drives                       48 slots                           4U
3 drives                       50 slots                           6U
4 drives                       60 slots                           8U
5 drives                       80 slots                           6U
6 drives                       80 slots                           6U
6 drives                       114 slots                          10U
6 drives                       133 slots                          14U
8 drives                       170 slots                          16U
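Picking a unit from a matrix like this is just a filter over its columns. A throwaway sketch (the rows mirror the matrix, which deliberately lists generic configurations rather than named models):

```python
# (drives, slots, rack_units) rows mirroring the automation matrix above
LIBRARIES = [
    (1, 8, 1), (1, 9, 1), (2, 16, 2), (2, 24, 2), (2, 30, 4),
    (2, 40, 3), (4, 40, 4), (2, 41, 5), (2, 48, 4), (3, 50, 6),
    (4, 60, 8), (5, 80, 6), (6, 80, 6), (6, 114, 10), (6, 133, 14),
    (8, 170, 16),
]

def candidates(min_drives: int, min_slots: int, max_u: int):
    """Configurations meeting drive/slot needs within a rack budget."""
    return [row for row in LIBRARIES
            if row[0] >= min_drives and row[1] >= min_slots and row[2] <= max_u]

# e.g. at least 4 drives and 60 slots, in 8U of rack space or less
print(candidates(min_drives=4, min_slots=60, max_u=8))
```

Slot count drives retention (how many cartridges stay online), drive count drives parallel throughput, and rack units drive the hosting cost - the three columns are exactly the trade-off being made.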

Vendor Focus:

Quantum
New York Stock Exchange: QTM
Founded: 1980
Headquarters: San Jose, California
Portfolio: Tape Drives & Automation, Deduplication Appliances, File System, Object Storage and Archive Solutions

Having dug out the original first edition of the Little Book for a chapter earlier in this edition, I looked back at some of the names that were in the tape automation market when LTO first arrived: Advanced Digital Information Corporation (ADIC), ATL Products, Benchmark, Certance and M4 Data; whatever happened to them? Ironically, they were all bought by a company that remains in this sector to this day: Quantum. This is a great representation of the heritage and pedigree that Quantum have in developing and manufacturing reliable, scalable tape automation. As the leading name in enterprise and mid-range tape libraries, Quantum and their Scalar range of LTO-based products are able to meet all requirements, from the 2U rack-mountable 8-slot SuperLoader 3 at the entry level through to the i6000, scaling up to 75PB in the enterprise environment. Quantum's StorNext AEL Archives use automated, policy-based tiering combined with the cost-effective Scalar tape storage architecture to deliver petabytes of data, accessible to users through a simple file system interface - a long-term solution for digital archives.


Tape Storage Media Guide

LTO1   100GB Native Capacity, 15MB/sec (54GB/hr) uncompressed data throughput
LTO2   200GB Native Capacity, 35MB/sec (126GB/hr) uncompressed data throughput
LTO3   400GB Native Capacity, 80MB/sec (288GB/hr) uncompressed data throughput
LTO3   WORM variant available - same specification as above
LTO4   800GB Native Capacity, 120MB/sec (432GB/hr) uncompressed data throughput
LTO4   WORM variant available - same specification as above
LTO5   1.5TB Native Capacity, 140MB/sec (504GB/hr) uncompressed data throughput, with LTFS
LTO5   WORM variant available - same specification as above
LTO6   2.5TB Native Capacity, 160MB/sec (576GB/hr) uncompressed data throughput, with LTFS
LTO7   Up to 6.4TB Native Capacity, 315MB/sec (1.134TB/hr) uncompressed data throughput, with LTFS
LTO    Universal Cleaning Cartridge for LTO Drives

Vendor Focus:

Nexsan by Imation
NASDAQ: IMN
Founded: 1999 (in Derby, UK)
Headquarters: Oakdale, Minnesota, USA
Portfolio: Manufacturer & Developer of Hybrid, High Density and Archive Storage

Given my objective to translate IT jargon into terminology that you can relate to, let me try to draw a comparison when positioning the Assureon technology from Nexsan by Imation. We all have material things that are of value to us - treasured photographs, keepsakes, important documents (such as your birth certificate, driving licence or passport) - items that need to be kept in a safe or secure place. They are important to us and hold a great deal of personal value. Businesses also have electronic data that meets these criteria; such data needs to be kept in a safe or secure place. Hopefully you will now have an appreciation of the role that the Assureon plays in providing Secure Archive Storage. It is possible that your company is governed by regulatory or corporate compliance stipulating how long you have to retain specific data (and these sorts of rules don't just apply to the legal sector, believe me!). Not only must you retain it, you must also be able to prove that it has not been - and cannot be - tampered with or altered. Data integrity, protection, privacy, security, longevity and availability with full audit trails are the factors that differentiate the Assureon from other "open" storage platforms. Each file is fingerprinted and stored twice within the Assureon on separate RAID sets or, better yet, on separate Assureon archive storage systems that can be geographically separated if required. Background data integrity auditing uses the fingerprints and duplicate copies to ensure file authenticity without the need for administrative intervention.
Self-healing integrity checks and file availability audits, along with digitally signed metadata files and third-party secure time stamps, work together for the utmost protection of files within Assureon - providing the assurance you'd expect from a technology with "assure" in its name. Perhaps you would like help in identifying where your high-value data resides? Why not use the Assureon Data Discovery Tool, available for download at: http://nex.sn/nce
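The fingerprint-and-audit cycle described above can be illustrated in miniature. This is a hypothetical sketch, not Assureon's implementation: SHA-256 stands in for the fingerprint, and two in-memory dictionaries stand in for the two separate stores:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content fingerprint, as a secure archive might record on ingest."""
    return hashlib.sha256(data).hexdigest()

class TwinCopyArchive:
    """Sketch: every file is fingerprinted and written to two stores;
    a background audit re-hashes each copy against the recorded print."""

    def __init__(self):
        self.store_a, self.store_b, self.prints = {}, {}, {}

    def ingest(self, name: str, data: bytes) -> None:
        self.prints[name] = fingerprint(data)
        self.store_a[name] = data
        self.store_b[name] = data      # second, independent copy

    def audit(self, name: str) -> bool:
        expected = self.prints[name]
        return (fingerprint(self.store_a[name]) == expected
                and fingerprint(self.store_b[name]) == expected)

archive = TwinCopyArchive()
archive.ingest("contract.pdf", b"signed 2015-09-01")
print(archive.audit("contract.pdf"))            # True: both copies intact
archive.store_a["contract.pdf"] = b"tampered"   # simulate silent corruption
print(archive.audit("contract.pdf"))            # False: repair from copy B
```

Because the fingerprint is recorded at ingest, any later change - malicious or accidental - fails the audit, and the intact duplicate provides the self-healing source.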



What Could Possibly Go Wrong? I have yet to meet a vendor that can guarantee 100% (not 99.9999999%) uptime for their technology. No matter how resilient they try to make it, there is always a chance that it can fail. And when will it fail? Typically when you least need it to! It may seem strange to say, but without failures the NCE business would be a very different one. The foundation of our business has been built on service - providing an emergency response to site to fix technology failures in your hour of need. We are effectively offering an insurance policy for your data centre that includes the cost of the skilled labour and parts to get you operational once again. With over 30 years of providing this safety net, I thought it would be an idea to ask representatives of our engineering team what they tend to see fail when maintaining and repairing storage technology, more specifically hard disk drives:

"A fairly common fault is a dropped or knocked HDD causing the heads to crash. When the heads have crashed, the drive can no longer read the data or find the servo track. The most common symptom is a knocking sound coming from the drive - the sound of the heads moving uncontrollably across the platter. This can lead to scoring of the platter. If the heads break but do not come off, they scrape across the platter, and every scrape removes data, meaning that the data is, quite literally, turned to dust. Should this occur, data recovery is impossible."

"Another HDD fault is what is known in the industry as 'stiction'. This occurs when the heads have hit the platter but, instead of bouncing back up, they stick to it and the platter can no longer spin. Normally you would hear a high-pitched buzz as the voice coil tries to move the heads... and fails."

Please note the words of one of our lead engineers: "The key thing with any HDD fault is: if you have valuable data, STOP the drive and seek help. Continuing to power it on could be destroying the data." NCE are here to help; please contact us to discuss our service portfolio.


Vendor Focus:

Barracuda Networks
New York Stock Exchange: CUDA
Founded: 2003
Headquarters: Campbell, California
Portfolio: Security, Application Delivery and Data Protection Solutions

Watching from a distance, we saw Barracuda build their brand and reputation in security (with spam and virus firewalls and web filters) and application delivery (with load and link balancers). The acquisitions of Yosemite Technologies in 2009 (an established name in backup software) and C2C Systems UK in 2014 (an established name in archiving software) - both of whom have previously featured in the Little Book - have accelerated the Barracuda Networks name and profile in the storage market. If the logo looks familiar but you are not one of their 150,000 customers, you may not associate it with any of the above; perhaps you've seen it at a US airport, at a major golf tournament, on a Pro Tour cycling team or on an IndyCar - all places where the Barracuda Networks brand has featured. From the foundation of backup that the Yosemite acquisition brought, Barracuda have introduced a range of Data Protection products including the Barracuda Backup Virtual Appliance (Vx). This software solution can be deployed in virtual environments to leverage existing compute and storage infrastructure, using both inline target-based deduplication and compression to reduce storage and bandwidth requirements, and to scale easily as data grows. Licensing for Barracuda Backup Vx is provided through a per-TB capacity subscription. Replication is built into each Barracuda Backup Vx subscription; you can choose whether to replicate to the Barracuda Cloud (with the subscription including "unlimited" cloud storage and extended retention through offsite vaulting) or to a Barracuda Backup Receiver Vx hosted on your own virtual infrastructure for disaster recovery.



If it's a mixture of a physical and virtual estate that you are looking to protect, then the Barracuda Yosemite Server Backup software provides protection for Windows Server (2003, 2008 & 2012), Linux (Red Hat, SUSE and Ubuntu), VMware and Hyper-V VMs, along with MS Exchange and MS SQL application support.

“ ArchiveOne is designed for the “real world” where email data resides not just in Exchange or Office 365...”

Barracuda ArchiveOne (formerly the technology from C2C systems) provides comprehensive Archiving, eDiscovery and Information Management capabilities. The intuitive management console provides a single search capability across live and archived data for files and email. ArchiveOne is designed for the “real world” where email data resides - not just in Exchange or Office 365, but also in PST files or end user systems, where it is not feasible to archive all data before enforcing retention policies or conducting search and discovery exercises.

“ From the foundation of backup that the Yosemite acquisition brought, Barracuda have introduced a range of Data Protection products including Barracuda Backup Virtual Appliance (Vx).”

The software integrates seamlessly with Outlook and Outlook Web Access (OWA) for email management and allows users to access archived data from their desktop and mobile devices – even when they are offline. Transparency means that users access their files in exactly the same way regardless of whether a file has been archived or not.

Complementary technology (again a product that came through the acquisition of C2C) is offered in the form of Barracuda PST Enterprise. This will discover PST files on network servers and end user systems, and migrate this data to a secure location such as Exchange, Office 365 or Barracuda Message Archiver. If any of the Barracuda Networks technology has caught your eye or imagination, please don’t hesitate to contact NCE to discuss it in further detail.
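The discovery phase of such a migration is, conceptually, a recursive search for .pst files across a directory tree. A minimal sketch of that first step (the function name is invented for illustration; PST Enterprise itself does considerably more, such as identifying owners and migrating content):

```python
import os

def find_pst_files(root: str):
    """Walk a directory tree and list PST files with their sizes,
    mimicking the discovery phase of a PST migration."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for fname in filenames:
            if fname.lower().endswith(".pst"):
                path = os.path.join(dirpath, fname)
                hits.append((path, os.path.getsize(path)))
    return hits

# Example: survey the current directory before migrating mail centrally
for path, size in find_pst_files("."):
    print(f"{size:>12,} bytes  {path}")
```

Even this naive scan makes the point of the product: PST data is scattered, sized unpredictably, and invisible to the mail server until something goes looking for it.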


Customer Case Study: Park Resorts For many of us who have grown up in Britain, the mention of a childhood family holiday conjures up an image and memories that will live with us for the rest of our life. And today that experience lives on - with Park Resorts creating amazing memories by providing Caravan, Lodge or Chalet Holidays and Holiday Home Ownership at their 49 Parks in some of the most stunning coastal and lakeside locations in the UK. Founded in 2001, Park Resorts have grown the business by delivering the best British holiday experience for their customers. However, there is a lot more to the largest operator of caravan parks in the UK than perhaps you realised. The business is underpinned by an IT infrastructure that allows this distributed and dynamic organisation to stay ahead of the competition. Michael Kennedy is the IT Infrastructure Manager at Park Resorts and a key part of Michael’s role is providing a Data Protection strategy for the business. Michael turned to trusted partner NCE for the solution to meet this challenge. “Speaking to NCE it emerged that what we were asking for wasn’t a unique request; we wanted a software solution that could protect both our Physical and Virtual Server environment – and using two separate, independent, products to achieve the same goal simply didn’t make sense. Deduplication was also an important feature in what we wanted as this would mean we would get more out of our storage target, as we (as with every business) have a significant amount of duplicated files in our environment. We also needed the Bare Metal Recovery (BMR) functionality to help with migrations, upgrades and instant restores.” Unified Data Protection (UDP) from Arcserve offered the flexibility and feature set that Park Resorts required. Since investing in the solution from NCE, a Premium Partner of Arcserve, Michael has nothing but positives to say about the solution “The implementation went exceptionally well. 
I managed to move all of our virtual and physical servers over to the Arcserve UDP environment much quicker than expected and without issue. I have never put a product in so quickly, and when it worked it just worked. Ironically, I was the slowest bit! It



has impressed me and it has been a game changer for our business. We have gone from a very manual, incomplete data protection strategy with extreme vulnerability to an automated, complete data protection strategy by investing in Arcserve UDP." "We are backing up ten times as much data with Arcserve UDP in a quarter of the time it used to take. We couldn't do a full backup with the previous solution; we had to stagger the backups across numerous windows just to achieve one. The deduplication ratios we are getting are between 70-80%, and these have, without doubt, played a major part in the backup efficiency. I know it's running, I know it's doing its stuff and I can rely on it. With the previous software, I used to ask the service desk team to log the results of all the backups every day; we don't need to do that anymore. It has freed up resource as a result, saving the business hidden costs. Historically we only had a small window of recovery, whereas now we have the ability to roll back and deliver a point-in-time recovery to meet users' requests. It's fantastic!" The Arcserve UDP licensing model was also a positive when Michael investigated the solution: "The capacity licence provides us with the flexibility we require, as we're no longer restricted or tied to a specific licence that relates to a specific feature or application. The predecessor we used was notorious for making it incredibly difficult to understand what you had and what you needed. Trying to wade through their licensing was ridiculous; it was time-consuming and difficult to manage. I'm a big fan of the a la carte approach of this new licensing model, and I hope that other vendors adopt it - it will certainly make my life a lot easier if they do!"

61


Vendor Focus

Spectra Logic
Founded: 1979
Headquarters: Boulder, Colorado
Portfolio: Deep Storage solutions featuring disk and tape technology

There are few vendors that can profess to have been in the industry longer than NCE, but Spectra Logic is one of this select group. It may come as no surprise that the alliance between NCE and Spectra Logic is also a very long-standing one, and we have seen Spectra Logic evolve from an established name in manufacturing reliable, feature-rich tape automation (an area that still features heavily in their portfolio) to the market leader in “Deep Storage”.

Spectra Logic has engineering at its core. Chairman and CEO Nathan Thompson founded the company whilst studying electrical engineering and computer science at the University of Colorado in Boulder, where the company headquarters remains to this day, and this technically differentiating strength remains a fundamental attribute of the business.

The true attention-grabber in the Spectra Logic Deep Storage portfolio is the “Spectra Verde DPE” NAS offering (CIFS and NFS included), based on Shingled Magnetic Recording (SMR) drives (a topic covered in more detail elsewhere in this book). ZFS software RAID provides triple parity (RAID-Z3) with continuous data checksums for the disk storage. Verde DPE provides high-density bulk storage for reliable, high-capacity preservation of digital assets.
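As a hedged sketch of the RAID-Z3 arithmetic mentioned above (generic ZFS maths, not Spectra Logic's sizing tool): each RAID-Z3 vdev dedicates roughly three drives' worth of capacity to parity, so usable space scales with n - 3.

```python
# Generic RAID-Z3 capacity sketch (assumption: a simple n-3 model that
# ignores ZFS metadata and formatting overhead).

def raidz3_usable_tb(drives: int, drive_tb: float) -> float:
    """Approximate usable TB of one RAID-Z3 vdev of `drives` disks."""
    if drives < 4:
        raise ValueError("RAID-Z3 needs at least 4 drives (3 parity + 1 data)")
    return (drives - 3) * drive_tb

# e.g. a hypothetical 14-drive vdev of 8 TB SMR disks:
print(raidz3_usable_tb(14, 8.0))  # 88.0 TB usable (approx.)
```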

62


The naming conventions at Spectra Logic have always been intriguing; the Treefrog, Bullfrog and Gator historically featured in the portfolio together with the Verde – a very loud, green-coloured product that won’t be missed in your server rack. BlackPearl is the latest to emerge from the Spectra Logic random name generator (I thought that BlackPearl had a chart-topping dance music hit with “Naked in the Rain” in 1990, but it transpires that this was BluePearl!). I’m sure that Spectra have some logic in this somewhere.

To give it its full name, the BlackPearl Deep Storage Gateway from Spectra Logic sits in front of deep storage tape libraries and allows users to move data anywhere within their network using simple HTTP commands. Spectra Logic’s command set, DS3, is an extension of the S3 cloud storage interface (associated with Amazon), providing features for bulk data movement and increasing workflow efficiency. Combined with the BlackPearl Deep Storage Gateway, Verde DPE can deliver an end-to-end, cost-efficient storage platform, leveraging the DS3 integration with the Spectra Logic range of tape libraries to provide a combination of disk and tape protection. Expandable to 7.4 PB raw in a single rack, this represents an affordable solution for bulk storage and archive of large, unstructured files.

Please don’t hesitate to contact NCE if you would like to discuss the Spectra Logic technology in more detail.
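To make the "simple HTTP commands" remark concrete, here is a hypothetical sketch of what a plain S3-style object PUT looks like on the wire. DS3's bulk-transfer extensions are not modelled, and the bucket and key names are invented for illustration; real DS3 use would typically go through the vendor's own SDK.

```python
# Hypothetical sketch of an S3-style object PUT request, the kind of
# "simple HTTP command" the text refers to. Bucket/key names are invented;
# no DS3-specific bulk extensions or authentication headers are shown.

def build_put_request(bucket: str, key: str, body: bytes):
    """Return (method, path, headers) for a plain S3-style object PUT."""
    path = f"/{bucket}/{key}"
    headers = {"Content-Length": str(len(body))}
    return "PUT", path, headers

# The tuple below could be handed to http.client or any HTTP library:
method, path, headers = build_put_request("film-archive", "reel01.mov", b"...")
print(method, path, headers)  # PUT /film-archive/reel01.mov {'Content-Length': '3'}
```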

63



The Future... of Storage

For those of you who are long-standing readers of this book and have stuck with us from the previous editions, a big thank you for your loyalty. Over 50,000 copies have been circulated since the first edition was published, and I hope that the content continues to be of value to you in your role out in the “real world” (away from the datasheet “speeds and feeds”, magic quadrants and lab-tested results!).

It would be easy to get sucked into all the hype that flies around in our industry and try to convince you that the future will see a radically changed storage landscape with ground-breaking approaches and technology. Sometimes it is best to look back over the lifetime of this book before we look forwards. We have seen new technologies emerge and, on arrival, be heralded as the demise of the technology they set out to replace; ultimately, both have survived and coexist in the storage ecosystem, happily working alongside each other. We have seen what started out as Hierarchical Storage Management (HSM) become Information Lifecycle Management (ILM) and then morph into tiered archiving solutions – fundamentally all achieving the same objective, but keeping the marketing department active as the transition to new terminology took place. We have seen what was historically known as a centralised or consolidated storage environment become tagged as an “on-premise cloud storage solution”. Perhaps what we are actually seeing isn’t as revolutionary as it proclaims to be?

Nevertheless, the established components and developers at the foundation of the aforementioned storage ecosystem continue to improve the established technology. New technology does occasionally surface (3D XPoint is one to watch). And, in all honesty, that means the process of writing and editing this book is made somewhat easier, as the underlying template from the last edition remains valid.
So, this time next year will storage be any different to how it is this year? Of course it will, and the next edition will be taking shape to bring you up to speed on it. See you then!

64



Sales Buzzword Bingo – The Customer Sales Game!

If you have made it to this page of the book, you deserve to have a little bit of fun! Here’s something that may cheer up your day: the idea is that next time you are visited by a sharply dressed, straight-out-of-the-training-academy sales executive, you can (discreetly) open your Little Book to this page and take part in “Sales Buzzword Bingo”! Below is your checklist, and in keeping with the rules of bingo, you need to work towards ticking off every phrase on the card. Trade shows or vendor conferences can be very rewarding if you are looking to check these all off in one hit. Good luck!

The “Sales Buzzword Bingo” Card

Low hanging fruit
Win-win situation
Let’s touch base
Drop me a mail
Think outside the box
It’s on my radar
Nirvana
I think we need to park this
I don’t have the bandwidth
Bang for your buck
Take this offline
Winning hearts and minds
No brainer
Drill-down
Paradigm
Reach out
“X” as a Service
Blue sky thinking

In the interests of Health and Safety, please try to avoid standing up and shouting “House” or “Bingo” when you complete the card as it may confuse or frighten the visiting sales person.

65



Glossary of Terms

66

AIT – Advanced Intelligent Tape
AFA – All Flash Array
API – Application Programming Interface
ATA – Advanced Technology Attachment
BYOD – Bring Your Own Device
CAS – Content Addressed Storage
CDP – Continuous Data Protection
CIFS – Common Internet File System
CNA – Converged Network Adapter
CoD – Capacity on Demand
CPU – Central Processing Unit
D2D2T – Disk to Disk to Tape
DAS – Direct Attached Storage
DAT – Digital Audio Tape
DBA – Database Administrator
DLT – Digital Linear Tape
DR – Disaster Recovery
DSD – Dynamically Shared Devices
ECC – Error Correcting Code
eMLC – enhanced Multi-Level Cell (SSD)
FCoE – Fibre Channel over Ethernet
FTP – File Transfer Protocol
GBE – Gigabit Ethernet
GBIC – Gigabit Interface Converter
HAMR – Heat Assisted Magnetic Recording
HBA – Host Bus Adapter
HDD – Hard Disk Drive
IDE – Integrated Drive Electronics
IP – Internet Protocol
IPO – Initial Public Offering (share issue)
ISV – Independent Software Vendor
JBOD – Just a Bunch of Disks
LRM – Library Resource Module
LTFS – Linear Tape File System
LTO – Linear Tape Open
LUN – Logical Unit Number
LVD – Low Voltage Differential
MEM – Memory Expansion Module
MLC – Multi Level Cell (SSD)
NAND – Negated AND (Flash)
NAS – Network Attached Storage
NCE – National Customer Engineering
NFS – Network File System
NIC – Network Interface Card
nm – Nanometer (Fibre Channel)
OEM – Original Equipment Manufacturer
OSD – Object-based Storage Device
P2V – Physical to Virtual
PEP – Part Exchange Program
RAID – Redundant Array of Independent Disks
ROI – Return on Investment
RPM – Revolutions Per Minute
RPO – Recovery Point Objective
RTO – Recovery Time Objective
SaaS – Storage as a Service
SAN – Storage Area Network
SAS – Serial Attached SCSI
SATA – Serial Advanced Technology Attachment
SCSI – Small Computer Systems Interface
SFP – Small Form-Factor Pluggable
SLA – Service Level Agreement
SLC – Single Level Cell (SSD)
SLS – Shared Library Services
SMB – Server Message Block
SMR – Shingled Magnetic Recording
SSD – Solid State Drive
TAMR – Thermally Assisted Magnetic Recording
TB – Terabyte
TDMR – Two Dimensional Magnetic Recording
TLC – Triple Level Cell
UDO – Ultra Density Optical
VADP – vStorage API for Data Protection
VCB – VMware Consolidated Backup
VCP – VMware Certified Professional
VM – Virtual Machine
VSS – Volume Snapshot Service
VTL – Virtual Tape Library
VVols – VMware Virtual Volumes
WAFL – Write Anywhere File Layout
WEEE – Waste Electrical and Electronic Equipment
WORM – Write Once Read Many

Printed by Park Lane Press on FSC certified paper, using fully sustainable, vegetable oil-based inks, power from 100% renewable resources and waterless printing technology. Print production systems registered to ISO 14001: 2004, ISO 9001: 2008 and EMAS standards and over 95% of waste is recycled.

67


NCE Computer Group

Europe / United Kingdom: 6 Stanier Road, Calne, Wiltshire, SN11 9PX
t: +44 (0)1249 813666 f: +44 (0)1249 813777 e: info@nceeurope.com
www.nceeurope.com

USA: 1866 Friendship Drive, El Cajon, California, CA 92020
t: +1 619 212 3000 f: +1 619 596 2881 e: 4info@ncegroup.com
www.ncegroup.com

ISO 9001 AND 14001 REGISTERED FIRM

The Little Book of Data Storage - 12th Edition

Read, download and share the Little Book of Data Storage by visiting: www.nceeurope.com
Stay informed with NCE: @nceeurope | www.linkedin.com/company/nce-computer-group

