All-Flash Storage Promises a Go-To Strategy for MSPs

Managed service providers (MSPs) need to offer fast, affordable storage. Storage is a perennial concern for enterprises of all sizes, and many are considering offloading their storage to reduce costs and headaches. But to win storage business, MSPs face formidable players like AWS, Azure, and Google Cloud. To compete, they must offer high-value, cost-effective solutions.

To this end, forward-thinking MSPs should consider all-flash solutions, which are positioned to furnish the performance, scalability, and availability that enterprises demand. The capital costs of flash storage are still greater than those of spinning disk, but its price per gigabyte continues to drop, and it's only a matter of time before it reaches parity with the legacy technology.

Flash storage platforms can trim operating costs. They consume less power than their mechanical-drive counterparts and require less space, providing substantial cost-savings every month.

Flash also offers much greater IOPS than spinning disks, and this performance gap will only widen with the adoption of NVMe (Non-Volatile Memory Express) drives. NVMe eliminates the bottlenecks caused by storage subsystems engineered for slower spinning disks. Additionally, flash is getting denser all the time, which boosts scalability and further conserves space. Finally, technologies like dual controllers and virtualization strategies will help ensure flash storage systems offer the levels of availability that the large cloud providers tout.

Increasingly, all-flash storage solutions give MSPs the advantages and economies they need to compete against AWS and the other behemoth cloud providers.

Storage Upheavals Are Opportunities for MSPs (Managed Service Providers)

For years, the storage business had been reasonably stable. Primary storage was local, and backed-up and archived data sat nearby or at remote sites. Production files were rapidly accessible, and governance and compliance demands were more or less met. The choices were finite and life was, for the most part, relatively orderly.


But things change over time. Clouds changed the economics of storage from a capital expense to an operating cost. They offer repositories that are less expensive than do-it-yourself solutions and are virtually infinite in capacity. Moreover, there are now public and private clouds to choose from, and the advent of object storage delivers a metadata-rich alternative to traditional file systems that is ideal for stashing billions or even trillions of files.

Add to this software-defined storage (SDS), which abstracts storage from the underlying hardware. SDS offers control and scalability but requires expertise to deploy effectively. Throw in solid-state and in-memory storage, which can greatly enhance the performance of storage systems, as well as the rise of containers, and the storage business has become a vexing mélange of technologies and options.

All of which is fertile ground for MSPs. Rather than acquire new layers of IT know-how, enterprises of all sizes and in all verticals can turn to MSPs to navigate the new world of storage. MSPs can manage services in both the data center and cloud to meet the needs for primary, backup, and archival storage cost-effectively.

Opportunistic MSPs can assess organizations’ demands for performance and scalability, and oversee migrations of data to public or private clouds. They can ensure that security, compliance, and governance needs are always met, and implement disaster recovery strategies that satisfy each client’s business mandates. Moreover, MSPs can leverage hardware resources by using virtualization to consolidate multiple resources into a single, cost-efficient cloud offering.

By offering storage as a service, MSPs can find that change and upheaval present business opportunities.

Hybrid vs. Public Clouds: Which Makes Sense for You?

If your organization is not utilizing some form of cloud, the odds are it soon will. Your question will be what kind of cloud—public, private, or hybrid? Private clouds can be dedicated clouds provided by vendors, in which no resources are shared with any other customer. They can also be onsite solutions, but building and operating a cloud data center is costly and demands substantial IT expertise. Large enterprises deploy private clouds when security, compliance, and control are paramount, requiring that data be kept within enterprise firewalls or a vendor's dedicated firewalls. Public and hybrid clouds are for everyone else.

Public clouds offer compelling benefits. They let you outsource IT infrastructure, reducing both hardware expenses and the operational costs of maintaining skilled IT staff. They provide dependability, automated deployments, and scalability. They enhance cost-efficiencies by allowing you to pay only for the resources you use, when you use them. They enable testing environments to be quickly constructed and deconstructed. You can manage your resources and assets in the cloud yourself, or have a managed cloud services provider do it for you.

Moreover, public clouds can slash the relentless costs of storing ever-increasing data troves. Rather than continually invest in on-site storage capacity, you can convert short-term, long-term, backup, and archival storage costs into a more economical operating expense in public clouds. When users must access large files without latency, they can turn to emerging on-premises caching solutions that sync with cloud stores.

There isn't one definition of a hybrid cloud, but hybrid clouds generally offer the best of both worlds by combining public and private cloud services. You can still control key applications and data sets within the confines of the enterprise network while keeping other applications and long-term data sets in the cloud. You can use the public cloud as an insurance policy, expanding services onto the cloud during peak periods when your compute and/or storage demands exceed your onsite capabilities. This is known as cloud bursting. A hybrid cloud can also help maintain operational continuity during planned or unplanned outages, blackouts, and scheduled maintenance windows. Hybrid clouds offer security options as well: you can keep the apps and data that demand vigorous protection onsite, where you have complete control, while running less security-sensitive apps in a public cloud. You can even use a hybrid cloud to experiment with incrementally migrating infrastructure offsite.

Hybrid clouds offer many substantial benefits but bear in mind that they can present management challenges. You’ll need to federate disparate environments (although some vendors argue that management is simplified when the datacenter and cloud both use similar technologies). Scripts need to span public and private infrastructure, policy changes must be applied consistently, and operations and tasks automated across multiple environments. Fortunately, there are cloud management solutions available, with more to come.

Align a cloud solution with your performance needs, security requirements, and budget. You don’t have to put everything in a cloud, but you’ll need to deploy and manage what you don’t. Make a balance sheet of your current and projected capital and operating costs to determine which solution makes the most business sense for your organization.
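As a starting point for that balance sheet, a rough comparison can be sketched in a few lines of Python. All of the figures below (hardware price, useful life, annual operating overhead, per-gigabyte cloud rate) are placeholder assumptions for the sake of the example; substitute your own quotes and utility rates.

```python
# Illustrative, back-of-the-envelope comparison of on-premises vs. cloud storage
# costs. Every figure here is an assumption -- replace with your own numbers.

def on_prem_annual_cost(capex, useful_life_years, annual_opex):
    """Amortize hardware capex over its useful life and add yearly operating costs."""
    return capex / useful_life_years + annual_opex

def cloud_annual_cost(tb_stored, price_per_gb_month):
    """Pay-as-you-go object storage: capacity x per-GB monthly rate x 12 months."""
    return tb_stored * 1024 * price_per_gb_month * 12

if __name__ == "__main__":
    on_prem = on_prem_annual_cost(capex=120_000, useful_life_years=5, annual_opex=30_000)
    cloud = cloud_annual_cost(tb_stored=200, price_per_gb_month=0.02)
    print(f"On-premises: ${on_prem:,.0f}/year")
    print(f"Cloud:       ${cloud:,.0f}/year")
```

The point is not the exact totals but the exercise itself: amortized capital costs plus ongoing operating costs on one side, pay-as-you-go rates on the other.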

HDD vs SSD Storage

How do solid-state drives (SSDs) stack up to traditional hard-disk drives? Like many things in life, it depends on money and what you need. SSDs deliver greater performance than HDDs, especially faster, more predictable read times. SSD performance will only increase as vendors deploy non-volatile memory express (NVMe), a protocol designed to unleash their full potential. Familiar protocols like SCSI, SATA, and Fibre Channel were engineered for HDDs and constrain the capabilities of SSDs.

SSDs deliver higher IOPS, increased throughput, and reduced latency compared with HDDs. When evaluating SSDs, consider the needs of your business-critical applications. Apps that benefit from greater IOPS include virtual desktop infrastructure (VDI), server virtualization, and systems with many concurrent users. Apps that require high throughput are those that move large files and data sets, such as data analytics, video processing, and medical imagery. Applications that are sensitive to latency typically consist of multiple sub-applications in a stack or across nodes, such as transactional processing systems. They also include clustered databases, streaming apps, and high-performance computing solutions.
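To make the IOPS point concrete, here is a rough sizing sketch for a hypothetical VDI deployment. The per-desktop and per-drive figures are common rules of thumb rather than measurements, so treat them as placeholders and benchmark your own workload.

```python
# Rough IOPS sizing sketch for a hypothetical VDI deployment.
# All constants are assumed, illustrative figures.

DESKTOPS = 500
IOPS_PER_DESKTOP = 30    # assumed steady-state figure; boot storms run far higher
HDD_IOPS = 180           # rough figure for a 10K RPM SAS drive under random I/O
SSD_IOPS = 50_000        # conservative figure for an enterprise SSD

required = DESKTOPS * IOPS_PER_DESKTOP
print(f"Required: {required:,} IOPS")
print(f"HDDs needed (ignoring RAID write penalty): {-(-required // HDD_IOPS)}")
print(f"SSDs needed (ignoring RAID write penalty): {-(-required // SSD_IOPS)}")
```

Even with generous assumptions for the spinning disks, the drive counts diverge quickly, which is why IOPS-hungry workloads gravitate to flash.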

SSDs can also improve resource utilization, supporting, for example, many more virtual machines than HDDs without performance compromises. Their speed also enables functionalities like data deduplication and compression to be performed on active storage, rather than on backups and archives.

SSDs offer additional advantages over HDDs. They need meaningfully less energy to run, which decreases power costs. They also run cooler than HDDs, lessening cooling expenses. By slashing energy requirements, all-flash arrays (AFAs) operate more economically, substantially reducing ownership costs. Another consideration is density: with SSDs, you can squeeze more capacity into a rack unit, conserving space.
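The power and cooling savings are easy to ballpark. The sketch below uses assumed per-drive wattages, an assumed electricity rate, and a simple overhead factor for cooling; plug in your own facility's numbers.

```python
# Ballpark annual power cost for a drive shelf. All inputs are assumptions.

KWH_RATE = 0.12        # $/kWh, assumed
COOLING_FACTOR = 1.8   # assumed overhead for cooling and power delivery
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(drive_count, watts_per_drive):
    kwh = drive_count * watts_per_drive * HOURS_PER_YEAR / 1000
    return kwh * KWH_RATE * COOLING_FACTOR

print(f"24 x HDD (~9 W each): ${annual_power_cost(24, 9):,.0f}/year")
print(f"24 x SSD (~4 W each): ${annual_power_cost(24, 4):,.0f}/year")
```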

But before investing heavily in SSDs, there are other factors to consider. One is the rest of your infrastructure: blazing-fast SSDs might eliminate one bottleneck, but they won't mitigate bottlenecks in the network and elsewhere. Application and data delivery are only as fast as their slowest links.

Another is cost. Although their costs are dropping, SSDs are still more expensive to buy than tried-and-true HDDs. This is why hybrid arrays have emerged, in which SSDs handle active data while HDDs economically store inactive and backup data.

Until the cost per gigabyte of flash storage aligns with that of HDDs, consider SSDs for storage that demands performance and HDDs for everything else. Hybrid arrays might be your best solution. However, it is now easier than ever to justify AFAs. It is only a matter of time before data centers are all flash and even long-term storage resides on SSDs. With their many advantages, SSDs can be a smart, future-proof investment.

AFAs, hybrid arrays, and HDD arrays are each optimized to meet specific needs. Evaluate what you require, benchmark your systems, and work with vendors to determine the right solution for you. An AFA may or may not make sense for you now. However, they are in your future; it’s just a matter of when.

All-Flash Storage Arrays (AFAs)

For years, the world relied on hard disk drives (HDDs) to store digital information. These electromechanical devices worked well and their costs per gigabyte continually decreased. However, they had inherent performance constraints because of the time required to read or write data on spinning metal platters with magnetic coatings. Burgeoning data troves and new generations of sophisticated applications rendered their read/write times, even when measured in milliseconds, a computing and business bottleneck.


New chip-based technologies emerged that resulted in solid-state drives (SSDs) without any moving parts. They were much faster than HDDs and provided more predictable response times for time-critical applications, but their costs were high and their capacities limited relative to HDDs. They were cost-effective, however, in hybrid storage arrays, in which SSDs cached active data on tier 0 while HDDs still bore the brunt of the storage load.
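For illustration only, the toy sketch below captures the general idea behind hybrid-array tiering: track how often blocks are touched and promote the hottest ones to the SSD tier. Real array firmware uses far more sophisticated heuristics (recency, sequential-I/O detection, write coalescing), so this is not any vendor's actual algorithm.

```python
# Toy illustration of hot-data promotion in a hybrid array: count accesses per
# block and promote frequently read blocks to a small SSD tier.

from collections import Counter

class HybridTier:
    def __init__(self, ssd_capacity_blocks, promote_threshold=3):
        self.ssd = set()                 # block IDs currently on the SSD tier
        self.capacity = ssd_capacity_blocks
        self.threshold = promote_threshold
        self.heat = Counter()            # access counts per block

    def read(self, block_id):
        self.heat[block_id] += 1
        tier = "SSD" if block_id in self.ssd else "HDD"
        if tier == "HDD" and self.heat[block_id] >= self.threshold:
            self._promote(block_id)      # future reads will hit the SSD tier
        return tier

    def _promote(self, block_id):
        if len(self.ssd) >= self.capacity:
            coldest = min(self.ssd, key=lambda b: self.heat[b])
            self.ssd.discard(coldest)    # evict the coldest cached block
        self.ssd.add(block_id)

tier = HybridTier(ssd_capacity_blocks=2)
for blk in [1, 2, 1, 1, 3, 1, 2, 2]:
    print(blk, tier.read(blk))
```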

The costs of SSDs are inexorably decreasing while their capacities increase. Fifteen-terabyte SSDs are available and larger sizes are on the horizon. SSDs are becoming practical for longer-term storage and for applications that are less speed-critical. As a result, a new storage paradigm has emerged: the all-flash storage array (AFA).

Initially, AFAs were too costly for all but enterprise usage, but this is changing. AFAs are also beginning to fully leverage the performance of their SSDs. Hybrid arrays generally rely on the same protocols that were developed for HDDs, such as SCSI, SATA, and Fibre Channel, which cannot support the blistering speeds of SSDs. The industry responded with non-volatile memory express (NVMe), a faster, more efficient protocol developed specifically for flash drives. Moreover, storage system software is becoming multi-threaded so CPU cores can cost-effectively keep pace with flash speeds.

Are AFAs finally practical for small and medium-sized organizations? They're still pricey compared to traditional HDD arrays, especially arrays designed for capacity. Yet you must consider more than acquisition costs. In our next blog, we'll touch upon some of the considerations and use cases to determine if AFAs make sense for you.

And the trends for 2017…

Projecting trends for 2017

At the beginning of every year, it's customary to pull out the crystal ball and project trends for the new year. Often, however, promising solutions fail to catch on because their technologies are not yet mature or because the economics still favor other strategies. With that said, below are some trends that we're confident are good bets for 2017.

Flash Flash Flash…High capacity flash at 15TB per drive.

In our prior blog, we discussed the arrival of affordable flash storage.  Samsung has released 2.5-inch SAS flash drives with 15TB of capacity and Seagate is preparing to introduce 60TB 3.5-inch SAS drives.  Other advances are not far behind.  As their capex and opex decrease, flash drives are taking on an ever-expanding role in tier 1 storage.  Small and mid-size organizations are now adopting all-flash arrays, thereby gaining substantial performance boosts, slashing energy costs, and keeping pace with larger competitors.

Welcome to NVMe

This trend piggybacks on the proliferation of all-flash storage.  Solid-state drives are constrained by a bottleneck—legacy storage buses.  SAS and SATA were designed for spinning disks, not for the blistering speeds of SSDs.  These widely-used protocols lack the bandwidth to fully support SSDs, thereby seriously compromising the performance of SSD storage.  To the rescue is non-volatile memory express, or NVMe.  The industry created the technology to provide both far more bandwidth than SAS and SATA and greatly enhanced queuing.
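Some quick arithmetic shows the scale of the gap. The figures below are nominal interface limits (real-world throughput is lower after protocol and encoding overhead), and the queue-depth numbers are the commonly cited AHCI and NVMe specification maximums.

```python
# Quick arithmetic on why legacy buses throttle SSDs. Nominal interface limits
# only; real-world throughput is lower once protocol overhead is accounted for.

BYTES_PER_GBPS = 1e9 / 8  # bytes per second for each Gb/s of line rate

links = {
    "SATA III (6 Gb/s)":  6 * BYTES_PER_GBPS,
    "SAS-3 (12 Gb/s)":    12 * BYTES_PER_GBPS,
    "NVMe, PCIe 3.0 x4":  4 * 8 * BYTES_PER_GBPS,  # ~8 Gb/s usable per lane x 4 lanes
}
for name, bps in links.items():
    print(f"{name:<20} ~{bps / 1e9:.1f} GB/s")

# Queueing matters just as much: AHCI/SATA exposes a single queue 32 commands deep,
# while the NVMe spec allows up to 64K queues, each up to 64K commands deep.
print("SATA/AHCI outstanding commands:", 1 * 32)
print("NVMe outstanding commands:     ", 65_535 * 65_536)
```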

If you think SSDs are fast now, just wait until you unleash their full potential.  Your wait won’t be long.

Software-defined storage

The market can't seem to agree on a universal definition of software-defined storage (SDS), but a lot of people are talking about it anyway. SDS separates storage services and functionality from the underlying storage hardware. Rather than rely on individual hardware solutions for services and functionality, administrators use software to centrally manage and control storage across the environment. They rely on hardware only to physically house the data and on software for everything else. SDS reduces complexity, but its value proposition extends further: organizations can use commodity hardware while obtaining much of the functionality and performance of costlier arrays. Bear in mind that SDS is not storage virtualization; SDS provides management and control services regardless of whether the storage is virtualized. With the declining cost of commodity hardware and the persistent need for additional capacity, more and more enterprises are turning to cost-saving SDS solutions to manage and protect their data.

Software-defined data centers

In a software-defined data center (SDDC), the entire infrastructure is controlled by software and delivered as a service. Networking, storage, and servers are virtualized, thereby decoupling provisioning, configuration, security, administration, and other processes from the hardware. Transforming a conventional data center into an SDDC is a large undertaking and demands the transformation of IT roles as well, but 2017 will see it happen more often because of the benefits. SDDCs promise greater agility, cost savings due to a reliance on commodity hardware, less complexity, and better deployment of cloud services.

32G Fibre Channel

Pundits have prognosticated the eventual demise of Fibre Channel as a storage networking standard. File-based unstructured data is growing more rapidly than block-based data, and the former relies on Ethernet, as do cloud and hyperconverged architectures. However, the introduction of 32G FC will keep the standard healthy in 2017. The increased use of flash drives, as well as the roll-out of NVMe and related technologies, shifts the performance bottleneck to the storage network, ensuring that FC, especially at 32G speeds, will maintain a healthy presence in many data centers. 32G FC solutions were introduced last year—switches from Cisco and Brocade and adapters from QLogic and Broadcom—and we'll be seeing vendors of storage arrays support 32G FC.

Affordable Flash Storage Has Arrived

The rise of flash drives was inevitable. Like other disruptive technologies, flash offers substantial advantages over legacy solutions—in this case, spinning disks—but its initial costs limited its adoption to such enterprise applications as high-speed transactional processing. Over time, factors like economies of scale kicked in and the cost of flash decreased. Who still uses a laptop with a mechanical drive? As a result, the market share for flash drives increased considerably last year, as major storage vendors reportedly sold more all-flash arrays than hybrid or spinning-disk solutions.


Flash technology is also improving. NAND flash began with single-level cells (SLC) that store one bit of information per cell. Vendors then developed multi-level cells (MLC) that store two bits per cell, and triple-level cells (TLC) that store three bits per cell are now shipping. Vendors are also stacking cell layers vertically in what's known as 3D NAND flash, which further improves density and thus capacity.

The market has already seen the introduction of 2.5-inch SAS flash drives with 15TB of capacity and Seagate is rolling out 3.5-inch SAS drives with a capacity of 60TB.  We can expect further advances going forward.  What this all means is the economics of flash storage are becoming practical for tier 1 workloads and it’s only a matter of time before they make sense for tier 2 storage, obviating hybrid arrays.  In fact, would anyone bet against flash drives someday supporting even archival storage?

Make no mistake: most data still reside on spinning disks, but the trends are clear and inexorable. Flash storage is becoming denser, higher capacity, and more cost-effective. In the foreseeable future, solid-state drives will become less expensive than their spinning-disk counterparts. It's inevitable that spinning disks will go the way of VCRs and cathode ray tubes.

We’re not going to see all-flash data centers immediately, but that day may be closer than many pundits think.  In terms of I/O performance and energy consumption, everyone will be a winner when that day does arrive.  Meanwhile, all-flash arrays are increasingly within the reach of small and mid-size organizations.  It’s only a matter of time before you migrate your primary storage to all-flash platforms.  Your organization will be more effective and cost-efficient when you do.

What are you waiting for?

Smart Choices for Building, or Building Out, Network-Attached Storage


Network-attached storage (NAS) remains a sound strategy for delivering applications and shared storage, especially for enterprises that seek to avoid the costs and complexities of storage-area networks (SANs). AC&NC now offers some compelling NAS choices for your consideration. Whether you want to implement NAS for a workgroup or scale out a NAS infrastructure for your entire company, AC&NC offers a roster of very competitive, high-availability JetStor® RAID solutions.

The JetStor NAS 400S V2 Storage Appliance delivers shared storage to both clients and servers in Windows, UNIX, Apple Macintosh, VMware or mixed environments. The device supports four 3.5″ SATA drives or 2.5″ SSD drives for up to 32 terabytes of storage in a compact 1U chassis.

The 1U JetStor NAS/iSCSI 405SD V2 Storage Appliance is a cost-effective way to increase storage on the network. Supporting advanced storage management and application features like VAAI and thin-provisioning, it offers five 3.5″ SATA drives or 2.5″ SSD drives for up to 40 terabytes of capacity and can be easily deployed in any IT environment.

The JetStor NAS 800S V2 Unified storage system also supports advanced management and application capabilities and can accommodate eight 3.5″ SATA, 3.5″ SAS, and 2.5″ SSD drives for up to 64 terabytes of storage in a 2U enclosure.

For high-capacity NAS applications, the JetStor NAS 1200S 12G Unified storage system offers twelve 3.5″ SAS or SATA drives, also in a 2U chassis. For robust data protection, the solution, like those above, supports such features as hot spares and automatic hot rebuilds.

The function-rich JetStor NAS 1600S 12G Unified storage system provides 16 bays that support 3.5″ SAS and SATA drives and 2.5″ SAS and SSD drives for extraordinary flexibility.

All AC&NC NAS arrays deliver high availability, support for functionality like the ZFS file system, and exceptional data protection. The JetStor 400S, JetStor 405SD, and JetStor 800S systems support RAID 0, 1, 5, 6, and 10, and the JetStor NAS 1200S and JetStor NAS 1600S solutions also support RAID 50 and 60.
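When sizing one of these arrays, it helps to remember how much raw capacity each RAID level returns as usable space. The sketch below assumes equal-size drives and, for RAID 50/60, an assumed two-group layout; the 12 x 8 TB configuration is just an example.

```python
# Usable capacity under different RAID levels, assuming equal-size drives.
# Drive count, drive size, and the RAID 50/60 group layouts are example assumptions.

def usable_tb(drives, tb_each, level):
    if level == "RAID 0":   return drives * tb_each
    if level == "RAID 1":   return drives * tb_each / 2
    if level == "RAID 5":   return (drives - 1) * tb_each      # one drive's worth of parity
    if level == "RAID 6":   return (drives - 2) * tb_each      # two drives' worth of parity
    if level == "RAID 10":  return drives * tb_each / 2        # mirrored stripes
    if level == "RAID 50":  return (drives - 2) * tb_each      # assuming two 6-drive RAID 5 groups
    if level == "RAID 60":  return (drives - 4) * tb_each      # assuming two 6-drive RAID 6 groups
    raise ValueError(level)

for lvl in ["RAID 5", "RAID 6", "RAID 10", "RAID 50", "RAID 60"]:
    print(f"{lvl}: {usable_tb(12, 8, lvl):.0f} TB usable from 12 x 8 TB drives")
```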

With their high performance, reliability, and versatility, the JetStor systems are ideal as media, database, and mail servers. They are also effective for such applications as digital imaging and video recording, CCTV, forensics, email archiving, file server data archiving, disk-to-disk backup, and virtualization. Full, incremental, and differential disk backups are simple and convenient.

In addition to many other features, the JetStor arrays feature the RAID Manager browser-based management and monitoring software, which enables you to easily and remotely configure and manage the JetStor solutions. You can even have them email you event notifications.

With JetStor, NAS is not just alive and well—it’s better than ever.

OpenStack & Ceph—Storage that is Function-rich, Flexible, and Free

OpenStack is a viable solution for many enterprise data centers. OpenStack is a free, open-source software platform that enables organizations to construct and manage public and private clouds. Its components manage processing, networking, and storage on hardware resources across the data center. Many of the biggest IT firms as well as thousands of OpenStack community members support the software. Among its many users are Intel, Comcast, eBay, PayPal, and NASA, which helped to develop the platform. As such, many view OpenStack as the future of cloud computing.

OpenStack offers flexible storage options, particularly when coupled with Ceph, which it supports. Ceph is a free software storage platform that stores data on a distributed storage cluster. It's highly scalable—beyond petabytes to exabytes—avoids a single point of failure, and is self-healing. What's remarkable about Ceph is what lies at its core: its Reliable Autonomic Distributed Object Store (RADOS) allows you to provide object, block, or file system storage on a single cluster. This greatly simplifies management when you run applications that require different storage types. Moreover, thanks to its functionality and resilience, you can build out your storage cluster on commodity hardware for further cost savings.
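For a sense of how little code it takes to talk to RADOS, here is a minimal sketch using the python-rados bindings. It assumes a reachable cluster, a readable ceph.conf at the usual path, and an existing pool named demo-pool; those names and paths are placeholders, not part of any standard deployment.

```python
# Minimal sketch: store and read back one object through Ceph's RADOS layer.
# Assumes python-rados is installed, /etc/ceph/ceph.conf is valid, and a pool
# named "demo-pool" already exists (placeholder names).

import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("demo-pool")     # I/O context bound to one pool
    try:
        ioctx.write_full("greeting", b"hello from RADOS")  # store an object
        print(ioctx.read("greeting"))                        # read it back
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```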

Excellent examples of hardware for building out such clusters are the JetStor® RDX48 Storage Server (www.acnc.com/p/JetStor_RDX48) and JBODs (www.acnc.com/c/JBOD) from AC&NC. The JetStor RDX48 is a 4U 48-bay device that supports both spinning disks and SSDs. Its SSD caching accelerates storage and retrieval, and its caching controller automatically identifies and relocates frequently accessed data to the SSDs for the quickest access and transfer speeds possible. Supporting RAID 5, RAID 6, proprietary RAID 7.3, and RAID N+M, the array delivers enough throughput for HPC applications and never drops a frame, even with concurrent HD or 2K–8K digital video streams.

AC&NC offers similarly robust JBODs that range in size up to 4U 80-bay systems, ensuring every storage need can be met economically and easily. The reliability, performance, and scalability of these AC&NC solutions make them ideal complements to OpenStack/Ceph deployments.

What is Erasure Coding and When Should it Be Used?

RAID has long been a mainstay of data protection. RAID protects against data loss from bad blocks or bad disks by either mirroring data on multiple disks in a storage array or adding parity blocks to the data, which allows for the recovery of failed blocks. Now another approach, erasure coding, is gaining traction.

Erasure coding uses sophisticated mathematics to break data into multiple fragments and then places the fragments in different locations, which can be disks within an array or, more likely, distributed nodes. If the fragments are spread over 16 nodes, for example, any ten of them can recover all the data. In other words, the data is protected even if six nodes fail. This also means that when one or more nodes fail, all the remaining nodes participate in rebuilding the lost fragments, so erasure coding is not as CPU-constrained as a RAID rebuild within a single array.
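To put the 16-fragment example in perspective, the arithmetic below compares its storage overhead with that of three-way mirroring. The fragment counts come straight from the example above; everything else is simple division.

```python
# Storage overhead of the 16-fragment example (any 10 of 16 fragments can
# rebuild the data) versus three-way mirroring. Purely illustrative arithmetic.

data_fragments = 10      # k: fragments needed to reconstruct the data
parity_fragments = 6     # m: extra coded fragments
total = data_fragments + parity_fragments

ec_overhead = total / data_fragments   # raw capacity consumed per unit of data
mirror_overhead = 3                    # three full copies

print(f"Erasure coding 10+6: {ec_overhead:.1f}x raw capacity, tolerates {parity_fragments} lost fragments")
print(f"3-way mirroring:     {mirror_overhead:.1f}x raw capacity, tolerates 2 lost copies")
```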

Erasure coding is finding many applications. It is often used, for example, for object storage, and vendors of file- and block-level storage are beginning to leverage the technology. Erasure coding has proven useful for scaled-out storage, protecting petabytes of lukewarm, backup, or archival data stored across multiple locations in the cloud. Mirroring remains an option for these applications; it requires double the storage capacity, but it does eliminate rebuilds.

RAID is still a strong strategy for safeguarding active, primary data. Data remains safe within the data center and rebuilds won’t tax available WAN bandwidth. To determine whether RAID or erasure coding is best for you, assess the impact each would have on your data protection needs.