
Projecting trends for 2017

At the beginning of every year, it’s customary to pull out the crystal ball and project trends for the new year.  Often, however, promising solutions fail to catch on because their technologies are not yet mature or because the economics still favor other strategies.  With that said, below are some trends that we’re confident are good bets for 2017.

Flash Flash Flash…High-capacity flash at 15TB per drive.

In our prior blog, we discussed the arrival of affordable flash storage.  Samsung has released 2.5-inch SAS flash drives with 15TB of capacity and Seagate is preparing to introduce 60TB 3.5-inch SAS drives.  Other advances are not far behind.  As their capex and opex decrease, flash drives are taking on an ever-expanding role in tier 1 storage.  Small and mid-size organizations are now adopting all-flash arrays, thereby gaining substantial performance boosts, slashing energy costs, and keeping pace with larger competitors.

Welcome to NVMe

This trend piggybacks on the proliferation of all-flash storage.  Solid-state drives are constrained by a bottleneck—legacy storage buses.  SAS and SATA were designed for spinning disks, not for the blistering speeds of SSDs.  These widely-used protocols lack the bandwidth to fully support SSDs, thereby seriously compromising the performance of SSD storage.  To the rescue is non-volatile memory express, or NVMe.  The industry created the technology to provide both far more bandwidth than SAS and SATA and greatly enhanced queuing.
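The queuing gap is easy to make concrete. The figures below are the commonly cited protocol maximums for AHCI (the host interface behind SATA) and NVMe, not what any particular drive or driver actually configures:

```python
# Commonly cited protocol maximums: AHCI (SATA) exposes one command queue
# 32 entries deep, while NVMe allows up to ~64K queues of ~64K commands each.
AHCI_QUEUES, AHCI_DEPTH = 1, 32
NVME_QUEUES, NVME_DEPTH = 65_535, 65_536

ahci_outstanding = AHCI_QUEUES * AHCI_DEPTH
nvme_outstanding = NVME_QUEUES * NVME_DEPTH

print(f"AHCI/SATA outstanding commands: {ahci_outstanding}")
print(f"NVMe outstanding commands:      {nvme_outstanding:,}")
print(f"Ratio: ~{nvme_outstanding // ahci_outstanding:,}x")
```

In practice drives expose far fewer queues than the spec allows, but even a handful of deep queues per CPU core removes the single-queue bottleneck SATA imposes.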

If you think SSDs are fast now, just wait until you unleash their full potential.  Your wait won’t be long.

Software-defined storage

The market can’t seem to agree on a universal definition of software-defined storage (SDS), but a lot of people are talking about it anyway.  SDS separates storage services and functionality from the underlying storage hardware.  Rather than rely on individual hardware solutions for services and functionality, administrators use software to centrally manage and control storage across the environment.  They rely on hardware only to physically house the data and on software for everything else.  SDS reduces complexity, but its value proposition extends further.  Organizations can use commodity hardware while obtaining much of the functionality and performance of costlier arrays.  Bear in mind that SDS is not storage virtualization.  SDS provides management and control services regardless of whether the storage is virtualized.  With the declining cost of commodity hardware and the persistent need for additional capacity, more and more enterprises are turning to cost-saving SDS solutions to manage and protect their data.

Software-defined data centers

In a software-defined data center (SDDC), the entire infrastructure is controlled by software and delivered as a service.  Networking, storage, and servers are virtualized, thereby decoupling provisioning, configuration, security, administration, and other processes from the hardware.  Transforming a conventional data center into an SDDC is a large undertaking and demands the transformation of IT roles as well, but 2017 will see it happen more often because of the benefits.  SDDCs promise greater agility, cost savings due to a reliance on commodity hardware, less complexity, and better deployment of cloud services.

32G Fibre Channel

Pundits have prognosticated the eventual demise of Fibre Channel as a storage networking standard.  File-based unstructured data is growing more rapidly than block-based data, and the former relies on Ethernet, as do cloud and hyperconverged architectures.  However, the introduction of 32G FC will keep the standard healthy in 2017.  The increased use of flash drives, as well as the roll-out of NVMe and related technologies, shifts the performance bottleneck to the storage network, ensuring FC, especially at 32G speeds, will maintain a healthy presence in many data centers.  32G FC solutions were introduced last year—switches from Cisco and Brocade and adapters from QLogic and Broadcom—and in 2017 we’ll see storage array vendors add support for 32G FC as well.

Affordable Flash Storage Has Arrived

The evolution of flash drives is inevitable.  Like other disruptive technologies, it offers substantial advantages over legacy solutions—in this case, spinning disks—but its initial costs limited its adoption to such enterprise applications as high-speed transactional processing.  Over time, factors like economies of scale kicked in and the cost of flash decreased.  Who still uses a laptop with a mechanical drive?  As a result, the market share for flash drives increased considerably last year as major storage vendors reportedly sold more all-flash arrays than hybrid or spinning-disk solutions.


Flash technology is also improving.  NAND flash began with single-level cells (SLC) that store one bit of information.  Vendors then developed multi-level cells (MLC) in which each cell stores two bits.  Now we’re seeing triple-level cells (TLC) that store three bits per cell.  Providers are also rolling out 3D NAND flash, which stacks layers of cells vertically to further improve density and thus capacity.
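The capacity effect of packing more bits into each cell is simple arithmetic. A toy calculation, holding the cell count fixed (the 128GB SLC baseline is an arbitrary figure for illustration):

```python
# Relative capacity of the same die as bits per cell increase.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3}

def capacity_gb(cell_type: str, slc_baseline_gb: float = 128.0) -> float:
    """Capacity if a die that yields `slc_baseline_gb` as SLC is
    programmed with more bits per cell."""
    return slc_baseline_gb * BITS_PER_CELL[cell_type]

for kind in ("SLC", "MLC", "TLC"):
    print(f"{kind}: {capacity_gb(kind):.0f} GB")
```

3D NAND multiplies this again by stacking cell layers vertically, which is how 15TB drives in a 2.5-inch form factor become possible.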

The market has already seen the introduction of 2.5-inch SAS flash drives with 15TB of capacity and Seagate is rolling out 3.5-inch SAS drives with a capacity of 60TB.  We can expect further advances going forward.  What this all means is the economics of flash storage are becoming practical for tier 1 workloads and it’s only a matter of time before they make sense for tier 2 storage, obviating hybrid arrays.  In fact, would anyone bet against flash drives someday supporting even archival storage?

Make no mistake.  Most data still reside on spinning disks, but the trends are clear and inexorable.  Flash storage is becoming denser, higher capacity, and more cost-effective.  Over the foreseeable future, solid-state drives will become less expensive than their spinning disk counterparts.  It’s inevitable that spinning disks will go the way of VCRs and cathode ray tubes.

We’re not going to see all-flash data centers immediately, but that day may be closer than many pundits think.  In terms of I/O performance and energy consumption, everyone will be a winner when that day does arrive.  Meanwhile, all-flash arrays are increasingly within the reach of small and mid-size organizations.  It’s only a matter of time before you migrate your primary storage to all-flash platforms.  Your organization will be more effective and cost-efficient when you do.

What are you waiting for?

Smart Choices for Building, or Building Out, Network-Attached Storage


Network-attached storage (NAS) remains a sound strategy for delivering applications and shared storage, especially for enterprises that seek to avoid the costs and complexities of storage-area networks (SANs). AC&NC now offers some compelling NAS choices for your consideration. Whether you want to implement NAS for a workgroup or scale out a NAS infrastructure for your entire company, AC&NC offers a roster of very competitive, high-availability JetStor® RAID solutions.

The JetStor NAS 400S V2 Storage Appliance delivers shared storage to both clients and servers in Windows, UNIX, Apple Macintosh, VMware or mixed environments. The device supports four 3.5″ SATA drives or 2.5″ SSD drives for up to 32 terabytes of storage in a compact 1U chassis.

The 1U JetStor NAS/iSCSI 405SD V2 Storage Appliance is a cost-effective way to increase storage on the network. Supporting advanced storage management and application features like VAAI and thin-provisioning, it offers five 3.5″ SATA drives or 2.5″ SSD drives for up to 40 terabytes of capacity and can be easily deployed in any IT environment.

The JetStor NAS 800S V2 Unified storage system also supports advanced management and application capabilities and can accommodate eight 3.5″ SATA, 3.5″ SAS, and 2.5″ SSD drives for up to 64 terabytes of storage in a 2U enclosure.

For high-capacity NAS applications, the JetStor NAS 1200S 12G Unified storage system offers twelve 3.5″ SAS or SATA drives, also in a 2U chassis. For robust data protection, the solution, like those above, supports such features as hot spares and automatic hot rebuilds.

The function-rich JetStor NAS 1600S 12G Unified storage system provides 16 bays that support 3.5″ SAS and SATA drives and 2.5″ SAS and SSD drives for extraordinary flexibility.

All AC&NC NAS arrays deliver high availability, support for functionality like the ZFS file system, and exceptional data protection. The JetStor 400S, JetStor 405SD, and JetStor 800S systems support RAID 0, 1, 5, 6, and 10, and the JetStor NAS 1200S and JetStor NAS 1600S solutions also support RAID 50 and 60.
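The RAID levels listed above trade capacity for protection in predictable ways. A rough sketch of the usable-capacity arithmetic—it ignores hot spares and filesystem overhead, and assumes RAID 50/60 are built from two parity groups:

```python
def usable_tb(level: str, drives: int, drive_tb: float) -> float:
    """Approximate usable capacity for common RAID levels."""
    parity_drives = {"0": 0, "5": 1, "6": 2, "50": 2, "60": 4}
    if level in ("1", "10"):
        return drives * drive_tb / 2              # everything is mirrored
    if level in parity_drives:
        return (drives - parity_drives[level]) * drive_tb
    raise ValueError(f"unsupported level: {level}")

# Example: eight 8TB drives, as in an 800S-class 2U array
for lvl in ("0", "5", "6", "10"):
    print(f"RAID {lvl}: {usable_tb(lvl, 8, 8.0):.0f} TB usable")
```

RAID 6 gives up two drives' worth of capacity but survives any two simultaneous drive failures, which matters more as drive capacities, and therefore rebuild times, grow.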

With their high performance, reliability, and versatility, the JetStor systems are ideal as media, database, and mail servers. They are also effective for such applications as digital imaging and video recording, CCTV, forensics, email archiving, file server data archiving, disk-to-disk backup, and virtualization. Full, incremental, and differential disk backups are simple and convenient.

In addition to many other features, the JetStor arrays feature the RAID Manager browser-based management and monitoring software, which enables you to easily and remotely configure and manage the JetStor solutions. You can even have them email you event notifications.

With JetStor, NAS is not just alive and well—it’s better than ever.

OpenStack & Ceph—Storage that is Function-rich, Flexible, and Free

OpenStack is a viable solution for many enterprise data centers. OpenStack is a free, open-source software platform that enables organizations to construct and manage public and private clouds. Its components manage processing, networking, and storage on hardware resources across the data center. Many of the biggest IT firms as well as thousands of OpenStack community members support the software. Among its many users are Intel, Comcast, eBay, PayPal, and NASA, which helped to develop the platform. As such, many view OpenStack as the future of cloud computing.

OpenStack offers flexible storage options, particularly when coupled with Ceph, which it supports. Ceph is a free software storage platform that stores data on a distributed storage cluster. It’s highly scalable—beyond petabytes to exabytes—avoids a single point of failure, and is self-healing. What’s remarkable about Ceph is what lies at its core. Its Reliable Autonomic Distributed Object Store (RADOS) allows you to provide object, block, or file system storage on a single cluster. This greatly simplifies management when you run applications that require different storage types. Moreover, thanks to its functionality and resilience, you can use commodity hardware to build out your storage cluster for further cost savings.

Excellent examples of such solutions are the JetStor® RDX48 Storage Server and JBODs from AC&NC. The JetStor RDX48 is a 4U 48-bay device that supports both spinning disks and SSDs. Its SSD caching accelerates storage and retrieval and its caching controller automatically identifies and relocates frequently accessed data to the SSDs for the quickest access and transfer speeds possible. Supporting RAID 5, RAID 6, proprietary RAID 7.3, and RAID N+M, the array delivers enough throughput for HPC applications and it never drops a frame even with concurrent HD or 2K–8K digital video streams.

AC&NC offers similarly robust JBODs that range in size up to 4U 80-bay systems, ensuring every storage need can be met economically and easily. The reliability, performance, and scalability of these AC&NC solutions make them ideal complements to OpenStack/Ceph deployments.

What is Erasure Coding and When Should it Be Used?

RAID has long been a mainstay of data protection. RAID protects against data loss from bad blocks or bad disks by either mirroring data on multiple disks in a storage array or adding parity blocks to the data, which allows for the recovery of failed blocks. Now another approach, erasure coding, is gaining traction.

Erasure coding uses sophisticated mathematics to break the data into multiple fragments and then places the fragments in different locations. Locations can be disks within an array or, more likely, distributed nodes. If the fragments are spread over 16 nodes, for example, any ten can recover all the data. In other words, the data is protected even if six nodes fail. This also means that when one or more nodes fail, all the remaining nodes participate in rebuilding them, which makes erasure-coded rebuilds less CPU-constrained than RAID rebuilds within a single array.
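The "any ten of sixteen" property comes from polynomial mathematics similar to Reed–Solomon coding. Production systems use Reed–Solomon over GF(2^8); the toy sketch below works instead over the prime field GF(257), one byte per symbol, purely to demonstrate that any k of n fragments reconstruct the data:

```python
# Toy systematic k-of-n erasure code over GF(257), one byte per symbol.
# Illustrative only: real systems use Reed–Solomon over GF(2^8).

P = 257  # smallest prime that holds every byte value

def _eval_at(points, x):
    """Value at x of the unique degree < k polynomial through the given
    k (xi, yi) points, via Lagrange interpolation mod P."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data: bytes, n: int):
    """Produce n fragments; the first k = len(data) carry the data itself
    (systematic), the remaining n - k are parity evaluations."""
    points = list(enumerate(data, start=1))  # polynomial through (1..k, data)
    return [(x, _eval_at(points, x)) for x in range(1, n + 1)]

def decode(fragments, k: int) -> bytes:
    """Rebuild the original k data bytes from any k surviving fragments."""
    return bytes(_eval_at(fragments[:k], x) for x in range(1, k + 1))
```

With n = 16 and k = 10 this is exactly the example in the text: any ten surviving fragments rebuild the data, at 1.6× storage overhead versus 2× for mirroring.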

Erasure coding is finding many applications. It is often used, for example, for object storage, and vendors of file- and block-level storage are beginning to leverage the technology. Erasure coding has proven useful for scaled-out storage, protecting petabytes of lukewarm, backup, or archival data stored across multiple locations in the cloud. Of course, mirroring remains an option for these applications. Mirroring eliminates rebuilds, but it requires double the storage capacity.

RAID is still a strong strategy for safeguarding active, primary data. Data remains safe within the data center and rebuilds won’t tax available WAN bandwidth. To determine whether RAID or erasure coding is best for you, assess the impact each would have on your data protection needs.

The Scale-Out Virtues of Object Storage

File- and block-based storage are well established, but a third alternative, object storage, is hardly new technology. Today, there are countless petabytes of object storage in public clouds, and hundreds of millions of Facebook, Google, and Twitter users routinely rely on the technology. So what is object storage? It’s a method for saving unstructured data in discrete units called objects. Objects are stored in containers in a flat structure rather than the more familiar nested tree structures of file and block storage. Each object is identified by a unique ID, which enables the data to be found without knowing its physical location.


Whereas most file systems are limited by the number of files, directories, and hierarchical levels they support, object storage can scale almost infinitely. Metadata is kept directly with the object, eliminating the need for lookups in relational databases and further enabling immense scalability. An added benefit is you can easily apply security or retention policies to the metadata.
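The flat, ID-addressed model with co-located metadata can be sketched in a few lines. This is a hypothetical in-memory illustration, not any vendor's API:

```python
import uuid

# Minimal sketch of the object model: a flat namespace keyed by unique
# IDs, with metadata stored alongside the data rather than in a separate
# database.

class ObjectStore:
    def __init__(self):
        self._objects = {}                    # flat: no directories, no paths

    def put(self, data: bytes, **metadata) -> str:
        oid = str(uuid.uuid4())               # unique, location-independent ID
        self._objects[oid] = (data, dict(metadata))
        return oid

    def get(self, oid: str):
        """Fetch by ID alone—no physical location or path required."""
        return self._objects[oid]

store = ObjectStore()
oid = store.put(b"<jpeg bytes>", content_type="image/jpeg", retention_days=365)
data, meta = store.get(oid)
```

Because policies like retention live in the metadata next to the object, applying them requires no external database lookup.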

When should object storage be deployed? The technology is not ideal for transactional or frequently changing data because of performance issues. Block-based solutions provide greater read/write speeds for disk-intensive transactional data. Object storage shines, however, for relatively static data, backups, and cold archival data. Object storage is cost-effective for these applications because it works on commodity hardware.

In a world generating torrents of data every day, the scalability and economy of object storage are attractive, particularly in public and private clouds. What makes this fit even better is that object storage pools are accessed via HTTP or an API rather than, for example, Fibre Channel for block-based storage or NFS for file-based storage. Sites like Google and Facebook store vast troves of images and videos in object storage.

A key advantage of object storage is how easily it scales. Object storage systems scale out horizontally simply by adding nodes. Moreover, because object storage is location-agnostic, nodes within a storage pool can be geographically distributed. You can access objects that physically reside on a node anywhere without going through a central controller, delivering a truly global infrastructure in which data can be accessed via the WAN or the Internet. NAS systems also scale out horizontally, but their expansion is restricted by their hierarchical file structures, which grow in complexity as they increase in size. Also, NAS systems work best on local networks in the data center. Again, the flat structure of object storage offers almost astronomical scalability for storing unstructured data.
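One common way object stores place data on a growing set of nodes without a central controller is consistent hashing: every client hashes the object ID onto a ring of virtual nodes and independently computes the same placement. A minimal sketch with made-up node names:

```python
import bisect
import hashlib

# Consistent-hash ring: object IDs and nodes share one hash space, and an
# object belongs to the next node clockwise on the ring. No coordinator
# is consulted—any client computes the same answer.

class HashRing:
    def __init__(self, nodes, vnodes=64):
        self._ring = sorted(
            (self._h(f"{node}:{i}"), node)
            for node in nodes
            for i in range(vnodes)        # virtual nodes smooth the balance
        )

    @staticmethod
    def _h(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, object_id: str) -> str:
        i = bisect.bisect(self._ring, (self._h(object_id),)) % len(self._ring)
        return self._ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("object-42")
```

Adding a fourth node remaps only the keys that now hash to it—roughly a quarter of them—while everything else stays put, which is what lets these pools grow node by node.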

If you need to store petabytes of files that are accessed only occasionally, object-based storage might be your best bet. For large, expanding data troves, such as those found in healthcare, media/entertainment, and other industries, the technology can provide the economical advantages of tape but with less effort and superior retrieval speeds.

Peering into the Future: Storage in 2016

It’s the start of a new year and time for pundits to predict what we can expect in the storage market going forward. Let’s touch upon some of the more prominent trends.

SSDs versus HDDs

Foremost, of course, is the inexorable adoption of SSDs by both the consumer and enterprise markets. The drivers are well reported: superior I/O performance, decreasing cost per gigabyte, and lower energy consumption, all of which help offset the capital costs of flash-based drives.

Additionally, the capacity of SSDs continues to grow. 2TB SSDs are increasingly popular and 15TB SSDs have just been announced. This year, expect 512GB USB drives that fit in your pocket. There’s no reason why SSDs can’t eventually become more capacious than HDDs. As flash technologies improve, it’s only a matter of time before SSD prices drop faster than HDD prices.

In the more immediate future, all-flash arrays will continue to proliferate, replacing HDD arrays particularly for mission critical applications. As their per gigabyte costs come down, they will increasingly be used for other applications. SSDs will eventually replace HDDs as certainly as CDs and DVDs replaced audio and video cassettes.

Does this mean HDDs are extinct? Hardly. As long as they’re the cheapest way to store large data troves, they’ll continue to be around. For this reason, HDDs will still outsell SSDs in 2016. Moreover, they’ll continue to improve. We should see 10TB HDDs in the standard 3.5-inch form factor this year and it is predicted that in the next few years, their capacity will grow to 20TB, if not larger.

Additionally, innovations like Shingled Magnetic Recording (SMR) will increase the capacity of HDDs, although SMR will be used only for particular applications because of its limits on random writes. SMR will make spinning disks more cost-effective for archiving and cold data storage. Considering that LTO drives are slow and no longer offer significantly larger storage capacities, purpose-built HDDs will further undermine tape for archiving. It is reasonable to project, however, that SSDs will eventually replace even high-capacity HDDs for such applications.

Bigger Ethernet pipes

This long-anticipated advance is finally arriving. 25Gb Ethernet is hitting the market and it is backwards compatible with 10Gb Ethernet. The cost of 25Gb Ethernet will soon be competitive with 10Gb, making it the de facto choice for new deployments. Can 50Gb and 100Gb Ethernet be far away?

Software-Defined Storage (SDS)

After countless articles and discussions, and many proofs of concept, SDS should have its breakout year in 2016. Decoupling storage functionality from physical devices offers too many advantages not to see widespread deployment in production environments.

Infrastructure convergence

More and more, storage is viewed not as a standalone component of the IT infrastructure, but as tightly integrated with compute, network, and applications. For this reason, a trend that is arguably on its way to becoming the status quo is infrastructure convergence. Aggregating storage, compute, and network components from a single vendor and its partners simplifies ownership and management. We’ll start to see convergence not only in environments like remote and branch offices (ROBOs), but also in data centers.

Cloud storage

The cost of cloud storage continues to fall and now is very cost-competitive for both businesses and consumers. The limits of Internet bandwidth are mitigated by the availability of continuous data protection (CDP) solutions, which makes the cloud very appealing for secondary or tertiary backups. Assuming security is up to snuff, expect to see more enterprises storing data in clouds as well as at primary and secondary sites.

ARM servers

ARM processors are already used in consumer electronic devices because they are smaller in size, less complex, and consume less power. We’ll also see them in servers this year as their characteristics make them quite useful in data centers.

Non-Volatile Memory (NVM)

NVM is another technology that will eventually make the rounds throughout the data center. With non-volatile memory, data is preserved when a device is turned off or loses power. Though not as quick as RAM, NVM is faster than NAND flash. Its current costs will impede its adoption, but expect to see hybrid systems in which NVM provides a new tier of ultra-high-speed storage. In these solutions, NVM will provide the blazing performance, with NAND flash taking the role once held by spinning disks.

Check with us next year to see how well our crystal ball worked.

Storing 4K Digital Content

The evolution of yesteryear’s grainy, black and white televisions to today’s dazzling high-definition monitors confirms that we prefer our content displayed in higher resolution. This is why filmmakers and videographers are starting to shoot in 4K, which offers four times the pixels of 1080p.

Granted, consumers currently have little 4K content to watch, but 4K-capable monitors are available, affordable, and selling today. Additionally, as Internet bandwidth becomes more plentiful, content distributors will start offering shows and films in 4K resolution. Consequently, it is only a matter of time before 4K content is consumed at home, and productions that shoot at this resolution are simply anticipating the future.

But storage becomes a key consideration. The original footage must be stored while editors work on compressed copies, which also must reside somewhere. Storage also has to deliver multiple streams of content to workstations. When working with 4K, however, four times the pixels means four times the information of 1080p.  Videomaker magazine worked out the numbers: an hour of standard-definition digital video needs about 12.7GB of storage, or some 217MB per minute, while an hour of uncompressed 4K content requires almost 110GB of storage, or about 2GB per minute.
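Those per-minute figures make capacity planning straightforward. A rough calculator using the rates cited above (the 48TB array size is an arbitrary example):

```python
# Per-minute storage rates cited in the text (approximate).
SD_GB_PER_MIN = 0.217      # ~217 MB/min, standard definition
UHD_GB_PER_MIN = 2.0       # ~2 GB/min, uncompressed 4K

def footage_gb(minutes: float, gb_per_min: float) -> float:
    """Raw storage needed for a given number of minutes of footage."""
    return minutes * gb_per_min

def hours_on_array(capacity_tb: float, gb_per_min: float) -> float:
    """Hours of footage that fit on an array of the given capacity."""
    return capacity_tb * 1000 / gb_per_min / 60

print(f"One hour of 4K: {footage_gb(60, UHD_GB_PER_MIN):.0f} GB")
print(f"4K hours on a 48TB array: {hours_on_array(48, UHD_GB_PER_MIN):.0f}")
```

The same arithmetic shows why codecs matter: cutting the per-minute rate in half doubles the hours of footage a given array can hold.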

Consequently, with everything being equal, 4K footage demands far more capacious storage solutions than footage shot in 1080p. Fortunately, there are codecs that allow editors to work in compressed 4K. Plus, the cost of storage continues its inexorable decline.

When it comes to capacity, the most cost-effective strategy remains spinning disks. Solid-state drives (SSDs) offer superior I/O performance and their prices are dropping, but they still cost substantially more per gigabyte than spinning disks. Traditional hard drives, however, must offer enough throughput to deliver multiple streams of higher-resolution footage without latency. Disks should have a rotational speed of at least 7200 rpm, and the array should offer at least 1GigE connectivity, if not 10GigE or 4/8/16Gb Fibre Channel.

Another issue is whether to configure storage for NAS or a SAN. Either approach will work. The decision rests on whether data will be streamed from one storage system (NAS) or from a cluster of storage solutions (SAN). Of course, either option demands enough storage for all the data.

The key takeaway is 4K productions need greater storage capacity and performance to support streaming very large files. Go with SSDs for their speed if budgets permit, but otherwise, make sure your spinning disks can deliver sufficient I/O throughput. Finally, contain production costs by selecting storage solutions that are robust, yet economical to purchase, scale, and operate. 4K is the future and the future starts today.

Hyperconvergence: Convergence in Warp Drive

Hyperconvergence is a relatively new technology paradigm that has been garnering attention. To understand hyperconvergence, consider traditional and converged IT infrastructures. The traditional infrastructure is composed of server, storage, and networking systems, all of which are siloed and, consequently, often require separate administrative groups to manage. Converged infrastructures advance this approach by coupling compute, storage, and networking with a virtualization software layer. Converged solutions offer virtues like simplified datacenter design and can be offered by a single vendor. Although they may consist of bundled components from multiple vendors, there’s one phone number to call when an issue arises.

Hyperconvergence takes the integration paradigm even further. This strategy tightly consolidates compute, storage, networking, and virtualization resources in a commodity hardware box supported by a single vendor. Going well beyond virtualized servers and storage, hyperconvergence also integrates other key functionalities such as backup, replication, data deduplication, WAN optimization, and public cloud gateways, rendering standalone solutions unnecessary. In essence, the IT infrastructure consists of homogeneous, self-sufficient appliances.

This everything-in-a-box approach offers advantages. Again, there’s a single vendor and because all the boxes, or nodes, are aggregated, they’re managed as a single federated system. Everything shares a common interface and is administered via common tools, eliminating the need for multiple teams.

Expansion is simplified. Instead of scaling up by adding more compute or storage resources, you scale out by adding more boxes to the environment. Scalability is granular and can be done appliance by appliance, stabilizing IT budgets. The downside, however, is you can’t add only more computing performance when needed or only more storage. With each node containing all resources, buying more computing power means also buying more storage. But because the appliances are x86 server chassis, their costs are reasonable.
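The granularity tradeoff described above is easy to see with numbers. Assuming a hypothetical appliance that bundles 32 cores and 48TB per node:

```python
# Assumed per-node resources—illustrative, not any vendor's spec sheet.
NODE_CORES, NODE_TB = 32, 48

def cluster_resources(nodes: int) -> dict:
    """Resources scale in lockstep: more storage always brings more compute."""
    return {"cores": nodes * NODE_CORES, "storage_tb": nodes * NODE_TB}

# Needing roughly 96TB more storage means buying two nodes—and 64 cores
# arrive along with it, whether or not the workload needs them.
print(cluster_resources(2))
```

This lockstep growth is the price of the simplicity: budgeting becomes predictable per node, but you can never buy just the one resource you are short of.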

Hyperconvergence is not a panacea for all IT environments. Because nodes don’t share resources among themselves, applications that generate large volumes of data might be better served by dedicated storage arrays, for example. Hyperconvergence is a good solution for smaller sites or distributed locations where IT resources are sparse and the costs of enterprise storage arrays can’t always be justified. Here, hyperconvergence can deliver IT services efficiently and cost-effectively.

Scale-out NAS Makes Affordable NAS Also Scalable

Suppose you run a media production company that generates large multimedia data files. Your data trove grows daily, demanding very scalable storage. What are your options? A storage area network (SAN) is one, but you don’t want the costs and complexities of building and owning a SAN. Moreover, you don’t need hundreds of users to access the files. Network attached storage (NAS) offers greater economy and relative simplicity, but traditionally, NAS hasn’t been associated with scalability.
A new generation of solutions has emerged, however, that gives NAS the scalability it’s always been missing.

Called scale-out NAS, this architecture uses management software to federate storage systems into a unified storage pool. In other words, multiple devices can be consolidated into a single, distributed storage system. Files are easily accessed regardless of their physical location and storage can be increased by adding more drives or even entire arrays. Connecting nodes is generally non-disruptive and the addition of more CPUs and RAM increases performance.

Because the storage components can be commodity products, scale-out NAS is a cost-effective approach for meeting extraordinary or unpredictable data growth. Many cloud providers use scale-out to cohere vast numbers of devices, often x86 computers, into a single large storage pool.

Don’t confuse scale-out NAS with scale-up NAS. The latter is limited to a single form factor or, more precisely, a single storage controller. This means expanding storage requires adding drives to a single non-clustered array. Scalability is limited, and enterprises must buy larger arrays than they currently need in order to anticipate future growth.

For companies burdened with economically saving large quantities of data, such as media companies like your production firm, healthcare providers or enterprises engaged in data-intensive, high-performance computing (HPC), scale-out NAS deserves strong consideration.

When evaluating scale-out NAS, make sure your solution meets your needs. It should support fully featured clustering as well as simultaneous NAS and SAN access. By supporting iSCSI, Fibre Channel, and InfiniBand interfaces, it offers the versatility to operate in any environment. Your solution should deliver five nines availability, supporting triple parity RAID to ensure performance even if three drives fail. Additionally, it needs to be easy to deploy and operate, freeing you or your staff from the hassle of specialized training.