At the beginning of every year, it’s customary to pull out the crystal ball and project trends for the new year. Often, however, promising solutions fail to catch on because their technologies are not yet mature or because the economics still favor other strategies. With that said, below are some trends that we’re confident are good bets for 2017.
Flash, flash, flash…high-capacity flash at 15TB per drive
In our prior blog, we discussed the arrival of affordable flash storage. Samsung has released 2.5-inch SAS flash drives with 15TB of capacity, and Seagate is preparing to introduce 60TB 3.5-inch SAS drives. Other advances are not far behind. As their capital and operating costs decrease, flash drives are taking on an ever-expanding role in tier 1 storage. Small and mid-size organizations are now adopting all-flash arrays, gaining substantial performance boosts, slashing energy costs, and keeping pace with larger competitors.
Welcome to NVMe
This trend piggybacks on the proliferation of all-flash storage. Solid-state drives are constrained by a bottleneck: legacy storage buses. SAS and SATA were designed for spinning disks, not for the blistering speeds of SSDs. These widely used protocols lack the bandwidth and command queuing to fully exploit SSDs, seriously compromising the performance of SSD storage. To the rescue comes non-volatile memory express, or NVMe, a protocol the industry created to provide far more bandwidth than SAS and SATA along with greatly enhanced queuing.
If you think SSDs are fast now, just wait until you unleash their full potential. Your wait won’t be long.
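To make the queuing difference concrete, here is a minimal Python sketch comparing the commonly cited command-queue limits of AHCI (the host interface used with SATA) against the NVMe specification maximums. These figures are the widely quoted protocol limits, not measurements from any particular drive:

```python
# Commonly cited queueing limits: AHCI (SATA) vs. the NVMe specification.
# These are protocol maximums, not what any single drive actually sustains.

AHCI_QUEUES = 1            # AHCI exposes a single command queue...
AHCI_QUEUE_DEPTH = 32      # ...with up to 32 outstanding commands

NVME_MAX_QUEUES = 65_535   # NVMe allows up to ~64K I/O queues per controller...
NVME_QUEUE_DEPTH = 65_536  # ...each up to 64K commands deep

ahci_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH
nvme_outstanding = NVME_MAX_QUEUES * NVME_QUEUE_DEPTH

print(f"AHCI/SATA in-flight commands: {ahci_outstanding}")
print(f"NVMe in-flight commands:      {nvme_outstanding:,}")
```

The raw numbers overstate what any real controller delivers, but they illustrate why a protocol designed for flash can keep vastly more commands in flight than one designed around a single spinning disk.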
Software-defined storage

The market can’t seem to agree on a universal definition of software-defined storage (SDS), but a lot of people are talking about it anyway. SDS separates storage services and functionality from the underlying storage hardware. Rather than rely on individual hardware solutions for services and functionality, administrators use software to centrally manage and control storage across the environment. They rely on hardware only to physically house the data and on software for everything else. SDS reduces complexity, but its value proposition extends further: organizations can use commodity hardware while obtaining much of the functionality and performance of costlier arrays. Bear in mind that SDS is not storage virtualization. SDS provides management and control services regardless of whether the storage is virtualized. With the declining cost of commodity hardware and the persistent need for additional capacity, more and more enterprises are turning to cost-saving SDS solutions to manage and protect their data.
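As a toy illustration of the decoupling described above, the Python sketch below models a software control plane that applies one replication policy uniformly across dissimilar backends. All class and backend names here are hypothetical, invented for this example; no real SDS product exposes this exact interface:

```python
# Toy SDS model: policy and placement live in software, while the
# backends only house bytes. All names here are hypothetical.

class Backend:
    """A dumb storage target: it only stores data."""
    def __init__(self, name):
        self.name = name
        self.blobs = {}

    def write(self, key, data):
        self.blobs[key] = data


class ControlPlane:
    """Centralized services: one replication policy, any hardware."""
    def __init__(self, backends, replicas=2):
        self.backends = backends
        self.replicas = replicas

    def store(self, key, data):
        # Replicate across the first N backends, regardless of vendor.
        targets = self.backends[:self.replicas]
        for backend in targets:
            backend.write(key, data)
        return [backend.name for backend in targets]


pool = ControlPlane([Backend("commodity-x86-1"),
                     Backend("commodity-x86-2"),
                     Backend("legacy-array")], replicas=2)
placed = pool.store("invoice-2017.pdf", b"...")
print(placed)
```

The point of the sketch is the shape, not the code: the policy (how many copies, where) lives entirely in the control plane, so swapping a legacy array for a commodity box changes nothing about how data is managed.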
Software-defined data centers
In a software-defined data center (SDDC), the entire infrastructure is controlled by software and delivered as a service. Networking, storage, and servers are virtualized, decoupling provisioning, configuration, security, administration, and other processes from the hardware. Transforming a conventional data center into an SDDC is a large undertaking and demands the transformation of IT roles as well, but 2017 will see it happen more often because the benefits are compelling: greater agility, cost savings from a reliance on commodity hardware, less complexity, and smoother deployment of cloud services.
32G Fibre Channel
Pundits have prognosticated the eventual demise of Fibre Channel as a storage networking standard: file-based unstructured data is growing more rapidly than block-based data, and the former relies on Ethernet, as do cloud and hyperconverged architectures. However, the introduction of 32G FC will keep the standard healthy in 2017. The increased use of flash drives, along with the roll-out of NVMe and related technologies, shifts the performance bottleneck to the storage network, ensuring that FC, especially at 32G speeds, will maintain a healthy presence in many data centers. 32G FC solutions were introduced last year, with switches from Cisco and Brocade and adapters from QLogic and Broadcom, and we expect storage array vendors to add 32G FC support this year.
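A back-of-the-envelope calculation shows why the generation jump matters. The sketch below uses the published serial line rates for 16G and 32G FC (14.025 and 28.05 Gbaud, both with 64b/66b encoding); the results are approximate raw rates per direction, a little above the roughly 1,600 and 3,200 MB/s usable throughput figures commonly quoted for these speeds:

```python
# Approximate per-direction throughput for 16G and 32G Fibre Channel,
# derived from published line rates (Gbaud) and 64b/66b encoding overhead.

def fc_throughput_mb_s(gbaud):
    payload_bits = gbaud * 1e9 * 64 / 66   # strip 64b/66b encoding overhead
    return payload_bits / 8 / 1e6          # bits per second -> MB per second

t16 = fc_throughput_mb_s(14.025)   # 16GFC line rate: 14.025 Gbaud
t32 = fc_throughput_mb_s(28.05)    # 32GFC line rate: 28.05 Gbaud

print(f"16GFC ~ {t16:.0f} MB/s per direction")
print(f"32GFC ~ {t32:.0f} MB/s per direction")
```

Because 32GFC exactly doubles the line rate while keeping the same encoding, the usable bandwidth doubles as well, which is precisely the headroom that flash and NVMe workloads push onto the network.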