Virtual Desktop Infrastructures Need Very Fast Storage

Virtual desktop infrastructure (VDI), often delivered from the cloud as desktop-as-a-service (DaaS), has been widely adopted. VDI simplifies IT management and backups, facilitates security, and reduces hardware and operating costs. VMware, Citrix, Amazon, Parallels, and others offer VDI solutions, each with its own implementation and features. Some run in the data center, some in the cloud, and some are hybrids. Some target small businesses, some large enterprises, and others organizations in between.

All, however, depend on robust storage to work well.

To reduce costs with VDI deployments, organizations generally pack as many virtual machines as possible onto the fewest physical servers and then connect those servers to a shared storage system. This creates I/O contention, which undermines the predictable performance users came to expect when their operating systems, applications, and files resided on their own workstations or laptops.

Over the course of the day, hundreds or thousands of users accessing applications and data, or storing and searching for files, means that shared storage must have substantial IOPS capabilities to keep pace. The biggest pressure comes from boot storms: a tsunami of I/O that hits when the bulk of users arrive in the morning and log in at roughly the same time, expecting their desktops to be instantly available.

Not long ago, the only storage option was spinning disk. Yet even with tricks like short-stroking spindles, traditional arrays were hard-pressed to keep up with VDI demands, especially boot storms. Hard drives may be relatively inexpensive, but they’re not the smart investment for VDI storage platforms. All-flash arrays are. They deliver far superior I/O capabilities, and they keep getting bigger and less expensive. They also consume less power and run far cooler than traditional platforms, slashing energy costs.
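
To put rough numbers on it, here is a back-of-envelope boot-storm calculation. Every figure is an illustrative assumption, not a benchmark, so substitute measurements from your own environment:

```python
# Back-of-envelope boot-storm sizing. Every figure here is an illustrative
# assumption; substitute measurements from your own environment.
import math

desktops = 1000          # users logging in during the morning window (assumed)
iops_per_boot = 50       # sustained IOPS one virtual desktop needs while booting (assumed)
peak_iops = desktops * iops_per_boot

hdd_iops = 150           # rough ceiling for a 10K RPM hard drive (assumed)
ssd_iops = 50_000        # rough figure for a single enterprise SSD (assumed)

print(f"Boot-storm demand:  ~{peak_iops:,} IOPS")
print(f"Hard drives needed: ~{math.ceil(peak_iops / hdd_iops)}")
print(f"SSDs needed:        ~{math.ceil(peak_iops / ssd_iops)}")
```

Even with generous assumptions, hundreds of spindles are needed just to absorb the login peak, while a handful of SSDs can cover the same I/O load (capacity and redundancy, of course, dictate the final drive count).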

For a while, hyper-converged infrastructures (HCIs) were paired with VDI, but the shortcomings of HCI soon became apparent for this use case. HCI doesn’t scale well for specific workloads like VDI: although vendors are trying to remedy this, you must pay for an entire HCI block even if you only need more storage or CPU power.

An all-flash array enables you to efficiently balance storage with your VDI workload. You’ll gain easy, cost-effective scalability, power consumption so low you’ll be hard-pressed to quantify it in the context of your data center, and the hyperspeed performance that dissipates boot storms and keeps users productive.

Dropbox Turns to On-Premise Storage Rather than the Cloud, Saves Millions

A story published on GeekWire should make many organizations rethink their storage strategies. It recounts how Dropbox bucked industry trends by moving its popular file-storage service away from the cloud—AWS’s S3 storage service—to its own infrastructure. By investing in its own data centers rather than spending on third-party infrastructure, Dropbox saved $39.5 million in 2016 and $35.1 million in 2017. What Dropbox discovered through its “Infrastructure Optimization” project is that on-premise storage designed for an enterprise’s specific needs can be much more efficient than relatively generic cloud offerings.

There are persistent arguments for on-premise storage. You avoid the security worries that arise when your data leave the confines of your firewalls—and your control—to traverse the Internet to somebody else’s network. You needn’t worry about sketchy neighbors on multi-tenant clouds, which is what most commercial clouds are. Additionally, moving data across your on-premise resources is simpler and faster than moving data between clouds. And your compliance, governance, and peace-of-mind needs can be best met when you maintain local ownership of your data.

Data are far more rapidly accessible when stored locally rather than somewhere else in the country or, worse, somewhere else on the planet. On-premise storage benefits such established needs as time-sensitive transactional processing, backups, and data recovery, and there are emerging applications for which speed and safety will be paramount.

For example, analytics now go well beyond Hadoop-style, big-data projects as even small organizations increasingly use analytics to extract more value from their data. Analytics offers the knowledge to improve everything from IT and operational efficiencies to marketing and customer relations. But to avoid latency, especially when real-time or near real-time analysis is required, compute and storage must sit close together, not separated by the Internet. The argument for local storage becomes even stronger with the adoption of technologies like NVMe and NVMe over Fabrics, which will greatly speed data movement across local networks and expedite data analytics.

Examples like Dropbox show that public clouds are not always the most cost-effective solution for data storage. The best bet is keeping data on premises in a private or hybrid cloud, replicating it to a second site for backup and recovery, and archiving everything on a slow but inexpensive service such as Amazon Glacier or a private cloud provider’s cold-storage tier. You’ll gain control, performance, and security. And you can economize on your IT expenses.

Hyper-convergence vs Convergence

Once upon a time, enterprises bought the components needed to deliver IT services, cobbled them together, and with a little sweat and aggravation, got them to work. Demands for more robust services prompted companies to turn to best-of-breed solutions, but this resulted in a mélange of systems that presented management and interoperability issues. These problems were exacerbated by virtualization technologies that must span devices.

In response, converged solutions arrived on the market. These are turnkey systems that include everything IT requires—servers, networking, storage, hypervisors, and management capabilities. The components come from various vendors, but they are all pre-tested to ensure interoperability and are supported by a single vendor. Converged solutions are quick to deploy and easier for IT staffs to maintain, although larger enterprises with separate server, storage, and networking teams may require organizational restructuring. Regardless, fully converged, single-vendor solutions allow IT organizations to do more at less cost.

But a new approach—hyper-converged infrastructure (HCI)—further integrates components to better support virtualized services, particularly software-defined storage (SDS). HCI is appliance-based and software-driven, and, like converged solutions, is supported by a single vendor. The appliances are commodity hardware boxes, each integrating compute, storage, networking, and virtualization technologies. Unlike converged systems, the storage, compute, and networking of HCI are so tightly fused that they cannot be broken down into separate components. Each appliance is a node in the system and all services are centrally controlled. Storage is decoupled from the hardware so that capacity across all the nodes appears as one virtualized pool. Scaling means simply adding appliances.

HCI isn’t always a slam dunk. When an IT department needs to scale just one resource, such as compute or storage, it still has to pay for boxes that contain all the resources. Absolutely mission-critical applications might perform better on dedicated hardware, isolated from other apps that could consume essential bandwidth. HCI also might not make sense for remote and branch offices (ROBOs).
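
To make that scaling trade-off concrete, here is a minimal sketch using hypothetical appliance specs; no vendor’s actual node configuration is implied:

```python
# Illustrative only: what scaling a single resource looks like when capacity
# can be added solely in whole-appliance increments. Node specs are made up.
import math

node = {"cpu_cores": 32, "ram_gb": 512, "storage_tb": 20}   # hypothetical HCI appliance

extra_storage_tb = 50                                       # we only need more storage
nodes = math.ceil(extra_storage_tb / node["storage_tb"])    # HCI scales by whole nodes

print(f"Nodes purchased:       {nodes}")
print(f"Storage acquired:      {nodes * node['storage_tb']} TB (needed {extra_storage_tb} TB)")
print(f"CPU cores also bought: {nodes * node['cpu_cores']} (possibly idle)")
print(f"RAM also bought:       {nodes * node['ram_gb']} GB (possibly idle)")
```

In this scenario you end up paying for 96 cores and 1.5 TB of RAM just to add 50 TB of capacity, which is exactly the overhead a disaggregated storage array avoids.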

Yet HCI offers many benefits over converged infrastructures, such as superior scalability, flexibility, control, and ease of use. IT can deploy the most advanced SDS functionality and automation, and achieve remarkable efficiencies. It reduces latency, better exploits the performance of solid-state drives, and leverages software-defined infrastructure. It might be your best choice…until something better comes along.

Cloud Storage Hosting

Storage has always been a primary reason why companies turn to cloud computing. Clouds are ideal for backing up data and storing archival data. This is underscored by the advent of object storage, which makes vast data stores practical. Over time, providers offered additional use cases such as Software as a Service (SaaS), which delivers applications from the cloud, and Infrastructure as a Service (IaaS), which augments or replaces the whole data center.

Solutions must be cost-effective for companies and yet profitable for providers. The keys are economies of scale and efficiencies. Virtualization is the secret sauce that makes clouds work. Providers use virtualization to efficiently pool storage resources and enable multi-tenancy, thus leveraging hardware and lowering the costs of storage and computing. They can deliver scalability and services like cloud bursting, increasing the value of their offerings to customers. Now it’s up to providers to deliver the performance, security, and compliance that make clouds a better business proposition than do-it-yourself data centers.

All-Flash Storage Promises a Go-To Strategy for MSPs

Managed service providers (MSPs) need to offer fast, affordable storage. Storage is a perennial concern for enterprises of all sizes, and many are considering offloading their storage to reduce costs and headaches. But to win storage business, MSPs face formidable players like AWS, Azure, and Google Cloud. To compete, they must offer high-value, cost-effective solutions.

To this end, forward-thinking MSPs should consider all-flash solutions, which are positioned to furnish the performance, scalability, and availability that enterprises demand. The capital cost of flash storage is still greater than that of spinning disk, but its price per gigabyte continues to drop and it’s only a matter of time before it achieves parity with the legacy technology.

Flash storage platforms can trim operating costs. They consume less power than their mechanical-drive counterparts and require less space, providing substantial cost-savings every month.
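
As a rough way to quantify that, here is a simple estimate of monthly power-and-cooling spend for about a petabyte of raw capacity. The wattages, drive capacities, electricity rate, and PUE are assumptions, so plug in your own figures:

```python
# Rough monthly power-and-cooling estimate for ~1 PB raw. All inputs assumed.
import math

capacity_tb = 1000
kwh_rate = 0.12                         # $ per kWh (assumed)
pue = 1.8                               # facility overhead multiplier (assumed)
hours = 24 * 30

hdd = {"tb": 8,  "watts": 9.0}          # typical nearline HDD (assumed)
ssd = {"tb": 15, "watts": 5.0}          # high-capacity enterprise SSD (assumed)

def monthly_power_cost(drive):
    drives = math.ceil(capacity_tb / drive["tb"])
    kwh = drives * drive["watts"] * hours * pue / 1000
    return drives, kwh * kwh_rate

for name, drive in (("HDD array", hdd), ("All-flash", ssd)):
    drives, cost = monthly_power_cost(drive)
    print(f"{name}: {drives} drives, ~${cost:,.0f}/month for power and cooling")
```

The drive count tells a second story: roughly half as many devices means fewer shelves, less rack space, and less to cool, and those savings recur every month.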

Flash also offers far higher IOPS than spinning disk, and this performance gap will only widen with the adoption of NVMe (non-volatile memory express) drives. NVMe eliminates the bottlenecks caused by storage subsystems engineered for slower spinning disks. Additionally, flash is getting denser all the time, which boosts scalability and further conserves space. Finally, technologies like dual controllers and virtualization strategies will help ensure flash storage systems offer the levels of availability that the large cloud providers tout.

Increasingly, all-flash storage solutions give MSPs the advantages and economies to compete against AWS and the other behemoth cloud providers.

Storage Upheavals Are Opportunities for MSPs (Managed Service Providers)

For years, the storage business had been reasonably stable. Primary storage was local and backed-up data were nearby or at remote sites along with archived data. Production files were rapidly accessible, and governance and compliance demands were more or less met. The choices were finite and life was, for the most part, relatively orderly.

But things change over time. Clouds changed the economics of storage from a capital expense to an operating cost. They offer repositories that are less expensive than do-it-yourself solutions and virtually infinite in capacity. Moreover, there are now public and private clouds to choose from, and the advent of object storage delivers a flat, metadata-driven approach that is ideal for stashing billions or even trillions of files.

Add to this software-defined storage (SDS), which abstracts storage from the underlying hardware. SDS offers control and scalability but requires expertise to deploy effectively. Throw in solid-state and in-memory storage, which can greatly enhance the performance of storage systems, as well as the rise of containers, and the storage business has become a vexing mélange of technologies and options.

Which is fertile ground for MSPs. Rather than acquire new layers of IT know-how, enterprises of all sizes and in all verticals can turn to MSPs to navigate the new world of storage. MSPs can manage services in both the data center and cloud to meet the needs for primary, backup, and archival storage cost-effectively.

Opportunistic MSPs can assess organizations’ demands for performance and scalability, and oversee migrations of data to public or private clouds. They can ensure that security, compliance, and governance needs are always met, and implement disaster recovery strategies that satisfy each client’s business mandates. Moreover, MSPs can leverage hardware resources by using virtualization to consolidate multiple resources into a single, cost-efficient cloud offering.

By offering storage as a service, MSPs can find that change and upheaval present business opportunities.

Hybrid vs Public Clouds: Which makes sense for you?

If your organization is not utilizing some form of cloud, the odds are it soon will. The question is what kind of cloud—public, private, or hybrid? Private clouds can be dedicated clouds provided by vendors, in which no resources are shared with any other customer. They can also be onsite solutions, but building and operating a cloud data center is costly and demands solid IT skills. Large enterprises deploy private clouds when security, compliance, and control are paramount, requiring that data be kept within enterprise firewalls or a vendor’s dedicated firewalls. Public and hybrid clouds are for everyone else.

Public clouds offer compelling benefits. They let you outsource IT infrastructure, reducing both hardware expenses and the operational costs of maintaining skilled IT staff. They provide dependability, automated deployments, and scalability. They enhance cost-efficiencies by allowing you to pay only for the resources you use, when you use them. They enable testing environments to be quickly constructed and deconstructed. You can manage your resources and assets in the cloud yourself, or have a managed cloud services provider do it for you.

Moreover, public clouds can slash the relentless costs of storing ever-increasing data troves. Rather than continually invest in on-site storage capacity, you can convert short-term, long-term, backup, and archival storage costs to a more economical operating expense in public clouds. When users must access large files without latency, they can turn to emerging on-premise caching solutions that sync with cloud stores.

There isn’t one definition of a hybrid cloud, but hybrids generally offer the best of both worlds by combining public and private cloud services. You can still control key applications and data sets within the confines of the enterprise network while keeping other applications and long-term data sets in the cloud. You can use the public cloud as an insurance policy, expanding services onto the cloud during peak periods when your compute and/or storage demands exceed your onsite capabilities. This is known as cloud bursting. A hybrid cloud can help maintain operational continuity during planned or unplanned outages, power blackouts, and scheduled maintenance windows. Hybrid clouds also offer security options: you can keep the apps and data that demand vigorous protection onsite, where you have complete control, while running less security-sensitive apps in a public cloud. You can even use a hybrid cloud to experiment with incrementally migrating infrastructure offsite.
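
To make the bursting decision concrete, here is a minimal sketch of the placement logic; the capacity figure and threshold are assumptions, not recommendations:

```python
# Toy illustration of cloud bursting: run on local capacity until utilization
# crosses a threshold, then spill new work into the public cloud. All figures assumed.

LOCAL_CAPACITY_VMS = 200     # VMs the on-premises cluster can comfortably host (assumed)
BURST_THRESHOLD = 0.85       # burst once projected local utilization exceeds 85% (assumed)

local_running = 150

def place_workload(new_vms: int) -> str:
    """Place work locally if it fits under the threshold, otherwise burst."""
    global local_running
    projected = (local_running + new_vms) / LOCAL_CAPACITY_VMS
    if projected <= BURST_THRESHOLD:
        local_running += new_vms
        return f"{new_vms} VMs placed on-premises ({projected:.0%} utilization)"
    return f"{new_vms} VMs burst to the public cloud (would hit {projected:.0%} locally)"

print(place_workload(10))    # fits on-premises
print(place_workload(40))    # exceeds the threshold, so it bursts
```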

Hybrid clouds offer many substantial benefits but bear in mind that they can present management challenges. You’ll need to federate disparate environments (although some vendors argue that management is simplified when the datacenter and cloud both use similar technologies). Scripts need to span public and private infrastructure, policy changes must be applied consistently, and operations and tasks automated across multiple environments. Fortunately, there are cloud management solutions available, with more to come.

Align a cloud solution with your performance needs, security requirements, and budget. You don’t have to put everything in a cloud, but you’ll need to deploy and manage what you don’t. Make a balance sheet of your current and projected capital and operating costs to determine which solution makes the most business sense for your organization.
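
A starting point for that balance sheet can be as simple as the sketch below; every figure is a placeholder to be replaced with your own quotes, invoices, and staffing estimates:

```python
# A minimal capex-vs-opex comparison over a planning horizon. All figures are placeholders.

years = 5

on_prem = {
    "capex": 250_000,           # arrays, servers, network gear (assumed)
    "opex_per_year": 60_000,    # power, cooling, support contracts, admin time (assumed)
}
public_cloud = {
    "capex": 0,
    "opex_per_year": 120_000,   # consumption charges, egress, managed services (assumed)
}

def total_cost(model, years):
    return model["capex"] + model["opex_per_year"] * years

for name, model in (("On-premises", on_prem), ("Public cloud", public_cloud)):
    print(f"{name:12s}: ${total_cost(model, years):,} over {years} years")
```

With these particular placeholders the two options land within about ten percent of each other, which is precisely why the exercise is worth doing with your real numbers.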

HDD vs SSD Storage

How do solid-state drives (SSDs) stack up against traditional hard-disk drives (HDDs)? Like many things in life, it depends on money and what you need. SSDs deliver greater performance than HDDs, especially faster, more predictable read times. SSD performance will only increase as vendors deploy non-volatile memory express (NVMe), a protocol designed to unleash their full potential. Familiar protocols like SCSI, SATA, and Fibre Channel were engineered for HDDs and constrain the capabilities of SSDs.

SSDs deliver higher IOPS, increased throughput, and reduced latency compared with HDDs. When evaluating SSDs, consider the needs of your business-critical applications. Apps that benefit from greater IOPS include virtual desktop infrastructure (VDI), server virtualization, and systems that serve many concurrent users. Apps that require high throughput are those that move large files and data sets, such as data analytics, video processing, and medical imaging. Applications that are sensitive to latency are typically composed of multiple sub-applications in a stack or spread across nodes, such as transactional processing systems; they also include clustered databases, streaming apps, and high-performance computing solutions.

SSDs can also improve resource utilization, supporting, for example, many more virtual machines than HDDs without performance compromises. Their speed also enables functionalities like data deduplication and compression to be performed on active storage, rather than on backups and archives.
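
As a toy illustration of why that matters, the sketch below mimics inline block deduplication, the kind of data reduction a fast flash array can afford to run on live data. Real arrays implement this in optimized array software, not Python, and the workload here is deliberately extreme:

```python
# Toy inline block deduplication: identical blocks are stored once and
# referenced many times. VDI gold images make this an extreme (best) case.
import hashlib

BLOCK_SIZE = 4096
store = {}        # fingerprint -> physical block (the deduplicated pool)
volume = []       # logical volume: ordered list of fingerprints

def write_block(data: bytes) -> None:
    """Keep only one physical copy of each unique block."""
    fp = hashlib.sha256(data).hexdigest()
    if fp not in store:
        store[fp] = data
    volume.append(fp)

# Simulate 1,000 block writes drawn from just two distinct patterns.
for i in range(1000):
    write_block((b"A" if i % 4 else b"B") * BLOCK_SIZE)

logical = len(volume) * BLOCK_SIZE
physical = len(store) * BLOCK_SIZE
print(f"Logical data written: {logical // 1024} KiB")
print(f"Physical data stored: {physical // 1024} KiB")
print(f"Reduction ratio:      {logical // physical}:1")
```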

SSDs offer additional advantages over HDDs. They need meaningfully less energy to run, which decreases power costs. They also run cooler than HDDs, lessening cooling expenses. By slashing energy requirements, all-flash arrays (AFAs) operate more economically, substantially reducing ownership costs. Another consideration is density: with SSDs, you can squeeze more capacity into a rack unit, conserving space.

But there are other factors to consider before investing heavily in SSDs. Blazing-fast SSDs might eliminate one bottleneck, but they won’t mitigate others in the network and elsewhere. Application and data delivery are only as fast as their slowest links.

Another factor is cost. Although their prices are dropping, SSDs are still more expensive to buy than tried-and-true HDDs. This is why hybrid arrays have emerged, in which SSDs handle active data while HDDs economically store inactive and backup data.
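
The policy behind a hybrid array can be sketched in a few lines; the threshold and extents below are hypothetical, and production arrays use far more sophisticated heat maps:

```python
# Minimal hot/cold tiering sketch: frequently accessed extents live on flash,
# everything else stays on disk. Threshold and workloads are made up.
from dataclasses import dataclass

HOT_THRESHOLD = 100          # accesses per day that qualify as "active" (assumed)

@dataclass
class Extent:
    name: str
    reads_last_24h: int
    tier: str = "hdd"

def rebalance(extents):
    """Promote hot extents to SSD, demote cold ones to HDD."""
    for e in extents:
        e.tier = "ssd" if e.reads_last_24h >= HOT_THRESHOLD else "hdd"

extents = [
    Extent("database-index", reads_last_24h=5_000),
    Extent("monthly-report", reads_last_24h=3),
    Extent("vm-boot-image",  reads_last_24h=800),
]
rebalance(extents)
for e in extents:
    print(f"{e.name:15s} -> {e.tier.upper()}")
```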

Until the cost per gigabyte of flash storage aligns with that of HDDs, consider SSDs for storage that demands performance and HDDs for everything else; hybrid arrays might be your best solution. However, it is now easier than ever to justify using AFAs. It is only a matter of time before data centers are all flash and long-term storage resides on SSDs. With their many advantages, SSDs can be the smart, future-proof investment.

AFAs, hybrid arrays, and HDD arrays are each optimized to meet specific needs. Evaluate what you require, benchmark your systems, and work with vendors to determine the right solution for you. An AFA may or may not make sense for you now, but one is in your future; it’s just a matter of when.

All-Flash Storage Arrays (AFAs)

For years, the world relied on hard disk drives (HDDs) to store digital information. These electromechanical devices worked well and their cost per gigabyte continually decreased. However, they had inherent performance constraints because of the time required to read or write data on spinning metal platters with magnetic coatings. Burgeoning data troves and new generations of sophisticated applications rendered their read/write times, even when measured in milliseconds, a computing and business bottleneck.


New chip-based technologies emerged that resulted in solid-state drives (SSDs) with no moving parts. SSDs were much faster than HDDs and provided more predictable response times for time-critical applications, but their costs were high and their capacities limited relative to HDDs. They were most cost-effective in hybrid storage arrays, in which SSDs cached active data on tier 0 while HDDs still bore the brunt of the storage load.

The costs of SSDs are inexorably decreasing while their capacities increase. Fifteen terabyte SSDs are available and larger sizes are on the horizon. SSDs are beginning to be practical for longer-term storage and for applications that are less speed-critical. As a result, what has emerged is a new storage paradigm known as the all-flash storage array.

Initially, AFAs were too costly for all but enterprise use, but this is changing. Moreover, AFAs are beginning to fully leverage the performance of their SSDs. Hybrid arrays generally rely on the same protocols that were developed for HDDs, such as SCSI, SATA, and Fibre Channel, which are incapable of supporting the blistering speeds of SSDs. The industry responded with non-volatile memory express (NVMe), a faster, more efficient protocol developed specifically for flash drives. In addition, storage system software is becoming multi-threaded so that CPU cores can cost-effectively keep pace with flash speeds.

Are AFAs finally practical for small- and medium-sized organizations? They’re still pricey compared with traditional HDD arrays, especially arrays designed for capacity. Yet you must consider more than acquisition costs. In our next blog, we’ll touch upon some of the considerations and use cases to determine whether AFAs make sense for you.

Projecting trends for 2017

At the beginning of every year, it’s customary to pull out the crystal ball and project trends for the new year. Often, however, promising solutions fail to catch on because their technologies are not yet mature or because economics still favor other strategies. With that said, below are some trends that we’re confident are good bets for 2017.

Flash, Flash, Flash…High-capacity flash at 15TB per drive.

In our prior blog, we discussed the arrival of affordable flash storage.  Samsung has released 2.5-inch SAS flash drives with 15TB of capacity and Seagate is preparing to introduce 60TB 3.5-inch SAS drives.  Other advances are not far behind.  As their capex and opex decrease, flash drives are taking on an ever-expanding role in tier 1 storage.  Small and mid-size organizations are now adopting all-flash arrays, thereby gaining substantial performance boosts, slashing energy costs, and keeping pace with larger competitors.

Welcome to NVMe

This trend piggybacks on the proliferation of all-flash storage.  Solid-state drives are constrained by a bottleneck—legacy storage buses.  SAS and SATA were designed for spinning disks, not for the blistering speeds of SSDs.  These widely-used protocols lack the bandwidth to fully support SSDs, thereby seriously compromising the performance of SSD storage.  To the rescue is non-volatile memory express, or NVMe.  The industry created the technology to provide both far more bandwidth than SAS and SATA and greatly enhanced queuing.
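
The queuing difference is easy to quantify from the published spec-level limits; real devices and drivers expose fewer queues than the maximums shown here:

```python
# Spec-level queuing limits per device; actual products expose far fewer.
interfaces = {
    "SATA (AHCI)": {"queues": 1,      "depth": 32},
    "SAS":         {"queues": 1,      "depth": 254},      # commonly cited device limit
    "NVMe":        {"queues": 65_535, "depth": 65_536},
}

for name, q in interfaces.items():
    outstanding = q["queues"] * q["depth"]
    print(f"{name:12s}: {q['queues']:>6} queue(s) x depth {q['depth']:>6} "
          f"= up to {outstanding:,} outstanding commands")
```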

If you think SSDs are fast now, just wait until you unleash their full potential.  Your wait won’t be long.

Software-defined storage

The market can’t seem to agree on a universal definition of software-defined storage (SDS), but a lot of people are talking about it anyway.  SDS separates storage services and functionality from the underlying storage hardware.  Rather than rely on individual hardware solutions for services and functionality, administrators use software to centrally manage and control storage across the environment; they rely on hardware only to physically house the data, and on software for everything else.  SDS reduces complexity, but its value proposition extends further: organizations can use commodity hardware while obtaining much of the functionality and performance of costlier arrays.  Bear in mind that SDS is not storage virtualization; SDS provides management and control services regardless of whether the storage is virtualized.  With the declining cost of commodity hardware and the persistent need for additional capacity, more and more enterprises are turning to cost-saving SDS solutions to manage and protect their data.
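
The core idea can be illustrated with a deliberately simplified sketch: one software layer pools capacity from dissimilar boxes and places volumes by policy rather than by hardware. Everything here, classes and policies alike, is hypothetical and not modeled on any real product:

```python
# Highly simplified SDS illustration: heterogeneous backends behind one
# policy-driven provisioning layer. Not modeled on any real product.

class Backend:
    def __init__(self, name, media, free_gb):
        self.name, self.media, self.free_gb = name, media, free_gb

class SoftwareDefinedPool:
    def __init__(self, backends):
        self.backends = backends

    def provision(self, size_gb, policy="capacity"):
        """Choose a backend by policy: 'performance' prefers flash when available."""
        candidates = [b for b in self.backends if b.free_gb >= size_gb]
        if policy == "performance":
            candidates = [b for b in candidates if b.media == "ssd"] or candidates
        target = max(candidates, key=lambda b: b.free_gb)
        target.free_gb -= size_gb
        return f"{size_gb} GB volume placed on {target.name} ({target.media})"

pool = SoftwareDefinedPool([
    Backend("commodity-jbod-1", "hdd", 40_000),
    Backend("commodity-jbod-2", "hdd", 25_000),
    Backend("flash-shelf-1",    "ssd", 8_000),
])
print(pool.provision(500, policy="performance"))
print(pool.provision(10_000, policy="capacity"))
```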

Software-defined data centers

In a software-defined data center (SDDC), the entire infrastructure is controlled by software and delivered as a service.  Networking, storage, and servers are virtualized, thereby decoupling provisioning, configuration, security, administration, and other processes from the hardware.  Transforming a conventional data center into an SDDC is a large undertaking and demands a transformation of IT roles as well, but 2017 will see it happen more often because of the benefits: SDDCs promise greater agility, cost savings due to a reliance on commodity hardware, less complexity, and better deployment of cloud services.

32G Fibre Channel

Pundits have prognosticated the eventual demise of Fibre Channel as a storage networking standard.  File-based unstructured data is growing more rapidly than block-based data, and the former relies on Ethernet, as do cloud and hyperconverged architectures.  However, the introduction of 32G FC will keep the standard healthy in 2017.  The increased use of flash drives, as well as the roll-out of NVMe and related technologies, shifts performance bottlenecks to the storage network, ensuring that FC, especially at 32G speeds, will maintain a healthy presence in many data centers.  32G FC solutions were introduced last year—switches from Cisco and Brocade and adapters from QLogic and Broadcom—and we’ll be seeing storage-array vendors support 32G FC as well.
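
A rough comparison shows why the extra headroom matters once flash and NVMe land behind the fabric; the throughput figures are approximate, commonly cited values rather than benchmarks:

```python
# Approximate usable throughput per Fibre Channel link versus what a modest
# shelf of NVMe SSDs can deliver. All figures are rough, commonly cited values.
import math

links = {"8G FC": 800, "16G FC": 1_600, "32G FC": 3_200}   # ~MB/s per link
nvme_read_mbps = 3_000                                     # ~MB/s per PCIe 3.0 x4 NVMe SSD
ssds_in_shelf = 12

shelf_throughput = nvme_read_mbps * ssds_in_shelf
for name, mbps in links.items():
    print(f"{name:6s}: ~{mbps:>5} MB/s per link -> "
          f"{math.ceil(shelf_throughput / mbps)} links to match {ssds_in_shelf} NVMe SSDs")
```

Halving the number of links needed to feed the same shelf of drives is a practical reason the standard isn’t going anywhere just yet.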