The “New Normal” Demands Storage that’s Affordable and Reliable

Much has been said about today’s “new normal.” Life and commerce are certainly disrupted, but not everything has changed. Data continue to be generated every day and are still foundational to business success. Even in these times, there are a lot more data today than there were yesterday. That will never change. And as in the “old normal,” this information must be stored and processed.

Thus, it should be no surprise that amid economic uncertainty, more and more small and mid-sized enterprises are eschewing costly storage platforms from big-name vendors in favor of systems that have proven to get the job done economically and easily. Today’s business environment all but necessitates such solutions. When their vendors also offer flexible payment options, these systems present a compelling value proposition.

It makes sense to economize with robust solutions that work quickly and simply. Such systems impose no compromises. They’re available in all capacities and some can easily scale to petabyte size. When light-speed performance is needed for applications like VDI or virtual machines, there are all-flash platforms that are very competitively priced against big-vendor systems. For greater economy, hybrid arrays offer a flash tier for high-speed processing and a less-costly spinning-disk tier for backup. For further savings, such solutions can reduce ownership costs by being easy to install and manage.

These are uncertain times and no one knows what the next “normal” will look like. But what is certain is that both today and tomorrow, organizations need to shrewdly maximize their storage budgets. This is why so many are turning to high-value solutions from smaller, more flexible vendors. They realize that the “new normal” demands nothing less than such practicality.

All-Flash vs Hybrid Storage – A Buyer’s Guide to IT Managed Services

Data storage is a critical component of MSP infrastructure. The storage must deliver high performance, use capacity efficiently, and scale easily. It must have a modular design and the scalability to enable purchasing only what is needed, when it is needed. No single point of failure should exist, and failed components must be replaceable without interrupting services. AC&NC offers US-based support via email and phone, along with remote configuration and troubleshooting, for its JetStor all-flash and hybrid storage solutions, which are powered by QSAN.

A core focus of MSPs is ensuring business continuity by delivering on solid Service Level Agreements (SLAs); customers should focus on their businesses, not the IT infrastructure or the operating system. We identify five key factors in choosing the most effective MSP storage:

  • High Availability
  • Data Backup and Disaster Recovery
  • Easy Management
  • Performance
  • Capex Efficiency

High Availability

Technical progress keeps bringing storage appliances to new levels of reliability. Hardware failures still occur, but they should not disrupt your services and applications. A modular design with no single point of failure has become an industry standard, allowing degraded hardware components (controllers, PSUs, fans) to be swapped without service interruption. AC&NC offers RMA service, parts replacement, and spares here in the US.

The brilliance of QSAN storage lies in its unified hardware design across the all-flash and hybrid SAN series and JBODs, where all chassis components are interchangeable: trays, PSUs, fans, even controllers. This smart high-availability architecture makes stocking spare parts more efficient and further reduces the risk of downtime.

Data Backup and Disaster Recovery

More than 65% of SMBs have been impacted by cyberattacks, and clients’ data is the most important asset an MSP must protect. A reliable backup strategy is therefore critical for any MSP. Recovery Point Objective (RPO) and Recovery Time Objective (RTO) are two crucial factors when planning backup and disaster recovery. Today, MSPs must evaluate the potential disruption to business operations that a disaster could cause, determine RTO and RPO goals, and create a strong, appropriate disaster recovery plan.

QSAN storage has been certified with all major software providers: VMware, Citrix, MS Hyper-V, KVM, Veeam, Commvault, Acronis, NAKIVO, and others. Moreover, QSAN includes its own powerful built-in backup and DR software, capable of backing up data from a server or from another peer QSAN storage system.

824iXD for Veeam, Nakivo, & Commvault backup

Easy Management

Time is money, and storage is becoming a commodity, so deployment and maintenance should take minimal time and effort. Storage products with reduced IT complexity and simplified management are a priority for modern MSPs.

Another key component of a modern IT network is Remote Monitoring and Management (RMM), which is essential for most MSPs. A proper RMM platform offers a comprehensive set of tools to monitor every device connected to the network and quickly fix issues as they occur. QSAN storage provides standard interfaces such as a RESTful API and SNMP to integrate seamlessly with different RMM systems.
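To make that integration concrete, here is a minimal, hypothetical sketch of how an RMM-side script might poll an array’s health over a REST API and surface alerts. The endpoint path, response fields, and token are illustrative placeholders, not QSAN’s actual API; consult the vendor’s API documentation for the real schema.

```python
# Hypothetical sketch: poll a storage array's health endpoint and surface alerts
# to an RMM pipeline. The endpoint, fields, and auth below are placeholders.
import requests

ARRAY_URL = "https://array.example.local/api/v1/system/health"  # placeholder URL
API_TOKEN = "replace-with-a-real-token"                         # placeholder token

def check_array_health() -> list[str]:
    """Return a list of alert strings for any component not reporting 'ok'."""
    resp = requests.get(
        ARRAY_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    health = resp.json()  # assumed shape: {"components": {"controller_a": "ok", ...}}

    alerts = []
    for component, state in health.get("components", {}).items():
        if state != "ok":
            alerts.append(f"{component}: {state}")
    return alerts

if __name__ == "__main__":
    for alert in check_array_health():
        print("ALERT:", alert)  # an RMM agent would forward these as tickets/alerts
```

The same pattern applies to SNMP polling; only the transport and the object identifiers change.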

QSAN storage is extremely simple to manage, and the secret is an intuitive user interface developed jointly with the company’s key customers. QSAN has also recently released its first Centralized Management System, which manages all of its storage devices through one dashboard, greatly reducing IT management effort and letting teams focus on core business services.

Performance

Data storage for business continuity rests on sufficient performance. Running mission-critical applications without delays and delivering the speeds that IoT and AI demand requires powerful hardware and extensive software optimization. More and more MSPs are joining the trend of providing high-performance IT infrastructure. Key storage performance indicators include:

Throughput – The amount of data that can be transferred in a given period of time. Throughput is the best measurement for applications like media servers or streaming services.

IOPS – I/O operations per second, the number of storage operations that can be completed in one second. This metric is a great way to measure the performance of applications like databases.

Latency – The time between sending a command and receiving a response; it indicates how quickly an application can reply to a request. Time-sensitive services like online transaction processing systems require very low latency to provide real-time service.
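These three metrics are linked by simple arithmetic, which is worth keeping in mind when reading spec sheets. The sketch below uses round, illustrative numbers (assumptions, not measurements of any particular array) to show how IOPS and block size translate into throughput, and how Little’s Law links IOPS and latency to the queue depth a workload must sustain.

```python
# Illustrative back-of-the-envelope math only; the inputs are assumptions,
# not benchmark results for any specific array.

block_size_kib = 4        # assumed random-I/O block size (4K)
iops = 700_000            # assumed IOPS figure
latency_s = 0.0008        # assumed average latency per I/O (0.8 ms)

# Throughput = IOPS x block size
throughput_mib_s = iops * block_size_kib / 1024
print(f"Throughput: {throughput_mib_s:,.0f} MiB/s")    # ~2,734 MiB/s

# Little's Law: in-flight I/Os = IOPS x latency
queue_depth = iops * latency_s
print(f"Outstanding I/Os needed: {queue_depth:,.0f}")  # ~560
```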

Based on these performance indicators, MSPs can choose the right storage solution to meet their requirements. QSAN offers an impressive level of optimization; the performance its arrays demonstrate is rare among SMB storage. The hybrid and flash flagship controllers can achieve up to 700,000 IOPS at under 1 millisecond (4K random read), offering highly future-proof performance. The wide range of controllers allows an MSP to select the ideal storage:

Hybrid storage: QSAN storage provides the flexibility to meet different performance requirements in SAN and NAS environments. High-speed solid-state drives (SSDs) can be used for mission-critical applications, and low-cost, high-density hard drives (HDDs) for backup or archive.

Flash storage: An all-flash array (AFA) uses only high-speed SSDs for applications that require low latency and high IOPS. QSAN’s entry-level AFA is an ideal starting point for a small-to-medium-sized growing business; an MSP can adopt a higher-level AFA once the business matures and requires a more capable device.

“We are running JetStor 826iXD AFA 60TB, JetStor XF2026D AFA 50TB for Hyper-V MS cluster hosting, servicing 120 VM’s for over three years. Performance has been amazing vs others we have used in the past. We are also running JetStor 824iXD HDDs 300TB unit for Veeam backups. It offers us excellent price performance as a backup appliance. We are extremely happy with a service and support by AC&NC team.” ~Frank Hofsteden – Molnii.com

Capex Efficiency

An essential challenge for MSPs is the need to grow revenue while reducing costs. Effective spending on IT infrastructure is key to reducing overall capex and increasing business profitability.

More powerful storage allows more VMs or VDI desktops to run per enclosure, which positively affects spending on IT infrastructure. For backup and disaster recovery, however, it is more efficient to use lower-end storage.

The strength of QSAN storage solutions lies in product diversity and, again, unified hardware design. QSAN customers can deploy some of the industry’s fastest hybrid or all-flash SANs as primary storage, use entry-level SANs for backup with native or third-party replication, and use NAS systems for disaster recovery sites. All can be managed through one centralized management application, and the SANs share most hardware components (fans, PSUs, trays).

AC&NC offers 12-month no-interest payment plans, as well as 24- and 36-month plans. All SANs and NAS systems can be scaled up with JBODs. The SANs offer virtual RAID, which allows drives to be added directly to a RAID group without moving any data. AC&NC provides full configuration testing and warranty service with SSDs and HDDs.

XCubeFAS

QSAN offers the fastest entry-level all-flash array. With an active-active design and the ultra-high performance of XCubeFAS, MSPs can deliver uncompromising performance for clients’ mission-critical applications. AC&NC offers special configurations for MSPs, with three all-flash models available at 50TB, 100TB, and 200TB and the ability to grow to multi-petabyte levels.

XCubeSAN

XCubeSAN delivers comprehensive storage functionality, bringing enterprise-level features to SMBs and enterprises alike with high performance, high availability, and unparalleled scalability. Get a hybrid storage solution here.

“Stellar has been using Qty 10 JetStor’s AFA to power VMware VDI and DR Services for its customers for the past 2 years. The unbeatable performance for the price truly differentiates itself in the SAN Storage Market”  ~ Wayne Johnson, CEO, stellar.tech

XCubeNAS

XCubeNAS provides all the data services you need for everyday operation. With an enterprise-level file system and advanced business features, XCubeNAS delivers a reliable storage solution and a complete backup service for MSPs. Learn more about XCubeNAS.

MSP customers running JetStor 826 1PB All Flash

Some Predictions for 2020

The new year is a time for renewed hope and optimism. What can we expect in 2020?

NVMe Finally Arrives

Although anticipated for years, widespread adoption of NVMe is finally happening. NVMe unlocks the potential of SSDs, which have only recently become a staple in data centers thanks to their decreasing costs. While SSDs deliver superior performance over spinning disks, they’re hampered by the limits of SATA/SAS connectivity. SATA and SAS were designed for traditional hard drives; NVMe is tailored for the greater speeds of SSDs.

In 2020, vendors are offering storage arrays with NVMe-enabled tiers, and all-flash NVMe platforms are coming. Sales of NVMe systems are growing at a greater rate than sales of SATA/SAS, hybrid, or all-disk arrays.

We predict that NVMe will follow the same path as SSDs in that the protocol initially will be reserved for high-performance enterprise workloads before eventually becoming commonplace for general storage duties, even for smaller enterprises. Why invest in new solutions only to let legacy technologies impede their performance?

Life on the Edge

Predicting the proliferation of the IoT is hardly bold. It’s happening. Some forecasts have as many as 20 billion IoT devices in service this year. Torrential data streams will become diluvial. Most data are already generated outside the data center, from branch offices and from mobile and IoT devices. The problem is that constantly moving all these data into data centers or clouds to extract their value is impractical.

Enter edge computing. The concept means collecting, storing, and analyzing data locally at the network’s perimeter. Vast volumes of data don’t have to be moved to a central location for processing, which reduces bandwidth consumption and delivers business intelligence much faster. Computing and storage in the data center have always been challenges; pushing them to the edge presents even more. It means pushing workloads into the wild.

We forecast a new generation of solutions from storage and server vendors to address the particular demands of edge computing. Their technologies must enable analytics and even AI on the edge, coupled with fast, robust storage. Their systems must forward only the business intelligence gleaned from all the data and deploy data lifecycle management. Feeds from video surveillance, for example, need to be stored only so long. Moreover, everything must be secure. No one, let alone an IT professional, will likely be onsite to safeguard against hacks and theft.

Yet the upside for effective new solutions, if not new paradigms, is boundless.

SDS & HCI Are Here to Stay

Software-defined storage (SDS) and hyper-converged infrastructure (HCI) are maturing technologies that address many pain points in enterprise infrastructures. SDS uncouples storage resources from their underlying hardware, consolidating data silos for greater flexibility, efficiencies, agility, and control.

With SDS, the location of storage hardware becomes irrelevant, which enables SDS solutions to address the complexities and headaches of managing hybrid clouds where storage pools are nowhere near each other, yet are intrinsically linked by business needs.

HCI offers its efficiencies as well by merging compute, storage, and networking into single appliances. These devices are easier to manage than storage arrays, making them ideal for remote sites.

What we’re eager to see is how vendors leverage SDS and HCI for edge computing needs. Will we see HCI nodes distributed across the landscape that can be controlled from the data center? Doing everything in one device is appealing, especially if all the devices can be centrally managed. Perhaps SDS and HCI will help make edge computing an effective solution for gaining business intelligence.

Storage Security is More Critical Than Ever

When isn’t data security critical? After all, the goal of ne’er-do-wells has always been to either steal data or disrupt its availability. What’s changing for 2020 is the world is even more tumultuous, hackers are more sophisticated and sometimes in the service of nation-states, and more and more data are being generated outside the relative safety of the data center. The corollaries to these trends are more regulatory safeguards, such as the California Consumer Privacy Act (CCPA), and increasing corporate liabilities. The pressures are greater than ever to protect the privacy and integrity of data.

We’ve seen advances in cloud security and data backup and recovery, but more and more data is flowing from outside enterprise and cloud data centers and this trend promises to increase, as will the risks. Branch offices rarely have onsite IT personnel, and the IoT sensors and probes scattered across the hinterlands must fend for themselves.

Our prognostication is the marketplace will respond with innovation. Solutions will emerge, many based on SDS and HCI technologies, that will ensure the integrity of data from creation to business intelligence, regardless of how remote the sources may be. What we don’t predict, at least for 2020, is the passage of a cohesive U.S. federal privacy law to preempt a hodgepodge of state laws. Drafting an effective bill has its complications, but despite the need, we don’t anticipate one soon.

The Voracious Appetites of Machine Learning & Artificial Intelligence

Like the Internet of Things, machine learning (ML) and its more august sibling, artificial intelligence (AI), are upon us. The former is impacting business and IT operations, and the latter will impact nearly all of society. When most of us think of them, we envision incomprehensible algorithms and the brawny CPUs and GPUs all but running on nitro that bring them to life. Who thinks of quotidian storage?

The fact is the very foundation of AI and ML is data, lots of it, and the data must be stored somewhere. The data coming out of AI and ML are only as good as the data going into them. And the more data, the better. The larger the datasets, the more accurate will be the pattern recognition, correlations, analyses, and decision-making. The more data, the smarter our machines will be. This is true for any use case or workload, from sequencing genomes, improving agricultural yields, and scientific research to fraud detection, customer support, and self-driving automobiles.

Additionally, AI and ML generate data. Once AI/ML applications process their source data, the results will need to be safely stored and reused for further analyses.

Feeding the gluttony of AI and ML for data presents challenges. Data will come from many sources, such as business operations within the enterprise and IoT and social media from outside the enterprise. Data repositories must be extremely scalable, while still being cost-efficient, which often means hybrid infrastructures combining on-premise and cloud storage. Object storage will be a common solution for its ability to present vast troves of data in a single namespace.

Additionally, high-octane GPUs will be wasted if the storage is a bottleneck. For this reason, AI and ML are best served by flash drives, particularly for real-time use cases like assessing financial transactions.

Finally, AI and ML will improve storage itself. Vendors have already started to include logic in their offerings to better understand and manage enterprise environments. Armed with AI and ML, administrators will determine usage and detect patterns, and make more informed decisions about I/O patterns and data lifecycles. They’ll more accurately project future capacity needs and even perhaps predict failures, permitting proactive measures to safeguard operations.

The bottom line: if you’re planning on ML or AI applications in your enterprise, strongly consider the storage that will enable them. In storage, as in life itself, the one thing that never changes is that things are always changing.

Surviving the IoT Flood

Last month’s blog addressed edge computing and how it supports the Internet of Things.  Now, let’s look a bit further into the storage demands of IoT. IoT storage can’t possibly be covered in a blog, but here are some thoughts.

We’ve all been introduced to an IoT-embellished future. Smart homes, buildings, cities, and cars. Industrial, transportation, environmental, and scientific sensors. Sensors that tell us what’s in the soil, what’s in the air, and what’s in the water. Sensors in our clothes and, eventually, even ourselves. A 2017 white paper by IDC forecast that by 2025, IoT devices worldwide will generate some 40 zettabytes of real-time data (www.seagate.com/files/www-content/our-story/trends/files/Seagate-WP-DataAge2025-March-2017.pdf). For those who are counting, that’s 40 billion terabytes. The data onslaught will only intensify.

These data will need to be moved, stored, and shared. And, of course, value must be extracted from them. Otherwise, why waste the time and money to collect them in the first place?

Where will zettabytes of data reside? Where should your IoT data reside? It all depends on the data and how they’re used. Complicating things, there’s a vast diversity of IoT data, ranging from tiny file logs to huge video surveillance files.

Some data need to be processed immediately for safety or well-being. Think critical avionics or data exchanged between smart cars approaching an intersection. Similarly, healthcare providers must know immediately when a device detects that an at-home patient is suffering a medical crisis.

Some data need to be processed soon, such as readings from sensors in an industrial device that can indicate an impending failure. Some data can be processed later. An oil exploration firm’s geological data, for example, need to be analyzed carefully over time. Or a vending machine can ping the datacenter only when a purchase is made or inventory needs to be restocked. Evaluating data for insights into consumer preferences and behaviors can be done periodically at the datacenter.

Data that is analyzed in real time may not need to be stored, other than in a temporary cache. Other data need to be retained for set periods of time, such as the aforementioned video surveillance files. Some data can be discarded right after analytics. When the value of the data is to reveal outliers, for example, data points that indicate otherwise can be discarded after analysis. Do you really need to store the sensor reading in your refrigerator informing you that you need milk?

The nature and purpose of the data will determine where they are stored and for how long. It would make sense to store video surveillance files in object storage in a cloud, but large data streams can congest pipelines to cloud repositories. This is also true of data streamed across substantial geographical distances to the enterprise datacenter. Consequently, with large and ongoing data streams, it can make sense, as discussed in last month’s blog, to adopt an edge computing strategy and store these data in mini-datacenters located in general proximity to the data’s sources. Processing can then be done locally.

For critical data that demand real-time analytics, edge computing is a practical option. What would be forwarded to the enterprise datacenter are just the results of the analytics. Of course, you still need to determine if these data must be retained after processing.

For critical data collection and transactions, fast solid-state storage is a must. How much latency is tolerable between two cars communicating with each other as they approach an intersection?

The point is you may have to address many kinds of data with many kinds of processing and storage needs. Presently, your storage options basically are clouds, edge computing, or your datacenter, where the value and insights from IoT data will probably be ultimately realized. But does your datacenter have the resources to house a steady flow of IoT data? Is your WAN up to the task without impeding business operations? Resorting to clouds is cost-effective for some IoT data, but moving data from many disparate sources to clouds still takes time.

Once you figure out your IoT management and storage needs, you must address security requirements. How important is each IoT data stream and what is a commensurate level of security?

Finally, you can start pondering which IoT data to back up, and how. Once you figure all of this out, go home and spend some time with your family.

As you do so, carry this thought. The IoT certainly presents a host of challenges, but it is also a revolution in the making that offers unprecedented opportunities for prosperity and well-being.

Edge Computing—Bringing Value to IoT

There are already many billions of Internet of Things (IoT) devices in the business, consumer, civic, science, and industrial sectors. They’re scattered across the planet in assembly lines, vehicles, hospitals, cities, homes, environments, and even our clothing. They continuously generate vast amounts of data and their numbers will proliferate almost exponentially.

Edge computing exists to give meaning to that data. It is the IT infrastructure that connects datacenters and corporate offices to the real world that IoT devices sense. To understand edge computing is to understand the issues it addresses to make IoT useful.

JetStor 826AFA – Storage at the Edge

One is what to do with all this remotely sensed data. There’s too much to stream to clouds or enterprise datacenters; the pipelines are too small, if available at all, for the countless devices.

Moreover, is every data point of value? Often, device owners need to know only of outliers, such as when a machine is being overtaxed or when a patient suffers a medical event.

And does all the data need to be saved? Video surveillance, for example, must be saved for specified periods of time, or compliance demands might require some data to be preserved. But is this true for everything? If so, storage costs would skyrocket.

Edge computing offers decentralized processing at the edge, in proximity to the sensors and devices, to power analytics. Analytics converts raw data into real-time actionable intelligence, extracting business, scientific, industrial, and personal value from the streams. The analytics generally occur at small, local data centers linked to the IoT devices. At these edge-computing sites, unnecessary data are weeded out and only the processed data of real value are sent to clouds or corporate data centers. Edge computing helps ensure insights are delivered rapidly, and it lowers storage and networking costs.
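As a rough illustration of that weeding-out step, the sketch below keeps raw readings on local edge storage and forwards only a compact summary plus any readings outside an expected operating range. The bounds, data shape, and forwarding decision are assumptions for illustration, not part of any particular product.

```python
# Illustrative edge-filtering sketch: retain raw readings locally, forward only
# a summary and the out-of-range readings. Bounds are illustrative assumptions.
import statistics
from typing import Dict, Iterable, List, Tuple

def summarize_and_filter(
    readings: Iterable[float], low: float = 0.0, high: float = 30.0
) -> Tuple[Dict[str, float], List[float]]:
    values = list(readings)
    summary = {
        "count": len(values),
        "mean": statistics.fmean(values),
        "min": min(values),
        "max": max(values),
    }
    # Anything outside the expected operating range is worth forwarding upstream.
    outliers = [v for v in values if not (low <= v <= high)]
    return summary, outliers

# Only the summary and outliers travel upstream; the raw stream stays on
# local edge storage.
summary, outliers = summarize_and_filter([20.1, 20.3, 19.9, 35.7, 20.2])
print(summary, outliers)  # the 35.7 reading would be flagged and forwarded
```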

Of course, storage is key to the process. The local data centers, often consisting of a few racks, require enough storage to store data streams and support analytics. For these deployments, far-sighted organizations will rely on all-flash storage solutions. Flash storage lowers energy consumption and offers the reliability needed for sites that often lack on-premise staff. Moreover, flash delivers the performance needed for real-time analytics.

In summary, edge computing helps to make sense of IoT data to improve efficiencies, productivity, safety, health, and knowledge.

Virtual Desktop Infrastructures Need Very Fast Storage

Virtual desktop infrastructure (VDI), also known as desktop-as-a-service (DaaS), has been widely adopted. VDI simplifies IT management and backups, facilitates security, and reduces hardware and operating costs. VMware, Citrix, Amazon, Parallels, and others offer VDI solutions, each with its own implementation and features. Some run in the data center, some in the cloud, and some are hybrids. Some target small businesses, some large enterprises, and others organizations in between.

All, however, depend on robust storage to work well.

To reduce costs with VDI deployments, organizations generally place as many virtual machines as possible onto the fewest number of physical servers. They then connect the servers to a shared storage system. This creates I/O issues, which undermines the predictable performance that users came to expect when their OSs, applications, and files resided on their workstations or laptops.

Over the course of the day, hundreds or thousands of users accessing applications and data, or storing and searching for files, means that shared storage must have substantial IOPS capability to keep pace. The biggest pressure comes from boot storms: a tsunami of I/O when the bulk of users arrive in the morning and log in at once, expecting their desktops to be instantly available.
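A quick sizing sketch shows why boot storms dominate the requirement. The per-desktop figures below are commonly cited rules of thumb, used here purely as assumptions; measure your own images and workloads before committing to hardware.

```python
# Rough VDI sizing arithmetic. Per-desktop IOPS figures are rule-of-thumb
# assumptions for illustration; real values depend on the OS image and apps.

desktops = 1000
steady_state_iops_per_desktop = 10   # assumed light office workload
boot_iops_per_desktop = 100          # booting is roughly an order of magnitude heavier

steady_state_iops = desktops * steady_state_iops_per_desktop
boot_storm_iops = desktops * boot_iops_per_desktop

print(f"Steady state: {steady_state_iops:,} IOPS")  # 10,000 IOPS
print(f"Boot storm:   {boot_storm_iops:,} IOPS")    # 100,000 IOPS
```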

Not long ago, the only storage solution was spinning disks.  Yet, even with tricks like short-stroking spindles, traditional arrays were hard-pressed to keep up with VDI demands. Especially boot storms. Hard drives may be relatively inexpensive, but they’re not the smart investment for VDI storage platforms. All-flash arrays are. They deliver far superior I/O capabilities and they keep getting bigger and less expensive. They also consume less power and run far cooler than traditional platforms, slashing energy costs.

For a while, hyper-converged infrastructures (HCIs) were used with VDI, but the shortcoming of HCI soon became apparent for this use case. HCI doesn’t scale well for specific workloads like VDI. Although vendors are trying to remedy this, you must pay for an entire HCI block even if you only need more storage or CPU power.

An all-flash array enables you to efficiently balance storage with your VDI workload. You’ll gain easy, cost-effective scalability, power consumption so low you’ll be hard-pressed to quantify it in context of your data center, and the hyperspeed performance that dissipates boot storms and keeps users productive.

DropBox Turns to On-Premise Storage Rather than the Cloud, Saves $Millions

A story published on GeekWire should make many organizations rethink their storage strategies. It recounts how DropBox bucked industry trends by moving its popular file-storage service away from the cloud—AWS’s S3 storage service—to its own infrastructure (www.geekwire.com/2018/dropbox-saved-almost-75-million-two-years-building-tech-infrastructure/). By investing in its own data centers rather than spending on third-party infrastructure, DropBox saved $39.5 million in 2016 and $35.1 million in 2017. What DropBox discovered through its “Infrastructure Optimization” project is on-premise storage designed for an enterprise’s specific needs can be much more efficient than relatively generic cloud offerings.

There are persistent arguments for on-premise storage. You don’t have the security worries when your data leave the confines of your firewalls—and your control—to traverse the Internet to somebody else’s network. You lack concerns about sketchy neighbors on multi-tenant clouds, which is what most commercial clouds are. Additionally, moving data across your on-premise resources is simpler and faster than moving data between clouds. Your compliance, governance, and peace-of-mind needs can be best met when you maintain local ownership of your data.

Data are far more rapidly accessible when stored locally rather than somewhere else in the country or, worse, somewhere else on the planet. On-premise data benefits such established needs as time-sensitive transactional processing, backups, and data recovery, and there are emerging applications for which speed and safety will be paramount.

For example, analytics now go well beyond Hadoop-style, big-data projects as even small organizations increasingly use analytics to extract more value from their data. Analytics offers the knowledge to improve everything from IT and operational efficiencies to marketing and customer relations. But to avoid latency, especially when real-time or near real-time analysis is required, compute and storage must be close together, not separated by the Internet. The argument for local storage becomes even stronger with the adoption of technologies like NVMe and NVMe over Fabrics (NVMe-oF), which will greatly speed data movement across local networks and expedite data analytics.

Examples like DropBox show that public clouds are not always the most cost-effective solution for data storage. The best bet is keeping data on premises in a private or hybrid cloud, replicating it to a second site for backup and recovery, and archiving everything on a slow but inexpensive service such as a private cloud provider or Amazon Glacier. You’ll gain control, performance, and security. And you can economize on your IT expenses.

Hyper-convergence vs Convergence

Once upon a time, enterprises bought the components needed to deliver IT services, cobbled them together, and with a little sweat and aggravation, got them to work. Demands for more robust services prompted companies to turn to best-of-breed solutions, but this resulted in a mélange of systems that presented management and interoperability issues. These problems were exacerbated by virtualization technologies that must span devices.


In response, converged solutions arrived on the market. These are turnkey systems that include everything IT requires: servers, networking, storage, hypervisors, and management capabilities. The components come from various vendors, but they are all pre-tested to ensure interoperability and are supported by a single vendor. Converged solutions are quick to deploy and easier for IT staffs to maintain, although larger enterprises with separate server, storage, and networking teams may require organizational restructuring. Regardless, fully converged, single-vendor solutions allow IT organizations to do more at less cost.

But a new approach—hyper-converged infrastructure (HCI)—further integrates components to better support virtualized services, particularly software-defined storage (SDS). HCI is appliance-based and software driven, and like converged solutions is supported by a single vendor. The appliances are commodity hardware boxes each integrating compute, storage, networking, and virtualization technologies. Unlike converged systems, the storage, compute, and networking of HCI are so tightly fused together, they cannot be broken down into separate components. Each appliance is a node in the system and all services are centrally controlled. Storage is decoupled from the hardware so the storage across all the nodes appears as one virtualized pool. Scaling means simply adding additional appliances.

HCI isn’t always a slam dunk. When IT department needs to scale just one resource, like computing or storage, it will have to pay for boxes that contain all the resources. Absolutely mission-critical applications might perform better on dedicated hardware, isolated from other apps that could consume essential bandwidth. HCI also might not make sense for ROBOs.

Yet HCI offers many benefits over converged infrastructures, such as superior scalability, flexibility, control, and ease of use. IT can deploy the most advanced SDS functionality and automation and achieve remarkable efficiencies. HCI reduces latency, better exploits the performance of solid-state drives, and leverages software-defined infrastructure. It might be your best choice…until something better comes along.

Cloud Storage Hosting

Storage has always been a primary reason why companies turn to cloud computing. Clouds are ideal for backing up data and storing archival data. This is underscored by the advent of object storage, which makes vast data stores practical. Over time, providers offered additional use cases such as Software as a Service (SaaS), which delivers applications from the cloud, and Infrastructure as a Service (IaaS), which augments or replaces the whole data center.

Solutions must be cost-effective for companies and yet profitable for providers. The keys are economies of scale and efficiencies. Virtualization is the sauce that makes clouds work. Providers use virtualization to efficiently pool storage resources and enable multi-tenancy, thus leveraging hardware and lowering the costs of storage and computing. They can deliver scalability and services like cloud bursting, increasing the value of their offerings to customers. Now it’s up to providers to deliver performance, security, and compliance to ensure clouds make more business sense than do-it-yourself data centers.