Affordable Flash Storage Has Arrived

The evolution of flash drives is inevitable.  Like other disruptive technologies, it offers substantial advantages over legacy solutions—in this case, spinning disks—but its initial costs limited its adoption to such enterprise applications as high-speed transactional processing.  Over time, factors like economies of scale kicked in and the cost of flash decreased.  Who still uses a laptop with a mechanical drive?  As a result, the market share for flash drives increased considerably last year as major storage vendors reportedly sold more all-flash arrays than hybrid or spinning-disk solutions.


Flash technology is also improving. NAND flash began with single-level cells (SLC) that store one bit of information. Vendors then developed multi-level cells (MLC), in which each cell stores two bits. Now we’re seeing triple-level cells (TLC) that store three bits per cell, and vendors are stacking cell layers vertically in 3D NAND flash, which will further improve density and thus capacity.

The market has already seen the introduction of 2.5-inch SAS flash drives with 15TB of capacity, and Seagate is rolling out 3.5-inch SAS drives with a capacity of 60TB. We can expect further advances. What this all means is that the economics of flash storage are becoming practical for tier 1 workloads, and it’s only a matter of time before they make sense for tier 2 storage, obviating hybrid arrays. In fact, would anyone bet against flash drives someday supporting even archival storage?

Make no mistake: most data still reside on spinning disks, but the trends are clear and inexorable. Flash storage is becoming denser, higher in capacity, and more cost-effective. Within the foreseeable future, solid-state drives will become less expensive than their spinning-disk counterparts. It’s inevitable that spinning disks will go the way of VCRs and cathode ray tubes.

We’re not going to see all-flash data centers immediately, but that day may be closer than many pundits think.  In terms of I/O performance and energy consumption, everyone will be a winner when that day does arrive.  Meanwhile, all-flash arrays are increasingly within the reach of small and mid-size organizations.  It’s only a matter of time before you migrate your primary storage to all-flash platforms.  Your organization will be more effective and cost-efficient when you do.

What are you waiting for?

Smart Choices for Building, or Building Out, Network-Attached Storage


Network-attached storage (NAS) remains a sound strategy for delivering applications and shared storage, especially for enterprises that seek to avoid the costs and complexities of storage-area networks (SANs). AC&NC now offers some compelling NAS choices for your consideration. Whether you want to implement NAS for a workgroup or scale out a NAS infrastructure for your entire company, AC&NC offers a roster of very competitive, high-availability JetStor® RAID solutions.

The JetStor NAS 400S V2 Storage Appliance delivers shared storage to both clients and servers in Windows, UNIX, Apple Macintosh, VMware or mixed environments. The device supports four 3.5″ SATA drives or 2.5″ SSD drives for up to 32 terabytes of storage in a compact 1U chassis.

The 1U JetStor NAS/iSCSI 405SD V2 Storage Appliance is a cost-effective way to increase storage on the network. Supporting advanced storage management and application features like VAAI and thin provisioning, it offers five 3.5″ SATA drives or 2.5″ SSD drives for up to 40 terabytes of capacity and can be easily deployed in any IT environment.

The JetStor NAS 800S V2 Unified storage system also supports advanced management and application capabilities and can accommodate eight 3.5″ SATA, 3.5″ SAS, and 2.5″ SSD drives for up to 64 terabytes of storage in a 2U enclosure.

For high-capacity NAS applications, the JetStor NAS 1200S 12G Unified storage system offers twelve 3.5″ SAS or SATA drives, also in a 2U chassis. For robust data protection, the solution, like those above, supports such features as hot spares and automatic hot rebuilds.

The function-rich JetStor NAS 1600S 12G Unified storage system provides 16 bays that support 3.5″ SAS and SATA drives and 2.5″ SAS and SSD drives for extraordinary flexibility.

All AC&NC NAS arrays deliver high availability, support for functionality like the ZFS file system, and exceptional data protection. The JetStor 400S, JetStor 405SD, and JetStor 800S systems support RAID 0, 1, 5, 6, and 10, and the JetStor NAS 1200S and JetStor NAS 1600S solutions also support RAID 50 and 60.

With their high performance, reliability, and versatility, the JetStor systems are ideal as media, database, and mail servers. They are also effective for such applications as digital imaging and video recording, CCTV, forensics, email archiving, file server data archiving, disk-to-disk backup, and virtualization. Full, incremental, and differential disk backups are simple and convenient.

Among many other features, the JetStor arrays include RAID Manager, browser-based management and monitoring software that enables you to easily and remotely configure and manage the JetStor solutions. You can even have them email you event notifications.

With JetStor, NAS is not just alive and well—it’s better than ever.

OpenStack & Ceph—Storage that is Function-rich, Flexible, and Free

OpenStack is a viable solution for many enterprise data centers. It is a free, open-source software platform that enables organizations to construct and manage public and private clouds. Its components manage processing, networking, and storage on hardware resources across the data center. Many of the biggest IT firms, as well as thousands of OpenStack community members, support the software. Among its many users are Intel, Comcast, eBay, PayPal, and NASA, which helped to develop the platform. As such, many view OpenStack as the future of cloud computing.

OpenStack offers flexible storage options, particularly when coupled with Ceph, which it supports. Ceph is a free software storage platform that stores data on a distributed storage cluster. It’s highly scalable—beyond petabytes to exabytes—avoids a single point of failure, and is self-healing. What’s remarkable about Ceph is what lies at its core. Its Reliable Autonomic Distributed Object Store (RADOS) allows you to provide object, block, or file-system storage on a single cluster. This greatly simplifies management when you run applications that require different types of storage. Moreover, thanks to its functionality and resilience, you can use commodity hardware to build out your storage cluster for further cost savings.
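
To make the object-store layer concrete, here is a minimal sketch of writing and reading a RADOS object through Ceph’s python-rados bindings. It assumes a reachable cluster configured in /etc/ceph/ceph.conf and an existing pool named demo-pool; both are illustrative assumptions, not part of any product described here.

```python
import rados  # python-rados bindings to librados

# Connect to the cluster described in ceph.conf (path and pool name are assumptions).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("demo-pool")   # I/O context bound to one pool
    try:
        # Write an object, then read it back by name -- RADOS handles placement
        # and replication across the cluster's nodes.
        ioctx.write_full("greeting", b"hello from RADOS")
        print(ioctx.read("greeting"))

        # Attach a small extended attribute to the object.
        ioctx.set_xattr("greeting", "owner", b"storage-team")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

Block (RBD) and file (CephFS) access are built on this same object layer, which is why a single cluster can serve all three.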

Excellent examples of such solutions are the JetStor® RDX48 Storage Server (www.acnc.com/p/JetStor_RDX48) and JBODs (www.acnc.com/c/JBOD) from AC&NC. The JetStor RDX48 is a 4U 48-bay device that supports both spinning disks and SSDs. Its SSD caching accelerates storage and retrieval, and its caching controller automatically identifies and relocates frequently accessed data to the SSDs for the quickest access and transfer speeds possible. Supporting RAID 5, RAID 6, proprietary RAID 7.3, and RAID N+M, the array delivers enough throughput for HPC applications and never drops a frame, even with concurrent HD or 2K–8K digital video streams.

AC&NC offers similarly robust JBODs that range in size up to 4U 80-bay systems, ensuring every storage need can be met economically and easily. The reliability, performance, and scalability of these AC&NC solutions make them ideal complements to OpenStack/Ceph deployments.

What is Erasure Coding and When Should it Be Used?

RAID has long been a mainstay of data protection. RAID protects against data loss from bad blocks or bad disks by either mirroring data on multiple disks in a storage array or adding parity blocks to the data, which allows for the recovery of failed blocks. Now another approach, erasure coding, is gaining traction.

Erasure coding uses sophisticated mathematics to break the data into multiple fragments, expand them with redundant coded fragments, and then place the fragments in different locations. Locations can be disks within an array or, more likely, distributed nodes. If the fragments are spread over 16 nodes, for example, any ten can recover all the data. In other words, the data is protected even if six nodes fail. This also means that if one or more nodes fail, all the remaining nodes participate in replacing them, which makes erasure coding far less CPU-constrained than rebuilding RAID within a single array.
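
A quick sketch, using the 16-node example above (10 data fragments plus 6 coded fragments), of how erasure coding trades storage overhead against fault tolerance compared with simple mirroring. The figures are illustrative arithmetic only, not measurements of any particular product.

```python
# Storage overhead and fault tolerance for the 10+6 erasure-coding example above.

def storage_overhead(k: int, m: int) -> float:
    """Raw capacity consumed per unit of usable data when k data fragments
    are expanded into k + m total fragments."""
    return (k + m) / k

def tolerated_failures(m: int) -> int:
    """Any k of the k + m fragments can rebuild the data, so up to m can be lost."""
    return m

k, m = 10, 6  # 16 fragments in total; any ten recover all the data
print(f"Erasure coding {k}+{m}: {storage_overhead(k, m):.2f}x raw storage, "
      f"tolerates {tolerated_failures(m)} lost fragments")

# Simple mirroring (two full copies), as mentioned later in the article:
copies = 2
print(f"Mirroring x{copies}: {copies:.2f}x raw storage, "
      f"tolerates {copies - 1} lost copy")
```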

Erasure coding is finding many applications. It is often used, for example, for object storage, and vendors of file- and block-level storage are beginning to leverage the technology. Erasure coding has proven useful for scaled-out storage, protecting petabytes of lukewarm, backup, or archival data stored across multiple locations in the cloud. Of course, mirroring remains an option for these applications; it requires double the storage capacity, but it does eliminate rebuilds.

RAID is still a strong strategy for safeguarding active, primary data. Data remains safe within the data center and rebuilds won’t tax available WAN bandwidth. To determine whether RAID or erasure coding is best for you, assess the impact each would have on your data protection needs.

The Scale-Out Virtues of Object Storage

File- and block-based storage are well established, but a third alternative, object storage, is hardly new technology. Today, there are countless petabytes of object storage in public clouds, and hundreds of millions of Facebook, Google, and Twitter users routinely rely on the technology. So what is object storage? It’s a method for saving unstructured data in discrete units called objects. Objects are stored in containers in a flat structure rather than the more familiar nested tree structures of file and block storage. Each object is identified by a unique ID, which enables the data to be found without knowing its physical location.


Whereas most file systems are limited by the number of files, directories, and hierarchical levels they support, object storage can scale almost infinitely. Metadata is kept directly with the object, eliminating the need for lookups in relational databases and further enabling immense scalability. An added benefit is you can easily apply security or retention policies to the metadata.

When should object storage be deployed? The technology is not ideal for transactional or frequently changing data because of performance issues. Block-based solutions provide greater read/write speeds for disk-intensive transactional data. Object storage shines, however, for relatively static data, backups, and cold archival data. Object storage is cost-effective for these applications because it works on commodity hardware.

In a world generating torrents of data every day, the scalability and economy of object storage are attractive, particularly in public and private clouds. What makes this fit even better is that object storage pools are accessed via HTTP or an API, rather than, say, Fibre Channel for block-based storage or NFS for file-based storage. Sites like Google and Facebook store vast troves of images and videos in object storage.
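
As a concrete illustration of that HTTP/API access model, here is a minimal sketch using boto3 against an S3-compatible object store. The endpoint URL, bucket name, object key, and credentials are placeholders for illustration, and the metadata simply shows how attributes travel with the object itself.

```python
import boto3

# Any S3-compatible object store exposes the same HTTP API; the endpoint, bucket,
# and credentials below are placeholders, not real services.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Store an object under a unique key; user-defined metadata is kept with the object.
with open("frame-000123.jpg", "rb") as f:
    s3.put_object(
        Bucket="media-archive",
        Key="video/cam03/frame-000123.jpg",
        Body=f,
        Metadata={"retention": "7y", "project": "demo"},
    )

# Retrieve it later by key alone -- no knowledge of its physical location is needed.
obj = s3.get_object(Bucket="media-archive", Key="video/cam03/frame-000123.jpg")
data = obj["Body"].read()
print(len(data), obj["Metadata"])
```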

A key advantage of object storage is how easily it scales: object storage systems scale out horizontally simply by adding nodes. Moreover, because object storage is location agnostic, nodes within a storage pool can be geographically distributed. You can access objects that physically reside on a node anywhere without going through a central controller, delivering a truly global infrastructure in which data can be accessed via the WAN or the Internet. NAS systems also scale out horizontally, but their expansion is restricted by their hierarchical file structures, which grow in complexity as they increase in size. Also, NAS systems work best on local networks in the data center. Again, the flat structure of object storage offers almost astronomical scalability for storing unstructured data.

If you need to store petabytes of files that are accessed only occasionally, object-based storage might be your best bet. For large, expanding data troves, such as those found in healthcare, media/entertainment, and other industries, the technology can provide the economic advantages of tape but with less effort and superior retrieval speeds.

Peering into the Future: Storage in 2016

It’s the start of a new year and time for pundits to predict what we can expect in the storage market going forward. Let’s touch upon some of the more prominent trends.


SSDs versus HDDs

Foremost, of course, is the inexorable adoption of SSDs by both the consumer and enterprise markets. The drivers are well reported: superior I/O performance, decreasing cost per gigabyte, and lower energy consumption, all of which help offset the capital costs of flash-based drives.

Additionally, the capacity of SSDs continues to grow. 2TB SSDs are increasingly popular and 15TB SSDs have just been announced. This year, expect 512GB USB drives that fit in your pocket. There’s no reason why SSDs can’t eventually become more capacious than HDDs. As flash technologies improve, it’s only a matter of time before SSD prices drop faster than HDD prices.

In the more immediate future, all-flash arrays will continue to proliferate, replacing HDD arrays particularly for mission-critical applications. As their per-gigabyte costs come down, they will increasingly be used for other applications. SSDs will eventually replace HDDs as certainly as CDs and DVDs replaced audio and video cassettes.

Does this mean HDDs are extinct? Hardly. As long as they’re the cheapest way to store large data troves, they’ll continue to be around. For this reason, HDDs will still outsell SSDs in 2016. Moreover, they’ll continue to improve. We should see 10TB HDDs in the standard 3.5-inch form factor this year and it is predicted that in the next few years, their capacity will grow to 20TB, if not larger.

Additionally, innovations like Shingled Magnetic Recording (SMR) will increase the capacity of HDDs, although SMR will be used only for particular applications because of its limits on random writes. SMR will make spinning disks more cost-effective for archiving and cold data storage. Considering that LTO drives are slow and no longer offer significantly larger storage capacities, purpose-built HDDs will further undermine tape for archiving. It is reasonable to project, however, that SSDs will eventually replace even high-capacity HDDs for such applications.

Bigger Ethernet pipes

This long-anticipated advance is finally arriving. 25Gb Ethernet is hitting the market, and it is backward compatible with 10Gb Ethernet. The cost of 25Gb Ethernet will soon be competitive with 10Gb, making it the de facto choice for new deployments. Can 50Gb and 100Gb Ethernet be far away?

Software-Defined Storage (SDS)

After countless articles and discussions, and many proofs of concept, SDS should have its breakout year in 2016. Decoupling storage functionality from physical devices offers too many advantages not to see widespread deployment in production environments.

Infrastructure convergence

More and more, storage is viewed not as a standalone component of the IT infrastructure but as tightly coupled with compute, networking, and applications. For this reason, a trend that is arguably on its way to becoming the status quo is infrastructure convergence. Aggregating storage, compute, and network components from a single vendor and its partners simplifies ownership and management. We’ll start to see convergence not only in environments like remote and branch offices (ROBOs), but also in data centers.

Cloud storage

The cost of cloud storage continues to fall and is now very competitive for both businesses and consumers. The limits of Internet bandwidth are mitigated by the availability of continuous data protection (CDP) solutions, which makes the cloud very appealing for secondary or tertiary backups. Assuming security is up to snuff, expect to see more enterprises storing data in clouds as well as at primary and secondary sites.

ARM servers

ARM processors are already used in consumer electronic devices because they are small, relatively simple, and power-efficient. We’ll also see them in servers this year, as those same characteristics make them quite useful in data centers.

Non-Volatile Memory (NVM)

NVM is another technology that will eventually make the rounds throughout the data center. Non-volatile memory retains data when a device is turned off or loses power. Though not as quick as RAM, emerging NVM technologies are faster than NAND flash. Their current costs will impede adoption, but expect to see hybrid systems in which NVM provides a new tier of ultra-high-speed storage. In these solutions, NVM will deliver the blazing performance, with NAND flash taking the role once held by spinning disks.

Check with us next year to see how well our crystal ball worked.

Storing 4K Digital Content

The evolution of yesteryear’s grainy, black-and-white televisions to today’s dazzling high-definition monitors confirms that we prefer our content displayed in higher resolution. This is why filmmakers and videographers are starting to shoot in 4K, which offers four times the pixels of 1080p.

Granted, consumers currently have little 4K content to play, but 4K-capable monitors are available, affordable, and selling today. Additionally, as Internet bandwidth becomes more plentiful, content distributors will start offering shows and films in 4K resolution. Consequently, it is only a matter of time before 4K content is consumed at home, and productions that shoot at this resolution are simply anticipating the future.

But storage becomes a key consideration. The original footage must be stored while editors work on compressed copies, which also must reside somewhere. Storage also has to deliver multiple streams of content to workstations. When working with 4K, however, four times the pixels means four times the information of 1080p. Videomaker magazine worked out the numbers (www.videomaker.com/article/f6/17189-5-reasons-you-might-want-to-wait-on-4k-ultra-hd-video-production-for-now). An hour of standard-definition digital video needs about 12.7GB of storage, or some 217MB per minute. An hour of 4K content requires almost 110GB of storage, or about 2GB per minute.
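
To put those figures to work, here is a rough back-of-the-envelope sketch based on the Videomaker numbers quoted above. The project length and stream count are illustrative assumptions, and small differences from the quoted per-minute figures are just rounding.

```python
# Back-of-the-envelope storage and throughput math from the per-hour figures above.
SD_GB_PER_HOUR = 12.7    # standard-definition video, per Videomaker
K4_GB_PER_HOUR = 110.0   # 4K video, per Videomaker

def mb_per_minute(gb_per_hour: float) -> float:
    return gb_per_hour * 1000 / 60          # GB/hour -> MB/minute

def project_gb(gb_per_hour: float, hours: float) -> float:
    return gb_per_hour * hours              # total capacity for a shoot

def stream_gbps(gb_per_hour: float, streams: int) -> float:
    return gb_per_hour * 8 / 3600 * streams # GB/hour -> gigabits per second

print(f"SD: ~{mb_per_minute(SD_GB_PER_HOUR):.0f} MB/min, "
      f"4K: ~{mb_per_minute(K4_GB_PER_HOUR):.0f} MB/min")

# A hypothetical 40-hour shoot:
print(f"40 h of SD: {project_gb(SD_GB_PER_HOUR, 40):,.0f} GB, "
      f"40 h of 4K: {project_gb(K4_GB_PER_HOUR, 40):,.0f} GB")

# Feeding four editing workstations concurrently:
print(f"4 concurrent 4K streams: ~{stream_gbps(K4_GB_PER_HOUR, 4):.2f} Gb/s")
```

At roughly 1 Gb/s for four concurrent 4K streams, it is easy to see why the connectivity recommendations below point beyond basic Gigabit Ethernet.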

Consequently, all else being equal, 4K footage demands far more capacious storage solutions than footage shot in 1080p. Fortunately, there are codecs that allow editors to work in compressed 4K. Plus, the cost of storage continues its inexorable decline.

When it comes to capacity, the most cost-effective strategy remains spinning disks. Solid-state drives (SSDs) offer superior I/O performance and their prices are dropping, but they still cost substantially more per gigabyte than spinning disks. Traditional hard drives, however, must offer enough throughput to deliver multiple streams of higher-resolution footage without latency. Disks should have a rotational speed of at least 7200 rpm, and the storage system should offer at least 1GigE connectivity, if not 10GigE or 4/8/16Gb Fibre Channel.

Another issue is whether to configure storage as NAS or a SAN. Either approach will work. The decision rests on whether workstations will access shared files over the regular network (NAS) or block storage over a dedicated storage network (SAN). Of course, either option demands enough capacity for all the data.

The key takeaway is that 4K productions need greater storage capacity and performance to support streaming very large files. Go with SSDs for their speed if budgets permit, but otherwise make sure your spinning disks can deliver sufficient I/O throughput. Finally, contain production costs by selecting storage solutions that are robust, yet economical to purchase, scale, and operate. 4K is the future, and the future starts today.

Hyperconvergence: Convergence in Warp Drive

Hyperconvergence is a relatively new technology paradigm that has been garnering attention. To understand hyperconvergence, consider traditional and converged IT infrastructures. The traditional infrastructure comprises server, storage, and networking systems, all of which are siloed and, consequently, often require separate administrative groups to manage. Converged infrastructures advance this approach by coupling compute, storage, and networking with a virtualization software layer. Converged solutions offer virtues like simplified data center design and can be delivered by a single vendor. Although they may consist of bundled components from multiple vendors, there’s one phone number to call when an issue arises.

Hyperconvergence takes the integration paradigm even further. This strategy tightly consolidates compute, storage, networking, and virtualization resources in a commodity hardware box supported by a single vendor. Going well beyond virtualized servers and storage, hyperconvergence also integrates other key functionalities such as backup, replication, data deduplication, WAN optimization, and public cloud gateways, rendering standalone solutions unnecessary. In essence, the IT infrastructure consists of homogeneous, self-sufficient appliances.

This everything-in-a-box approach offers advantages. Again, there’s a single vendor and because all the boxes, or nodes, are aggregated, they’re managed as a single federated system. Everything shares a common interface and is administered via common tools, eliminating the need for multiple teams.

Expansion is simplified. Instead of scaling up by adding more compute or storage resources, you scale out by adding more boxes to the environment. Scalability is granular and can be done appliance by appliance, stabilizing IT budgets. The downside, however, is that you can’t add just more computing performance when needed, or just more storage. Because each node contains all resources, buying more computing power means also buying more storage. But because the appliances are x86 server chassis, their costs are reasonable.

Hyperconvergence is not a panacea for all IT environments. Because the nodes don’t share resources with one another, applications that generate large volumes of data to store might be better served by dedicated storage arrays, for example. Hyperconvergence is a good solution for smaller sites or distributed locations where IT resources are sparse and the costs of enterprise storage arrays can’t always be justified. Here, hyperconvergence can deliver IT services efficiently and cost-effectively.

Scale-out NAS Makes Affordable NAS Also Scalable

Suppose you run a media production company that generates large multimedia data files. Your data trove grows daily, demanding very scalable storage. What are your options? A storage area network (SAN) is one, but you don’t want the costs and complexities of building and owning a SAN. Moreover, you don’t need hundreds of users to access the files. Network-attached storage (NAS) offers greater economy and relative simplicity, but traditionally, NAS hasn’t been associated with scalability. A new generation of solutions has emerged, however, that gives NAS the scalability it’s always been missing.

Called scale-out NAS, this architecture uses management software to federate storage systems into a unified storage pool. In other words, multiple devices can be consolidated into a single, distributed storage system. Files are easily accessed regardless of their physical location and storage can be increased by adding more drives or even entire arrays. Connecting nodes is generally non-disruptive and the addition of more CPUs and RAM increases performance.

Because the storage components can be commodity products, scale-out NAS is a cost-effective approach for meeting extraordinary or unpredictable data growth. Many cloud providers use scale-out to cohere vast numbers of devices, often x86 computers, into a single large storage system.

Don’t confuse scale-out NAS with scale-up NAS. The latter is limited to a single form factor or, more precisely, a single storage controller. This means expanding storage requires adding drives to a single, non-clustered array. Scalability is limited, and enterprises must buy larger arrays than they currently need in order to anticipate future growth.

For companies burdened with economically saving large quantities of data, such as your production firm, healthcare providers, or enterprises engaged in data-intensive, high-performance computing (HPC), scale-out NAS deserves strong consideration.

When evaluating scale-out NAS, make sure your solution meets your needs. It should support fully featured clustering as well as simultaneous NAS and SAN access. Support for iSCSI, Fibre Channel, and InfiniBand interfaces gives it the versatility to operate in any environment. Your solution should deliver five-nines availability, supporting triple-parity RAID so that data remains available even if three drives fail. Additionally, it needs to be easy to deploy and operate, freeing you or your staff from the hassle of specialized training.

RAID Levels & Fault Tolerance

Before choosing a storage solution for your business, it’s important to first decide exactly what you want from the product you are paying for. Surely, in your search for a storage solution, you’ve come across the term “RAID,” or Redundant Array of Independent Disks. Basically, RAID is used when a company needs to improve performance or add fault tolerance to a server or a network-attached storage device.

What is Fault Tolerance?

A fault-tolerant storage solution is one that, in the event that a component fails, can keep running properly rather than failing completely. Fault tolerance in software or storage solutions usually relies on mirroring. Mirroring means the system writes data to more than one disk or system, so that in the event of a failure no information is lost and users can continue working from the surviving copy.

How Does RAID Affect Fault Tolerance?

RAID storage solutions offer different levels; the most commonly used are listed below, and a rough capacity and fault-tolerance sketch follows the list:

  • RAID 0 – provides no fault tolerance, but striping data across disks increases speed 2x or better.
  • RAID 1 – mirrors the data on multiple disks to provide fault tolerance, but usable capacity is only a fraction of the raw capacity.
  • RAID 5 – stripes the data across disks like RAID 0 and adds distributed parity, so a single drive can fail without any data loss, though it doesn’t match RAID 0’s speed.
  • RAID 6 – requires a minimum of four disks. Like RAID 5, but with dual parity, so two drives can fail without any data loss.
  • RAID 7.3 – this newer option answers the need for triple-parity RAID. As ever-larger drives make RAID 5 and RAID 6 rebuilds riskier, triple parity provides a more reliable storage option than RAID 5 or RAID 6.
  • RAID 10 – this option is costly, but it combines RAID 0 striping with RAID 1 mirroring. It requires at least four disks and can continue to operate without loss of any data as long as the failures occur in different mirrored pairs.
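
Here is a rough sketch of how these levels compare in usable capacity and guaranteed fault tolerance for a hypothetical array of eight 4TB disks. The disk count and size are illustrative assumptions, and triple parity (marketed by some vendors as RAID 7.3) is treated generically as three disks’ worth of parity overhead; real implementations vary.

```python
# Approximate usable capacity and guaranteed fault tolerance for the RAID levels
# listed above, given n identical disks. Figures are illustrative, not vendor specs.

def raid_summary(level: str, n_disks: int, disk_tb: float):
    parity_disks = {"0": 0, "5": 1, "6": 2, "7.3": 3}
    if level == "1":
        # All n disks hold a full copy of the data.
        return disk_tb, n_disks - 1
    if level == "10":
        # Striped mirrored pairs: half the raw capacity; one failure is always
        # survivable, more only if they land in different pairs.
        return n_disks / 2 * disk_tb, 1
    if level in parity_disks:
        p = parity_disks[level]
        return (n_disks - p) * disk_tb, p
    raise ValueError(f"unsupported RAID level: {level}")

n, size_tb = 8, 4.0   # hypothetical array: eight 4TB disks (32TB raw)
for level in ["0", "1", "5", "6", "7.3", "10"]:
    usable, survives = raid_summary(level, n, size_tb)
    print(f"RAID {level:>4}: {usable:5.1f} TB usable of {n * size_tb:.0f} TB raw, "
          f"survives {survives} drive failure(s)")
```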

Deciding between a software and a hardware implementation of RAID is equally important. A software RAID solution may support fewer of the RAID levels you need than a hardware RAID controller does.

What RAID Solution Is Best For Me?

Analyze your company’s needs. Do you value fault tolerance more than the speed and performance of your system? If so, RAID 1 or RAID 10 may be the best option. If you are more concerned with performance, RAID 0 or RAID 5 would be a good choice. If you value fault tolerance and performance equally, spending the extra money on RAID 6 or RAID 10 – ensuring that your system will not suffer in performance and that your data is safe from drive failure – is the better option.

If you are unsure of what your company needs, contact AC&NC and use our live chat to speak with one of our experts today! We’ll help you find the right storage solutions for your business.