Covering Disruptive Technology Powering Business in The Digital Age

Where and How Will You Store All That Big Data?
April 28, 2016 News

The data is piling up, and we’ve witnessed an explosion in storage capacity to keep up with it. Way back in the 1980s, the late, great comedian George Carlin foresaw the storage woes we’re facing today: “A house is just a pile of stuff with a cover on it. When you leave your stuff, you gotta lock it up. Wouldn’t want somebody to come by and take some of your stuff. Sometimes you gotta move, gotta get a bigger house. Why? No room for your stuff anymore.”

As you well know, companies are collecting stuff, meaning data, like never before. This is happening, in part, because many of the analog functions required to monitor and manage the physical world are fast becoming digital ones. This isn’t the only source of the surge in data. From traditional ERP and CRM applications to emerging distributed mobile applications, the amount of data generated by business organizations continues to skyrocket. Looking ahead, the problem is only going to get worse.

Data holds value, not just white noise

Enter data from embedded systems and devices (e.g., MP3 players, MRI scanners), often referred to as the Internet of Things. All of this data isn’t just “white noise” generated by devices. It holds value for those who know how to extract it. For example, by mining that data, businesses can provide more relevant content, refine their products, and improve interactions between people and devices. All that data has grown to a level where it’s starting to overwhelm established datacenter practices. According to IDGE’s 2014 Big Data study, 31% of enterprises already manage more than a petabyte of data and, on average, expect their storage needs to grow by 76% over the next 12 to 18 months.

In this new world, traditional storage systems no longer cut it. There isn’t enough power, space, or time to sustain them against the volumes of data organizations deal with today. The solution is something like a weight-loss regimen for data storage. There is only one way for a normal, healthy person to lose weight and boost performance: consume less and exercise more. Similarly, the best way to slim down bloated data storage is with technologies that take a multi-dimensional approach to the problem: they perform better, deliver more value in less space, and can eliminate the need for extra copies of data.

Flash is the best way to boost storage performance

One of the most disruptive technologies in this slim-down space is solid-state or flash storage. For decades, the best way to boost the performance of your applications was to add additional disk drives, or spindles, to your storage environment, even when you didn’t really need the added capacity. With solid-state drives (SSDs) that use flash technology, you can get all of that performance in 10% of the physical space required by hard disk drives (HDDs). Along with the savings on floor space, flash also provides a significant savings on utilities because solid-state drives don’t spin, which means they don’t have to be powered or cooled in the same way that HDDs do.

But getting fit is not just about losing weight, it’s also about creating a healthier, more active version of yourself. Flash not only lets you slim down the number of arrays and the resources used to maintain them, it also transforms your storage capabilities. When performance matters most, nothing can beat an all-flash array built on a flash-optimized architecture.

Not all flash storage is the same

It’s also important to recognize that not all flash is created equal. Performance and space savings are the immediate benefits, but as you move more workloads to flash, you can’t sacrifice the tier-1 data availability and scale that you’ve come to expect from mission-critical storage systems. Beyond next-generation storage media, it’s key to take advantage of data services that eliminate wasted space. Compaction technologies like data deduplication, applied to flash storage, can reduce capacity requirements by 4:1 or even more within a system.
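Deduplication earns those savings by fingerprinting blocks of data and storing each unique block only once. As a rough illustration, and not any vendor’s actual implementation, here is a minimal Python sketch assuming fixed-size 4 KB blocks and SHA-256 fingerprints; production arrays typically add variable-size chunking, inline compression, and persistent metadata:

```python
import hashlib

def deduplicate(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and keep one copy of each unique block.

    Returns a block store (hash -> block) and a recipe (ordered list of
    hashes) from which the original stream can be rebuilt.
    """
    store = {}   # unique blocks, keyed by content hash
    recipe = []  # ordered fingerprints describing the original stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # store the block only once
        recipe.append(digest)
    return store, recipe

def reconstruct(store, recipe):
    """Rebuild the original stream from the store and the recipe."""
    return b"".join(store[h] for h in recipe)
```

With highly repetitive data, such as dozens of near-identical virtual machine images, most blocks hash to fingerprints already in the store, which is where ratios like 4:1 come from.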

One of the things that consumes so much enterprise storage space is the multiple copies of data sets required across systems for disaster recovery, test and development, data warehousing, and backups. These are all based on the same original data set, but operate independently of each other and result in copy after copy. When companies rethink these discrete systems and consolidate onto a highly scalable, accelerated flash array, they can take advantage of space-efficient snapshot mechanisms to take a full-fidelity virtual snapshot of a dataset and then expose that snapshot to a new application or developer. In reality, no additional copies of the data are ever created, which can drive additional space savings six-fold or more.
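The snapshot mechanism described above can be pictured with a toy copy-on-write model, a hypothetical sketch rather than any specific array’s implementation: a snapshot copies only the map of block references, so a development copy consumes extra space only for the blocks it actually changes.

```python
class Volume:
    """Toy copy-on-write volume: snapshots share blocks until one is rewritten."""

    def __init__(self, blocks=None):
        # block number -> block contents; the data itself is shared,
        # only the mapping is private to each volume.
        self.blocks = dict(blocks or {})

    def snapshot(self):
        # A snapshot duplicates the block *map*, not the data, so taking
        # one is nearly instant and consumes almost no extra space.
        return Volume(self.blocks)

    def write(self, block_no, data):
        # Writing replaces only this volume's mapping; snapshots taken
        # earlier still reference the old block (copy-on-write).
        self.blocks[block_no] = data

    def read(self, block_no):
        return self.blocks.get(block_no)
```

A test/dev team can thus be handed a “full copy” of a production dataset that occupies no new space until the developers start modifying it.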

All that data can be a valuable resource

When George Carlin talked about stuff, he was really referring to the clutter an individual accumulates. For businesses, stuff isn’t a collection of souvenir shot glasses from every trip you ever took; it’s a valuable resource that can be mined for essential information. If you can perpetually store and effectively utilize all that data, it can be the gateway to better decision making and more profitable business ventures. It’s the stuff that dreams can be made of.

This article was originally published on and can be viewed in full