As De La Soul taught me in 1989, themselves passing on the teachings of the scholar Bob Dorough, three is a magic number. When it comes to data, particularly when attempting to understand and crystallise requirements and design a solution to meet them, the three-pronged framework I often use can be distilled as:

Store
Manage
Use

In diagrams I draw those as bottom-up layers: storage provides the foundation, management of the data/storage overlays that foundation, and access/utilisation sits on top. This isn’t revolutionary; people have been using similar terminology to talk about data for years. Old as it is, though, it’s just as useful today as it was way back when. So, the framework we use to evaluate has held firm; however, much has changed in what it is we are evaluating.
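To make the framework a touch more concrete, here’s a purely illustrative sketch of how I bucket requirement questions into the three layers. The questions (and the whole snippet) are hypothetical examples of mine, not lifted from any real engagement:

```python
# Purely illustrative: example requirement questions bucketed into the
# store/manage/use layers. The questions themselves are hypothetical.
LAYERS = {
    "store": [
        "Which protocols (block, file, object) must be served?",
        "How much capacity, with what resiliency and scalability?",
    ],
    "manage": [
        "How is data placed and tiered across data centre, remote site and cloud?",
        "Where is policy controlled, and by whom?",
    ],
    "use": [
        "How do people and applications find and access the data?",
        "What insight do operators need into how it is being used?",
    ],
}

# Bottom-up: storage is the foundation, management overlays it, use sits on top.
for layer in ("store", "manage", "use"):
    print(layer.upper())
    for question in LAYERS[layer]:
        print(f"  - {question}")
```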

Compared to just a few years ago there are many more systems creating much more data; shed-tons of the stuff. From fridges to fish-tanks, enterprise to hyper-scalers: data, data, data. A decade ago, a single institution storing multiple petabytes of its own data was pretty much unheard of. One of our vendors has just sold into a large bank with 60 petabytes of purely backup data! 60PB! We’ve had to make up new words like zettabyte and yottabyte to describe orders of magnitude of data which, just a few years ago, we hadn’t even imagined. Data, data, data.
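For a rough sense of the scale those new words describe, here’s a back-of-envelope sketch (decimal prefixes; the only figure taken from above is the 60PB):

```python
# Back-of-envelope scale check using decimal (SI) prefixes.
PB = 10**15   # petabyte
ZB = 10**21   # zettabyte
YB = 10**24   # yottabyte

bank_backup = 60 * PB  # the backup estate mentioned above

print(ZB // bank_backup)  # 16666 -> a zettabyte swallows ~16,666 such estates
print(YB // bank_backup)  # 16666666 -> a yottabyte, roughly 16.7 million of them
```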

In addition to new words, we’re having to come up with new solutions to address the data deluge. A couple of years ago I helped pen a response to an RFI for a storage solution. The request was for multi-tenancy, resiliency and scalability; block, file and object protocols; data-centre, remote-site and cloud deployment; policy-based tiering, access control and multi-protocol authentication. They wanted all the storage.

The response encompassed solutions from a number of vendors, stitched together with an in-house-developed cloud and storage management platform that centralised policy control and orchestration. The proposed solution met the requirements and the feedback from the client was incredibly positive. For reasons unrelated to the technical solution, the project never progressed to the RFP stage. Don’t get me wrong, the solution would’ve done the job, but in many ways it offended my aesthetic sensibilities. Considering the requirements and the technology available at the time, it was just about as simple as it could possibly be; however, it wasn’t close to being simple enough for my liking.

I had this and a number of other career experiences front of mind when we were considering adding DataCore to our portfolio. Their DataCore ONE vision really resonated with me:

“A unified platform to simplify and optimize primary, secondary, and archive storage tiers, all managed under a unified predictive analytics dashboard.”

DataCore’s block-based storage virtualisation software, SANsymphony, has been around for a number of years and is used by many loyal and happy customers. To enable DataCore to deliver against the ONE vision, they’ve recently added vFilO, their distributed file and object storage virtualisation software, to the portfolio. vFilO provides visibility and control over widely scattered data spread across NAS, file servers and object stores through a multi-site, keyword-searchable global namespace. vFilO is very cool indeed. The third piece of the puzzle (did I mention I like threes?) is Insight Services, a cloud-based predictive analytics platform providing single-pane-of-glass insight, analysis and control.
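To ground what policy-driven placement across a global namespace actually means, here’s a deliberately generic sketch. To be clear, this is not vFilO’s interface or any DataCore code; the metadata fields, tiers and thresholds are all invented purely to show the shape of the idea:

```python
# Generic illustration of policy-based placement across a global namespace.
# Nothing here is DataCore code; fields, tiers and thresholds are invented.
from dataclasses import dataclass

@dataclass
class FileEntry:
    path: str              # logical path in the global namespace
    days_since_access: int
    keywords: set          # searchable metadata tags

def placement_tier(entry: FileEntry) -> str:
    """Decide where a file should live under a simple example policy."""
    if "legal-hold" in entry.keywords:
        return "archive-object-store"   # cheap, long-term retention
    if entry.days_since_access > 90:
        return "secondary-nas"          # cold data moves off the primary tier
    return "primary"                    # hot data stays on fast storage

namespace = [
    FileEntry("/projects/atlas/design.cad", 3, {"cad", "active"}),
    FileEntry("/finance/2017/q4-close.xlsx", 400, {"finance", "legal-hold"}),
    FileEntry("/media/raw/shoot-0142.mov", 180, {"video"}),
]

for f in namespace:
    print(f.path, "->", placement_tier(f))
```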

Considered through the framework I introduced earlier, DataCore ONE provides: 

Store:
- Block storage virtualisation with SANsymphony
- Distributed file and object storage virtualisation with vFilO, spanning NAS, file servers and object stores

Manage:
- Primary, secondary and archive tiers simplified and optimised under a single platform
- Visibility and control over widely scattered data through a multi-site global namespace

Use:
- Keyword search across the global namespace
- Predictive analytics, insight, analysis and control from a single pane of glass via Insight Services

And much, much more…

In summary: an incredibly feature-rich, flexible and elegantly abstracted storage solution. That’s something.

…and y’all can bet this blog’s not a trick, but showing the function!