Imagine a world where you could look at a single dashboard and get a complete overview of your data: how much you have, where it is, how much it is costing you, whether it is secure, and how to optimise it, all without any manual intervention from you.  That world doesn't exist today, but we believe Datrgy is the first step along the path towards it, designed to give enterprise organisations insight into their data and its usage across multiple file systems.

Why does that matter?

Well, as the forecasted data deluge becomes reality, with no sign of slowing, the ability to properly manage your data matters more and more.  Data growth is being driven by a number of factors: vendors pushing big data analytics like it's going out of fashion, and companies keeping every piece of data they have ever created, just in case they need it at some undetermined point in the future.

Add to this the fact that the price of storage is falling, and that storage is generally more efficient than it was five years ago, and you could be forgiven for thinking that coping with the growth is simple: just buy more disk.  That view makes it easy to miss a ticking time bomb within your infrastructure.  As data growth continues, the "simple" approach isn't sustainable; at some point the pace of growth will exceed the rate at which storage costs fall, leaving you with an ever-increasing cost base.

So how do we overcome the problem and where do we start?

Historically this was a little easier: the majority of data was held centrally, in a single data centre or a small number of them, which made it simpler to manage.  Now, with data being created in most if not all sites, and collaboration being one of the key business drivers, we've moved to a more distributed data landscape and, with that, lost visibility of our data.  While tools exist to provide visibility into your data, they either focus on the storage (rather than the data) or are designed for a single-site / single data centre deployment model; they tend not to cope in a distributed infrastructure.

Without question, the starting point has to be getting visibility into the data that you're storing.  If you don't understand the profile of your data, how can you expect to create a policy to manage it?  You need some way of profiling your data to build a picture of how much you have, where it resides, how it's used and how old it is.
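As a minimal sketch of what that kind of profiling involves (purely illustrative, not how Datrgy works under the hood; the /data path and the age buckets are assumptions for the example), the snippet below walks a file tree and summarises file count and size by last-access age:

```python
import os
import time
from collections import defaultdict

# Illustrative age buckets (in days); a real profile would be driven
# by your own retention and tiering rules.
AGE_BUCKETS = [(30, "hot (<30 days)"),
               (180, "warm (30-180 days)"),
               (float("inf"), "cold (>180 days)")]

def bucket_for(age_days):
    for limit, label in AGE_BUCKETS:
        if age_days <= limit:
            return label

def profile(root):
    """Walk a file tree and summarise count and size per age bucket."""
    now = time.time()
    summary = defaultdict(lambda: {"files": 0, "bytes": 0})
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                stat = os.stat(path)
            except OSError:
                continue  # skip files we can't read
            age_days = (now - stat.st_atime) / 86400  # last-access age
            bucket = summary[bucket_for(age_days)]
            bucket["files"] += 1
            bucket["bytes"] += stat.st_size
    return dict(summary)

if __name__ == "__main__":
    for bucket, stats in profile("/data").items():
        print(f"{bucket}: {stats['files']} files, {stats['bytes'] / 1e9:.1f} GB")
```

Even a basic report like this (note that last-access times may be unreliable on volumes mounted with noatime) starts to answer the "how much, where, how old" questions that any policy has to be built on.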

Once you have that profile information you can build an appropriate policy.  The first, and easiest, place to start is your aged data, i.e. data that hasn't been accessed for a long time, moving it to a less performant tier of storage (most likely object or cloud storage).  In our experience this aged data tends to represent around 80% of the overall estate, so moving it frees up a large proportion of your storage cost and leaves capacity in the performance tier where you need it.  Future storage expansions can then focus on the lower-cost, less performant, larger-scale tier.
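To make the idea concrete, here is a hedged sketch of an age-based tiering rule; the TieringPolicy class, the 180-day threshold and the "object-store" target are hypothetical illustrations, not Datrgy's actual policy engine:

```python
import os
import time
from dataclasses import dataclass

@dataclass
class TieringPolicy:
    """A simple age-based tiering rule (illustrative only)."""
    max_idle_days: int = 180           # move anything untouched for ~6 months
    target_tier: str = "object-store"  # e.g. an S3-compatible bucket

def select_for_migration(root, policy):
    """Yield paths whose last access time exceeds the policy threshold."""
    cutoff = time.time() - policy.max_idle_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # unreadable files are skipped, not migrated

if __name__ == "__main__":
    policy = TieringPolicy()
    for path in select_for_migration("/data", policy):  # hypothetical mount
        print(f"candidate for {policy.target_tier}: {path}")
```

The hard part in practice isn't selecting the candidates; it's doing so consistently, across every site, without someone having to run the script by hand.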

The process itself needs to be a continuous one; executing against your data management policy is not a one-off exercise but something that has to happen on an ongoing basis.

All of this points to a Data Lifecycle Management (DLM) platform becoming a necessity for organisations of all sizes.  The ability to scan your data, wherever it may reside, and then make policy-based decisions about where to place it, without any manual intervention, is increasingly important.

Datrgy has been designed to provide data visibility and policy enforcement in a distributed environment, without the need for large infrastructure investments.

If you’d like to join our upcoming introductory webinar and demonstration of the Datrgy platform, you can sign up here.