
Catalog and Manage the Data Deluge to Increase Innovation - Lee Johns

By @StorageOlogist

We all seem to be battling extreme conditions lately. The Polar Vortex has hit the US, plunging temperatures into unheard-of territory, and I just returned from a trip to the UK, where record rainfall has led to widespread flooding. If you are an IT manager dealing with corporate data, you might be thinking, “Big deal, I have been dealing with extreme conditions for a while now.”

The fact that data is growing at unprecedented rates is not new, and vendors have been bringing new solutions to market that help with data reduction and ease the burden of storing, managing, backing up and recovering data. However, despite the use of technologies such as space-efficient snapshots, deduplication, compression and space reclamation, backup and management of data remain an untamed task for many companies.
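For readers less familiar with one of these techniques, here is a minimal, illustrative sketch of block-level deduplication, the idea of storing identical chunks of data only once. It is a toy example of my own, not any vendor's implementation, and the fixed chunk size and hash choice are arbitrary assumptions.

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed-size chunks; real systems often use variable-size chunking


class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self):
        self.chunks = {}   # sha256 digest -> chunk bytes
        self.files = {}    # file name -> list of digests (the "recipe" to rebuild it)

    def write(self, name, data):
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            # Only store the chunk if this content has never been seen before.
            self.chunks.setdefault(digest, chunk)
            recipe.append(digest)
        self.files[name] = recipe

    def read(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])


store = DedupStore()
payload = b"A" * 10000
store.write("vm1.img", payload)
store.write("vm1-copy.img", payload)          # the second copy adds no new chunks
assert store.read("vm1-copy.img") == payload
print(f"logical bytes: {2 * len(payload)}, unique chunks stored: {len(store.chunks)}")
```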

In a mid-2013 survey, IDC found the following:

• In 2012, more than 60 percent of all enterprise disk capacity worldwide was filled with copy data.

• By 2016, spending on storage for copy data will approach $50 billion and copy data capacity will exceed 300 exabytes.

• By 2016, companies will spend 8 times as much on copy data as they will on storage for big data and analytics.

• In the next 12 months, survey respondents expect increased use of data copies for app development and testing, regulatory compliance, multi-user access and long-term archival.

So while current data reduction techniques are good and getting better, there still remains a need for more intelligent data management and backup. The reality is that data continues to grow, and it continues to grow in an uncontrolled way. You need to use effective data reduction techniques, but you also need more knowledge about what data you have and its importance to the organization. At the same time, as the IDC data shows, you need to be incredibly mindful of spiraling backup costs. In many cases the cost of the additional storage, media servers and software required for backup is crippling innovation because it consumes dollars that could be invested in new projects.

This is where the concept of a data catalog comes in. We have all heard the old adage, “You cannot manage what you cannot measure.” It is just as true for data as for anything else. A catalog of all primary and copy data, with associated metadata tags for every file, VM and object, enables more considered decisions about backup strategies and how best to meet service level agreements. It just makes sense: you have to know what your data is in order to know how to protect it (or not).
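To make the idea concrete, here is a small, purely hypothetical sketch of a catalog in which every entry carries metadata tags, and protection decisions are driven by querying those tags rather than by blindly copying everything. The tag names, tiers and policy rules are my own assumptions for illustration, not Catalogic's data model.

```python
from dataclasses import dataclass, field


@dataclass
class CatalogEntry:
    """One cataloged item: a file, VM, or object plus its metadata tags."""
    name: str
    kind: str                    # e.g. "file", "vm", "object"
    size_gb: float
    tags: dict = field(default_factory=dict)   # e.g. {"owner": "finance", "tier": "gold"}


# Hypothetical policy table: map a business-importance tag to a backup strategy.
POLICY_BY_TIER = {
    "gold":   {"method": "snapshot", "frequency": "hourly", "retention_days": 90},
    "silver": {"method": "snapshot", "frequency": "daily",  "retention_days": 30},
    "bronze": {"method": "none",     "frequency": None,     "retention_days": 0},
}


def plan_protection(catalog):
    """Decide how to protect each item based on its metadata, not guesswork."""
    for entry in catalog:
        tier = entry.tags.get("tier", "bronze")   # untagged data falls to the lowest tier
        yield entry.name, POLICY_BY_TIER[tier]


catalog = [
    CatalogEntry("payroll-db.vmdk", "vm", 250, {"owner": "finance", "tier": "gold"}),
    CatalogEntry("scratch-build", "file", 40, {"owner": "dev"}),  # no tier tag
]

for name, policy in plan_protection(catalog):
    print(name, "->", policy)
```

The point of the sketch is simply that once metadata exists for every item, “how do we back this up?” becomes a query against the catalog instead of a one-size-fits-all policy.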

With all this in mind, I was pleased this week to see Catalogic Software step out in its own right as a new company. Catalogic was formerly Syncsort Data Protection. Catalogic's software can make a big difference in the cost and efficiency of managing and backing up data.

For the reasons discussed above, the backup and data protection market is a very crowded space today. In my mind, there are three key reasons why Catalogic makes a difference:

1) Reduces complexity for customers with a single consistent backup solution across primary, copy, physical, virtual and cloud environments.

2) Reduces overall TCO. By leveraging existing storage hardware, snapshotting and data reduction techniques, Catalogic can reduce both the operational and capital costs associated with traditional data protection and backup solutions (a rough capacity sketch follows this list).

Note: this currently works only with NetApp FAS Series storage as the backup server.

3) Increases the value of data to an organization by providing a holistic approach to cataloging, managing and protecting data.
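As promised under point 2, here is a back-of-the-envelope sketch of why snapshot-based protection on existing hardware can cost so much less capacity than traditional copy-based backup. All of the numbers (data size, change rate, retention, backup schedule) are assumptions for illustration, not Catalogic benchmarks.

```python
# Hypothetical workload: 100 TB of primary data, 2% daily change rate, 30-day retention.
primary_tb = 100
daily_change = 0.02
retention_days = 30

# Traditional approach (illustrative): weekly full copies plus daily incrementals.
fulls = retention_days // 7 + 1
incrementals = retention_days - fulls
traditional_tb = fulls * primary_tb + incrementals * primary_tb * daily_change

# Snapshot-based approach (illustrative): one baseline plus only the changed blocks per day.
snapshot_tb = primary_tb + retention_days * primary_tb * daily_change

print(f"traditional copies: ~{traditional_tb:.0f} TB")   # ~550 TB under these assumptions
print(f"snapshot-based:     ~{snapshot_tb:.0f} TB")      # ~160 TB under these assumptions
```

The exact ratio obviously depends on change rates and retention policies, but the direction of the difference is why leveraging array snapshots matters for TCO.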

So if you are suffering from a data deluge, and my guess is that if you are reading this you are, you might want to join me in welcoming Catalogic to the market and take a look at what it could do for you. If you get your data protection policies and processes under control, you will be able to focus more time and money on driving new business projects that capitalize on your company's data.

Reference: IDC blog by Ashish Nadkarni on quantifying the copy data problem.
