By StorageOlogist (Lee Johns)
Sometimes nothing makes you appreciate a problem like real-world experience. This week I had an incident with the backup of my home data. I have nothing too sophisticated at home: three PCs might be online at any one time, but that is really it. Still, they contain data that is all too valuable to lose: work documents, 20 years of financial records, photographs, emails and so on. Over the years I have used various methods to keep everything protected and have not lost any data in 20 years.
This week, though, potential disaster struck. My backup server failed and it is scrap. Just to clarify, I have no data in the cloud. My wife is not comfortable with it, and that is the end of that. I will not get into the details of what happened, but the server is non-recoverable and its data is lost. I am, of course, lucky enough to have my primary data still intact and my local backups for each PC still in place.
Of course I now had to decide how to replace my backup system, and looking at the infrastructure I realized I had 14TB of backup disk for the 3.5TB of primary disk in my PCs. On that 3.5TB of primary disk I had approximately 1.5TB of unique data. That is nearly a 10:1 ratio of backup capacity to actual unique data. The problem, of course, was that I had multiple copies of everything. Too much copy data! Mine is a small home system, but in business, copy data can be a killer when it comes to backup and recovery times and the cost of the total infrastructure.
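The arithmetic behind that "nearly 10:1" figure is simple enough to sketch. The numbers below are the ones from my setup; nothing else is assumed:

```python
# Figures from my home setup, as described above.
backup_capacity_tb = 14.0   # total backup disk across the infrastructure
primary_capacity_tb = 3.5   # primary disk across the three PCs
unique_data_tb = 1.5        # data that is actually unique and worth protecting

# Ratio of backup capacity consumed per terabyte of unique data.
ratio = backup_capacity_tb / unique_data_tb
print(f"Backup-to-unique-data ratio: {ratio:.1f}:1")
```

At roughly 9.3:1, almost all of that backup capacity is holding redundant copies rather than protecting anything new.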
In IT organizations, data copies are made constantly by processes such as snapshots, backups, replication, DR and test and development. The problem is compounded by the silos of infrastructure involved in protecting and managing the data, each of which creates its own copies. Indeed, IDC has stated that some organizations keep over 100 copies of some forms of data.
There are, of course, technologies attempting to address the issue. Storage vendors may offer zero-footprint snapshots and clones: read/write virtual volumes that can be instantly mounted to bring data back online. Storage and backup vendors are delivering deduplication technologies to consolidate the data at the time it is stored or backed up. The problem is that each of these is also a silo and may not account for data on direct-attached storage.
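To make the deduplication idea concrete, here is a minimal sketch of fixed-size block dedupe: store each unique block once and keep an ordered recipe of hashes to reconstruct the original. This is an illustration only; real products use far more sophisticated schemes (variable-size chunking, collision handling, compression), and the block size and hash choice here are assumptions.

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks, keeping one copy of each unique
    block. Returns the block store plus a recipe of hashes in order."""
    store = {}    # hash -> block bytes (unique blocks only)
    recipe = []   # ordered hashes needed to rebuild the original stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # only stored the first time seen
        recipe.append(digest)
    return store, recipe

def restore(store, recipe) -> bytes:
    """Reassemble the original data from the store and recipe."""
    return b"".join(store[h] for h in recipe)
```

Feed it 16KB containing three identical 4KB blocks and one different one, and the store holds only two blocks while the recipe still rebuilds all four, which is exactly the copy-data consolidation the vendors are after.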
One way to manage your data effectively is to centralize your snapshot infrastructure onto a common target with the ability to quickly mount a snapshot clone from the centralized data. Better still, this infrastructure can share the support processes, methodologies and toolsets you already use to manage your primary data. With this approach, you get near-continuous data protection with extremely fast recovery, improving both your RPO (Recovery Point Objective) and RTO (Recovery Time Objective).
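The RPO side of that claim comes down to snapshot frequency. A rough sketch of the reasoning, with illustrative schedule values that are my own assumptions rather than anything from a particular product:

```python
from datetime import timedelta

def worst_case_rpo(interval_minutes: int) -> timedelta:
    """For a periodic snapshot schedule, the worst-case data loss window
    (RPO) is one full interval: a failure just before the next snapshot
    loses everything written since the last one."""
    return timedelta(minutes=interval_minutes)

# Compare a traditional nightly backup with frequent centralized snapshots.
for minutes in (24 * 60, 15):
    print(f"{minutes:>5}-minute schedule -> worst-case RPO: {worst_case_rpo(minutes)}")
```

Frequent snapshots to a common target shrink the worst-case loss window from a day to minutes; the instantly mountable clone is what shrinks the RTO to match.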
Recently, NetApp introduced clustered Data ONTAP (cDOT). With the scalability it offers, you can consolidate much more of your primary and backup storage infrastructure into a single system. cDOT enables consolidation of tens of petabytes of data and thousands of volumes onto a multi-protocol clustered system that provides non-disruptive operations.
With the recently released NetApp cDOT 8.2, you can support up to 8 clustered nodes in a SAN environment and up to 24 in a NAS deployment, delivering up to 69PB of storage within a single manageable clustered system. Of course, you can also build smaller clusters.
Catalogic DPX not only delivers rapid backup and recovery with data reduction, helping solve your copy data problems with near-continuous data protection; it also facilitates migration of both virtual and physical infrastructures to new cDOT-based systems by snapshotting and backing up your servers to an existing NetApp system, then recovering them to a cDOT system. This offers considerable time savings over traditional migration methods. DPX takes care of disk alignment and can also migrate iSCSI LUNs to VMDK files on the new clustered storage.
With careful planning and the right infrastructure you can consolidate storage management and get copy data under control. If you don’t, you risk spending more on infrastructure, and wasting valuable time managing duplicate data.
See how one Catalogic customer migrated to a Clustered Data ONTAP infrastructure using DPX.
Take a look at the Catalogic Solution Paper, “Solving the Copy Data Management Dilemma”, for more information.