We hear it from IT and Infrastructure Directors around the world—backup environments are getting harder to manage. And while most of you aren’t managing the backup processes directly, you’re being forced to do more with less. That often means that departing staff aren’t replaced quickly (or at all). It means your IT budget is shrinking every year while service-level expectations are growing. It means having to make an ROI-based business case for new hardware purchases, even when existing storage space has maxed out.
But here’s a dirty secret about most backup environments—they may appear to be neat and organized, but they’re secretly barnyards. And why do we say that? Because, gentle reader, most are full of hogs.
Now (of course) we’re not talking about literal hogs roaming the corridors of data centers around the world. We’re talking about metaphorical hogs—clients in your network that are hungrily gobbling up precious storage space (or processing cycles) that would be better allocated elsewhere. Sometimes the hog is a client that is storing redundant data that hasn’t been accessed in months, or even years. Sometimes it’s a client that’s running (and storing) full backups when only partial ones are needed.
Every Backup Admin and IT Director knows that these hogs are out there—the problem is often identifying them, especially in environments that span multiple locations and include many different data protection systems. If you could put those hogs on a diet, either by moving more data to long-term storage or freeing up space through other means, it would be a lot easier to get more life out of the storage clients you already have.
So how do you find the hogs? A good starting point is the ratio between front-end data (the amount of data being backed up) and the amount of data consumed on primary and backup storage. For most companies this ratio is about 1:2, and you will likely find that a few backup clients exceed it. This generally happens when you’re running only full backups and retaining multiple versions. If you have more time, you can also review the logs for each client to see when its data was last accessed, and determine whether it meets the criteria for long-term storage.
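The ratio check above can be sketched in a few lines of Python. This is a minimal illustration, assuming you can export per-client totals (front-end GB backed up versus GB consumed on backup storage) from your backup software's reporting; the client names and figures below are hypothetical.

```python
# Flag "hog" clients whose stored-to-front-end ratio exceeds the typical 1:2.
TYPICAL_RATIO = 2.0  # ~1:2 front-end to stored data for most companies

def find_hogs(clients, threshold=TYPICAL_RATIO):
    """Return (name, ratio) for clients whose ratio exceeds the threshold."""
    hogs = []
    for name, front_end_gb, stored_gb in clients:
        ratio = stored_gb / front_end_gb
        if ratio > threshold:
            hogs.append((name, round(ratio, 1)))
    return hogs

# Hypothetical export: (client, front-end GB, GB on backup storage)
clients = [
    ("db-server-01", 100, 2300),   # 100 GB database consuming 2.3 TB
    ("file-server-02", 500, 900),  # within the typical 1:2 range
    ("mail-server-03", 250, 480),  # within the typical 1:2 range
]

print(find_hogs(clients))  # → [('db-server-01', 23.0)]
```

Any client the function flags is a candidate for the deeper log review described above.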
It goes without saying that these kinds of analyses can be time-consuming, which is why hogs often go undetected. An easier way to find them is to use a tool like Rocket Servergraph, which includes a built-in “hog report” that eliminates the manual reporting described above. But regardless of how you find the hogs, the next step is taking action.
In our experience working with customers, it’s not uncommon to find databases around 100GB in size taking up terabytes of storage. If you run into a situation like this, there are a few ways to get things under control, including:
- Leveraging incremental backups so you’re only backing up files that have changed, and not copying the same data over and over.
- Improving your deduplication processes so you’re not storing the same data in multiple places. While there’s a case to be made for a reasonable level of redundancy, there’s little benefit in storing the same data in dozens of places.
- Reducing the number of versions you retain. Once data gets beyond a certain age, it may make sense to retain one backup per week rather than one per day, depending on compliance requirements.
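The version-thinning idea in the last bullet can be sketched simply: keep every backup inside a recent window, and beyond that keep only one per week. This is an illustrative sketch, not how any particular backup product implements retention; the 30-day window and dates are assumptions, and real policies depend on your compliance requirements.

```python
from datetime import date, timedelta

def thin_versions(backup_dates, today, daily_window_days=30):
    """Keep all backups within the window; older ones keep one per ISO week."""
    keep = []
    seen_weeks = set()
    for d in sorted(backup_dates, reverse=True):  # newest first
        if (today - d).days <= daily_window_days:
            keep.append(d)  # recent: retain every daily version
        else:
            week = d.isocalendar()[:2]  # (year, ISO week number)
            if week not in seen_weeks:
                seen_weeks.add(week)    # older: retain one version per week
                keep.append(d)
    return sorted(keep)

# Hypothetical history: 90 consecutive daily backups.
today = date(2016, 7, 5)
history = [today - timedelta(days=i) for i in range(90)]
kept = thin_versions(history, today)
print(f"{len(history)} versions thinned to {len(kept)}")
```

Applied across a client retaining dozens of full versions, this kind of policy is often where the biggest storage savings come from.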
So the next time you feel the need to go to your CIO with a request for more hardware, take a look to see if you’re feeding any hogs, then reassess your needs once you’ve slimmed them down.
Want to learn more? Watch our archived Servergraph webinar below.