Legacy Systems: The Bottom-Line Advantage
In the technology world, you can’t go more than 10 minutes without someone using the word “disruptive,” and you can’t go more than 30 seconds without hearing the word “new.” Absolutely everything in hardware and software is about novelty, and yesterday’s innovation is tomorrow’s doorstop. That 40MB hard drive you bought for $3,000 is literally in a museum, alongside your old flip phone and ColecoVision console. People love driving 50-year-old cars; no one likes using a 50-year-old computer.
In a culture where new is assumed to be better than old, it’s only natural that IT leaders would consider replacing their existing systems as soon as they have a need that isn’t natively supported, especially if those systems haven’t been kept up to date. In fact, that’s often the first order of business for new CIOs when they walk into an IT organization. On paper it makes sense. After all, wouldn’t any good technology leader want the best possible tools available? What they don’t realize, however, is that in-place solutions are usually fully capable of meeting today’s needs—provided they get the proper care and feeding. Welcome to the world of legacy systems powering legendary results.
The logic behind rip-and-replace is straightforward: out with the old and in with the new. But when you factor in all of the direct and indirect costs of such a massive undertaking, the financial case for making a change doesn’t always hold up. In fact, organizations that properly maintain and update their legacy systems often come out ahead on the ledger sheet.
It may seem counterintuitive, but the numbers give IT departments at large organizations a very compelling reason to keep existing systems, because so many costs of replacement are never factored in. For example, completely retraining an entire IT department can cost hundreds of thousands, if not millions, of dollars. Then there are the thousands of hours that will be lost to transferring data between systems. And that doesn’t even account for the enormous risk inherent in a re-platforming project. The list goes on.
Fans of rip-and-replace often counter that change needs to be made for operational reasons. Again, the data just isn’t there. While the underpinnings of many large systems, including IBM Z and IBM Power, trace their roots back several decades, the actual hardware and operating systems are updated and improved every single year. In fact, more than 75% of the IBM Z mainframes in use today are z14s and z15s, purchased new in the last two years. And IBM recently announced its new Power10 processor, with new server models promised for 2021, built on the same technology that powers the world’s most advanced supercomputers.
This is something conventional wisdom consistently gets wrong: the application code may be older, but the modern machines running those applications can perform billions of transactions a day. That’s why banks didn’t experience major outages during the COVID-19 pandemic, and why government agencies didn’t grind to a halt, even in the face of record demand.
Two of the factors that matter most to IT leaders are performance and cost. When weighing whether to replace older systems with newer ones in a quest for “modernity,” legacy systems have a clear advantage, both today and into the future.
To learn more about how modernizing legacy systems can help to power legendary results, download our latest research paper here.