What lessons are cloud computing companies learning from the mainframe world?
25 years ago I joined a company called Object Oriented Pty Ltd (now Object Consulting). At the time we were promoting object-oriented technologies, which were then at the bleeding edge. As consultants, we went around telling people how this was going to revolutionize enterprise software (which it did) and how we were part of the movement that would make the mainframe obsolete within the decade (which we didn’t).
Now, 25 years on, cloud technology is emerging and I’m experiencing a bit of deja vu. I hear people talking about how the cloud is revolutionizing enterprise software, and how the cloud is going to make mainframes obsolete. Such talk isn’t surprising, and while I do believe the first claim, based on past experience I very much doubt the second. Either way, I find the intersection between cloud computing and mainframe computing to be a topic well worth investigating.
I now work for Rocket, a company that builds a lot of enterprise software for mainframes and System z. Being an x86 architecture sort of guy, I thought it wise to start learning what mainframes and System z are about. So this is a post about my experiences on this journey towards learning System z, and how I see it relating to cloud computing.
Having barely started reading about its architecture, I already find it fascinating, to say the least. My best analogy to date is that learning System z is akin to driving a car with a manual transmission after years of only driving automatics. Being European, I can well relate to both.
But back to cloud computing. Last year I was allowed a peek at how Amazon builds their infrastructure. It is truly fascinating–they put a lot of emphasis on measuring their infrastructure; not only their hardware, but also their operations. For instance, when they deploy new software components to their store, they do so in blocks of several thousand virtual machines at a time. As the roll-out progresses, the deployment system tracks a number of metrics, such as sales per second. If, under the new release, these metrics start to move out of their established two-sigma band, the system stops the deployment and initiates an automatic rollback. Their hardware is tracked similarly, with automatic alarms going off when key measures leave their assigned operating band. I don’t see many operations that make such extensive and consequential use of metrics.
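To make the two-sigma idea concrete, here is a minimal sketch of such a canary check. This is not Amazon's actual system; the function names, the sample data, and the "sales per second" metric are illustrative assumptions. The idea is simply: compute the mean and standard deviation of the metric before the roll-out, and halt the deployment if any sample taken during the roll-out leaves the mean ± 2σ band.

```python
import statistics

def two_sigma_band(baseline):
    """Return the (low, high) acceptance band: mean +/- 2 standard deviations
    of the pre-roll-out baseline samples."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - 2 * sigma, mean + 2 * sigma

def canary_check(baseline, observed):
    """Return 'continue' if every sample observed during the roll-out stays
    inside the baseline's two-sigma band, else 'rollback'."""
    low, high = two_sigma_band(baseline)
    for sample in observed:
        if not (low <= sample <= high):
            return "rollback"
    return "continue"

# Hypothetical sales-per-second readings taken before the roll-out.
baseline = [102, 98, 101, 99, 100, 103, 97, 100]

print(canary_check(baseline, [101, 99, 100]))  # healthy release -> continue
print(canary_check(baseline, [101, 80]))       # metric drops    -> rollback
```

In a real pipeline the band would of course be computed per metric and re-evaluated continuously as the roll-out progresses, but the core decision rule is this simple statistical gate.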
Now, learning about System z and its architecture, I am realizing that this is actually nothing new. In a mainframe, if one of its components–be it hardware or software–starts to misbehave, all kinds of automated processes kick in to identify and isolate the issue before taking corrective actions. So in a way, much of what Amazon is doing is effectively re-implementing practices that were recognized as good ideas in the mainframe world long ago.
It would be tempting to say that nothing is new, but that would miss the point. Amazon migrated these concepts into its cloud computing infrastructure and combined them with a DevOps approach to create an entirely new capability, one which it needs to stay competitive. This is true innovation. And just as Amazon has learned from and improved on practices from the mainframe world, the question is now if (or more likely how) the mainframe world will learn from the cloud, migrating its principles and improving on them to keep the mainframe a competitive solution.
I will take this point and stick it on a big Post-It note, where it will remain visible to me, and I will revisit it as I learn more about System z.