October 20, 2018

Rocket Engineering Corner – Mainframe storage management and cloud


A Conversation with Engineers at Rocket Software

 

Topic: Mainframe storage management and cloud

Rocket Software Engineering Participants:

Bryan Smith — Vice President, Research and Development and CTO

Kevin Shaw – Managing Director, Research and Development

Ted Ehrlich – Software Engineer

Don Bolton – Manager, Sales Support

Rocket customers have a strong interest in how storage management and cloud will evolve in mainframe environments. Bryan Smith, VP of R&D and CTO, gathered a small group of mainframe-savvy Rocket Software engineers to discuss these topics, specifically as they pertain to the mainframe space. Here's what they had to say:

Bryan Smith: What's new in the area of storage management in the mainframe world?

Kevin Shaw: As part of its “Cloud and Smarter Infrastructure” initiative, IBM just announced Tivoli Advanced Storage Management Suite for z/OS v2.1. This release contains several products that address VSAM management, backup and recovery, tape allocation, and catalog management, and some of these products were developed by Rocket under our partnership agreement. The primary focus of this release is to improve control, automation, and management; in addition, offering these products as a suite simplifies administration and is more appealing to customers.

Bryan: So it looks like the trend is for customers to combine multiple point products into larger solutions for consolidation and overall simplicity. In essence, they are solving larger issues with fewer products.

This poses the question: are administrators actively demanding more consolidated and “consumable” solutions today than they have in the past? Do they require more guidance on product implementation?

Don Bolton: It may not be so much that they need the guidance, but having newer consolidated solutions does improve the overall environment. We recently had a situation with a customer who was struggling with several older point products. Once they upgraded to the newer solution set, which included a mix of new and existing functionality, their lives were greatly simplified — they had fewer issues, support calls, and overall maintenance. It was definitely the right move.

Bryan: So what needs to be done differently for today's storage administrators compared with the storage administrators of 10 years ago?

Ted Ehrlich: I think it's largely an issue of education: we need to teach administrators how to use tools that allow them to efficiently analyze large amounts of data. Ten years ago, reports on every data set would be generated and administrators would sift through all of them. Today the volume is much greater, and it's impossible to manually look through huge reports to identify issues or perform any kind of data analysis. There are multiple tools available to automate this process, and we need to be sure that administrators are proficient with them.

Bryan: Can we educate through the use of product interfaces? In other words, create filters to reduce these reports to a more meaningful size depending on their purpose?

Ted: Yes, we can provide this today, but we must keep in mind that some administrators may want reports on all the data sets in order to feel secure – so it’s back to educating them on what the tools can provide according to their objectives.
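To make the filtering idea concrete, here is a minimal sketch in Python. The report format, field names, and thresholds are hypothetical and purely illustrative; the point is simply that a small exception filter can reduce a full data set inventory to the handful of entries an administrator actually needs to act on.

```python
import csv

# Hypothetical CSV export of a data set inventory report.
# Columns assumed for illustration: dataset, storage_class, size_mb, days_since_last_ref
REPORT_FILE = "dataset_report.csv"

def exceptions_only(path, min_size_mb=500, stale_days=90):
    """Yield only the rows worth an administrator's attention:
    large data sets that have not been referenced recently."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            size = float(row["size_mb"])
            idle = int(row["days_since_last_ref"])
            if size >= min_size_mb and idle >= stale_days:
                yield row

if __name__ == "__main__":
    for row in exceptions_only(REPORT_FILE):
        print(f"{row['dataset']}: {row['size_mb']} MB, idle {row['days_since_last_ref']} days")
```

The same idea applies whatever the reporting tool actually exports, as long as the fields needed for the filter are present.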

Bryan: Moving back to a topic we discussed in our previous session, what’s the future of spinning disks? Are they going to give way to a new media, such as SSD anytime soon?

Kevin: Regarding SSD, the price is dropping rapidly and performance continues to increase, so it's not really a question of whether SSDs will eventually replace traditional disk, but when.

Don: We've already seen vendors, including IBM, introduce flash technology into their host systems for mission-critical data that requires fast access. This is particularly important in the mainframe space, where most of the mission-critical applications live. Including flash in the host system improves access time compared with flash in an external array, because more I/O overhead is required to go out to an external storage array. Flash would have been the expensive alternative several years ago, but the price has been dropping so quickly that it now makes sense both technically and financially.

Bryan: The flash technology topic brings up an interesting point. As I see IBM introduce flash memory into System z processors, it appears that we may be going back to the old model of expanded storage within the host. The centralized storage model, with data residing on external arrays, somewhat simplified the job of the OS by offloading some data management to the array controller. In that environment, the OS manages only processor and memory. By adding flash inside the host, we now have processor, memory, and the I/O subsystem, all of which have to be managed by the operating system.

So is flash driving a trend back to the more complex environment?

Don: That may very well be the case. I/O is the slowest link in the process. Eliminating external I/O by including flash in the host makes a lot of sense, particularly for applications that require synchronous I/O and random access.

Bryan: That makes sense – applications requiring sequential access wouldn’t see a performance benefit with SSD over spinning disk – but synchronous I/O and random read applications would see significant performance gains.
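A rough back-of-the-envelope sketch makes this concrete. The latency and throughput figures below are illustrative assumptions, not measurements of any particular device.

```python
# Back-of-the-envelope comparison of random vs. sequential reads on spinning
# disk and flash. All latency and throughput figures are illustrative
# assumptions, not measurements of any particular device.

REQUESTS = 100_000   # number of 4 KB reads
BLOCK_KB = 4

def random_read_time(latency_ms):
    """Random I/O cost is dominated by per-request latency."""
    return REQUESTS * latency_ms / 1000.0          # seconds

def sequential_read_time(throughput_mb_s):
    """Sequential I/O cost is dominated by raw throughput."""
    total_mb = REQUESTS * BLOCK_KB / 1024.0
    return total_mb / throughput_mb_s              # seconds

# Assumed figures: ~5 ms per random read on disk, ~0.1 ms on flash;
# ~150 MB/s sequential on disk, ~400 MB/s on flash.
print("random reads,     disk :", random_read_time(5.0), "s")
print("random reads,     flash:", random_read_time(0.1), "s")
print("sequential reads, disk :", sequential_read_time(150), "s")
print("sequential reads, flash:", sequential_read_time(400), "s")
```

Under these assumed numbers, random reads speed up by roughly 50x on flash, while sequential reads improve only modestly, which matches the point above.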

In environments that have a mixture of spinning disk and flash, what do administrators need for proper monitoring?

Ted: Their requirements would revolve primarily around monitoring automated HSM. They need to have the data automatically moved to the most cost-effective storage tier and they would require a monitoring tool to verify that key data sets are in fact on the fastest storage devices – SMS can be used to ensure that data sets are in the appropriate storage class.

Don: Automation is the key here. With the growing volume of data in a dynamic environment, administrators are integrating various media types (flash, disk, and tape), and they need an automated monitoring system to ensure that data resides on the most cost-effective tier.
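As a concrete sketch of that kind of monitoring, the following Python fragment compares actual data set placement against a policy. The data set names, storage class names, and inventory format are hypothetical; a real implementation would read placement information from whatever inventory or report export the site already produces.

```python
# Minimal sketch of tier-placement monitoring. The data set names, storage
# class names, and inventory format are hypothetical assumptions, not any
# product's API.

# Hypothetical policy: the storage class each critical data set should be in.
EXPECTED_CLASS = {
    "PROD.PAYROLL.MASTER": "FLASH",
    "PROD.ORDERS.VSAM":    "FLASH",
    "HIST.ORDERS.Y2012":   "TAPE",
}

def check_placement(inventory):
    """Compare actual storage class against policy and return mismatches.

    `inventory` is an iterable of (dataset_name, storage_class) pairs,
    e.g. parsed from a site's existing inventory or report export."""
    mismatches = []
    for name, actual in inventory:
        expected = EXPECTED_CLASS.get(name)
        if expected and actual != expected:
            mismatches.append((name, expected, actual))
    return mismatches

if __name__ == "__main__":
    sample = [("PROD.PAYROLL.MASTER", "FLASH"),
              ("PROD.ORDERS.VSAM", "DISK"),
              ("HIST.ORDERS.Y2012", "TAPE")]
    for name, expected, actual in check_placement(sample):
        print(f"ALERT: {name} expected in {expected}, found in {actual}")
```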

Bryan: Multi-tiering management is a hot topic. You’ve identified the need to automatically manage where data is placed, and to have the right monitoring tools to ensure that’s the case. With the growing volume of data and the speed with which these environments are changing, it isn’t possible to perform these tasks manually.

An area of interest along these lines is storing one or more storage tiers in the cloud. What’s going on in this space?

Kevin: Although there's ongoing development with tools and solutions for cloud, enterprise customers are still not completely on board with storing data off-site. For most enterprise customers today, any type of “cloud” solution must physically be located on the premises in the form of a private cloud. The two primary reasons for this are 1) a lack of mature enterprise management tools for the cloud, and 2) security. Enterprise customers with extremely sensitive mission-critical data need more assurance that the cloud is secure before they adopt the off-premises cloud approach.

Bryan: But there's a counter to the argument that cloud vendors are not yet secure: because cloud vendors such as Amazon with AWS, IBM with SoftLayer, and Microsoft with Azure are constantly under attack, they have teams of people who continually identify threats and make appropriate changes. It could be argued that because they devote so many resources to security, they're even more secure than on-premises data repositories.

Kevin: I agree that may in fact be the case, but enterprise customers today still have great concern about giving up control of their data. Even if it were another internal group rather than a third-party cloud, those same concerns about control would be a significant factor. This will change over time, but it will take longer in enterprise environments.

Don: One issue with cloud for mainframe users is response time. Mainframe applications typically require faster response times than those of open systems. The attraction of System z is the direct connection to storage that provides those fast response times. The introduction of an added network layer would have to be carefully implemented to ensure that response times for all data tiers continue to meet SLAs. In an HSM environment with multiple tiers, the tier most suitable for a cloud-type storage platform would be the “nth” tier: the tier that is seldom accessed, if at all.

Kevin: It's hard to make the case for primary processing or principal data to be administered in the cloud at this point in time; the proper amount of bandwidth to and from the cloud just isn't there yet. The cloud is appropriate for storing nth-tier data, but for most application processing, data needs the big pipes to and from the mainframe. A mainframe with self-contained processing and storage is really its own cloud.
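The bandwidth point can be made concrete with a simple estimate of how long it takes to recall an archive-tier data set over a WAN link. The data size, link speeds, and efficiency factor below are illustrative assumptions only.

```python
# Rough recall-time estimate for an archive ("nth") tier stored in the cloud.
# Data size, link speeds, and the efficiency factor are illustrative
# assumptions only.

def recall_time_hours(dataset_gb, link_gbps, efficiency=0.7):
    """Estimate hours to pull a data set back over a WAN link.

    `efficiency` is an assumed factor for protocol overhead and link sharing."""
    gigabits = dataset_gb * 8
    effective_gbps = link_gbps * efficiency
    return gigabits / effective_gbps / 3600.0

# A 2 TB archive over an assumed 1 Gb/s link vs. an assumed 10 Gb/s link:
print(round(recall_time_hours(2000, 1.0), 1), "hours at 1 Gb/s")
print(round(recall_time_hours(2000, 10.0), 1), "hours at 10 Gb/s")
```

Even under generous assumptions, recalling a few terabytes takes minutes to hours rather than seconds, which is why the seldom-accessed nth tier is the natural candidate for cloud storage.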

Bryan: Are we saying that mainframe architecture is less efficient than x86 architecture for accessing network or cloud data?

Don: It's true that mainframe architecture is different from x86 architecture: it's faster, and mission-critical applications require those faster response times. x86 systems have the same latency issue as mainframes once they go out to the network. The requirements and expectations for mainframe applications are typically more demanding than for open-systems applications that may be accessing storage over the network.

Kevin: I agree that the issue is one of expectations, but it's also about computing philosophy. Mainframe environments provide centralized computing and storage; the idea is to have everything local, controlled, and fast. Applications written for the cloud, however, are written for a distributed rather than a centralized approach. In that scenario, much of the processing and storage can be done in the cloud. The mainframe represents the “other side of the cloud,” meaning that it acts as its own cloud for centralized processing, but it can then interface with another cloud when appropriate.

Bryan: So it sounds like mainframe plays an important role in or alongside the cloud. It will be interesting to see how this plays out as we move forward.

Thanks to all of you. I appreciated your insights. This conversation was certainly interesting and fun for me and I hope it was for you as well. Stay tuned as we will cover additional topics in this type of forum.

To all you readers out there – we would love to hear your comments on this subject. Please feel free to click on the comment button to submit your comment. Thank you.


Bryan Smith is Vice President of R&D and Chief Technology Officer at Rocket
