Installing Rocket Mainframe Data Service for Bluemix on Your Mainframe


This is part 2 of a five-part series on accessing mainframe data from your Bluemix applications.

  • Part 1: Bluemix Meets Mainframe Data
  • Part 2: Installing Rocket Mainframe Data Service on Your Mainframe (this entry)
  • Part 3: Mapping Your Mainframe Data for Access From Bluemix
  • Part 4: Building an Application in Bluemix That Accesses Mainframe Data
  • Part 5: Bringing Agility to Mainframe Data Access Using Bluemix

If you have not yet read part 1 of this series, please review it now.

So, you have already registered for the Rocket Mainframe Data Service and received the welcome email. In this part of the series, we will cover how to get the service up and running.

But first, let’s take a quick detour and discuss the experience.

As you may have already noticed, the email provides you with two links: one is a link to the “Getting Started” guide, and the other is a link to a “script” that can be run on a *nix box. If you come from the mainframe world, this may seem odd, as you may be used to a different kind of experience, one that involves a lot more than two downloads (multiple phone calls and possibly hours of planning). But at Rocket, we are leading the way in enabling the hybrid cloud on the mainframe (z Systems). And with that comes a new and improved experience for our mainframe customers.

On the other hand, if you come from the fast-moving and agile world of the cloud, you may be wondering why you even need to download anything. Well, we are attempting to bridge the fast-moving world of the cloud with the trusted world of the mainframe, and a new experience is warranted. We are confident you won’t find it problematic, and you may even like how it brings the mainframe world together with the cloud.

With that out of the way, let’s get started. The link to the “Getting Started” guide provides you with the list of steps necessary to get the Rocket Mainframe Data Service up and running on the mainframe. This may seem like a lot of steps, but we have made it simple. Just follow along here and you will be all set.

Overview of the install script

As we took on the task of enabling Bluemix access to the mainframe, one of our primary goals was to make getting the service up and running on the mainframe as simple as possible. To simplify the install, we provide a script that automates installing and starting the Rocket Mainframe Data Service on your mainframe (most of us run scripts on a daily basis and should feel right at home doing this). With that said, we do hope you will give us lots of feedback on your experience with the script, as we want to improve it before the end of the Beta period. Before you run the script, please check with your network administrator (as well as your mainframe system programmer) to make sure it is okay to run.

Now we will focus on how to properly run this script (dsbinstall.sh).

You can run this script on a Linux machine or one of its variants. This was a conscious decision on our part to reduce the friction of mainframe access for the Bluemix developer. However, we also understand that a system programmer on the mainframe may like to have full control. As such, the script can be run in two modes:

  • Automatic mode fully installs the product and gets the service ready to be started.
  • Manual mode creates all the necessary jobs the system programmer will need to run and places them in a dataset on the mainframe, but does not run them.

To run the script in manual mode, issue the following command: ./dsbinstall.sh --runMode manual.

The script runs in automatic mode by default (when a --runMode option is not specified). We won’t cover the manual mode in this article.
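To make the two modes concrete, here is how you would invoke the script from your *nix shell. The --runMode flag is the one described above; everything else about your setup will naturally vary:

    # Automatic mode (the default): installs the product and prepares the service
    ./dsbinstall.sh

    # Manual mode: generates the installation jobs into a dataset on the
    # mainframe for your system programmer to review and run
    ./dsbinstall.sh --runMode manual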

Running the install script

When you start the script on your machine, it will first download all the required assets for your mainframe from the Rocket download servers. A prompt will ask you to confirm the download, and the script displays progress while the service is being downloaded.

After the download is complete, you will be prompted for the hostname of the mainframe where the service is to be installed, a username and password to use during the installation, and a high-level qualifier for the datasets (that’s mainframe speak for something like a directory prefix; well, kind of — you can read up on datasets if you are so inclined). You should ask your mainframe system programmer for this information.
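If datasets and high-level qualifiers are new to you, the short version is that z/OS dataset names are dot-separated qualifiers, each up to eight characters and 44 characters in total, and the installer prefixes everything it creates with the qualifier you supply. Purely as a hypothetical illustration (these are not the actual names the script creates), a qualifier of DSB.BETA would produce datasets along these lines:

    DSB.BETA.INSTALL.CNTL    (generated installation jobs)
    DSB.BETA.LOADLIB         (executable load libraries)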

After providing this input, the script will print a nice summary of what it is about to do, which you can review before anything runs on the mainframe.

Before proceeding further, you will be asked to specify a job control language (JCL) job header. You can get this from your mainframe system programmer, and if you are interested in learning what JCL is, please feel free to read up on it. Note that a default JCL job header is provided for you, but your system programmer may want to replace or alter it. After you save the JCL job header, the script will start working its magic.
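For reference, a JCL job header (a “job card”) generally looks like the sketch below. The job name, accounting field, and classes here are placeholders; your system programmer will supply values appropriate for your site:

    //DSBINST1 JOB (ACCT#),'DSB INSTALL',CLASS=A,MSGCLASS=X,
    //         NOTIFY=&SYSUID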

First, the script will download the installation package from the Rocket download servers and upload it to an appropriate dataset on the mainframe. After the upload is complete (and before running additional jobs on the mainframe to install the service), the script will pause and prompt you, so you can still cancel at this point.

When running in automatic mode, the script will keep you informed of the status of all the configuration tasks and jobs being run. After the script is done, your system programmer can simply start the service on the mainframe. If one of the jobs fails, the easiest thing to do is reach out to your system programmer and work out how to rerun the script. We tested in a lot of different environments, but mainframes have been around for five decades, and each mainframe (z Systems) installation can be unique. Please also drop us a note if this happens, as we want to make sure you can run this script. While a submitted job is running, the script reports its status every few seconds so you can monitor it through to completion.

After each job finishes, the script also gives you the option to view the job output.

We are going to skip that, though, and type “n” to move on to the next task in the script.

For the rest of the jobs that the script runs, the process is the same. It will run the INSTALL job, create the necessary configuration files, run a job to APF-authorize the libraries where the install is being done (if necessary), set up the started task for the service, and allocate the datasets required for the operation of the service. If you need additional information (or want to know what is being done in each step), please feel free to review the “Getting Started” guide and download the detailed documentation on the service from our site.
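A quick aside on the APF step: APF authorization is what allows the service’s load libraries to perform authorized system functions. If your system programmer prefers to grant it by hand, the standard z/OS SETPROG console command does the job; the dataset name below is a placeholder for whichever load library the installer created under your high-level qualifier:

    SETPROG APF,ADD,DSNAME=DSB.BETA.LOADLIB,SMS

(Use VOLUME=volser in place of SMS if the library is not SMS-managed.)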

Finally, after everything is done, the script will ask you for the PROCLIB (a predefined system library where the service’s started task can be saved). You will have to get this information from your system programmer and enter it. The script will then echo back the started task name and the library, which you will need to pass along to your system programmer for the very last step.

Your system programmer can simply issue the command to start the service on the mainframe using the started task name that the script shows you.
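The start command itself is the ordinary z/OS START operator command. Assuming, hypothetically, that the script reported a started task named DSBS, your system programmer would enter the following on the console (or prefix it with / from SDSF):

    S DSBS

A quick D A,DSBS (display active) afterwards confirms the task is up.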

You will also be prompted to download the full documentation on the service (if you prefer). This documentation is primarily mainframe-centric and will be useful if your system programmer needs additional information. It may also be useful for mapping the data (which we will do in Part 3).

Summary of install script tasks

The script finishes by printing a summary of all the tasks that were completed, along with additional important details your system programmer will want to know about (related to security setup).

Pretty simple, huh?

In part 3 of this series, we will discuss mapping the data, the last step you have to take before you can access your data from Bluemix. Stay tuned!

Azeem Ahmed
Azeem Ahmed joined Rocket Software in 2003 as a Software Engineer after graduating from UT Austin. Over the course of the past fifteen years, he has held many engineering and management roles. In his current role as Chief Technologist for Cloud, Azeem helps lead Rocket R&D around Hybrid & Public Cloud, is responsible for Rocket’s Cloud strategy, and leads the Emerging Tech Research Group focused on Big Data Analytics & Cloud. In addition to this role, Azeem was instrumental in conceiving the Rocket.Build program, an internal hackathon for Rocket engineers. He continues to direct that program every year.
