(Download the video directly as well.)
At the zEnterprise launch event, I talk with IBM Fellow Gururaj Rao about IBM’s new mainframe “system of systems,” the zEnterprise. We talk about the ways the zEnterprise is used and how IBM has optimized it for various of those workloads. For example, there are optimizers for analytics and crypto. We also cover the general features of the zEnterprise, such as running Power-based and x86 blades.
If you don’t want to watch the video above, you can download the video directly or download the audio only. Also, you can subscribe to the RedMonk Media Fire Hose to get this video along with other RedMonk videos and podcasts downloaded automatically.
Michael Coté: Well, hello everybody! Here we are in New York, at the IBM zEnterprise 196 launch. It’s a nice sunny day out there. We actually saw the z196 box earlier – gigantic compared to, like, an under-the-desk box – which was pretty exciting. As always, this is Michael Coté with RedMonk, and I’ve got a guest with me.
Gururaj Rao: Hello! My name is Guru Rao. I am one of the IBM Fellows, associated with the Systems and Technology Group, working on z technologies and System z into the future.
It has been a very exciting and a very powerful announcement. As Steve Mills characterized, it is one of the most powerful announcements that IBM has ever made.
Harnessing the tremendous, exploding amount of data and putting it to customer value – which is one of the things behind the Smarter Planet – still needs to be done at the same time that the data center is facing challenges of cost, skills, and bridging the gap with the business. Solving both of these requires doing more with less.
Michael Coté: A lot of optimization as it were, right?
Gururaj Rao: Correct. That is at the root of workload optimization. The way IBM has approached this is, we build general purpose servers: System x, System p, System z. And based on how customers use them, based on application availability, customers choose what they want to do with these general purpose servers. They are optimized for a certain set of workloads.
For example, the mainframe System z has traditionally been extraordinarily good for OLTP and batch. So that’s one category, general-purpose servers, fit for purpose servers, whatever you are going to call that.
There is another category that we have identified, where there is a reasonably broad set of workloads that a general purpose server handles. Adding a component, which we call the optimizer, makes the general purpose system much more suitable, much more optimized, for certain types of workloads.
Michael Coté: Right. You are sort of un-generalizing the generalized platform, specializing it.
Gururaj Rao: Specializing it. An example of that would be cryptography.
Michael Coté: Right.
Gururaj Rao: You can add a crypto card. You can use a variety of encryption schemes. Some of them are custom-programmable. You keep on enhancing the algorithms. These are the kinds of things where, by adding something to the system, you build on it.
Now, the mainframe has always been one of the early leaders in cryptography, added as I/O cards; and in the latest generation, the z196, we have enhanced some of the encryption algorithms.
Traditionally, there’s RSA, DES, triple-DES, AES – these are some of the algorithms. But one of the emerging algorithms is Elliptic Curve Cryptography. What it allows you to do is achieve the same strength of encryption with a smaller number of bits. Therefore, people think this is something that’s going to be attractive for the mobile environment. And we have certainly got a leg up by introducing that cryptographic enhancement on the z196.
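To put rough numbers on the key-size point Rao makes here, the sketch below uses the approximate NIST SP 800-57 equivalences between RSA and ECC key sizes at a given security strength; these figures come from that standard, not from the interview:

```python
# Approximate NIST SP 800-57 key-size equivalences, illustrating why
# Elliptic Curve Cryptography (ECC) reaches a given security strength
# with far fewer bits than RSA.
EQUIVALENT_KEY_BITS = {
    # security strength (bits): (RSA modulus bits, ECC key bits)
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 512),
}

def ecc_savings(strength: int) -> float:
    """Return how many times smaller the ECC key is than the RSA key."""
    rsa_bits, ecc_bits = EQUIVALENT_KEY_BITS[strength]
    return rsa_bits / ecc_bits

for strength, (rsa_bits, ecc_bits) in EQUIVALENT_KEY_BITS.items():
    print(f"{strength}-bit strength: RSA {rsa_bits} bits vs ECC {ecc_bits} bits "
          f"({ecc_savings(strength):.0f}x smaller)")
```

Smaller keys mean less bandwidth, less storage, and cheaper computation per operation, which is why Rao points at constrained mobile environments.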
So there is a third category. We have talked about the general purpose servers and the optimizers. The third category is how you can create a hardware/software solution that is purpose-optimized and, where appropriate, pre-installed, so that there is a time-to-market benefit for customers.
The mainframe hasn’t done a whole lot of this in the new space, but on other platforms, like POWER systems, the IBM Smart Analytics System, where a lot of work has been done to optimize for queries and analytics, is an example of that.
Then there is a fourth category. The fourth category is what makes, as Steve Mills said, the zEnterprise announcement the most powerful ever. You can describe it in multiple ways. It is having a system of systems. When you create a very complex system – and of necessity there are those – being able to compose it out of smaller, or I should say less complex, systems, and having those systems work together, we think is a very relevant and central part of how to build a smarter planet.
Now, the other important part – another way to look at this – is to say that a customer’s business function or business process today is composed of a variety of applications, and these applications don’t necessarily rely on one architecture, one platform, or one server; they typically run across all of them.
Michael Coté: They are very heterogeneous.
Gururaj Rao: They are very heterogeneous. They run on a three-tier architecture model, and today there isn’t an effective mechanism whereby you can make that application that spans these multiple tiers work effectively, according to a defined business process, or a business process goal I should say.
Michael Coté: Right, right.
Gururaj Rao: And this innovative capability is at the very heart of the zEnterprise. There are two pieces to this; one is a virtualization and resource management model, which we call Unified Resource Management. That’s what allows you to manage resources across three different platforms: the mainframe; AIX-based applications or workloads running on a POWER7 blade; and, in the future, Linux-based applications or workloads running on an x86 blade.
The ability to virtualize them, the ability to share all those virtualized resources, the ability to allocate them on the basis of a business priority, and to do all of this with goal-oriented performance and energy management; to provide secure access between the components that deal with each other and connect with each other; as well as a very high degree of resilience – because, by definition, these are all part of a business process and a business function. You can avoid the current situation, where the component that runs on the mainframe has very high security, has very high resilience, and is run according to goals.

But the related pieces that access the mainframe data, that run on distributed platforms surrounding the mainframe, don’t. By putting them into this structure, you have the first opportunity to do this in a very cohesive way.
In IT there is a notion that you want to be able to manage the workload, provide the right degree of security, provide the right resiliency, and achieve the right service level agreements. And the mainframe has already mastered that.
But now the question that has remained opened is, how do you extend this governance to a business process that runs beyond the mainframe? Now we have a way to do that.
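The idea of carving a shared, virtualized pool up by business priority can be sketched in a few lines – a toy illustration only, not IBM’s Unified Resource Management algorithm, with all workload names and weights invented:

```python
# Toy sketch of priority-based allocation of a shared resource pool,
# in the spirit of what Rao describes. Illustrative only -- not IBM's
# actual algorithm; names and weights are invented.
def allocate(pool_capacity, workloads):
    """Split capacity across workloads in proportion to priority weight."""
    total_weight = sum(w["priority"] for w in workloads)
    return {w["name"]: pool_capacity * w["priority"] / total_weight
            for w in workloads}

workloads = [
    {"name": "OLTP (z/OS)",        "priority": 5},  # business-critical
    {"name": "Analytics (zLinux)", "priority": 3},
    {"name": "Batch reporting",    "priority": 2},
]

print(allocate(100.0, workloads))
# → {'OLTP (z/OS)': 50.0, 'Analytics (zLinux)': 30.0, 'Batch reporting': 20.0}
```

The point of the real system is that this policy spans heterogeneous platforms (z/OS, POWER blades, x86 blades) rather than a single box.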
Michael Coté: I mean, you mentioned OLTP and batch processing. Are there other workloads that you can see pulling in, because you will have the UNIX boxes and the Linux boxes, or is it still the traditional kind of nightly data run and OLTP stuff that you see being used on the 196?
Gururaj Rao: We need to, sort of, be very clear about what the POWER AIX blades do, what the x86 blades do, and what we have done to make the z196 base hardware, the base system, better. If an application is available to run in a Linux environment on the mainframe or the z/OS environment on the mainframe – zLinux or z/OS – then you get certain advantages. The mainframe is more resilient. The mainframe has I/O capability. The mainframe has isolation and security controls. The mainframe has a coordinated way of preserving the mainframe data along with the zLinux data in a remote data center.
However, not all applications, not all workloads that customers would want to run, that are part of a business function, are going to be available on z hardware or z operating systems. And yet there is a need, like we were talking about, to make all these workloads behave right.
So let’s talk a little bit about the z196, how it is different from z10, and what we have done to enhance things like Java and Linux based nontraditional types of workloads.
In keeping with these discussions on workload optimization, we don’t want to be misled into thinking that it is all about performance.
Let me characterize, for example, the environmentals. The z196 provides about 40% more thread performance, or uniprocessor performance, from technology and design, and that’s for traditional workloads. For Java and Linux kinds of workloads, it achieves more like 60%.
One of the reasons is, we have provided a specialized type of design in the processor, where instructions can be executed out of order, and therefore can better take advantage of the available data and execution capability.
On top of that, the mainframe provides some powerful environmental opportunities or environmental enhancements. Compared to the z10, we pack 60% more capacity on a z196, but at the same power envelope and the same footprint envelope.
What that means is that the MIPS per watt, the power-performance efficiency, goes up by a factor of 1.6, or 60%.
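The arithmetic behind that factor of 1.6 is simple; a quick sketch, using invented baseline numbers since the interview gives only the ratios:

```python
# Perf/watt improvement when capacity grows 60% at constant power.
# Baseline numbers are invented for illustration; only the ratios
# (60% more capacity, same power envelope) come from the interview.
z10_mips = 1000.0        # hypothetical z10 capacity
power_kw = 25.0          # same power envelope for both machines

z196_mips = z10_mips * 1.60   # 60% more capacity, same footprint

z10_eff = z10_mips / power_kw     # MIPS per kW, z10
z196_eff = z196_mips / power_kw   # MIPS per kW, z196

print(f"Efficiency gain: {z196_eff / z10_eff:.2f}x")  # → 1.60x
```

Because power and footprint cancel out, the efficiency gain equals the capacity gain whenever the power envelope is held constant.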
And then there is Linux. Linux is gaining in importance, at least in my mind. Partly because, if I go back to my Smarter Planet context, analytics is one of the roots of the smarter planet. And IBM has a wealth of analytics offerings. We have Cognos, we have InfoSphere Warehouse, we have SPSS, and these are available on z and zLinux. Some of it is available on z/OS, but since many of these things are IBM acquisitions, they tend to be born in the distributed space, and Linux is the best place to host that. zLinux is one of the best places to host it.
And what we are doing in the z196 to better optimize for Linux is exactly on the right path to help this analytics integration.
So Unified Resource Management is at the heart of one of the innovations that we are bringing here. As I mentioned earlier, you have three different platforms – the traditional mainframe, the z196; POWER AIX blades; and x86 Linux blades – and these are all virtualized. These virtualized resources are a shared pool, and they can be allocated on the basis of a business policy.
Michael Coté: Right. It’s sort of the classic pooled resources that you would have as well. And I don’t know if something in the last two years is classic, but it’s having virtualized pools of resources that you pull together as needed, sort of a private cloud kind of idea.
Gururaj Rao: Being able to share resources is always a powerful way of doing more with less.
Michael Coté: Right.
Gururaj Rao: Right. Doing more with less essentially requires using all the resources you have. We have put that to work on the z/OS system and the traditional mainframe system; now we are extending that using Unified Resource Management.
But that’s not all of it. We are providing the ability to instrument and figure out what went wrong in the distributed platforms. We are providing the ability to manage some of the infrastructure transparently.
Customers don’t need to deal with the virtualization layers; we provide the management of the virtualization layer, configuring it and installing it, so that customers just need to worry about the workloads that sit on these virtualized resources.
Michael Coté: I mean, it seems like the reason you can do that is because it’s all one system, sourced from one place, built on components and software that know about each other and fit together, rather than taking more of a buffet, hotchpotch approach to building up your IT and hoping that the management software you have can keep up with the combination of everything.
Gururaj Rao: Correct.
Michael Coté: You are sort of standardizing at some level.
Gururaj Rao: We are providing optimization for workloads across different platforms. So with that as the sort of a framework, let’s talk a little bit about the Smart Analytics Optimizer.
Now, the Smart Analytics Optimizer is yet another significant innovation that is being brought to the mainframe for the first time. It is probably going to get extended to other platforms, but it was born on the mainframe. And by the way, it is only the first of the kinds of optimizers that we are going to end up doing.
So what is it? It is the ability to transparently provide warehousing query performance improvement to certain kinds of qualifying DB2 warehouses. A customer can define a data mart against which they frequently query, and that data mart and its data can be hosted in a set of blades in the BladeCenter Extension, or zBX. The processing is parallelized. The data is put into a format suitable for query processing and is kept in memory. As a result, you get a significant speedup, not just because of the parallelization, but also because of some of the innovative techniques that we provide.
But there are a number of other very interesting features. Compared to the traditional way of executing on the mainframe, we have seen this give one to two orders of magnitude improvement in performance – the elapsed time of a query. Of course, it depends on which queries are eligible, what the query is doing, and how the data is laid out. So it varies by a number of factors.
Now, the other interesting thing about this is that the way it achieves the speedup does not rely on what the industry has typically done, which is a lot of indexing and index tuning. Typically, in query systems, if the indices do not match the data or the query pattern, then you don’t get a very good speedup of the query. By using some of these innovative techniques, we avoid the reliance on indices, index creation, and index management.
Therefore, we expect that for the eligible queries we get a much more deterministic, predictable response. We get this without the applications being changed. The database administrator for this analytic system, or the warehouse system, is the same as the typical DB2 database administrator. The administrator just has to define the data mart and then get the data orchestrated into the blades. Everything else is completely and transparently handled by the optimizer.
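The general idea of index-free, in-memory, parallel query scanning can be sketched in a few lines – a toy illustration of the technique only, not the Smart Analytics Optimizer’s actual design:

```python
# Toy sketch of index-free, in-memory, parallel query scanning in the
# spirit of what Rao describes. Illustrative only -- not the Smart
# Analytics Optimizer's actual implementation; the data is invented.
from concurrent.futures import ThreadPoolExecutor

# A tiny column-oriented "data mart": one list per column, row-aligned.
region = ["east", "west", "east", "south", "west", "east"]
sales  = [100,     250,    75,     300,     50,     125]

def partial_sum(chunk_range, filter_value):
    """Scan one chunk of rows; no index, just a full scan of the chunk."""
    start, stop = chunk_range
    return sum(sales[i] for i in range(start, stop)
               if region[i] == filter_value)

def query_sum(filter_value, workers=2):
    """SELECT SUM(sales) WHERE region = ? via parallel chunk scans."""
    n = len(region)
    mid = n // 2
    chunks = [(0, mid), (mid, n)]  # one chunk per "blade"
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda c: partial_sum(c, filter_value), chunks))

print(query_sum("east"))  # → 300  (100 + 75 + 125)
```

Because every chunk is always scanned in full, the response time depends on data volume rather than on whether an index happens to match the query pattern, which is the predictability Rao is pointing at.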
Michael Coté: Right, right.
Gururaj Rao: So it’s a way of reducing the skills burden —
Michael Coté: They don’t have to build the system from the ground-up essentially.
Gururaj Rao: Right. You can take a familiar system, you can graft this appliance onto the back end of it, and there you go. So this is also part of what makes the mainframe more suitable.
Obviously, this is not the only one. Using a combination of what I mentioned earlier – zLinux-based analytics components like SPSS and Cognos, warehousing coupled with warehousing accelerators like the Smart Analytics Optimizer, the enhanced capabilities for Java and Linux-based workloads, and the extended capacity for OLTP and batch workloads – you now have the ability to combine them into a much more integrated, consolidated, smarter planet kind of system.
With these IBM Blades – POWER, AIX, and x86 Linux – now you have the ability to construct more complex systems, solutions, and manage those solutions on the basis of what a customer’s business process and what workload optimization needs are.
Michael Coté: You have sort of found some common services and needs, if you will, along with all the traditional things you would want with z, and put them together into one – I hesitate to call it a box – one little set of towers, if you will. And the idea is that, like you were going over with the data analytics accelerator, or optimizer, whatever – a plug-in appliance, if you will.
It sounds like as time goes on, because you have this, to use the word kind of ironically, generalized platform in the 196, you can start adding in these appliances that optimize various workloads that you have, and because it’s kind of unified, you have that control that allows you to optimize that.
Gururaj Rao: In my terminology, it’s a way of having your cake and eating it too. Traditional customers who want to continue to run OLTP and batch can continue to do that, and they get more value with the z196.
Michael Coté: But then you can start getting those new types of workloads that are integrated and distributed.
Gururaj Rao: You can get new workloads on the z hardware.
Michael Coté: Right.
Gururaj Rao: You can add optimizers into zBX. You can add IBM blades that run applications and you can manage them, construct more solutions. So it provides a very flexible, extendible way of going forward with workload optimization.
Michael Coté: Right, right. Well, great! Well, I appreciate you spending all this time to go over the idea of one of the purposes of the 196, which is consolidating things and basically doing workload optimization. I appreciate it.
Gururaj Rao: My pleasure, Michael. But just for the purpose of terminology, let’s make sure: what we call the zEnterprise is really composed of the traditional System z196.
Michael Coté: Sure.
Gururaj Rao: Along with the zBX, which is the BladeCenter Extension, and the URM – all of that is the zEnterprise. So the z196 is only one component of what we are announcing.
Michael Coté: Alright, which is the zEnterprise, if you will.
Gururaj Rao: zEnterprise is what we should be saying, zEnterprise.
Michael Coté: Yeah, yeah. That makes sense.
Gururaj Rao: Absolutely!
Michael Coté: Well, great! Well, thanks.
Gururaj Rao: Alright. Thank you.
Disclosure: IBM is a client and sponsored this video.