Mainframe Computing: What Will the Future Hold?

I don't have any particular insight into IBM's research and development efforts for future mainframe technologies, but I can make some educated guesses. In guessing about mainframes I'm really making predictions about the future of all computing: there's a long history of computing capabilities appearing on mainframes first, often a decade or more ahead of the rest of the industry.

Here are three of my predictions, in no particular order:

  1. Five-point-five may be the limit. (Or not?) I'm referring to processor clock speed, measured in GHz. For Intel the speed limit is 4.0 GHz, and even that is reached only with a single active core. Intel hasn't been able to increase its clock speed in nearly 8 years, and that's a long time in microprocessors. IBM's POWER processors did much better, topping out at 5.0 GHz with POWER6 with all cores active, before backing off the clock a bit. Now IBM's zEC12 sits alone at the top of the clock speed ranking: 5.5 GHz, continuous, with all cores active. That's simply amazing. But will IBM be able to increase the zEnterprise's clock speed again? If it can be done, I assume it will be, but the physics are tough. That said, I expect IBM zEnterprise will maintain industry clock speed leadership indefinitely because, if it makes sense to solve those tough problems anywhere, it makes sense on zEnterprise.
  2. Hardware collaborative iterative compilation. IBM pioneered microprogramming all the way back in the System/360, announced in 1964. For the most part, though, hardware is static once shipped. Yes, occasionally IBM (and other vendors) may release microcode updates, generally to fix a misbehaving instruction in a processor (often unavoidably slowing down that instruction for the sake of correctness), but that's about it. When you want new instructions, you buy new processors. At least with mainframes you can upgrade your processors in place. I think the processor improvement cycle is going to accelerate as compilers start to tell the hardware how it can improve, and the hardware will respond. In other words, the compilers and the hardware will jointly figure out how to shave execution time and path length off computing tasks as they operate. The dialog will be something like this: "I see, Ms. Compiler, that you're being particularly demanding of my Level 3 cache today, and it looks like you don't need to be if I understand what you're trying to do. Would you mind terribly combining these duplicate memory blocks so you can fit in Level 3 for me?" "That's a good idea you've got, Mr. Hardware. I'll take care of that for you the next time Dr. z/OS tells me there's a lull in high service class processing. By the way, Mr. Hardware, I'm doing a lot of financial calculations that could really use a custom instruction, or at least a few better instructions. Have you got one you can put into your microcode or into your FPGA for me? Or ask your mother if she can provide you with a new instruction and delete those 5 other instructions I never use? Thanks a bunch." "Teachable hardware" will have some interesting side effects. For example, if you think capacity planning is difficult now, just wait.
  3. Quantum computing. IBM is spending a lot of effort in this area, and I think we'll see a quantum computing element available for zEnterprise as an option in the not too distant future. That innovation will also have some interesting side effects, like perhaps upending cryptography.
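The compiler-hardware dialog in point 2 is, at bottom, an iterative search: propose a variant, let the hardware measure it, keep the winner. Here's a toy sketch of that loop. Everything in it is invented for illustration (the function names, the tuning knobs, the cost model); a real system would run actual code and read hardware performance counters instead of calling a simulated cost function:

```python
# Hypothetical sketch of iterative compilation: the "compiler" proposes
# optimization variants, the "hardware" reports a measured cost for each,
# and the loop keeps the cheapest variant. All names and the cost model
# are invented for illustration only.

import itertools

# Candidate settings the compiler could vary per compilation pass.
UNROLL_FACTORS = (1, 2, 4, 8)
PREFETCH = (False, True)

def simulated_hardware_cost(unroll, prefetch):
    """Stand-in for a real measurement (cycle counts, cache misses).
    A real system would execute the variant and read performance
    counters; this invented model just makes unrolling help up to a
    point and prefetching shave off a fixed fraction of the cost."""
    cost = 100.0 / unroll + 3.0 * unroll
    if prefetch:
        cost *= 0.9
    return cost

def iterative_compile():
    """Try every variant in the search space; keep the cheapest one."""
    best = None
    for unroll, prefetch in itertools.product(UNROLL_FACTORS, PREFETCH):
        cost = simulated_hardware_cost(unroll, prefetch)
        if best is None or cost < best[0]:
            best = (cost, unroll, prefetch)
    return best

cost, unroll, prefetch = iterative_compile()
print(f"best variant: unroll={unroll}, prefetch={prefetch}, cost={cost:.1f}")
```

The vision in point 2 goes further than this offline loop, of course: the search would run continuously in production, and the "knobs" would include the hardware's own microcode and FPGA logic, not just compiler settings.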

by Timothy Sipples February 10, 2013 in Future, Innovation



The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.