Java Grown Up with the System z10
If you read the newspaper articles and watch IBM executives speak on television about the new System z10, you will hear them use some interesting phrasing concerning performance: "...up to 100% faster for CPU-intensive applications."
That's a bold claim, but I think what's most interesting about the System z10 EC is that, for perhaps the first time, this machine is a mainframe-supercomputer. Its quad-core processor technology is state-of-the-art in every respect — not just in I/O performance and execution integrity, but in sheer number-crunching punch. The hardware decimal floating point support is one example, but another example is how Java and XML workloads behave on this machine compared to its predecessors, which were already quite good.
I guess it's no secret that Java simply requires more processing capacity than, say, compiled C/C++ or COBOL, ceteris paribus. That's true on any platform. Processor capacity is never free — except during off-peak hours :-) — but in the large-scale enterprise systems world (i.e. mainframes) we care about how code scales, because we worry about supporting the largest number of users as economically as possible. So we keep COBOL, PL/I, C/C++, and other code — and create billions of new lines each year — in part because the code execution is efficient and has evolved with the organization's performance needs in mind.
Before I was born the great debate was between COBOL and Assembler. High level languages like COBOL were slow and inefficient but more portable, and Assembler was peppier but harder to learn and maintain. The high level languages mostly "won." However, when IBM created the System/360 architecture, Assembler got a boost because it was more portable, able to run on any bigger System/360 or any successor. A lot of that Assembler is still running, and a lot of people are still writing new Assembler.
Anyway, there's a similar debate concerning Java, although that debate is probably winding down now. IBM and other vendors keep incrementally improving Java technology, with progressively faster just-in-time compilers and better platform optimizations. In 2004, when IBM introduced the zAAP technology, Java economics radically improved, and Java invaded huge numbers of mainframe sites. And now, in early 2008, we have the System z10. The new quad-core design once again radically improves Java's economics, with Java typically much closer to that "up to 100% improvement" figure than other workloads. So if you roughly double the core Java performance, increase the number of cores to as many as 64 per machine, triple the memory (for those memory-hungry Java applications, to trade memory for CPU), steer work better using HiperDispatch, allow up to 32 of those 64 cores to be license-free zAAPs, and run the whole thing on zNALC-priced z/OS LPARs that enjoy another "technology dividend," what does that do to the economics?
It means the System z10 is the ultimate Java platform, that's what. And the machine still runs everything else you ever wrote (or will write) in the other languages, faster and better.
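To get a rough feel for what those numbers do together, here's a back-of-envelope sketch in Java. Every input is an illustrative assumption pulled from the figures above (roughly 2x per-core Java performance, 54 vs. 64 cores, half the cores as license-free zAAPs), not an official IBM capacity or pricing rating:

```java
// Hypothetical back-of-envelope model. The inputs are illustrative
// assumptions from the discussion above, not IBM capacity figures.
public class Z10JavaEconomics {

    // Relative machine capacity for a Java workload, given a per-core
    // speedup and the old and new core counts.
    static double capacityRatio(double perCoreSpeedup,
                                int oldCores, int newCores) {
        return perCoreSpeedup * ((double) newCores / oldCores);
    }

    public static void main(String[] args) {
        // "Up to 100% faster" per core => 2.0x, and 54 -> 64 cores.
        double ratio = capacityRatio(2.0, 54, 64);
        System.out.printf("Relative Java capacity: about %.1fx%n", ratio);

        // If 32 of the 64 cores run as license-free zAAPs, only half
        // the capacity carries software charges (sketch only).
        double licensedShare = 32.0 / 64.0;
        double costPerUnit = licensedShare / ratio;
        System.out.printf("Relative cost per unit of Java work: %.2f%n",
                          costPerUnit);
    }
}
```

Even under these crude assumptions, the cost per unit of Java work drops to a fraction of its former value — which is the whole point of the question.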
Now, contrary to popular belief, path length still matters. Java is not the universal programming language; no language is. (Well, maybe Rational EGL, which lets you program in one easy language and deploy to either a COBOL or Java runtime, without learning either language. Darn useful, that.) But the new System z10 challenges traditional thinking: what if mainframe processing power were comparatively cheap and abundant, especially for Java? What if mainframes really were supercomputers?
System z10 is a disruptive technology.
It's fascinating working with Java developers who experience running their code on the mainframe for the first time. Their first reaction is typically, "It can run?" Then, "Wow, it runs!" Yes, write-once/run-anywhere really works, at least for J2EE applications.
But the next reaction is often, "Are you sure what the mainframe is saying about my code is true?" That's because there are no coding secrets when you run on the mainframe. WebSphere Application Server for z/OS can produce a flood of juicy SMF records to help expose poor quality (e.g., poorly performing) code quickly and precisely. Tools like Jinsight (free and exclusively for System z) and Tivoli Composite Application Manager peer even deeper inside. The mainframe is the best Java diagnostic environment on the market, at least "out of the box." All this introspection can be a blow to the sometimes fragile egos of application developers who suddenly learn that, no, code to look up an account balance probably shouldn't require two full CPU-seconds per user. Make sure you understand that psychological dynamic.
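You don't need SMF to get a first taste of that accountability. Here's a minimal, hypothetical sketch using the standard java.lang.management API to measure how much CPU time a piece of code burns on the current thread — a far less precise, application-level flavor of the same accounting idea that surprises developers:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Sketch only: approximate the CPU cost of a task on the current
// thread. SMF accounting on z/OS is far more precise and complete.
public class CpuCost {
    private static final ThreadMXBean BEAN =
        ManagementFactory.getThreadMXBean();

    static double cpuSeconds(Runnable task) {
        if (!BEAN.isCurrentThreadCpuTimeSupported()) {
            throw new UnsupportedOperationException(
                "thread CPU timing unavailable on this JVM");
        }
        long start = BEAN.getCurrentThreadCpuTime(); // nanoseconds
        task.run();
        return (BEAN.getCurrentThreadCpuTime() - start) / 1e9;
    }

    public static void main(String[] args) {
        double secs = cpuSeconds(new Runnable() {
            public void run() {
                long sum = 0;                          // burn some CPU
                for (int i = 0; i < 5000000; i++) sum += i;
            }
        });
        System.out.printf("CPU seconds consumed: %.4f%n", secs);
    }
}
```

Wrap a request handler in something like this and you'll quickly discover whether your balance lookup really needs two CPU-seconds per user.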
One solution is to award bonuses to developers who meet or exceed performance targets. There are some complexities in how to run such a program, but the concept makes sense and can help establish a new performance-oriented esprit de corps within the development community.
by Timothy Sipples | February 28, 2008 | in Application Development
New mainframe z10 introduced
Now that was different. I woke up this morning and opened up my laptop, and there, on the front of my screen, was Yahoo Messenger. The top story: “IBM rolls out new Mainframe”. That was different.

And so is this new server. The z10 mainframe continues the trend of reducing the electricity and cooling required per MIPS. And it’s got a tremendous amount of new capacity: a customer can now get 64 engines vs. the 54 engines available on the z9 server. With its faster engines, the overall capacity of the z10 server is 50% larger than its predecessor’s.
A New York Times article includes comments from Hannaford Bros. and Nationwide Insurance. Nationwide has consolidated 1,300 applications across 480 virtual servers running Linux on System z. They believe they’ll save over $15 million over three years and are running ahead of schedule.
That brings to mind the technology dividend that comes with the System z architecture. If you’d purchased any of the specialty engines (IFL, zAAP, or zIIP) on previous System z servers, you’ll get the same number of z10 engines, plus the extra capacity of these first-in-the-industry quad-core processors, at no additional charge. And this upgrade will be far easier to handle than if you were operating “scale out” x86 servers. Typically, a mainframe gets upgraded in a matter of hours. A “technology refresh” of hundreds of x86 servers would take weeks or months, a tremendous amount of additional power and floor space, and a tremendous amount of labor compared to the z10. Oh, and did I mention that you’d be paying for each of those server images? Maybe that’s a new metric to consider: instead of TCO, look at the Total Cost of Upgrade.
So let’s go back to the customer scenarios and “discover” a new mantra for the mainframe. Customers are now taking applications and databases that are operating on separate servers and re-hosting them on the mainframe. Once there, the applications and databases take advantage of the reduced floor space, improved mean time between failures, reduced security intrusion points, capacity on demand, and the resiliency of the mainframe. In essence, the same code in a different container provides a superior operations environment. That’s where the real savings come from, and that’s the new mantra when considering the IBM System z10 mainframe server: add some new work to an existing mainframe and save.
by JimPorell | February 26, 2008 | in Innovation
IBM Announces the System z10 for the Next Generation Data Center
As Tuesday, February 26, 2008, begins in each timezone, IBM announces the new System z10 for all the world's next generation data centers. Here, for example, is the Japanese-language press release:
We mainframe bloggers will try to keep everyone informed as we hear more from IBM, so check back for updates. I suspect we'll also have some interesting thoughts and perspectives to offer.
I see that Wikipedia already has a short article. That's fast!
UPDATE #1: Released at 12:01 a.m. New York time, here's the English version of the press release along with a short video and some pictures:
UPDATE #3: IBM has posted a large number of official announcement letters. Here are the links to the PDFs. Let's read together, shall we?
- 108-154: IBM System z10 Enterprise Class -- The forward thinking mainframe for the 21st century
- 108-155: IBM System Storage DS8000 series (Machine types 2421, 2422, 2423, and 2424) delivers Extended Distance FICON for IBM System z environments
- 108-156: IBM System Storage DS8000 series (Machine type 2107) delivers Extended Distance FICON for IBM System z environments
- 208-038: IBM Systems Director Active Energy Manager for Linux on System z, V3.1 is designed to enable optimization of energy consumption for heterogeneous IBM systems
- 208-039: IBM ISPF Productivity Tool V5.10 enhancements deliver increased efficiency for ISPF users
- 208-041: IBM DB2 for z/OS Value Unit Edition offers one-time-charge price metric for net new workloads on IBM System z
- 208-042: Preview: z/OS V1.10 -- Raising the bar and redefining scalability, performance, availability, and economics
- 308-001: IBM GDPS V3.5: Enterprise-wide infrastructure availability
UPDATE #4: Yowza! The System z10 EC announcement is 391 pages!
UPDATE #5: OK, not so much reading. Most of those 391 pages are model conversion tables. So I could skim the announcement quickly for the golden nuggets.
One of the most surprising facts is that the System z10 is available immediately. (Maybe you can send a truck to Poughkeepsie to pick one up today?) The new HiperDispatch feature looks very interesting. There's a bit of a caution in the announcement suggesting that workloads could vary more than usual in how they perform when moving up to the System z10. That makes sense, because there's an awful lot of new technology packed into this model that's way beyond the typical model upgrade. Going from a single core 1.7 GHz processor to a quad-core 4.4 GHz processor design is a big leap.
4.4 GHz per core! (I've got to start getting used to saying "cores" now.) Some highlights from my skim:
- Up to 1.5 TB of memory per machine.
- Many more capacity model choices to make the costs smoother.
- Uniprocessor performance increased up to 70% for z/OS mixed workloads -- quite a jump.
- I also like how IBM is fencing off the HSA. It's 16 GB, but you never see it, and you don't have to pay for it.
- Capacity for Planned Events (CPE) looks like a great idea. You can get up to 3 days of capacity for activities like relocating data centers.
- There's a nice statement of direction concerning z/VM and LPARs. You'll be able to manage all processor types and all operating systems within a single z/VM LPAR. (At present you have to fence resources.)
- New and improved OSA networking.
- InfiniBand coupling for Parallel Sysplex, raising the local distance limit to 150 meters. (Sort of a mini-GDPS distance! Definitely nice for campuses.)
- New time-of-day capabilities in the base configuration, which look great.
- The processors now support 1 MB page sizes, with both Assembler and C/C++ support.
This is a major leap. Still reading.
by Timothy Sipples | February 25, 2008 | in Systems Technology
Perfecting the Storm
... or ... "Virtually Ready for Disaster"
My friend Adam Thornton has written a terrific article on using virtualization for disaster recovery: for D/R drills, for actual disasters, and even for making your production environment more D/R-ready.
How many times have you had to pick up the pieces and re-boot your business because some tornado ripped through your lovely redundantly powered, raised-floor, multi-million-dollar, state-of-the-art datacenter? That many? I didn't think so. (To be fair, me neither.) Like a Boy Scout, you should be prepared. In the course of time, bad things happen to good datacenters. And if you have had to recover from a lost facility, how many times did the recovery go smoothly? That many? I was afraid not.
In my new day job, I've seen D/R done right. It's not easy, but it is sweetly satisfying when it flies. A good D/R strategy costs: time, contracts, redundancy, planning, working it. The smoothest way to recover from a disaster is to have a complete reproduction of your IT infrastructure at the recovery site. (I say "the" recovery site, not just "a" recovery site. Plan it!)
But how many organizations can afford a complete duplicate datacenter? Few can justify that depth of effort. For those who cannot afford total replacement, virtualizing some or all of their systems is the new hope on the horizon.
With z/VM, you get native platform virtualization. On other architectures, that may be harder to come by. Vanderpool and Pacifica are available for PC class systems if you have the latest chipset, getting you very, very close. In any case, true virtual machines let you drop your operation into the recovery site and enjoy full recovery. Other "lighter" forms of virtualization can be similarly effective but require more careful planning and may benefit from going virtual at all ends of your game, production as well as recovery.
The more that can be built ahead of time to run in V12N, the more successful your D/R experience will be. When you are intentional about running your network with abstractions, there are fewer switches to flip when the underlying reality changes.
Disaster is like a perfect storm. Crises combine, crashing crucial computing 'quipment. An untapped aspect of virtualization is the ability to plan and test secondary and tertiary backup facilities. Plan the work; work the plan. (Try your drill before the storm hits.) V12N is the smoke and mirrors to give you a less cloudy day.
by sirsanta | February 20, 2008
The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.