Canonical (Ubuntu) Needs a Mainframe: An Elaboration

I forgot about this 2011 blog post. That wasn't such a great prediction, was it?

I want to quote one of the comments Mark Shuttleworth wrote which I think illustrates his profound misunderstanding, a misunderstanding that might have contributed to Canonical's recent security failure:

Cloud architectures are fault-tolerant by software architecture, only an idiot would pay for the same fault tolerance twice. Therefore, no matter how hard IBM tries to sell the idea of mainframes being the hardware for cloud, I don’t see it panning out that way. The whole point of the work that’s brought us cloud is to be able to do very large, very reliable services on low-cost, unreliable hardware.

OK, forgetting for a moment that reliability is only one of the many qualities of service — security is another one, as Canonical has belatedly and tragically discovered — no, I disagree, and so do most IT professionals. The reason is very simple: everything in IT fails, particularly software and administration (people). Over the past few days I've repeatedly explained that IT doesn't work well unless you get both the hardware and the software right, and unless both are co-engineered to work together cooperatively, with really, really excellent, common, consistent autonomics that reduce the people risks as much as possible. That's especially true in availability engineering.

One of the many beautiful aspects of the zEnterprise family of solutions — IBM's decades-long genius, really — is that IBM always expects software to fail, whether its own software or its customers' software. Only last week I heard a long and painful story from a client. That client explained in great detail how often their pure software cluster failed, leaving thousands of users with nothing to do. Programmers are not perfect. Nor are hardware designers necessarily, but "defense in depth" is extremely valuable when engineering for high availability.

And that's what IBM has done and keeps doing. It's not that IBM hasn't tried other approaches. Decades ago IBM implemented software-based clustering in IMS, for example. It's merely "OK" by mainframe standards, meaning it's superb software clustering but it isn't what mainframe customers expect. IBM still supports that form of clustering, but a couple of decades ago IBM introduced the first version of Parallel Sysplex, which relies on a combination of common, hardware-based features and software products that exploit those hardware features. IMS is only one of many examples. Parallel Sysplex has evolved over the past two decades and continues to improve. (This month's announcement of RoCE memory-to-memory high performance networking is a good example. Ostensibly that's a hardware feature, and it is, but it's actually a clever, packaged, integrated combination that provides a common service to all applications, transparently, always with multiple layers of availability and fault tolerance.)

Frankly, it takes an amazing amount of hubris to suggest that programmers always get it right, every time, and never ever muck up what they previously got right. Or that it's not possible to learn from other engineers who took a different approach that actually works in the real world. Again, look at Apple. Why on earth did Apple buy hardware companies? Why does Apple have engineers who can design chips? Why is Apple reportedly investigating the purchase of its own chip foundry? Both hardware and software matter to achieve a particular business outcome. Granted, Apple isn't maniacally focused on maximum-qualities-of-service enterprise IT engineering like IBM is with its zEnterprise solutions. Apple is engineering for different outcomes than IBM is, and the two literally never compete. But the core principle is the same.

I should say that I very much respect Mark Shuttleworth and his accomplishments. But I think he got this one wrong, very wrong. We all make mistakes sometimes, myself included. We hopefully learn from those mistakes; otherwise we're doomed to repeat them.

by Timothy Sipples July 26, 2013 in Security

Canonical (Ubuntu) Needs a Mainframe (and zBC12 Post #3)

Canonical's Ubuntu forums have been hacked. Fortunately, IBM offers the lovely new zEnterprise BC12, which is a great fit for newly security-minded customers like Canonical.

In my previous blog post I wrote "stay tuned," because I'd have a lot to write about the new software solutions IBM is announcing with the new zEnterprise BC12. In this post I'll start with some general observations on important foundational software capabilities that IBM is either improving or introducing for the first time. And as I've written many times (and as Canonical has perhaps realized), the marriage of hardware and all the software layers must be harmonious in order to achieve particular business outcomes and to deliver the best qualities of service. I'm also seeing IBM develop and ship new top-to-bottom solution packages that include zEnterprise server hardware and storage, operating systems, middleware, applications, maintenance, and services with simple, competitive, multi-year "no surprises" pricing. Inevitably there's some customization involved in enterprise computing, but these zEnterprise solution kits greatly simplify the time-to-value.

Here are some examples of the new/improved foundational capabilities and associated solution packages:

  • IBM is introducing new hardware-based/software-exploited cryptographic algorithms, notably elliptic curve cryptography (ECC). IBM is seriously positioning zEnterprise as the enterprise security hub, with solutions for managing digital certificates across the enterprise, for example. The IBM Enterprise Key Management Foundation solution, introduced shortly after last year's zEC12 announcement, has been updated to include a zBC12 option and the new cryptographic algorithms. (For a feel of what ECC looks like from an application programmer's perspective, see the first sketch after this list.)
  • IBM has formally announced z/OS 2.1. There's a lot to unpack in that announcement, and I might have to do that in a separate blog post, but one of my favorite areas of improvement is in z/OS UNIX™ System Services. As HP-UX and Solaris fade to black, IBM keeps improving z/OS UNIX as a warm and welcoming target environment for those workloads and for other applications. In particular, IBM has adopted more of the newer GNU conventions, and the z/OS C/C++ compiler picks up more of the latest C/C++ language standards, not to mention deeper exploitation of zEC12/zBC12 processor features for performance and scalability. Relatedly, IBM has updated its Cognos business intelligence solutions to the latest version on z/OS so that customers can minimize data movement, keep their information most secure, and access reports from Web and mobile devices directly on zEnterprise. And if you want everything in one simple package, including a zEnterprise server, IBM enterprise storage, z/OS, DB2, the IBM DB2 Analytics Accelerator — with or without Cognos and SPSS — IBM's updated Smart Analytics System 9710 is perfectly packaged and ready to roll.
  • What about COBOL and PL/I, which power, among other things, most of the world's large financial systems? IBM thinks its new Enterprise COBOL Version 5.1 is so attractive that it wants everyone to try it. Thus there's a new "developer trial" edition: you can order V5.1 and kick the tires without starting the 12-month single version charge (SVC) period. Enterprise COBOL V5.1 is the first COBOL compiler that can exploit specific processor models at compile time. IBM had to do a great deal of "plumbing" work to make this happen, and this version is but the opening salvo in a series of new COBOL compiler releases. IBMers on the forums are now openly discussing their COBOL roadmap to a 64-bit compiler, something they were hesitant to do until they got this new compiler technology in place. Go grab it. You'll like it. (And let's not forget the new PL/I V4.4.) But while a new compiler is an important foundational capability, IBM is doing a lot of work above that layer. A good example is IBM's Business Rules for z/OS which, in simple terms, means that application developers don't have to write or modify code to change business rules — those kinds of business changes can now be made much more easily in a graphical, code-free way. If you think the first and only way to solve a particular business problem is to program, think again. To borrow from Strunk and White: omit needless coding.
  • There are lots of new mainframe "freebies" — I'll have to update my list. Perhaps my favorite is the new z/OS Management Facility Version 2.1, and one of the big headlines there is that it's much easier to deploy and even less resource-intensive thanks to the new WebSphere Liberty Profile packaging. IBM is also letting everyone know that it expects to release a new Encryption Facility for z/OS that'll exploit the new zEDC hardware, so both compression and encryption will be turbocharged. (The second sketch after this list shows the sort of everyday compression code zEDC is designed to accelerate.) zSecure Version 2.1 is also newly announced, to make security administration and auditing simpler and more reliable.
  • IBM is pushing its zEnterprise cloud credentials hard, and rightly so. The mainframe is the original and best cloud platform, particularly for secure private clouds. IBM announced z/VM Version 6.3, which now incorporates integration with the industry standard "OpenStack" initiative. You'll be able to manage z/VM-based cloud environments via OpenStack-compatible solutions. There are new z/VM features to reduce or eliminate planned outages of individual z/VM LPARs, notably "upgrade in place." Previously it was necessary to carve up system memory into 256 GB LPARs for z/VM, but now 1 TB z/VM LPARs are supported. (Memory scalability is often the limiting factor in cloud deployments, so this is an important increase.) Of course IBM has updated its Enterprise Linux Server (ELS) solution and application solutions based on ELS to incorporate these new capabilities.
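As a taste of what ECC means at the application level, here's a minimal sketch using the standard Java security APIs: generate an EC key pair on the NIST P-256 curve, sign a message with ECDSA, and verify the signature. This is generic, illustrative Java; it says nothing about IBM's particular hardware or provider plumbing, which is where zEnterprise does its acceleration, beneath standard APIs like these.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.security.spec.ECGenParameterSpec;

public class EcdsaDemo {
    public static void main(String[] args) throws Exception {
        // Generate an EC key pair on the NIST P-256 curve.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(new ECGenParameterSpec("secp256r1"));
        KeyPair pair = kpg.generateKeyPair();

        // Sign a message with ECDSA over SHA-256...
        byte[] message = "zEnterprise says hello".getBytes("UTF-8");
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(pair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // ...and verify it with the public key.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(message);
        System.out.println("Signature valid? " + verifier.verify(signature));
    }
}
```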
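And on the zEDC theme from the "freebies" item: hardware compression is valuable precisely because so much everyday code already compresses data through standard APIs. Here's a minimal, generic Java sketch using java.util.zip's Deflater. Whether a particular JVM or system zlib routes such calls to a zEDC accelerator is an exploitation detail I'm only assuming here for illustration; check IBM's announced software levels for specifics.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class CompressDemo {
    // Compress a byte array with the standard Deflater API. On systems with
    // hardware compression (such as zEDC), this call pattern is unchanged:
    // acceleration, where available, happens beneath the API.
    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        while (!deflater.finished()) {
            out.write(buffer, 0, deflater.deflate(buffer));
        }
        deflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) sb.append("mainframe ");
        byte[] raw = sb.toString().getBytes();
        byte[] packed = compress(raw);
        System.out.println(raw.length + " bytes -> " + packed.length + " bytes");
    }
}
```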

That's just a sampling of core infrastructure capabilities and how they map to new and updated packaged zEnterprise solutions. I expect to have more to say about this week's "announcement deluge" as I keep finding more and more gems, and I'll try to provide some more perspective on trends and directions.

Welcome to the future.

by Timothy Sipples July 24, 2013 in Innovation, Linux, Security

Happy New Mainframe Day! Introducing the zBC12 (Post #2)

IBM has posted a long and detailed frequently asked questions list which covers the zEnterprise BC12 and other related announcements today. Here are some more interesting details from my point of view:

  • Ever since the IBM z890 mainframe model introduced in 2004, IBM has offered an A01 subcapacity model with a CP capacity of about 26 PCIs and 4 MSUs. The z10 BC dropped the MSUs down to 3 but otherwise retained the same approximate PCI rating. That changes a bit now: the zBC12 starts at about 50 PCIs and 6 MSUs, so that's the smallest current model z/OS (and z/VSE and z/TPF) machine now available. Is that bump to 50 PCIs and 6 MSUs a problem? Well, no, not with IBM software, which has been eligible for subcapacity licensing for many years. Also, admirably, IBM says they're pricing the zBC12 with 50 PCIs the same as the z114 (the previous model) at about 26 PCIs. In other words, even if you still only need 26 PCIs of capacity you'll get roughly twice that capacity (50 ÷ 26 ≈ 1.9) for the same price according to IBM. Everything's better financially, then: if you wish you can keep running with 3 MSUs of software (which will perform much better on a 6 MSU machine due to the way "softcapping" works; see the sketch after this list) and also hold the hardware costs level. Or you can bump up to 4, 5, or 6 MSUs of IBM software if you need it while still holding hardware costs level. All good news there! (And actually it's even better than I described.) The potential trouble is that a few other software vendors still have antiquated licensing rules, charging full capacity even if you don't run their software that way. That might be a problem in particular situations, but the solutions are by now well known, notably switching to software products that have better licensing terms.
  • IBM has goosed up the I/O capacity and performance considerably, but I'll let you read about the details in that linked document above.
  • On the zBC12 and also the zEC12 you can now configure up to 2 zIIPs and up to 2 zAAPs per CP, including subcapacity CPs. For example, on that A01 zBC12 capacity model (with 50 PCIs) you could add up to 2 zIIPs and 2 zAAPs, and all those specialty engines operate at full speed. There's one caution, though: IBM says the zBC12 and zEC12 are the last servers to sport zAAPs. So it's possible IBM is bumping up this ratio in anticipation of zAAP retirement. My recommendation would be to order only zIIPs at this point since zIIPs can now do everything zAAPs can. If and only if you run out of zIIP allotments and could use zAAPs, then maybe get some zAAPs. Perhaps that advice is too obvious for those familiar with zIIPs and zAAPs, but I thought I'd mention it.
  • IBM has posted a series of new mainframe videos including this walkthrough video.
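A quick aside on softcapping for readers who haven't met it: z/OS enforces a defined capacity against a four hour rolling average of MSU consumption, not instant by instant, so a 3 MSU defined capacity on a 6 MSU machine lets work burst at full machine speed whenever the rolling average permits. Here's a toy Java simulation of that idea; the numbers and the capping logic are purely illustrative, not WLM's actual algorithm.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SoftcapDemo {
    public static void main(String[] args) {
        final double definedCapacityMsu = 3.0;  // licensed software capacity
        final int windowIntervals = 48;         // 4 hours of 5-minute samples

        // Hypothetical per-interval MSU demand: quiet, a one-hour burst, quiet.
        double[] demand = new double[96];
        for (int i = 0; i < demand.length; i++) {
            demand[i] = (i >= 40 && i < 52) ? 6.0 : 1.0;
        }

        Deque<Double> window = new ArrayDeque<>();
        double windowSum = 0.0;
        for (int i = 0; i < demand.length; i++) {
            // Rolling four-hour average of actual consumption so far.
            double r4ha = window.isEmpty() ? 0.0 : windowSum / window.size();
            // The cap engages only while the rolling average exceeds defined
            // capacity; otherwise the workload runs at full machine speed.
            boolean capped = r4ha > definedCapacityMsu;
            double consumed = capped ? Math.min(demand[i], definedCapacityMsu)
                                     : demand[i];
            window.addLast(consumed);
            windowSum += consumed;
            if (window.size() > windowIntervals) windowSum -= window.removeFirst();
            if (demand[i] > 1.0) {
                System.out.printf("interval %2d: R4HA=%.2f capped=%b consumed=%.1f%n",
                        i, r4ha, capped, consumed);
            }
        }
    }
}
```

Run it and you'll see the burst sail through uncapped because the rolling average never reaches 3 MSUs, which is exactly why 3 MSUs of software feels so much faster on a 6 MSU machine.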

More to follow, including a lot of software details as I keep looking through all these announcements. And it's software that makes the magic happen, so stay tuned.

by Timothy Sipples July 23, 2013

Happy New Mainframe Day! Introducing the zBC12 (Post #1)

We saw some clues that IBM was getting ready to announce a new mainframe today, and here it is: the new zEnterprise BC12. There are a lot of IBM announcements related to the new zBC12, and I'm sifting through all the information. Here are some of the highlights as I see them. I'll provide further updates as I read all the materials IBM has released.

  • First, the hardware itself: processor clock speed increased to 4.2 GHz, core count increased, and both total capacity and per-core capacity are up more than I would have expected. A single zBC12 can provide almost 5,000 PCIs for z/OS, z/VSE, and/or z/TPF with its maximum 6 CPs. That still leaves another 7 engines for any mix of specialty cores (zIIPs, IFLs, etc.). Uniprocessor performance is above 1,000 PCIs. Also, the maximum memory is now up to 496 GB of usable RAIM-protected memory, which is another very nice bump. This new zBC12 can soak up a large amount of workload.
  • There's a new "LPAR absolute hardware capacity" setting on the zBC12 which presumably also will now be available on the zEC12. This setting will be mainly of interest to Linux on zEnterprise customers who want to set particular IFL capacity limits mostly for software licensing purposes.
  • IBM is introducing exploitation of 2 GB memory page support in the zBC12 and zEC12 starting with Java 7 for z/OS, to improve Java performance and capacity yet again.
  • There's a new high-speed memory-to-memory adapter ("10GbE RoCE Express"), which provides something analogous to HiperSocket connections but now between machines, to speed up data transmission and reduce networking overhead. This new adapter is available for both the zBC12 and zEC12. (See the sketch after this list for why the adapter's transparency to applications matters.)
  • There's another new adapter for both models called the zEDC Express which accelerates data compression.
  • I always wondered why IBM had 101 customer-configurable cores on the zEnterprise EC12 machine. It's an odd number, and that's unusual for mainframes. Now we know: IBM reserved one core for a new Internal Firmware Processor (IFP), which is invisible to customers. This IFP, also included as a standard feature on the zBC12, supports the new RoCE Express and zEDC Express functions. I expect IBM will use this "hidden" processor for progressively more supporting functions, much like SAPs provide various accounting and support services for I/O. We'll never really deal with the IFP and its control programs, but they'll be there, supporting particular new functions.
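A note on why that RoCE Express transparency matters: the acceleration happens beneath the standard sockets API, so ordinary networking code doesn't change. For illustration, a plain Java TCP exchange like the following is the sort of code that could benefit as-is. There's deliberately nothing mainframe-specific in it; which connections actually ride RoCE is a configuration and exploitation matter, not an application rewrite.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoPair {
    public static void main(String[] args) throws Exception {
        // A trivial echo server and client over standard TCP sockets.
        // Transports like RoCE can accelerate this transparently: the
        // application code is identical either way.
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            Thread t = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("echo: " + in.readLine());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            t.start();

            try (Socket client = new Socket("localhost", port);
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println("hello from the client");
                System.out.println(in.readLine());
            }
            t.join();
        }
    }
}
```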

Much more information and analysis will follow. Stay tuned.

by Timothy Sipples July 23, 2013 in Innovation, Systems Technology

HP to Retire OpenVMS as Early as 2020

I expected HP to place OpenVMS into a virtualization package and then continue supporting that package on X86 servers "forever," but HP isn't doing that. HP says it will not carry OpenVMS forward to the next (last?) generation of Itanium-based server hardware — which wouldn't actually require much work. HP says support for OpenVMS will end as early as 2020.

Yes, if you're running OpenVMS you should be looking at z/OS as a place to move your remaining OpenVMS-based applications. I would recommend starting that project within the next 12 to 24 months given HP's forecast timeline.

Now what does HP do about NonStop (formerly Tandem)?

by Timothy Sipples July 21, 2013 in History, Systems Technology

Apple Needs a Mainframe

Apple's developers need better protection.

by Timothy Sipples July 21, 2013 in Security

IBM Announces 2Q2013 Earnings: zEnterprise Even Better

From time to time I post updates on IBM's earnings reports, as I did last quarter. The caveats I included then still apply.

That said, zEnterprise had yet another fantastic quarter: revenues up 10% (11% at constant currency) and capacity shipments up 23%. That's yet another big server market share gain for zEnterprise, which was also a very bright spot in IBM's earnings report. Thanks to all the current and new zEnterprise customers who are showing new and renewed appreciation for the unique competitive advantages of zEnterprise.

IBM's other hardware product lines had a tough quarter, but that's sometimes how it goes, and it's why IBM's diversification is a strong advantage. In particular, non-mainframe UNIX servers had another dreadful quarter — HP and Oracle/Sun continue their UNIX server slide into oblivion — and IBM gained market share in that segment despite a 24% (constant currency) slide in its Power server business. As I've said before, my best guess is that the UNIX market continues to split, with many of those UNIX workloads moving to Linux (including Linux on zEnterprise, which is picking up workloads from HP and Oracle UNIX server retirements) while others move to high-end, highly virtualized Power servers. However, like all servers, Power servers are subject to model cycle swings.

I don't see any way that HP and Oracle fix their UNIX server businesses. I predict that both vendors will soon wrap their operating systems into virtualization packages to license and run on X86-based hardware, in order to cut costs and to support their remaining few customers' workloads that must run on HP-UX, VMS, and Solaris. (Sun already effectively did that with Solaris on X86, so Oracle is somewhat better positioned to avoid angering its remaining UNIX customers too much.) I don't know what HP does with NonStop (formerly Tandem), but it won't be pretty.

UPDATE: IBM's CFO Mark Loughridge mentioned on the earnings call for analysts that he expects another double digit revenue growth quarter for zEnterprise in the third quarter. Wow.

UPDATE #2: IBM has posted the charts Loughridge used for his presentation to analysts. One of the charts indicates that zEnterprise's gross profit margin declined in the second quarter, although I haven't found other details. My best guess, recalling previous earnings reports, is that IBM was selling more zEnterprise capacity upgrades a year ago (2Q2012) on existing machines relative to new machines. This past quarter, if I'm right, the mix shifted more toward physical machines that can later be upgraded. Both new model deliveries and capacity upgrades are reportedly profitable, but upgrades are more profitable. Said another way, if I'm right this decrease in the gross profit margin is actually good news because that means IBM shipped more new physical zEnterprise machines (some to brand new mainframe customers presumably) and did not "coast" on capacity upgrades. Then those new model machines can be upgraded, so we would expect that red arrow to turn into a green arrow if/when IBM's mainframe customers upgrade those machines. We really don't have enough financial data from IBM to prove or to disprove this hypothesis, but it seems like a reasonable understanding of the model cycle effects.

A couple other points jumped out of Loughridge's presentation and comments. The next quarter (3Q2013) is the first quarter when zEnterprise revenues will be compared against the quarter with the first sales of the zEnterprise EC12, a model that has already proven wildly popular. Yet Loughridge is predicting another double digit revenue growth quarter for zEnterprise. That's interesting.

Another point is that Loughridge spoke again about a possible divestiture in 2013 or 2014. That's not unusual: IBM routinely divests particular businesses that don't seem like a good fit for IBM's business model. For example, in 1934 IBM sold its food scale business to Hobart, and Hobart still runs that business very successfully. As another example, IBM sold its hard disk drive business, a business IBM invented, to Hitachi about a decade ago. A lot of analysts think IBM is contemplating divesting its System x (X86 server) business, and there have been press reports that IBM discussed the idea with Lenovo but did not reach an agreement. Maybe, maybe not — I don't automatically believe everything I read.

But I have a couple of reactions to that possibility. One is that IBM seems to have a good track record protecting customers in both acquisitions and divestitures. IBM acquires businesses to grow them, and that means encouraging existing customers to buy more and attracting new customers. That has to be done the old-fashioned way, with product improvements, integration, better service, etc., and IBM generally seems to do that. IBM also has a very good track record finding excellent stewards for its divested businesses. The other point I'd make, which I've made before, is that analysts who say IBM is "getting out of the hardware business" are flat out wrong. It's much more correct to say that IBM has been adjusting the mix within its hardware businesses, in recent years buying Netezza, DataPower, and Texas Memory Systems, to pick three hardware company examples. Hardware, software, services, and financing are all important, but it's the flexible combination that's magic. The three "hardware" examples I mentioned are all highly solution-focused, and that's squarely where IBM wants to be. The strategy is conceptually simple, but it's also very difficult for IBM's competitors to mimic successfully.

by Timothy Sipples July 17, 2013 in Financial

How to Preserve and to Access Information for Over 1 Million Years

Scientists at the University of Southampton may have figured out how to preserve digital information for over one million years. They've demonstrated high density laser recording onto a durable quartz disc, a disc which could theoretically hold about 360 TB. They're a long way from demonstrating commercial viability for this technology, and that's not a given. Many storage technologies have looked promising but haven't panned out. It's also very unclear what the costs of quartz discs and their writers/readers will be.

Further to my "Big Data, Big Tape" post, tape is currently the most popular durable storage medium, but unfortunately magnetic tape isn't one million years durable. Organizations must refresh their tape media every couple of decades or so to maintain reliable readability, and there may be strong economic reasons to refresh more frequently. IBM, for example, only very recently ended hardware support for its famous 7-track and 9-track reel tape drives, officially closing that chapter in tape media history. There are still data recovery specialists who can read reel tapes if necessary, but those tapes aren't getting any younger. If you've got such tapes, now's the time to read them and convert them to newer media.

One-million-year media would solve only one long-term data storage problem, and perhaps not even the most interesting one. The real problem is how to make sense of all those bits in the distant future, or even a few years from now. Human civilization is only a few thousand years old, yet many recorded human languages are already incomprehensible. The Digital Age we now live in may end up being the New Dark Ages. Software developers and vendors do not usually prioritize readability of older data formats. Newer versions of Microsoft Word cannot open some older Microsoft Word files, for example. Humanity is losing its collective memory with each new software release.

Mainframe vendors generally aren't so careless, and mainframe customers routinely access decades-old data using decades-old code to interpret those data. That works and works well, and it's one of the important reasons why mainframes are useful and popular as central information hubs and systems of record. Moreover, if for some reason the applications must be retired, that's possible while retaining the ability to retrieve and to interpret the archived data. Contrast that multi-decade mainframe track record with, for example, Apple, which opened for business in the mid-1970s. The sad fact is that every Macintosh application compiled up through 2005 completely stopped working on the version of the Macintosh operating system introduced only six years later, in 2011. Consequently some of the proprietary data formats that those "obsolete" applications know how to access and interpret remain unintelligible on newer Macs. Apple very deliberately broke binary program compatibility within just a few short years. That might be a great technical strategy if you're Apple and you're trying to sell more Macintosh computers — to "wipe the slate clean" periodically — but, I would argue, it's horrible for the human race and its collective memory.

Virtualization offers one possible solution to the problem of interpreting old data, but virtualization isn't a complete solution on its own. Mainframes excel in virtualization. IBM has incorporated many different types of virtualization technologies throughout the zEnterprise architecture, and those technologies continue to evolve and improve. They help preserve backward compatibility. Nonetheless, on rare occasions IBM has dropped support for older application execution modalities despite the availability of virtualization as a possible solution. But then, thankfully, someone usually steps in to help, again and again.

Maybe those one million year quartz discs all need to include a built-in "Rosetta Stone" that helps future generations interpret the bits they contain. Some scientists have at least thought about the problem.

by Timothy Sipples July 11, 2013 in Future

IBM Acquiring CSL International

IBM is acquiring CSL International and its CSL-WAVE software for monitoring and managing z/VM and Linux on zEnterprise.

It's clear that IBM is continuing to enhance its cloud offerings with this acquisition.

by Timothy Sipples July 10, 2013 in Cloud Computing, Financial

The Reality of "Rehosting"

IBM has a point of view.

by Timothy Sipples July 1, 2013 in Economics



The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.