California Keeps Its Mainframes (Thank Goodness)
Sometimes it's difficult to admit making a mistake and reverse course. Yet California's Department of Motor Vehicles finally took the correct action and cancelled its second costly attempt to replace its IBM mainframe applications for vehicle registration.
According to public information, the applications date back to 1965, when they were first written to run on an RCA mainframe. Within a few years they were moved to an IBM mainframe, and they are still running on one or more IBM mainframes today. Along the way the DMV has enhanced the applications, and they now have far more interfaces (and far more users) than they had in 1965. The applications handle about 30 million vehicle registrations.
The State of California first tried to replace these applications in 1987, hiring Tandem Computers (now part of HP) and Ernst & Young to undertake the work. Seven years and more than $44 million later, California terminated the project. The new applications created to that point lacked basic functions and were 10 times slower than the existing ones.
California tried again in 2007, hiring EDS (ironically also now part of HP) to do the work. After spending $135 million (and counting), California cancelled the project. California has also recently cancelled its failed project to replace its payroll system, after spending $254 million. Yes, that's at least $433 million wasted, without adjusting for inflation and without counting significant opportunity costs.
It's important to recognize that these applications are not perfect. They are, however, better than any viable alternative, as has now been well proven twice. I would advise the State of California to take a new direction involving two important elements. First, the State should make a much smaller investment to improve the resiliency of these systems and applications and so avoid outages. These applications are mission-critical, and, especially compared to the vast sums already wasted, it's inexpensive to improve the resiliency of IBM zEnterprise-hosted application environments and of access to them. For example, a couple of years ago IBM introduced GDPS/Active-Active technology, which helps businesses and governments achieve continuous service and, in many cases, move past traditional synchronous update distance limitations. Second, the State should execute a program of sensible, progressive remediation of all the deferred application maintenance, application improvements, and documentation that it has undoubtedly postponed during its many years of budgetary problems and unwise spending on these two costly replacement projects.
Yes, these are three big IT disasters — no question about it. But I'm hopeful that California, on behalf of its taxpayers, can take the right steps starting today. And I'm also optimistic that at least some companies and governments can learn from this experience.
UPDATE: A correspondent writes me with this link to the project plan. The story is at least a bit more complex. The project involved multiple interdependent project deliverables. The plan was to move VSAM data into DB2, to move applications hosted in an Assembler-based "Real Time Control" environment to COBOL applications in CICS, and to do a lot of user interface technology rework using WebSphere Application Server (platform not clearly stated but apparently AIX on Power). Unfortunately it didn't fly.
If you're going to migrate these big applications and databases, the California DMV picked some good target environments; some thought went into that decision. However, what seems to have happened is that the project was so big it collapsed under its own massive weight. It strikes me as an awfully big project to pull off in just a few short years. What probably has to happen now is what I alluded to above: some type of progressive rehabilitation, stretched over a longer period of time, with more tactical incrementalism guided by immediate business requirements. And hopefully the bigger, grander project that didn't fly still yielded some salvageable assets, such as better documentation of the existing code and data models. The destination may end up being the same, but the journey will be longer.
Observing from afar, another possible problem is a common one: "What are we doing this for again?" I've frequently been reminded that IT organizations sometimes get wrapped up in work that might be important to IT, but delivering observable, tangible business results also has to be part of the picture. If the users and business managers don't know why you're doing something or why you're spending their money, and if they don't see sufficient progress toward addressing their needs, they understandably get impatient. For example, it might matter a great deal to IT whether data are stored relationally or non-relationally. Both approaches have their merits, and non-relational databases are increasing in popularity nowadays (including at big name California-based Internet companies like Google and Facebook). Do users care? Should they? Not in those terms.
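To make that point concrete, here's a toy sketch (hypothetical fields, Python's built-in SQLite standing in for DB2, a JSON document standing in for a non-relational store) showing the same imaginary registration record stored both ways. The user gets the same answer either way:

```python
# Toy illustration only -- not the DMV's actual schema or systems.
import json
import sqlite3

record = {"plate": "1ABC234", "owner": "Jane Doe", "expires": "2013-12-31"}

# Relational: fixed columns, queried with SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE registration (plate TEXT PRIMARY KEY, owner TEXT, expires TEXT)")
db.execute("INSERT INTO registration VALUES (?, ?, ?)",
           (record["plate"], record["owner"], record["expires"]))
row = db.execute("SELECT owner FROM registration WHERE plate = ?",
                 ("1ABC234",)).fetchone()

# Non-relational: the whole record kept as one self-describing document.
doc = json.dumps(record)
owner = json.loads(doc)["owner"]

print(row[0], owner)  # same owner name comes back from both stores
```

The storage model is an IT design decision with real consequences, but from the counter clerk's point of view both paths produce the same registration lookup.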
As it happens, IBM has a bit of software called CICS VSAM Transparency which, despite the name, doesn't actually require CICS but which does support moving VSAM data into DB2 databases without changing applications. It can be used selectively or broadly, as appropriate. That particular software was not on the DMV's materials list. Should it be, as part of a gentler incrementalism? Maybe, but like any other project initiative it would depend crucially on user needs, and of course on technical viability in the circumstances.
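The general idea behind that kind of transparency layer can be sketched in a few lines. This is purely conceptual (it is not IBM's product, and the class names and data are invented): the application keeps issuing the same keyed-read call while the backing store is swapped from a keyed file to a relational table.

```python
# Conceptual sketch of a data-access transparency layer (invented example).
import sqlite3

class FileStore:
    """Stands in for a keyed, VSAM-style dataset."""
    def __init__(self):
        self._records = {"1ABC234": "DOE JANE      2013-12-31"}
    def read(self, key):
        return self._records[key]

class RelationalStore:
    """Same read interface, but the data now lives in SQL (SQLite here)."""
    def __init__(self):
        self._db = sqlite3.connect(":memory:")
        self._db.execute("CREATE TABLE recs (k TEXT PRIMARY KEY, v TEXT)")
        self._db.execute("INSERT INTO recs VALUES (?, ?)",
                         ("1ABC234", "DOE JANE      2013-12-31"))
    def read(self, key):
        return self._db.execute("SELECT v FROM recs WHERE k = ?",
                                (key,)).fetchone()[0]

def application_logic(store):
    # Unchanged application code: it neither knows nor cares which store it got.
    return store.read("1ABC234")

print(application_logic(FileStore()) == application_logic(RelationalStore()))
```

Because the application code is untouched, the data can move one dataset at a time, which is exactly the kind of gentler incrementalism discussed above.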
Anyway, my thanks to that correspondent who contacted me. And let's continue to cheer on the California DMV for making what was probably a tough decision. I think they do a pretty good job in general, especially considering the many years of state budget challenges that put pressure on every state agency.
I understand at least some of you are having trouble commenting on blog postings. We'll see if we can get our small army of technicians to look into that problem.
by Timothy Sipples | February 16, 2013 in Religion
Mainframe Computing: What Will the Future Hold?
I do not have any particular insight into IBM's research and development efforts for future mainframe technologies, but I can take some educated guesses. In the process of guessing I am actually making some predictions about the future of all computing. There's been a long history of computing capabilities appearing on mainframes first, often a decade or more ahead of the rest of the computing industry.
Here are three of my predictions, in no particular order:
- Fifty-five may be the limit. (Or not?) I'm referring to processor clock speed. For Intel the speed limit is 4.0 GHz, which is only reached with a single active core. Intel hasn't been able to increase the clock speed for nearly 8 years, and that's a long time in microprocessors. IBM's POWER processors did much better, topping out at 5.0 GHz with POWER6 and with all cores active, before backing off the clock a bit. Now IBM's zEC12 sits alone at the top of the clock speed ranking with all cores active at 5.5 GHz continuous. That's simply amazing. But will IBM be able to increase the zEnterprise's clock speed again? If it can be done, I assume it will be, but the physics are tough. That said, I expect IBM zEnterprise will maintain industry clock speed leadership indefinitely because, if it makes sense to solve those tough problems anywhere, it will make sense on zEnterprise.
- Hardware collaborative iterative compilation. IBM pioneered microprogramming all the way back in the System/360, introduced in 1964. For the most part, however, hardware is static once shipped. Yes, occasionally IBM (and other vendors) may release microcode updates, generally to fix a misbehaving instruction in a processor (often unavoidably slowing down that instruction for the sake of correctness), but that's about it. When you want new instructions, you buy new processors. At least with mainframes you can upgrade your processors in place. I think the processor improvement cycle is going to accelerate as compilers start to tell the hardware how it can improve, and the hardware will respond. In other words, the compilers and the hardware will jointly figure out how to shave execution time and path length off computing tasks as they operate. The dialog will be something like this: "I see, Ms. Compiler, that you're being particularly demanding of my Level 3 cache today, and it looks like you don't need to be if I understand what you're trying to do. Would you mind terribly combining these duplicate memory blocks so you can fit in Level 3 for me?" "That's a good idea you've got, Mr. Hardware. I'll take care of that for you the next time Dr. z/OS tells me there's a lull in high service class processing. By the way, Mr. Hardware, I'm doing a lot of financial calculations that could really use a custom instruction, or at least a few better instructions. Have you got one you can put into your microcode or into your FPGA for me? Or ask your mother if she can provide you with a new instruction and delete those five other instructions I never use? Thanks a bunch." "Teachable hardware" will have some interesting side effects. For example, if you think capacity planning is difficult now, just wait.
- Quantum computing. IBM is spending a lot of effort in this area, and I think we'll see a quantum computing element available for zEnterprise as an option in the not too distant future. That innovation will also have some interesting side effects, like perhaps upending cryptography.
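The compiler–hardware dialog imagined above amounts to a feedback loop, and that loop can be caricatured in a few lines of code. Everything here is invented (the counters, the "decimal-fma" instruction, the cycle numbers); it only illustrates the shape of the idea, with each side adapting to hints from the other:

```python
# Toy caricature of compiler/hardware co-tuning -- no real hardware API exists.
class Hardware:
    def __init__(self):
        self.l3_misses = 100
        self.custom_instructions = set()
    def run(self, program):
        # Deduplicated memory blocks relieve cache pressure.
        self.l3_misses = max(0, 100 - 40 * program["dedup_blocks"])
        # An installed custom instruction shortens the hot path.
        cycles = 1000 - (300 if "decimal-fma" in self.custom_instructions else 0)
        return cycles + self.l3_misses

class Compiler:
    def __init__(self):
        self.program = {"dedup_blocks": 0}
    def adapt(self, hw):
        if hw.l3_misses > 50:           # hardware's hint: the cache is thrashing
            self.program["dedup_blocks"] = 1
        hw.custom_instructions.add("decimal-fma")  # compiler's request back

hw, cc = Hardware(), Compiler()
before = hw.run(cc.program)
cc.adapt(hw)
after = hw.run(cc.program)
print(before, "->", after)  # cycle count drops as the pair co-tune
```

And as the sketch hints, once the hardware's behavior depends on this running negotiation, yesterday's measurements stop predicting tomorrow's capacity, which is exactly why capacity planning gets harder.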
by Timothy Sipples | February 10, 2013 in Future, Innovation
The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.