Manufacturer in Texas Still Relies on their IBM 402
In the late 1940s, IBM's factory in Endicott, New York, started manufacturing and delivering the IBM 402 Accounting Machine and related accessories. The 402 was not exactly a "computer" in the way we now think of them. For example, the 402 was electromechanical instead of electronic, and programming a 402 involved stringing wires in complex patterns on plugboards. Nonetheless, the IBM 402 helped businesses and governments efficiently and quickly (for the time) solve a wide variety of accounting problems: inventory control, payroll, billing, cashflow, etc.
One of the IBM 402 machines was delivered to Sparkler Filters, a small manufacturing company in Conroe, Texas, which specializes in chemical process filtration. That machine, now over 60 years old, is still running and still handling a wide variety of the company's accounting tasks. That's in no small measure due to the people who keep Sparkler's 402 in top condition, including Lutricia Wood, the company's data processing manager, and Duwayne Leafley, an independent maintenance technician. As far as anyone knows, this IBM 402 is the last one operating in the United States, and perhaps in the world.
Sparkler Filters is an extreme example, perhaps, but their experience reinforces some important lessons. One lesson is that business processes are extremely important and often durable, and the programs written today often endure far longer than anyone expects. Certainly Sparkler could adopt different technology, but doing so would be highly disruptive, especially without careful planning, documentation, training, and customization to fit within existing processes. Obviously they would prefer not to experience that disruption (and cost) unless there's a "damn good reason," and they haven't found one yet. Another lesson is that the choices we make about technologies should recognize durability and longevity requirements to support long-running businesses and their often stable core missions. For example, the principles of accounting really haven't changed for centuries, so why rewrite core applications "just because"? That's frequently expensive and disruptive.
There should be a balance. IBM works incredibly hard to make sure that zEnterprise-hosted programs simply don't break even when everything else is evolving: input, output, servers, storage, operating systems, middleware, addressing, etc. While the supported program portfolio doesn't go back as far as IBM 402 programs, the commitment IBM made in the 1960s with the System/360 has been and continues to be honored. zEnterprise customers really do run code which was born in the 1960s alongside 64-bit code written 5 minutes ago on one operating system on one machine. That code interoperates: exchanging data, calling back and forth, etc. Unless there's a "damn good reason," you don't have to replace your programs unless and until your business has a new requirement, and then only to the extent you wish.
Now, just because you could run only decades old code without improvements and innovations doesn't mean you should. If you underinvest in any business infrastructure, including important application enhancements, that's a problem that'll progressively and negatively impact your business. However, being forced to make vendor-driven changes simply for the sake of change (and for the benefit of vendors' quarterly financial statements) is no way to do business either. That sort of change is simply wasteful.
I hope I get the chance to stop by Conroe, Texas, someday soon. What a wonderful story.
by Timothy Sipples | April 26, 2013 | History, Media
The PC Market is Shrinking Fast
IDC and Gartner reported their separate views of the state of the PC market for the first calendar quarter of 2013. They agree that the PC market is shrinking, quickly.
According to IDC, the PC market suffered its worst year over year decline in the history of its tracking study. It estimates that the global PC market shrank nearly 14% and that the U.S. PC market shrank nearly 13%. (Gartner estimates are -11.2% and -9.6% respectively.) They both agree that HP lost a lot of marketshare, Dell lost some, and Lenovo, Toshiba, and Apple gained some marketshare. Lenovo, in particular, managed to grow within a shrinking market.
If Microsoft's Windows 8 operating system had any impact it wasn't helpful. By definition a shrinking PC market, particularly one with Apple gaining marketshare, means a shrinking Microsoft since so much of its business is dependent on the twin Microsoft Windows and Microsoft Office franchises. Microsoft also has other business units, but those two are essential to its continued success. I can't imagine Microsoft CEO Steve Ballmer being happy with what's happening to the PC market his company dominates.
What's going on? Simple: smartphones and tablets. Google and Apple supply the two most popular mobile platforms. Those devices are killing off much of the PC market. Smartphones and tablets are simply more relevant to more people in more places than PCs, and the mobile client application ecosystems are now stronger. It's astonishing, really. And what's really scary if your business is heavily dependent on PCs is that most of the world's population will never buy a PC but almost everybody on the planet has a mobile phone. Mobile phones are literally more popular than toilets. And smartphones are rapidly eroding the remaining market for feature phones. If it hasn't happened yet, this year (2013) is when the number of active smartphones and tablets will surpass the number of PCs, and it won't take long for tablets and smartphones to outnumber PCs by billions.
Naturally Microsoft isn't happy with this turn of events and has even filed a complaint with the European Commission, which strikes me (and a lot of other people) as absurd. Google and Apple simply built better business models and more compelling platforms than Microsoft, and that's why they're doing well (and should do well).
So what does the continuing rise of smartphones and tablets have to do with mainframes? Plenty. I continue to hear from mainframe users that transaction volumes and associated batch processing are growing faster than they expected, and that's mostly explained by smartphones and tablets. Mobile users are inherently more demanding users in terms of time and place, and consequently they expect continuous or near-continuous service with excellent security. Mobile users often generate wilder swings in application usage patterns — a simple Tweet could easily trigger (twigger?) a sudden spike in demand. Do those requirements sound familiar?
I don't predict the death of the PC, however. I'm just not sure yet where the PC market will stabilize.
UPDATE: ZDNet has posted an incredible chart showing what's happening to the PC market along with some more commentary. Wow.
by Timothy Sipples | April 11, 2013 | Analysts, History
IBM Mainframe Computing at AT&T in 1973
AT&T has posted a wonderful historical film produced by Bell Labs in 1973. The film introduces employees to the services available at the Holmdel, New Jersey, computing center. At that time you could submit batch jobs on punched cards, typically resulting in printed output, or you could interact with the system through TSO (Time Sharing Option) and interactive terminals, some connected via telephone lines.
Note the photograph attached to the IBM System/370 that appears at about the 5:30 mark in the film. Many things have changed in 40 years, and many things have not.
UPDATE: One thing that has changed is that the Bell Labs Holmdel Complex closed in the mid-2000s. The property is available for occupancy if you're interested. Bell Labs has been largely dismantled through a series of corporate reorganizations and cutbacks beginning with the AT&T breakup in 1984. There are a few vestiges of Bell Labs still operating as part of Alcatel-Lucent and (separately) as part of Ericsson.
I'm not the sort of person who looks back on the past as necessarily better. In most respects it wasn't. For example, unleaded gasoline arrived after 1973 in the United States, eliminating a serious public health hazard, especially for children. There's some evidence that the reduction in lead exposure has contributed to a large drop in crime rates. As another example, smallpox was still afflicting some of humanity in 1973 but was eradicated a few years later. Also, back in 1973 you couldn't send an e-mail or an SMS to practically anyone, and a one-minute "long distance" telephone call from New York to Los Angeles cost about $0.25 (in 1973 dollars, and only if you called on nights and weekends), excluding the monthly service charge. Now you can talk and text as much as you want using a mobile phone for as little as $19 per month.

So is the loss of Bell Labs worth the benefits to consumers with the explosion in Internet and mobile services at affordable prices? Yes, to the extent it was a trade, it was a trade worth making, and that's what the market decided — with a little help from Judge Harold Greene perhaps. That doesn't mean the loss of Bell Labs wasn't a loss: it certainly was and is. Continuing, long-term investments in research and development are critically important to the success of consumers, a company, a nation, and all of humanity. How best to support those ongoing investments is another question.
by Timothy Sipples | April 10, 2013 | History
Sad Day: Unisys Abandons Its Mainframe Processors
I frequently write about IBM's virtuous mainframe cycle. IBM continues to pump lots of investment into its mainframe-related research and development because it makes business sense. Growing sales promote more investments, which promote more sales, and so on. The new zEnterprise EC12 and IBM's latest round of truly new and innovative software products for zEnterprise are among the direct results of those ongoing investments.
Unfortunately Unisys hasn't been able to sustain that sort of virtuous cycle. Unisys's announcement this week is, in my view, a sad closing chapter in that long history. The announcement is that Unisys has given up on its own processor designs even for its high-end mainframes. From now on everything will run in emulation.
Unisys was formed when Sperry and Burroughs merged many years ago. Sperry heritage mainframes trace their lineage to UNIVAC and Remington Rand, and they run the OS 2200 operating system. Burroughs heritage mainframes run MCP. There are still a few installations around, but Unisys's clear path (pun intended) is to manage their decline as best they can. Now with software emulation on server hardware sourced from Dell.
If you look across the industry right now you see that the most successful enterprise IT company (IBM) has intensified its focus on vertical integration: hardware and software co-designed and optimized together in common purpose. And the most successful consumer IT company (Apple) is doing exactly the same thing: the chips inside that new iPhone 5 are Apple's design, with Apple's unique power saving features. Oracle, with its Exadata products, and Microsoft, with its announced (but not yet shipping) Surface tablet products, are trying to do the same thing. It's a recipe that works and works well if you can sustain long-term investments to stay in the lead.
I completely understand why Unisys abandoned processor design. Good R&D is expensive, and Unisys can't afford it. I understand it, but that doesn't mean I have to like it.
With all due respect to Timothy Prickett Morgan (the author of the article linked above), emulation is not special, and HP should not buy Unisys (or vice versa). HP effectively outsourced its processor designs to Intel many years ago (Itanium), and that has been an unmitigated disaster for HP and HP's shareholders. HP needs market-relevant innovation, badly, after over a decade of corporate mismanagement and plundering. Again, look at IBM and Apple. Both companies have made relatively small acquisitions to bolster their own R&D efforts, and to nurture and grow those other great ideas as part of a common purpose.
Finally I should point out that Unisys's announcement is one more reason for their customers to consider switching over to IBM zEnterprise and z/OS. UBS made that move, as an example. That really is the clear path forward.
by Timothy Sipples | October 11, 2012 | History, Systems Technology
Oracle's Hardware Sales Down Sharply Again
Three months ago, Oracle reported financial results for the company's second financial quarter. Its hardware sales declined 14% year over year, to $953 million. Oracle predicted that hardware sales would fall again anywhere from 4% to 14% in the next quarter (at constant currency).
Actually they fell 16% year over year, to $869 million. Quarter to quarter they fell about 9%.
For perspective, Oracle claims that their nascent Exadata/Exalogic/Exalytics business is fast growing, but that only means the far larger Oracle/Sun Solaris business is collapsing even faster. Also, the earnings report reveals that "hardware systems support" (a.k.a. hardware maintenance) declined only 3% (constant currency, year over year). So Oracle's remaining customers are caught in the perfect storm of a cratering Oracle/Sun Solaris business combined with escalating maintenance prices. Fabulous.
Of course I realize that hardware sales can be cyclical, but Oracle's hardware problems are deeply structural. Oracle introduced servers with the new SPARC T4 processors in September 2011, which took the SPARC CPU up to 3.0 GHz. This past quarter should have been a terrific one, or at least a decent one, given that Oracle/Sun model cycle. Instead it was awful again.
I predict that Stewart Alsop now has the opportunity to correctly predict when the last Oracle/Sun Solaris server will be unplugged.
by Timothy Sipples | March 21, 2012 | Financial, History, Systems Technology
HP Lays an Ostrich Egg During Turkey Week
One of the fundamental rules of public relations is that if you want to make an announcement that you don't want to be noticed, do it just before a big holiday. That probably helps explain why HP picked the Tuesday before the big Thanksgiving holiday in the U.S. to announce its "Project Odyssey."
I have to applaud HP on picking an exceptionally appropriate name. The name Odysseus derives from a Greek word meaning "trouble," and his journey home after the Trojan War took ten long years. During that journey none of his fellow travelers survived. Sounds like a perfect name for the latest plot twist in the Itanium Meltdown, doesn't it?
I can summarize HP's announcement thusly:
- There's nothing we can do to save HP-UX, OpenVMS, or NonStop. We might have another unexciting Itanium chip or two in the pipeline, but who cares? Our turkeys are cooked without software vendor support.
- Unfortunately our lawyers aren't giving us high odds of prevailing over Oracle in court. If by some miracle we did, it wouldn't help our customers. We'd just collect some cash.
- You'll be able to stick obsolete-but-repackaged Itanium blade servers into the same chassis that also holds Intel X86 blades running Windows and/or Linux. This plan is so exciting that we're announcing it when nobody will notice. (And didn't we already announce that?)
- Yes, we know blades are horizontally scalable, not vertically scalable. Yes, we know many NonStop customers aren't thrilled with blade architectures for availability reasons. Did we mention we're getting out of the high-end server business? Did we mention we couldn't afford the R&D a decade ago to be a credible high-end server vendor, never mind now? We just wanted to be a box pusher and let Intel worry about the CPU and everybody else worry about the software. It seemed like a brilliant idea at the time, at least to our accountants.
- We'll strip the HP-UX corpse of any interesting bits and make those available for Linux and/or Windows. No, we don't have any idea what those bits might be.
- There are no new versions of HP-UX, OpenVMS, or NonStop Kernel to announce. Are we being clear enough yet?
- Isn't it cute that Microsoft said something nice in our press release? Of course they would: they abandoned Itanium before Oracle did. Of course they'll sign their name to our Itanium retirement party book. Ditto Red Hat.
- In an announcement like this one normally we'd boast about all the new Itanium customers, our revenue growth, etc., etc. Of course, we can't do that when the opposite is true. We couldn't even persuade a real customer to say anything nice about this announcement. We got tired of hearing "F**k you!" every time we asked a customer for a quote.
by Timothy Sipples | December 4, 2011 | History, Systems Technology
Did the U.S. Government Kill Cloud Computing Over 50 Years Ago?
Ed and I were chatting about cloud computing last week. Ed is a co-worker with decades of experience in the IT industry. He said that all this talk about (public) cloud computing reminded him of the Service Bureau Corporation. I wasn't too familiar with SBC, so I read more about that interesting chapter in computing history. In fact, it's so interesting that I'm beginning to wonder: did the U.S. government kill public cloud computing over half a century ago?
The more you learn about the history of "cloud computing," the more you discover how old fashioned and well-proven the concept is. The basic principle underpinning cloud computing is economic: computing resources (including expertise) are "expensive" and subject to varying demands per user. If you can share those resources more efficiently across users, with their varying demands, you can achieve economies of scale and deliver (and/or capture) cost savings.
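That economic principle is easy to see with a little arithmetic. The sketch below is a hypothetical simulation (the user count, demand distribution, and time horizon are all made-up assumptions, not data from the post): each customer's demand is bursty, so dedicated equipment must be sized for each customer's individual peak, while a shared bureau only needs capacity for the peak of the combined load, which is proportionally much smaller.

```python
import random

random.seed(42)

USERS = 20
HOURS = 24 * 30  # one month of hourly demand samples

# Hypothetical bursty demand: each user usually needs little capacity,
# occasionally a lot (think month-end payroll runs).
samples = [[random.expovariate(1.0) for _ in range(HOURS)]
           for _ in range(USERS)]

# Dedicated equipment: every user provisions for their own peak hour.
dedicated = sum(max(user) for user in samples)

# Shared bureau: provision once, for the peak of the combined load.
shared = max(sum(user[h] for user in samples) for h in range(HOURS))

print(f"dedicated capacity: {dedicated:.1f}")
print(f"shared capacity:    {shared:.1f}")
print(f"savings from sharing: {1 - shared / dedicated:.0%}")
```

Because uncorrelated peaks rarely coincide, the pooled peak is far below the sum of individual peaks. That is the whole business case for service bureaus in 1932 and for cloud providers today.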
That basic economic principle is very, very old — so old that it easily predates electronic computing. In 1932, as a notable example, IBM opened its first "service bureaus." Back then state-of-the-art data processing equipment included punched cards, sorters, and tabulators. Businesses and government agencies had to make a substantial investment in order to buy (or lease) their own dedicated equipment, and they had to hire a cadre of experts to run it. Some did, but many had one or two accounting problems that they only had to solve occasionally, such as once per month. Economically it made much more sense to rent tabulating time, run their card jobs (such as payroll or billing), then pack up and come back a month later. IBM recognized this enormous opportunity and opened so-called service bureaus around the United States then, later, around the world.
In every way that's relevant, IBM's service bureaus from the 1930s provided exactly what cloud computing providers such as Amazon do today. Of course that was before widespread data networking — you had to access your local service bureau via sneakernet — but the business model was exactly the same.
IBM dominated data processing throughout most of the 20th century, arousing U.S. government antitrust concerns. The company's battle with the U.S. Department of Justice lasted decades, a battle which IBM technically won but which limited the company in certain ways. In 1956, at the start of the electronic computing era, IBM and the Department of Justice reached an agreement. That agreement covered several of IBM's business practices, including its service bureaus. IBM promised to operate its service bureaus according to certain rules. IBM transferred its bureaus to a new and separate subsidiary, the Service Bureau Corporation, which could not use the IBM name or logo despite IBM ownership. SBC's employees, directors, and corporate accounts had to be kept separate from its parent. IBM could not treat SBC any differently than other service bureaus, including service bureaus run by other computer companies, and had to supply equipment (and advance knowledge about that equipment) on an equal basis. And SBC had to follow certain rules about pricing for its services.
Did the 1956 consent decree kill cloud computing in its cradle, delaying the growth and popularity of cloud computing for a full half century? Quite possibly. Although SBC still did fairly well, the consent decree meant that IBM had to treat its own subsidiary as a second- or third-class customer. IBM and SBC couldn't talk privately about how best to design computers to support shared services because that information had to be shared with everyone. And if a company cannot protect its trade secrets then it is unlikely to invest as much in that area of technology.
Even so, IBM could (and did) invest an enormous amount of development effort in what we now know as private clouds. That is, IBM still had many large, individual customers that demanded systems which could support thousands of concurrent users, applications, and databases, all with the ultimate in qualities of service, including reliability and security. Sound familiar? It should, because that describes mainframe computing. Private clouds and public clouds ended up being very similar technically, and so SBC and its competitors, such as National CSS which started in the late 1960s, could still manage to deliver public cloud services.
So what happened? Why are companies like Amazon and Microsoft now trying to reinvent what SBC, National CSS, and other service bureaus started delivering decades ago?
Part of the answer dates back to the early 1970s. The #5 computer vendor at the time, Control Data Corporation, sued the #1 computer vendor, IBM. CDC accused IBM of monopolistic behavior. CDC and IBM eventually agreed to an out-of-court settlement. As part of the settlement, IBM sold SBC to CDC for a mere $16 million, and IBM agreed not to compete against SBC for several years. As it turned out, CDC had big business problems, and SBC wasn't enough to save CDC. However, SBC's core business was so strong that it survived even CDC's implosion, and now SBC lives on as Ceridian. Ceridian provides human resources-related services, such as payroll processing, often via Software as a Service (SaaS) and cloud technologies. To its credit, Ceridian publicly acknowledges its corporate history, tracing it all the way back to 1932 and IBM's service bureaus.
The other part of the answer depends on an accident of antitrust history and its influence on the economics of computing. At its most basic level, computing (data processing) consists of four finite resources: computing (CPU), storage (memory, disk, etc.), input/output (networking), and expertise (design, operations, programming, etc.). All four of these limited resources have costs. However, two of them (computing and storage) saw their costs fall dramatically and much earlier than the cost of networking fell. The major reason? In most countries, including the U.S., governments regulated and protected their national telecommunications monopolies. It was only when the Internet became popular starting in the late 1990s that the cost of networking followed the trends in computing and storage.
With lower costs for computing, storage, and now networking, brain power (expertise) is currently the most costly resource in computing. That's reflected in the ever-increasing prices for ever-more-sophisticated commercial software and in businesses' efforts to obtain expertise from anywhere in the world to develop and run their computing systems, otherwise known as offshoring. High quality data processing is not easy, and in a world where expertise is the most precious computing commodity, we're seeing a return to very familiar business models.
Is it any wonder why the (modern) mainframe and service bureaus are getting more popular? In the sense of business models at least, welcome to the future: the past!
by Timothy Sipples | September 5, 2011 | Cloud Computing, History
Windows Server 2008 and Itanium: No, Obviously Not Better
I had a really good laugh watching this video:
That video was uploaded to YouTube on November 29, 2009 — less than two years ago. But barely four months later, on April 2, 2010, Microsoft announced that Windows Server 2008 R2 would be the last Windows operating system for Itanium systems.
No, Windows 2008 and Itanium are not a "better alternative to the mainframe." They aren't even alternatives at all.
The Itanium turmoil in recent months is a good reminder of one of the hallmarks of mainframe computing: durability. As IBM celebrates its 100th birthday this week, I think the company's most astonishing accomplishment is the continuing fulfillment of a promise made in 1964, with the announcement of the System/360. It was a simple but powerful promise, one that requires extraordinary focus and continuous investment to keep: never again would people have to re-write applications unless there was a business reason to do so.
Here we are, over 47 years later, and IBM has kept that promise. Some people say the mainframe is "old." No, it's Itanium that's old, because Itanium couldn't even accomplish what the System/360 did starting in the mid-1960s: preserve your investment in your valuable code, written in any language — even 24-bit assembler.
Now, that's not to say you should keep all that old code. You might improve it, extend it, renovate it, adapt it, enable it, or otherwise do something with it. But those decisions should be based insofar as possible solely on business reasons, not on the whims of vendors. Breaking code is really, really expensive. It's durability that's extremely valuable — and thoroughly modern.
So if you've got a business process that you want to automate, and if the process (or at least its steps) might be around for a while, it's a very good idea to develop on the mainframe where you can keep running as long as you like. But it's not only that. That old code isn't trapped in some time capsule. You can fold, spindle, and mutilate it as much or as little as you want. You can run that old code right alongside code written 5 minutes ago, with old and new cooperating in myriad ways. It really is a magnificent accomplishment, one that hasn't been replicated.
Happy 100th birthday, IBM. And thanks for the durability.
by Timothy Sipples | June 17, 2011 | Application Development, History
IBM Centennial Film: "They Were There"
IBM produced this 30-minute film as part of its centennial year (2011). The film includes some interesting stories about the System/360, Sabre (airline reservations), and the Apollo space program, among others. Enjoy.
by Timothy Sipples | April 15, 2011 | History, Innovation
HP Itanium's Ignominious Demise (Updated)
I thought I'd let the dust settle a bit before commenting on recent news concerning the fate of HP's Itanium servers. I've worked with some organizations that have purchased a lot of Itanium servers, and I know they're all very distressed with software vendor abandonment of HP's flagship platforms.
First, a brief review of history. In the 1990s, HP still designed its own CPUs for its high-end servers, notably the PA-RISC architecture for its UNIX servers running HP-UX. However, HP's homegrown CPUs were rapidly becoming uncompetitive as other vendors (including IBM) could sustain bigger CPU R&D investments. HP had also acquired responsibility for former DEC platforms (specifically servers running the VMS operating system) and the Tandem NonStop platform (which some analysts classify as a mainframe environment), among others.

In an effort to cut costs, HP decided to end support for some operating systems completely, deliver minimal new function for others ("maintenance mode" only), and consolidate its operating systems onto a single CPU, with most of the design and fabrication outsourced to share costs. Thus HP struck a partnership with Intel to jointly create the Itanium CPU. The forecasts were rosy: Itanium would allegedly take over the world. In particular, Intel agreed that its X86 architecture would remain forever 32-bit only, while Itanium would be anointed as Intel's sole 64-bit architecture, with a new instruction set. For migration purposes, Intel and HP agreed to equip the Itanium CPU with a 32-bit X86 compatibility mode. And, HP promised, while its customers could experience some disruption moving their applications from other CPUs to the new CPUs, that would be a one-time problem worth the effort and expense.
That was the plan. But almost nothing went according to plan. Itanium was always late and slow. AMD created and shipped the X86-64 instruction set (called AMD64) in its CPUs, forcing Intel's hand in doing exactly the same thing. All but a few tiny vendors stopped shipping Itanium CPUs, leaving HP as the only company shipping Itanium CPUs in large numbers. And IBM kept gaining high-end server marketshare from HP, becoming far and away the #1 UNIX server vendor. IBM also nurtured the mainframe's renaissance.
Even so, for many years HP managed to keep a respectable Itanium business going, with HP-UX emerging as the #2 UNIX platform as Sun collapsed in the 2000s. But then some other shoes dropped. Microsoft stopped developing software for Itanium, eliminating Windows Server and SQL Server from the platform. Red Hat stopped shipping new versions of its Linux distribution, which also meant that Red Hat's JBoss application server left the platform. Those software vendor announcements were bad but probably not fatal. But then HP's Board of Directors fired the company's CEO, Mark Hurd. Hurd, a personal friend of Oracle CEO Larry Ellison, quickly joined Oracle. Oracle, fresh from its acquisition of Sun (the #3 UNIX server vendor) and as recently as 2006 HP's (and HP-UX's) strongest and most profitable software vendor, announced in late March that there would be no new versions of any Oracle software products for Itanium. Oracle Database 11gR2 would be the last — although HP customers are free to pay Oracle a perpetual stream of support fees for 11gR2 if they wish.
Needless to say, HP (and to a lesser extent, Intel) were stunned. But nobody buys servers only to fill racks in data centers. Companies and governments buy servers to run applications and databases. HP's biggest and most popular Itanium-compatible software product — its only big "anchor tenant" — has now hung up the "going out of business" sign and is closing its doors. Ironically, IBM is now Itanium's largest software vendor, by far. Also last month (in case the message wasn't clear enough), Oracle shipped the latest Oracle Database 11gR2 for Linux on IBM System z.
Let me state my assessment plainly: Oracle's announcement is Itanium's death sentence. It's also brutal but highly effective business strategy for Oracle in its competition with HP. Oracle will probably lose some HP customers who are (rightly, in my opinion) concerned about doing business with a vendor that abandoned them, but Oracle is betting, correctly, that most of its customers, when forced to choose sides, will choose Oracle's software over HP's hardware. To paraphrase a former U.S. presidential advisor, "It's the software, stupid."
For Intel it's a short-term annoyance but nothing more. This week Intel proceeded with its latest Xeon announcement and paid mere lip service to Itanium, at best. Intel would be quite happy to save R&D expenses and focus exclusively on X86 and, in particular, try to expand into mobile device markets and do battle with ARM. However, Intel would prefer not to anger its biggest OEM (HP) and might face some financial penalties acting on its own, so Oracle's announcement is in some ways a nice alternative.
And then there's IBM, which is extremely well positioned with its Power, System z, and software offerings, to attract and win the many soon-to-be-former HP customers. IBM can assist HP Itanium customers in any manner they wish: in straight-up migration from the #2 UNIX platform to the #1 (and growing) UNIX platform, in enterprise consolidation (through Power and/or System z) for additional cost savings and operational benefits, and/or in tactical or strategic migrations from Oracle software products to IBM software products, such as Oracle Database to DB2 (even on HP-UX), immediately or over time. Oracle's announcement also helps validate IBM's entire business strategy over the past decade-plus (and invalidate HP's): "It's the software, stupid."
As for HP, I think their new CEO has many challenges. There are some strong technical reasons (big endian v. little endian) why HP will find it very difficult to move HP-UX and other Itanium operating systems to Intel/AMD X86-64, and even once that's done (if it can be done) Oracle's software is still not going to support HP-UX. I suspect HP will just have to muddle through the Itanium meltdown and, as Intel improves X86-64, move toward an Itanium emulation solution for any remaining VMS, NonStop, and HP-UX customers. That'll provide application compatibility but not performance. Alternatively, HP might want to call IBM to inquire about a move to Power and/or System z CPUs which would be technically more suitable for HP's operating systems and which could at least be partitioned and virtualized to run other operating systems with continuing access to new business software from many vendors. Otherwise, I think HP is fated to devolve into much more of a Dell-like commodity X86-64 server (and PC) vendor, of course with lower profit margins. I don't know how HP builds a software business essentially from scratch. HP's business software portfolio is very limited, and there aren't too many affordable software vendors available for acquisition. Some analysts have suggested that HP should buy EnterpriseDB and try to migrate Oracle Database customers there, but I don't think that strategy could possibly work. In short: well played, Larry.
UPDATE #1: Expanding on the "HP on IBM" idea for a moment, IBM acquired Transitive, the company that produced Apple's superb Rosetta technology which helped Apple (ironically) move from PowerPC to Intel CPUs while maintaining compatibility with binary PowerPC applications. That technology works very well, and IBM Transitive would be the best in the world in helping HP's customers transition from Itanium to Power and/or System z CPUs while maintaining binary compatibility. Also, technically IBM and HP could do something very similar to what Apple did and place that CPU emulation within the same operating system instance. That would be highly desirable for HP-UX Itanium binaries specifically, which could then run as-is within AIX. (VMS and NonStop are probably too "alien" and would need to run in separate LPARs or virtual machines, but those two operating systems don't run inside HP-UX today, so that's fine.) Of course, HP would have to contact IBM, strike a deal, and collaborate. Given that IBM is now HP Itanium's largest software vendor, and that IBM possesses the best set of technologies to help HP customers migrate (whether HP collaborates or not), I think HP's CEO probably ought to give his counterpart at IBM a call if he hasn't already. It's better for HP to save some revenue, profit, and customer relationships. There's ample precedent for that sort of partnership between (former?) server rivals. For example, Hitachi is a major IBM Power and AIX OEM, and Hitachi and IBM also collaborated on the z800 mainframe.
UPDATE #2: Oracle's announcement also severely and negatively impacts HP's remaining VMS and NonStop customers. Oracle owns Rdb, the DEC-created database engine that's probably the most popular database for VMS systems. Oracle also owns Tuxedo, probably the most popular transaction manager for NonStop systems. Oracle is halting development for these middleware products, too. VMS and NonStop customers typically have very demanding quality of service (QoS) requirements, including application stability and longevity, so my recommendation would be (in most cases) that they plan for and execute a migration to IBM System z. That won't necessarily be easy, but it's the best available option given Oracle's actions.
|by Timothy Sipples||April 6, 2011 in History, Systems Technology |
Permalink | Comments (3) | TrackBack (0)
IBM's Birthday: "100x100"
As I mentioned previously, IBM is celebrating its 100th birthday this year. Here's a new and well-crafted video from IBM which cleverly illustrates just how far we've come and what's coming next:
I hope I'm doing as well as the first speaker in the video when I'm 100 years old!
One of the major changes at IBM that isn't too obvious in the video is the company's transition to software as its major profit generator. Software now accounts for almost half of IBM's total profit. The video mentions some notable software milestones (FORTRAN and relational databases, for example), but it would have been impossible to cover everything within 13 minutes.
|by Timothy Sipples||February 1, 2011 in History, Innovation |
2011: IBM's Centennial(ish) Year
IBM is a fascinating company in many ways. This year seems a particularly appropriate year to look back on IBM's long and interesting history, because 2011 is IBM's official centennial year.
In classic IBM fashion, the company may be underselling itself when it comes to recognizing its centennial. The Tabulating Machine Company, founded by Herman Hollerith — arguably history's leading computing pioneer — dates back to 1896 and commercialized much of the technology Hollerith invented for the 1890 U.S. Census. Hollerith's tabulating equipment improved the Census's productivity by at least a factor of three. There is a recognizably straightforward evolution, albeit a somewhat convoluted one with at least two big hardware technology transitions, from TMC's late 1800s punched card tabulating technologies to many of IBM's current technology offerings, even including the IBM zEnterprise mainframe. Yet IBM considers the incorporation of Computing Tabulating Recording (C-T-R) in mid-June, 1911, to be its official start. C-T-R, later renamed International Business Machines (IBM), was a merger of TMC, the Computing Scale Corporation, and the International Time Recording Company. It's not clear to me why the addition of meat scale and employee time clock businesses reset IBM's corporate clock, so to speak, especially when IBM divested those businesses long ago, but that's how IBM sees its history.
Anyway, even if IBM should be celebrating its 115th birthday instead of its 100th, understanding the company's history is helpful in understanding how IBM — and how technology overall — is likely to evolve in the future. Therefore I would recommend paying at least a little attention to IBM's centennial reflections over the course of this year.
|by Timothy Sipples||January 5, 2011 in Future, History |
50 Years Ago: The IBM 1401
I missed the exact 50th anniversary date, but here's a terrific short documentary celebrating the landmark IBM 1401 computer...and, more importantly, its people:
The IBM 1401 was so wildly popular that IBM provided optional 1401 emulation on the System/360 mainframes that were announced in 1964. Although it's hard to be certain, it is conceivable (and likely) that, somewhere, there are still programs originally written for the IBM 1401 that are running on today's System z mainframe, nearly 50 years on. Perhaps they've been modified and updated along the way, as business rules and other business needs evolved, but they were born many decades ago.
What an amazing journey.
|by Timothy Sipples||December 17, 2009 in History |
Seamus McManus: Beekeeper; Father of Cloud Computing
A two-minute 'history' of cloud computing, including the pivotal role of the mainframe. For a more factual account, see today's New York Times article.
|by Timothy Sipples||June 15, 2009 in History |
A Tale of Two Mainframe customers – one growing and one leaving the mainframe
This is the tale of two mainframe customers. One customer has achieved a period of tremendous growth in its business, processing transactions on the mainframe, while reducing expenses and becoming more resilient. The other chose to get off the mainframe at significant cost and, in all likelihood, spends more today than it would have on the mainframe. What's interesting is that at one time, they shared the same system infrastructure. And Clerity, a consulting firm, would like you to believe that the non-mainframe customer got tremendous value in the move. Here are their stories.
In any basic computer architecture class, a student will learn that the fewer the number of data moves, the better the performance. In an era of regulatory compliance and privacy considerations, that is doubly true, because each instance of data must now be auditable and recoverable, which adds cost for every extra copy. This becomes important when considering an outsourced computing environment. It appears that SIAC never got this level of education, while its customer, DTCC, seems to have excelled in this computer architecture class. Even funnier is that Clerity has decided that SIAC is a model customer… that doesn't bode well for their other consulting arrangements.
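The cost of extra data moves is easy to demonstrate. Here's a toy Python sketch (my own illustration, nothing like DTCC's actual workload) comparing two full in-memory copies of a buffer against a zero-copy view of the same data:

```python
import time

N = 20_000_000  # a 20 MB buffer, standing in for data moving between systems
data = bytearray(N)

# Two extra "moves": each copy materializes a brand-new buffer.
start = time.perf_counter()
copy1 = bytes(data)       # copy 1: bytearray -> bytes
copy2 = bytearray(copy1)  # copy 2: bytes -> bytearray
two_copies = time.perf_counter() - start

# Zero-copy: a memoryview exposes the same bytes without duplicating them.
start = time.perf_counter()
view = memoryview(data)
zero_copy = time.perf_counter() - start

print(f"two copies: {two_copies:.4f}s  view: {zero_copy:.6f}s")
assert zero_copy < two_copies  # eliminating moves is always cheaper
```

And every extra copy is not just slower: under audit and recovery rules, each copy is another instance of the data that must be tracked.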
So what really happened?
DTCC’s trading business was growing tremendously. But let’s have them tell you, in their own words:
Sometimes "insourcing" pays off more than outsourcing. Until last year, two DTCC subsidiaries outsourced all their infrastructure support activities to the Securities Industry Automation Corporation (SIAC). Now, following the completion of a multi-year initiative, DTCC has cut costs and bolstered business continuity by insourcing the activities previously performed by SIAC into DTCC's infrastructure. The two subsidiaries are National Securities Clearing Corporation (NSCC) and Fixed Income Clearing Corporation (FICC). …
On top of strengthening the industry's business continuity and infrastructure, the project is yielding financial benefits, enabling DTCC to cut the industry’s overall operating expenses. In 2006, by leveraging DTCC's processing capabilities, insourcing has reduced DTCC's annual operating expenses an estimated $42 million, said William Aimetti, DTCC’s chief operating officer. This was one factor that enabled DTCC to lower its fees in 2006.
And going back to another DTCC newsletter, they explained that they got a 167% performance improvement, without a line of code change, because they reduced the number of data moves and connections necessary to process a transaction:
To keep ahead of transaction volumes that have been rising sharply over the past several years, DTCC has significantly increased the capacity of its mainframe database for equity processing. The system, called Trade Repository Processing (TRP), can now process at least 160 million sides per day. This 167% increase is nearly triple the previous capacity of 60 million sides.
What’s more, the TRP can handle the additional volume within the same time frames, thanks to changes that make the system perform more efficiently. In addition, for current volumes, the upgrade allows DTCC to deliver certain participant reports, such as the Consolidated Trade Summary, up to 45 minutes earlier.
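The quoted figures are internally consistent, as a quick check shows (the 60 million and 160 million sides numbers are from the newsletter above):

```python
old_sides = 60_000_000   # previous daily capacity
new_sides = 160_000_000  # TRP capacity after the upgrade

increase = (new_sides - old_sides) / old_sides
print(f"{increase:.0%} increase")       # 167% increase
print(f"{new_sides / old_sides:.2f}x")  # 2.67x, i.e. "nearly triple"
```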
DTCC was able to do all this without modifying any of their customers' applications. While DTCC was hosted on the SIAC systems, they found that there were extra network hops and copies of data deployed. They were also heavily dependent on SIAC to make changes to the infrastructure on a regular basis. And because both of the SIAC systems were located in the New York City area, DTCC feared that a single catastrophe could take out the redundant systems and affect availability. These were the fundamental concerns that led DTCC to move out on its own.
Prior to this decision, SIAC signed a multi-year agreement for software, systems, and services with IBM. The agreement included discounted pricing based on capacity growth projections that SIAC provided as its objectives. By sharing the mainframe infrastructure with DTCC, SIAC dramatically reduced its own operational overhead, which was predominantly associated with batch processing and account reconciliation in the evening, while DTCC used the same processing infrastructure for trade transactions during the day. Their applications overlapped each other, though.
When DTCC pulled its two applications from the SIAC system, SIAC was left with a lot of free daytime capacity but a still reasonably busy system in the evening. SIAC was now completely responsible for the costs of that system. The growth it had projected to IBM was no longer possible, and as such, the discounts it had been offered no longer applied. This underutilized mainframe was now quite a bit more expensive to SIAC than it had been when the expense was shared with DTCC. That's a fact… nothing sweet about that.
SIAC could actually have downsized its mainframe and reduced its costs. Instead, it chose to "downsize" to a distributed environment. In doing so, SIAC also needed to meet federally mandated business resilience requirements, something it had ignored on its previous mainframe, and build another data center.
So using SIAC’s own words:
In 2006, when the Shared Data Center team, the technology arm of the New York Stock Exchange (NYSE), evaluated its internal infrastructure in light of competitive market factors, changing regulations, and anticipated future growth, the decision was made to replatform its 1,660 MIPS mainframe workload onto IBM System p Model 595 servers running AIX and UniKix rehosting software from Clerity.
"Quite simply, we can now transact more business per hour at a lower rate," said Francis Feldman, Vice President of the Shared Data Center.
So let's parse this a little bit. Notice the timeframe: 2006 is the same year that DTCC moved off the SIAC system. The 1,660 MIPS is the combined processing power for both DTCC and SIAC. The reality is that SIAC never used all that capacity itself, even though it owned the system. So while the statement is factually true, it isn't the whole picture. "Changing regulations" refers to the need to develop a second site outside the New York metropolitan area. After converting the applications, SIAC had to create automation and recovery scripts and deploy the new servers at an alternative location as well. I assume it licensed the software for those systems as well. And the conversion required changes for all of SIAC's customers, too; that cost certainly isn't factored into SIAC's migration costs. Finally, I wonder what SIAC's costs were when it shared the infrastructure. Is the new solution more or less expensive than that environment? Unfortunately, we'll never know, because the new environment includes a new data center, but I imagine the shared environment cost less.
An alternative plan, at presumably far less expense, effort, and time, and with little or no application changes, would have been to downsize the mainframe to match SIAC's new capacity requirements. To meet the resiliency objectives, SIAC could have installed a mainframe in another geographic location using IBM's Capacity Backup pricing, which charges for software usage only during an actual disaster while still allowing regular disaster-preparedness testing at no additional licensing cost. Many of IBM's customers take advantage of this model for availability processing.
And has availability improved? Click here for a list of outages that SIAC experienced in 2008. All types, but not associated with the mainframe. Perhaps SIAC changed their reporting structure in 2008, but I can’t find many outages listed in a search on 2007, though there is an awful lot of information dating back many years on their site.
As for SIAC, I don't know, but they don't spend nearly as much time bragging about their infrastructure as DTCC does. That should give you a clue right there!
|by JimPorell||January 6, 2009 in Current Affairs, Economics, History, Systems Technology |
Who do you trust? RFG? Yes. HP? No
In my last entry, I talked about the misleading information that HP provided regarding a consulting report to get off the mainframe. In that dialog, I also referenced the same consulting group giving pro-mainframe support and then asked rhetorically, who do you trust, RFG or RFG? I’ve worked closely with Robert Francis Group and I was wondering why they would have done this.
Well, it looks like RFG is a lot more trustworthy than HP, which misused the consulting report. Today, RFG put out a press release in which their headline reiterated:
Believes That the Mainframe Is One of the Best and Most Energy Efficient
Well, it was certainly nice to see that, and their explanation. Anyone care to hazard a guess as to when we might see a retraction or correction from HP?
|by JimPorell||November 17, 2008 in Economics, History, Innovation |
Of Hurricanes and Old Haunts
With one storm having hit New Orleans and another hammering South Texas, memories come back like flood waters. Thankfully, the Crescent City did not experience the trauma that we feared, a repeat of three years ago. Still, we all clung to our television and internet news feeds. To me, the names of the canals and streets and neighborhoods are all too familiar. I lived there once. It's where I met VM.
Back in the days when I was coasting through college, Mom and Dad were living in New Orleans. The Big Easy was the place where I went for the break. I needed a summertime job ... anything. Found a non-specific clerical job with a geologist at Shell Oil. Hey! That's cool! Working at One Shell Square! That geologist soon learned that I had a technical bent and set me to work on some minor programming he needed done. (This is another example of God throwing a gem in your path when you were not savvy enough to go digging for it.) The system was something called VM/SP.
This VM/SP thing was IBM all the way. Big hardware; big terminal. The terminal I used was a boat anchor of a thing, a 3270 model 5 with 27 lines and 132 columns. Strictly speaking it was a dumb terminal, but it was the smartest dumb terminal I had yet encountered. There was a component of VM/SP called CMS, the Conversational Monitor System. The way you interacted with CMS was that you typed a bunch on the screen, and the computer at the other end of the wire received all your typing at once.
At least once I snatched a glimpse of the 43xx series hardware in the machine room. There was some grand plan for these machines. (I later learned the plan was PROFS, which Shell and many other large companies used for office email ... before there was an Internet.) VM really caught on at Shell and was used for years, especially thanks to PROFS, later called OfficeVision, but also because engineers could build their batch jobs for other systems using the excellent editor built into CMS and then punch those jobs to other systems for execution. VM is like "datacenter glue," nimbly connecting unlike systems.
But VM was more than just a spiffy interactive system. It presented every user with his own (virtual) System/370 computer. I asked about this for clarification: It's your own personal "machine"? Yes. Can you boot other operating systems? Yes. Sweet! I figured that Shell had paid through the nose for this fine technology. Only later did I learn that VM was one of the cheaper system products available. When I got back to school, I told friends and professors about this amazing system which allowed you to run other operating systems concurrently on one physical box. One of my professors was especially clued in and in his overview of operating systems he pronounced, "VM is the baby", the precious one.
Eventually my alma mater did get VM. And then there was BITNET and that whole story, which we should save for another time.
The baby has grown up: VM/HPO, VM/XA, VM/ESA, and now z/VM. Though PROFS and OfficeVision have fallen to stacked Dominos, VM has recently enjoyed the resurgence you all know about, thanks to Linux. Its hypervisor is the finely tuned engine driving hundreds and sometimes thousands of virtual penguins. CMS hasn't changed much in appearance, so my old friends at Shell would still recognize it, while those in other shells (e.g., on Linux) have no idea that they're being supported by a warm teddy bear of a host.
When Katrina hit, aside from the human tragedy, I wondered how One Shell Square fared. Funny how the mind goes to the familiar even in the most shocking circumstances. My sources tell me Shell still runs VM, but not like it once did.
|by sirsanta||September 30, 2008 in History |
New York Times: "Why Old Technologies Are Still Kicking"
Few readers of this blog are unfamiliar with Stewart Alsop's 1991 claim that in five years' time the last mainframe would be unplugged. It's become a piece of good humor over the years - with even Stewart himself sharing in the laugh by appearing in the pages of one of IBM's annual reports to "eat his words."
Still, the issue has always been one considered from the point of "if."
On Sunday, the New York Times took a "why" look at the dynamics behind the comings and goings - and mostly the staying power - of various technologies. The article is called "Why Old Technologies Are Still Kicking," and is certainly worth a read.
Front and center in this reporting is the IBM mainframe. The article even includes a colorful (well, one of them is in color) pair of mainframe photos that not only show how far the mainframe has come, but how far fashion and hair style seem to have evolved during the mainframe's lineage. See below (and be sure to check out the Times article).
|by Kevin Acocella||March 25, 2008 in History |
Will Microsoft Windows be next on System z?
Anyone paying attention to the Sine Nomine demo two weeks ago now knows that Solaris is inevitably coming to System z. In the wake of all this, I have heard from a number of you, and from people throughout the industry, about, you guessed it: "What's next, Windows?" I'm guessing many of you have the same question. To really get at the dynamics of this, we have to look to the past.
Back in the early '90s, Windows was not just on the x86 architecture (and at the time, only on Intel chips: remember Wintel?). It also ran on the MIPS, Alpha, and PowerPC architectures. There's something a bit different about x86, and this gets a bit technical: x86 uses Little Endian byte ordering, while the RISC architectures under the popular UNIX platforms, and IBM's mainframe, use Big Endian ordering. Solaris and Linux were written in a way that makes the byte ordering transparent, but Microsoft decided long ago that Windows would run only in Little Endian mode.
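To make the endianness difference concrete, here's a short Python sketch (my illustration, not from the original discussion) showing the same 32-bit value laid out in each byte order, and what happens when one side reads the other's bytes:

```python
import struct

value = 0x11223344

little = struct.pack("<I", value)  # x86-style: least significant byte first
big = struct.pack(">I", value)     # S/390 / RISC-style: most significant byte first

print(little.hex())  # 44332211
print(big.hex())     # 11223344

# Reading Big Endian bytes with Little Endian assumptions scrambles the value,
# which is why byte order can't simply be ignored when porting an OS.
misread = struct.unpack("<I", big)[0]
print(hex(misread))  # 0x44332211
```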
Back in 1994, there was a skunk works within the IBM mainframe division that looked at running Windows NT as a native operating system on what was then a 10-way S/390 platform. The team figured out how to boot the machine as a Little Endian server, and Windows could have run across those 10 processors. But guess what? No hypervisor or virtualization capabilities would have been available, because those had been written in Big Endian mode. So it would have been an entire mainframe dedicated to a single instance of Windows. My Palm could do that.
So IBM realized then that this had no future in terms of consolidation value. In turn, Microsoft decided to support only the x86 architecture, and the Alpha and MIPS implementations of Windows died a rather quiet death.
Next up was bringing some Windows portability to the mainframe. Working with Bristol Technology (now a subsidiary of HP), IBM looked at getting a set of Windows 32-bit APIs, along with OLE and COM capabilities, onto OS/390. This was just after IBM had announced its intention to brand OS/390 as a UNIX operating system. Bristol Technology had a license to the Microsoft source code to facilitate that. Well, Microsoft must have gotten afraid of the possibility of Windows applications running easily on the mainframe, so it took away Bristol's software license. Bristol, in turn, sued Microsoft for unfair trade practices and won. But by then, Microsoft's approach had driven these types of developers from its platform. Today, Mainsoft Corporation provides a Windows portability layer across UNIX systems and z/OS, but we'll never see a day when Windows runs natively on the mainframe.
So what are the implications for the mainframe? Let's start with development tools for creating new applications. If you use only Microsoft .Net development tooling, those solutions will be relegated to the Wintel platform. Should those applications need to interoperate with the mainframe, there are a variety of connectors that enable interoperability with both 3270 and SOAP/Web service based applications, as well as with distributed data requests. As mentioned earlier, you can also use Mainsoft's technology to translate .Net code into Java byte codes and run it on z/OS and Linux for System z. That way you get some developer synergy, along with deployment options beyond Wintel platforms.
If you really want cross-platform deployment from the Windows desktop environment, eclipse.org is an open standards group, made up of a number of leading tooling vendors, that facilitates rapid application development and good tool integration while providing a flexible choice of platforms to which applications can be deployed. Leveraging this tool set facilitates exploitation of mainframe technology and is highly recommended for delivering the best qualities of service for software running on System z. IBM's Rational Developer for System z is an implementation of the Eclipse capabilities for System z.
|by JimPorell||December 14, 2007 in History |
the greaterIBM connection
One of the many innovations Sam Palmisano has spearheaded at IBM is the idea of reaching out to "alumni". The first initiative was a few years ago when he hosted a reception for a group of former executives of the company. A few were retired but most were in senior positions in other companies. That was just the beginning and now the idea of reaching out has been expanded -- big time. The number of past and present IBMers is probably close to a million people. Establishing communications with such a huge base can be nothing but a good thing for the company.
When I left engineering school and joined IBM in 1967, it was common to look for a job at a company and expect to stay there your entire career. Nobody thinks that way anymore. If you tell someone you were with a company for decades, they might ask "what's the matter, couldn't you find any other jobs?". Another change is that in the old days if someone left the company they were considered a traitor and barred from coming back. Today, there are many executives that left the company at some point, got some experience at one or more other companies, and then brought that experience back into IBM. Some have come and gone multiple times. The turnover has strengthened the company.
And now we have social networks. In the early stages there was a perception that social networking meant eleven year-old girls on MySpace. Now businesses are realizing that it is more likely forty or fifty year-old business people on Facebook and Xing and LinkedIn and Plaxo Pulse. The Internet has enabled everyone to be connected to everyone. Whether it is reading blogs, posting to wikis, updating status on Facebook, or making new connections through viral invitations, it is clear that a big company like IBM has a lot to gain by "connecting" past, present, and future IBMers to each other and with the company. IBM calls it "the greaterIBM connection". On Monday evening the company hosted a greaterIBM reception at the Metrazur at Grand Central Station in New York. More than four hundred attended. It was good to reconnect with some colleagues I had not seen for quite a few years.
Will social networking pay off in business terms? Nobody knows for sure, but in my opinion it is certain: as soon as we see the New York Times run a front-page story that social networking is a fad, in trouble, or peaking out, we will have confirmation that success is a sure thing. A short-term inhibitor is that there are so many different social networks. As web standards evolve, I am confident that we will have a world where people create one profile and then decide which part of that profile is accessible in which networks.
IBM sees the potential and is investing the time and resources to build a large and active network. The possibilities are endless -- collaboration on projects, networking to hire or get hired, crafting deals, referrals to and from IBM and its business partners. As a bonus, social networking is fun and good for morale. I look forward to continuing to be a part of the greaterIBM connection as it evolves. Upon e-tirement in 2001 after nearly four decades at IBM, I don't really feel like I left anyway! The stories that I have been writing since 1998 over at the patrickWeb blog fall into a number of categories. One section is devoted to "IBM Happenings". I am sure I will also be writing and linking at the greaterIBM connection along with others. Cross linking will increase the overall "connectedness". That's what the web is all about. I am really proud that IBM is taking networking and the blogosphere so seriously.
|by John R Patrick||November 14, 2007 in History, Innovation, People |
The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.