IBM Passes Microsoft in Market Capitalization

I mentioned a few months ago that it might happen, and it now has. The financial markets now assign more value to IBM than to Microsoft. That is, after the close of Wall Street trading on September 29, 2011, IBM had a slightly higher market capitalization than Microsoft. Market capitalization refers to the total value of a company's shares of stock.
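For readers who want the arithmetic spelled out: market capitalization is simply shares outstanding multiplied by the share price. Here's a trivial sketch in Python (the share counts and prices are placeholders, not the actual September 29 closing figures):

```python
# Market capitalization = shares outstanding x share price.
# The figures below are placeholders for illustration only, not IBM's or
# Microsoft's actual September 29, 2011 share counts or closing prices.

def market_cap(shares_outstanding: float, share_price: float) -> float:
    return shares_outstanding * share_price

ibm_cap = market_cap(1.2e9, 180.00)    # hypothetical: ~1.2 billion shares at $180
msft_cap = market_cap(8.4e9, 25.00)    # hypothetical: ~8.4 billion shares at $25

print(f"IBM:       ${ibm_cap / 1e9:,.0f} billion")
print(f"Microsoft: ${msft_cap / 1e9:,.0f} billion")
```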

It's hard to say exactly what caused this flip-flop in market positions. However, Amazon's announcement of their new Microsoft-free Kindle Fire, priced at only $199, might have nudged the market. Microsoft derives about 60% of its revenues from its twin Windows and Office software franchises, and, like the iPad, the Kindle isn't tied to either of those Microsoft products. Mobile devices are eroding (or at least containing) Microsoft's previously unassailable client device business. Those mobile devices also rely heavily on cloud delivery of computing services, almost always using Linux-based servers and middleware which Microsoft does not produce.

Microsoft fully capitalized on one of history's smartest business deals when, back in 1981, it agreed to supply the operating system for IBM's then-new personal computer. However, Microsoft insisted on a non-exclusive deal with IBM, and it also insisted on retaining ownership of, and copyrights in, its operating system. IBM was desperate for an operating system, particularly since Digital Research had rebuffed IBM, at least initially. IBM still very much viewed itself as a box-pushing company at that time, and IBM's lawyers and management agreed to Microsoft's terms. Later, IBM considered but rejected the idea of buying Microsoft (and Intel) outright. The rest, as they say, is history.

What's particularly ironic now is that Microsoft arguably relied far too long on a strikingly similar box-pushing business model (Windows and Office), while IBM, chastened in the early 1990s, figured out how to recreate itself, in large part by recognizing the enormous value of a rich, business solution-supporting software portfolio. That's not to say Microsoft lacks such a portfolio: its server-oriented middleware business is fairly large and is one of the company's few moderate successes in recent years. However, Microsoft's middleware runs exclusively on Microsoft operating systems, and that's a challenge for the company and for prospective customers. In particular, Microsoft is losing a lot of potential business with Silicon Valley-based, cloud-oriented companies, precisely the sort of customers Microsoft would have won in the past.

I think this market capitalization flip-flop helps demonstrate why the future of the mainframe is extremely bright. What every current or prospective mainframe customer wants to see is a stable, prosperous supplier with a great business model, growing sales, aggressive investment in the platform and its software to stay six steps ahead of competitors, continuous improvements in price-performance, and increasing architectural relevancy in areas such as cloud, security, etc. We're seeing all that, and that's great news for now and for the future.

by Timothy Sipples September 29, 2011 in Financial

SK Communications Needs a Mainframe

This report details the July intrusion into Korea's largest telecommunications company, in which unknown hackers collected the personal details of up to 35 million Korean citizens. It is absolutely chilling and horrifying.

by Timothy Sipples September 27, 2011 in Security

Mainframe Cases Closed in Europe

The European Commission opened two investigations last year into IBM's mainframe-related business practices. The Commission has closed one case without action. In the other case, IBM suggested a solution, and the Commission praised IBM's suggestion but is allowing interested parties to comment.

The first investigation began when T3 Technologies and TurboHercules (later joined by Neon Enterprise Software) lodged a complaint alleging that IBM was guilty of antitrust abuses because IBM refused to license its mainframe software to run with their technologies. T3 and Neon also filed antitrust cases in U.S. courts. IBM complained that Microsoft, a competitor which has had significant antitrust problems of its own, was actually funding these complaints. IBM also noted that it spends billions of dollars to develop new mainframe technologies, that it holds multiple mainframe hardware patents on those innovations (including in Europe), that the complaining vendors did not have patent licenses, and that IBM should not be compelled to support infringement of its own patents. There was also recent and compelling precedent in the litigation between Psystar Corporation and Apple. T3 and Neon lost their related court cases in the U.S., and all three companies withdrew their European Commission complaints early last month.

The second investigation concerned maintenance services for IBM mainframes. Apparently some maintenance service companies in Europe had difficulty winning mainframe service contracts. IBM proposed changes in how it supplies mainframe spare parts and technical documentation. Pending a comment period, that case looks like it's resolved without drama.

by Timothy Sipples September 23, 2011 in Current Affairs

HP's Board Ousted CEO, Whitman Named (Updated)

Original Post: The Wall Street Journal (via The Australian) reports that Hewlett-Packard's board of directors met last night (September 21, U.S. time) to discuss firing current CEO Leo Apotheker. Apotheker began his tenure only in November 2010.

During Apotheker's short tenure, HP's stock price fell 45 percent. But since rumors of Apotheker's possible departure started swirling earlier this week, HP's stock has rebounded by 10 percent.

HP is clearly a company in turmoil. The Itanium Meltdown is one big reason, and so is HP's restructuring.

In my view it's perfectly rational for prospective HP customers to hold off purchases (or consider alternatives) while there's so much uncertainty about HP's ability and willingness to invest in particular businesses — and even uncertainty about which businesses HP will pursue and which it won't. That's particularly true in a tough global economic environment. I think Apotheker had some good ideas that might have fixed HP 5 or 10 years ago. He is significantly overpaying to acquire the wrong software company (Autonomy), but HP's board had to sign off on that deal and shares the blame. His execution could have been much more graceful, especially with the PC and tablet businesses. (The "we're getting out, but we're not sure how, except we're sure the HP brand won't go with it" announcement for the PC business was bizarre and disturbing.) When Oracle singlehandedly mooted HP's Itanium server business, Apotheker should have struck a deal with IBM along the lines I outlined previously. And Apotheker hasn't addressed how HP will get to where the market will be in 5 or 10 years.

On the bright side, HP still has lots of pricey printer ink to sell.

UPDATE #1: HP's board may name ex-eBay CEO Meg Whitman as the new CEO. Whitman joined HP's board after her unsuccessful 2010 California gubernatorial campaign, in which she spent $140 million of her own money and still lost by over 11 percentage points. That's despite the fact that her party won the majority of races in that election.

UPDATE #2: New HP CEO Meg Whitman talks with reporters.

by Timothy Sipples September 22, 2011 in Current Affairs, People

CIOs All Agree: Mainframes Are More Strategic Than Themselves

Alan Radding over at MainframeZone.com neatly destroys a recent Micro Focus-sponsored Standish Group marketing brochure survey of select CIOs. Several people posting comments to Alan's article also make good points.

I'd like to make one more point. According to CIO Magazine, the average tenure of a CIO barely exceeds four years. A supermajority (70%) of the Standish Group-selected CIOs agreed that their mainframes are strategic, but when asked to look forward 5 to 10 years, they were, at best, unsure. Ergo, all of the Standish Group-selected CIOs agree: their mainframes are more strategic, on average, than they are — and the mainframes will undoubtedly serve their employers much, much longer.

by Timothy Sipples September 21, 2011 in Analysts, Media

Mitsubishi Needs a Mainframe (Updated)

Mitsubishi Heavy Industries (MHI), Japan's top defense contractor, has discovered targeted viruses on more than 80 of its servers and PCs. Japanese Defense Minister Yasuo Ichikawa is assuring the public that the viruses didn't transmit the weapons plans stored on those computers to another party, but the truth is that nobody knows for sure yet. The Japanese government has ordered MHI to undertake a full investigation, and ministers are quite angry MHI didn't report the incident much earlier. It's an extremely serious security breach.

The Chinese government denies that it was responsible for the viruses. However, most governments spy on other governments. There are other reported attacks targeting defense contractors and Japanese government agencies, but the perpetrator is still unknown.

I know a bit about MHI. MHI is a very big, complex company. Among its many subsidiaries and affiliated companies, MHI has some in-house information technology talent, and some of its people have strong IBM mainframe skills. Unfortunately, as is evident from press accounts, MHI did not employ an IBM mainframe as its secure system of record for these weapons designs, which include designs for submarines, missiles, and destroyers. I hope that MHI will leverage its own mainframe-skilled people to support these high-security requirements and other mission-critical applications. While the incident is quite bad, the important thing now is to learn and to adapt — and to use the right server technology for the mission.

See also Sony Needs a Mainframe and About "(Blank) Needs a Mainframe."

UPDATE #1: DigiNotar, the Dutch certificate authority (which reminds me of CardSystems), is now bankrupt. Hackers penetrated DigiNotar and then generated signed SSL certificates that masqueraded as authentic certificates for Google, Facebook, Yahoo!, and other major Web sites. Hundreds of thousands of people, many in Iran, probably including many democratic movement leaders, then had their formerly secure Web browser sessions intercepted through so-called "man in the middle" attacks. Considering the results of a security audit, DigiNotar needed a mainframe. But it's too late for DigiNotar.
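Incidentally, a CA compromise like DigiNotar's is exactly the scenario that certificate pinning is meant to catch: instead of trusting any certificate signed by any authority, a client compares the certificate a server presents against a fingerprint it already knows. A minimal, illustrative Python sketch (the pinned value below is a placeholder, not a real fingerprint):

```python
import hashlib
import socket
import ssl

# Placeholder pin -- in practice you would record the SHA-256 fingerprint of the
# certificate (or public key) you expect the server to present.
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def fetch_cert_fingerprint(host: str, port: int = 443) -> str:
    """Connect to host:port over TLS and return the SHA-256 fingerprint
    of the DER-encoded certificate the server presents."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest()

if __name__ == "__main__":
    fingerprint = fetch_cert_fingerprint("www.google.com")
    if fingerprint != PINNED_SHA256:
        print("WARNING: certificate does not match the pinned fingerprint")
        print("observed:", fingerprint)
```

Real browsers and applications do this far more robustly, of course, but the idea is the same: a rogue certificate, even one signed by a "trusted" authority, won't match the pin.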

UPDATE #2: Microsoft, which I mentioned last month, still needs a mainframe.

by Timothy Sipples September 20, 2011 in Security

zEnterprise and SAP: Perfect Together

I've seen some past criticism that IBM doesn't publish benchmarks for its mainframes. That's another "mainframe myth." IBM does publish benchmarks from time to time. Here's the latest example: a new world record for SAP core banking performance and scalability.

Benchmarks aren't useful at all unless they resemble your projected reality as a whole. They're simply tools for understanding how a system will behave when you put it to real, productive work. Unfortunately, every business (and government agency) is unique, and when you're looking at a thoroughly mixed workload environment (IBM's zEnterprise), running a couple of off-the-shelf tests isn't going to help much.

That said, this SAP benchmark is at least a better attempt, because it's trying to simulate what a real bank would do during the day, night, and across multiple channels and business functions. And the zEnterprise results are extremely impressive, even measuring SAP by itself. (IBM mainframes can and do of course run lots of applications on the same footprints.)

It's tough to benchmark "run most of my entire business" scenarios. But it's not necessarily impossible. I've worked with a lot of customers who pack up, go visit a benchmark center, and figure out how big their systems should be, how they should (or should not) architect their applications, etc. That's the best way to gain confidence about how well particular infrastructure(s) will support future business demands. (They can also test quality of service issues, such as how quickly they can fail over to a second data center, how much if any interruption there would be when upgrading software versions, and so on.)

Alternatively, if you've already got a mainframe, you're probably benchmarking all the time, perhaps without knowing it. You probably already know that your mainframe can handle the next merger or acquisition, and the next five applications, and then some. Maybe that'll require adding some capacity, maybe not, but the IBM mainframe inherently yields lots of performance-related data, so you already know you can deliver a quality outcome. After all, the easiest benchmark is the one you don't have to run. There are also affordable Capacity On Demand (COD) options for running special high-capacity tests on your own machine(s) — or for saving your business if the programmers delivered some poorly performing code that they can't fix right away.
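As a toy illustration of the kind of back-of-the-envelope check that routinely collected performance data makes possible, here's a sketch in Python. The utilization samples, growth factor, and ceiling are all invented numbers, not drawn from any real system:

```python
# A toy headroom check: given hourly CPU-utilization samples (percent busy)
# already collected from a running system, estimate whether a projected
# workload increase still fits under a target utilization ceiling.
# All numbers here are invented for illustration.

hourly_utilization = [42, 38, 55, 61, 73, 68, 80, 77, 64, 52, 47, 40]  # % busy
projected_growth = 1.25   # e.g., a merger adds ~25% more work
target_ceiling = 90.0     # don't plan to run hotter than 90% busy at peak

peak = max(hourly_utilization)
projected_peak = peak * projected_growth

print(f"Current peak: {peak:.0f}%  ->  projected peak: {projected_peak:.0f}%")
if projected_peak <= target_ceiling:
    print("Projected workload fits within the target ceiling.")
else:
    needed = projected_peak / target_ceiling
    print(f"Plan for roughly {needed:.2f}x current capacity (or use capacity on demand).")
```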

Of course, if you have 150 million bank accounts and you want to run SAP core banking, just go buy a couple of zEnterprise machines from IBM (and a zBX or two). You've now got that benchmark, too.

by Timothy Sipples September 16, 2011 in DB2, Systems Technology

Did the U.S. Government Kill Cloud Computing Over 50 Years Ago?

Ed and I were chatting about cloud computing last week. Ed is a co-worker with decades of experience in the IT industry. He said that all this talk about (public) cloud computing reminded him of the Service Bureau Corporation. I wasn't too familiar with SBC, so I read more about that interesting chapter in computing history. In fact, it's so interesting that I'm beginning to wonder: did the U.S. government kill public cloud computing over half a century ago?

The more you learn about the history of "cloud computing," the more you discover how old-fashioned and well-proven the concept is. The basic principle underpinning cloud computing is economic: computing resources (including expertise) are "expensive" and subject to varying demands per user. If you can share those resources more efficiently across users, with their varying demands, you can achieve economies of scale and deliver (and/or capture) cost savings.
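You can see the economics in a toy simulation: when each user's demand fluctuates independently, the peak of the pooled demand is far smaller than the sum of the individual peaks, so a shared facility needs much less total capacity than giving every user dedicated equipment. A sketch in Python, with an invented demand model purely to illustrate the principle:

```python
# Toy illustration of why pooling variable demand saves capacity.
# Each "user" needs capacity that fluctuates hour to hour; sizing dedicated
# equipment means provisioning for each user's individual peak, while a
# shared service only needs to cover the peak of the combined demand.
# The demand model below is invented purely for illustration.
import random

random.seed(42)
HOURS = 24 * 30          # one month of hourly samples
USERS = 20

demands = [[random.expovariate(1 / 10) for _ in range(HOURS)] for _ in range(USERS)]

sum_of_peaks = sum(max(user) for user in demands)        # dedicated sizing
pooled_peak = max(sum(hour) for hour in zip(*demands))   # shared sizing

print(f"Dedicated capacity needed: {sum_of_peaks:8.1f} units")
print(f"Shared capacity needed:    {pooled_peak:8.1f} units")
print(f"Savings from pooling:      {1 - pooled_peak / sum_of_peaks:7.0%}")
```

Run it and the pooled capacity typically comes out at a fraction of the dedicated total; that gap is the service bureau's (and the cloud provider's) margin, and the customer's savings.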

That basic economic principle is very, very old — so old that it easily predates electronic computing. In 1932, as a notable example, IBM opened its first "service bureaus." Back then state-of-the-art data processing equipment included punched cards, sorters, and tabulators. Businesses and government agencies had to make a substantial investment in order to buy (or lease) their own dedicated equipment, and they had to hire a cadre of experts to run it. Some did, but many had one or two accounting problems that they only had to solve occasionally, such as once per month. Economically it made much more sense to rent tabulating time, run their card jobs (such as payroll or billing), then pack up and come back a month later. IBM recognized this enormous opportunity and opened so-called service bureaus around the United States then, later, around the world.

In every way that's relevant, IBM's service bureaus from the 1930s provided exactly what cloud computing providers such as Amazon do today. Of course that was before widespread data networking — you had to access your local service bureau via sneakernet — but the business model was exactly the same.

IBM dominated data processing throughout most of the 20th century, arousing U.S. government antitrust concerns. The company's battle with the U.S. Department of Justice lasted decades, a battle which IBM technically won but which limited the company in certain ways. In 1956, at the start of the electronic computing era, IBM and the Department of Justice reached an agreement. That agreement covered several of IBM's business practices, including its service bureaus. IBM promised to operate its service bureaus according to certain rules. IBM transferred its bureaus to a new and separate subsidiary, the Service Bureau Corporation, which could not use the IBM name or logo despite IBM ownership. SBC's employees, directors, and corporate accounts had to be kept separate from its parent. IBM could not treat SBC any differently than other service bureaus, including service bureaus run by other computer companies, and had to supply equipment (and advance knowledge about that equipment) on an equal basis. And SBC had to follow certain rules about pricing for its services.

Did the 1956 consent decree kill cloud computing in its cradle, delaying the growth and popularity of cloud computing for a full half century? Quite possibly. Although SBC still did fairly well, the consent decree meant that IBM had to treat its own subsidiary as a second- or third-class customer. IBM and SBC couldn't talk privately about how best to design computers to support shared services because that information had to be shared with everyone. And if a company cannot protect its trade secrets then it is unlikely to invest as much in that area of technology.

Even so, IBM could (and did) invest an enormous amount of development effort in what we now know as private clouds. That is, IBM still had many large, individual customers that demanded systems which could support thousands of concurrent users, applications, and databases, all with the ultimate in qualities of service, including reliability and security. Sound familiar? It should, because that describes mainframe computing. Private clouds and public clouds ended up being very similar technically, and so SBC and its competitors, such as National CSS which started in the late 1960s, could still manage to deliver public cloud services.

So what happened? Why are companies like Amazon and Microsoft now trying to reinvent what SBC, National CSS, and other service bureaus started delivering decades ago?

Part of the answer dates back to the early 1970s. The #5 computer vendor at the time, Control Data Corporation, sued the #1 computer vendor, IBM. CDC accused IBM of monopolistic behavior. CDC and IBM eventually agreed to an out-of-court settlement. As part of the settlement, IBM sold SBC to CDC for a mere $16 million, and IBM agreed not to compete against SBC for several years. As it turned out, CDC had big business problems, and SBC wasn't enough to save CDC. However, SBC's core business was so strong that it survived even CDC's implosion, and now SBC lives on as Ceridian. Ceridian provides human resources-related services, such as payroll processing, often via Software as a Service (SaaS) and cloud technologies. To its credit, Ceridian publicly acknowledges its corporate history, tracing it all the way back to 1932 and IBM's service bureaus.

The other part of the answer depends on an accident of antitrust history and its influence on the economics of computing. At its most basic level, computing (data processing) consists of four finite resources: computation (CPU), storage (memory, disk, etc.), input/output (networking), and expertise (design, operations, programming, etc.). All four of these limited resources have costs. However, two of them (computation and storage) saw their costs fall dramatically, and much earlier than the cost of networking fell. The major reason? In most countries, including the U.S., governments regulated and protected their national telecommunications monopolies. It was only when the Internet became popular, starting in the late 1990s, that the cost of networking followed the trends in computing and storage.

With lower costs for computing, storage, and now networking, brain power (expertise) is currently the most costly resource in computing. That's reflected in the ever-increasing prices for ever-more-sophisticated commercial software and in businesses' efforts to obtain expertise from anywhere in the world to develop and run their computing systems, otherwise known as offshoring. High quality data processing is not easy, and in a world where expertise is the most precious computing commodity, we're seeing a return to very familiar business models.

Is it any wonder why the (modern) mainframe and service bureaus are getting more popular? In the sense of business models at least, welcome to the future: the past!

by Timothy Sipples September 5, 2011 in Cloud Computing, History



The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.