Did the U.S. Government Kill Cloud Computing Over 50 Years Ago?

Ed and I were chatting about cloud computing last week. Ed is a co-worker with decades of experience in the IT industry. He said that all this talk about (public) cloud computing reminded him of the Service Bureau Corporation. I wasn't too familiar with SBC, so I read more about that interesting chapter in computing history. In fact, it's so interesting that I'm beginning to wonder: did the U.S. government kill public cloud computing over half a century ago?

The more you learn about the history of "cloud computing," the more you discover how old-fashioned and well-proven the concept is. The basic principle underpinning cloud computing is economic: computing resources (including expertise) are "expensive" and subject to varying demands per user. If you can share those resources more efficiently across users, with their varying demands, you can achieve economies of scale and deliver (and/or capture) cost savings.
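To make that pooling argument concrete, here is a minimal sketch (my own illustration, not from the original post) with made-up demand numbers: each user's workload mostly idles but occasionally spikes, so a shared pool sized for the combined peak needs far less capacity than the sum of every user's individual peak.

```python
# Hypothetical illustration of resource pooling (statistical multiplexing).
# All numbers are invented; only the shape of the argument matters.
import random

random.seed(42)

USERS = 50          # hypothetical number of customers
HOURS = 24 * 30     # one month of hourly demand samples

# Each user mostly idles but occasionally spikes (think month-end payroll).
demand = [[random.choice([0, 0, 0, 1, 1, 2, 8]) for _ in range(HOURS)]
          for _ in range(USERS)]

# Dedicated equipment: every user must buy capacity for their own peak.
dedicated_capacity = sum(max(user) for user in demand)

# Shared service bureau / cloud: size the pool for the combined hourly peak.
pooled_capacity = max(sum(user[h] for user in demand) for h in range(HOURS))

print(f"Sum of individual peaks: {dedicated_capacity} units")
print(f"Peak of the shared pool: {pooled_capacity} units")
print(f"Capacity saved by sharing: {1 - pooled_capacity / dedicated_capacity:.0%}")
```

With these invented numbers the shared pool needs only a fraction of the capacity that dedicated, per-user equipment would require; the exact savings depend entirely on the assumed demand pattern, but the direction of the result is the whole economic case for service bureaus then and public clouds now.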

That basic economic principle is very, very old — so old that it easily predates electronic computing. In 1932, as a notable example, IBM opened its first "service bureaus." Back then, state-of-the-art data processing equipment included punched cards, sorters, and tabulators. Businesses and government agencies had to make a substantial investment to buy (or lease) their own dedicated equipment, and they had to hire a cadre of experts to run it. Some did, but many had one or two accounting problems that they only needed to solve occasionally, such as once per month. Economically, it made much more sense to rent tabulating time, run their card jobs (such as payroll or billing), and then pack up and come back a month later. IBM recognized this enormous opportunity and opened so-called service bureaus around the United States and, later, around the world.

In every way that's relevant, IBM's service bureaus from the 1930s provided exactly what cloud computing providers such as Amazon do today. Of course that was before widespread data networking — you had to access your local service bureau via sneakernet — but the business model was exactly the same.

IBM dominated data processing throughout most of the 20th century, arousing U.S. government antitrust concerns. The company's battle with the U.S. Department of Justice lasted decades, a battle which IBM technically won but which limited the company in certain ways. In 1956, at the start of the electronic computing era, IBM and the Department of Justice reached an agreement. That agreement covered several of IBM's business practices, including its service bureaus. IBM promised to operate its service bureaus according to certain rules. IBM transferred its bureaus to a new and separate subsidiary, the Service Bureau Corporation, which could not use the IBM name or logo despite IBM ownership. SBC's employees, directors, and corporate accounts had to be kept separate from its parent. IBM could not treat SBC any differently than other service bureaus, including service bureaus run by other computer companies, and had to supply equipment (and advance knowledge about that equipment) on an equal basis. And SBC had to follow certain rules about pricing for its services.

Did the 1956 consent decree kill cloud computing in its cradle, delaying the growth and popularity of cloud computing for a full half century? Quite possibly. Although SBC still did fairly well, the consent decree meant that IBM had to treat its own subsidiary as a second- or third-class customer. IBM and SBC couldn't talk privately about how best to design computers to support shared services because that information had to be shared with everyone. And if a company cannot protect its trade secrets then it is unlikely to invest as much in that area of technology.

Even so, IBM could (and did) invest an enormous amount of development effort in what we now know as private clouds. That is, IBM still had many large, individual customers that demanded systems which could support thousands of concurrent users, applications, and databases, all with the ultimate in qualities of service, including reliability and security. Sound familiar? It should, because that describes mainframe computing. Private clouds and public clouds ended up being very similar technically, and so SBC and its competitors, such as National CSS, which started in the late 1960s, could still manage to deliver public cloud services.

So what happened? Why are companies like Amazon and Microsoft now trying to reinvent what SBC, National CSS, and other service bureaus started delivering decades ago?

Part of the answer dates back to the early 1970s. The #5 computer vendor at the time, Control Data Corporation, sued the #1 computer vendor, IBM. CDC accused IBM of monopolistic behavior. CDC and IBM eventually agreed to an out-of-court settlement. As part of the settlement, IBM sold SBC to CDC for a mere $16 million, and IBM agreed not to compete against SBC for several years. As it turned out, CDC had big business problems, and SBC wasn't enough to save CDC. However, SBC's core business was so strong that it survived even CDC's implosion, and now SBC lives on as Ceridian. Ceridian provides human resources-related services, such as payroll processing, often via Software as a Service (SaaS) and cloud technologies. To its credit, Ceridian publicly acknowledges its corporate history, tracing it all the way back to 1932 and IBM's service bureaus.

The other part of the answer depends on an accident of antitrust history and its influence on the economics of computing. At its most basic level, computing (data processing) consists of four finite resources: computing (CPU), storage (memory, disk, etc.), input/output (networking), and expertise (design, operations, programming, etc.). All four of these limited resources have costs. However, two of them (computing and storage) saw their costs fall dramatically, and much earlier than the cost of networking did. The major reason? In most countries, including the U.S., governments regulated and protected their national telecommunications monopolies. It was only when the Internet became popular, starting in the late 1990s, that the cost of networking followed the trends in computing and storage.

With lower costs for computing, storage, and now networking, brain power (expertise) is currently the most costly resource in computing. That's reflected in the ever-increasing prices for ever-more-sophisticated commercial software and in businesses' efforts to obtain expertise from anywhere in the world to develop and run their computing systems, otherwise known as offshoring. High quality data processing is not easy, and in a world where expertise is the most precious computing commodity, we're seeing a return to very familiar business models.

Is it any wonder that the (modern) mainframe and service bureaus are getting more popular? In the sense of business models at least, welcome to the future: the past!

by Timothy Sipples September 5, 2011 in Cloud Computing, History

Comments

For a while, ITT owned SBC, and I think they acquired it with an eye toward using it to do all the IT for their hundreds of subsidiary corporations. When it became obvious that scenario was not going to fly (for both technical and internal political reasons), they sold it off.

When I got in the profession (1963), hardware was the major cost. Software was mostly user-written and the biggest expense was people for both IBM and customers. When IBM finally decided to do hardware & software together by building the /360 line, the software cost rose (for IBM), but people were still the biggest expense.

That probably impacted some aspects of the design of OS. Unlike Wintel servers designed to run pretty much hands-off, IBM systems have always needed some human attention. z/OS is much more sophisticated than earlier systems but still needs operators. There is no job category of 'Server Operator,' nor does 'operating' a server require the level of training and knowledge that a z/OS operator requires. If a reboot doesn't clear a server problem, a server tech is required, but these events are exceptions rather than the intended, ongoing, normal daily process.

(At least they're supposed to be exceptions. My current shop has 7 server techs, and my estimate is they spend half their time installing & upgrading and the other half dealing with server failures. Not only is the hardware less robust than a mainframe, but the software is often flaky, and the combination increases the people-cost significantly. If Linux-based versions of our server apps were available, we could probably run them all on the mainframe with 3 or 4 techies.)

Posted by: Ray Saunders | Sep 5, 2011 10:39:11 AM

Cloud computing also promises reliability. Like any equipment, these technologies can malfunction: at any given moment they might crash and stop operating, so maintenance is important. But maintenance means system downtime, and downtime can certainly hurt an enterprise, particularly when it is unplanned. Given that concern, a backup method is needed to minimize the threat of system glitches.

Posted by: It support | Dec 21, 2011 7:06:34 AM



