Cloud Computing


Server Hardware Trends: A Commodity Market Plus IBM

The New York Times summarized the latest IDC and Gartner server market share reports, highlighting the rise of the non-branded, custom-built commodity server makers that supply big Internet firms such as Facebook. "Others" is now the #3 server "vendor" on a hardware revenue basis and #1 on a volume basis. (On a revenue basis, IBM is #1 and HP is #2.)

These long-running trends are fascinating, and I've described them before in various ways. I think it's important, though, to distinguish between IBM and HP because they hold very different positions in the overall market. IBM is now the only remaining credible vendor of "high-end" servers. We've seen time and time again in many markets — retailing, to pick an excellent example — that getting stuck in the middle is a bad place to be because competitors attack from both below and above. The attack from below is based fundamentally on price, particularly acquisition price. Those are the "Others." The attack from above is based on value, sustained high levels of research and development to deliver innovation, and best-of-breed capabilities and qualities. That's IBM. In the middle is HP, the JCPenney of the server market. In a few more quarters Dell will probably be right there, too, but we'll see.

I very much like IBM's position given these market trends, and I'm not too worried about the slight hardware revenue dip IDC and Gartner reported given the structure of that dip. IBM's high end got higher, to put it succinctly, and there's some good evidence IBM's margins improved. Moreover, most of the revenue associated with IBM's servers isn't captured in the hardware figures alone, and that's unique to IBM. When HP sells a server, the sale typically doesn't include much else from HP. In contrast, it's very rare that an IBM server gets sold without substantial additional IBM content.

I don't know exactly how big the high-end server market will be, but it will continue to be a terrific business amidst the continuing explosion of information, long-term economic trends, and increasing quality demands. As long as IBM keeps finding ways to differentiate and to innovate up and down its solution set, the company will do fine, and, more importantly, so will its many and growing numbers of customers. However, while IBM is very much pursuing its high-end strategy with gusto, it is also eager to push into volume markets — IBM in the role of Target (and/or Costco) offering an alternative to Walmart, metaphorically speaking. I'm referring, of course, to IBM's OpenPOWER Consortium with Google, NVIDIA, and others.

So these Gartner and IDC reports are really not good news for HP in particular. As I've said before, I don't know how HP gets out of its shrinking box. HP's CEO Meg Whitman has a tough job.

by Timothy Sipples August 29, 2013 in Cloud Computing, Economics, Financial, Systems Technology

IBM Announces the OpenPOWER Consortium

IBM's Tom Rosamilia describes IBM's OpenPOWER Consortium announcement. IBM is sharing the complete blueprints for its POWER microprocessors with several major industry partners: Google, NVIDIA, Mellanox, and TYAN. Others are welcome to join. Yes, that Google, the search giant that buys many thousands of bespoke servers but which also has some of the most challenging data center-related problems in the world. Now Google gets an entire, more advanced microprocessor design to use as it pleases.

It's no secret that the traditional RISC UNIX market has struggled. IBM has been steadily gobbling up UNIX server market share for several years as other UNIX vendors, lately HP and Oracle/Sun, collapsed. But it's not good enough to dominate a (probably) declining market, so IBM is wisely trying to expand the whole market and go all-in on Linux cloud infrastructure. IBM has some superb launch partners in that effort.

I think it's a bold IBM move but a calculated one. IBM is basically trying to replicate ARM's success in the processor licensing business but in a much different market, a market Intel currently dominates with its proprietary x86 architecture. I'm referring to massive, horizontal scale-out computing architectures in (typically) remote data centers: large Linux-based public clouds, notably Google's, but also NVIDIA-infused GPU technologies for supercomputing (as another example). This isn't competing with ARM at all; despite a few rumblings, ARM isn't charging into data centers. Optimizing microprocessors for mobile use cases is quite different from optimizing for public cloud backends.

So will Intel get "squeezed" in the middle? The middle has proven to be a dangerous place in the server processor business. That's also why I remain extremely bullish on zEnterprise, by the way (which is doing very well indeed). It's certainly an interesting development, and it's really good news for customers. Frankly, IBM had to do something bold, and this move definitely qualifies.

It also puts IBM's acquisition of SoftLayer into better focus. I was a little unclear how SoftLayer would fit into IBM's strategy, but now it makes a lot more sense. It also makes complete sense for IBM's launch partners to join the OpenPOWER Consortium.

I like this.

by Timothy Sipples August 7, 2013 in Cloud Computing, Linux, Systems Technology

IBM Acquiring CSL International

IBM is acquiring CSL International and its CSL-WAVE software for monitoring and managing z/VM and Linux on zEnterprise.

It's clear that IBM is continuing to enhance its cloud offerings with this acquisition.

by Timothy Sipples July 10, 2013 in Cloud Computing, Financial

Musings on the Evolution of the Server Market

I've been doing a lot of thinking lately about trends in the server marketplace, in part based on conversations with server buyers but also as a student of history. To summarize, I think the world has decided there should be three basic types of servers:

  • High-end servers (aliases: enterprise servers, mission-critical servers, vertically scalable servers, large SMP servers, "system of record" servers, "data center in a box"). These servers are becoming more important every year as businesses and governments cope with massive amounts of data, as security needs grow, and as the consequences and costs of business process failures rise. Preeminent among these servers are IBM's zEnterprise machines, although IBM's high-end Power servers are also firmly in this category.
  • Workload-specific servers (aliases: appliances/appliance servers, application-specific servers, workload-optimized servers). These servers generally run one or a few highly related workloads that are packaged together and, at least to some degree, tweaked and tuned. They often include integrated disk and/or solid state storage. There's a range of capabilities in this category, from the highly flexible IBM PureSystems to more closed offerings such as Oracle Exadata.
  • Commodity servers (aliases: x86 servers, volume servers, horizontally scalable servers). Obviously lots of these servers are sold every year, and some buyers, such as big Internet companies, design their own hardware to their own specifications, bypassing vendors such as HP and Dell. Vendors such as Lenovo will also put pressure on HP and Dell in particular. IBM and Cisco have been trying to differentiate themselves with unique capabilities in this crowded market segment. Intel dominates this part of the server market, although it's still an open question whether ARM and/or AMD will be able to make further inroads as CMOS microprocessor technologies reach their natural limits.

There is some blurring between these categories, notably in the small but important supercomputer market which blends characteristics of workload-specific and commodity servers.

IBM has been carrying the high-end torch for years, building the world's most impressive, capable, and innovative enterprise servers. Consequently IBM always has a target on its back — it was ever thus. After all, if you don't have anything to compete with, then the only option is to try to dismiss the category entirely. Also, while one of the defining characteristics of a high-end enterprise server is its general purpose nature — the ability to run multiple disparate workloads with varying service level requirements, even within one operating system image — I would expect IBM to continue improving the "consumability" of its servers. An excellent recent example is the IBM DB2 Analytics Accelerator, which combines zEnterprise, DB2 for z/OS, and Netezza technologies into a single, integrated, extremely high performance information system for both transactional and business intelligence/data warehousing workloads. It "just works." In such ways these servers are workload-optimized while still retaining the full, open flexibility that general purpose high-end servers also have.

As I've mentioned before, the server business is tough. In some ways it resembles the global commercial aircraft industry which is highly competitive but also highly concentrated with only two major manufacturers: Boeing and Airbus. In the server market it's much the same now: IBM and Intel, perhaps with Oracle lately playing the role of Bombardier or Embraer (regional aircraft manufacturers for niche roles) — an even more volatile market segment. There's a simple reason: server (and processor) research and development is extremely expensive. Both IBM and Intel have built business models that, while different, work for them. IBM's server business model is analogous to Apple's, with lots of vertical integration, high value added, revenue stability and predictability, and a broad range of in-house capabilities. Intel's is more like Microsoft's, still its closest partner as it happens.

I am expecting the server market to continue along these three basic tracks for some time to come. I'm also expecting IBM and Intel to continue leading the industry. The vast majority of businesses and governments need some of what both these companies produce.

by Timothy Sipples August 21, 2012 in Cloud Computing, Systems Technology

Explosion of New Mainframe Software

I continue to marvel at how much new mainframe software is being introduced, and not just from IBM. Let's take a quick and necessarily incomplete tour:

  • IBM Financial Transaction Manager: Provides new and enhanced, pre-built, ready-to-roll support for financial industry transactions and messaging via SWIFT, as one example.
  • Tivoli System Automation Version 3.4: Extends automation throughout the zEnterprise and beyond, across many types of virtualized environments. It's now a very important ingredient in successful cloud deployments.
  • WebSphere Operational Decision Management: Adds business rules flexibility to your enterprise applications, regardless of programming language, and helps dramatically cut down on the amount of coding you need to do.
  • WebSphere eXtreme Scale Version 8.5 and WebSphere Application Server Version 8.5: Exciting for their performance, support for the latest cutting edge Java Enterprise Edition standards, and the new lightweight Liberty Profile deployment option.
  • Business Process Manager Version 8.0 and Business Monitor Version 8.0: New iPhone/iPad capabilities for viewing, managing, and participating in sophisticated, optimized business processes.
  • CICS Transaction Server Version 5.1: Lots of improvements, including CICS's very own sophisticated, standards-based Web user interface environment (with JSPs, etc.), support for the WebSphere Liberty Profile, a big leap in Java performance and flexibility, pre-configured MQ DPL support for containers (no more 32K limit!), and lots more 64-bit support, among other features. A beta version of CICS TS 5.1 will be publicly available for download.
  • CPLEX Optimizer: Lots of mathematical optimization routines, ready to use right from your core applications on your mainframe.
  • Tivoli OMEGAMON XE Version 5.1: Wow, they dramatically enhanced the 3270 interface and made the graphical interface easier to deploy. I love the new interface!
  • GT.M from FIS Global: This is a very high performance key-value "NoSQL" database, available as open source on PCs for developers and now also available on z/OS and Linux on z with full support from FIS. GT.M is the foundation for FIS PROFILE core banking applications (now available for z/OS as well), and it is also a very popular execution environment for applications written in the M programming language, also known as MUMPS. The healthcare industry is chock full of important MUMPS applications, including the open source VistA software created by the U.S. Veterans Administration, many of them mission-critical and some still running on old DEC VMS systems. Thus GT.M provides a wonderful new option for consolidating and simplifying thousands of healthcare industry applications onto IBM zEnterprise. (A small illustration of this data model, in Python, follows this list.)
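
For readers who haven't met the MUMPS data model, here is a minimal sketch of the idea behind GT.M "globals": sparse, hierarchical key-value trees addressed by subscript paths. This is written in Python purely for illustration; it is not GT.M's actual interface, and the names (ToyGlobal, the ^PATIENT example) are hypothetical.

    # Illustrative only: a toy, in-memory stand-in for the hierarchical
    # key-value ("global") model that MUMPS databases such as GT.M provide.
    # Real GT.M is typically accessed from M code, not from a class like this.

    class ToyGlobal:
        """A sparse tree keyed by tuples of subscripts, like ^PATIENT(123,"NAME")."""

        def __init__(self):
            self._store = {}

        def set(self, *path_and_value):
            *path, value = path_and_value
            self._store[tuple(path)] = value

        def get(self, *path, default=None):
            return self._store.get(tuple(path), default)

    # Roughly analogous M code: SET ^PATIENT(123,"NAME")="DOE,JANE"
    patients = ToyGlobal()
    patients.set(123, "NAME", "DOE,JANE")
    patients.set(123, "VISIT", "2012-05-01", "BP", "120/80")

    print(patients.get(123, "NAME"))                       # DOE,JANE
    print(patients.get(123, "VISIT", "2012-05-01", "BP"))  # 120/80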

by Timothy Sipples May 10, 2012 in Application Development, Cloud Computing, Innovation

IBM PureSystems: Simple Is Good

IBM officially unveils its new PureSystems today. With this launch, simplification takes a big step forward.

Enterprise applications and their interdependencies have become extremely complicated: hard to deploy, hard to manage, hard to scale, and impossible to secure. IBM is really working overtime to tame that complexity. Do take a close look.

Keep in mind that if you've got a zEnterprise server, with its Unified Resource Manager, you're already taming complexity like nothing else. For instance, IBM PureSystems initially support 4 operating environments across 2 processor architectures in harmony, which is a tremendous accomplishment. With zEnterprise you've got 8+ across 3+. (I'm using plus symbols because it depends on how you count, but 8 and 3 are the minimum counts.) In other words, IBM PureSystems are part of a continuum, and your zEnterprise server leads the way. It's extremely likely you'll want some of both in your data center.

So that's my instant reaction, with more comments to follow no doubt. What do you think? What are your most urgent issues?

by Timothy Sipples April 11, 2012 in Cloud Computing, Innovation, Systems Technology

Learning z/OS the Cloud(y) Way

Marist College is accepting new students for its z/OS professional courses. You can learn z/OS from the comfort of your own home — the courses are conducted online. The deadline for enrollment for the spring term is February 6, 2012. If you miss this term you can enroll for the next term, but why wait?

by Timothy Sipples January 24, 2012 in Cloud Computing, z/OS

Did the U.S. Government Kill Cloud Computing Over 50 Years Ago?

Ed and I were chatting about cloud computing last week. Ed is a co-worker with decades of experience in the IT industry. He said that all this talk about (public) cloud computing reminded him of the Service Bureau Corporation. I wasn't too familiar with SBC, so I read more about that interesting chapter in computing history. In fact, it's so interesting that I'm beginning to wonder: did the U.S. government kill public cloud computing over half a century ago?

The more you learn about the history of "cloud computing," the more you discover how old-fashioned and well-proven the concept is. The basic principle underpinning cloud computing is economic: computing resources (including expertise) are "expensive" and subject to varying demands per user. If you can share those resources more efficiently across users, with their varying demands, you can achieve economies of scale and deliver (and/or capture) cost savings.
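
To make that economic principle concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are entirely made up for illustration; the point is simply that because individual users' peak demands rarely coincide, a shared provider needs far less capacity than the sum of every user's individual peak.

    # Toy illustration of resource pooling, with made-up numbers.
    # Dedicated equipment must be sized for each user's own peak;
    # a shared service only needs capacity for the combined peak.
    import random

    random.seed(42)
    HOURS = 24 * 30   # one month of hourly demand samples
    USERS = 50        # number of tenants sharing the service

    # Hypothetical hourly demand per user: usually low, occasionally spiky.
    demand = [[random.choice([1, 1, 1, 2, 2, 10]) for _ in range(HOURS)]
              for _ in range(USERS)]

    dedicated = sum(max(user) for user in demand)                        # everyone buys for their own peak
    pooled = max(sum(user[h] for user in demand) for h in range(HOURS))  # provider buys for the combined peak

    print(f"Sum of individual peaks : {dedicated}")
    print(f"Peak of combined demand : {pooled}")
    print(f"Pooling needs about {pooled / dedicated:.0%} of the dedicated capacity")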

That basic economic principle is very, very old — so old that it easily predates electronic computing. In 1932, as a notable example, IBM opened its first "service bureaus." Back then state-of-the-art data processing equipment included punched cards, sorters, and tabulators. Businesses and government agencies had to make a substantial investment in order to buy (or lease) their own dedicated equipment, and they had to hire a cadre of experts to run it. Some did, but many had one or two accounting problems that they only had to solve occasionally, such as once per month. Economically it made much more sense to rent tabulating time, run their card jobs (such as payroll or billing), then pack up and come back a month later. IBM recognized this enormous opportunity and opened so-called service bureaus around the United States then, later, around the world.

In every way that's relevant, IBM's service bureaus from the 1930s provided exactly what cloud computing providers such as Amazon do today. Of course that was before widespread data networking — you had to access your local service bureau via sneakernet — but the business model was exactly the same.

IBM dominated data processing throughout most of the 20th century, arousing U.S. government antitrust concerns. The company's battle with the U.S. Department of Justice lasted decades, a battle which IBM technically won but which limited the company in certain ways. In 1956, at the start of the electronic computing era, IBM and the Department of Justice reached an agreement. That agreement covered several of IBM's business practices, including its service bureaus. IBM promised to operate its service bureaus according to certain rules. IBM transferred its bureaus to a new and separate subsidiary, the Service Bureau Corporation, which could not use the IBM name or logo despite IBM ownership. SBC's employees, directors, and corporate accounts had to be kept separate from its parent. IBM could not treat SBC any differently than other service bureaus, including service bureaus run by other computer companies, and had to supply equipment (and advance knowledge about that equipment) on an equal basis. And SBC had to follow certain rules about pricing for its services.

Did the 1956 consent decree kill cloud computing in its cradle, delaying the growth and popularity of cloud computing for a full half century? Quite possibly. Although SBC still did fairly well, the consent decree meant that IBM had to treat its own subsidiary as a second- or third-class customer. IBM and SBC couldn't talk privately about how best to design computers to support shared services because that information had to be shared with everyone. And if a company cannot protect its trade secrets then it is unlikely to invest as much in that area of technology.

Even so, IBM could (and did) invest an enormous amount of development effort in what we now know as private clouds. That is, IBM still had many large, individual customers that demanded systems which could support thousands of concurrent users, applications, and databases, all with the ultimate in qualities of service, including reliability and security. Sound familiar? It should, because that describes mainframe computing. Private clouds and public clouds ended up being very similar technically, and so SBC and its competitors, such as National CSS, which started in the late 1960s, could still manage to deliver public cloud services.

So what happened? Why are companies like Amazon and Microsoft now trying to reinvent what SBC, National CSS, and other service bureaus started delivering decades ago?

Part of the answer dates back to the early 1970s. The #5 computer vendor at the time, Control Data Corporation, sued the #1 computer vendor, IBM. CDC accused IBM of monopolistic behavior. CDC and IBM eventually agreed to an out-of-court settlement. As part of the settlement, IBM sold SBC to CDC for a mere $16 million, and IBM agreed not to compete against SBC for several years. As it turned out, CDC had big business problems, and SBC wasn't enough to save CDC. However, SBC's core business was so strong that it survived even CDC's implosion, and now SBC lives on as Ceridian. Ceridian provides human resources-related services, such as payroll processing, often via Software as a Service (SaaS) and cloud technologies. To its credit, Ceridian publicly acknowledges its corporate history, tracing it all the way back to 1932 and IBM's service bureaus.

The other part of the answer stems from an accident of antitrust history and its influence on the economics of computing. At its most basic level, computing (data processing) consists of four finite resources: computing (CPU), storage (memory, disk, etc.), input/output (networking), and expertise (design, operations, programming, etc.). All four of these limited resources have costs. However, two of them (computing and storage) saw their costs fall dramatically, and much earlier than the cost of networking fell. The major reason? In most countries, including the U.S., governments regulated and protected their national telecommunications monopolies. It was only when the Internet became popular, starting in the late 1990s, that the cost of networking followed the trends in computing and storage.

With lower costs for computing, storage, and now networking, brain power (expertise) is currently the most costly resource in computing. That's reflected in the ever-increasing prices for ever-more-sophisticated commercial software and in businesses' efforts to obtain expertise from anywhere in the world to develop and run their computing systems, otherwise known as offshoring. High quality data processing is not easy, and in a world where expertise is the most precious computing commodity, we're seeing a return to very familiar business models.

Is it any wonder that the (modern) mainframe and service bureaus are getting more popular? In the sense of business models at least, welcome to the future: the past!

by Timothy Sipples September 5, 2011 in Cloud Computing, History

Brief News Roundup for Mid-August, 2011

Here are some interesting mainframe-related stories from around the world:

  1. ComputerWeekly interviews 18-year-old Danish mainframe engineer John Prehn.
  2. Data Center Journal reports on "Big Iron Today: The State of Mainframes." Answer: The state is excellent.
  3. T3 Technologies, TurboHercules, and NEON Enterprise Software have withdrawn their complaints lodged with the European Union's antitrust authorities against IBM. T3 and NEON both lost U.S. court cases against IBM earlier this year.
  4. The Wall Street Journal writes: "Behind the Youthful Sales Surge for IBM Mainframes."
  5. Aptly named Micro Focus is looking for a white knight. Meanwhile, Compuware and BMC beat earnings estimates.

by Timothy Sipples August 15, 2011 in Cloud Computing, Current Affairs, People

Candidate for Lamest Excuse Not to Share Machines

Overheard from a data center manager (closely paraphrasing): "We have to buy separate machines because that's a separate business unit."

That manager made that assertion while sitting next to his company's data center. The building, power supplies, cooling systems, networking, fire suppression systems, security guards, telephones, lighting systems, and numerous other components of that data center — including his (too high?) salary — are all shared across business units. No, each business unit does not have its own data center. They somehow figured out how to share everything except the servers? Really?

What a lame excuse! Unfortunately his company is losing market share and has been roundly criticized for its poor efficiency and demonstrably awful IT security. I hope that manager figures out a way to help his company, quickly.

The zEnterprise 196's partitioning (LPARs) has been certified to the Common Criteria EAL5+ standard. No other business server's virtualization has achieved that level of certification, which attests to isolation equivalent to physically separate servers. Yes, you can share mainframes — even across businesses, not only business units. Practically everybody who owns mainframes does just that and enjoys the efficiencies.

See also: "How Many Mainframes Do You Need?"

by Timothy Sipples August 12, 2011 in Cloud Computing, Economics

Amazon and Microsoft Need Mainframes

According to Wikipedia, lightning occurs somewhere in the world about 44 times per second on average. A quarter of those lightning flashes strike the ground. Meteorologically, lightning is extremely common. All the more reason to wonder why both Amazon's and Microsoft's customers suffered hours-long outages due to a lightning strike.

I found Amazon's advice to its customers particularly galling: "For those looking for what you can do to recover more quickly, we recommend re-launching your instance in another Availability Zone." Translation: You handle your own disaster recovery (if you can), because obviously we can't.

These are certainly not the first outages for either company. It's rational to assume they won't be the last.

Fortunately Amazon and Microsoft customers have an alternative. They can follow these simple steps:

  1. Find at least two data centers, physically separated — your own, or someone else's. (Scores of IT service companies, not only IBM, operate mainframe-based clouds. They used to be called "service bureaus.")
  2. Put a mainframe at each site — your own, or share someone else's.
  3. Use any of several common, cross-site disaster recovery features available with mainframes, notably IBM's GDPS. Choose whichever flavor meets your particular recovery time objective (RTO) and recovery point objective (RPO) requirements. (A short sketch of what those objectives mean follows this list.)
  4. Hire competent IT staff, and pay them reasonably.
  5. Put your applications and information systems on these mainframes, at least for your most critical business services, end-to-end.
  6. Stop wasting money with Amazon and/or Microsoft.
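
As a footnote to step 3, here is a minimal sketch of what meeting RTO and RPO requirements means in practice. The recovery time objective (RTO) caps how long restoring service may take; the recovery point objective (RPO) caps how much recent data you may lose, for example the replication lag to the second site. This is plain Python with hypothetical names and numbers, not any GDPS interface.

    # Hypothetical illustration of checking a DR test against RTO/RPO targets.
    from dataclasses import dataclass

    @dataclass
    class DrTargets:
        rto_seconds: float  # maximum tolerable time to restore service
        rpo_seconds: float  # maximum tolerable data-loss window (e.g., replication lag)

    def meets_targets(targets: DrTargets, recovery_s: float, replication_lag_s: float) -> bool:
        """Return True when a disaster recovery test satisfies both objectives."""
        return recovery_s <= targets.rto_seconds and replication_lag_s <= targets.rpo_seconds

    # Example: tight targets such as a synchronous-replication setup might aim for.
    targets = DrTargets(rto_seconds=300, rpo_seconds=0)  # 5-minute RTO, zero data loss
    print(meets_targets(targets, recovery_s=240, replication_lag_s=0))   # True
    print(meets_targets(targets, recovery_s=240, replication_lag_s=30))  # False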

by Timothy Sipples August 8, 2011 in Business Continuity, Cloud Computing



The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.