Musings on the Evolution of the Server Market
I've been doing a lot of thinking lately about trends in the server marketplace, in part based on conversations with server buyers but also as a student of history. To summarize, I think the world has decided there should be three basic types of servers:
- High-end servers (aliases: enterprise servers, mission-critical servers, vertically scalable servers, large SMP servers, "system of record" servers, "data center in a box"). These servers are becoming more important every year as businesses and governments cope with massive amounts of data, as security needs grow, and as the consequences and costs of business process failures rise. Preeminent among these servers are IBM's zEnterprise machines, although IBM's high-end Power servers are also firmly in this category.
- Workload-specific servers (aliases: appliances/appliance servers, application-specific servers, workload-optimized servers). These servers generally run one or a few highly related workloads that are packaged together and, at least to some degree, tweaked and tuned. They often include integrated disk and/or solid-state storage. Capabilities in this category range from the highly flexible IBM PureSystems to more closed offerings such as Oracle Exadata.
- Commodity servers (aliases: x86 servers, volume servers, horizontally scalable servers). Obviously lots of these servers are sold every year, and some buyers, such as big Internet companies, design their own hardware to their own specifications, bypassing vendors such as HP and Dell. Vendors such as Lenovo will also put pressure on HP and Dell in particular. IBM and Cisco have been trying to differentiate themselves with unique capabilities in this crowded market segment. Intel dominates this part of the server market, although it's still an open question whether ARM and/or AMD will be able to make further inroads as CMOS microprocessor technologies reach their natural limits.
There is some blurring between these categories, notably in the small but important supercomputer market which blends characteristics of workload-specific and commodity servers.
IBM has been carrying the high-end torch for years, building the world's most impressive, capable, and innovative enterprise servers. Consequently IBM always has a target on its back — it has always been thus. After all, if you don't have anything to compete with, then the only option is to try to dismiss the category entirely. Also, while one of the defining characteristics of a high-end enterprise server is its general purpose nature — the ability to run multiple disparate workloads with varying service level requirements, even within one operating system image — I would expect IBM to continue improving the "consumability" of its servers. An excellent recent example is the IBM DB2 Analytics Accelerator, which combines zEnterprise, DB2 for z/OS, and Netezza technologies into a single, integrated, extremely high performance information system for both transactional and business intelligence/data warehousing workloads. It "just works." In such ways these servers are workload-optimized while still retaining the full, open flexibility of general purpose high-end servers.
As I've mentioned before, the server business is tough. In some ways it resembles the global commercial aircraft industry which is highly competitive but also highly concentrated with only two major manufacturers: Boeing and Airbus. In the server market it's much the same now: IBM and Intel, perhaps with Oracle lately playing the role of Bombardier or Embraer (regional aircraft manufacturers for niche roles) — an even more volatile market segment. There's a simple reason: server (and processor) research and development is extremely expensive. Both IBM and Intel have built business models that, while different, work for them. IBM's server business model is analogous to Apple's, with lots of vertical integration, high value added, revenue stability and predictability, and a broad range of in-house capabilities. Intel's is more like Microsoft's, still its closest partner as it happens.
I am expecting the server market to continue along these three basic tracks for some time to come. I'm also expecting IBM and Intel to continue leading the industry. The vast majority of businesses and governments need some of what both these companies produce.
by Timothy Sipples | August 21, 2012 in Cloud Computing, Systems Technology
Explosion of New Mainframe Software
I continue to marvel at how much new mainframe software is being introduced, and not just from IBM. Let's take a quick and necessarily incomplete tour:
- IBM Financial Transaction Manager: Provides pre-built, ready-to-roll support for financial industry transactions and messaging, via SWIFT as one example.
- Tivoli System Automation Version 3.4: Extends automation throughout the zEnterprise and beyond, across many types of virtualized environments. A very important ingredient in successful cloud deployments.
- WebSphere Operational Decision Management: Adds business rules flexibility to your enterprise applications, regardless of programming language, and helps dramatically cut down on the amount of coding you need to do.
- WebSphere eXtreme Scale Version 8.5 and WebSphere Application Server Version 8.5: Exciting for their performance, support for the latest cutting edge Java Enterprise Edition standards, and the new lightweight Liberty Profile deployment option.
- Business Process Manager Version 8.0 and Business Monitor Version 8.0: New iPhone/iPad capabilities for viewing, managing, and participating in sophisticated, optimized business processes.
- CICS Transaction Server Version 5.1: Lots of improvements, including CICS's very own sophisticated, standards-based Web user interface environment (with JSPs, etc.), support for the WebSphere Liberty Profile, a big leap in Java performance and flexibility, pre-configured MQ DPL support for containers (no more 32K limit!), and lots more 64-bit support, among other features. A beta version of CICS TS 5.1 will be publicly available for download.
- CPLEX Optimizer: Lots of mathematical optimization routines, ready to use right from your core applications on your mainframe.
- Tivoli OMEGAMON XE Version 5.1: Wow, they dramatically enhanced the 3270 interface and made the graphical interface easier to deploy. I love the new interface!
- GT.M from FIS Global: This is a very high performance key-value "NoSQL" database, available as open source on PCs for developers and now also available on z/OS and Linux on z with full support from FIS. GT.M is the foundation for FIS PROFILE core banking applications (now available for z/OS as well), but it is also a very popular execution environment for applications written in the M programming language, also known as MUMPS. The healthcare industry is chock full of important MUMPS applications, including the open source VistA software created by the U.S. Veterans Administration. Thus GT.M provides a wonderful new option for consolidating and simplifying thousands of healthcare industry applications onto IBM zEnterprise. Many of these applications are mission-critical, and some are still running on old DEC VMS systems.
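For readers who haven't encountered M-style databases: GT.M stores data in hierarchical "globals," essentially sparse arrays addressed by multi-part keys. Here's a rough conceptual sketch of that data model in Python. To be clear, `GlobalStore` and its methods are hypothetical names for illustration only; GT.M's real interface is the M language itself, not Python.

```python
# Conceptual sketch of a hierarchical key-value ("global") data model,
# similar in spirit to what GT.M provides. This is NOT GT.M's actual API.

class GlobalStore:
    """A toy store mapping tuples of subscripts to values."""

    def __init__(self):
        self._data = {}  # maps subscript tuples to stored values

    def set(self, *path_and_value):
        # Last argument is the value; everything before it is the key path.
        *subscripts, value = path_and_value
        self._data[tuple(subscripts)] = value

    def get(self, *subscripts):
        # Returns None for keys that were never set (sparse storage).
        return self._data.get(tuple(subscripts))

acct = GlobalStore()
# Roughly analogous to M code like: SET ^ACCT(12345,"balance")=250.75
acct.set("ACCT", 12345, "balance", 250.75)
acct.set("ACCT", 12345, "name", "J. Smith")
print(acct.get("ACCT", 12345, "balance"))  # 250.75
```

The appeal of this model for banking and healthcare workloads is that a record's natural hierarchy (account, then field) maps directly onto the key structure, with no schema migration needed to add new subscripts.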
by Timothy Sipples | May 10, 2012 in Application Development, Cloud Computing, Innovation
IBM PureSystems: Simple Is Good
IBM officially unveils its new PureSystems today. With it, simplification takes a big step forward.
Enterprise applications and their interdependencies have become extremely complicated: hard to deploy, hard to manage, hard to scale, and impossible to secure. IBM is really working overtime to tame that complexity. Do take a close look.
Keep in mind that if you've got a zEnterprise server, with its Unified Resource Manager, you're already taming complexity like nothing else. For instance, IBM PureSystems initially support 4 operating environments across 2 processor architectures in harmony, which is a tremendous accomplishment. With zEnterprise you've got 8+ across 3+. (I'm using plus symbols because it depends on how you count, but 8 and 3 are the minimum counts.) In other words, IBM PureSystems are part of a continuum, and your zEnterprise server leads the way. It's extremely likely you'll want some of both in your data center.
So that's my instant reaction, with more comments to follow no doubt. What do you think? What are your most urgent issues?
by Timothy Sipples | April 11, 2012 in Cloud Computing, Innovation, Systems Technology
Learning z/OS the Cloud(y) Way
Marist College is accepting new students for its z/OS professional courses. You can learn z/OS from the comfort of your own home — the courses are conducted online. The deadline for enrollment for the spring term is February 6, 2012. If you miss this term you can enroll for the next term, but why wait?
by Timothy Sipples | January 24, 2012 in Cloud Computing, z/OS
Did the U.S. Government Kill Cloud Computing Over 50 Years Ago?
Ed and I were chatting about cloud computing last week. Ed is a co-worker with decades of experience in the IT industry. He said that all this talk about (public) cloud computing reminded him of the Service Bureau Corporation. I wasn't too familiar with SBC, so I read more about that interesting chapter in computing history. In fact, it's so interesting that I'm beginning to wonder: did the U.S. government kill public cloud computing over half a century ago?
The more you learn about the history of "cloud computing," the more you discover how old fashioned and well-proven the concept is. The basic principle underpinning cloud computing is economic: computing resources (including expertise) are "expensive" and subject to varying demands per user. If you can share those resources more efficiently across users, with their varying demands, you can achieve economies of scale and deliver (and/or capture) cost savings.
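That economy-of-scale argument can be made concrete with a little arithmetic: if every user provisions for their own peak demand, total capacity is the sum of individual peaks; a shared provider only needs capacity for the peak of the aggregate, which is never larger and usually much smaller when demands are bursty and uncorrelated. A small simulation sketch (all numbers and function names here are invented for illustration):

```python
import random

def provisioning_comparison(num_users=20, periods=100, seed=42):
    """Compare capacity needed when each user provisions for their own peak
    versus one shared pool sized for the peak of aggregate demand."""
    rng = random.Random(seed)
    # Bursty demand: usually low, occasionally spiking (think month-end payroll runs).
    demand = [[rng.expovariate(1.0) for _ in range(periods)]
              for _ in range(num_users)]
    # Dedicated equipment: each user buys enough for their own worst period.
    sum_of_peaks = sum(max(user) for user in demand)
    # Shared service bureau: one pool sized for the worst aggregate period.
    peak_of_sum = max(sum(user[t] for user in demand) for t in range(periods))
    return sum_of_peaks, peak_of_sum

dedicated, shared = provisioning_comparison()
print(f"dedicated capacity: {dedicated:.1f}, shared capacity: {shared:.1f}")
```

Since the maximum of a sum can never exceed the sum of the maxima, the shared pool always needs no more capacity than the dedicated arrangement, and the gap widens as individual demands become spikier and less correlated. That gap is the service bureau's (and the cloud provider's) margin.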
That basic economic principle is very, very old — so old that it easily predates electronic computing. In 1932, as a notable example, IBM opened its first "service bureaus." Back then state-of-the-art data processing equipment included punched cards, sorters, and tabulators. Businesses and government agencies had to make a substantial investment in order to buy (or lease) their own dedicated equipment, and they had to hire a cadre of experts to run it. Some did, but many had one or two accounting problems that they only had to solve occasionally, such as once per month. Economically it made much more sense to rent tabulating time, run their card jobs (such as payroll or billing), then pack up and come back a month later. IBM recognized this enormous opportunity and opened so-called service bureaus around the United States then, later, around the world.
In every way that's relevant, IBM's service bureaus from the 1930s provided exactly what cloud computing providers such as Amazon do today. Of course that was before widespread data networking — you had to access your local service bureau via sneakernet — but the business model was exactly the same.
IBM dominated data processing throughout most of the 20th century, arousing U.S. government antitrust concerns. The company's battle with the U.S. Department of Justice lasted decades, a battle which IBM technically won but which limited the company in certain ways. In 1956, at the start of the electronic computing era, IBM and the Department of Justice reached an agreement. That agreement covered several of IBM's business practices, including its service bureaus. IBM promised to operate its service bureaus according to certain rules. IBM transferred its bureaus to a new and separate subsidiary, the Service Bureau Corporation, which could not use the IBM name or logo despite IBM ownership. SBC's employees, directors, and corporate accounts had to be kept separate from its parent. IBM could not treat SBC any differently than other service bureaus, including service bureaus run by other computer companies, and had to supply equipment (and advance knowledge about that equipment) on an equal basis. And SBC had to follow certain rules about pricing for its services.
Did the 1956 consent decree kill cloud computing in its cradle, delaying the growth and popularity of cloud computing for a full half century? Quite possibly. Although SBC still did fairly well, the consent decree meant that IBM had to treat its own subsidiary as a second- or third-class customer. IBM and SBC couldn't talk privately about how best to design computers to support shared services because that information had to be shared with everyone. And if a company cannot protect its trade secrets then it is unlikely to invest as much in that area of technology.
Even so, IBM could (and did) invest an enormous amount of development effort in what we now know as private clouds. That is, IBM still had many large, individual customers that demanded systems which could support thousands of concurrent users, applications, and databases, all with the ultimate in qualities of service, including reliability and security. Sound familiar? It should, because that describes mainframe computing. Private clouds and public clouds ended up being very similar technically, and so SBC and its competitors, such as National CSS which started in the late 1960s, could still manage to deliver public cloud services.
So what happened? Why are companies like Amazon and Microsoft now trying to reinvent what SBC, National CSS, and other service bureaus started delivering decades ago?
Part of the answer dates back to the early 1970s. The #5 computer vendor at the time, Control Data Corporation, sued the #1 computer vendor, IBM. CDC accused IBM of monopolistic behavior. CDC and IBM eventually agreed to an out-of-court settlement. As part of the settlement, IBM sold SBC to CDC for a mere $16 million, and IBM agreed not to compete against SBC for several years. As it turned out, CDC had big business problems, and SBC wasn't enough to save CDC. However, SBC's core business was so strong that it survived even CDC's implosion, and now SBC lives on as Ceridian. Ceridian provides human resources-related services, such as payroll processing, often via Software as a Service (SaaS) and cloud technologies. To its credit, Ceridian publicly acknowledges its corporate history, tracing it all the way back to 1932 and IBM's service bureaus.
The other part of the answer depends on an accident of antitrust history and its influence on the economics of computing. At its most basic level, computing (data processing) consists of four finite resources: computing (CPU), storage (memory, disk, etc.), input/output (networking), and expertise (design, operations, programming, etc.). All four of these limited resources have costs. However, two of them (computing and storage) saw their costs fall dramatically, and much earlier than the cost of networking fell. The major reason? In most countries, including the U.S., governments regulated and protected their national telecommunications monopolies. Only when the Internet became popular, starting in the late 1990s, did the cost of networking follow the trends in computing and storage.
With lower costs for computing, storage, and now networking, brain power (expertise) is currently the most costly resource in computing. That's reflected in the ever-increasing prices for ever-more-sophisticated commercial software and in businesses' efforts to obtain expertise from anywhere in the world to develop and run their computing systems, otherwise known as offshoring. High quality data processing is not easy, and in a world where expertise is the most precious computing commodity, we're seeing a return to very familiar business models.
Is it any wonder why the (modern) mainframe and service bureaus are getting more popular? In the sense of business models at least, welcome to the future: the past!
by Timothy Sipples | September 5, 2011 in Cloud Computing, History
Brief News Roundup for Mid-August, 2011
Here are some interesting mainframe-related stories from around the world:
- ComputerWeekly interviews 18-year-old Danish mainframe engineer John Prehn.
- Data Center Journal reports on "Big Iron Today: The State of Mainframes." Answer: The state is excellent.
- T3 Technologies, TurboHercules, and NEON Enterprise Software have withdrawn their complaint lodged with the European Union's antitrust authorities against IBM. T3 and NEON both lost U.S. court cases against IBM earlier this year.
- The Wall Street Journal writes: "Behind the Youthful Sales Surge for IBM Mainframes."
- Aptly named Micro Focus is looking for a white knight. Meanwhile, Compuware and BMC beat earnings estimates.
by Timothy Sipples | August 15, 2011 in Cloud Computing, Current Affairs, People
Candidate for Lamest Excuse Not to Share Machines
Overheard from a data center manager (closely paraphrasing): "We have to buy separate machines because that's a separate business unit."
That manager made that assertion while sitting next to his company's data center. The building, power supplies, cooling systems, networking, fire suppression systems, security guards, telephones, lighting systems, and numerous other components of that data center — including his (too high?) salary — are all shared across business units. No, each business unit does not have its own data center. They somehow figured out how to share everything except the servers? Really?
What a lame excuse! Unfortunately his company is losing marketshare and has been roundly criticized for its poor efficiency and demonstrably awful IT security. I hope that manager figures out a way to help his company, quickly.
The zEnterprise 196's partitioning (LPARs) has been certified to the Common Criteria EAL5+ standard. No other business server's virtualization has achieved that level, which provides isolation equivalent to physically separate servers. Yes, you can share mainframes — even across businesses, not only business units. Practically everybody who owns mainframes does just that and enjoys the efficiencies.
See also: "How Many Mainframes Do You Need?"
by Timothy Sipples | August 12, 2011 in Cloud Computing, Economics
Amazon and Microsoft Need Mainframes
According to Wikipedia, lightning occurs somewhere in the world about 44 times per second on average, and about a quarter of those flashes strike the ground. Meteorologically, lightning is extremely common. All the more reason to wonder why both Amazon's and Microsoft's customers suffered hours-long outages due to a lightning strike.
I found Amazon's advice to its customers particularly galling: "For those looking for what you can do to recover more quickly, we recommend re-launching your instance in another Availability Zone." Translation: You handle your own disaster recovery (if you can), because obviously we can't.
These are certainly not the first outages for either company, and it's rational to assume they won't be the last.
Fortunately Amazon and Microsoft customers have an alternative. They can follow these simple steps:
- Find at least two data centers, physically separated — your own, or someone else's. (Scores of IT service companies, not only IBM, operate mainframe-based clouds. They used to be called "service bureaus.")
- Put a mainframe at each site — your own, or share someone else's.
- Use any of several common, cross-site disaster recovery features available with mainframes, notably IBM's GDPS. Choose whichever flavor meets your particular RTO and RPO requirements.
- Hire competent IT staff, and pay them reasonably.
- Put your applications and information systems on these mainframes, at least for your most critical business services, end-to-end.
- Stop wasting money with Amazon and/or Microsoft.
by Timothy Sipples | August 8, 2011 in Business Continuity, Cloud Computing
The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.