The Mainframe Blog Kindly Redirects You
Due to some continuing limitations at this site, including commenting issues we haven't been able to resolve to our satisfaction, we advise you to continue learning more about the world of mainframe computing at the following blogs:
- IBM Mainframe Insights
- Mainframe Watch Belgium
- Mainframe Voice (a CA Technologies Blog)
- Millennial Mainframer
- Destination z's Evangelizing Mainframe Blog
- Mainframe World
- Computerworld's Mainframe Blog
- Mainframe Performance Topics
There are many, many other mainframe-related blogs, and you can find many of them via the sites listed above. We are not endorsing any particular site as a successor to this one, though at some point I will likely choose one of them as my preferred place to continue sharing my views.
I'll see you at the other sites, where we can interact and exchange views much more easily and where the social media facilities are a bit better developed. I sincerely appreciate your many years of readership, commenting, suggestions, compliments, and even criticisms. The posts here should remain available for quite some time to come (and after that in Internet archives), but over time I'll move and update some of the best bits, for instance the "Mainframe Freebies" information.
OK, you can click or tap now. See you there!
by Timothy Sipples | September 20, 2013 in Blogs, Future, History Permalink | Comments (0) | TrackBack (0) |
How to Preserve and to Access Information for Over 1 Million Years
Scientists at the University of Southampton may have figured out how to preserve digital information for over one million years. They've demonstrated high-density laser recording onto a durable quartz disc that could theoretically hold about 360 TB. They're a long way from demonstrating commercial viability for this technology, and that's not a given. Many storage technologies have looked promising but haven't panned out. It's also very unclear what the costs of quartz discs and their writers/readers will be.
Further to my "Big Data, Big Tape" post, tape is currently the most popular durable storage medium, but unfortunately magnetic tape isn't one-million-years durable. Organizations must periodically refresh their tape media, every couple of decades or so, to maintain reliable readability, and there may be strong economic reasons to refresh more frequently. IBM, for example, only very recently ended hardware support for its famous 7-track and 9-track reel tape drives, officially closing that chapter in tape media history. There are still data recovery specialists that can read reel tapes if necessary, but those tapes aren't getting any younger. If you've got such tapes, now's the time to read them and convert them to newer media.
One-million-year media only solve one long-term data storage problem, and perhaps not even the most interesting one. The real problem is how to make sense of all those bits in the distant future, or even a few years from now. Human civilization is only a few thousand years old, yet many recorded human languages are already incomprehensible. The Digital Age we now live in may end up being the New Dark Ages. Software developers and vendors do not usually prioritize readability of older data formats. Newer versions of Microsoft Word cannot open older Microsoft Word files, for example. Humanity is losing its collective memory with each new software release.
Mainframe vendors generally aren't so careless, and mainframe customers routinely access decades-old data using decades-old code to interpret those data. That works, and works well, and it's one of the important reasons why mainframes are useful and popular as central information hubs and systems of record. Moreover, if for some reason the applications must be retired, that's possible while retaining the ability to retrieve and to interpret the archived data. Contrast that multi-decade mainframe track record with, for example, Apple, which opened for business in the mid-1970s. The sad fact is that every Macintosh application compiled up through 2005 completely stopped working on the version of the Macintosh operating system introduced only six years later, in 2011, when Apple dropped Rosetta, its PowerPC translation layer. Consequently some of the proprietary data formats that those "obsolete" applications know how to access and interpret remain unintelligible on newer Macs. Apple very deliberately broke binary program compatibility within just a few short years: Macintosh models introduced starting in the second half of 2011 cannot run Macintosh applications compiled before 2006. That might be a great technical strategy if you're Apple and you're trying to "wipe the slate clean" periodically and sell more Macintosh computers, but, I would argue, it's horrible for the human race and its collective memory.
Virtualization offers one possible solution to the problem of interpreting old data, but virtualization isn't a complete solution on its own. Mainframes excel at virtualization. IBM has incorporated many different types of virtualization technologies throughout the zEnterprise architecture, and those technologies continue to evolve and improve. They help preserve backward compatibility. Nonetheless, on rare occasions IBM has dropped support for older application execution modes despite the availability of virtualization as a possible solution. But then there's someone who thankfully steps in to help, and then helps again.
Maybe those one million year quartz discs all need to include a built-in "Rosetta Stone" that helps future generations interpret the bits they contain. Some scientists have at least thought about the problem.
by Timothy Sipples | July 11, 2013 in Future Permalink | Comments (0) | TrackBack (0) |
Big Data, Big Tape
Data volumes are exploding. A recent IEEE Spectrum article explaining "the DNA data deluge" highlights one part of the problem. Cloud computing providers, whether public or private, are already facing these problems. Where to put all (or even some) of these data?
The best answer, in part, is a classic one: tape. Did you think tape storage was dead? Think again.
The basic problem is that gigabytes and terabytes aren't enough to support the flood of data generated in more and more industries. Petabyte-range storage requirements are becoming commonplace — the Large Hadron Collider generates about 30 petabytes per year — and now some organizations are trying to cope with exabytes. What to do?
Tape! One excellent example: the IBM TS3500 Enterprise Tape Library equipped with IBM TS1140 Enterprise Tape Drives. Each TS1140 tape cartridge can hold 4 TB uncompressed. With a maximally configured TS3500 complex that translates into 2.7 exabytes of accessible compressed data — and much more of course if you're willing to store and fetch tapes manually outside the library, or if you get another TS3500 complex.
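A quick arithmetic sanity check on that 2.7 EB figure (a rough sketch; the roughly 3:1 compression ratio is my assumption, not a figure from the announcement):

```c
/* Rough sanity check of the exabyte figure quoted above.
 * Assumption (mine, not IBM's): roughly 3:1 average compression on TS1140 cartridges. */
#include <stdio.h>

int main(void) {
    double tb_uncompressed   = 4.0;   /* TS1140 cartridge capacity, TB */
    double compression_ratio = 3.0;   /* assumed average compression */
    double target_eb         = 2.7;   /* quoted accessible capacity, EB */

    double tb_per_cartridge  = tb_uncompressed * compression_ratio;
    double cartridges        = (target_eb * 1.0e6) / tb_per_cartridge; /* 1 EB = 1e6 TB */

    printf("Compressed capacity per cartridge: %.0f TB\n", tb_per_cartridge);
    printf("Cartridges implied by %.1f EB: about %.0f\n", target_eb, cartridges);
    return 0;
}
```

That works out to roughly 225,000 cartridges spread across a multi-library complex, which gives a feel for the scale involved.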
OK, but isn't tape slow? Yes and no. If you want fast random read-write access, then you'll (also) need other types of storage such as magnetic hard disks, flash memory, and other types of electronic memory. Unfortunately they cost more than tape, at least in the multi-petabyte and exabyte ranges. (Enterprise tape tends to have an initial fixed cost then comparatively low marginal costs, i.e. it has excellent economies of scale. Sound familiar?) The trick is to place data in "tiers" according to business requirements with the most frequently accessed data situated on the more expensive (and more performance-appropriate) tiers. IBM and zEnterprise (particularly z/OS) happen to do that really well.
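To make the tiering idea concrete, here's a minimal sketch of a placement policy; the tier names and thresholds are illustrative assumptions, not any product's actual defaults:

```c
/* Illustrative storage tiering policy: hot data to flash, warm data to disk,
 * cold data to tape. The thresholds below are made up for illustration. */
#include <stdio.h>

typedef enum { TIER_FLASH, TIER_DISK, TIER_TAPE } tier_t;

/* Pick a tier from simple access statistics. */
static tier_t place(double reads_per_day, int days_since_last_access) {
    if (reads_per_day > 100.0 && days_since_last_access < 7)
        return TIER_FLASH;   /* hot: frequent and recent access */
    if (reads_per_day > 1.0 || days_since_last_access < 90)
        return TIER_DISK;    /* warm: occasional access */
    return TIER_TAPE;        /* cold: archive economically on tape */
}

int main(void) {
    const char *names[] = { "flash", "disk", "tape" };
    printf("daily settlement data  -> %s\n", names[place(500.0, 0)]);
    printf("last quarter's logs    -> %s\n", names[place(0.5, 45)]);
    printf("seven-year-old records -> %s\n", names[place(0.0, 2500)]);
    return 0;
}
```

Real hierarchical storage managers weigh far more than two variables, of course, but the placement decision has this basic shape.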
But tape isn't actually slow for sequentially accessed data. In fact, it's rather high performance. And a lot of the world's data are like that, trickling or flooding in as "streams." There are also ways to make sense of such data in sequential-friendly ways. IBM's Scalable Architecture for Financial Reporting (SAFR) is an excellent example.
Tape also happens to be very energy-efficient. Cartridges consume no power themselves, and they require only a reasonable climate while they are space-efficiently stored.
Is tape technology advancing? Yes, certainly. IBM previewed the next two generations (page 5) to follow the TS1140. According to IBM, the next generation should double (or more) the uncompressed cartridge capacity, and the generation after that should do much the same. Data transfer rates should increase by 44% to 50% with each generation. ESCON will fade into the sunset, with applause, and FCoE connectivity will make an appearance.
IBM pioneered tape storage with the IBM 726 in 1952 and its breakthrough vacuum column loading system. Tape is the oldest form of digital storage still in widespread use. The fundamental challenge hasn't changed much: back in the early 1950s the U.S. Social Security Administration faced big data challenges, and tape came to the rescue, displacing 80-column card storage. Tape is still the best approach for sequential storage and archiving of massive amounts of data, and its future is bright.
by Timothy Sipples | June 28, 2013 in Future Permalink | Comments (0) | TrackBack (0) |
Mainframe Computing: What Will the Future Hold?
I do not have any particular insight into IBM's research and development efforts for future mainframe technologies, but I can take some educated guesses. In the process of guessing I am actually making some predictions about the future of all computing. There's been a long history of computing capabilities appearing on mainframes first, often a decade or more ahead of the rest of the computing industry.
Here are three of my predictions, in no particular order:
- Fifty-five may be the limit. (Or not?) I'm referring to processor clock speed. For Intel the speed limit is 4.0 GHz, which is only reached with a single active core. Intel hasn't been able to increase the clock speed for nearly 8 years, and that's a long time in microprocessors. IBM's POWER processors did much better, topping out at 5.0 GHz with POWER6 and with all cores active, before backing off the clock a bit. Now IBM's zEC12 sits alone at the top of the clock speed ranking with all cores active at 5.5 GHz continuous. That's simply amazing. But will IBM be able to increase the zEnterprise's clock speed again? If it can be done, I assume it will be, but the physics are tough. That said, I expect IBM zEnterprise will maintain industry clock speed leadership indefinitely because, if it makes sense to solve those tough problems anywhere, it will make sense on zEnterprise.
- Hardware collaborative iterative compilation. IBM pioneered microprogramming all the way back in the System/360, announced in 1964. However, for the most part hardware is static once shipped. Yes, occasionally IBM (and other vendors) may release microcode updates, generally to fix a misbehaving instruction in a processor (often unavoidably slowing down that instruction for the sake of correctness), but that's about it. When you want new instructions, you buy new processors. At least with mainframes you can upgrade your processors in place. I think the processor improvement cycle is going to accelerate as compilers start to tell the hardware how it can improve, and the hardware will respond. In other words, the compilers and the hardware will jointly figure out how to shave execution time and path length off computing tasks as they operate. The dialog will be something like this: "I see, Ms. Compiler, that you're being particularly demanding of my Level 3 cache today, and it looks like you don't need to be if I understand what you're trying to do. Would you mind terribly combining these duplicate memory blocks so you can fit in Level 3 for me?" "That's a good idea you've got, Mr. Hardware. I'll take care of that for you the next time Dr. z/OS tells me there's a lull in high service class processing. By the way, Mr. Hardware, I'm doing a lot of financial calculations that could really use a custom instruction or at least a few better instructions. Have you got something you can put into your microcode or into your FPGA for me? Or ask your mother if she can provide you with a new instruction and delete those 5 other instructions I never use? Thanks a bunch." "Teachable hardware" will have some interesting side effects. For example, if you think capacity planning is difficult now, just wait. (The closest thing we have today is profile-guided optimization; see the sketch after this list.)
- Quantum computing. IBM is spending a lot of effort in this area, and I think we'll see a quantum computing element available for zEnterprise as an option in the not too distant future. That innovation will also have some interesting side effects, like perhaps upending cryptography.
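On that compiler-hardware dialogue: the closest present-day analog is profile-guided optimization, in which measured runtime behavior feeds back into the next compile. A minimal sketch, assuming a GCC toolchain (the build flags are GCC's; the workload is invented for illustration):

```c
/* Profile-guided optimization sketch: a first build instruments the program,
 * a run records which paths are hot, and a rebuild uses that profile to lay
 * out and optimize the code accordingly.
 *
 * Assumed GCC workflow:
 *   gcc -O2 -fprofile-generate pgo_demo.c -o pgo_demo && ./pgo_demo
 *   gcc -O2 -fprofile-use      pgo_demo.c -o pgo_demo
 */
#include <stdio.h>

/* A heavily biased branch; the recorded profile reveals which side is hot. */
static long long tally(long long n) {
    long long sum = 0;
    for (long long i = 0; i < n; i++) {
        if (i % 1000 == 0)
            sum -= i;       /* rare path */
        else
            sum += i;       /* hot path */
    }
    return sum;
}

int main(void) {
    printf("%lld\n", tally(10LL * 1000 * 1000));
    return 0;
}
```

The post imagines that feedback loop eventually reaching all the way down into microcode and FPGAs rather than stopping at the compiler, which is exactly what would make capacity planning interesting.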
by Timothy Sipples | February 10, 2013 in Future, Innovation Permalink | Comments (0) | TrackBack (0) |
Happy New Mainframe Day! Introducing the zEnterprise EC12
Good morning in the Americas, and happy New Mainframe Day! Today IBM is announcing the zEnterprise EC12. You can order one (or more) today, and they start shipping in about three weeks. (IBM is just like Apple that way: no vaporware.) And the world's flagship enterprise server is practically perfect in every way.
I'm still reading through the wealth of information that IBM has revealed today. Every major general press outlet is reporting on this announcement. Here are my quick takeaways:
- Another giant leap higher in processor clock speed, and another world record. Wow, I am in awe. IBM people tell me they expect to maintain industry clock speed leadership "indefinitely."
- Hexacore architecture with about double the number of transistors per die compared to the z196. That's a substantial increase, and it amazes me that IBM's engineers figured out how to run and cool six cores per die at mainframe service levels and at such incredible, continuous speeds.
- About 25% performance improvement per core and 50% performance capacity improvement per server. Those are big jumps. Has anybody else noticed that's with only an amazingly short 25 month product cycle since the z196 debuted?
- Cache is up big, although the cache mix across levels is slightly different. Where you "spend" your cache real estate matters, so apparently IBM found a better arrangement for spending that much bigger cache budget.
- New "server class memory," i.e. directly addressable flash storage, another industry first-and-only. Including this server class memory you can have about 9.4 TB of directly addressable memory per machine -- all redundantly protected (RAIM), another industry first-and-only. Right out of the gate z/OS exploits this storage class memory for things like paging, memory dumps, etc. for faster problem determination and recovery. More exploitation to come, of course.
- New processor instructions and loads of new software that exploits those new, faster instructions, with a particular emphasis on information analytics.
- zEnterprise is IBM's first and most advanced hybrid computing server, and now IBM has upgraded the zBX to a Model 003 along with the zEC12.
- A new "zAware" option (which I need to read more about).
- Up to 101 customer configurable cores per server, up from 80. (N.B. There are scores or even hundreds of other processor cores inside such a zEnterprise server, even excluding the zBX option. Be careful if you count and compare.)
- IBM held the line on space/power/cooling requirements. Bravo. They've also added a new, even more redundant/more reliable cooling system.
I will probably post some updates a bit later but, in the meantime, let's read on together.
UPDATE #1: I've corrected the total memory figure above. Computerworld has an interesting story about the new zEC12, including these additional highlights:
- The storage class flash memory is encrypted, presumably because conceivably someone could physically get to it (unlike zEnterprise DRAM). IBM does think of these things. Moreover, IBM zEnterprise CTO Jeff Frey explains that IBM plans DB2 and Java exploitation of this new storage class memory. One example: in the not-too-distant future DB2 will be able to keep extremely large databases in directly addressable nonvolatile memory. Now that would be very interesting and useful.
- As usual, some workloads do better than others when IBM increases the performance of its mainframe, although all should do very well. Some of the ones that should do particularly well are big multithreaded Java applications, compute-intensive C/C++ code, DB2 (especially in analytics), and SAP.
- While IBM's z114 (and predecessors) have had the option to install without a raised data center floor, now that's available for the big zEC12 too. Eliminating the raised floor makes it easier to install a zEC12 in a crate, I would point out, so you can have interesting portable data centers.
The IBM announcement also mentions that the zEC12 is the first commercial server to include processor instructions supporting transactional memory, not counting IBM's unique supercomputer for Lawrence Livermore National Laboratory. Now, I have a bone to pick here: the computer science engineers (presumably) haven't given this feature a good name. The word "memory" is misleading, in my view. But so it has been decreed, and that's the name. The new transactional (blank) instructions make it easier and faster to program with concurrency control. And that's a very good thing.
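For a sense of what "easier concurrency control" means, here's a minimal sketch using GCC's experimental transactional memory language extension; it illustrates the programming model rather than the zEC12 instructions themselves, and the transfer example is invented:

```c
/* Transactional vs. lock-based concurrency control.
 * The __transaction_atomic block either commits as a unit or is retried;
 * there is no lock for the programmer to acquire, order, or forget.
 * Assumed build: gcc -fgnu-tm -pthread tm_demo.c */
#include <pthread.h>
#include <stdio.h>

static long balance_a = 100, balance_b = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Conventional version: correctness depends on every caller using the lock. */
static void transfer_locked(long amount) {
    pthread_mutex_lock(&lock);
    balance_a -= amount;
    balance_b += amount;
    pthread_mutex_unlock(&lock);
}

/* Transactional version: conflicts are detected and the block is retried. */
static void transfer_tm(long amount) {
    __transaction_atomic {
        balance_a -= amount;
        balance_b += amount;
    }
}

int main(void) {
    transfer_locked(25);
    transfer_tm(25);
    printf("a=%ld b=%ld\n", balance_a, balance_b);
    return 0;
}
```

Hardware support along the lines of the zEC12's is intended to make this optimistic style cheap enough to use on hot paths.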
UPDATE #2: Mainframe Watch Belgium has much more information, including these highlights:
- IBM has introduced 2GB page sizes and supporting instructions in the zEC12, which are great for DB2 and Java in particular. So now that's 4K, 1MB, and 2GB page sizes. Considering that we had only 4K forever, and 1MB debuted only recently (in the System z10 in 2008), this is amazingly rapid evolution. (For a feel of why big pages help and how applications ask for them, see the sketch after this list.)
- IBM is going to eliminate zAAPs in favor of zIIPs. Initially with the zEC12 IBM will provide z/OS PTFs (enhancements) for running all zAAP-eligible workloads on zIIPs, even when zAAPs are installed. But the zEC12 is the last machine to support zAAPs. Translation: your capacity planning just got simpler. I expected this at some point, but I didn't expect it yet. It's great news.
- There's a significant software technology dividend across IBM software. Or, said another way, you can do more with less money. Nothing wrong with that.
- The machine looks really spiffy, even artistic. No, it's not pink or purple, but perhaps IBM will entertain even that request if you want.
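Returning to the 2GB page item above: each page needs its own address translation entry, so bigger pages mean far fewer TLB misses for large buffer pools and Java heaps. As a rough illustration of how an application asks for big pages, here's a Linux-side sketch using mmap with MAP_HUGETLB (this is not the z/OS mechanism, and it assumes huge pages have already been reserved on the system):

```c
/* Requesting huge pages on Linux via mmap(MAP_HUGETLB).
 * Illustration only: assumes huge pages have been reserved beforehand
 * (e.g. by writing to /proc/sys/vm/nr_hugepages) and uses the system's
 * default huge page size. Fewer, larger pages means fewer TLB entries. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define BUF_SIZE (16UL * 1024 * 1024)   /* 16 MB; keep it a multiple of the huge page size */

int main(void) {
    void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");    /* usually means no huge pages are reserved */
        return 1;
    }
    memset(buf, 0, BUF_SIZE);           /* touch the memory so the pages are faulted in */
    printf("Got %lu MB backed by huge pages\n", BUF_SIZE / (1024 * 1024));
    munmap(buf, BUF_SIZE);
    return 0;
}
```

On z/OS the analogous knobs are system and middleware settings (for example, large-page options for the Java heap) rather than a direct mmap call.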
UPDATE #3: IBM's Nick Sardino takes us on a guided tour of the new zEnterprise EC12:
by Timothy Sipples | August 28, 2012 in Future, Innovation, Systems Technology Permalink | Comments (0) | TrackBack (0) |
Next Generation zEnterprise: Smart, Secure, Efficient
Here's part of the argument for high-end enterprise servers:
UPDATE: IBM is promoting a major announcement on its zEnterprise home page. To register for the webcast visit this site:
https://engage.vevent.com/index.jsp?seid=40932&eid=556
by Timothy Sipples | August 21, 2012 in Future, Innovation Permalink | Comments (0) | TrackBack (0) |
STASH: A "Skunkworks" Project for Secure Clients?
Joe Clabby reports on a (formerly) secret project to use IBM mainframes for virtual hosting of secure desktop environments. It's a fascinating read.
by Timothy Sipples | April 23, 2012 in Analysts, Future, Innovation, Security Permalink | Comments (0) | TrackBack (0) |
Up is the New Down
M.G. Siegler astutely compares Apple and Microsoft with this powerful observation:
Apple’s iPhone business alone is larger than all of Microsoft’s businesses combined. And — just as remarkably — if you took away Apple’s iPhone business... the remaining Apple businesses would still be larger than Microsoft’s total business.
Mainframes are enjoying a sustained renaissance while, increasingly, many observers are wondering if the PC is dead. (No, in my view, but PCs are getting progressively less important as time passes, and IBM's management certainly made the right call in exiting the PC business.)
Welcome to the future. Isn't it exciting?
by Timothy Sipples | February 6, 2012 in Current Affairs, Future Permalink | Comments (2) | TrackBack (0) |
5 Predictions for the Next 5 Years
In keeping with the season of resolutions and predictions, IBM has gazed into its crystal ball to forecast five innovations that will alter the technology landscape within five years. So let's spend some time considering a couple of these predictions and their impact on mainframe computing.
#2: You will never need a password again. Technically that's no problem whatsoever if you have a mainframe, and it hasn't been for many years. IBM has done a very good job preserving and extending the mainframe's leadership, positioning the mainframe as the definitive Enterprise Security Hub (or ESH if you like). For example, credit and debit card systems are already getting a lot smarter thanks in large part to the mainframe's security innovations. In an ever more interconnected era (see below), when security is becoming ever more important, more businesses and governments are turning to mainframe-based solutions. The only question in my view is whether mainframe professionals will lead or follow this trend. I vote for the former.
#4: The digital divide will cease to exist. Universal mobile access to computing is going to favor the mainframe. First, there's going to be a direct effect on transaction volumes in existing banking systems, to pick an example. I'm hearing lots of reports that that's precisely what's happening, even with only a fraction of the world using smartphones at this point. Second, there will be heightened security requirements (see above). Third, the greater the audience depending on mobile access for services, the greater the cost of service interruptions, thus favoring more resilient systems and solutions. Fourth, the greater the demand, the greater the need for massively scalable systems, i.e. mainframes. That's due to the need for bigger central systems of record as well as worsening data center resource problems in procuring enough space, power, and cooling. The world's telcos, for example, are now seriously rethinking their entire infrastructure, which is becoming too costly and unsupportable after a couple of decades of largely unrestrained build-out.
#5: Junk mail will become priority mail. I'm not so sure about e-mail, but the central point here is that transactions are becoming more complex, with more and more heavy information analytics attached to core business processes in order to tailor services much more precisely to customers. That's going to drive the need for massively scalable systems with tight integration. Sound familiar? IBM is right at the vanguard of that trend, with the DB2 Analytics Accelerator as a preeminent example. That technology alone is enabling whole new analysis-heavy applications that simply weren't possible before.
What's your forecast? My immediate forecast (or at least wish) is for all of our readers to have a safe, healthy, prosperous, and happy new year.
by Timothy Sipples | December 20, 2011 in Future, Innovation, Security Permalink | Comments (3) | TrackBack (0) |
Google Acquiring Motorola Mobility: Hardware Matters
Google announced a friendly takeover of Motorola Mobility less than a day ago as I write this. No fewer than seven law firms have already announced civil lawsuits against Google and Motorola Mobility in relation to the acquisition, which only reminds us that anyone can complain about anything, with or without legal merit. Instead of lawsuits I'll offer some quick analysis, similar to my previous HP Itanium meltdown analysis.
Mobile competition is turning out to be much more interesting than the PC-related battles of the 1980s and 1990s. Most of our planet's inhabitants don't have PCs or PC-like devices, most probably never will, and it's even possible that PC penetration will diminish over time. These trends are disturbing to some incumbent vendors, notably Microsoft. The one-two punch of the Internet followed by mobile computing has upended the technology world, with some potential winners and losers starting to emerge.
Apple is clearly a winner. Michael Dell famously recommended that Apple's management liquidate the company and give the proceeds to shareholders. Fortunately Apple rejected that free advice. Apple has recently surpassed Exxon Mobil as the world's most valuable publicly traded company, depending on which trading day you check. Apple is now over ten times more valuable than Dell. Most of that shareholder value derives from the tremendous success of the iPhone, which has spawned the largest and most popular mobile application ecosystem, one that now rivals the PC's. Later this year, Apple is cutting the cord for good: iPhones, iPads, and iPod touches with iOS 5 will no longer require a PC or Mac for keeping in sync and for updating their operating systems. Inevitably that means many people who don't yet have PCs won't buy them, and people who do have PCs will use them less and less. Mobile computing is just more...mobile. (One of the PC's few remaining distinguishing characteristics, the full-sized keyboard, doesn't offer much advantage for Asian languages such as Chinese and Japanese. And if you really want a full-size keyboard you can add one to your iPad or iPhone via Bluetooth.)
Apple is one of the best examples of the newly rediscovered maxim that "hardware matters." Like Exxon Mobil, which is a vertically integrated energy supplier, Apple is the ultimate vertically integrated mobile computing supplier. (IBM mainframes represent the ultimate expression in vertically integrated hardware and software in the business server marketplace. IBM i systems represent another excellent example.) Google has emerged as the other serious participant in mobile computing with its Android operating system and its related application ecosystem. In effect, in the shift to the mobile world Google replaced Microsoft as the dominant client software provider. I think Microsoft has already lost that fight, and the company's partnership with fast-fading Nokia won't work. Microsoft is to mobile computing what Digital Research was to microcomputers in the early 1980s — an odd irony.
In making this acquisition, Google has acknowledged that, as with mainframes, "hardware matters." Oracle has also come to that realization, but I think Oracle could have solved its problem at much lower cost than by acquiring Sun. Oracle didn't need Sun to build Exadata-type products. I think Oracle was just petrified that IBM would acquire Sun and reacted accordingly. IBM wisely walked away, leaving Oracle with the carcass, and with angry Sun customers. Google mostly solves an immediate problem: the attempted intellectual property-based attacks against Android. Google dramatically beefed up its patent portfolio thanks to a big patent purchase from IBM, and now Google picks up Motorola Mobility and its rich patent portfolio in mobile communications. That's smart when faced with the ongoing stupidity that is the patent system.
But Google has also come to realize that hardware and software integration matter, to deliver a smooth, trouble-free customer experience. Google is one of the few companies, even in the technology sector, that's just crazy enough to build its own servers to support its unique in-house software. Google's purpose-built software has co-evolved with its hardware over time. Likewise, Google should be able to take Android to new levels of ease-of-use and function thanks to Motorola Mobility, its most loyal Android partner.
It'll now be interesting to see what Samsung, LG, ZTE, HTC, Sony-Ericsson, and other mobile device manufacturers do. So far they have simply followed the fads, meaning that they've built a lot of Android devices, quite successfully. I don't view any of them as credible software companies. (No, Samsung's Bada really doesn't count, except perhaps as insurance for Samsung.) Google has proven to be a good partner in many competitive situations. Despite competing in the Web browser arena (with Chrome), Google continues to fund the bulk of the Mozilla Foundation's budget in return for preferential search engine placement in Firefox. Likewise, despite some tension, Google and Apple maintain their partnership in search and in mapping. Google says that they will continue to improve Android and supply new Android versions on a business-as-usual basis. I believe Google, because Google's business model is and will remain based on maximizing eyeballs, i.e. based on advertising revenue. In the early U.S. television industry, NBC's parent, RCA, manufactured televisions, but NBC was also very happy to have viewers tune in using their DuMont and Philco sets. Google's business model is quite similar, and that'll continue.
So, fundamentally, Google's move is about protecting the Android ecosystem from attack and "getting their feet wet" in developing practical Android innovations that they will also readily share with other device makers to maximize their advertising revenue. I'm not the first to say it, but if you want to develop better software you should also build hardware. I think Google made a very smart move here, and I think Motorola Mobility's talents will accrue to the benefit of Android and of all Android partners. Those Android improvements will help widen the gap between Android and, in particular, Windows Phone and Blackberry, both of which seem to be fading fast. Rival handset makers might get a bit nervous, but I think they'll stick with Android and sell a lot of new, much improved devices. They'll also still be free to innovate atop Android if they wish, and they'll have a more powerful partner in terms of fending off IP-related FUD. Google is paying about $12.5 billion for Motorola Mobility which seems like a much better value than Microsoft's $8.5 billion for Skype.
It'll be interesting to see whether Microsoft buys Nokia now. That would be the obvious competitive response, and Microsoft certainly has the cash. It looks like Microsoft could pick up Nokia for about $30 billion. The trouble is, I don't know what Microsoft can do to establish Windows Phone as a significant mobile platform even with Nokia.
This post was updated with additional detail on Motorola, Skype, and Nokia acquisition prices.
by Timothy Sipples | August 16, 2011 in Future, Systems Technology Permalink | Comments (2) | TrackBack (0) |
A Future Mainframer
Sometimes the simplest phenomena are the funniest, as this adorable future mainframer demonstrates in this video:
I had exactly the same reaction when HP reinvented thousands of years of mathematics, declaring that smaller numbers are actually greater than bigger numbers.
by Timothy Sipples | March 2, 2011 in Future, People Permalink | Comments (0) | TrackBack (0) |
2011: IBM's Centennial(ish) Year
IBM is a fascinating company in many ways. This seems a particularly appropriate year to look back on IBM's long and interesting history, because 2011 is IBM's official centennial year.
In classic IBM fashion, the company may be underselling itself when it comes to recognizing its centennial. The Tabulating Machine Company, founded by Herman Hollerith (arguably history's leading computing pioneer), dates back to 1896 and commercialized much of the technology Hollerith invented for the 1890 U.S. Census. Hollerith's tabulating equipment improved the Census's productivity by at least a factor of three. There is a recognizable evolution, albeit a somewhat convoluted one with at least two big hardware technology transitions, from TMC's late 1800s punched card tabulating technologies to many of IBM's current technology offerings, even including the IBM zEnterprise mainframe. Yet IBM considers the incorporation of Computing Tabulating Recording (C-T-R) in mid-June 1911 to be its official start. C-T-R, later renamed International Business Machines (IBM), was a merger of TMC, the Computing Scale Corporation, and the International Time Recording Company. It's not clear to me why the addition of meat scale and employee time clock businesses reset IBM's corporate clock, so to speak, especially when IBM divested those businesses long ago, but that's how IBM sees its history.
Anyway, even if IBM should be celebrating its 115th birthday instead of its 100th, understanding the company's history is helpful in understanding how IBM — and how technology overall — is likely to evolve in the future. Therefore I would recommend paying at least a little attention to IBM's centennial reflections over the course of this year.
by Timothy Sipples | January 5, 2011 in Future, History Permalink | Comments (23) | TrackBack (0) |
Migrate from Mainframe? To What?
Merv Adrian from Clabby Analytics challenges Gartner (in particular) in a very pointed article on Gartner's (sometime) mainframe migration advice and their apparent commercial conflicts. Adrian writes:
Gartner, the industry’s preeminent information technology (IT) research and analysis firm, has published several reports and case studies over the past few years that promote the idea that IT buyers should migrate their applications off of mainframes and move them to other, more "modern platforms." Part of Gartner’s logic, it appears, is that there is an impending-doom shortage of mainframe managers that is about to occur as elderly mainframe managers retire — so Gartner implies that moving applications to other "more modern" platforms might ensure the long term viability of enterprise applications on those platforms.
I have two major issues with Gartner’s perspective and its recommendation:
- Where is the proof that mainframe skills will decline to critical levels over the next several years? And,
- Which “modern platform” is Gartner advocating?
Just go read the whole article. I am curious to read Gartner's reply.
by Timothy Sipples | July 1, 2010 in Future Permalink | Comments (3) | TrackBack (0) |
IBM to Acquire SPSS
This morning IBM announced its intention to acquire SPSS Inc., a major provider of predictive business analytics software. IBM continues to push, hard, into the business intelligence solutions market, including major improvements in System z's business intelligence capabilities. In fact, one of the just-announced IBM System z Solution Edition Series offerings is an aggressively priced data warehousing package for z/OS.
by Timothy Sipples | July 28, 2009 in Future Permalink | Comments (1) | TrackBack (0) |
Turmoil in the Financial Sector
The financial services industry is experiencing profound disruption, especially in the U.S. JP Morgan buys Bear Stearns then swallows Washington Mutual. Bank of America takes Countrywide then Merrill Lynch. Lehman Brothers collapses, with Barclays picking up some of the pieces (including Lehman's main data center). Lloyds grabs HBOS, while Fortis and B&B collapse. Citigroup buys out Wachovia. The U.S. Government now owns most of AIG. Investors as diverse as Warren Buffett's Berkshire Hathaway and Japan's MUFG pump cash into Goldman Sachs and Morgan Stanley, respectively. The Big Three U.S. automakers are bleeding cash, with $5,000 and even $10,000 SUV discounts now routine. And central banks around the world, particularly the U.S. Federal Reserve, inject massive amounts of cash into the world's financial system. These are historic weeks in the financial industry, and there is more news to come.
All of this turmoil is causing (and will cause) big changes in these companies' IT plans and operations. Most directly, IT staff now must rapidly reorganize infrastructure assets to match the new corporate organizations. They must do so quickly, and with little or zero service interruption lest they cause further panic.
Fortunately, most if not all the companies mentioned above rely on mainframes for the bulk of their core business processes. It's no exaggeration to say that mainframes have facilitated the incredible pace of consolidation in the global financial industry which occurred even prior to this turmoil on Wall Street. So in an IT sense these historic events are nothing new. IT staff will be busy splitting more LPARs, relocating more LPARs, and/or consolidating more LPARs as they help their companies adjust to their new circumstances. In fact, the extent to which these companies have mainframe-based applications and information will heavily influence the ease and speed of their IT restructuring. These events resemble "disasters," and the mainframe IT staff at these companies certainly understand disaster recovery (DR). Also, we already know mainframes can scale instantly and easily to handle more demand, from customers and/or simply because a merged organization is much bigger. These are good times to have mainframes. These are the "change machines." In contrast, reorganizing the non-mainframe IT infrastructure elements will be much more painful. A lot of smart people will appreciate the contrasts.
These times undoubtedly will result in job losses, including IT job losses. However, the good news is that the IT employment market has been relatively stable or even growing robustly in certain parts of the world, with continuing demand for skilled IT professionals. Although there are never any guarantees, I expect that individuals with mainframe-related skills will do comparatively well in the months and years ahead. But there's an important caveat: if you expect to continue working for the same employer in the same city, you may be in for a shock. Workloads will be moving, a lot. Turnover is likely to increase even while the number of jobs remains relatively stable. Fundamentally, however, business managers do appreciate how important their IT staff will be to help them reorganize their companies, especially IT staff skilled at rapid, well-executed shifts of workload.
There are other IT challenges facing financial companies. In particular, it is now abundantly clear that company managers do not have real-time visibility over their own corporate balance sheets. IT practices in the investment banking community have been largely "siloed," with little or no respect for central management oversight and business controls. I expect high demand for recentralized financial information with real-time executive "dashboards," to give managers much better information to make informed decisions about their financial assets at any moment in time. I think there will be serious questions asked and much different architectural patterns for implementing decision support systems. My view is that mainframes and mainframe-based solutions will play increasing roles in financial decision support, reversing recent trends. One notable and extraordinarily timely example is IBM's Scalable Architecture for Financial Reporting (SAFR).
What trends do you see amidst the chaos?
by Timothy Sipples | September 30, 2008 in Future Permalink | Comments (2) | TrackBack (0) |
More Potpourri
1. The Chicago Tribune reports on the excellent career prospects for new mainframe professionals in Illinois. Illinois State University Assistant Professor Chu Jong, associated with that university's mainframe curriculum, says it's not uncommon for his graduates to receive six or seven job offers.
2. You can now download the open beta release of IBM's WebSphere MQ Version 7 for z/OS (and for Linux on System z) at no charge. Click on "Trials and demos" on the left menu to get there. MQ V7 will be generally available in late June, 2008, so don't wait too long to take the beta for a spin. Please let IBM know what you think.
WebSphere MQ is the most popular reliable messaging transport for connecting basically anything to anything. Many enterprise architects argue that WebSphere MQ is foundational to successful service-oriented architectures, especially on System z. I agree.
3. IBM reports 1Q2008 earnings after the U.S. markets close on Wednesday, April 16.
4. The Blocks and Files blog asks, "Seriously, why does IBM bother?" This skepticism arises after IBM researchers announced a breakthrough in spintronics memory technology which could lead to a new class of storage devices within 10 years.
It's a fair question, but there are some simple answers. The basic answer is that IBM has had tremendous success commercializing (and profiting from) storage technologies, so this research is hardly unusual and is in IBM's self-interest. Examples include hard disks, floppy disks, and most tape-related technologies (such as vacuum column loading). For example, Alan Shugart at IBM invented the floppy disk to load microcode onto System/370 mainframes and peripherals. The fact that other companies might also benefit from IBM's research — as "free loaders" — is interesting but not directly relevant to whether IBM spends money on R&D. IBM has done quite well collecting both direct sales and royalties from these inventions. And yes, R&D is inherently risky. IBM has spent a lot of money researching so-called millipede storage, and it's extremely unclear whether IBM will ever see any profit from that effort. But the only criterion that matters to IBM is whether the company itself is better off for investing billions in basic research. Given IBM's track record I side with the researchers: yes, it is, without a doubt.
It's also worth noting that there are some government subsidies that encourage certain types of research. The U.S. space program is one famous example. IBM does receive some government support, although the pharmaceutical and pure defense industries tend to receive a lot more.
I do think Blocks and Files raises an interesting point indirectly. If Wall Street is so focused on short-term quarterly results, putting pressure on research investments, how can society encourage more research? (Society is the ultimate "free loader." :-)) The traditional answer has been patents, but there are a lot of companies, including IBM, that think the patent system needs fixing.
by Timothy Sipples | April 13, 2008 in Future, Innovation, People Permalink | Comments (2) | TrackBack (0) |
Solaris is coming, Solaris is coming.....to System z
Wow....I felt the earth shake just now....IBM and Sun have announced a live demo of Solaris on System z. Experts at Sine Nomine – the same folks who were at the epicenter of the Linux on the mainframe movement – have brought the two companies together and showcased the possibilities at this week's Gartner conference in Las Vegas.
Click here to see David Boyes from Sine Nomine discuss the project from the Gartner event floor.
Amazing. But why are they doing this? To make customers' lives easier. One question I hear all the time is: Is IBM trying to take over all computers? Does IBM believe that a mainframe or a collection of mainframes can replace all the servers in a business? The answer to that is simple: NO.
How do I know that? Because an IBM mainframe can never be the sole computer in any business, and the reason is that it doesn't have its own front end interface....well, maybe it does....the punch card and the 3270 terminal. But they really didn't take off the way IBM anticipated, so now you have ATMs, kiosks, Web browsers, cell phones, PDAs, PCs with 3270 emulators....a cornucopia of front end devices as the human-computer interface. In that regard, IBM's mainframe has to work as a master collaborator to make sure that it can interoperate with that wide range of front end processors and, as such, has had to augment the 3270 data stream with slick new XML and web services built into its systems.....but I digress....we are talking about Solaris on the mainframe....is this to be a demonstration system, a proof of concept, or an actual supported system?
It's a demo today, but it's intended to be a supported system – and it's only a matter of time before we see it. We've learned already that customers may desire open source computing on z/OS, z/VM, and Linux for System z. They just aren't willing to service it themselves. Sure, they can download a myriad of tools from the internet, but they can't download a service contract. So they look toward distributors or vendors to provide service for those offerings before they'll put them into production. The same will be true of Solaris on System z.
On this lovely mainframe, we also realize that code is generally developed on the desktop...an x86 platform...and as such typically favors native operating systems as the deployment platform as well. Those platforms are predominantly Windows, Linux, and Solaris-based. But because of openness and portability, Solaris and Linux can now be deployed on multiple hardware platforms. Today, deployment of servers on the x86 platform has been considered Scale Out computing....just keep adding more and more server images, each dedicated to a single operating system and typically a single application or database server. In many respects, these servers have the appearance of an appliance, because they are "single" function devices. Those servers might satisfy the needs of a lot of folks (e.g. clients), but typically excel at a single function.
In the last couple of years, it seems like the next new thing is virtualization and server consolidation...the ability to host multiples of these "appliances" in a single container and, in doing so, make the operational environment more green - use less energy, cooling, floor space, etc. - but still meet businesses' service level agreements. Well, you'd think that virtualization just got invented. Nope - it's been around for over 40 years, with IBM's z/VM as the cream of the crop in systems virtualization. We've already learned the power of virtualization when associated with Linux for System z. There have been a large number of deployments, and over 90% of those deployments are on z/VM. With Solaris on System z, 100% of the deployments will be on z/VM. That's because the operational environment will take advantage of some of the native System z resource sharing and management tooling, in addition to offering the opportunity to manage each Solaris image independently.
The virtualization available on System z through z/VM has a number of distinct advantages over the Johnny-come-lately virtual servers on other platforms:
- the ability to run at 90% and higher system utilization, without fear of failover, for a very large number of operating system images;
- the ability to add or remove capacity on System z by turning additional processors on or off without suffering a service disruption, and in doing so meet tactical business processing needs on demand;
- the ability to leverage hardware and system memory to communicate between operating system images, which in turn reduces the number of system intrusion points;
- the compartmentalization of one operating system image from another to provide an additional layer of security;
- the ability to use z/VM services across Solaris system images for common auditing and disk and tape backup processing.
The net of this is that it's the same code you might be running on a different hardware architecture, but when executed within the z/VM hypervisor it inherits much of the operational superiority of that environment, with no significant additional cost. And then there are the hardware benefits of the System z architecture...it had an OnStar-like call home capability and autonomic healing capability long before the blueprints were written by GM or other server platforms. IBM's mainframe remote support facility provides electronic diagnosis of system failures and, with redundant hardware built in, can switch over to the backup components and in parallel "call home" to dispatch a customer engineer to correct the problem. In the case of a CPU failure, the z architecture will swap in a spare processor, transparently to any operating system running on its hardware, and continue processing unabated. It's like changing the tire while the car is in motion and calling ahead to have a new spare put in the trunk, again without having the car stop.
Let's continue that automotive metaphor. The mainframe is intended to be a superhighway. There are folks who believe it's just a parkway...it allows only cars and it's not heavily traveled. Today's announcement is just another occasion to demonstrate the superhighway nature of the mainframe. It enables all kinds of vehicles to travel on its roads, and the traffic is moving along very quickly. In fact, there are sensors in this mainframe highway to detect bottlenecks and rebalance workloads to meet service goals, another value that Solaris on System z will be able to take advantage of as well. But let's not forget, the interstate highways are selective; they don't accept bicycles and pedestrians. There are other roads, running at a slower pace, for those folks to travel on. Yes, they'll reach the same destinations, but it will take them a bit longer to get there. So in that sense, the mainframe is not going to "take over Solaris". There will be certain application and data serving workloads that will more naturally appeal to a consolidated effort on a mainframe, and there will be other workloads that can continue to run independently on other server hardware, or for that matter be virtualized in an x86 environment, quite possibly because they are stateless and don't need the resilience, security, and capacity management that a mainframe brings to the operational environment.
Linux on System z has been wildly successful in its ability to consolidate workloads. Does Solaris present a "weakness" in the force driving Linux ubiquity? No, quite the contrary. It's about flexibility and choice. In many cases today, Linux is evolving as a server of choice in the x86 world, and collections of those servers can be easily consolidated to IBM mainframes running Linux. But in some cases a UNIX workload, like Solaris, must first be ported over to Linux before it can take advantage of the virtualization and scaling capabilities of the mainframe. In addition, some operational tooling is different, so there might be a skills hit or a procedural hit to make a change. Well, just as Linux is Linux regardless of where it's deployed, the same objective holds true for Solaris. Common code is a recompile rather than a port, and the objective is to enable the same operations model while offering some new capabilities, through z/VM, that further reduce the operational complexity of running many, many Solaris images on the same mainframe.
So back to the celebration....there's a new choice coming to town, Solaris on z: the benefits of the Solaris operating system as many businesses have grown to enjoy them on other architectures, combined with the benefits of the operations model and virtualization capabilities of IBM's System z mainframe and z/VM. Long may they both prosper to give businesses the choices and flexibility they need to build global system deployments that meet their business governance, privacy, security, and resilience needs, alongside the myriad of other options available for them to deploy on this modern mainframe.
by JimPorell | November 28, 2007 in Future Permalink | Comments (4) | TrackBack (0) |
Introducing the 6th IBM Mainframe Operating System: Solaris?
The Associated Press reports on IBM and Sun's new collaboration announcement concerning the Solaris operating system. Sun's CEO Jonathan Schwartz called this announcement "a tectonic shift in the market landscape."
As most of you know, the IBM mainframe currently has five widely deployed and supported operating systems available: z/OS, z/VSE, z/TPF, z/VM, and Linux on z. A single machine can run all five, in any combination, in multiple secure instances dynamically responding to business demands, at the same time with the highest service qualities.
Next up it appears: Solaris on System z. In fact, Sine Nomine Associates already began work over one year ago to bring Solaris to the IBM mainframe, so maybe OS #6 is a lot closer to delivery than anybody knew. It took a little more than one year to bring Linux to the mainframe, for comparison.
Solaris is particularly popular among telecommunications companies who have racks and racks of smaller servers, typically running C and C++ code, to perform tasks such as call accounting. There's a lot of Web hosting on Solaris. Solaris is Fujitsu's preferred UNIX™ solution, and Fujitsu is one of the largest technology service companies in Japan. I could go on, but the introduction of Solaris on System z would help many customers around the world lower their costs of computing (including power, cooling, and data center space), scale up in addition to scaling out, improve the quality of their service delivery, and take advantage of increased choice and flexibility offered with all the applications and middleware available for the other 5 operating systems via in-memory, secure, high performance connections.
Solaris on System z would in fact become the third UNIX or UNIX-like operating system for the mainframe. Linux is "UNIX-like" of course, and z/OS is UNIX. (z/OS contains z/OS UNIX System Services, a complete, certified implementation of UNIX.) [Update: One develops z/TPF applications nowadays using Linux (e.g. gcc), and z/TPF is acquiring lots of built-in software familiar to UNIX users, so arguably z/TPF is at least trending toward acquiring UNIX-like characteristics.]
Now that Solaris on System z looks like a not-too-distant capability, do you think you'll be trying it?
by Timothy Sipples | August 16, 2007 in Future Permalink | Comments (6) | TrackBack (0) |
What's in a name? Send us your feedback.
In watching the recent Transformers movie - and seeing/hearing the word "mainframe" come up a few times - I began to wonder about just what the name stands for.
On one hand, it's not uncommon that some mainframe enthusiasts consider the name itself to be somewhat archaic - to suggest that perhaps the vitality and new innovation around the mainframe platform is cheated by the antiquity of its own name.
On the other hand, the most common public uses of the term recently seem to be coming from very youthful and non-intuitive sources.
I took a spin around the web and found a number of interesting uses of the word "mainframe." You might be surprised at what I found. Some are old and others new - but each of them is surprisingly cool.
So here is the question: Is the word "mainframe" lugging baggage behind it, or is it actually, well, cool?
Lady Mainframe - the avatar host of the popular internet gaming news source, Gaming News with Lady Mainframe.
Techno DJ in Germany, known as "DJ Mainframe"
From the mid-80's toy series G.I. Joe, here is computer specialist "Mainframe."
YO JOE!
Here's another toy named "Mainframe," complete with push-button action.
Creative services and entertainment consulting firm.
by Kevin Acocella | July 13, 2007 in Future Permalink | Comments (11) | TrackBack (3) |
Skills shortage?
Been following a conversation on the IMS listserver about how to keep IMS data for 15-20 years (e.g. for legal reasons or whatever). It started with good technical recommendations and moved on to: what about the skills? No point in having the data if you don't have anyone who knows IMS in 20 years' time. I just read this great entry on the subject:
Why blame somebody else for missing skills. Every company should have a list of critical skills. If that company takes skill seriously, they will know that they need to educate some people with IMS. It should not be that difficult to hire some students from university, tell them if they start learning IMS they will have a job for the next 20 years and off you go. Or something like they did in Germany "Small computers, small salary, big computer, big salary". Or tell them a Java/PHP/C/C++/J2EE programmer competes in a global market with 100 Mio chinese and people from india. How about somebody who knows TSO and JCL? Likely to compete with less than 10000 people in the world. Sure it takes at least 2-5 years till they can walk alone, but it's worth the effort.
Spot on.
by pwarmstrong | February 22, 2007 in Future Permalink | Comments (6) | TrackBack (0) |
The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.