IBM's 4Q2008 Earnings
IBM announced 4Q2008 earnings on January 20, 2009. Overall hardware revenues were down 20 percent in the quarter year to year (16 percent at constant currency). However, System z hardware revenues were down only 6 percent (1 percent at constant currency). That was good enough for System z to hold revenue market share steady in a tough macroeconomy. Together with high-end POWER6 systems, System z reflected the continuing drive toward the most highly virtualized server platforms.
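Constant-currency figures restate the prior-year period at current exchange rates, so currency swings don't mask underlying volume changes. Here is a minimal sketch of the mechanics, using hypothetical figures (not IBM's actual revenue mix) chosen only to roughly reproduce the reported 20 percent / 16 percent gap:

```python
def growth_pct(current, prior):
    """Year-over-year growth, as a percentage of the prior period."""
    return (current - prior) / prior * 100.0

# Hypothetical prior-year revenue mix (in $B): a USD portion plus a EUR
# portion originally converted at 1.45 USD/EUR. All figures are invented
# for illustration; they are not IBM's actual numbers.
prior_usd = 3.0
prior_eur = 2.0            # as reported last year, at 1.45 USD/EUR
current = 4.0              # this year's reported revenue

reported = growth_pct(current, prior_usd + prior_eur)          # -20.0

# Constant currency: restate last year's EUR portion at this year's
# (weaker-euro) rate of 1.30 USD/EUR before comparing.
prior_eur_restated = prior_eur / 1.45 * 1.30
constant_ccy = growth_pct(current, prior_usd + prior_eur_restated)  # about -16.5
```

With a weakening euro, the restated prior-year base shrinks, so the constant-currency decline is smaller than the reported one.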
System z revenues were particularly strong in the Americas and in the financial services and industrial sectors. That's quite interesting, because those are some of the worst-hit areas in the global economy.
System z MIPS shipments (the amount of hardware capacity shipped in the quarter) grew 12 percent year to year, with specialty engine MIPS growing 43 percent. Customers continue to grow their mainframe installations rapidly and to add new workloads, enjoying ever-lower costs.
IBM does not break out System z software or services revenue separately. However, the System z-heavy Information Management software brand was the strongest.
by Timothy Sipples | January 20, 2009
Mid January, 2009, Mainframe Potpourri
There's so much mainframe-related news that it's already time to post another "Potpourri."
- BMC periodically conducts a rather large survey of mainframe customers, and you can learn about the survey results in this webcast. Words like "renaissance" and "growth" pepper their description, so that's a big hint about what their survey found.
- Kookmin Bank, Korea's largest bank, announced it is adopting the IBM System z10 as the platform for its next-generation core banking application set. KB will become one of the largest mainframe installations in Asia. (Full disclosure: I spent a lot of time working with the KB team in this effort.) Korean Air also signed a 10-year contract that includes System z.
- The State of Oklahoma is handling record numbers of unemployment claims, unfortunately. While there are problems scaling offices and people to meet the demand, there are no problems whatsoever scaling their mainframe. Pennsylvania similarly has no problems scaling their mainframe. Scalability limits can cause severe economic hardship, as many bankrupt businesses know too well. Mainframes scale up (and down) to handle highly variable business demand with aplomb.
- Four students are the first Mainframe Scholarship Fund winners. Congratulations to Adam, Sandeep, Scott, and Bob.
- Headhunters and prospective employers are trying to raid distressed Satyam Computer Services and secure its best talent. Top of the list: "mainframe developers with 2-8 years of experience."
- Like many other employers in these troubled economic times, Attachmate has had to lay off 10% of its workers.
- IBM Research announced a tremendous breakthrough in magnetic resonance microscopy, which some are saying ranks with the invention of the optical microscope. This innovation should lead to major advancements in medicine, biology, and many other fields, including computer engineering and server design. In related news, in 2008 IBM once again received more U.S. patents than any other company.
- Need some more fashionable t-shirts?
- I have no idea what this video means:
This one I understand a little better:
And this fellow found his perfect mate:
by Timothy Sipples | January 14, 2009
Q&A Teleconference for z/OS Associate Certificate Program
You are invited to learn more!
If you or other colleagues in your organization are interested in the Marist z/OS Certificate program, please join us for a brief telephone discussion describing our Spring 2009 Enterprise Computing z/OS Certificate program.
Where & When
Date: Thursday, January 15, 2009
Time: 12:00 p.m. to 1:00 p.m. (EST)
U.S. call-in number: 1-888-469-0495
You will have a chance to ask questions and learn about various aspects of the program:
- content and delivery
- participation and companies represented
- success and testimonials
- enrollment/application process
- tuition information
- history of the program
Please RSVP by Wednesday, January 14, 2009 via email to: email@example.com
Please forward this invitation to other prospective participants. Thank you!
by JimPorell | January 9, 2009 | in People
Are You Hiring Yet, Florida?
I'd like to heap some praise on Illuminata's Gordon Haff, writing at CNET.com. (Full disclosure: I bought 100 shares of CBS, CNET's parent company, a few weeks ago, and so far I'm very happy I did.) He responded to my mini-rant concerning Florida's (apparently crusty) unemployment benefits application. Please go read Gordon's post.
To expand on my comments a bit, does anyone else find it ironic that the people managing Florida's unemployment benefits application apparently are not yet hiring some workers to go fix and enhance the application? I mean, now would be a good time to create some jobs in Florida, wouldn't it?
As Gordon points out, yes, it's entirely possible this application is indeed crusty, with documentation deficiencies, arcane data formats, etc. Perhaps no one in-house wants (or knows how) to touch it, and probably that problem has been exacerbated through years of underinvestment. (Just like many of Florida's schools and bridges, but I digress.) But there are plenty of wonderful, talented people available who can come in and develop documentation, establish ongoing practices to avoid future documentation gaps, enhance the application (and databases) to provide new functions, weed out dead code, service-orient the application, conduct professional training to teach proper application maintenance practices, and so on. Such people are not free, although they may be more affordable in today's economy.
So how about it, Florida? (And I guess I'm talking to you, the good legislators in Tallahassee.) How about you hire some people -- ace COBOL (?) developers, a good application architect or two, trainers, whatever -- to go enhance your state's unemployment benefits application? You can solve this problem, and now is a great time to solve it.
by Timothy Sipples | January 9, 2009
Potpourri for Early January, 2009
Happy New Year everybody. Are you ready for a year without a financial crisis? Me too. Let's hope.
- IBM has a bunch of teleconferences scheduled on various mainframe-related topics. They're free, and often they're quite educational. Some examples: Four Expert Tools to Advise You with DB2 for z/OS, Benefits of DB2 Stored Procedures and Web 2.0, Let pureQuery Improve the Quality of Service and Reduce Costs for WebSphere and DB2 Applications, Web 2.0 Made Simple for System z, Exploring the Human-Centric Aspects of BPM for System z, Understanding the Impact of Networks on DB2 and IMS Performance, IMS and Web 2.0 Go to Work, Rational Developer for System z and Problem Determination Tools, WebSphere MQ for z/OS V7 as the Messaging Backbone for SOA/Web 2.0/File Transfer, and CICS Performance Series: Blow the Doors Off CICS and DB2. Update: There's also a webcast on Migration to z/OS 1.10. Update #2: Novell also has a webcast exploring the benefits of running SAP on Linux on System z.
- Over at z/Journal, Mike Moser puts mainframe software costs into welcome context ("Mainframe Software Costs Too High? Think Again!"), while Jon William Toigo wishes for a more rational 2009 in mainframe appreciation ("Returning to Business Value Focus in Mainframe Computing").
- Ted Banks over at Law.com shares his thoughts about recent IT trends. For example: "...I hate cloud computing -- relying on Web-based tools or 'Software as a Service.' Anything that takes power away from individual users and moves it elsewhere is horrible. Apart from debates about reliability and security, it is a retrograde move back to the era of the mainframe, where we were reliant on the high priests of the systems department to get anything done." Don't lawyers do things like sue if there's poor reliability or security? But I think Ted is missing the point. Centralized computing models make an awful lot of sense to solve many business problems. For example, does he really want to re-run NOAA's weather model on his own PC to confirm that it's going to rain tomorrow? Does he grow his own wheat to bake his own bread? Computing models succeed (or not) based on how well they (and the people running them) deliver valuable -- and, yes, reliable and secure -- customer services. And we need both centralized and decentralized capabilities, although lately as wired and wireless networks pervade our lives centralized is certainly roaring back into vogue, and the mainframe with it. But don't conflate lousy people with choice of computing model. It's certainly possible to have the right computing model and the wrong people, in which case the solution is to get better people.
- A long-awaited sequel to (or perhaps remake of) Disney's mainframe-centric film "Tron" is nearing production.
- Will the popular press ever get it right about mainframe-hosted applications? I'm still waiting after seeing this one: "...the computer mainframe handling unemployment claims is 30 years old and won't take many more technical improvements." I can guarantee that the State of Florida is not running a 30-year-old mainframe. And "won't take many more technical improvements"? What on earth do they mean by that? That the application is holding a picket sign and threatening to march on the state capitol building, angrily knocking on the doors of legislators? Good grief, this is lame, Florida. Go appropriate some funds and develop whatever improvements you want already. Want to write new functions in Java (to pick something at random)? Go for it -- you already have it (on that not-30-year-old mainframe). Geez!
- Anne Altman, IBM's head of System z, is #49 on somebody's list of people with the most "Federal IT Power."
- Dana Blankenhorn thinks Microsoft is becoming a mainframe software company. I wonder if Ted Banks (see above) is concerned.
- Gizmodo thinks Wal-Mart's mainframes are as big as the Death Star from Star Wars. Actually, no, the commercial freezers in Wal-Mart stores are bigger (and more numerous). But the animation is quite fascinating (take a look), and those (small) mainframes have had a big business impact.
- Glenn O'Donnell at Forrester Research also wonders what the heck is wrong with HP: "It is almost a bit of a religious point for HP; the company is trying to be the anti-mainframe, but in reality, you are ignoring a big piece of most businesses if you ignore the mainframe." Memo to HP: don't do religion.
- IBM veteran and industry pioneer Jack Kuehler passed away just before Christmas.
- BluePhoenix is helping a Scandinavian bank move its mortgage loan application from a Unisys ClearPath mainframe to IBM System z, including DB2.
by Timothy Sipples | January 8, 2009
A Tale of Two Mainframe Customers – One Growing and One Leaving the Mainframe
This is the tale of two mainframe customers. One customer has achieved a period of tremendous growth in its business, processing transactions on the mainframe, while reducing expenses and becoming more resilient. The other business chose to get off the mainframe at a significant cost and, in all likelihood, spends more today than it would have on the mainframe. What's interesting is that at one time, they shared the same system infrastructure. And Clerity, a consulting firm, would like you to believe that the non-mainframe customer got tremendous value in the move.
In any basic computer architecture class, a student will learn that the fewer the data moves, the better the performance. Now, in an era of regulatory compliance and privacy considerations, that is truer than ever, because each instance of data must be auditable and recoverable, which implies additional costs for each extra copy of the data. This becomes important when considering an outsourced computing environment. It appears that SIAC never got this level of education, while its customer, DTCC, seems to have excelled in this computer architecture class. Even funnier is that Clerity has decided that SIAC is a model customer… that doesn't bode well for its other consulting arrangements.
So what really happened?
DTCC’s trading business was growing tremendously. But let’s have them tell you, in their own words:
Sometimes "insourcing" pays off more than outsourcing. Until last year, two DTCC subsidiaries outsourced all their infrastructure support activities to the Securities Industry Automation Corporation (SIAC). Now, following the completion of a multi-year initiative, DTCC has cut costs and bolstered business continuity by insourcing the activities previously performed by SIAC into DTCC's infrastructure. The two subsidiaries are National Securities Clearing Corporation (NSCC) and Fixed Income Clearing Corporation (FICC). …
On top of strengthening the industry's business continuity and infrastructure, the project is yielding financial benefits, enabling DTCC to cut the industry’s overall operating expenses. In 2006, by leveraging DTCC's processing capabilities, insourcing has reduced DTCC's annual operating expenses an estimated $42 million, said William Aimetti, DTCC’s chief operating officer. This was one factor that enabled DTCC to lower its fees in 2006.
And going back to another DTCC newsletter, they explained that they got a 167% capacity improvement, without changing a line of code, because they reduced the number of data moves and connections necessary to process a transaction:
To keep ahead of transaction volumes that have been rising sharply over the past several years, DTCC has significantly increased the capacity of its mainframe database for equity processing. The system, called Trade Repository Processing (TRP), can now process at least 160 million sides per day. This 167% increase is nearly triple the previous capacity of 60 million sides.
What’s more, the TRP can handle the additional volume within the same time frames, thanks to changes that make the system perform more efficiently. In addition, for current volumes, the upgrade allows DTCC to deliver certain participant reports, such as the Consolidated Trade Summary, up to 45 minutes earlier.
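The arithmetic in the newsletter checks out: going from 60 million to 160 million sides per day is a 167 percent increase, and roughly a 2.7x multiple, hence "nearly triple":

```python
old_capacity = 60_000_000    # sides per day, previous TRP capacity
new_capacity = 160_000_000   # sides per day, after the upgrade

# Percentage increase over the old base: (160M - 60M) / 60M = 166.7%,
# which DTCC rounds to "167%".
pct_increase = (new_capacity - old_capacity) / old_capacity * 100

# As a multiple of the old capacity: 160M / 60M = 2.67x, "nearly triple".
multiple = new_capacity / old_capacity
```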
DTCC was able to do all this without modifying any of its customers' applications. While DTCC was hosted on the SIAC systems, it found that there were extra network hops and copies of data deployed. DTCC was also heavily dependent on SIAC to make changes to the infrastructure on a regular basis. And because both of the SIAC systems were located in the New York City area, DTCC was afraid that a single catastrophe could take out the redundant systems and affect availability. These were the fundamental concerns that led DTCC to move out on its own.
Prior to this decision, SIAC signed a multi-year agreement for software, systems, and services with IBM. This agreement included discounted pricing based on capacity growth projections that SIAC provided as its objectives. By sharing the mainframe infrastructure with DTCC, SIAC dramatically reduced its own operational overhead, which was predominantly associated with batch processing and account reconciliation in the evening, while DTCC used the same processing infrastructure for trade transactions during the day. Each also had applications that overlapped the other's, though.
When DTCC pulled its two applications from the SIAC system, SIAC was left with a lot of free daytime capacity but still a reasonably busy system in the evening. And SIAC was now completely responsible for the costs of this system. The growth it had promised to IBM would no longer be possible, and as such, the discounts it had been offered were no longer relevant. This underutilized mainframe was now quite a bit more expensive to SIAC than it had been when SIAC was sharing the expense with DTCC. That's a fact… nothing sweet about that.
Now SIAC could actually have downsized its mainframe and reduced its costs. Instead, it chose to "downsize" to a distributed environment. In doing so, it also needed to meet federally mandated business resilience requirements, something it had ignored on the previous mainframe, and build another data center.
So using SIAC’s own words:
In 2006, when the Shared Data Center team, the technology arm of the New York Stock Exchange (NYSE), evaluated its internal infrastructure in light of competitive market factors, changing regulations, and anticipated future growth, the decision was made to replatform its 1,660 MIPS mainframe workload onto IBM System p Model 595 servers running AIX and UniKix rehosting software from Clerity.
"Quite simply, we can now transact more business per hour at a lower rate," said Francis Feldman, Vice President of the Shared Data Center.
So let's parse this a little bit. Notice the timeframe: 2006… it's the same time that DTCC moved off the SIAC system. The 1,660 MIPS is the combined processing power for both DTCC and SIAC. The reality is that SIAC NEVER used all that capacity itself, even though it owned the system. So while the claim is factually true, it's misleading. "Changing regulations" refers to the need to develop a second site outside the New York metropolitan area. After converting the applications, SIAC had to create automation and recovery scripts and deploy the new servers at an alternative location as well. I am assuming that it licensed the software for those systems, too. And this required changes for all of SIAC's customers as well. That cost certainly isn't factored into the migration costs for SIAC. Finally, I wonder what SIAC's costs were when it shared the infrastructure… is the new solution more or less expensive than that environment? Unfortunately, we'll never know, because the new environment includes a new data center… but I could imagine the shared environment cost less.
So an alternative plan, at presumably far less expense, effort, and time, with little or no application changes, would have been to downsize the mainframe to the size appropriate to SIAC's new capacity requirements. To meet the resiliency objectives, SIAC could have installed a mainframe in another geographic location using IBM's Capacity Backup pricing, which would not have charged for software usage except during a disaster, while still allowing for regular disaster-preparedness testing with no additional licensing costs. Many of IBM's customers take advantage of this model for availability processing.
And has availability improved? Click here for a list of outages that SIAC experienced in 2008. All types, but not associated with the mainframe. Perhaps SIAC changed their reporting structure in 2008, but I can’t find many outages listed in a search on 2007, though there is an awful lot of information dating back many years on their site.
As for SIAC, I don't know, but they don't spend nearly as much time bragging about their infrastructure as DTCC does. That should give you a clue right there!
by JimPorell | January 6, 2009 | in Current Affairs, Economics, History, Systems Technology
The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.