Cloud Computing and the iPhone

I am sticking to my story -- the iPhone 3G is fantastic. There are some issues, however. The iPhone is much more than a "cell phone" -- it is a platform. The six basic elements of the platform are the iPhone itself, the network (AT&T in the United States), iTunes, the "App Store", MobileMe and, most importantly, the applications.

Some are saying that since the new iPhone 2.0 software is available for the original iPhone, there is no need to upgrade to the iPhone 3G. It is true that there is no need to, but there are a number of good reasons to. The new iPhone uses the "new AT&T" 3G network, which is claimed to be twice as fast -- as something. Speed claims are rarely delivered upon, but there is no doubt that the 3G network is faster. The receiver in the iPhone is also better, even when communicating with a non-3G AT&T tower. I have noticed at least a one-bar improvement here at the lake, where there is no 3G tower. The WiFi implementation is better too. I am not sure whether it is the hardware or the software that improved, but it is much more reliable and doesn't get confused about whether to use the cellular signal or the WiFi signal.

I am getting ahead of myself, but one of the neatest new applications is TruPhone. TruPhone allows you to make a phone call from your iPhone via WiFi even if there is *no* tower of any kind. This happens. I was visiting friends in New Hampshire last weekend and we had brunch at a nice place in a remote area. There was no AT&T or roaming partner signal. None. No service. The restaurant, however, had a very nice free WiFi signal. With TruPhone you can make calls to anywhere in the world at a very low price -- pennies per minute. If you are calling another TruPhone user, it is free. I made some calls with it today and the quality was quite good.

There are other reasons to get the new iPhone. It is a bit thinner and more rounded, and it feels really nice to hold. It is a joy to use. The 3G has a real GPS receiver, so when you use maps your position is not an estimate based on cell-tower triangulation -- the phone uses satellites to pinpoint exactly where you are. This opens up a slew of "location-based" applications -- where is the nearest pizza place? What are the nearest geocaches? How do I get from where I am to wherever? The battery life is claimed to be better, but I am not so sure of that. The iPhone has so much more to offer that I think usage will be higher, and effective battery life may actually be less -- that is the case for me so far. It is a good idea to have a car charger on hand. One of the irritating things about the original iPhone was that you couldn't plug your favorite headset into it without a special adapter. The new iPhone accepts any headset, no adapter required. Bottom line, it is a really great device. There are many "iPhone killers" out there and more coming, but I don't think they will match the overall experience of the Apple iPhone.

The network is another story, and I have written about it in not-so-glowing terms in each iPhone update. I do think it is getting better. As I have always said, it depends on where you live. In the Northeast, Verizon has better coverage, but AT&T is putting up new towers -- one just came online two miles from where I live in Connecticut. Naturally, major cities are covered. I also detect that AT&T customer service is really trying hard to satisfy its customers. The overall model of the industry is bad -- limited choice, two-year contract lock-in, and penalties if you want to move to something better.

iTunes continues to dominate online digital music sales but is facing more and more competition. I have been buying my music from Amazon. They have a nice downloader that puts the MP3 files directly into iTunes, and there are no digital rights management restrictions. I like this because I can put purchased music on the iTrike. One of the other great applications on the iPhone is Pandora. It has become my music of choice, and I play it through the Squeezebox. The Music Genome Project is awesome. If you love music, I highly recommend it.

iTunes is integrated tightly (as all things Apple are) with the App Store. Both present easy ways to spend your money from your iPhone. I see this as a huge emerging trend. Call it m-Commerce (mobile commerce) if you want. While sitting in the dentist's office awaiting your turn, you can buy music and applications from your iPhone. An eBay application lets you spend your money -- or monitor your auctions -- there too. On launch day earlier this month there were 500+ applications available for the iPhone. There will be many thousands of applications. So far, about 25% of them are free and supported by various flavors of advertising. You click to find the nearest pizza place and Apple gets a slice of the pie. Some are expensive but add huge value. I bought an aviation application for $69.99 that does everything a pilot can imagine. You can file flight plans with the FAA, check weather radar, airport runway lengths, pilot advisories, and much more. I am not a gamer, but millions of people are, and the iPhone's accelerometer lets you shake or wave the phone as input to a game. I have to admit that the Phone Saber is fun, albeit a bit geeky -- it lets you take on Darth Vader. The impressive part to me is that the applications are stored not only on the iPhone but also in iTunes. When you sync, you are syncing calendar, email, contacts, and the applications. When you click the App Store icon on the phone, it tells you if any of your apps have an update available. When you do a search at the iTunes Store, the results are organized by artists, albums, movies, and so on -- and now applications.

On the flip side, organization is an issue. So far I have 55 applications. I expect to get many more. The human mind is amazing in terms of icon recognition. You just know that the Phone Saber is at the upper left of the fourth page of applications. But at some point it is overwhelming. I expect Apple, or perhaps a third-party developer, will soon introduce an "app launcher" that lets you tag an application as news, weather, financial, aviation, game, etc. and then drill down to what you want.

Last, and I hope not least, is MobileMe. Apple says it is the "Simple way to keep everything in sync". The vision is great -- your photos, contacts, email, and calendar are all pushed to your iPhone from the "Cloud". You can make a change on the iPhone and it shows up in Outlook, or you can make a change in Outlook and it shows up on your iPhone. Those who work for companies that have Microsoft Exchange or IBM's Lotus Notes already have this kind of capability, but there are millions of us who are "independent" and have our own mail server or use Gmail, Yahoo!, or any of numerous other services. With MobileMe we can be like the "corporate" world, but we can set our own policies and practices. We can have Exchange or Notes without Exchange or Notes. The cloud approach is clearly the next big thing (see prior stories on this and also by Irving), but Apple has stubbed its toe big time here. Numerous analysts, bloggers, and experts have ripped them apart about the failings. As previously reported, I struggled with MobileMe the first few days, but then it began to work properly for a few days, albeit with some hiccups. Beginning this week it is not working properly. Calendar entries get duplicated, and synchronization is sluggish or at times doesn't work at all. It is not like Apple to fail big time like this, and I am sure they are scrambling to straighten things out.

I got an email from MobileMe@InsideApple.Apple.com the other day asking if I would be interested in a trial of MobileMe! Seems they didn't check their subscriber list first. The MobileMe web site says that "1% of MobileMe members have limited access to MobileMe Mail. Full service will be restored to these accounts on a rolling basis over the next few days". Ninety-nine percent and "a few days" were good enough in the old days, but not these days. I decided to try the online chat support to see if they could help resolve my problems. After sending my initial "instant message" I got a reply saying "A MobileMe Support Representative will be with you in approximately 26 minutes. We look forward to answering your questions". The reply arrived while I had stepped out of the room for a minute, and I then had to start over and wait another 26 minutes. After 3 hours and 14 minutes the support rep said he had to escalate the problem to a specialist who would contact me by email. More than two days have gone by and I have had no email from Apple.

This all reminds me of the fall of 1995, when we were preparing ibm.com to host the web site for the 1996 Olympic Games. It turned out to be the largest web site ever built. We had 54 outstanding engineers working on it, and it was a success. Fortunately, we were able to convince the company to make a large investment in the infrastructure. I remember saying that "we don't know how many people will come to the web site, we don't know when they will come, nor do we know what they will do when they get there". It was trial by fire. That was 13 years ago. The lessons learned in 1995 served IBM well, and it is now the largest web hosting company in the world. IBM doesn't always call it cloud computing, but it has built some of the largest clouds on Earth. Apple has a lot to learn. I am confident they will. Their brand loyalty depends on it.

Related links
• Other patrickWeb stories about the iPhone


by John R Patrick July 30, 2008

Response to Jeff Savit Blog

As part of the z10 announcement, IBM made some marketing claims about the large number of distributed Intel servers that could be consolidated with z/VM on a z10. The example cited used Sun rack-optimized servers with Intel Architecture CPUs. Sun blogger Jeff Savit objected strenuously to the claims, mainly because of the low utilization assumed for the Sun machines used in the comparison. You can read it here:

http://blogs.sun.com/jsavit/entry/no_there_isn_t_a

I responded, he responded. Then I was out of pocket for a while and did not respond soon enough, and his blog cut off replies on that thread. So I am putting my latest response here. Thanks to the Mainframe blog for providing the venue to do so. My latest responses to Jeff are interleaved with his comments below (they appeared in blue italics in the original).

Posted by Joe Temple on June 24, 2008 at 11:28 AM EDT

This format is very difficult for parry and riposte, but let's try. I would like to use different colors, but I can't (AFAIK) put in HTML markup to permit that. So: Joe's stuff verbatim within brackets, and each of his sections starts with a quote of a sentence of mine (which I identify, within quotes) for context. Each stanza identified by name and employer (this is Jeff speaking):

Joe(IBM): [[[Jeff, your post is rather long, and rather than build a point-by-point discussion too long for a single comment, I will put up several comments. Starting with the morals of the story -- there are several. • quoting Jeff: "Use open, standard benchmarks, such as those from SPEC and TPC."

Better to use your own. No one has hypertuned for them or designed specifically around them, so they have a better chance of representing reality. But be careful not to measure wall-clock time on "hello world" or laptops will beat servers every time.]]]

Jeff(Sun): In a perfect world, every customer would have the opportunity to test their applications on a wide variety of hardware platforms to see how they perform. But they don't, and they rely on open standard benchmarks to give them some information about how the platforms would perform. Or, they do have applications they could benchmark, but they're non-portable, or run solely on a single CPU (making all non-uniprocessor results worthless), or otherwise have poor scalability or any of a hundred other problems. Imagine comparing IBM processors based on the speed of somebody writing to tape with a blocksize of 80 bytes! Even if they get a useful result, the next customer doesn't benefit at all and has to start from scratch. It's not trivial to make good benchmarks that aren't flawed in some way. That's why the benchmark organizations exist -- to provide benchmarks that characterize performance and give a level playing field for all vendors. IBM, Sun, and others are active in them -- our employers must think they have value. Obviously there is "benchmarketing" and misuse of benchmarks. THAT is what I'm railing against. Hence my following bullet that says "read and understand". But frankly, benchmarks like SPECweb, SPECweb SSL, and SPECjvm, the SPEC fileserver benchmarks, and benchmarks like TPC.org's TPC-E provide representative characterization of system performance (with sad exceptions like TPC-C, which is broken and obsolete, but which IBM still uses for POWER). The characterization of TPC-C as "old and broken" may have something to do with Sun's inability to keep up on that benchmark. One of the characteristics of TPC-C that none of the other benchmarks has is that it includes at least some "non-local" accesses in its transactions. Sun's problem with this is that such accesses defeat the strong NUMA characteristic of their large machines. One result is that all machines scale worse on TPC-C than on the benchmarks Jeff cites. Since Sun is very dependent on scaling a large number of engines to get large-machine capacity close to IBM's machines, they are highly susceptible to this. The effect is exacerbated by NUMA (non-uniform memory access); that is, a flat SMP structure will mitigate it. The mainframe community's problem with TPC-C is that the non-local traffic is all balanced and a low percentage of the load. As a result, TPC-C still runs best on a machine with a hard affinity switch set and does not drive enough cache coherence traffic to defeat NUMA structures. When workload runs this way it does not gain any advantage from z's schedulers, shared cache, or flat design. Think of TPC-C as a fence. There is workload on Sun's side and there is workload on the mainframe side of TPC-C. All the industry standard benchmarks sit on Sun's side and scale more linearly than TPC-C. For workloads that are large enough to need scale and that run on the Sun side of the TPC-C fence, IBM sells System p and System x. When you consolidate disparate loads, the industry standard benchmarks do not represent the load, and with enough "mixing" the composite workload will eventually move to the mainframe side of the TPC-C fence. See Neil Gunther's Guerrilla Capacity Planning for a discussion of contention and coherence traffic and their effect on scale; particularly read chapter 8 to get an idea of how the benchmarks lead to overestimation of scaling capability. A lot of people have worked very hard to make them as good as they are.
IBM uses these benchmarks all the time -- with the notable exception of System z. System z is designed to run workloads with non-uniform memory access patterns, randomly variable loads, and much more serialization and cache migration than occur in the standard benchmarks -- workloads where strong affinity hurts rather than enhances throughput. It is the only machine designed that way (large shared L3 and only 4-way NUMA on 64 processors). Also, the standard benchmarks are generally used for "benchmarketing". As a result, the hard work involved is driven not purely by the noble efforts of technical folks that Jeff portrays, but by practical business needs, including the need to show throughput and scale in the best possible light. That's the point, isn't it? It works in a monopoly-priced marketplace where it doesn't have to compete on price/performance, as it does with its x86 and POWER products. Where else are you going to run CICS, IMS, and JES2? There are alternatives to System z for all workloads; it is a matter of migration costs versus the benefits of moving. Many applications have moved off CICS and IMS to UNIX and Windows over the years. Sun has whole marketing programs to encourage migration. In fact, a large fraction of UNIX/Windows loads do work that was once done on mainframes. As a result the mainframe must compete. Similar costs are incurred moving work from any UNIX (Solaris, HP-UX, AIX, Linux) to z/OS, or from UNIX to Windows. The other part of the barrier is the difference in machine structure. This barrier is workload dependent. Usually, when considering two platforms for a given piece of work, one of the machine structures will be a better fit. When moving work in the direction favored by the machine-structure difference, the case can be made to pay for the migration. This is what all vendors do. Greg Pfister (In Search of Clusters) suggests that there are three basic categories of work: Parallel Hell, Parallel Nirvana, and Parallel Purgatory. I would suggest that there are three types of machines optimized for these environments (blades in Nirvana, large UNIX machines in Purgatory, and mainframes in Hell). To the extent that workload is in Parallel Hell, the barrier to movement off the mainframe will be quite high. Similarly, attempts to run Purgatory or Nirvana loads on the mainframe will run into price and scaling issues. IBM asserts that consolidation of disparate workloads using virtualization will drive the composite workload toward Parallel Hell, where the mainframe has advantages due to its design features, mature hypervisors, and machine structure.

To the second observation about wall clock time on trivial applications: yes, obviously.

Joe(IBM): [[[quoting Jeff: •"Read and understand what they measure, instead of just accepting them uncritically."
Yes, particularly understand that the industry standard benchmarks run with low enough variability and low thread interaction that it makes sense to turn on a hard affinity scheduler. Your workload probably does not work this way.]]] 

Jeff(Sun): I'm not sure what's intended by that. Are you claiming that benchmarks should be run against systems without fully loading them to see what they can achieve at max loads? Hmm. Anyway, see below my comments about low variability and low thread count -- which apply nicely to IBM's LSPR. I guess I am claiming that the industry benchmarks basically represent Parallel Nirvana and Parallel Purgatory. I am asserting that mixing workload under a single OS, or virtualizing servers within an SMP, drives platforms toward Parallel Hell. The near-linear scaling of the industry standard loads on machines optimized for them will not be achieved on mixed and virtualized workloads. In part this is because sharing the hardware across multiple applications leads to more cache reloads and migrations than occur in the benchmarks. I see Jeff's reference to LSPR as a red herring for two reasons. First, while LSPR has not been applied across the industry, the values it contains have been used to do capacity planning rather than marketing. The loads for which this planning is done are usually a combination of virtualized images, each either running mixed and workload-managed work under z/OS, or running zLinux under VM. This could not be done successfully if the scalability were as idealized as the industry standard benchmarks suggest. Second, I do not suggest that LSPR is the answer, but rather that the current benchmarks do not sufficiently represent the workloads in question (mixed/virtualized) for Jeff to make the claim, as he did elsewhere in the blog entry, that z does not scale. Basically, to draw his conclusion he compares the LSPR scaling ratios to industry benchmark results on UNIX SMPs. This is not a good comparison.

Joe(IBM): [[[quoting Jeff: •"Get the price-tag associated with the system used to run the benchmark." Better to understand your total costs including admin, power, cooling, floorspace, outages, licensing, etc."

Jeff(Sun): That's what I meant. Great.  Because the hardware price difference that Sun usually talks about is only a small percentage of total cost.  The share of total cost represented by hardware price shrinks every year.

Joe(IBM): [[[quoting Jeff: •"Relate benchmarks to reality. Nobody buys computers to run Dhrystone." Only performance engineers run benchmarks for a living.]]]

Jeff(Sun): Sounds like a dog's life, eh? OTOH, they don't have users...

Joe(IBM): [[[quoting Jeff: •"Don't permit games like "assume the other guy's system is barely loaded while ours is maxed out". That distorts price/performance dishonestly." Understand what your utilization story is by measuring it. Don’t permit games in which hypertuned benchmarks with little or no load variability and low thread interaction represent your virtualized or consolidated workload. Understand the differences in utilization saturation design points in your IT infrastructure and what drives them."]]]

Jeff(Sun): Your comment has nothing to do with what I'm describing. What I'm talking about is the dishonest attempt to make expensive products look competitive by proposing that they be run at 90% utilization while the opposition is stipulated to be at 10%, and claiming magic technology (like WLM, which z/Linux can't use) to permit higher utilization and better cost per unit of work on your own kit. That's nothing more than a trick to make mainframes look only 1/9th as expensive as they are. Imagine comparing EPA mileage between two cars by spilling 90% of the gas out of the competitor's tank before starting. As far as "no load variability and low thread interaction", I suggest you take a good look at IBM's LSPR. See http://www-03.ibm.com/servers/eserver/zseries/lspr/lsprwork.html which describes long-running batch jobs (NO thread interaction at all) on systems run 100% busy (NO load variability). The IMS, CICS (mostly a single address space, remember), and WAS workloads in LSPR should not be assumed to be different in this regard either. This doesn't make LSPR evil: it is not -- it's very useful for comparisons within the same platform family. But consider SPECjAppServer, which has interactions between web container, JSP/servlet, EJB container, database, JMS messaging layer, and transaction management -- many in different thread and process contexts. I suggest you reconsider your characterization about thread interaction. Complaints about thread interaction and variability of load are misplaced and misleading. The comparison of zLinux/VM at high utilization with a highly distributed solution at low utilization is valid, and well founded on both data and systems theory. You could make similar comparisons of consolidated virtualized UNIX versus distributed UNIX, or VMware versus distributed Intel. Any cross-comparison of virtualized versus distributed servers will be leveraged mainly by utilization rather than by raw performance as measured by benchmarks. Thus the comparison Jeff complains about as dishonest does in fact represent what happens when consolidating existing servers using virtualization. My second point is that in comparisons between consolidated, mixed-workload solutions, the industry benchmarks are not representative of the relative capacity or the saturation design point of the systems in question. There is no current benchmark to use for these comparisons. That includes LSPR, Sun's M-values, and rPerf, as well as the industry benchmarks. None of them works. Each vendor asserts leverage for consolidation based on its own empirical results, or on perceived strengths in terms of machine design. I am saying that the scaling of these types of workloads is less linear than the industry benchmark results, and that some of the things z leverages to do LSPR well will apply in this environment as well.

Joe(IBM): [[[quoting Jeff: •"Don't compare the brand-new machine to the competitor's 2 year old machine" Understand what the vintage of your machine population is. When you embark on a consolidation or virtualization project, compare alternative consolidated solutions, but understand that the relative capacity of mixed-workload solutions is not represented by any of the existing industry standard benchmarks.]]]

Jeff(Sun): We're talking at cross purposes. What I mean is that one vendor's 2008 product tends to look a lot better than the competition's 2002 box, making invidious comparisons easy. Moore's Law has marched on. The truth is that when you do a consolidation you usually deal with a range of servers, some of which are four or five years old; a two-year-old vintage is probably fairly representative. In any case, Moore's Law does not improve utilization of distributed boxes unless you consolidate work in the process of upgrading. Unless a consolidation is done, utilization will drop when you replace old servers with new servers. For the consolidation to occur within a single application, the application has to span multiple old servers in capacity. Server farms are full of applications which do not use a single modern engine efficiently, let alone a full multicore server. Jeff's main argument is with the utilization comparison. The utilization of distributed servers, including HP's, Sun's, and IBM's, is very often quite low. It is possible to consolidate a lot of lightly utilized servers on a larger machine. The mainframe has a long-term lead in the ability to do this, which includes hardware design characteristics (the cache/memory nest), specific scheduling capability in hypervisors (PR/SM and VM), and hardware features (SIE). How many two-year-old, lightly utilized servers running disparate work can an M9000 consolidate?

Joe(IBM): [[[quoting Jeff: • "Insist that your vendors provide open benchmarks and not just make stuff up."
Get underneath the benchmarketing and really understand what vendor data is telling you. Relate benchmark results to design characteristics. Characterize your workloads. (Greg Pfister's In Search of Clusters and Neil Gunther's Guerrilla Capacity Planning suggest taxonomies for doing so.) Understand how fundamental design attributes are featured or masked by benchmark loads. Understand that ultimately the standard benchmarks are "made up" loads that scale well. Learn to derate claims appropriately by knowing your own situation. (Gunther's Guerrilla Capacity Planning suggests a method for doing so.)]]]

Jeff(Sun): This is not the "making stuff up" that I was referring to. I was referring to misuse of benchmarks in the z10 announcement, which IBM was required to redact from the announcement web page and the blogs that linked to it. I'm not arguing against synthetic benchmarks that honestly try to mimic reality; I'm arguing against attempts to game the system that I discussed in my "Ten Percent Solution" blog entry. I have explained the comparison made for the z10 announcement above. Jeff objects to the utilization comparison, which is legitimate: when servers are running at low utilization, most of them are doing nothing most of the time. That is the central argument for virtualization, which is generally accepted in the industry. I am also pointing out that the industry standard benchmarks are not created in a purely noble attempt to uncover the truth about capacity. In fact they are generally defined in a way that supports the distributed-processing, scale-out, client-server camp of solution design, which is why they scale so well. Think about it: in the industry standard committees, each vendor has a vote, and System z represents 1/4 of IBM's vote. Do you think there will ever be an industry standard benchmark which represents loads that do well on its machine structure? The benchmarks and their machines have evolved together. They can represent loads from single application codes that are cluster- or NUMA-conscious. But what happens to all of those optimizations when workloads are stacked and the data doesn't remain in cache, or must migrate from cache to cache? The point is that the relevance and validity of either side of this argument is highly workload dependent. The local situation will govern most cases. Neither an industry benchmark result nor a single consolidation scenario is more valid than the other.

Joe(IBM): [[[quoting Jeff: • "Be suspicious!"Be aware of your own biases. Most marketing hype is preaching to the choir. Do not trust “near linear scaling” claims. Measure your situation. Don’t accept the assertion that the lowest hardware price leads to the lowest cost solution. Pay attention to your costs, and don’t mask business priorities with flat service levels. Be aware of your chargeback policies and their effects. Work to adjust when those effects distort true value and costs."]]]

Jeff(Sun): With this I cannot disagree. That's exactly what I have been discussing in my blog entries: unsubstantiated claims of "near linear scaling" to permit 1,500 servers to be consolidated onto a single z (well, the trick here is to stipulate that 1,250 of the 1,500 do no work -- though by definition, servers running at low utilization are doing nothing most of the time), or to ignore service levels (see my "Don't keep your users hostage" entry). Actually, virtualization of servers on shared hardware can improve service levels by improving the latency of interconnects. I'll also add "beware of the 'sunk cost fallacy'": you shouldn't throw more money into using a too-expensive product that has excess capacity because you've already sunk costs there. Actually, adding workload to an existing large server can be the most efficient thing to do in terms of power, cooling, floorspace, people, deployment, and time to market, even if the price of the processor hardware is higher. These efficiencies, and the need for them, are locally driven. In general there may or may not be a "sunk cost fallacy"; in fact you should also be aware of the "hardware price bargain fallacy". Finally, Sun itself recognized System z and z/VM as "the premier virtualization platform" when Sun and IBM jointly announced support of OpenSolaris on IBM hardware.
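
A closing note for readers unfamiliar with the Gunther references above: his Universal Scalability Law gives a simple way to derate linear-scaling claims. Below is a minimal sketch in Python, as I understand the method from Guerrilla Capacity Planning. The contention (sigma) and coherency (kappa) coefficients are purely illustrative, not measurements of any real machine or benchmark; the point is only to show how serialization and cache-to-cache traffic bend "near linear scaling" downward as engines are added.

    # A minimal sketch of Gunther's Universal Scalability Law (USL).
    # sigma models contention (serialization); kappa models coherency/crosstalk
    # such as cache-to-cache migration. Both coefficients below are made up
    # for illustration; real values must be fitted to measured data.

    def usl_capacity(n, sigma, kappa):
        """Relative capacity of n engines under the USL."""
        return n / (1.0 + sigma * (n - 1) + kappa * n * (n - 1))

    for n in (1, 8, 16, 32, 64):
        benchmark_like = usl_capacity(n, sigma=0.02, kappa=0.0001)  # well-partitioned load
        mixed_virtual = usl_capacity(n, sigma=0.08, kappa=0.002)    # stacked, shared-cache load
        print("%2d engines: benchmark-like %5.1fx, mixed/virtualized %4.1fx"
              % (n, benchmark_like, mixed_virtual))

Fitting sigma and kappa to your own measurements, rather than to a vendor's benchmark curve, is essentially the derating exercise the book describes.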

by Joe Temple July 28, 2008 in Systems Technology

2Q2008 IBM Earnings

IBM CFO Mark Loughridge highlighted these facts from IBM's 2Q2008 earnings results:

1. System z revenue was up 32 percent for the quarter (year over year); capacity ("MIPS") grew 34 percent. That means System z gained market share again. It also means customers keep getting lower per-MIPS pricing, especially for non-U.S. dollar buyers. (That's just simple math -- see the quick arithmetic after this list.)

2. Loughridge said, "...frankly, we were sold out." Apparently IBM built System z machines as fast as it could in the quarter, and it still wasn't enough to satisfy demand.

3. System z demand was particularly strong in Europe and the Americas, and in the financial sector.

4. IBM shipped a record number of System z specialty engines in the quarter, indicating strong demand for placing new applications on the mainframe.

5. In June, IBM sold its first mainframe ever in Vietnam, to a financial institution.
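
A quick illustration of the "simple math" in item 1, using only the two growth figures quoted above: if revenue grew 32 percent while shipped MIPS grew 34 percent, then revenue per MIPS changed by a factor of roughly

    1.32 / 1.34 ≈ 0.985

that is, about a 1.5 percent decline in average revenue per MIPS year over year, before any currency effects for non-U.S. dollar buyers.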

by Timothy Sipples July 17, 2008 in Economics

Mid-July 2008 Potpourri

1. IBM and Sun StorageTek both announced that they are breaking the 1 TB barrier. Now a single enterprise tape cartridge will hold 1 TB uncompressed. For IBM it's the TS1130, shipping in September. Sun ships the T10000B this month.

Remember, get those tapes encrypted. Do you really want to read in the newspaper that your company allowed terabytes of customer records to leak? Hundreds of gigabytes leaking is bad enough. Encryption is standard on the IBM drive and optional on the Sun StorageTek drive. If you do buy the T10000B, our strong advice is to buy it with the encryption feature.

2. IBM is pouring a cool $1 billion into updating its East Fishkill, New York, semiconductor plant. Another $0.5 billion goes to a research center in Albany. East Fishkill's most important product? The very special System z microprocessors, currently quad-core 4.4 GHz beasties.

3. Robert Crawford has some ideas about how IBM should bundle mainframe software.

4. AmeriServ Financial saves $265,000 by moving to a new mainframe. And that figure must be accurate, because that's what the company told the Securities and Exchange Commission in their earnings filing. Good work, AmeriServ.

5. Sun upgraded their M4000, M5000, M8000, and M9000 servers with a newer Fujitsu-supplied Sparc processor. John Fowler, Executive VP at Sun, had to say something, so he remarked, "Today, we're freeing customers from the constraints of the mainframe with open systems along with long-term investment protection features that can future-proof their datacenters."

Mr. Fowler didn't elaborate on what those constraints might be, or on how a minor speed boost in a Sparc chip (which Sun evidently no longer invests to produce itself) has any new relevance to, say, owners of business-critical z/OS-based application portfolios. Although perhaps he was referring to partner Fujitsu's mainframes. Fujitsu has failed to innovate or even enhance its mainframes, having effectively frozen its MSP and XSP operating systems. IBM is happy to talk with Fujitsu MSP and XSP customers, almost all of whom are in Japan, about how the best and least risky business choice, by far, is to move to System z.

We're also not sure if Mr. Fowler consulted with the Sun StorageTek marketing team, which had a much more significant product announcement as mentioned above — a product which gets attached in huge numbers to mainframes.

Our friendly advice to Mr. Fowler is that Sun should help customers rationalize their data centers and lower their energy costs. Sun might have some market opportunity there if they can increase the performance and virtualization potential in their servers. We also think Sun should continue to broaden its software product portfolio, including extending that portfolio even more to System z Linux and z/OS. Solaris for System z, for example, is intriguing to us.

Unfortunately the next day The Wall Street Journal reported that "Sun Lowers Earning Projection."

6. The U.S. Department of Defense's IT agency, DISA, has enabled its 45 mainframes for cloud computing. Now users can visit an internal Web site and request IBM mainframe and other server resources. In 24 hours or less, they get their server(s), on demand. Nice!

7. IBM is unveiling its plans for a new product: WebSphere MQ File Transfer Edition, including a version for its flagship z/OS operating system. I find that many organizations are struggling with FTP, trying to use it as an application integration vehicle. Sure, FTP is ubiquitous, but it's typically pretty terrible for tying application services together. For one thing, it's highly prone to failure and thus a management nightmare (see the brief sketch after this list for a picture of why). So I'm very pleased to see this statement of direction, which should help organizations start improving application interaction links, a "Step Zero to SOA" in effect.

8. Meet Bill Kegley. Bill is 69 years old. He used to be a mainframe programmer, a noble and modern profession that kept him stimulated, focused, and solvent. But tragically he left his job. He then got bored, and he spun out of control. Now he's broke; his life ABENDed. As Pennsylvania's Pocono Record reports, "Hooked on sex: Man says escort addiction cost him $200,000. 'Stupid, huh?'"

We admire Bill's parsimonious diagnostic insight, something he no doubt learned before he retired. Let's hope he makes it back into a successful mainframe-oriented profession.
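
To illustrate the point in item 7 about why raw FTP makes a fragile application-integration vehicle, here is a minimal sketch of the kind of script many shops depend on today. The host, credentials, and file names are hypothetical, and the code uses only Python's standard ftplib; the interesting part is everything the script does not do.

    from ftplib import FTP  # Python standard library

    # Hypothetical endpoint and credentials, for illustration only.
    HOST, USER, PASSWORD = "ftp.example.com", "batchuser", "secret"

    def send_file(local_path, remote_name):
        """Push one file to the remote server and hope for the best."""
        ftp = FTP(HOST)
        ftp.login(USER, PASSWORD)
        try:
            with open(local_path, "rb") as fh:
                ftp.storbinary("STOR " + remote_name, fh)
        finally:
            ftp.quit()

    # What a script like this does NOT provide -- and what typically turns
    # FTP-based integration into a management nightmare:
    #   * no checkpoint/restart if the connection drops mid-transfer
    #   * no integrity check that the bytes arrived intact
    #   * no assured once-only delivery, auditing, or central monitoring
    #   * credentials and error handling scattered across hundreds of scripts
    send_file("/data/out/claims.dat", "claims.dat")

Managed file transfer products exist precisely to take those concerns out of ad hoc scripts.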

by Timothy Sipples July 17, 2008

South Dakota Migrates (1 Application) Off Mainframe; Chaos Ensues

Certain sections of the government always seem to be reacting to long-ago technology fads, and a good example is "mainframe downsizing." As we've documented in exhaustive detail, the modern mainframe is enjoying a renaissance for myriad reasons. Unfortunately the good people of South Dakota are being let down by their state government leaders.

Yes, let's be blunt: somebody(ies) screwed up big. As if $4.50 gas wasn't enough to worry about in a rural state with little or no public transportation, car and truck owners now have to wait in long lines for registrations they aren't even getting. That can't be helping state revenue collection. Now where's the accountability? Here in Japan we'd have the equivalent of the state Transportation Secretary's resignation by now. Instead we're hearing a lot of clearly lame excuses.

It's a little tough being an armchair quarterback several timezones away, hearing only the lame excuses, but I'm going to try anyway. I still have no earthly clue why otherwise smart people confuse application changes with platform changes. Again, as we've also documented in exhaustive detail, if you want to run some hunk of uber-cool 64-bit Java written by vegan cattlemen five minutes ago, go for it — your mainframe will happily oblige. One of the first rules in IT project management when you do change something is to contain the number and scope of changes as narrowly as possible to accomplish the mission. (Interestingly that's exactly what IT service company sales representatives don't like — they love the over-sized "big bang" projects — but I digress.) Evidently that basic rule wasn't practiced in this particular case, and now South Dakota is suffering.

by Timothy Sipples July 11, 2008

The Pitfalls of Mainframe Linux

In another forum, an IT project manager from the financial sector asked what might be the downside of mainframe Linux. Since so many have touted the positive aspects of Linux on System z, what, for balance, are the drawbacks? Excellent question!

My answer ...

I come from the "big iron" side, but have been a multi-platform player for decades so have some objectivity. Disadvantages of Linux on the mainframe are few and include:

  • high entry bar because of the class of hardware
  • lack of a graphical console and confusion over text-mode consoles
  • emotional stigma attached to the mainframe stereotype

The first challenge is the bar to entry.  Even a low-end System z represents a bigger up-front investment than most laymen can make, so running Linux there is beyond the reach of hobbyists or casual Linux  users. When such a user shifts and begins to consider Linux professionally, there is a gap due to lack of exposure; they are unfamiliar with the mainframe port. As hardware goes, System z scales extremely well in both directions, up and down. But the very lowest end of the action is the realm of other hardware.  (got hand-held?)

A second negative is lack of sexy graphics.  While there are plenty of graphical devices for System z and its predecessors (S/390 and older), the Linux port makes no use of them. You can run X windows applications, as most of us mainframe Linux users do every day.  But the physical display for such use is attached elsewhere. This goes hand-in-hand with the lack of warm fuzzy feelings many consumers have about their computers: they can't usually touch and see Linux on System z so they don't get to know it in the same intimate way. Virtual machines help, and z/VM is the most robust virtualization environment, but it still does not fill in this gap of a graphical console for Linux.  (Then again, some users have the same problem with virtual machines as with mainframe hardware because of the need to touch, see, feel, ... scratch, sniff.)

But surely the worst thing about Linux on the mainframe is the political baggage of the word "mainframe". When people hear it, they immediately think of green-screen applications which they hated in prior experience. Forget the advantage that the hardware can support retro code, letting companies defer costly package upgrades or painful package replacements. Never mind that the hardware and traditional operating systems have nothing directly to do with the in-your-face formalities of business. "It's a mainframe and we know what that means."  [sigh]

Our friend in the financial world probably expected to hear about dollar costs. Sorry, Charlie.  Better fishing elsewhere for that kind of stuff. He did, however, ask an important and uncomfortable question.  Now, I do not work for IBM and hold no stock in that company so my opinion is not tainted by personal gain: I can fairly say that the biggest con to mainframe Linux is social. There is also a real boundary for low-end scale down, but it is minor.

Mainframe Linux pitfalls?  Few and hard to find.

-- R;

by sirsanta July 10, 2008

Forrester: Service Level Shortcomings Are Pervasive & Costly

Stephen Swoyer over at Enterprise Systems Journal reports on a new Forrester Research study which finds serious deficiencies in IT service delivery. According to the study, IT meets Service Level Agreements (SLAs) only about 75% of the time. Moreover, business users often find the SLAs woefully deficient — both poorly defined and, even when written and enforced, too limited to serve more demanding business needs.

Anecdotally I am finding the same situation among IT organizations around the world. Many organizations fail to define service levels in business terms, and they often have poor dialog with business stakeholders to help them understand risks and outcomes. They do not even agree on how to measure the outcomes, and frequently there is no measurement. It all sounds disturbingly familiar.

I remember one recent case when a major insurance company went into production with infrastructure to support its entire core application portfolio — underwriting, claims, etc. — but the IT service team failed to provide any disaster recovery. (Yes, literally zero DR.) One data center fire or explosion would have put the company out of business, probably disrupted that country's entire economy, and sent a bunch of company executives to prison. The CEO, to his credit, found out about the gap and immediately ordered expensive remediation. Of course that remediation meant the whole IT project had a negative return on investment, but that's a story for another day.

So what do Forrester's findings have to do with mainframes? Everything, and in at least two ways:

1. Mainframe technologies and disciplines encourage the measurement and management of well-defined service levels to meet or exceed business goals. That remains a serious set of advantages for the platform. If Forrester is correct, that set of advantages is growing over time as business users become more demanding and as IT becomes ever more essential to myriad business processes.

2. If you have a mainframe, you have a challenge if you are not delivering these advantages to your customers. I still run into shops that schedule outages to IPL (reboot) every week to "fix" a 30-year-old (and long-ago defeated) memory leak. Come on! Do you think your customers appreciate that? Are you still taking your online systems offline in order to run batch? It's 2008 — why? If you don't have Parallel Sysplex, why not? If you do have Parallel Sysplex, is it only for pricing purposes (a "ShamPlex")? Why not actually use what you paid for? Why are you still taking all of DB2 out of service for a version upgrade? Why are you taking the whole Sysplex out of service when you upgrade a frame or microcode? In other words, why are you forcing unnecessary outages on your users? Have you asked them lately how much they'd prefer to avoid outages?

Forrester suggests that the bulk of improving SLAs involves sitting down and talking in business terms with business users. Shocking, I know. :-) But are you?

I see a lot of RFIs and RFPs in my work. I am not fond of them, but I particularly dislike the ones that have little or no appreciation for service level requirements. If you're putting "99.99%" in your RFP and think you're done, you're in serious trouble. Most vendors (including IBM often enough) think you are describing a limit for unplanned outages. Are you, and is that what your business users expect? Then is it OK to have a planned outage every week for 6 hours for backup? (Salespeople being what they are, what do you think their answer is?) And that's just the start. For example, in disaster recovery — you did remember that, right? — do your users want full production capacity? Most vendors won't include that. (And don't you still need to develop and test at some point after a disaster? That may be the most critical time to develop and test. Where's that capacity?) What are the RTOs and RPOs for each business service? (You do have RTOs and RPOs well defined, right?) What sort of failures do you want to protect against? (Have you done a business risk analysis of some kind?) And, assuming you do a good job articulating all your requirements, how will you prove that the vendor is proposing a solution that will actually meet or exceed your standards?
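
To make the "99.99%" point concrete, here is some back-of-the-envelope availability arithmetic (illustrative figures, not a quote from any vendor's SLA):

    99.99% availability allows about 0.0001 x 8,766 hours ≈ 53 minutes of outage per year.
    A "routine" 6-hour planned outage every week adds up to roughly 6 x 52 = 312 hours per year,
    which by itself pulls overall availability down to roughly 96.4%.

If the 99.99% in your RFP covers only unplanned outages, those two figures describe very different user experiences, which is exactly why the requirement has to be spelled out.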

I think it's also a good idea to understand the costs and implications of incremental SLA improvements. For example, if today you buy something that requires 6 hours of outage every week, plus 24 hours of outage to upgrade certain infrastructure elements (such as the database version), how expensive would it be (and would it even be possible) to close those service level gaps? Business needs change, after all: let's call that service level scalability. Can you think of a platform that can manage to multiple service levels, and that can provide easy and comparatively low-cost service level upgrades as business needs change?

I still vividly remember one conversation between a Web development team and a mainframe operations team. (Why they weren't talking with one another before our meeting is a mystery.) The Web team said, "We cannot take the Web application down every weekend, so we need to rehost the application from the mainframe." The mainframe operations team replied, "So, you want us not to IPL that LPAR each weekend?" Web team: "Uh, I think so, but what's an IPL?" Ops team: "An IPL is a reboot. OK, done. We won't IPL that LPAR this weekend or any future weekend. We'll send you a new SLA for your approval." Web team: "Why didn't you do that before?" Ops team: "Because you didn't tell us [in precisely the technical way we expected]." Web team: "It's just that easy? Do you need to do anything?" Ops team: "We weren't IPLing most weekends anyway — we just had that possible planned outage listed in our previous SLA with you. No, we don't need to do anything except send you a new agreement. We'll keep your application up and running." Web team: "Uh, thanks."

Yes, I agree with Forrester: IT organizations need to do much better in these areas. What do you think?

by Timothy Sipples July 6, 2008 in Economics

Green Data Center Man

Will he save the day?

Some fun over the long weekend:

http://www.youtube.com/watch?v=VZUO5W7O7Gk&NR=1

by Boas Betzler July 4, 2008



The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.