Application Development


Free Stuff for Your Mainframe: 2013 Edition

FREE! That's a word we all like to hear, though maybe "no additional charge" is more truthful since nothing in life is truly free. In the past I've listed many of the mainframe freebies available, and those posts have been popular. This post is an update since there are so many more mainframe freebies now. Please do grab these freebies, explore, and put them to productive use. After all, the price is right.


Free Mainframes

Free mainframes? Did you say free mainframes? Yes, you can get access to mainframes free of charge with certain caveats. Here are some examples:


Free Mainframe Operating Systems

Linux is an open source operating system licensed per the GNU General Public License (GPL). This type of license means that you don't have to pay a license fee to obtain and use Linux, though Linux distributors (such as Novell and Red Hat) charge fees if you want their optional support services. Here are some examples of Linux distributions and other operating systems available for zEnterprise:


IBM Freebies for z/OS, z/TPF, z/VSE, and z/VM


Other Freebies (Mostly for z/OS)


I'm sure I'm only providing a partial list of mainframe freebies, but it's a start. Have fun!

by Timothy Sipples August 29, 2013 in Application Development, Economics, Linux, z/OS

NASDAQ Needs a Mainframe

NASDAQ needs a mainframe with mainframe software engineering.

by Timothy Sipples May 21, 2012 in Application Development, Financial

Explosion of New Mainframe Software

I continue to marvel at how much new mainframe software is being introduced, and not just from IBM. Let's take a quick and necessarily incomplete tour:

  • IBM Financial Transaction Manager: Provides new and enhanced, pre-built, ready-to-roll support for financial industry transactions and messaging via SWIFT, as one example.
  • Tivoli System Automation Version 3.4: Extend automation throughout the zEnterprise and beyond, across many types of virtualized environments. A very important ingredient in successful cloud deployments now.
  • WebSphere Operational Decision Management: Add business rules flexibility to your enterprise applications, regardless of programming language. Helps dramatically cut down on the amount of coding you need to do.
  • WebSphere eXtreme Scale Version 8.5 and WebSphere Application Server Version 8.5: Exciting for their performance, support for the latest cutting edge Java Enterprise Edition standards, and the new lightweight Liberty Profile deployment option.
  • Business Process Manager Version 8.0 and Business Monitor Version 8.0: New iPhone/iPad capabilities for viewing, managing, and participating in sophisticated, optimized business processes.
  • CICS Transaction Server Version 5.1: Lots of improvements, including CICS's very own sophisticated, standards-based Web user interface environment (with JSPs, etc.), support for the WebSphere Liberty Profile, a big leap in Java performance and flexibility, pre-configured MQ DPL support for containers (no more 32K limit!), and lots more 64-bit support, among other features. A beta version of CICS TS 5.1 will be publicly available for download.
  • CPLEX Optimizer: Lots of mathematical optimization routines, ready to use right from your core applications on your mainframe.
  • Tivoli OMEGAMON XE Version 5.1: Wow, they dramatically enhanced the 3270 interface and made the graphical interface easier to deploy. I love the new interface!
  • GT.M from FIS Global: This is a very high performance key-value "NoSQL" database, available as open source on PCs for developers but also now available on z/OS and Linux on z with full support from FIS. GT.M is the foundation for FIS PROFILE core banking applications (now available for z/OS as well), but it is also a very popular execution environment for applications written in the M programming language, also known as MUMPS. The healthcare industry is chock full of important MUMPS applications, including the open source VistA software created by the U.S. Veterans Administration. Thus GT.M provides a wonderful new option for consolidating and simplifying thousands of healthcare industry applications onto IBM zEnterprise, some of which are still running on old DEC VMS systems, many of which are mission-critical.

by Timothy Sipples May 10, 2012 in Application Development, Cloud Computing, Innovation

Class Reunion: "What is a Mainframe?"

Here's an animated video depicting a fictional conversation between two former high school classmates at their reunion. One is now a "genius" bank manager, and the other is a mainframe programmer. One is at least a bit smarter than the other. Enjoy.

by Timothy Sipples January 24, 2012 in Application Development

New WebSphere Application Server Liberty Profile

A large and growing percentage of mainframes run Java™ code. Even when you license only z/OS, you get Java at no additional charge. CICS Transaction Server, IMS, DB2, WebSphere MQ, Linux on zEnterprise — the list goes on and on — all support Java. If you want to write or run Java on the mainframe, there's nothing stopping you. Go for it!

I'm quite pleased to see that IBM has announced its beta program for WebSphere Application Server Version 8.5. One major new innovation is the WAS Liberty Profile which supports both z/OS and Linux on zEnterprise. The Liberty Profile for z/OS is tiny (by today's and yesterday's standards): the download is only 32 MB. It starts quickly and consumes very little memory. And you can download the beta version now to try yourself. Of course, anything that can run on the Liberty Profile can also run on WebSphere Application Server if/when you're ready. That's because the Liberty Profile is WAS, but with as-needed/where-needed function delivery, depending on your application's requirements. And yes, of course, you can access all the helpful JZOS methods from the Liberty Profile for z/OS.
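
Here's a minimal sketch of what those JZOS methods look like in practice, whether called from a plain batch main() or from a web application under the Liberty Profile. It assumes the IBM Java SDK for z/OS (which supplies the com.ibm.jzos classes), and the dataset name is a made-up placeholder:

    import com.ibm.jzos.ZFile;
    import com.ibm.jzos.ZUtil;

    // A sketch, not a recipe: read a traditional z/OS dataset record
    // by record and decode it from the platform's EBCDIC codepage.
    public class ReadDataset {
        public static void main(String[] args) throws Exception {
            // "//'HLQ.SAMPLE.DATA'" is a hypothetical dataset name.
            ZFile zfile = new ZFile("//'HLQ.SAMPLE.DATA'", "rb,type=record,noseek");
            try {
                byte[] record = new byte[zfile.getLrecl()];
                String encoding = ZUtil.getDefaultPlatformEncoding(); // e.g. IBM-1047
                int len;
                while ((len = zfile.read(record)) >= 0) {
                    System.out.println(new String(record, 0, len, encoding));
                }
            } finally {
                zfile.close();
            }
        }
    }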

I expect this new WebSphere Liberty Profile will be extremely attractive to mainframe customers and to mainframe software developers. (Did I mention it's tiny?) Please go give it a try today and let IBM know what you think.

by Timothy Sipples December 21, 2011 in Application Development, Innovation, Web Technology, z/OS

Windows Server 2008 and Itanium: No, Obviously Not Better

I had a really good laugh watching this video:

That video was uploaded to YouTube on November 29, 2009 — less than two years ago. Yet barely four months later, on April 2, 2010, Microsoft announced that Windows Server 2008 R2 would be the last Windows operating system for Itanium systems.

No, Windows 2008 and Itanium are not a "better alternative to the mainframe." They aren't even alternatives at all.

The Itanium turmoil in recent months is a good reminder of one of the hallmarks of mainframe computing: durability. As IBM celebrates its 100th birthday this week, I think the company's most astonishing accomplishment is the continuing fulfillment of a promise made in 1964, with the announcement of the System/360. It was a simple but powerful promise, one that requires extraordinary focus and continuous investment to keep: never again would people have to re-write applications unless there was a business reason to do so.

Here we are, over 47 years later, and IBM has kept that promise. Some people say the mainframe is "old." No, it's Itanium that's old, because Itanium couldn't even accomplish what the System/360 did starting in the mid-1960s: preserve your investment in your valuable code, written in any language — even 24-bit assembler.

Now, that's not to say you should keep all that old code. You might improve it, extend it, renovate it, adapt it, enable it, or otherwise do something with it. But those decisions should be based insofar as possible solely on business reasons, not on the whims of vendors. Breaking code is really, really expensive. It's durability that's extremely valuable — and thoroughly modern.

So if you've got a business process that you want to automate, and if the process (or at least its steps) might be around for a while, it's a very good idea to develop on the mainframe where you can keep running as long as you like. But it's not only that. That old code isn't trapped in some time capsule. You can fold, spindle, and mutilate it as much or as little as you want. You can run that old code right alongside code written 5 minutes ago, with old and new cooperating in myriad ways. It really is a magnificent accomplishment, one that hasn't been replicated.

Happy 100th birthday, IBM. And thanks for the durability.

by Timothy Sipples June 17, 2011 in Application Development, History

Yet Another Reason to Like COBOL

Last year I spotted this full size display advertisement in one of the Tokyo Metro's stations, alongside the beer, fashion, and other usual billboards:

[Photo: a COBOL services advertisement in a Tokyo Metro station, June 2009]

The ad encourages people to contact the COBOL Corporation, which has teams of happy COBOL programmers (judging from their Web site) ready and able to help your business modify its COBOL applications to meet new business requirements, quickly and without fuss. It's an on-demand COBOL programming hotline, in other words, and it's just as convenient as ordering sushi for delivery.

Let me cut through the noise: COBOL is a fantastic business application programming language. (So is PL/I, and so are some others.) It performs extremely well, it's adaptable, it's versatile, it's incredibly durable, it's object-oriented if you want (and procedural if you don't), mere mortals can read it, it's easy to learn and easy to troubleshoot, it's graphical, it has world-class solution accelerators like rules engines and data transformation kits, it supports XML and Unicode and EBCDIC, it runs the world (financially speaking, at least), and, in short, it simply works.

There are at least a million professional COBOL programmers. Just pick up the phone and dial.

by Timothy Sipples June 24, 2010 in Application Development

Java Grown Up with the System z10

If you read the newspaper articles and watch IBM executives speak on television about the new System z10, you will hear them use some interesting phrasing concerning performance: "...up to 100% faster for CPU-intensive applications."

That's a bold claim, but I think what's most interesting about the System z10 EC is that, for perhaps the first time, this machine is a mainframe-supercomputer. Its quad-core processor technology is state-of-the-art in every respect — not just in I/O performance and execution integrity, but in sheer number-crunching punch. The hardware decimal floating point support is one example, but another example is how Java and XML workloads behave on this machine compared to its predecessors, which were already quite good.

I guess it's no secret that Java simply requires more processing capacity than, say, compiled C/C++ or COBOL, ceteris paribus. That's true on any platform. Processor capacity is never free — except during off-peak hours :-) — but in the large scale enterprise systems world (i.e. mainframes) we care about how code scales, because we worry about supporting the largest number of users as economically as possible. So we keep COBOL, PL/I, C/C++, and other code — and create billions of new lines each year — in part because the code execution is efficient and has evolved with the organization's performance needs in mind.

Before I was born, the great debate was between COBOL and Assembler. High-level languages like COBOL were slower and less efficient but more portable, and Assembler was peppier but harder to learn and maintain. The high-level languages mostly "won." However, when IBM created the System/360 architecture, Assembler got a portability boost of its own: code could run unchanged on any bigger System/360 or any successor. A lot of that Assembler is still running, and a lot of people are still writing new Assembler.

Anyway, there's a similar debate concerning Java, although that debate is probably winding down now. IBM and other vendors keep incrementally improving Java technology, with progressively faster just-in-time compilers and better platform optimizations. In 2004, when IBM introduced the zAAP technology, Java economics radically improved, and Java invaded huge numbers of mainframe sites. And now, in early 2008, we have the System z10. The new quad-core design once again radically improves Java's economics, with Java typically much closer to that "up to 100% improvement" figure than other workloads. So if you roughly double the core Java performance, increase the number of cores to as many as 64 per machine, triple the memory (for those memory-hungry Java applications, to trade memory for CPU), steer work better using HiperDispatch, allow up to 32 of those 64 cores to be license-free zAAPs, and run the whole thing on zNALC-priced z/OS LPARs that enjoy another "technology dividend," what does that do to the economics?

It means the System z10 is the ultimate Java platform, that's what. And the machine still runs everything else you ever wrote (or will write) in the other languages, faster and better.

Now, contrary to popular belief, path length still matters. Java is not the universal programming language; no language is. (Well, maybe Rational EGL, which lets you program in one easy language and deploy to either a COBOL or Java runtime, without learning either language. Darn useful, that.) But the new System z10 challenges traditional thinking: what if mainframe processing power were comparatively cheap and abundant, especially for Java? What if mainframes really were supercomputers?

System z10 is a disruptive technology.

It's fascinating working with Java developers who experience running their code on the mainframe for the first time. Their first reaction is typically, "It can run?" Then, "Wow, it runs!" Yes, write-once/run-anywhere really works, at least for J2EE applications.

But the next reaction is often, "Are you sure what the mainframe is saying about my code is true?" That's because there are no coding secrets when you run on the mainframe. WebSphere Application Server for z/OS can produce a flood of juicy SMF records to help expose poor quality (e.g. poor performing) code quickly and precisely. Tools like Jinsight (free and exclusively for System z) and Tivoli Composite Application Manager peer even deeper inside. The mainframe is the best Java diagnostic environment on the market, at least "out of the box." All this introspection can be a blow to the sometimes fragile egos of application developers who suddenly learn that, no, code to look up an account balance probably shouldn't require two full CPU-seconds per user. Make sure you understand that psychological dynamic.
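
If you're curious what that kind of accounting looks like, here's a rough sketch using only standard Java APIs. It's a portable approximation of the per-request CPU figures SMF reports, not the WAS instrumentation itself, and lookupAccountBalance() is a hypothetical stand-in for the code under test:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    // A sketch: per-thread CPU accounting via standard JMX beans.
    public class CpuCost {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            long before = threads.getCurrentThreadCpuTime(); // nanoseconds
            lookupAccountBalance();
            long after = threads.getCurrentThreadCpuTime();
            System.out.printf("CPU consumed: %.3f ms%n", (after - before) / 1_000_000.0);
        }

        private static void lookupAccountBalance() {
            // Hypothetical workload; substitute the code you want to measure.
            double total = 0;
            for (int i = 0; i < 10_000_000; i++) total += Math.sqrt(i);
        }
    }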

One solution is to award bonuses to developers who meet or exceed performance targets. There are some complexities in how to run such a program, but the concept makes sense and can help establish a new performance-oriented esprit de corps within the development community.

[Screenshot: a sample Jinsight display]

by Timothy Sipples February 28, 2008 in Application Development

zSummer University in Germany

From August 27th to September 7th, the IBM Development Lab in Boeblingen, Germany, will be hosting this year's zSummer University: two weeks full of lectures, practical workshops, and programming courses where selected students will learn about important System z hardware and software products and can put their new knowledge into immediate practical use. There will be plenty of opportunities for intense discussions and networking, not only during the workshops and courses, but also at the kick-off meeting and half-way party. Besides touching upon some more general topics like the Cell Broadband Engine processor and novel programming languages, the zSummer University will focus on:

  • System z hardware architecture,
  • z/Linux, z/OS, and z/VSE,
  • System z administration,
  • CICS, IMS, and DB2 on System z,
  • WebSphere on System z,
  • WebServices and SOA,
  • The Gameframe.


If you are a student of an advanced studies program in computer science or a related field of study at a German university, are eager to learn more about System z, and have time from August 27th to September 7th, then apply for the zSummer University by sending us your application, including your resume and your motivation for applying, by July 31st. For the 25 participants, IBM will provide accommodations in Stuttgart city and free meals at the development lab. Contact information as well as further details are available at http://www.ibm.com/de/entwicklung/jobs/summer_uni.html .

(Stefan Letz contributed this posting)

by Boas Betzler June 27, 2007 in Application Development

No Coder Left Behind

A Mainframe easy enough for my brother's kids


As escalating IT security threats, datacenter utility bills, and other market conditions continue to drive new customers to the mainframe, we're noticing an increased demand for mainframe skills. While a series of initiatives over the past few years, like the Master the Mainframe contest, have trained over 15,000 people in mainframe skills, today IBM announced a goal to turn every computer science student in the world into a mainframe-capable coder or administrator by 2011. This is supported by a $100 million investment to make the platform so simple that even a Windows programmer can use it. <g>


As the character in Mainframe meets "The Office" said, "let me explain it to you like you're my brother's kids": The new z/OS V1.8 includes a sharp GUI, which stands for "pretty pictures," as well as other tools to make life simple, including a "personal trainer" that checks out the system and makes adjustments and fixes. There's also a new web resource, the z/OS Basic Skills Information Center, designed specifically for IT professionals who are new to the mainframe and z/OS.


Coders, switch to the mainframe.  No skills?... no problem.

by rjhoey October 4, 2006 in Application Development

Mainframe code presents problems?

A discussion thread from the IMS-Listserver, that has been going over the last few days:

Did anyone get a chance to read the following article?

http://www.itworldcanada.com/Pages/Docbase/ViewArticle.aspx?id=idgml-5f1909fa-853e-41e0&Portal=2e5351f3-4ab9-4c24-a496-6b265ffaa88c&s=29799

This seems to imply that big banks will be migrating off the mainframe (IMS) real soon.

Also, it mentions SABRE, which is American Airlines' TPF (Transaction Processing Facility) system, also known as ACP (Airline Control Program). Most TPF applications are written in assembler. They are very small and very fast. TPF programs track everything from reservations on millions of airline flights to advanced seat selection and on-time performance of aircraft. I wonder how well a network of distributed servers can handle what is essentially a huge inventory application with a very perishable product?

Any comments?

-----------------------------------------------------------------------

WOW. As an alumnus of the long-defunct Eastern Airlines, this brings back memories that are older than I admit to being. Eastern collaborated with IBM on the original ACP and sold a copy to American for what was then the 'world record' price for a piece of software. It may still be the 'record', depending on how you define 'single function' software.

Notably, AMR only speaks of the high cost of conversion. It doesn't sound to me like they are seriously considering replacement. If anyone from SABRE monitors this forum, it would be fascinating to hear how many transactions per second they are processing these days. That would spawn a discussion in this forum of whether or not IMS could yet handle that volume.

IMS transaction integrity would be a HUGE leap forward.  When an ACP transaction died, the application program was responsible for backout/recovery/cleanup/everything.  Ponder that the next time an airline can't find a record of your reservation.

I pursued converting Eastern's reservation system to IMS Fast Path in the mid 1980s.  Our ACP group viewed that as a call for jihad and Eastern was spiraling into bankruptcy so rapidly that the only positive outcome was giving me something to discuss with Peggy Rader.

ACP/TPF was a brilliant piece of code for its time, but that was a long time ago. It never approached IMS reliability.

Dale

P.S. 'Financial' institutions will likely migrate away from IMS whenever someone produces a database & transaction processor that is more reliable than IMS (sometime after pigs learn to fly).

-----------------------------------------------------------------------

Great discussion! As an alumnus of the defunct Piedmont Airlines and USAir, I could not agree more with Dale. Brilliantly put. However, the sad news here is that the Mainframe is NOT a strategic direction for my corporation, no matter what you say or what proof you have to show otherwise. There's no way to get anyone to look at the tremendous cost of running everything on a gazillion servers, or how many people it takes to support those servers, not to mention the non-24x7 availability.

The most puzzling thing to me is the three-day reorg. When was the last time one of those ran on the mainframe, IMS especially? Probably not since before converting to make-sense products like MAXM Reorg, or even Online Reorg if using HALDB.

I feel IBM let us all down.  In technical conferences of the past, I have brought that topic up.  IBM Marketing expected the individual corporate techies to drive technology and save the mainframe.  That went away in the 80's and early 90's.  Who listens to us now?

Now the CIOs get the common trade journals. Where was the mainframe being marketed there? Where were IMS, MVS, CICS, or DB2 being marketed?

Nowhere. That, my friends, is the problem. Newer technologies came and crept up on us, and in our naivety we felt the dependable nature of the Mainframe would win out. Wake up.

We all know we have the most stable systems around. You can't compare the response times or reliability with Open Systems. But let's make way for all those servers, which are now beginning to take up areas in the data center that resemble the old DASD and CPU farms. My goodness, has anyone seen the new servers? They look just like old 360's. Try to convince upper management the Mainframe is the best investment and they now look at you like you just swallowed poison. How very tragic. I worked on both TPF and IMS at Piedmont and USAir. I hate to sound like a skeptic, but the writing is on the wall, at least here.

-----------------------------------------------------------------------

OK, the first thing I'm getting from this discussion is that at one time or another, just about every IMS DBA or systems programmer on earth must have worked for a major airline that has merged, gone belly-up, or come close. Count me in that group as a US Airways alumnus.

<<begin rant>>

Like others, I find this discussion fascinating. I'd been working as an IMS developer for less than a year in the early 80s when "friends" and trade publications began advising me that IMS was a dinosaur and would soon be eclipsed and replaced by DB2. The fact that we're all still discussing it tells you how accurate that prediction turned out to be.

During that same period, I attended night school at the University of Pittsburgh and was advised by the head of the CompSci department that mainframes and COBOL applications would soon disappear and be replaced by DEC and VAX clusters and languages like Pascal and Ada. Likewise, I've watched and watched as client/server applications and DBMS became hotter and hotter, causing the same publications to write the mainframes off long ago. How much of any of this has turned out to be true? I'll leave that to all of you. Has the short-sighted mentality that makes executives want to convert everything to the hot new servers and DBMS systems eroded IMS' domination? Sure, the same way that DB2 did at one time. Yet IMS continues to evolve and becomes even stronger and more dependable with each release.

At one of my recent employers (and there have been MANY), during an orientation session the CIO discussed the technology the company was using and would use in the future. The company hires a significant number of IROCs (Idiots Right Out of College), and most of them know only that CompSci educators have told them mainframe technology is outdated, and one questioned the CIO's statement that the mainframe was absolutely integral to the company's plans. The CIO put it into perspective by suggesting that, rather than focusing on the terms "mainframe" and "server" and seeing the two as different species, people unfamiliar with mainframes instead view them as the biggest, fastest, most powerful, most dependable, most cost-effective servers imaginable, and understand that the mainframe is simply the best choice for many businesses processing high volumes of transactions and requiring speed and dependability above all else. Isn't that what IMS, z/OS, and mainframes are all about? At most places I've been where both IMS and servers were used, the server application required far more DBAs and far more outages than IMS. When company executives make decisions to arbitrarily convert to server applications because that's what they see being hawked in magazines and trade journals, they're making technology decisions the same way they'd shop for a car, and that's a disservice to the companies for whom they're supposed to be guardians.

<<end of rant>>

I agree with Ivan, who said earlier that the demise of mainframes and IMS has been greatly exaggerated.


-----------------------------------------------------------------------

I really like the discussion this brings out. To add to Avram's comments: I started out my career in 1969 with the state of Pennsylvania on a 360 Model 50 with 512K. We ran applications written in assembler to do all the processing for the state's unemployment compensation systems. You know what? These systems are still running today. I do not know how, but they are, and there was never a requirement to change or rewrite anything due to hardware or software changes made by IBM. They were always downward and upward compatible. My old job has long been outsourced, but IMS is still there; for how long, I do not know. I'm sure someone is looking at how they can rewrite all these OLD applications to run on the much cheaper hardware, which will take who knows how many servers to support, and how large a staff will it take? How reliable will it be? Zelma mentioned 3-day reorgs; I have seen 2-day upgrades for this other hardware. Has it ever taken 2 days to upgrade an IMS release? I think NOT.

After the end of the Vietnam War, I can remember working 7 days just to get 5 days' work done. Unemployment went sky-high in the early 70's, but we still got the claims processed.

Today these systems even have GUI front ends on them, to give them the modern look. They are kind of like the old Timex commercials: they take a lickin' and keep on tickin'.

-----------------------------------------------------------------------

Re the question: "Anyone have any idea from IBM side for example, what's the relative revenue and profit between various *series?  If mainframe is still profitable for IBM..."

Earlier someone posted a link to a NY Times article from this month that stated this:

"all mainframe-related hardware, software and services account for a quarter of its [IBM's] revenue and, more important, about half of I.B.M.'s total operating profit"

So one would assume pretty important then!

I know of a large company in the UK that chose pSeries as its platform of choice for a large new application, despite having been heavily invested in zSeries already. They understood the scalability, performance, and resiliency advantages of z and liked the mature management procedures that came with it; however, they saw pSeries as the cheaper option. IBM had also recommended pSeries to them. Now, a little way down the line, as the application expands and evolves, they are finding that pSeries is actually proving less economical than zSeries! You can assume that gap will only widen as the application grows yet further.

So perhaps IBM think they can get more money from their customers by selling them non-mainframe products!?!

I think IBM need to encourage their mainframe customers to leverage their mainframe investment. All too often I think IBM is sending consultants who know nothing about mainframes through the doors of their customers.

-----------------------------------------------------------------------

My concerns:

- Marketing for the mainframe in individual shops: in more and more shops, the application people have access to the business, and the system people do not.

Application teams tend to market to the business with a silo view and don't need to worry about the advantages of integrated applications at all. The business will naturally favour the slick views and the apparent savings.

- Marketing for the mainframe as a whole - missing. Anyone have any idea from IBM side for example, what's the relative revenue and profit between various *series? If mainframe is still profitable for IBM, some money should be channelled into making a better image. Or does IBM think a wild, mazed IT environment is really the best thing for IBM?

- Chargeback schemes in various shops are probably more mature on the mainframe, and I wouldn't be surprised if much distributed-system cost other than equipment is buried within the mainframe chargeback.

- Software cost on the mainframe - what an impressive system; that alone can bury the mainframe. I can't say any more!

- When your application environment starts to migrate little bits to the distributed environment, it creates an interesting imbalance, because they will take all the easily done and cheap parts with them, leaving the difficult parts on the mainframe, and thereby leaving the mainframe with a higher percentage of mutants. After a few iterations, the applications left on the host all look like a dog's breakfast, and guess what: the system people have to deal with them. And then the application people say 'see, z is difficult and unmanageable' - a nice twist from 'the application is a mutant'.

On the other hand, I myself believe that distributed computing for a high-volume integrated environment has got to be a hoax; we will be in high demand when the wave is over.

by pwarmstrong May 12, 2006 in Application Development, Contest, History

Wachovia mini reference at industry analyst SOA on z conference

Wachovia is going end to end mainframe SOA.

The presentation was interesting: basically, Wachovia had an ageing proprietary middleware infrastructure it chose to overhaul and pretty much completely replace... with a mainframe-based system running WebSphere tools. Really, this can be seen as a "new" mainframe customer, in that respect.

Keith Harris, VP of retail architecture and integration, Wachovia:

Wachovia is the 4th largest bank in the US, has 3,900 financial centers, and is the 3rd biggest brokerage in the US (another best-kept secret?).

Core SOA services include authentication, logging services, etc.

Wachovia has some aggressive uptime requirements. The SLA is 99.9% uptime, including scheduled outages.

Keith said: "Basically, it can't be down."

Results, 7 months since install:

Wachovia has exceeded its SLAs and now runs millions of transactions a day.

zAAP will provide 92% offload.

WebSphere on z gives proximity to data and achievable SLAs.

Thanks for the info Keith...

by James Governor May 4, 2006 in Application Development

more liveblogging: Robert LeBlanc on zSeries SOA

From the IBM SOA on z industry analyst conference:

I have known Robert LeBlanc for a few years now. He did a good job at Tivoli, turning the organisation once and for all into a true IBM division, rather than a maverick Austin outpost. Now he is running WebSphere as SOA kicks into high gear at IBM.

Robert claims IBM was able to spin out its PC business to Lenovo seamlessly, without interrupting the supply chain, by taking a SOA approach, which is a story I hadn't heard before.

So this mainframe SOA thing. What are the numbers?

He said CICS saw the fastest version to version upgrade by customers in 35 years… to take advantage of new XML and Web Service functionality.

WebSphere Application Server (WAS) on System z grew 21% in 2005. That is pretty impressive growth. IBM also claims 25% of z accounts are now running WAS.

What is the news?

CICS Transaction Server v3.1 has been enhanced. Who knew this product would be hot enough to justify six-month revs?

One enhancement is to integrate the CICS Service Flow runtime with WebSphere Process Server. That potentially means "workflow virtualisation" between the two environments.

I had a short conversation yesterday with John Shedletsky, "Mr Mainframe TCO", about language. One big problem for the mainframe is that mainframe people speak a different language from distributed people. Many architects literally don't know what their mainframe people are talking about. John pointed out something very insightful.

CICS is just a container. Say that to a Java architect and they are going to grok it straight away. Sounds simple, doesn't it? I would like to see John pursue this line. Maybe we need a mainframe-to-distributed translation wiki?

But back to Robert.

He spoke to IMS enhancements: IBM last year offered an IMS SOAP interface. The new announcement is an XML adapter on IMS Connect, enabling direct connection to IMS without needing WAS as an intermediary. Expose the transactions. Note: I need to get some clarification here. SOAP is one thing, but in many respects native XML is more interesting, because it potentially enables more RESTful development and data access models. If I am a mainframe shop that wants to offer an online information service, it's likely to get more consumers if I offer a more lightweight API, rather than forcing everything through SOAP. SOAP is anathema to many Web 2.0 developers, and IBM definitely needs to position z for Web 2.0. IBM needs to think more about XML data sources, not just XML Web Services.

But for those who do like WS-* and metadata-driven development, Robert said IBM is going to bring Registry/Repository to the mainframe by year end.

It's also interesting to hear IBM's approach in that regard. While IBM has been saying that its Reg/Rep architecture is federated, what does that mean in practice, and why is it relevant to the mainframe?

("The return of the data dictionary" -is this one for the mainframe to distributed babelfish wiki)

The point is that IBM plans to position z as a master registry manager (like master data management), across multiple SOA end points. Maybe it’s a "Registry of Record" but I think 37Signals already owns that acronym ;-)

All in all it seems z and SOA is really happening. Wachovia, a customer, also came in and presented. More on that soon.

by James Governor May 4, 2006 in Application Development

Questions for the mainframe community: Ruby, RACF?

Last week James McGovern from The Hartford posted some questions, some of which concern the mainframe.

Initially I just posted some answers to my blog. But actually it makes sense to surface the questions here too. You guys are the experts, right? So how about it? Ruby and RACF futures?

1. Does Ruby run on Z/OS? If not, anyone game to work with me to port it? Hopefully, if we port it, the folks in the media will tell the story of how open source can also be created by folks who work in large enterprises and not just software vendors.

3. Does anyone know of any open source equivalents to RACF? Ideally looking for something that plugs into the SAF interface.

4. When will EnterpriseDB run on Z/OS (other than on Linux on the host) or for that matter MySQL?

5. Anyone know of a COBOL SOAP stack? Not interested in all those fancy integration products

6. Any thoughts on how to integrate SXORE and/or Infocard with RACF?

7. As I understand, the Mono Project now runs on the mainframe. Should folks in the enterprise consider running .NET applications there?

I tend to think the notion of a mainframe shop deciding to run an "open source equivalent of RACF" has two hopes: Bob Hope and no hope. But note that James is not asking how to replace the mainframe, just how to do more interesting things with it.

Some might ask why James is both critiquing Ruby and asking for porting help. Well, I guess it's called being a contrarian. So do any of you know about Ruby on z? I suggested NetRexx, but that's of course answering a direct question with an alternative... never a great look.

by James Governor March 20, 2006 in Application Development

EBCDIC -vs- The World

My good friend and former co-worker Mike related today his struggles with 'autoconf' on MVS.
He has a better grasp of cross-platform issues, like 'make' logic that works as well on z/OS as on Unix, than most people I know personally.  Mike tells me the './configure' step works fine, but that a specific package using it refuses to support EBCDIC and it sounds like a religious matter. [sigh]

When I first encountered "Open Edition" (as it was called then), I was delighted and dismayed.
First I launched a shell and found all those Unix commands that I had seen on other platforms. But when I brought in a TAR file with my own bag of tricks, it failed.  The archive was intact, but my scripts crashed.  Trying to eyeball one of them, I got garbage.  Then it hit me: they were all in ASCII. But more significantly, the system was EBCDIC.  Duh!

I assumed what so many others assume: If it's Unix, it's ASCII. But I was wrong.
It took several months before I could accept that OVM and OMVS being EBCDIC was not only okay, but was and is "the right thing".  But developers who do not know our beloved mainframe environments have not walked this path and may react against it.  (As the authors of this package Mike is wrestling with appear to have done.)

The designers of the POSIX specification and of the X/Open Unix brand were very careful about what is defined and where, what is required and how.  Just what makes a system Unix?  For ten years, MVS has passed the test and is Unix branded.  But surely none of us expect "a Unix person" to accept MVS that way.  The single biggest difference between OpenMVS and "real Unix" is the character set.  It is a curse and a blessing.

Let me first mention the blessing.
CMS and z/OS, even with a POSIX face, must be EBCDIC for the sake of the legacy code they run. For all their faults,  this is one place IBM is exceptional:  They support historical workloads.  (They do it better than a certain other vendor of operating systems which shall remain nameless.)  The old code works.  But the old code uses EBCDIC for character representation.  After chewing on this for more than half a year, I realized that it must be so for the POSIX side as well, or there would be grossly confusing results.

In theory, the character set should be as easily replaced as most other aspects of the system. (For example, we let users run CSH instead of Bourne exclusively, which has grave consequences if they want to do scripting.)  In practice, the character set is more deeply entrenched.  When moving from one Unix to another, the theory was  "just recompile".  In practice, we know it doesn't work so smoothly. This is bad.  This is sad!

Programmers make assumptions.   I know: I'm a programmer, so I'm just as guilty.
There are ways to render any application source "character set agnostic".  Such techniques take time and practice.  Is it worth the hassle?  Yes!  Today,  the unnamed authors of the unidentified package Mike is wrestling with reject EBCDIC.  It's not that they can't as much as that they won't.  What is heartbreaking is that they have already done the tough part:  they deal with  differing character encodings.  Supporting EBCDIC for them would be no extra mile (IMHO), and their attitude paints them into a corner where they'll have trouble with any new-and-wonderful encoding yet to be devised.

Thankfully, compiler writers tend to be more disciplined than the rest of us.  The foundation is strong:  Any special character is represented by a well defined and always expected meta-character or escape sequence. Notably, newline is always  coded as   "\n",  never as  0x0A.  Even the most ASCII-entrenched Unix hack will chastise the programmer who uses the hexadecimal rather than the symbolic.   We all need to be more consistent.
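
To make that concrete, here's a small illustration in Java, assuming your JDK includes the extended charsets (which provide IBM1047, the usual z/OS codepage): encoding the same logical string under ASCII and EBCDIC yields different bytes, which is exactly why hard-coding 0x0A is wrong.

    import java.nio.charset.Charset;

    // A sketch: the same "A\n" encoded two ways. 'A' is 0x41 in
    // ASCII but 0xC1 in IBM1047, and the newline is 0x0A in ASCII
    // but an EBCDIC newline control byte in IBM1047 -- never 0x0A.
    public class NewlineBytes {
        public static void main(String[] args) {
            String text = "A\n";
            for (String name : new String[] {"US-ASCII", "IBM1047"}) {
                StringBuilder hex = new StringBuilder();
                for (byte b : text.getBytes(Charset.forName(name))) {
                    hex.append(String.format("%02X ", b & 0xFF));
                }
                System.out.println(name + ": " + hex);
            }
        }
    }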

The problem does not simply go away when we are more diligent.
There continue to be situations where character encoding bites us.
But as source code grows more robust, we can make progress.

-- R;


by sirsanta November 21, 2005 in Application Development

Automation is Everything, and Nothing

Years ago, in the euphoria of some project, I exclaimed to my manager, "Automation is everything." He wisely replied, "Sure. Just be careful about what you automate."

The beauty of automation is that it lets us take our hands off of the plough.   The trouble with automation is that we tend to take our eyes off of the field too.   With any task,  there are parts of the job that are repetitive,  so we get bored.  We then figure out some way to subdue the problem with a machine,  and that's good!  But we dare not forget the bigger project.  When we do,  there is danger.

This past week,  I was re-building a batch of common software packages for use at the office.   We want these things built for Solaris, AIX, HP, OSF1, USS, even Windows if possible.   We also build them on Linux on PC, mainframe,  hopefully PPC and SPARC,  and maybe Alpha or other less-business-vogue boxes.   I demand the standard recipe:

        download the source (if needed)
        unpack the source (if needed)
        ./configure
        make
        make install
        repeat on other platforms

It's that  ./configure  step where the process drags on and on.
Most often, compilation and installation run measurably faster than the configuration, where the package must determine "what do we have available on this platform?" There are DOZENS of searches, some requiring a tiny compilation of their own.
Many of these configuration scripts are based on GNU AutoConf. That's fine. This is not meant as a diatribe against AutoConf. But like all automation, AutoConf must be used with care and thought. Each package author must do due diligence [say that three times really fast!]. Every package developer must take care about what support is needed, which libraries are required, and where code must be #ifdefed for speed or handled at run-time. Take your eyes off of the greater picture and you hose your customers. They won't like it.
What does this have to do with the mainframe? USS and Linux, of course!
And further, the mainframe's strength is not CPU cycles to throw away, but that tremendous I/O power. So poor ./configure performance is all the more painful on mainframe Linux and on USS.

What does this have to do with automation being everything?  (or not!)   Just this:  Be careful about the layers.  And AutoConf is a good example.   Don't use wrappers, APIs, scripts, HLLs or other such tools as an excuse for distancing yourself from essential labor.   Stay in the game.

We thank you for your attention.

-- R;

by sirsanta October 23, 2005 in Application Development

Open Documents -- Part 3

The battle over OpenDocument Format has begun, and Microsoft is using its traditional brass-knuckles approach. It was revealed this week in some blogs that a recent article, "Massachusetts Should Close Down OpenDocument", which ran at Fox News, was written by a journalist hired by Microsoft. (See an interesting rebuttal.) The stakes are high. The issue is who owns documents: the document creator, or the maker of the software used to create them?

Let's make it personal and down to earth. Mr. and Mrs. Smith and their children all have computers on the local area network at home. They recently had a busy weekend. Mr. Smith created a presentation which he will take to a conference and present using his ThinkPad. Mrs. Smith wrote a newsletter which will be distributed to dozens of members in a local non-profit organization she belongs to. The Smiths' daughter completed a school term paper replete with graphical images, clip art, and photographs. The Smiths' son is a graduate student in business and he developed a spreadsheet to reflect a ten-year financial plan for a new business idea. Who owns these four documents? (read more)

by John R Patrick October 15, 2005 in Application Development, Future, History, Innovation, People, Systems Technology

Open Documents

The debate about the OpenDocument format is just beginning. Massachusetts put a stake in the ground with its decision to adopt ODF for all employees in the Commonwealth and for anyone doing business with them. This may go down in history as a bold and important move. But Microsoft, which opposes ODF, will not give up easily.

There was an OpenOffice.org 2005 conference in Koper-Capodistria, Slovenia last week at which a professor delivered a keynote speech entitled "Should I Adopt OpenOffice?". It is reported that after taking a few questions from the audience, a loud voice boomed out from the back of the auditorium saying "In the spirit of full disclosure, I am a Microsoft technical officer." The person then launched an attack on the professor about the information that had just been presented. The gentleman then claimed that the European Union had accepted Microsoft file formats as "sufficiently open" and finally, he directly attacked the new OASIS OpenDocument Format. It was further reported that the professor had not even mentioned the OpenDocument Format or Microsoft's "Office Open XML". Needless to say, Microsoft is very defensive about the subject. Why? They have a monopoly and they want to keep it. Maintaining some degree of control over the details behind the formats gives a vendor more flexibility in developing their software and in deciding when and how to offer upgrades. Having to work with formats that are controlled by an outside independent third party is definitely harder.

Microsoft's behavior is very reminiscent of IBM's behavior in the 1970's and 1980's. Numerous file formats were proposed by other vendors but IBM consistently maintained that the mainframe was the best place to keep data. IBM totally controlled the formats. The difference between IBM's behavior and Microsoft's is that IBM heard the market speak out about the Internet, open source, Linux, and other grass roots ideas and rather than fight the changes, IBM adopted them and in fact is leading the charge. Microsoft has done this in some ways, particularly in the area of web services, but when it comes to Office, they clearly want to maintain some hooks that are not open to the user. (read more)

by John R Patrick October 3, 2005 in Application Development, History



The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.