Come on Hursley, the beer is warm and hoppy

Blogging can be like having a really good conversation in the pub.

Howard Trevor Jacobs, professor of genetics at Tampere University, doesn't believe in the ivory tower:

"We are at our most productive when we share our thinking. One night of crazy brain-storming over a few beers is more likely to produce more exciting results than 20 years' solitary study in the lab."
It's time for Hursley folk to get down the pub and start swapping stories and ideas. A pint of Wadworths, perhaps?

Hursley is IBM's most important software development lab. It has an august history.

  • You know that storage thing, it kind of moved computing forward, called a Winchester drive? Invented in Hursley, which is of course just down the road from Winchester.
  • CICS and IMS - Hursley central.
  • MQSeries (yes, I know IBM marketers like to call it WebSphere MQ, but let's call a world-beating integration technology a world-beating technology; 10,000 customers can't be wrong).

Hursley was the original campus - long before Xerox PARC - IBM had engineers swanning around a country estate in a collegiate atmosphere solving the toughest computing problems. And lord knows Hursley today is a more attractive place to work than Redmond.

But where are the Hursley voices? They do the technical heavy-lifting, but leave the marketing to the marketers. I reckon it's time for a change, and the mainframe blog is a good place to start.

We know Martin Packer is onboard. But why keep your Firefox coding tips behind the IBM firewall? "Mainframe performance expert programs Firefox and Greasemonkey" is surely a more interesting story than "vanilla mainframe expert"... Slough off your silos; you have nothing to lose but your EBCDIC.

It's about reaching out to new communities. Folks who would gag if they read this:

I've not really gotten into EWLM and ARM - yet. But I think this APAR will be of interest to those who have. And I expect I will get interested in it when it's more fully established...

PK11801 DDF FULLY-ENABLED EWLM ARM SUPPORT describes how the DDF support for EWLM / ARM is being made available to the general population of DB2 Version 8 sites

Did I remember to say it was Hursley that first got Java religion and drove it into the rest of IBM, helped on by a young turk called Simon Phipps, now chief open source officer at Sun Microsystems?

What's my point? Hursley has lots of bright young things that understand new development approaches and mainframe discipline. Maybe they are too busy working to blog...

But even if you can't blog you can comment on blogs, which can be just as useful. Any comments warmly received (just like the beer).

Mine? I will have a pint of Wadworths. Cheers.

by James Governor November 29, 2005
Permalink | Comments (20) | TrackBack (1)

I'm very pleased to have been invited to be a guest contributor. As a mainframe performance guy (who interlopes in DB2 quite a lot) I have my own developerWorks blog. That will tend to cover quirkier, perhaps deeper, things like SMF records and the like. And why DFSORT is so splendid. :-)

This one seems to have a "higher level" tone, so it's going to be interesting writing with that kind of voice, as opposed to my usual "It's More Fun To Compute"* style. But as they say in blogging "using your authentic voice is important" so who knows? :-)

* Reference to a great Kraftwerk album. :-)

by MartinPacker November 21, 2005
Permalink | Comments (1) | TrackBack (0)

EBCDIC -vs- The World

My good friend and former co-worker Mike related today his struggles with 'autoconf' on MVS.
He has a better grasp of cross-platform issues, like 'make' logic that works as well on z/OS as on Unix, than most people I know personally.  Mike tells me the './configure' step works fine, but that a specific package using it refuses to support EBCDIC and it sounds like a religious matter. [sigh]

When I first encountered "Open Edition" (as it was called then), I was delighted and dismayed.
First I launched a shell and found all those Unix commands that I had seen on other platforms. But when I brought in a TAR file with my own bag of tricks, it failed.  The archive was intact, but my scripts crashed.  Trying to eyeball one of them, I got garbage.  Then it hit me: they were all in ASCII. But more significantly, the system was EBCDIC.  Duh!
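The surprise is easy to reproduce off-mainframe. Here is a minimal sketch in Python, assuming its built-in `cp037` codec as a stand-in for the system's EBCDIC code page (the script text is just an illustrative example): the same bytes mean entirely different things under the two encodings.

```python
# A shell script as it might arrive in an ASCII tar archive.
script = "#!/bin/sh\necho hello\n"

ascii_bytes = script.encode("ascii")

# Read those ASCII bytes as if the system were EBCDIC (code page 037):
# the result is the "garbage" described above, not a usable script.
print(repr(ascii_bytes.decode("cp037")))

# Converting into the system's own code page first keeps the text intact:
ebcdic_bytes = script.encode("cp037")
print(ebcdic_bytes.decode("cp037") == script)  # True
```

The archive really was intact; the bytes simply needed translating before an EBCDIC system could make sense of them.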

I assumed what so many others assume: If it's Unix, it's ASCII. But I was wrong.
It took several months before I could accept that OVM and OMVS being EBCDIC was not only okay, but was and is "the right thing".  But developers who do not know our beloved mainframe environments have not walked this path and may react against it.  (As the authors of this package Mike is wrestling with appear to have done.)

The designers of the POSIX specification and of the X/Open Unix brand were very careful about what is defined and where, what is required and how.  Just what makes a system Unix?  For ten years, MVS has passed the test and is Unix branded.  But surely none of us expect "a Unix person" to accept MVS that way.  The single biggest difference between OpenMVS and "real Unix" is the character set.  It is a curse and a blessing.

Let me first mention the blessing.
CMS and z/OS, even with a POSIX face, must be EBCDIC for the sake of the legacy code they run. For all their faults,  this is one place IBM is exceptional:  They support historical workloads.  (They do it better than a certain other vendor of operating systems which shall remain nameless.)  The old code works.  But the old code uses EBCDIC for character representation.  After chewing on this for more than half a year, I realized that it must be so for the POSIX side as well, or there would be grossly confusing results.

In theory, the character set should be as easily replaced as most other aspects of the system. (For example, we let users run CSH instead of Bourne exclusively, which has grave consequences if they want to do scripting.)  In practice, the character set is more deeply entrenched.  When moving from one Unix to another, the theory was  "just recompile".  In practice, we know it doesn't work so smoothly. This is bad.  This is sad!

Programmers make assumptions.   I know: I'm a programmer, so I'm just as guilty.
There are ways to render any application source "character set agnostic".  Such techniques take time and practice.  Is it worth the hassle?  Yes!  Today,  the unnamed authors of the unidentified package Mike is wrestling with reject EBCDIC.  It's not that they can't as much as that they won't.  What is heartbreaking is that they have already done the tough part:  they deal with  differing character encodings.  Supporting EBCDIC for them would be no extra mile (IMHO), and their attitude paints them into a corner where they'll have trouble with any new-and-wonderful encoding yet to be devised.

Thankfully, compiler writers tend to be more disciplined than the rest of us.  The foundation is strong:  Any special character is represented by a well defined and always expected meta-character or escape sequence. Notably, newline is always  coded as   "\n",  never as  0x0A.  Even the most ASCII-entrenched Unix hack will chastise the programmer who uses the hexadecimal rather than the symbolic.   We all need to be more consistent.
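To make the point concrete (a sketch using Python's `cp037` EBCDIC codec rather than an actual EBCDIC C compiler): the symbolic newline survives a change of character set, while a hard-coded hex value bakes in the ASCII assumption.

```python
# In ASCII, newline is byte 0x0A; in EBCDIC code page 037 it is 0x25.
newline = "\n"

print(hex(newline.encode("ascii")[0]))  # 0xa
print(hex(newline.encode("cp037")[0]))  # 0x25

# Code that says "\n" gets the right byte on either kind of system;
# code that says 0x0A is correct only where ASCII rules.
```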

The problem does not simply go away when we are more diligent.
There continue to be situations where character encoding bites us.
But as source code grows more robust, we can make progress.

-- R;


by sirsanta November 21, 2005 in Application Development
Permalink | Comments (3) | TrackBack (0)

The Virtual Invasion

Virtualization is hot!
But don't take my word for it,
check out eWeek for November 7 (volume 22, number 44),
"The Rise of the Virtual Machines",  with three articles of interest.

Confessions of a fanatic:
See my post about V12N a couple weeks back.   I love VM!
It is comforting to read others affirming one's faith.  Virtual machines
are way cool.   We mainframers have known for decades.
But let's not sit here feeling vindicated.

There's still some confusion.
The phrase  "virtual machine"  is used for different things
with very different characteristics:  virtual machines in the
Java sense, hardware emulators, paravirtualization (e.g., Xen),
resource allocation with on-demand class response,  and (my favorite)
virtual machines in the virtual memory sense.   Even in the eWeek
article by Jason Brooks,  there is some confusion about the abstracted
hardware layer.   Let's be clear.   (Then there are things called
"virtual" that do not even present a "machine",  but most of those
thankfully don't claim to.)   For the  in-the-virtual-memory-sense
I suggest the term hypervisor.

A hypervisor is one step beyond a supervisor.
Where paravirtualization requires a polite and cooperative guest
and emulators impose performance burdens,  the hypervisor makes
the underlying hardware transparent,  executing guest op systems
on the physical machine until some exception calls for intervention. 
VMware, MS Virtual Server, and z/VM are good examples of hypervisors.
The guest may be unable to detect the hypervisor,  and in any case 
requires no special re-coding.

Trying to group platforms and concepts ...

  • z/VM: the original hypervisor; requires zSeries
  • VMware: PC hypervisor; requires Intel or AMD and a host OS
  • Virtual Server: PC hypervisor; requires Intel or AMD and a Windows host OS

  • FLEX-ES: zSeries emulator (hosted on Linux, previously Dynix)
  • Hercules: zSeries emulator (open source, any of several host OSes)
  • BOCHS: PC emulator (open source, any of several host OSes)

  • Xen: paravirtualization; requires guest cooperation
  • Virtuozzo: "OS virtualization"; does not present a machine
  • JVM: Java Virtual Machine; a misnomer, but popular

With BOCHS,  you can run PC Linux on a SPARC running Solaris.
With Hercules,  you can run mainframe Linux on an RS/6000 and AIX.
These are emulators.

With z/VM,  you get multiple zSeries virtual machines
each running any desired zSeries operating system,
but you must have zSeries hardware.
With VMware,  you get multiple PC virtual machines
each running any desired PC operating system,
but you must have PC hardware.
These are hypervisors,  measurably more efficient than emulators.

Where it's really V12N,  look for the hypervisor.

-- R;

by sirsanta November 17, 2005
Permalink | Comments (3) | TrackBack (0)

ZapThink links to mainframe blog. Harry Potter speaks

We'll get this community rocking sooner or later. One of the central tenets of blogging is that in order to get attention you need to give it. So in a piece about service oriented architecture and the mainframe I called out ZapThink, the boutique analyst firm that specialises in SOA governance and management.

The question in hand? How granular to make the services you expose to be orchestrated in a SOA.
Ron Schmelzer was quoted as saying:

"Focus on the smallest problem you can and apply the SOA approach."

My rejoinder?

Focus is good but I disagree with the reductionism: how granular to make the service is an absolutely key question, which imho can't be reduced to "make it small". Make the granularity of the service too small and you may end up doing something stupid.

I was pleased to see ZapThink push back in a new report here. The firm claims to have been misquoted, and here is an explanation of what the team over there really meant. Here is some clear thinking:

One of ZapThink’s repeated refrains is that SOA is an aspect of enterprise architecture, which itself is not a technology, but rather a discipline. An essential SOA best practice is understanding just how to create the right set of Services so that they solve the right set of problems.

Ron continues:

Use cases for SOA projects, however, are a different kettle of fish entirely. Instead of laying out specific “I want the system to behave this way” kinds of requirements, SOA use cases describe how various users may wish to leverage the Services that will be at their disposal—Services the architect must then identify, design, and schedule. And so, rather than assuming a set of fixed requirements ahead of time, Service-oriented architects must expect requirements to change. The resulting solution must then not just meet the transitory requirements of the business, but must also be able to meet future requirements as necessary.

So mainframe discipline + expectation of change = service oriented architecture...

He signs off thusly:

While us industry pundits might not always agree, the fact that we’re having this dialogue at all is a very encouraging sign. At long last, we’ve stopped talking about what SOA is, but rather how to go about doing it. This conversation is helping folks realize that the key to SOA success lies not with the selection of appropriate runtime middleware, but rather with the selection of the right Services, and implementing the right infrastructure to make those Services valuable to the organization.

And that is a conversation the mainframe staff should definitely be part of.

I have to say it is disappointing that nobody from IBM Software Group or zSeries systems commented on the last SOA mainframe blog entry. I mean, this is supposedly one of your most important initiatives. It hasn't led to a single blog post or comment. Maybe I was wrong - maybe the mainframe community isn't ready for blogging.

If IBMers won't comment on the IBM mainframe blog, then why would customers or partners? It's about making a contribution. Either this site is a conversational marketing vehicle or it isn't.

Still - the guys at ZapThink get it, which is goodness. Cheers, Ron... (Saying that makes me feel just like Harry Potter. Maybe I should send an owl to Hursley: mainframe bloggers and commenters required...)

by James Governor November 15, 2005
Permalink | Comments (6) | TrackBack (0)

How about a Sandbox Slush Fund for new zSeries workloads?

I was reading ITIndepth, Anura Guruge's punchy analysis site, when something struck me.

Anura points to an issue for IBM right now - the idea that Linux is for Intel only. [Of course, given AMD's punchy performance against Intel, we should probably be saying x86, not Intel, but Anura's point stands. It's still not clear where POWER fits in.]

In another analysis he questions the economics of the zAAP processor, an offload processor for Java workloads on the mainframe. That is, a place to run Java workloads without paying through the MIPS.

OK, so we have IFL (Linux offload) and zAAP (Java offload), and at some point in the near future I expect to see, though this is pure speculation, IBM offer an offload board for XML too.

Here is a primer on some of the economics and problems facing IBM and its customers when it comes to paying, or charging for new workloads.

I was thinking about how IBM could make economics less of an issue, and I thought, why not overdeliver?

We see a lot of overdelivering in the Microsoft world when they are fighting in the trenches for new workloads against competitive platforms. I was talking to an OEM recently, and he described how MS was providing all kinds of free service and support to encourage a closer relationship and technical hooks. I know of MS paying the consulting and deployment fees charged by value-added resellers in some public sector organisations, to ensure good outcomes. Why can't IBM do the same thing?

Why not include extra capacity, say 10%, with every mainframe shipped, as a new workload sandbox? Just call it a slush fund for customers. It's effectively free money, off the books, to be applied to new workloads, for new developments. Don't charge $125k for it. Sure, IBM could put some restrictions on the processors, but the fewer the better. At RedMonk we're always thinking about ways to lower the barriers to entry. Sometimes doing so means don't use DRM; sometimes it means provide better documentation. But sometimes it also means provide free stuff, under the radar of corporate purchasing. That's why Linux became successful in the first place. But I would keep the (let's call it a slushpuppy) "free", like Google 20% time.

Why not use similar thinking for zSeries? Offer the free capacity to ISVs as well as enterprises; that would help with relationships in the zEcosystem. At the moment, mainframe ISVs are often workload constrained, which means they just don't have enough hardware to deliver a new cool feature in time to support a new z/OS release, for example. The slushpuppy could help with that.

Of course the folks running IBM generally, and zSeries specifically, might think this idea is insane. Not charge for 10% of the box, on our most profitable product line? "Ludicrous - we would go out of business."

It is my contention, however, that if IBM doesn't do even more than it already does to encourage new workloads onto the frame, then we'll see problems in future. IBM needs to underpromise and overdeliver.

The idea behind the economics is that if this 10% slushpuppy, or whatever it is, is made more accessible, then it will be used. If that drives new workloads and all the new acronyms and buzzwords to the box it will soon create an even bigger pie.

by James Governor November 9, 2005 in Innovation
Permalink | Comments (10) | TrackBack (1)

V12N == Virtualization

I'm slow.  Occasionally it takes several reads before I understand some new acronym.  But I'm lazy,  so I tolerate the acronym soup we live in.  I don't recall how many years this  "I18N"  shorthand has been around,  but it eventually dawned on me that it meant something important: internationalization.   (Try ten times;  tends to twist the tongue.)  It's a biggie.   I'll save those details for another soap box.

The new "important thing" is virtualization, and it's a hot button, even more so (to me) than I18N. Maybe it's more like an obsession. But it can be a mouthful. And if you're not a touch typist (or even if you are!) you might have trouble with the long form. So let me recommend a contraction in the same vein as I18N: V12N.
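The contraction scheme is simple enough to mechanize: first letter, count of the letters in between, last letter. A throwaway sketch (the function name `numeronym` is my own invention for illustration):

```python
def numeronym(word: str) -> str:
    """First letter, count of interior letters, last letter."""
    if len(word) <= 3:
        return word  # too short to be worth abbreviating
    return f"{word[0]}{len(word) - 2}{word[-1]}".upper()

print(numeronym("internationalization"))  # I18N
print(numeronym("virtualization"))        # V12N
```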

(z/OS crowd,  turn away for a moment.)   I love VM!  (Why some z/OS people find VM distasteful I do not understand.)  Looking back to my first encounter with VM,  I remember asking the guy demonstrating,  "Every user gets his own System/370?".  He assured me this was so.   Imagine that!   What flexibility!  The potential for virtual machines was immediately obvious, with increasing uses as the years have passed.   And now we've got VMware and other serious offerings for hardware other than Z.

Today,  the idea of  "virtual machine"  has reached that ultimate goal any technology could want:  commercial exploitation.  The business world is beginning to grok V12N ... and want it.  Sometimes it's virtualization in the metaphor sense,  like Java.  When techies talk "virtual",  we usually mean in the same sense as virtual memory.  Either way,  V12N is hot!

So here's a helpful handle:
Call it V12N and tell the laymen what you mean.
Preach it,  brother!

-- R;

by sirsanta November 4, 2005 in Systems Technology
Permalink | Comments (0) | TrackBack (0)

Google the mainframe?

Just came across this in the DBTA 5 Minute Briefing: Data Center (to subscribe, go here):

IN-COM Data Systems has announced a "Google-like mainframe tool" that enables the searching of data files within zSeries and iSeries environments. SMART TS XL supports Boolean and proximity searches on disparate data types such as ASM, Cobol, Natural, JCL, Procs, CICS Maps, email as well as others. For additional information, go here.

I don't normally act as a sales rep for other people's stuff, but the combination of Google and mainframe intrigued me.

I met a man in South Africa this week who says his firm has done more IMS development in the last 10 months than in the previous several years, and who has been putting together a search engine for all that wondrous mainframe IMS data that hasn't been shoved across into something like DB2.

What are you doing with your data - migrating it or leaving it on a secure platform and providing new means to access it?    

by pwarmstrong November 4, 2005
Permalink | Comments (1) | TrackBack (0)

SOA is going to need mainframe skills and disciplines

One of my tenets on SOA is: "Screw the S, screw the O, it's about Architecture."

The mainframe's shared resource model means it is naturally somewhat service oriented. Shared resources, shared services... same difference. Of course most mainframe apps weren't constructed with flexibility in mind, but that doesn't mean they aren't going to play a huge role in SOA deployments at the great majority of Fortune 500 firms.

What does this mean? SOA is a huge opportunity for people with experience of developing to, and running, large-scale data center environments. That means the zEcosystem. Sure, you'll need to learn some new jargon and rethink a few things, but that's why it's called an opportunity, rather than a certainty.

Joe McKendrick over at ZDNet is writing a blog on service oriented architecture that any mainframe-savvy architect should be reading, because Joe understands that SOA involves governance, and tensions between different takes on the subject, and brings these tensions out in his blog.

The last few years have not been about SOA, but JBOWS (just a bunch of web services). The distributed folks may have standardised the interfaces, but they haven't yet stepped up to the plate on quality of service, governance and architectural discipline.

In Joe's latest piece, a rundown of a SOA Institute event, he argues: "The same deliberate methodologies and processes that go into managing mission-critical mainframe or data center applications need to be applied to SOA".

It does sound like an opportunity doesn't it?

It's always good to disagree with your competitors in public, so let me take this opportunity to push back against Ron Schmelzer at ZapThink. According to McKendrick, he said: "Focus on the smallest problem you can and apply the SOA approach."

Focus is good but I disagree with the reductionism: in my opinion that is the way to create "lunchbox services" (see the article), not SOA. How granular to make the service is an absolutely key question, which imho can't be reduced to "make it small". Make the granularity of the service too small and you may end up doing something stupid, as illustrated by this great story from Service Oriented Enterprise.

SOA requires discipline and architectural rationalisation, with some data and process modeling, and certainly with some knocking together of heads, both internal and external. Don't take my word for it; check out this case study from Sprint. Ed Vasquez, leading Sprint's SOA efforts, is a great guy, very helpful, with plenty of experience of the macro-issues in SOA governance. I quote from the InfoWorld article: "One of the nice things about SOA adoption is that adoption, implementation, and deployments can be incremental as long as you keep your eye on the bigger picture."

Vendors in the mix at Sprint include Attachmate, GT Software and Infravio. But Ed's SOA governance approach is more important than any software. 

The granularity of mainframe services being orchestrated and managed is a gating factor for success in SOA. That is why people with mainframe skills should be thinking about it.

It's about architecture, right? And who understands that better than those with mainframe experience?

So how about a call to action to close, after my little bomb the other day: It would be interesting to see IBM create a zSeries SOA forum, comprised of a number of mainframe integration and performance management vendors, and ideally customer architects too.

by James Governor November 4, 2005 in Future
Permalink | Comments (9) | TrackBack (1)

Was Napoleon really reaching inside his blouse looking for his secure key?

I recently returned from a visit to Warsaw, where I presented the z/OS Encryption Facility to about 100 customers, most of whom expressed interest in this software's ability to help encrypt removable tape media, protecting personally identifiable information from possible compromise. Customers like the idea of using the centralized secure key management capability of the mainframe, leveraging ICSF to help protect data on the mainframe. There is real interest from banks, public sector organizations, and other customers in using the mainframe as a core component of managing enterprise-wide data security.

Following the meeting I made three calls on mainframe customers in Warsaw, including a bank, a public sector agency, and an industrial sector firm. Two of these customers are working with IBM Business Partners to deliver capabilities including enterprise-wide security, data management, SOA, and mixed workloads. IBM relies heavily on Business Partners to help us serve our customers, and this fuels our investment in the zEcosystem.

I'm interested in learning more about the case mentioned in the post below and learning the other side of the story. But as reported in CRN magazine [http://www.crn.com/sections/breakingnews/dailyarchives.jhtml?articleId=60400385], about 60% of our midmarket mainframe share comes from the channel.

Would IBM kill the zEcosystem?

Would Napoleon revisit Waterloo?

by rjhoey November 2, 2005
Permalink | Comments (1) | TrackBack (0)

Could IBM kill the zEcosystem?

I had a disturbing conversation with a mainframe ISV yesterday. It turns out IBM refused to have a partner discussion with the firm because it had its own product offering in the space. Bad move.

The exec ratcheted up the scare factor when he asked, in all seriousness, whether I knew anything about a secret high level plan by Sam Palmisano or other senior executives to stealth kill the mainframe. How bad does "support" have to be before an ISV starts to ask that kind of question?

IBM is investing tens of millions on education to ensure mainframe skills are sustainable.

IBM is investing tens of millions on marketing to ensure it sustains customer demand for the platform.

But what is IBM doing for the mainframe channel?

You could argue that without an ISV ecosystem a platform isn't a platform at all (a platform being something people build apps to). And if it's not a platform, it's never going to grow.

The mainframe at IBM is something of a corner case, it's true. But it's still a platform that needs nurturing. z/VM is amazing technology, but how many ISVs are writing to it, given licensing constraints?

It's pretty clear from my perspective that the mainframe is pulling through WebSphere sales, rather than the other way around. Anything that drives CICS and IMS workloads should be encouraged. So why is the tail wagging the dog?

IBM Software Group is doing solid work in a number of areas, and I applaud some of its developer partner efforts. But I don't see real momentum in terms of mainframe ISVs.

Would you write a new application for the platform if you thought IBM would nuke you for it?

It's time for a market reset. The last few years have been about trying to prevent ISVs from "price-gouging our customers". The Candle acquisition was the time to draw a line in the sand. IBM needs to start thinking of ISVs as partners again.

So what if they have a competing product? Does DB2 refuse to support SAP because of NetWeaver? SAP is an interesting case because IBM gives it plenty of love when it comes to mainframe support.

It's all about balance. I appreciate that IBM needs to keep the zEcosystem honest. But it also needs to keep it healthy.

Time for a serious mainframe partner play, perhaps even a channel czar for the platform. What do you think? Any mainframe ISV readers out there that agree, disagree?

One last thing - the company in question is getting amazing migration support at the moment from another platform player - SAP. What does that tell us?

by James Governor November 2, 2005
Permalink | Comments (4) | TrackBack (1)



The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of the Mainframe Weblog.