RedMonk

IBM Systems and Technology Group: Redesigning for Developers

I just spent a couple of days with IBM in Rye Brook, NY, and I have to admit, I came away impressed. I should also mention that I landed quite a nice contract while I was there, so take that into account when reading this piece.

Why does IBM STG want to work with RedMonk, a developer-focused analyst firm? Good question – and a question right at the heart of a major transformation in progress at IBM. If you don’t follow IBM you may not know that the firm recently merged its hardware and software businesses under the leadership of Software Group supremo and noted executive geek Steve Mills. SWG acquired STG, and Mills is tasked with growing the combined business. Let battle with Oracle, another software + hardware play following the Sun acquisition, commence…

STG is already beginning to sound, and act, more like SWG. One of the key plays is to get IBM technologists out of the labs and closer to the customer. Software Group has worked hard to get a closer view of customer problems, but STG has traditionally been more of a field of dreams: build it and they will come. One of the mechanisms for change is the creation of a new consulting-led labs program, which in some cases will be free if the project is interesting enough to IBM. Essentially IBM is recreating its Technical Sales program for the 21st century. Lord knows why IBM dropped the program in the first place: customers really love having dorks come into their shops and offer solid advice with no hard sell.

Design Local, Deploy Global

What do I mean by getting close to the customer? As the father of a 4-month-old baby girl I couldn’t help but be struck by work between IBM and the Toronto Hospital for Sick Children. The problem: it’s hard to spot infection in babies, and by the time you do, it can be too late. Infections can have a myriad of symptoms, so by analysing not one or two signals but everything (brain waves, blood, temperature, skin tension), the machines are able to monitor all the patients in a way a clinical practitioner couldn’t. You might catch an infection a day earlier. The IBM Labs concept is “machines that observe the world”. The output in a clinical environment: lower child mortality rates.

A more prosaic example is IBM’s work with Centerpoint Energy on smart grids. The problem? Too much data. It sounds kind of obvious, but smart meters produce so much information (at Centerpoint, 5bn meter reads a day) that it’s hard, if not impossible, to store and analyse it all. IBM folks helped develop an approach that tracked only changes rather than every poll.
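The change-tracking idea is easy to sketch. The function below is a minimal illustration of keeping only readings that differ from the last stored value; the field layout, threshold parameter, and sample data are my own assumptions, not details of IBM’s actual implementation at Centerpoint.

```python
# Hedged sketch: persist a meter reading only when it changes,
# rather than storing every poll. Names and threshold are
# illustrative assumptions, not IBM's implementation.

def compress_polls(polls, threshold=0.0):
    """Keep only polls whose value differs from the last kept
    value by more than `threshold`. Always keeps the first poll."""
    kept = []
    last = None
    for timestamp, value in polls:
        if last is None or abs(value - last) > threshold:
            kept.append((timestamp, value))
            last = value
    return kept

# Six polls, but the meter only changed twice after the first read:
polls = [(0, 10.0), (1, 10.0), (2, 10.0), (3, 12.5), (4, 12.5), (5, 11.0)]
print(compress_polls(polls))  # → [(0, 10.0), (3, 12.5), (5, 11.0)]
```

At 5bn reads a day, dropping the unchanged polls is the difference between a tractable store and an impossible one; the trade-off is that reconstructing a reading at an arbitrary time requires looking back to the last recorded change.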

Follow The Money

In terms of understanding the transformation, it’s best to follow the money. IBM has decided its sales and go-to-market programs are too skewed to existing customers. Up to 95% of its Selling, General and Administrative Expenses (SG&A) are currently predicated on traditional enterprise accounts – that is, IBM’s existing customer base. But after an analysis with McKinsey, IBM is set to drastically change the mix. It turns out that SIs and ISVs account for a significant proportion of server sales into enterprise customers – and IBM is going to change its budgets accordingly. ISVs are now a honeypot for IBM, and that means courting developers. IBM is also looking to non-traditional customers in spaces such as analytics and cloud. Some of the numbers are staggering: IBM assesses the analytics market, for example, as a $205bn opportunity.

The other key is a far more aggressive focus on competitive winbacks and targeting.

The simple truth, and challenge for IBM, is that the vast majority of software developers today default to vanilla x86 – the consensus view is that the awesome is in the software, not the boxes. The rise and rise of VMware has shown that some functions traditionally expected to come from the operating system are now third-party software concerns. Vanilla+VMware: that’s what IBM needs to compete with. It needs to convince the world that its own POWER microprocessors, and its server and storage architectures, have a value that goes beyond what vanilla can deliver. It’s worth noting that with some cloud Platform as a Service options x86 is to some extent disappearing as a concern: when you deploy to Heroku you just write and deploy a Rails app, and the box really doesn’t come into it. So in some respects at least the wind is not against IBM.

Beyond Vanilla
Key to making the case for Beyond Vanilla is design. Design at the customer, design in the labs – Design. Given that the winner in tech is generally the best packager of an innovation wave, the focus on design makes perfect sense. Design local, deploy global: IBM needs to learn from its customers, and serve them accordingly. With the current transition to Big Data, with NoSQL technologies such as Hadoop gaining mainstream acceptance, and with new Big Memory architectures, IBM has another disruption to ride. We need to optimise for storage and memory. IBM is working on that; it’s no accident that its eX5 hardware extensions for Intel Nehalem are all about creating a memory bus across servers. The industry is on the verge of a Reverse Intel: that is, microprocessors are no longer the engine of innovation and continuous improvement – memory is. Nehalem is arguably about all the memory onboard, not the cores. IBM is doubling down on that.

Design, as Mills said:

“Would you play golf with just one club?”

It’s easy to be skeptical about the need for a range of specialised hardware form factors, but that skepticism is specious. One of the great joys of covering the software market at the moment is the sheer diversity of tools. As I said on Twitter earlier today:

Awesome software is going geometric/cambrian: infrastructures, frameworks, languages. github is the hub. impossible to keep up.

Scala, Node.js, NoSQL, HTML5, JRuby, Redis, Django, etc. In software we’re realising that one size does not fit all, so why shouldn’t hardware follow suit? New engineering disciplines will be required, with a focus on design for purpose. That is IBM’s stated intention.

Key to success will be design across disciplines – pulling together hardware and software and services to find new ways to solve tough problems and win new customers. Key to success will be appealing to developers and practitioners, bringing them into the design process. Geeks at the point of contact.

Categories: IBM.

2 Responses

  1. Very insightful, James. More to come in 10 days!

  2. James,

    There was a time when software development was chronically catching up with hardware capability. This was easily seen when the IBM PC was introduced. There has been leap-frogging between HW and SW abilities since then, but in general, the magic exists in SW. The latter is rather easy to say considering HW had reached a point about a decade ago where some amazing SW packages, stacks, and infrastructures could be built, and still are!

    While accurate:

    “In software we’re realising that one size does not fit all – so why shouldn’t hardware follow suit”,

    as far as HW is concerned, we still have execution units that basically execute opcodes read from memory – a 60-year-old paradigm. How efficient that execution paradigm is determines the quality of our SW-driven world. It is quite a synergistic relationship. It is no coincidence that Oracle is attempting to emulate the system synergy IBM has created over these same six decades by purchasing box-touting Sun. Similarly, HP purchased EDS to create a bridge between HW and SW services.

    IBM has developed HW/SW ecosystems in which applications can not only be built efficiently but, once deployed, scale on demand. CloudBurst, GPFS, SONAS, etc., and specialty applications such as STREAMS need to meet the extensive scale required to fulfill their promise. As you noted, when there is a memory-size limitation in the Nehalem architecture, IBM designs the eX5 chipset. When customers require an SAP environment that fulfills not only today’s business requirements but those of tomorrow, POWER-based solutions outperform Oracle+Sun solutions by 3X (see: http://www-03.ibm.com/press/us/en/pressrelease/33019.wss) and HP solutions by 4X.

    The results of IBM’s inter-related SW development environment and execution platforms speak for themselves in world-record performance. None of this is an accident but part of well thought-out goals.

    David Davidian
    IBM Sr System Architect
    http://www.ibm.com/blogs/davidian


