
Amazon’s Anti Gravity Play: Notes on AWS re:Invent 2015

Amazon used re:Invent 2015 to emphasize the growing momentum of its cloud infrastructure business and mark its transition into a data and application platform.


re:Invent is kind of incredible. I don’t think I have ever seen a more committed set of conference delegates in terms of session attendance. Everyone was eager to learn, so pretty much every session was packed. I’ve certainly never been to a show where Adrian Cockcroft was seemingly less of a draw than a nearby vendor session on Big Data.

The hunger for learning shouldn’t surprise us given the velocity of Amazon service delivery – there is an awful lot of stuff to take advantage of. Amazon is a flywheel of new function delivery, and the company’s growing community evidently wants to take advantage of new services as they are delivered. Keeping up with Amazon can be a full time gig. Just one data point – my friend Ant Stanley saw enough market opportunity to launch a video learning platform, A Cloud Guru, dedicated to AWS, running on AWS platform services – for bonus points it’s worth checking out this post about its microservices/serverless architecture.


In terms of the platform play, in case you didn’t get the memo – AWS Lambda is a potential game changer, allowing developers to write stateless application functions in a variety of programming languages, triggered in response to service events, without needing to provision servers. Lambda turns Amazon Web Services into one big event-driven data engine. Events can be triggered by any change within AWS, from updates to objects in S3 or DynamoDB tables, Kinesis streams or SNS calls. Amazon manages all server deployment and configuration in handling Lambda calls – as Amazon calls it, “serverless” cloud, which maps fairly well to what everyone else calls Platform as a Service (PaaS).
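To make the event-driven model concrete, here is a minimal sketch of a Lambda function reacting to an S3 “object created” event, in Python. The bucket and object names are illustrative, and the actual trigger wiring lives in AWS configuration rather than in code; this just shows the shape of the handler a developer writes.

```python
import json
import urllib.parse

# A minimal sketch of a Lambda handler reacting to an S3 "object created"
# event. The event structure mirrors what S3 delivers to Lambda; the
# trigger itself is configured in AWS, not in code.
def handler(event, context):
    results = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        # Object keys arrive URL-encoded in the event payload
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        # In a real function you might resize an image, index a document,
        # or fan the event out to another service here
        results.append("processed s3://{0}/{1}".format(bucket, key))
    return {"statusCode": 200, "body": json.dumps(results)}
```

No servers to provision, no capacity to plan: the function exists only for the duration of the event it handles.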

Less Portability

So far Lambda adoption has been a little slow, partly because it doesn’t fit into established dev pipelines and toolchains, but also almost certainly because of fears over lock-in. Amazon has historically dominated the cloud market precisely because it was an infrastructure, rather than a platform services, play. As I said back in 2009:

“Amazon isn’t the de facto standard cloud services provider because it is complex – it is the leader because the company understands simplicity at a deep level, and minimum progress to declare victory. Competitors should take note – by the time you have established a once and future Fabric infrastructure Amazon is going to have created a billion dollar market. And what then? It will start offering more and more compelling fabric calls… People will start relying on things like SimpleDB and Simple Queue Service. Will that mean less portability? Sure it will…”

So while we expect Lambda adoption to quickly pick up, it is following a similar trajectory to the broader PaaS market. Amazon will have to do some market making and hand-holding to encourage adoption of technology that could mean lock in.

Update: When thinking about fear of lock-in, however, it’s important to note that pragmatism and effective packaging tend to trump Open when it comes to the enterprise, even in the age of open source. Every wave of Open technology is eventually pwned, as customers choose what they see as the best packager – think Windows vs NetWare, Red Hat, the Unix Wars, the Oracle database and so on. The best packager in any tech wave therefore wins, and wins big, because convenience trumps openness in enterprise decision making, especially when it’s driven by lines of business. The perceived value of Lambda could begin to erode fears about lock-in. Meanwhile enterprise customers are far less fear-driven when it comes to public cloud than they were even a year ago, and they’re increasingly ready to make strategic commitments.


Lambda will underpin new AWS offerings, initially in areas such as security, compliance and governance, with the AWS Config service and AWS Inspector, given that any system change or API call can be logged, and/or a related message routed and acted upon. With the cloud we have far better observability, and a greater ability to turn assets on or off, than we do with on-prem architectures.

I had an interesting chat with Adrian Cockcroft on the subject of security just after the keynote. Long of the opinion that cloud is more secure than on premise, he said that he could easily envisage that within 5 years you won’t be able to get PCI compliance unless you’re in the cloud. Seems Sean Michael Harvell has similar ideas.

The nearest equivalent general purpose cloud architecture currently being touted is probably Thunder, from Salesforce, announced at Dreamforce last month. It is also event-based, allowing for streaming and rules-based programming, with federated data stores on the back end – notably bridging IoT logs with customer data.


The event-based programmability of Lambda, with a rules engine, is very interesting, but the data management side behind it is kind of stunning.

Amazon is delivering on a federated data store model – once the data is stored in the AWS cloud, developers can choose their engine of choice to program to, whether that be MySQL, Oracle, SQL Server, MariaDB (announced at re:Invent), or the AWS Redshift data warehouse. The idea that a developer doesn’t need to choose a specific data store, or move data around, before creating or extending an app is very powerful. This is NoSQL on steroids. Most web companies today are building apps that comprise multiple data stores, and Amazon is catering to that requirement, readying itself for a future where enterprises start to make choices that look more like web companies’. There is no single database to rule them all. Amazon, as in other areas, doesn’t try to create a single once and future solution, but does a great job of packaging what’s already out there. RedMonk has been writing about heterogeneous federated data stores since forever, so it’s gratifying to see cloud computing finally making them a reality.

In terms of competitive offerings it’s worth mentioning Compose.io, recently acquired by IBM, in this context. Compose is also delivering support for multiple federated data stores in the cloud: MongoDB, Elasticsearch, RethinkDB, Redis and others.

Update: Amazon is also on top of the in-memory cache and queuing pattern, with AWS ElastiCache offering Redis and Memcached as managed services. Oh yeah – Amazon also announced AWS ES – managed Elasticsearch – at re:Invent. Here’s why that’s kind of interesting. Thanks for the reminder @jdub!

But Amazon didn’t just make it easy to write data applications, it also made a great argument-as-code for the new approach with its new QuickSight analytics platform. QuickSight is the new AWS business intelligence and data visualisation platform, built on top of an in-memory query engine Amazon calls SPICE (Super-fast, Parallel, In-memory Calculation Engine). The core innovation of QuickSight, as I see it, is the fact that queries can be run across the customer’s data estate in AWS, regardless of whether it’s held in the Redshift data warehouse, the Kinesis streaming platform, DynamoDB (Amazon’s managed NoSQL database) or one of the database engines supported by AWS RDS, including Oracle, Microsoft SQL Server and Postgres.

QuickSight discovers sources where your organisation has stored data in AWS, and makes suggestions about possible relationships. You pay by the hour, with no ETL or data movement (cost) overheads. Amazon claims that because it builds its offerings on top of standard open source technology, customers should be less wary of lock-in than with traditional proprietary databases and applications. In summary, data gravity, commonly thought of as a cloud drawback, is now potentially an Amazon advantage.


Loving Snowball

To that end, my favourite re:Invent announcement was definitely the Heath Robinson contraption otherwise known as Snowball.


How to get large volumes of data into the cloud is a live issue. Even with dedicated pipes it could take hundreds of days to upload multiple terabytes of data. Amazon wanted to make this easier, and so invented the contraption above: a dedicated encrypted storage appliance which the customer fills with data, and which Amazon then collects, ships and uploads on the customer’s behalf before returning the empty box. Note the onboard Kindle, to prevent label misprinting issues and allow for tracking. Each Snowball can store up to 50TB of data. You have to love a good hack.

Launching the Internet of Things

As mentioned earlier in this post, Amazon and Salesforce are both converging on the next-gen streaming/rules/heterogeneous data store cloud platform. But where Salesforce talked about customers as a way to talk about IoT, Amazon cut straight to the chase with IoT as a set of AWS services – native support for MQTT and certificate management, for example. One of the most intriguing inventions was the introduction of “shadows” – cloudside models of physical devices in the world, maintaining state, which can also be programmed against. Having a virtual model of every physical device in the network is a very powerful notion.
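The shadow idea is easiest to grasp as a state model: the cloud keeps a desired state and a last reported state for each device, and computes the delta the device still needs to apply when it next connects. The real AWS IoT service does this over MQTT/HTTPS with JSON shadow documents; the pure-Python sketch below, with made-up field names, just illustrates the concept.

```python
# A sketch of the "shadow" state model. The cloud holds both the desired
# and the last reported state for a device; the delta is what the device
# must still change. Field names ("led", "interval") are illustrative.
def shadow_delta(desired, reported):
    """Return the keys where desired state differs from reported state."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

shadow = {
    "desired":  {"led": "on",  "interval": 30},
    "reported": {"led": "off", "interval": 30},
}
```

The power of the pattern is that applications program against the shadow, not the device, so a flaky or sleeping sensor looks like an always-available cloud object.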

Leaving Las Vegas

There is probably plenty more to add, but in summary Amazon is in a very good place now, arguably pulling away from the chasing pack of cloud providers. The company is getting better at selling to the enterprise, and now has a number of new and compelling services with which to target those billions of dollars of spend. It is talking up hybrid, though it still has plenty of work to do in that regard. Meanwhile it continues to offer services startups can and will take advantage of. Amazon is now a platform.


Categories: Uncategorized.

Want to Encourage Diversity in Tech? Then Be Quiet. Thoughts on Ada Lovelace Day.

Ada aged 4

At Monktoberfest a couple of weeks back Justin Sheehy gave a great talk about how he, as a privileged white dude, was trying to dial it down in order to give other voices a chance to be heard. You see we tend to show off, and have been encouraged since we were kids to try and be the smartest kid in the room, which we end up equating to being the loudest. If you’re confident, you’re right, right? In adulthood a lot of us run around declaiming in stentorian tones as if we were on the London stage. I am one of these loud ones.

But on the drive home from Portland with Justin and David Pollak something suddenly crystallised for me. Justin hadn’t just given a talk about being less dominant, he had re-engineered his speaking register. He speaks far more quietly now than he did a year ago. I am sure that quiet gives others on his team a chance to shine. Nature, of course, abhors a vacuum.

I have been mulling this over. I always try and make sure my eldest son doesn’t talk over his sister for example, and that I listen attentively to her. In working life, I have a long way to go, and I only hope I can begin to emulate Justin’s quiet example. I was reading Synaptic Lee today and I am sure it’s the right thing to do.

As someone who’s interested in the computational aspects of neuroscience, I’ve experienced male voices talking over me and men favouring to talk to other men. These don’t happen often, but highlight the fact that women’s voices are often not recognised in STEM subjects. When women are heard, sometimes there is the expectation that they must perform even better in order to prove themselves as worthy as men. This brings to mind a psychology study well-known to my fellow grads that, when asked to specify their gender before a difficult maths test, women performed much more poorly than when told their gender would be undisclosed.

So, sad to say, society is still struggling to present women on equal grounds in science, technology, engineering and maths. Let’s take this one day to shine a spotlight on women in STEM.

Men need to listen more and talk less, and we might just create space to make a better industry. We need to shut the hell up and listen. And then maybe Augusta Ada King’s legacy can begin to be properly felt.


The image above is Ada, aged 4.



inflections – emc dell etc




Elasticsearch and Splunk: Google Trends

Last week while idly pondering the rise of Elasticsearch Logstash Kibana (ELK) stack I thought I would compare Elasticsearch with Splunk on Google Trends. The results were interesting.

There are obviously some significant caveats – for one, the search terms I used were dictated by what might actually be valid using Google Trends – “Elasticsearch” vs “Splunk”, rather than ELK vs Splunk, which is closer to what I had in mind, but might have been confused with large red deer.

The evolution of Elasticsearch has been interesting. From its beginnings as a full text search engine built on Lucene it has grown into something more far-reaching, packaged by Elastic, a commercial company, as an analytics toolset bringing together Elasticsearch, Logstash as a general purpose data collector (what began as a single purpose tool for collecting and transforming logs has become a general purpose ETL tool), and Kibana for data visualisation and exploration. ELK is an easy, developer-friendly toolkit for data analysis.
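To see what the Logstash leg of the stack actually does, here is the core idea sketched in Python rather than in Logstash’s own configuration language: take an unstructured log line and turn it into a structured document you could index in Elasticsearch and chart in Kibana. The Apache-style log format and field names below are illustrative.

```python
import re

# What Logstash's grok-style filtering does at its core, sketched in
# Python: parse a raw log line into named fields. The pattern matches
# an Apache-style access log line; field names are illustrative.
LOG_PATTERN = re.compile(
    r'(?P<client>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_line(line):
    """Return a dict of fields for a matching line, or None."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

line = '127.0.0.1 - - [10/Oct/2015:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326'
```

Once the line is a document of named fields, counting 500s per minute or plotting traffic by path becomes a query rather than a scripting exercise – which is exactly the value proposition of the stack.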

It’s also probably worth mentioning Graylog, another commercial company building on Elasticsearch, which has set itself to more directly target Splunk. Meanwhile Fluentd, developed by Treasure Data, is another open source project that performs similar functions to Logstash.

Splunk, however, is already a publicly traded company – it IPOed in 2012 – with significant enterprise penetration, specialising in log management for operational analytics and security. 2015 total revenues were $450.9 million, not bad for a company helping clients to mine data they have traditionally thrown away or ignored. Splunk is not primarily an open source company, and is far ahead of any open source competitor in revenues and enterprise adoption. But the ELK stack is used in some similar spaces, and is finding a role in general purpose analytics, not just the analysis of operational log data.

Splunk vs Elasticsearch is very much an imperfect comparison, and yet Elasticsearch is clearly a funnel in its own right, and ELK is finding its own DevOps-led customer base. Enterprise ops teams are likely to choose Splunk, but cloud developers and DevOps teams at this point are likely to favour ELK.

Unfortunately I wasn’t able to attend Splunk’s user conference this week, which might have helped me parse how the company will deal with the rise of open source competitors. This post isn’t intended as a negative against Splunk, but rather to note that in terms of Google search volume at least, Elasticsearch is in the mix.


Elastic, Splunk and Treasure Data are all RedMonk clients.


Dreamforce 2015: We’re Going To Need a Bigger Boat

Last week I travelled to San Francisco for Dreamforce 2015. The annual conference is actually kind of weird in that it is optimised for scale, but it is also fun. Seriously – I much prefer small conferences. To me 150 is the right number for a really good community-based event. Not @benioff – oh no… this year Salesforce had 150,000 people attending, and, strangely, it actually felt like a community… having fun. That’s right – Dreamforce is fun. And did I say – One Hundred and Fifty Thousand People.

My colleague Fintan has already written about the increased developer focus on display this year, and I will just double down on that. The developer zone was crazy – at a rough guess there were five times as many attendees as last year. There were people in lines everywhere, waiting happily for stuff, whether that stuff be schwag, training, or just a seat to sit and hack. There was quite literally nowhere to stand, or even sit down. You could get cold beer though – thanks Wind River.

The most popular sessions in the Dev zone were about Salesforce Lightning – the company’s Javascript framework designed for UIs that don’t suck. I know, right. Enterprise software user experience is still generally pretty bad, and though Salesforce is a lot more usable than Siebel ever was, it was still looking like software from 10+ years ago. So Lightning brings a Javascript framework approach, with drag and drop programming against Force data back ends. Salesforce is not primarily about elite developers. It’s about software that humans can use to compose new services. Where Microsoft in its pomp had “IT Pros”, Salesforce today has “admins”, folks that configure services to tailor them to enterprise needs. RedMonk is pretty skeptical of drag and drop magic, but Salesforce is doing a solid job of creating composition environments that will make its admins even more effective, and will be very familiar to people that have learned just enough Javascript to be dangerous.

More than a week later and I can still hear the cheerleaders in the Admin keynote with Parker Harris: “Admins, AWESOME. AWESOME Admins”. Salesforce is adept at making stars of its community.

For me however the red meat for developers came in the Heroku keynote.

Today Github is quite simply where software gets built. With that as a core design principle Heroku introduced Heroku Flow for continuous delivery. Changes to apps today are managed using pull requests, so Heroku built Pipelines around that, making reviews of pull requests into a consistent, managed process. You create a new app on Heroku to test the change, promote and review on Heroku, then merge on Github. Of course with Heroku you can also easily build and tear down environments, with containers under the covers.

Today it’s not enough to just offer hosting or even black box PaaS – developers are increasingly looking for services that allow them to deploy private networks in the public cloud, making geographical choices about where to host data. To that end Heroku announced Private Spaces, for programmability at the network layer, initially in Oregon, Frankfurt and Tokyo. You can now do stuff like have a database (Postgres, Redis etc) with a static IP address. This kind of functionality is increasingly a big deal. Public cloud wins on convenience, but the market has made it very clear that data location and governance are not optional extras.

Code is important, but the most important thing in an ecosystem is learning and education. Trailhead, a gamified tech learning environment, has the installed base all fired up, and is a welcome departure from the usual circa-1996 fare. We’ll be watching with interest to see whether Trailhead can turn admins into coders, and coders into elite developers. It’s all about raising the skills base. On the admin thing – Salesforce creates jobs, which is great. It also has an eye for diversity.

I had a good time, it was great to hang out with people like Arti, Jesper, Craig, Morten, John, and Steve. It was also great to wait for 45 minutes for a table at Samovar with James and Quinton – lovely people, proper dev evangelists.

I’ll be back next year. Salesforce paid my T&E. I am slightly terrified, however, in case they shoot for 200k people next time. We may need a bigger boat.



Microservices and Disposability: On Cattle, Pets, Prize Bulls, Wildebeests and Crocodiles

Treat this post as a preview for my session at Dreamforce next week – Composability, Adaptability, and Disposability: RedMonk on Microservices. If you’re there you should come!

A few months back Darren Shepherd, one of the cofounders of Rancher Labs, was visiting my coworking space to hang out with Weaveworks, another Docker ecosystem startup. We were talking about containers and microservices, when I had an epiphany – while improved infrastructure disposability is a significant reason for adopting containers, it is actually definitional to microservices. So a Shepherd from a company called Rancher helped me better understand the now commonly used Cattle vs Pets distinction. You really can’t make this stuff up.

Like many others I had already been introduced to the Cattle vs Pets distinction by Adrian Cockcroft. See this excellent presentation by Randy Bias if you haven’t previously got the memo, or if you prefer words Bernard Golden sums things up well (with added chickens!):

Traditional infrastructure is expensive and individuated — we give servers names, we lavish attention on them, and when they suffer problems we do evaluation, diagnosis, and nurse them back to health via hands-on administration. In other words, we treat them as part of a family. This is true whether the server is physical or virtual; they are long-lived and stable, and therefore deserve personal attention and emotional attachment — just like a pet.

By contrast, cloud infrastructure is treated as transitory and disposable; due to the highly erratic workloads typical of cloud applications, virtual machines come and go, with lifespans measured in hours, if not minutes. Because of the temporary nature of cloud infrastructure, it’s pointless to get attached to specific resources; therefore, cloud servers more nearly resemble cattle rather than pets. In other words, instead of getting attached to a cloud server, it’s better to view it as a disposable resource, temporarily used and then discarded. And, unlike a pet, one does not think of a steer as part of the family and nursed to keep healthy.

Disposability has been a long time coming – I remember the aha moment when I first heard how Google dealt with hardware failures – by simply throwing out the offending boxes every once in a while, with no need to do anything immediately. At Google the software architecture meant that hardware became disposable. Today that architectural idea(l) is becoming a core design principle for Cloud Native software – 12 Factor Apps. Everything should be disposable. Except of course the data… Apps are like fish, but data is like wine. Stateless apps still require a persistence layer.

One of the current industry terms for this approach is “immutable infrastructure”. One of the biggest problems in managing infrastructure is dealing with configuration sprawl – “old systems grow warts”. So instead, how about never changing the image? When it doesn’t work, throw it out and create a new one, rinse and repeat. Trash Your Servers and Burn Your Code: Immutable Infrastructure and Disposable Components.

With microservices we’re dealing with disposability at a different layer of the stack, but the pattern is the same. As Stephen argues the key to the current container frenzy is that the Atomic Unit of computing is now the app.

The explosion of Docker’s popularity begs a more fundamental question: what is the atomic unit of infrastructure moving forward? At one point in time, this was a server: applications were conceived of, and deployed to, a given physical machine. More recently, the base element of an infrastructure was a virtual recreation of that physical machine. Whether you defined that as Amazon did or VMware might was less important than the idea that an image resembling a server, from virtualized hardware and networking interfaces to a full instance of an operating system, was the base unit from which everything else was composed.

Containers generally and Docker specifically challenge that notion, treating the operating system and everything beneath as a shared substrate, a universal foundation that’s not much more interesting than the raised floor of a datacenter. For containers, the base unit of construction is the application. That’s the only real unique element.

So microservices must be disposable. If a microservice fails or is superseded by a better service, then simply dispose of the old one. With Cloud Native apps there is safety in numbers, rather than “high availability” failover across two nodes – the crocodile may take out one wildebeest, but the rest of the herd keeps running.


If you’re interested in digging into these concepts please join us at Dreamforce next week.



Dinosaurs can be Unicorns too

I don’t exactly know what Benedict Evans meant by his one word comment on this chart, but I found the data interesting because of the frankly incredible ongoing performance of Microsoft, rather than the potentially more sigmoid looking curve of Apple (a model of saturation in a population, an s-curve which will dramatically plateau).

Plenty of smart people talk about Steve Ballmer’s time as CEO at Microsoft as a failure, but I really wish I could fail like that. I watch enterprise software, and Microsoft has been turning in organic double digit growth in multiple billion-dollar-plus businesses year after year for over 10 years now. That’s impressive execution. Microsoft’s server and tools business is perhaps a triceratops – a dinosaur with more horns than a unicorn. Has Microsoft succeeded at everything? No – but show me the company that has. Execution is really hard – thus the idea of “unicorns”.

Microsoft catching IBM in revenues is what really strikes me, and is perhaps what Benedict was referring to. Outsourcing is not a good place to be right now.

Our current fetish for outsize valuations is certainly interesting – the companies we call unicorns are valued at over a billion dollars, but the term says nothing whatsoever about revenues. These valuations are based purely on private, somewhat illiquid markets (to be fair a lot of smart people think Mark Cuban is wrong about the bubble).

When I think of dinosaurs I think of incredibly successful life forms, that thrived over hundreds of millions of years. It is mankind that looks more like a blip. Maybe that’s what Benedict meant? A dinosaur offers sustained performance over time. I suspect of course, that’s not what he meant.


I leave the last word to Marc Benioff.

Some disclosure: Dell, IBM, Microsoft are clients and I am certainly not a stock picker.


Programming Language Rankings: Summer 2015

summer 2015 rankings

This iteration of the RedMonk Programming Language Rankings is brought to you by HP. The tools you want, the languages you prefer. Built on Cloud Foundry, download the HP Helion Development Platform today.


“The basic concept is simple: we regularly compare the performance of programming languages relative to one another on GitHub and Stack Overflow. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion (Stack Overflow) and usage (GitHub) in an effort to extract insights into potential future adoption trends.”

RedMonk continues its exploration of programming language discussion and contribution with our ongoing rankings. We know our methodology isn’t perfect, but it has proved a useful guide to potential adoption vectors – notably in the case of Swift. [bonus update – after I posted this on Friday I realised I had missed some important stuff out. Namely, the growing influence of our regular rankings. You see, we’ve had some amazing highlights over the last six months – notably that our rankings were cited by Apple, both by Tim Cook in its Q2 earnings report, and in the WWDC keynote.]

Swift is certainly the first language to crack the Top 20 within a year. By comparison, one of the fastest moving non-Swift languages, Go, ranked #32 in the original 2010 dataset, finally cracked the Top 20 in January of this year.

Less surprising is the ongoing jousting for the top spot between Java and Javascript.

As with last quarter, JavaScript maintains a slim margin on second-place Java, with the caveat that the difference between numerical rankings is slight. The language’s sustained performance, however, reflects the language’s versatility and growing strategic role amongst startups and enterprises alike.

The growth in interest in Javascript should surprise no-one – new frameworks emerge on a daily basis. It is an ecosystem characterised by a rewrite-all-the-things mentality, with NPM becoming a key ecosystem hub – as ever, it’s all about packaging. But the ongoing strength of Java is more interesting, given that we’re seeing a shift in programming models with the rise of 12 factor apps, continuous integration and microservices. It will be interesting to see how Java continues to adapt in the Cloud Native era.

As Stephen says:

One of the biggest issues facing users today is, paradoxically, choice. In years past, the most difficult decision customers had to make was whether to use BEA or IBM for their application server. Today, they have to sort through projects like Aurora, Cloud Foundry, Kubernetes, Mesos, OpenShift and Swarm. They have to understand where their existing investments in Ansible, Chef, Puppet and Salt fit in, or don’t. They have to ask how Kubernetes compares to Diego. Bosh to Mesos. And where do containers fit in with all of the above, which container implementation do they leverage and are they actually ready for production? Oh, and what infrastructure is all of this running on? Bare metal? Or is something like OpenStack required? Is that why Google joined? And on and on.

There are two things to unpick there. Requirements lean to particular language choices – for systems programming Go has emerged as the natural choice, led by the inimitable Derek Collison of Apcera.

Go: A year ago, we predicted that Go would become a Top 20 language within a six to twelve month timeframe. Six months ago, it achieved that goal landing as the #17 language in our January rankings. In this quarter’s run, Go continues on that same trajectory, up another two spots to #15. In the process, it leapfrogged Haskell and Matlab.

Systems programming is not general purpose application programming, however – the biggest challenge for Java will be ongoing relevance in an era where stateless programming is the norm rather than the exception, and where fragmentation is seemingly a given. [second bonus update – joining the dots then, we’re still witnessing the slow motion fallout of a model where, as Stephen points out, so much of the core infrastructure was built into the app server. Java is going to need something like Spring Boot to sustain it going forward in this new world; after all, as we have shown, frameworks lead language adoption.]

Fragmentation is creating opportunities for languages like Rust and Julia. We’re faced with an interesting model where we’re seeing stability at the core, but a fair degree of movement at the edges, with new languages swiftly (please excuse the pun) coming to the fore.

A final word of thanks to HP. The rankings are Internet-famous, but they have hitherto been something we didn’t charge for. By adopting a sponsorship model I think we have a nice balance between funding our open work, and highlighting a company that wants to engage more closely with developers. The New Patronage Economy, as it were.


The Developer Aesthetic: On developerWorks Open Tech

I have been thinking a lot lately that as Software Eats the World, a new aesthetic emerges – a set of patterns, practices, modes, mores and in-jokes. I call this phenomenon The Developer Aesthetic (with apologies to James Bridle). It’s a way of seeing the world and presenting ideas for consumption and feedback – through software, of course. Some of this is clearly codified – for example Rails apps or Twitter Bootstrap – but some is more implicit. At RedMonk we believe that Developers are the New Kingmakers, and if you want to engage with them it pays to learn their aesthetic.

IBM is currently refreshing developerWorks, its venerable developer portal, and frankly it needs the update. Any site designed more than 10 years ago is likely to feel a little tired. But what’s a company to do when the hackers are going elsewhere? One of our clients asked us this week whether it even makes sense for tech companies to host their own sites for developers in the age of Github and Stack Overflow. The answer is federation – interesting information is distributed, just as much as today’s computing infrastructures are, which is why I really like the new design for IBM’s developerWorks Open project pages.

As you can see below, IBM introduces a clean header design, with an at-a-glance view of Github-related activity on a project. When I first looked at the site, however, I noticed that not all the information in the header was clickable. I let IBM know, and Dirk Nicol fixed it within 2 hours (rather impressive if you’ve ever worked with IBM). The sidebar also adds to the reduced-click nature of the site, with nice use of microcopy, for example with the clone button.


node red on developerworks open tech

Note IBM is also using Slack, the new collaboration hotness, to talk to developers.


IBM is a client.


On HP Discover, Devops and the Developer Aesthetic


HP is very much a company in transition, again, as it prepares for a demerger that will leave two Fortune 500 companies – HP Inc (PCs and printers) and Hewlett Packard Enterprise (everything else). This post is about the latter. One issue with any analysis is that the situation could so easily dramatically change – rumours abound, for example, that the newly demerged HPE will immediately merge with EMC. Leaving such speculation aside, HP’s direction of travel makes sense – becoming more developer-friendly.

For all the brickbats thrown at Leo Apotheker, one thing that began to change at HP under his short watch was the company view on developers.

HP has historically sold to seemingly everyone in IT except developers – it did a great job of selling to network managers with OpenView, service managers during the ITIL service management wave with Service Desk, QA teams with the purchase of Mercury Interactive, which essentially acquired the 2006 load testing market, and so on. HP was creating an Enterprise IT Management play, rather than a developer play. HP was also very much a delivery vehicle for packaged software ISV partners, rather than in-house development (IT doesn’t matter, remember?) – such as SAP. 2006 however may have been Peak Waterfall – we’ve been moving to agile ever since.

So what’s a company to do when it has a stack entrenched in customer shops aimed at a previous way of doing things? Emphasize the culture change in moving to the new era, and retool accordingly. Which is where RedMonk comes in. We were invited to HP Discover recently to talk to customers and partners about DevOps and what it would mean for them. We weren’t talking product, but all the soft stuff. After all, as Adrian Cockcroft says: “DevOps is a reorg”. The conversations were fascinating. While HP does have some customers that are already wearing the t-shirt and the hipster beard, having reorged, rolled out Chef and Puppet, and then decided to standardise on Ansible, the great majority are still wondering how to embrace this new thing. Change is hard. For traditional enterprise ops people it is especially hard.

Related to the DevOps conversation is the transition to test-driven development and continuous deployment, given the business need to deliver more digital services to market faster. HP is decomposing its ALM tools to make them more applicable to the new era, in the shape of new tools like leanFT for functional testing, which supports Cucumber and JUnit – and of course Git, now essential for any modern dev tool.

HP has now adopted the software engineering mantra of Shift Left testing, and needs to encourage its customers to do the same. The idea of Shift Left is that, unlike Waterfall, testing moves earlier in the development cycle. Agile is all about Shift Left – testing is something developers do as part of their routine, and any good engineering manager today is all about the tests.
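To make the Shift Left idea concrete, here is a minimal sketch in plain Java. The assertions are written first, encoding the spec, and the implementation exists only to satisfy them. The discount calculator and all its names are purely illustrative – they are not drawn from any HP tool, and a real team would use JUnit or Cucumber rather than bare asserts:

```java
// Shift Left sketch: the assertions below were written before the
// implementation, and run on every developer build rather than in a
// late QA phase. (Hypothetical example, not from any HP product.)
public class DiscountTest {

    // The implementation under test, written to satisfy the assertions.
    static double applyDiscount(double price, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range");
        }
        return price * (100 - percent) / 100.0;
    }

    public static void main(String[] args) {
        // Tests first: these encode the spec.
        assert applyDiscount(100.0, 10) == 90.0;
        assert applyDiscount(50.0, 0) == 50.0;

        // Invalid input must be rejected, not silently accepted.
        boolean threw = false;
        try {
            applyDiscount(10.0, 150);
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        assert threw;

        System.out.println("all tests pass");
    }
}
```

Run with `java -ea DiscountTest` so assertions are enabled – the point of shifting left is simply that checks like these fail on the developer’s machine, minutes after the code is written, not weeks later in a test phase.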

Of course you can’t completely eliminate bugs before pushing to production, no matter how solid your testing approach, so I also found HP’s mobile app crash analytics platform HP AppPulse mobile interesting – it feels reasonably modern, although it’s not Crashlytics, which has the Developer Aesthetic down cold, obviously.


Which brings me neatly to Grommet. Not Wallace and Gromit, but Grommet – a UX framework from HP based on ReactJS. I was walking on the shop floor when a purple logo caught my eye. I stopped to find out more, and HP was just about to launch Grommet. The code was already on Github, but the team was still wrangling the logo, trying to decide on the right shade of purple, before the press release went out. We had a great conversation. Grommet is now a framework that instantiates a set of HP design guidelines, and apparently the HP Software CTO has now mandated that all new software will use Grommet and adhere to the guidelines. So HP now has a framework designed to help developers and designers work together, based on a popular open source project from Facebook and Instagram.

Talking of the Developer Aesthetic, another thing that impressed me was the process whereby a developer gets access to HP’s Idol OnDemand machine learning tools. Unlike the sign-on process for pretty much every enterprise software or cloud asset, this one is dead simple – in fact, what you’d expect from a cloud native company. Shortly after noticing this I met Sean Hughes, HP’s new worldwide developer relations lead. He gets it.

idol login


Before signing off for now, I will just say that, like IBM, HP is all about packaging the open source cloud stacks – HP HelionCloud, the company’s hybrid platform, is a “Cloud Foundry for polyglot developers, OpenStack for ops people” burger, which will be supporting Docker and so on. One final piece of evidence that HP wants to engage with developers – it is sponsoring our latest programming language rankings.

HP isn’t there yet, and the demerger could still throw up a lot of surprises, but the company is making progress in better serving the New Kingmakers.


HP is a client, and paid my T&E.
