
Monki Gras – The developer conference about craft culture

I wanted a theme for 2016 that riffed off the idea of a Software and Crafts movement. Earlier this year Dave Letorey was talking about brewing beer for this year’s event, so Homebrew stuck. That is, I had the theme early. We’re in the age of the side project, in our industry because of open source dominance, so why not look at passion and sustainability and making beautiful things, not because it’s a job but because it’s a way of life. There is nothing more purpose driven than homebrew, or baking, or knitting, or becoming a wine distributor. You do it because you love it, not because it’s convenient. There is something incredibly powerful about that. Just as with software, the brewing industry is being completely disrupted by the growth of microbreweries, many of which began as homebrew. Thus Evin O’Riordain, founder of The Kernel Brewery and one of our speakers this year, gave his first set of equipment to the talented Mr Andy Smith at Partizan Brewing when he outgrew it.

Source: Blog | Monki Gras | The developer conference about craft culture

Categories: Uncategorized.

A conversation about Continuous Deployment and Continuous Everything

When I was at HP Discover earlier this year we recorded a podcast about continuous deployment and DevOps with Dana Gardner of Interarbor Solutions and Ashish Kuthiala, Senior Director for Strategy at Hewlett Packard Enterprise. You can listen to the podcast here. Here are some edits from the transcript, with hopefully a couple of nuggets. I am fascinated, for example, by the convergence of social monitoring, product management and, if necessary, product recall – as exemplified by the GM example.


Kuthiala: The continuous assessment term, despite my objections to the word continuous all the time, is a term that we’ve been talking about at HPE. The idea here is that for most software development teams and production teams, when they start to collaborate well, they take the user experience, the bugs, and what’s not working on the production end at the users’ hands — where the software is being used — and feed those bugs and the user experience back to the development teams.

When companies actually get to that stage, it’s a significant improvement. It’s not the support teams telling you that five users were screaming at us today about this feature or that feature. It’s the idea that you start to have this feedback directly from the users’ hands.

We should stretch this assessment piece a little further. Why assess the application or the software only when it’s in the hands of the end users? The developer, the enterprise architects, and the planners design an application and they know best how it should function. Whether it’s monitoring tools or it’s the health and availability of the application, start to shift left, as we call it.


Governor: One notion of quality I was very taken with was when I was reading about the history of ship-building and the roles and responsibilities involved in building a ship. One of the things they found was that if you have a team doing the riveting separate from doing the quality assurance (QA) on the riveting, the results are not as good. Someone will happily just go along — rivet, rivet, rivet, rivet — and not really care if they’re doing a great job, because somebody else is going to have to worry about the quality.

As they moved forward with this, they realized that you needed to have the person doing the riveting also doing the QA. That’s a powerful notion of how things have changed. Certainly the notion of shifting left and doing more testing earlier in the process, whether that be in terms of integration, load testing, whatever, all the testing needs to happen up front and it needs to be something that the developers are doing.


Governor: We’re making reference to manufacturing modes and models. Lean manufacturing is something that led to fewer defects, apart from (at least) one catastrophic example to the contrary. And we’re looking at that and asking how we can learn from that.

So lean manufacturing ties into lean startups, which ties into lean and continuous assessment.

What’s interesting is that now we’re beginning to see some interplay between the two and paying that forward. If you look at GM, they just announced a team explicitly looking at Twitter to find user complaints very, very early in the process, rather than waiting until you had 10,000 people that were affected before you did the recall.

Last year was the worst year ever for recalls in American car manufacturing, which is interesting, because if we have continuous improvement and everything, why did that happen? They’re actually using social tooling to try to identify early, so that they can recall 100 cars or 1,000 cars, rather than 50,000.

It’s that monitoring really early in the process, testing early in the process, and most importantly, garnering user feedback early in the process. If GM can improve and we can improve, yes.
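GM’s actual tooling and data are not public, but the monitoring idea described here is easy to sketch: watch a stream of social posts for complaint keywords and flag a model once complaints cluster. A purely hypothetical Python sketch – the keywords, threshold and sample posts are all invented for illustration:

from collections import Counter

# Toy early-warning social monitoring: flag a potential defect when
# complaint keywords cluster around the same model. Keywords, threshold
# and sample posts are hypothetical.
COMPLAINT_KEYWORDS = {"stalls", "recall", "won't start", "ignition", "brakes"}
ALERT_THRESHOLD = 3  # matching posts needed before raising a flag

def flag_models(posts):
    """posts: iterable of (model, text) pairs; returns models worth investigating."""
    hits = Counter()
    for model, text in posts:
        lowered = text.lower()
        if any(keyword in lowered for keyword in COMPLAINT_KEYWORDS):
            hits[model] += 1
    return [model for model, count in hits.items() if count >= ALERT_THRESHOLD]

posts = [
    ("Model X", "My car stalls at junctions, third time this week"),
    ("Model X", "Ignition feels wrong, anyone else?"),
    ("Model X", "Dealer shrugged, but it stalls on cold mornings"),
    ("Model Y", "Love the new stereo"),
]
print(flag_models(posts))  # -> ['Model X']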


Gardner: I remember in the late ’80s, when the Japanese car makers were really kicking the pants out of Detroit, that we started to hear a lot about simultaneous engineering. You wouldn’t just design something, but you designed for its manufacturability at the same time. So it’s a similar concept.

But going back to the software process, Ashish, we see a level of functionality in software that needs to be rigorous with security and performance, but we’re also seeing more and more the need for that user experience for features and functions that we can’t even guess at, that we need to put into place in the field and see what happens.

How does an enterprise get to that point, where they can so rapidly do software that they’re willing to take a chance and put something out to the users, perhaps a mobile app, and learn from its actual behavior? We can get the data, but we have to change our processes before we can utilize it.


Kuthiala: Absolutely. Let me be a little provocative here, but I think it’s a well-known fact that the era of the three-year, forward-looking roadmap is gone. It’s good to have a vision of where you’re headed, but which feature and function you’ll release in which month so that the users will find it useful? I think that’s just gone, with this concept of the minimum viable product (MVP) that more and more startups take off with, trying to build a product and fund themselves as they gain success.

It’s an approach even that bigger enterprises need to take. You don’t know what the end users’ tastes are.

I change my taste on the applications I use and the user experience I get, the features and functionality. I’m always looking at different products, and I switch my mind quite often. But if I like something and they’re always delivering the right user experience for me, I stick with them.

The way for an enterprise to figure out what to build next is to capture this experience, whether it’s through social media channels or engineering your codes so that you can figure out what the user behavior actually is.

The days of business planners and developers sitting in cubicles, thinking this is the coolest thing I’m going to invent and roll out, are gone. You definitely need that for innovation, but you need to test it fairly quickly.

Also gone are the days of rolling back something when something doesn’t work. If something doesn’t work, if you can deliver software really quickly at the hands of end users, you just roll forward. You don’t roll back anymore.

It could be a feature that’s buggy. So go and fix it, because you can fix it in two days or two hours, versus the three- to six-month cycle. If you release a feature and you see that most users — 80 percent of the users — don’t even bother about it, turn it off, and introduce the new feature that you were thinking about.

This assessment from the development, testing, and production that you’re always doing starts to benefit you when you’re standing up for that daily sprint and wondering what three features you’re going to work on as a team — whether it’s the two things your CEO told you you absolutely have to do because “I think it’s the greatest thing since sliced bread,” the developer saying, “I think we should build this feature,” or some use case coming out of the business analysts or enterprise architects.


Gardner: For organizations that grok this, that say, “I want continuous delivery. I want continuous assessment,” what do we need to put in place to actually execute on it to make it happen?


Governor: We’ve spoken a lot about cultural change, and that’s going to be important. One of the things, frankly, that is an underpinning, if we’re talking about data and being data-driven, is just that we have new platforms that enable us to store a lot more data than we could before at a reasonable cost.

There were many business problems that were stymied by the fact that you would have to spend the GDP of a country in order to do the kind of processing that you wanted to, in order to truly understand how something was working. If we’re going to model the experiences, if we are going to collect all this data, some of the thinking about what the infrastructure for that looks like, so that you can analyze the data, is going to be super important. There’s no point talking about being data-driven if you don’t have an architecture for delivering on that.


Kuthiala: You’re right. We have a very rich portfolio across the entire software development cycle. You’ve heard about our Big Data Platform. What can it really do, if you think about it? James just referred to this. It’s cheaper and easier to store data with the new technologies, whether it’s structured, unstructured, video, social, etc., and you can start to make sense out of it when you put it all together.

There is a lot of rich data in the planning and testing process, and all the different lifecycles. A simple example is a technology that we’ve worked on internally. When you start to deliver software faster, you change one line of code and you want it to go out, but you really can’t afford to run the 20,000 tests you think you need to, because you’re not sure what’s going to happen.

We’ve actually had data scientists working internally in our labs, studying the patterns, looking at the data, and testing concepts such as intelligent testing. If I change this one line of code, even before I check it in, what parts of the code is it really affecting, what functionality? If you are doing this intelligently, does it affect all the regions of the world, the demographics? What feature function does it affect? It’s narrowing it down and helping you say, “Okay, I only need to run these 50 tests and I don’t need to go into these 10,000 tests, because I need to run through this test cycle fast and have the confidence that it will not break something else.”
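HPE’s internal intelligent testing technology isn’t public, but the core idea — run only the tests whose recorded coverage intersects the code you just changed — can be sketched in a few lines of Python. The coverage map, file names and test names below are entirely hypothetical:

from typing import Dict, List, Set

def select_tests(changed_files: Set[str],
                 coverage_map: Dict[str, Set[str]]) -> List[str]:
    """Return only the tests whose recorded coverage touches a changed file."""
    selected = [test for test, covered in coverage_map.items()
                if covered & changed_files]
    return sorted(selected)

# Hypothetical coverage data: which source files each test exercises.
coverage_map = {
    "test_checkout": {"billing.py", "cart.py"},
    "test_search": {"search.py"},
    "test_profile": {"users.py"},
}
changed = {"billing.py"}  # the one line of code we changed lives here
print(select_tests(changed, coverage_map))  # -> ['test_checkout']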

So it’s a cultural thing, like James said, but the technologies are also helping make it easier.


Kuthiala: We were talking about Lean Functional Testing (LeanFT) at HP Discover. The idea is that the developer, like James said, knows his code well. He can test it well up front, rather than throwing it over the wall and letting the other team take a shot at it. It’s his responsibility. If he writes a line of code, he should be responsible for the quality of it.


Governor: The RedMonk view of the world is that, increasingly, developers are making the choices, and then we’re going to find ways to support the choices they are making. The term continuous integration began as a developer term, and then the next wave of that began to be called continuous deployment. That’s quite scary for a lot of organizations. They say, “These developers are talking about continuous deployment. How is that going to work?”

The circle was squared when I had somebody come in and say that what we’re talking to customers about is continuous improvement, which of course is a term we again saw first in manufacturing. But The Developer Aesthetic is tremendously influential here, and this change has been driven by them. My favourite “continuous” is the great phrase continuous partial attention, which is the world we all live in now.

Categories: Uncategorized.

What is your Integrated Joy Strategy?

Categories: Uncategorized.

Thingmonk Update: crazy speaker line up, venue etc

OK, we’re about a week away from Thingmonk and we still have some places left. I wanted to ping you again because details are a lot clearer now, and the conference looks pretty banging, to be honest. You should come if you haven’t signed up already.

Our speakers list is pretty crazy – we’re flying Matt Biddulph in from San Francisco to talk about Thington, a platform built from the design perspective of conversations between machines and machines, and between machines and people. Mark Shuttleworth of Ubuntu fame will be talking about how open source will underpin IoT, Dave McCrory, CTO of Basho, will give us the ins and outs of design decisions in creating a time series data store, and Jeremiah Stone will discuss GE Industrial Automation’s technical decisions in becoming a platform company.

As in previous years we also have a big focus on user experience, with Sophie Riches and Sam Wimslet of IBM talking about Design Thinking at scale, and Claire Rowland updating us on her reckons from Designing Connected Products: UX design for the internet of things, published by O’Reilly. Amanda Brock will give us the low down on the legal implications of a world in which machines regularly make decisions on our behalf. Yodit Stanton will update us on opensensors.io – expect some Clojurey goodness – and we also have Natalia Oskina from the same firm explaining how it is monitoring air pollution levels near Heathrow (how unusual to bring some facts to the expansion debate).

Like AWS? Kyle Roche, who runs IoT at Amazon, will be spending the conference with us, and you can expect a deeply technical talk from him. Thomas Grassl and Craig Cmehil will be talking about how they’re retooling SAP to make it more developer friendly in order to increase the company’s relevance to IoT, bridging old school manufacturing and resource planning software into the new world. Boris Adryan is a PhD and squeaky wheel – he’ll be telling us what we’re all doing wrong 😉

We’re also very excited to have Moo involved this year, speaking on the technical, design and industrial challenges of rolling out an entirely new product category – NFC enabled business cards. We’ll be using these cards to track consumption of coffee, drinks and food at the event, and announcing a hack competition with the winners being showcased at sister conference Monki Gras in January.

Last year Andy Stanford Clark wowed us by running his presentation on a Raspberry Pi powered by a hydrogen fuel cell. We liked it so much that this year we’re running the conference at the Arcola Theatre in Dalston, which oddly enough is also the home of Arcola Energy, which provided the fuel cell in question.

Food and drink will be to the usual RedMonk artisan standards – expect arancini balls and other delights, and the finest craft ales known to humanity. Oh yeah, this year we’ll also be bringing you some natural wines. Breakfast will come from our media partners The New Stack, who are all about pancakes!

Anyway hope to see you there. There are some good discount codes flying around so just let me know if you want one, and haven’t already seen one out in the open.

Categories: Uncategorized.

My Submission to the Call for Evidence by the Commission on Freedom of Information

Given the subject is Freedom of Information it seems appropriate I publish my response to the current government consultation on FOI here. I also strongly recommend you take part. FOI is an essential weave in the democratic fabric. 38 Degrees has made it really easy to respond here.


Why do you think Freedom of Information should be protected?

FOI leads to better governance and better outcomes for citizens and businesses.

How do you think government transparency could be improved?

FOI should cover all companies providing government services

Question 1: What protection should there be for information relating to the internal deliberations of public bodies? For how long after a decision does such information remain sensitive? Should different protections apply to different kinds of information that are currently protected by sections 35 and 36? (Note: ‘Sections 35 and 36’ of the Act cover policy formulation, communications between ministers, and information that would affect the free and frank giving of advice or expression of views.)

As the software industry has shown, open source is a more effective management and production mechanism. We make better management and technical decisions in the open.

Question 2: What protection should there be for information which relates to the process of collective Cabinet discussion and agreement? Is this information entitled to the same or greater protection than that afforded to other internal deliberative information? For how long should such material be protected?

Cabinet discussions are part of government decision making and as such should be covered by FOI.

Question 3: What protection should there be for information which involves candid assessment of risks? For how long does such information remain sensitive?

Being open allows the public to buy in and feel more ownership of major infrastructure projects, which can only be a good thing.

Question 4: Should the executive have a veto (subject to judicial review) over the release of information? If so, how should this operate and what safeguards are required? If not, what implications does this have for the rest of the Act, and how could government protect sensitive information from disclosure instead?

The NHS example is salutary. If government is serious about reform it needs to be serious about transparency, given decisions affect our health and lives.

Question 5: What is the appropriate enforcement and appeal system for freedom of information requests?

Of course, decisions need to be defensible.

Question 6: Is the burden imposed on public authorities under the Act justified by the public interest in the public’s right to know? Or are controls needed to reduce the burden of FoI on public authorities? If controls are justified, should these be targeted at the kinds of requests which impose a disproportionate burden on public authorities? Which kinds of requests do impose a disproportionate burden?

FOI should not only be a tool for people with money. It should be free at the point of use, even if it does involve overheads.

Categories: Uncategorized.

OOW2015: Oracle in the post big outsourcing era – the DVLA story

I wrote a post recently about the huge changes facing the traditional outsourcing industry, driven by the need of enterprise customers to make digital transformations, increase product delivery velocity and improve customer experiences. What I didn’t expect however was that at Oracle Open World 2015 I would meet a customer doing just that, and sticking with Oracle as a cloud supplier.

The Driver and Vehicle Licensing Agency (DVLA) in the UK is the very model of a traditional IT shop, or at least it was until the transformation currently being driven by the Cabinet Office and its IT change organisation, the Government Digital Service (GDS).

I have spent a fair bit of time with GDS since its inception, partly because many of its people are friends of mine. It’s an organisation built from the ground up on the principles of the modern web – user research, agile development, open source and cloud. GDS is there to prove to hidebound civil departments that there is a better way of doing things, not by talking but by making things that look good and work well for the citizen. Of course motherhood and apple crumble always has its detractors – and there are those that claim GDS has only built pretty new front ends, without getting a handle on the processes and core transactional systems that constitute government IT. But then, antibodies embedded within billion dollar contracts would say that. You have to start somewhere, and the GDS mission is nothing less than the biggest IT transformation in any industry in any country in the world; it was always going to take time. The GDS transformation is now being distributed out to the Departments – The Ministry of Justice, Her Majesty’s Revenue and Customs and so on.

When Liam Maxwell started driving this change program, with direct top down support from Francis Maude, the traditional IT suppliers were directly in the firing line. It wasn’t just outsourcing that was under attack – Oracle was a particular bugbear, being an aggressive sales-driven company demanding considerable license fees for platforms that weren’t open source, such as the core Oracle database and WebLogic Application Server. It was hard to see how Oracle fitted into the new world. GDS made a public point of cancelling trips to Redwood Shores and visiting Mongo in San Francisco instead.

So when I met with DVLA in San Francisco at Oracle Open World 2015 I wasn’t sure what to expect – the old world or the new. It turns out I got both. Ian Patterson, who was seconded from GDS to run IT at the organisation, is definitely not afraid to take tough decisions, but he is also a pragmatist.

The DVLA’s IT outsourcing contract was a typical hideous outsourcing contract – the kind of thing we used to write about when I was a journalist at Computing in the mid to late 1990s – involving both Fujitsu and IBM, which had taken on responsibility for IT systems and the staff that managed them.

Patterson immediately identified the contract itself as the biggest problem in moving forward as an organisation.

“You’re going to spend £240m standing still, with £80m go to market for a new contract, in order to transform. I said why not transform now, to align?  Ditch the commercial constraints and create some internal capability.”

But the received wisdom is that cancelling such a contract would introduce too much risk. Patterson had to bring the new Permanent Secretary (the Civil Service term for the individual that runs a huge government department) along, and generally Permanent Secretaries are fairly risk averse.

The first task was to reskin the electronic vehicle licensing system, with the Systems Integrator claiming it would take a year and a million pounds. The new team rebuilt the citizen-facing parts of the system themselves in seven weeks.

Patterson agreed that Financial and Procurement are not the right people to make choices about technical architecture and approach, which seems obvious, except it is how government IT has worked for the last 30 years.

“Why would you outsource skills and capability?”

I asked Patterson how the DVLA was moving forward given it had the contract in place. The answer was surprisingly straightforward – cancel the contract. After 22 years the DVLA once again took responsibility for running its own IT systems, in the process insourcing the staff that had previously been transferred to the supplier. 350 people came back under full TUPE (Transfer of Undertakings (Protection of Employment), which preserves contractual employment terms) in order to run the mainframe and associated systems.

“Now we have full sight of the supply chain. We have full understanding of cost, and we know what’s in the black box of transactions and services”.

Then the GDS playbook kicked in. DVLA formed a partnership with startup incubator and coworking business TechHub and with local Universities. This allowed them to provide funding for 20 of their existing staff to reskill by undertaking Foundation Degrees in Computer Science whilst also recruiting talent from the local market.

“We picked people with the right attitude. The madness is we [usually] had people with tech background and education that took better paid jobs as managers, so let’s employ developers to work on our systems. Let’s create environments in which developers can thrive.

You can’t put digital in the corner. If you have separate corners the old keeps trying to stop the new. We have structures so people from the factory floor have a voice to the senior management team. We do a lot of internal communications. I protect people from the battles. That’s my job. I reengineered the team, but not all at once. I put a new senior team together from people inside the organisation.”

So what about Oracle?

“If you look at the licensing model, it was behind Systems Integrators. Now we can talk directly to people that build the software. You need a technological and strategic view. I don’t want to move to Oracle Cloud because it’s the only choice, but because it’s the choice we make. I want other cloud providers involved. I don’t want lock in if we move something to Oracle Cloud. If I want to put MongoDB in the Oracle Cloud, I want the flexibility to do that. The more that Oracle recognises the better it will be.”

Oracle currently suffers from a negative perception for its aggressive approach to customers in licensing and relicensing deals, particularly in virtual and cloud environments – so how does the DVLA deal with that?

“At the moment we have View Driver Record sitting in Skyscape (cloud platform) talking to our Oracle database. If you put that in cloud, you’d have to license every machine, so we got a license that allowed us to ping things around. I told Oracle if you’re willing to do that, I’ll have a lot of Oracle product. I said I will look at you in line with all the others, rather than just replacing you.”

Boom! It turns out the guy running IT at the DVLA sounds a lot like RedMonk.

“Software companies that are big are going to shrink or they’re going to change.”

But, Patterson added:

“Larry Ellison’s speech yesterday said all the right things to me.”

The bottom line is that in 2015 Patterson sees Oracle as a less risky bet than other cloud providers. Surprising perhaps, but he is a hard nosed type, and I have to say that although I have only met him once I trust him to spend the money I pay in taxes wisely, and make the right licensing and contractual decisions.

If Oracle can be flexible, and keep people like Patterson on side, it will be in a good position to make the transition to being a successful cloud provider. Plenty of people moan about Oracle’s approach to licensing discussions, probably justifiably, but let’s not forget who the customer is – as sadly we seem to have done for too long. Outsourcing and vendor mega contracts can become too big to fail, but that’s actually not the supplier’s fault.

As the DVLA story makes clear contracts can be cancelled. Lock in is a two way street. Customers choose suppliers.

I should declare that Oracle paid my travel and expenses to OOW15, and set up the interview with Patterson, but I still felt he was playing a very straight bat with me, and that he’s exactly what Oracle needs right now – to face really tough, IT savvy negotiators.

Categories: cloud.

ThingMonk 2015: Hacking Industrial, Scaling Processes, Digital Paper, Data, Design and even Legal Stuff


So Thingmonk is fast approaching and it’s shaping up to be all kinds of awesome. Our key themes are data, process, and scaling the revolution that developers started. When we began the Thingmonk event series we felt we needed to make a statement about playfulness and getting stuff done. We achieved that with drones that took off when we poured a cup of coffee, and Adam Gunther wearing a Hasselhoff leather jacket with programmable lights.

We avoided security as a topic, for example, because we wanted to encourage the building of services and proofs of concept. I have always felt that IoT was just M2M without the fear. But software developers do of course know how to scale things, and that’s what’s happening now. We’re seeing the emergence of powerful new back end cloud platforms – streams, lakes, events and various storage engines – packaging open source tools like Kafka and Cassandra in order to enable new use cases. Dave McCrory, Basho CTO, will be giving us the low down on design decisions for time series. Because of course the IoT is all about time series. Imagine a world where every time an engineer tightens a screw Bosch stores it in MongoDB and correlates it with CADCAM models of the machine part.
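None of the platforms mentioned above publish their exact pipelines, but the shape of the data is easy to sketch: a timestamped sensor reading pushed onto a stream for downstream storage and correlation. A minimal illustration using the kafka-python client – the broker address, topic name and event fields are all assumptions:

import json
import time
from kafka import KafkaProducer  # pip install kafka-python

# Publish one hypothetical "screw tightened" event as a time series record.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
event = {
    "device_id": "torque-driver-42",
    "part_id": "bracket-7781",
    "torque_nm": 8.4,
    "timestamp": time.time(),  # the time dimension that makes this a series
}
producer.send("factory.fastener-events", value=event)
producer.flush()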

Companies like Bosch and GE are now using web technology to power the industrial internet, and I wanted to capture some of that energy. How do you scale an IoT service anyway? With that in mind, we’re proud to have a great speaker from GE itself – in the shape of Jeremiah Stone. If any one company is going to make a huge impact on the industrial world with IoT it’s GE.

For a hacker planning to build something epic at scale, look no further than Matt Biddulph; we’re flying him over from San Francisco for the event. Matt, who coined the term Silicon Roundabout for the Shoreditch cluster before it was a thing, flipped Dopplr to Nokia and is now building Thington, a conversational IoT platform. If you’re interested in what conversational IoT is about (machines talking to machines and people) come along. It’s a coup to have Matt, and it feels very right having had other Silicon Roundabout luminaries/early founders talk at the event in previous years – Alexandra Deschamps-Sonsino (Good Night Lamp, IoT London), Tom Taylor (Newspaper Club), Matt Webb (Berg).

If any company represents the success of Shoreditch as a thing it’s Moo – cashflow, revenues, design, reputation, crushing it. Having reinvented the business card once they’re now doing it again, by printing NFC into it with Business Cards+. That’s right – you can now get business cards that ping. We have Kai Turner and Nick Ludlam of Moo talking about the practical implications of trying to build an entirely new industrial process at scale from a design and process perspective. At the event you will “pay for” coffees and adult beverages with Thingmonk credits stored on Moo cards. We’ll also kick off a hack competition to give you something to do over Christmas, with winners announced at sister event Monki Gras in late January 2016. There will be NFC payment systems printed on paper!

And Mark Shuttleworth – yes that Mark Shuttleworth – explaining how Ubuntu will play in the IoT world. And did I mention we have Mark Shuttleworth talking at the event. Claire Rowland is coming back to talk design. Tamara Giltsoff is also coming back to update us on building out small scale distributed energy, with monitoring, in Africa. And Boris Adryan – we do like a good PhD level curmudgeon on board. Amanda Brock meanwhile will be giving us the low down on the liability implications of a world where machines routinely make decisions in life critical contexts – which could be the most fascinating talk at the event.

I never intended this post to be just a list of speakers though – for that please head over to the Thingmonk site, take a look, then buy yourself a ticket.

We will also be hosting a London Pancake Breakfast with our friends at The New Stack. Hopefully with some gluten-free options. Our coffee will be epic as ever, this year provided by the wonderful Brunswick East.

Last year was super fun, but we’ll try not to have a power cut this time – fun though hacking by candle and LED light is.

Categories: Uncategorized.

Cloud Native is Nice and All, but How Do We Get There?

And you may ask yourself
What is that beautiful house?
And you may ask yourself
Where does that highway go to?
And you may ask yourself
Am I right?…Am I wrong?
And you may say to yourself
My God!…What have I done?!
– Once in a Lifetime, Talking Heads 1980

We spend a lot of time in this industry chasing the bright and shiny, hoping for a silver bullet we can buy to transform ourselves into digital paragons. But the truth is, IT is hard. Change is Hard. Change is about people. One of the latest buzzwords doing the rounds is Cloud Native. Some smart people have taken a stab at defining it.

“There is a rough consensus on many Cloud Native traits. Containers as an atomic unit, for example. Micro-services as the means of both construction and communication. Platform independence. Multiple language support. Automation as a feature of everything from build to deployment. High uptime. Ephemeral infrastructure (cattle not pets). And so on.

Bigger picture, the pattern Cloud Native platforms have in common is that they are a further abstraction. Individual compute instances are subsumed into a larger, virtual whole, what has been referred to here as a fabric.”

Or

“Cloud Native is about unifying a new generation of tools under a single brand.

The industry will undergo accelerated change if end users and vendors can rally around a simple concept. In the 1990s that concept was “The Web” – remember when every business realised it needed a web site? More recently, “The Cloud” and “Big Data” have played a similar role for changes in on demand compute and data analysis respectively. And so Cloud Native is a way to describe a revolution in which businesses make applications central.”

But as per the Talking Heads lyric above, oftentimes it’s the journey rather than the destination that is so important. Just how do we get there, or how will we get there? After all, as I have said, change is hard – hard enough to threaten more than one multibillion dollar industry.

The Cloud Foundry community was the first to really aggressively adopt Cloud Native as a rallying cry, so it shouldn’t surprise that there is some maturity of thinking emerging from the community. And with maturity, it seems, come maturity models. This morning James Watters of Pivotal shared this chart.

I really like that a financial services company is not asking what they can buy to make everything OK, but rather what journey they need to undertake in order to make a digital transformation. Interestingly the model maps pretty well to our current move up the stack from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS).

Terms like Design For Failure are pretty scary for traditional IT shops, but without designing for failure you set yourself up to fail. Distributed systems break, so deal with it. Web companies expect things to break, and design accordingly. Netflix for example has open sourced a toolset called Chaos Monkey that breaks things on purpose, forcing you to design for failure. Cloud Native is about learning from the web, though. This is the kind of somewhat counterintuitive thinking we need to embrace on the road to Cloud Native.
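For flavour, here is a toy take on the Chaos Monkey idea – randomly terminate a healthy instance so the rest of the system has to prove it copes. This is a sketch of the principle, not Netflix’s actual tool:

import random

class Instance:
    def __init__(self, name):
        self.name = name
        self.alive = True

    def terminate(self):
        self.alive = False
        print(f"Chaos: terminated {self.name}")

def unleash_chaos(instances, probability=0.2):
    """With the given probability, terminate one random healthy instance."""
    healthy = [i for i in instances if i.alive]
    if healthy and random.random() < probability:
        random.choice(healthy).terminate()

fleet = [Instance(f"web-{n}") for n in range(4)]
for _ in range(10):  # e.g. run once per scheduling tick
    unleash_chaos(fleet)
print([i.name for i in fleet if i.alive])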

I’d be interested to know what you think of the model above. Feedback would be very welcome.

Categories: Cloud Native.

On Cloud Foundry, Pivotal, Bluemix and the Open Source funnel

Yesterday I had an interesting moment where the available data was completely the opposite of what I expected. I wanted to test my assumption that leading with the name of an open source project would capture more interest than companies’ distributions of that project. My thesis was that Cloud Foundry, for example, would be a term people searched for more than IBM Bluemix. I was wildly wrong.


Helpfully Google Trends allowed me to disambiguate “Pivotal” to some extent. The graph is interesting because the recent marketing efforts of Pivotal and notably IBM are paying dividends for both parties. Cloud Foundry shows a gentle growth curve, with Pivotal trending up nicely. The Bluemix graph however is much steeper. IBM has been pouring engineering and marketing resources into Bluemix since January 2014 – it is a Ginni Rometty level strategic initiative and it seems to be working, if search volume is a leading indicator of potential adoption.
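For anyone who wants to reproduce the comparison programmatically rather than in the Google Trends UI, here is a rough sketch using the unofficial pytrends library – the library choice, timeframe and term list are my assumptions, not how the original charts were produced:

from pytrends.request import TrendReq  # pip install pytrends

# Pull five years of relative search interest for the three terms.
pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(
    kw_list=["Cloud Foundry", "IBM Bluemix", "Pivotal"],
    timeframe="today 5-y",
)
interest = pytrends.interest_over_time()  # pandas DataFrame, one column per term
print(interest.tail())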

IBM’s financial results were disappointing again this week, but in terms of leading indicators Bluemix is definitely moving in the right direction, with interest growing sharply. I know Pivotal is closing some significant deals right now, though I have less insight into Bluemix’s sales motion. But in search volume it appears that Bluemix is set to overtake Pivotal in short order.

It’s beginning to look a bit like the 1990s app server market, with Pivotal playing the role of BEA, as two major players carve out leading positions in an emerging market, with Cloud Foundry playing the Apache web server role. The best packagers in tech waves win big, and the PaaS market is now apparently a thing.

While we’re at it, let’s check out another project. It turns out searches for Hadoop do indeed dwarf those of the distribution companies, which represents the pattern I expected to see with Cloud Foundry.

IBM, Pivotal and Cloudera are all clients.

Obviously basing analysis on Google Trends is subject to all kinds of caveats. But search volume does tell us something about intentions and opportunities.

Categories: Uncategorized.

The Tide Turns on Big Outsourcing – on cloud, agile, and rebuilding skills

Coming away from AWS re:Invent 2015 it was pretty clear that Big Change is in the air. Rumours of the Dell EMC deal were percolating, and Amazon was touting customers going “All In” on cloud. GE is of course the very model of a leader in the corporate world, so when it sneezes others catch a cold.

Paul Downey, in his role at the UK Government Digital Service, which was set up as a pilot ship to help the departmental supertankers with digital transformations, has plenty of experience of multi-billion pound outsourcing contracts. He recently wrote about an internal client concerned about the risk of new (and more effective) ways of working.

“The problem with agile is scope creep”, he said

“We run a tight ship here. We’re on a budget, spending public money and can’t afford for things to slip. We need a detailed plan and a fixed contract to hold our suppliers’ feet to the fire when they deviate from it. We have to deliver all of the features, and on-time!”

I’m used to people challenging agile as if it’s unconventional. As if it’s a new, untried, untested thing. The present is here, it’s just not evenly distributed.

But somehow on this day I wasn’t prepared for this challenge. I was flummoxed.

Maybe he did run a tight ship and just hadn’t seen the horrors I’d seen: the service delivered feature complete even though most of the features weren’t needed, the service so complicated it was unusable, feature complete lest the supplier invoke penalty clauses. The system which cost too much to change, which needlessly instructed people to post their passports to an office, needing operational staff to post them back unopened. Systems procured with a fixed 10 year contract, and which were already obsolete before they were completed.

I have seen very similar arguments from people I consider to be shysters. The idea that massive outsourcing contracts don’t suffer from scope creep and related, massive cost overruns, would be funny were it not for the fact that in the public sector at least, it’s our money, paid in taxes, being wasted. The UK government has wasted tens of billions of pounds on failed IT projects over the last 20 years or so, and one huge step forward under the last coalition government was a more sensible approach to citizen service provision.

Statements like this are frankly just as striking as the GE commitment:

“These [huge] contracts were meant to be about lowering risk. But when you package up a huge range of functions into one contract the risk becomes impossible to manage”

– Tariq Rashid, then Lead Architect, UK Home Office

Packaging of risk into supposedly AAA stuff, managed by huge suppliers, is why I often refer to Big Outsourcing contracts as Collateralised Debt Obligations. All the technical debt is wound up and disguised, and becomes impossible to unravel. Then the project is Too Big To Fail – and the enterprise customer is left on the hook, throwing good money after bad. Traditional outsourcing was designed explicitly for environments that would not change. Any changes of scope are massively expensive. It is my belief that those who accuse agile of “not working” because of “scope creep” are guilty of a bad case of projection; they certainly fail to understand how agile does a far better job of meeting changing user needs. In the deck below I talk about how structural changes in tech are now being reflected in government strategy.


Big traditional outsourcing is no longer fit for purpose. It came from the era of IT Doesn’t Matter. It doesn’t map to any of the trends currently driving us forward as an industry – Agile, Design Thinking, Digital Disruption, The Data Economy, The API economy, DevOps, Minimum Viable Product, Continuous Deployment, getting closer to the user, proliferation of infrastructure choices, falling infrastructure costs, open source, the enterprise embrace of technology built by Web companies and so on. Take CapitalOne for example – which also featured at Amazon’s re:Invent 2015 – it acquired AdaptivePath, a UX agency, because it knew customer experience was becoming critical in banking as competition grew. Enterprises now routinely compete with extremely well funded startups investing heavily in software engineering talent.

Traditional outsourcing on the other hand generally pours concrete onto applications and underlying infrastructure, expecting that companies won’t need to change in the 5-10 year horizon.

I spoke with Jay Pullur from Pramati, an Indian entrepreneur, recently about the thinking behind his acquisition of WaveMaker from VMware, and he said that one reason a Java/RAD platform was interesting was that in his home market Indian outsourcers are only too aware they need to up their game. 80% growth rates have shrunk to around 10%. Wipro reported today that Q2 dollar revenue was up 2%, and EBITDA margin may fall.

The Wall Street Journal has written about this trend – India’s Outsourcing Firms Change Direction as ‘Cloud’ Moves In.

“AstraZeneca PLC is sharply scaling back the business it gives to the Indian outsourcing companies that it has long relied on for tech help.”

David Smoley, AstraZeneca’s technology chief, said he expects to halve the $750m the company spends annually on outsourcing over the next two years.

RedMonk has been tracking the ongoing cratering of traditional software licensing fees for some time now, but it’s going to get worse for vendors as companies start to unravel their huge outsourcing contracts and bring things back in house. Most traditional outsourcing contracts include significant license and maintenance fees for traditional on-prem software, which also limits operating flexibility. Then of course there is a falling off in services associated with the packaged application market – according to Forbes, reporting on IBM’s financial results this week, Global Business Services (GBS) “reported a 13% year-on-year decline in revenue to $4.3 billion (down 4% in constant currency), primarily due to declines in traditional packaged application implementation“. [italics mine]

Outsourcing won’t go away of course, but it will need to change dramatically. Smart people say we’ve just been doing it wrong, but it is going to be a wrenching transition for an industry that looks increasingly out of step with emerging operating models. At re:Invent Accenture announced a practice taking enterprises to the AWS Cloud, but I will reserve judgement until I see some customer results. Effective services companies for cloud-enabled digital transformation are going to need a completely different set of skills – CMMI Level 5, ITIL and J2EE aren’t going to cut it. They will need to be design led, cloud native, and help customers to rediscover competencies they have lost over the last couple of decades.

I would look to companies like FutureGov as a model for more effective public-private sector partnership. Or see ThoughtWorks, a company built to help enterprises catch up with the latest in application development thinking and methodology. My friends at YLD plan to move up the stack: having begun life as a Node.js consulting shop they are now aiming at broader Digital Transformation – there is some outsourcing and offshoring in the model; founder Nuno Job now has developers in London and Lisbon working on customer problems. Business should be working with people like Jeff Sussna, who will help them understand that we should build systems expecting them to break. One company well set up for the change is Pivotal – beyond its Cloud Foundry product business it is one of the most successful agile software development practices out there. IBM is now looking to emulate its model.

So cloud is hurting Big Outsourcing because it is underpinning new operating models for businesses. Cloud is of course itself a form of outsourcing, but one that allows for speed of delivery, and encourages reshoring of skills. But people and process changes are needed to do the work. Just because customers have deployment and platform options like Azure, AWS, Google or Softlayer doesn’t mean they are set up to get the best out of them.

update – check out Toll, cancelling a “strategic” outsourcing deal.

Categories: Uncategorized.