
My Submission to call for evidence by Commission on Freedom of Information

Given the subject is Freedom of Information, it seems appropriate to publish my response to the current government consultation on FOI here. I also strongly recommend you take part: FOI is an essential thread in the democratic fabric. 38 Degrees has made it really easy to respond here.


Why do you think Freedom of Information should be protected?

FOI leads to better governance and better outcomes for citizens and businesses.

How do you think government transparency could be improved?

FOI should cover all companies providing government services

Question 1: What protection should there be for information relating to the internal deliberations of public bodies? For how long after a decision does such information remain sensitive? Should different protections apply to different kinds of information that are currently protected by sections 35 and 36? (Note: ‘Sections 35 and 36’ of the Act cover policy formulation, communications between ministers, and information that would affect the free and frank giving of advice or expression of views.)

As the software industry has shown, open source is a more effective management and production mechanism. We make better management and technical decisions in the open.

Question 2: What protection should there be for information which relates to the process of collective Cabinet discussion and agreement? Is this information entitled to the same or greater protection than that afforded to other internal deliberative information? For how long should such material be protected?

Cabinet discussions are part of government decision making and as such should be covered by FOI.

Question 3: What protection should there be for information which involves candid assessment of risks? For how long does such information remain sensitive?

Being open allows the public to buy in and feel more ownership of major infrastructure projects, which can only be a good thing.

Question 4: Should the executive have a veto (subject to judicial review) over the release of information? If so, how should this operate and what safeguards are required? If not, what implications does this have for the rest of the Act, and how could government protect sensitive information from disclosure instead?

The NHS example is salutary. If government is serious about reform it needs to be serious about transparency, given decisions affect our health and lives.

Question 5: What is the appropriate enforcement and appeal system for freedom of information requests?

Of course decisions need to be defensible.

Question 6: Is the burden imposed on public authorities under the Act justified by the public interest in the public’s right to know? Or are controls needed to reduce the burden of FoI on public authorities? If controls are justified, should these be targeted at the kinds of requests which impose a disproportionate burden on public authorities? Which kinds of requests do impose a disproportionate burden?

FOI should not only be a tool for people with money. It should be free at the point of use, even if it does involve overheads.

Categories: Uncategorized.

OOW2015: Oracle in the post big outsourcing era – the DVLA story

I wrote a post recently about the huge changes facing the traditional outsourcing industry, driven by the need of enterprise customers to make digital transformations, increase product delivery velocity and improve customer experiences. What I didn’t expect however was that at Oracle Open World 2015 I would meet a customer doing just that, and sticking with Oracle as a cloud supplier.

The Driver and Vehicle Licensing Agency (DVLA) in the UK is the very model of a traditional IT shop, or at least it was until the transformation currently being driven by the Cabinet Office and its IT change organisation the Government Digital Service (GDS).

I have spent a fair bit of time with GDS since its inception, partly because many of its people are friends of mine. It’s an organisation built from the ground up on the principles of the modern web – user research, agile development, open source and cloud. GDS is there to prove to hidebound civil service departments that there is a better way of doing things, not by talking but by making things that look good and work well for the citizen. Of course motherhood and apple crumble always has its detractors – and there are those who claim GDS has only built pretty new front ends, without getting a handle on the processes and core transactional systems that constitute government IT. But then, antibodies embedded within billion dollar contracts would say that. You have to start somewhere, and the GDS mission is nothing less than the biggest IT transformation in any industry in any country in the world; it was always going to take time. The GDS transformation is now being distributed out to the departments – the Ministry of Justice, Her Majesty’s Revenue and Customs and so on.

When Liam Maxwell started driving this change programme, with direct top-down support from Francis Maude, the traditional IT suppliers were directly in the firing line. It wasn’t just outsourcing that was under attack – Oracle was a particular bugbear, being an aggressive sales-driven company demanding considerable license fees for platforms that weren’t open source, such as the core Oracle database and the WebLogic application server. It was hard to see how Oracle fitted into the new world. GDS made a public point of cancelling trips to Redwood Shores and visiting Mongo in San Francisco instead.

So when I met with DVLA in San Francisco at Oracle Open World 2015 I wasn’t sure what to expect – the old world or the new. It turns out I got both. Ian Patterson, who was seconded from GDS to run IT at the organisation, is definitely not afraid to take tough decisions, but he is also a pragmatist.

The DVLA’s IT outsourcing contract was a typical hideous outsourcing contract – the kind of thing we used to write about when I was a journalist at Computing in the mid to late 1990s – involving both Fujitsu and IBM, which had taken on responsibility for IT systems and the staff that managed them.

Patterson immediately identified the contract itself as the biggest problem in moving forward as an organisation.

“You’re going to spend £240m standing still, with £80m go to market for a new contract, in order to transform. I said why not transform now, to align?  Ditch the commercial constraints and create some internal capability.”

But the received wisdom is that cancelling such a contract would introduce too much risk. Patterson had to bring the new Permanent Secretary (the Civil Service term for the individual who runs a huge government department) along, and generally Permanent Secretaries are fairly risk averse.

The first task was to reskin the electronic vehicle licensing system, with the Systems Integrator claiming it would take a year and a million pounds. The new team rebuilt the citizen-facing parts of the system in-house in seven weeks.

Patterson agreed that Finance and Procurement are not the right people to make choices about technical architecture and approach, which seems obvious, except that it is how government IT has worked for the last 30 years.

“Why would you outsource skills and capability?”

I asked Patterson how the DVLA was moving forward given it had the contract in place. The answer was surprisingly straightforward – cancel the contract. After 22 years the DVLA once again took responsibility for running its own IT systems, in the process insourcing the staff that had previously been transferred to the supplier. 350 people came back with full TUPE (Transfer of Undertakings (Protection of Employment)) rights in order to run the mainframe and associated systems.

“Now we have full sight of the supply chain. We have full understanding of cost, and we know what’s in the black box of transactions and services”.

Then the GDS playbook kicked in. DVLA formed a partnership with startup incubator and coworking business TechHub and with local Universities. This allowed them to provide funding for 20 of their existing staff to reskill by undertaking Foundation Degrees in Computer Science whilst also recruiting talent from the local market.

“We picked people with the right attitude. The madness is we [usually] had people with tech background and education that took better paid jobs as managers, so let’s employ developers to work on our systems. Let’s create environments in which developers can thrive.

You can’t put digital in the corner. If you have separate corners the old keeps trying to stop the new. We have structures so people from the factory floor have a voice to the senior management team. We do a lot of internal communications. I protect people from the battles. That’s my job. I reengineered the team, but not all at once. I put a new senior team together from people inside the organisation.”

So what about Oracle?

“If you look at the licensing model, it was behind Systems Integrators. Now we can talk directly to people that build the software. You need a technological and strategic view. I don’t want to move to Oracle Cloud because it’s the only choice, but because it’s the choice we make. I want other cloud providers involved. I don’t want lock-in if we move something to Oracle Cloud. If I want to put MongoDB in the Oracle Cloud, I want the flexibility to do that. The more Oracle recognises that, the better it will be.”

Oracle currently suffers from a negative perception of its aggressive approach to customers in licensing and relicensing deals, particularly in virtual and cloud environments – so how does the DVLA deal with that?

“At the moment we have View Driver Record sitting in Skyscape (cloud platform) talking to our Oracle database. If you put that in cloud, you’d have to license every machine, so we got a license that allowed us to ping things around. I told Oracle if you’re willing to do that, I’ll have a lot of Oracle product. I said I will look at you in line with all the others, rather than just replacing you.”

Boom! It turns out the guy running IT at the DVLA sounds a lot like RedMonk.

“Software companies that are big are going to shrink or they’re going to change.”

But, Patterson added:

“Larry Ellison’s speech yesterday said all the right things to me.”

The bottom line is that in 2015 Patterson sees Oracle as a less risky bet than other cloud providers. Surprising perhaps, but he is a hard-nosed type, and I have to say that although I have only met him once I trust him to spend the money I pay in taxes wisely, and to make the right licensing and contractual decisions.

If Oracle can be flexible, and keep people like Patterson on side, it will be in a good position to make the transition to being a successful cloud provider. Plenty of people moan about Oracle’s approach to licensing discussions, probably justifiably, but let’s not forget who the customer is – as sadly we seem to have done for too long. Outsourcing and vendor mega-contracts can become too big to fail, but that’s actually not the supplier’s fault.

As the DVLA story makes clear, contracts can be cancelled. Lock-in is a two-way street. Customers choose suppliers.

I should declare that Oracle paid my travel and expenses to OOW15, and set up the interview with Patterson, but I still felt he was playing a very straight bat with me, and that he’s exactly what Oracle needs right now – to face really tough, IT-savvy negotiators.

Categories: cloud.

ThingMonk 2015: Hacking Industrial, Scaling Processes, Digital Paper, Data, Design and even Legal Stuff


So Thingmonk is fast approaching and it’s shaping up to be all kinds of awesome. Our key themes are data, process, and scaling the revolution that developers started. When we began the Thingmonk event series we felt we needed to make a statement about playfulness and getting stuff done. We achieved that with drones that took off when we poured a cup of coffee, and Adam Gunther wearing a Hasselhoff leather jacket with programmable lights.

We avoided security as a topic, for example, because we wanted to encourage the building of services and proofs of concept. I have always felt that IoT was just M2M without the fear. But software developers do of course know how to scale things, and that’s what’s happening now. We’re seeing the emergence of powerful new back-end cloud platforms – streams, lakes, events and various storage engines – packaging open source tools like Kafka and Cassandra in order to enable new use cases. Dave McCrory, Basho CTO, will be giving us the low down on design decisions for time series. Because of course the IoT is all about time series. Imagine a world where every time an engineer tightens a screw Bosch stores it in MongoDB and correlates it with CAD/CAM models of the machine part.
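As a toy illustration of the kind of time-series design decision mentioned above – stores built for sensor data commonly group readings into fixed time windows so that writes append to a per-window row – here is a hedged Python sketch. The function name and the 60-second window are my own illustrative assumptions, not anyone’s actual schema:

```python
from collections import defaultdict

def bucket_readings(readings, window=60):
    """Group (timestamp, value) sensor readings into fixed-size time windows.

    Each reading lands in the bucket keyed by the start of its window,
    so a query for "the minute starting at t=120" is a single lookup
    rather than a scan -- the core trade-off time-series stores make.
    """
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts - (ts % window)].append(value)
    return dict(buckets)

# Readings at t=0s, 59s, 60s and 125s fall into three one-minute buckets.
print(bucket_readings([(0, 1.0), (59, 2.0), (60, 3.0), (125, 4.0)]))
```

The same idea scales out in a distributed store by making the window start part of the partition key, which is roughly the design conversation the talk promises.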

Companies like Bosch and GE are now using web technology to power the industrial internet, and I wanted to capture some of that energy. How do you scale an IoT service anyway? With that in mind, we’re proud to have a great speaker from GE itself – in the shape of Jeremiah Stone. If any one company is going to make a huge impact on the industrial world with IoT it’s GE.

For a hacker planning to build something epic at scale, look no further than Matt Biddulph; we’re flying him over from San Francisco for the event. Biddulph, who coined the term Silicon Roundabout for the Shoreditch cluster before it was a thing, flipped Dopplr to Nokia and is now building Thington, a conversational IoT platform. If you’re interested in what conversational IoT is about (machines talking to machines and people), come along. It’s a coup to have Matt, and it feels very right having had other Silicon Roundabout luminaries/early founders talk at the event in previous years – Alexandra Deschamps-Sonsino (Good Night Lamp, IoT London), Tom Taylor (Newspaper Club), Matt Webb (Berg).

If any company represents the success of Shoreditch as a thing it’s Moo – cashflow, revenues, design, reputation, crushing it. Having reinvented the business card once, they’re now doing it again, by printing NFC into them with Business Cards+. That’s right – you can now have business cards that ping. We have Kai Turner and Nick Ludlam of Moo talking about the practical implications of trying to build an entirely new industrial process at scale from a design and process perspective. At the event you will “pay for” coffees and adult beverages with Thingmonk credits stored on Moo cards. We’ll also kick off a hack competition to give you something to do over Christmas, with winners announced at sister event Monki Gras in late January 2016. There will be NFC payment systems printed on paper!

And Mark Shuttleworth – yes that Mark Shuttleworth – explaining how Ubuntu will play in the IoT world. And did I mention we have Mark Shuttleworth talking at the event. Claire Rowland is coming back to talk design. Tamara Giltsoff is also coming back to update us on building out small scale distributed energy, with monitoring, in Africa. And Boris Adryan – we do like a good PhD level curmudgeon on board. Amanda Brock meanwhile will be giving us the low down on the liability implications of a world where machines routinely make decisions in life critical contexts – which could be the most fascinating talk at the event.

I never intended this post to be just a list of speakers though – for that please head over to the Thingmonk site, take a look, then buy yourself a ticket.

We will also be hosting a London Pancake Breakfast with our friends at The New Stack. Hopefully with some gluten-free options. Our coffee will be epic as ever, this year provided by the wonderful Brunswick East.

Last year was super fun, but we’ll try not to have a power cut this time – fun though hacking by candle and LED light is.

Categories: Uncategorized.

Cloud Native is Nice and All, but How Do We Get There?

And you may ask yourself
What is that beautiful house?
And you may ask yourself
Where does that highway go to?
And you may ask yourself
Am I right?…Am I wrong?
And you may say to yourself
My God!…What have I done?!
– Once in a Lifetime, Talking Heads 1980

We spend a lot of time in this industry chasing the bright and shiny, hoping for a silver bullet we can buy to transform ourselves into digital paragons. But the truth is, IT is hard. Change is hard. Change is about people. One of the latest buzzwords doing the rounds is Cloud Native. Some smart people have taken a stab at defining it.

“There is a rough consensus on many Cloud Native traits. Containers as an atomic unit, for example. Micro-services as the means of both construction and communication. Platform independence. Multiple language support. Automation as a feature of everything from build to deployment. High uptime. Ephemeral infrastructure (cattle not pets). And so on.

Bigger picture, the pattern Cloud Native platforms have in common is that they are a further abstraction. Individual compute instances are subsumed into a larger, virtual whole, what has been referred to here as a fabric.”


“Cloud Native is about unifying a new generation of tools under a single brand.

The industry will undergo accelerated change if end users and vendors can rally around a simple concept. In the 1990s that concept was “The Web” – remember when every business realised it needed a web site? More recently, “The Cloud” and “Big Data” have played a similar role for changes in on demand compute and data analysis respectively. And so Cloud Native is a way to describe a revolution in which businesses make applications central.”

But as per the Talking Heads lyric above, oftentimes it’s the journey rather than the destination that matters. Just how do we get there, or how will we get there? After all, as I have said, change is hard – hard enough to threaten more than one multibillion-dollar industry.

The Cloud Foundry community was the first to really aggressively adopt Cloud Native as a rallying cry, so it shouldn’t surprise us that some maturity in thinking is emerging from the community. And with maturity, it seems, comes maturity models. This morning James Watters of Pivotal shared this chart.

I really like that a financial services company is not asking what it can buy to make everything OK, but rather what journey it needs to undertake in order to make a digital transformation. Interestingly the model maps pretty well to our current move up the stack from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS).

Terms like Design for Failure are pretty scary for traditional IT shops, but without designing for failure you set yourself up to fail. Distributed systems break, so deal with it. Web companies expect things to break, and design accordingly. Netflix for example has open sourced a tool called Chaos Monkey that breaks things on purpose, forcing you to design for failure. Cloud Native is about learning from the web, though. This is the kind of somewhat counterintuitive thinking we need to embrace on the road to Cloud Native.
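To make the design-for-failure idea concrete, here is a toy Python sketch of chaos-style failure injection – emphatically not Netflix’s Chaos Monkey, which terminates real cloud instances, but the same principle in miniature: break things on purpose so that callers are forced to handle failure. All names here are illustrative assumptions:

```python
import random

def chaotic(func, failure_rate=0.3, rng=random.random):
    """Wrap a service call so it randomly fails -- a toy chaos injector.

    Injecting failures in testing forces client code to prove it can
    cope, instead of silently assuming the network never breaks.
    """
    def wrapper(*args, **kwargs):
        if rng() < failure_rate:
            raise ConnectionError("injected failure")
        return func(*args, **kwargs)
    return wrapper

def call_with_retry(func, attempts=3):
    """A client designed for failure: retry rather than assume success."""
    for _ in range(attempts):
        try:
            return func()
        except ConnectionError:
            continue  # expected: the dependency may be down, try again
    raise RuntimeError("service unavailable after retries")
```

A flaky dependency wrapped with `chaotic` will intermittently raise, and a naive caller crashes; `call_with_retry` survives it. That gap between the two callers is exactly what chaos engineering is meant to expose before production does.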

I’d be interested to know what you think of the model above. Feedback would be very welcome.

Categories: Cloud Native.

On Cloud Foundry, Pivotal, Bluemix and the Open Source funnel

Yesterday I had an interesting moment where the available data was completely the opposite of what I expected. I wanted to test my assumption that leading with the name of an open source project would capture more interest than companies’ distributions of that project. My thesis was that Cloud Foundry, for example, would be a term people searched for more than IBM Bluemix. I was wildly wrong.


Helpfully Google Trends allowed me to disambiguate “Pivotal” to some extent. The graph is interesting because the recent marketing efforts of Pivotal and notably IBM are paying dividends for both parties. Cloud Foundry shows a gentle growth curve, with Pivotal trending up nicely. The Bluemix graph however is much steeper. IBM has been pouring engineering and marketing resources into Bluemix since January 2014 – it is a Ginni Rometty-level strategic initiative and it seems to be working, if search volume is a leading indicator of potential adoption.

IBM’s financial results were disappointing again this week, but in terms of leading indicators Bluemix is definitely moving in the right direction, with interest growing sharply. I know Pivotal is closing some significant deals right now, though I have less insight into Bluemix’s sales motion. But in search volume it appears that Bluemix is set to overtake Pivotal in short order.

It’s beginning to look a bit like the 1990s app server market, with Pivotal playing the role of BEA, as two major players carve out leading positions in an emerging market, with Cloud Foundry playing the Apache web server role. The best packagers in tech waves win big, and the PaaS market is now apparently a thing.

While we’re at it, let’s check out another project. It turns out searches for Hadoop do indeed dwarf those of the distribution companies, which represents the pattern I expected to see with Cloud Foundry.

IBM, Pivotal and Cloudera are all clients.

Obviously basing analysis on Google Trends is subject to all kinds of caveats. But search volume does tell us something about intentions and opportunities.

Categories: Uncategorized.

The Tide Turns on Big Outsourcing – on cloud, agile, and rebuilding skills

Coming away from AWS re:Invent 2015 it was pretty clear that Big Change is in the air. Rumours of the Dell EMC deal were percolating, and Amazon was touting customers going “All In” on cloud. GE is of course the very model of a leader in the corporate world, so when it sneezes others catch a cold.

Paul Downey, in his role at the UK Government Digital Service, which was set up as a pilot ship to help the departmental supertankers with digital transformations, has plenty of experience of multi-billion pound outsourcing contracts. He recently wrote about an internal client concerned about the risk of new (and more effective) ways of working.

“The problem with agile is scope creep”, he said

“We run a tight ship here. We’re on a budget, spending public money and can’t afford for things to slip. We need a detailed plan and a fixed contract to hold our suppliers’ feet to the fire when they deviate from it. We have to deliver all of the features, and on-time!”

I’m used to people challenging agile as if it’s unconventional. As if it’s a new, untried, untested thing. The present is here, it’s just not evenly distributed.

But somehow on this day I wasn’t prepared for this challenge. I was flummoxed.

Maybe he did run a tight ship and just hadn’t seen the horrors I’d seen: the service delivered feature-complete even though most of the features weren’t needed; the service so complicated it was unusable, feature-complete lest the supplier invoke penalty clauses; the system which cost too much to change, which needlessly instructed people to post their passports to an office, needing operational staff to post them back unopened; systems procured on a fixed 10-year contract, which were already obsolete before they were completed.

I have seen very similar arguments from people I consider to be shysters. The idea that massive outsourcing contracts don’t suffer from scope creep and related, massive cost overruns, would be funny were it not for the fact that in the public sector at least, it’s our money, paid in taxes, being wasted. The UK government has wasted tens of billions of pounds on failed IT projects over the last 20 years or so, and one huge step forward under the last coalition government was a more sensible approach to citizen service provision.

Statements like this are frankly just as striking as the GE commitment:

“These [huge] contracts were meant to be about lowering risk. But when you package up a huge range of functions into one contract the risk becomes impossible to manage”

– Tariq Rashid, then Lead Architect, UK Home Office

Packaging of risk into supposedly AAA stuff, managed by huge suppliers, is why I often refer to Big Outsourcing contracts as Collateralised Debt Obligations. All the technical debt is wound up and disguised, and becomes impossible to unravel. Then the project is Too Big To Fail – and the enterprise customer is left on the hook, throwing good money after bad. Traditional outsourcing was designed explicitly for environments that would not change. Any changes of scope are massively expensive. It is my belief that those who accuse agile of “not working” because of “scope creep” are guilty of a bad case of projection; they certainly fail to understand how agile does a far better job of meeting changing user needs. In the deck below I talk about how structural changes in tech are now being reflected in government strategy.


Big traditional outsourcing is no longer fit for purpose. It came from the era of IT Doesn’t Matter. It doesn’t map to any of the trends currently driving us forward as an industry – Agile, Design Thinking, Digital Disruption, The Data Economy, The API Economy, DevOps, Minimum Viable Product, Continuous Deployment, getting closer to the user, proliferation of infrastructure choices, falling infrastructure costs, open source, the enterprise embrace of technology built by Web companies and so on. Take CapitalOne for example – which also featured at Amazon’s re:Invent 2015 – it acquired Adaptive Path, a UX agency, because it knew customer experience was becoming critical in banking as competition grows. Enterprises now routinely compete with extremely well funded startups investing heavily in software engineering talent.

Traditional outsourcing on the other hand generally pours concrete onto applications and underlying infrastructure, expecting that companies won’t need to change in the 5-10 year horizon.

I spoke recently with Jay Pullur of Pramati, an Indian entrepreneur, about the thinking behind his acquisition of WaveMaker from VMware, and he said one reason a Java/RAD platform was interesting is that in his home market Indian outsourcers are only too aware they need to up their game. 80% growth rates have shrunk to around 10%. Wipro today reported Q2 dollar revenue up 2%, and said EBITDA margin may fall.

The Wall Street Journal has written about this trend – India’s Outsourcing Firms Change Direction as ‘Cloud’ Moves In.

“AstraZeneca PLC is sharply scaling back the business it gives to the Indian outsourcing companies that it has long relied on for tech help.”

David Smoley, AstraZeneca’s technology chief, said he expects to halve the $750m the company spends annually on outsourcing over the next two years.

RedMonk has been tracking the ongoing cratering of traditional software licensing fees for some time now, but it’s going to get worse for vendors as companies start to unravel their huge outsourcing contracts and bring things back in-house. Most traditional outsourcing contracts include significant license and maintenance fees for traditional on-prem software, which also limits operating flexibility. Then of course there is a falling off in services associated with the packaged application market – according to Forbes, reporting on IBM’s financial results this week, Global Business Services (GBS) “reported a 13% year-on-year decline in revenue to $4.3 billion (down 4% in constant currency), primarily due to declines in traditional packaged application implementation” [italics mine].

Outsourcing won’t go away of course, but it will need to change dramatically. Smart people say we’ve just been doing it wrong, but it is going to be a wrenching transition for an industry that looks increasingly out of step with emerging operating models. At re:Invent Accenture announced a practice taking enterprises to the AWS Cloud, but I will reserve judgement until I see some customer results. Effective services companies for cloud-enabled digital transformation are going to need a completely different set of skills – CMMI Level 5, ITIL and J2EE aren’t going to cut it. They will need to be design-led, cloud native, and help customers to rediscover competencies they have lost over the last couple of decades.

I would look to companies like FutureGov as a model for more effective public-private sector partnership. Or see ThoughtWorks, a company built to help enterprises catch up with the latest in application development thinking and methodology. My friends at YLD plan to move up the stack: having begun life as a Node.js consulting shop they are now aiming at broader Digital Transformation – there is some outsourcing and offshoring in the model; founder Nuno Job now has developers in London and Lisbon, working on customer problems. Business should be working with people like Jeff Sussna, who will help them understand that we should build systems expecting them to break. One company well set up for the change is Pivotal – beyond its Cloud Foundry product business it is one of the most successful agile software development practices out there. IBM is now looking to emulate its model.

So cloud is hurting Big Outsourcing because it is underpinning new operating models for businesses. Cloud is of course itself a form of outsourcing, but one that allows for speed of delivery and encourages reshoring of skills. But people and process changes are needed to do the work. Just because customers have deployment and platform options like Azure, AWS, Google or Softlayer doesn’t mean they are set up to get the best out of them.

Update – check out Toll cancelling a “strategic” outsourcing deal.

Categories: Uncategorized.

Amazon’s Anti Gravity Play: Notes on AWS re:Invent 2015

Amazon used re:Invent 2015 to emphasize the growing momentum of its cloud infrastructure business and mark its transition into a data and application platform.


re:Invent is kind of incredible. I don’t think I have ever seen a more committed set of conference delegates in terms of session attendance. Everyone was anxious to learn so pretty much every session was packed. I’ve certainly never been to a show where Adrian Cockcroft was seemingly less of a draw than a nearby vendor session on Big Data.

The hunger for learning shouldn’t surprise us given the velocity of Amazon service delivery – there is an awful lot of stuff to take advantage of. Amazon is a flywheel of new function delivery, and the company’s growing community evidently wants to take advantage of new services as they are delivered. Keeping up with Amazon can be a full time gig. Just one data point – my friend Ant Stanley saw enough market opportunity to launch a video learning platform, A Cloud Guru, dedicated to AWS, running on AWS platform services – for bonus points it’s worth checking out this post about its microservices/serverless architecture.


In terms of the platform play, in case you didn’t get the memo – AWS Lambda is a potential game changer, allowing developers to write stateless application functions in a variety of programming languages, triggered in response to service events, without needing to provision servers. Lambda turns Amazon Web Services into one big event-driven data engine. Events can be triggered by any change within AWS, from updates to objects in S3 or DynamoDB tables, Kinesis streams or SNS calls. Amazon manages all server deployment and configuration in handling Lambda calls – as Amazon calls it, “serverless” cloud, which maps fairly well to what everyone else calls Platform as a Service (PaaS).
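To make the Lambda model concrete, here is a minimal hedged sketch of a Python handler reacting to an S3 “object created” notification. The bucket and key names are invented for illustration; the nested event shape follows the S3 notification format, and there is no server to provision anywhere in it:

```python
import urllib.parse

def handler(event, context):
    """Record each newly created S3 object named in the event.

    Lambda invokes this function per event; Amazon handles all the
    server deployment and scaling behind the call.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 URL-encodes object keys in notifications, so decode them.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}

# Simulate the invocation locally with a hand-built (hypothetical) event.
fake_event = {"Records": [
    {"s3": {"bucket": {"name": "uploads"},
            "object": {"key": "photo%201.jpg"}}}
]}
print(handler(fake_event, None))
```

The interesting part is what is absent: no web server, no queue polling loop, no capacity planning – which is precisely why the portability questions in the next section matter.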

Less Portability

So far Lambda adoption has been a little slow, partly because it doesn’t fit into established dev pipelines and toolchains, but also almost certainly because of fears over lock-in. Amazon has historically dominated the cloud market precisely because it was an infrastructure play, rather than a platform services play. As I said back in 2009:

“Amazon isn’t the de facto standard cloud services provider because it is complex – it is the leader because the company understands simplicity at a deep level, and minimum progress to declare victory. Competitors should take note – by the time you have established a once and future Fabric infrastructure Amazon is going to have created a billion dollar market. And what then? It will start offering more and more compelling fabric calls… People will start relying on things like SimpleDB and Simple Queue Service. Will that mean less portability? Sure it will…”

So while we expect Lambda adoption to pick up quickly, it is following a similar trajectory to the broader PaaS market. Amazon will have to do some market making and hand-holding to encourage adoption of technology that could mean lock-in.

Update: when thinking about fear of lock-in, however, it’s important to note that pragmatism and effective packaging tend to trump Open when it comes to the enterprise, even in the age of open source. Every wave of Open technology is eventually pwned, as customers choose what they see as the best packager – think Windows vs NetWare, Red Hat, the Unix Wars, the Oracle database, etc. The best packager in any tech wave therefore wins and wins big, because convenience trumps openness in enterprise decision making, especially when it’s driven by lines of business. The perceived value of Lambda could begin to erode fears about lock-in. Meanwhile enterprise customers are far less fear-driven when it comes to public cloud than they were even a year ago, and they’re increasingly ready to make strategic commitments.


Lambda will underpin new AWS offerings, initially in areas such as security compliance and governance with the AWS Config service and AWS Inspector, given that any system change or API call can be logged, and/or the related message routed and acted upon. With the cloud we have far better observability, and far better ability to turn assets on or off, than we do with on-prem architectures.

I had an interesting chat with Adrian Cockcroft on the subject of security just after the keynote. Long of the opinion that cloud is more secure than on premises, he said he could easily envisage that within five years you won’t be able to get PCI compliance unless you’re in the cloud. It seems Sean Michael Harvell has similar ideas.

The nearest equivalent general purpose cloud architecture currently being touted could be Thunder, from Salesforce, announced at Dreamforce last month. It is also going to be event-based, allowing for streaming and rules-based programming, with federated data stores on the back end – notably bridging IoT logs with customer data.


The event-based programmability of Lambda, with a rules engine, is very interesting, but the data management side behind it is kind of stunning.

Amazon is delivering on a federated data store model – once the data is stored in the AWS Cloud, developers can choose their engine of choice to program to, whether that be MySQL, Oracle, SQL Server, MariaDB (announced at re:Invent), or the AWS Redshift data warehouse. The idea that a developer doesn’t need to choose a specific data store, or move data around, before creating or extending an app is very powerful. This is NoSQL on steroids. Most web companies today are building apps that combine multiple data stores, and Amazon is catering to that requirement, and readying for a future where enterprises start to make choices that look more like web companies. There is no single database to rule them all. Amazon, as in other areas, doesn’t try to create a single once and future solution, but does a great job of packaging what’s already out there. RedMonk has been writing about heterogeneous federated data stores since forever, so it’s gratifying to see cloud computing finally making it a reality.

In terms of competitive offerings it’s worth mentioning Compose.IO, recently acquired by IBM, in this context. Compose is also delivering support for multiple federated data stores in the cloud – MongoDB, Elasticsearch, RethinkDB, Redis and so on.

Update: Amazon is also on top of the in-memory cache and queuing pattern, with AWS ElastiCache offering Redis and Memcached as managed services. Oh yeah – Amazon also announced AWS ES, managed Elasticsearch, at re:Invent. Here’s why that’s kind of interesting. Thanks for the reminder @jdub!

But Amazon didn’t just make it easy to write data applications, it also made a great argument-as-code for the new approach with its new QuickSight analytics platform. QuickSight is the new AWS business intelligence and data visualisation platform, built on top of an in-memory query engine Amazon calls SPICE (the “Super-fast, Parallel, In-memory Calculation Engine”). The core innovation of QuickSight as I see it is that queries can be run across the customer’s data estate in AWS, regardless of whether it’s held in the Redshift data warehouse, the Kinesis streaming platform, DynamoDB (Amazon’s own NoSQL database) or one of the database engines supported by AWS RDS, including Oracle, Microsoft SQL Server and Postgres.

QuickSight discovers sources where your organisation has stored data in AWS, and makes suggestions about possible relationships. It’s pay by the hour, with no need for ETL or data movement (cost) overheads. Amazon claims that because it builds its offerings on top of standard open source technology, customers should be less wary of lock-in than with traditional proprietary databases and applications. In summary, data gravity, commonly thought of as a cloud drawback, is now potentially an Amazon advantage.


Loving Snowball

To that end, my favourite re:Invent announcement was definitely the Heath Robinson contraption otherwise known as Snowball.


How to get large volumes of data into the cloud is a live issue. Even with dedicated pipes it could take hundreds of days to upload multiple terabytes of data. Amazon wanted to make this easier, and so invented the contraption above: a dedicated encrypted storage appliance which the customer fills with data, before Amazon manages collection and shipping, uploads the data for the customer, and returns the empty box. Note the onboard Kindle, there to prevent label misprinting issues and allow for tracking. Each Snowball can store up to 50TB of data. You have to love a good hack.

Launching the Internet of Things

As mentioned earlier in this post, Amazon and Salesforce are both converging on the next-gen streaming/rules/heterogeneous data store cloud platform. But where Salesforce used customers as a way to talk about IoT, Amazon cut straight to the chase with IoT as a set of AWS services – native support for MQTT, and certificate management, for example. One of the most intriguing inventions was the introduction of “shadows” – cloud-side models of physical devices in the world, maintaining state, which can also be programmed against. Having a virtual model of every physical device in the network is a very powerful notion.
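A shadow is essentially a JSON document holding a device’s desired and reported state. Here’s a toy sketch of the idea in Python – the delta computation is my own simplification for illustration, not the AWS implementation, and the thermostat fields are invented:

```python
def shadow_delta(shadow):
    """Return the fields where desired state differs from reported state."""
    desired = shadow["state"].get("desired", {})
    reported = shadow["state"].get("reported", {})
    return {k: v for k, v in desired.items() if reported.get(k) != v}

# A shadow document for a hypothetical connected thermostat: an app has
# requested 21 degrees, but the device last reported 19.
thermostat = {
    "state": {
        "desired": {"target_temp": 21, "mode": "heat"},
        "reported": {"target_temp": 19, "mode": "heat"},
    }
}

print(shadow_delta(thermostat))
```

The power of the pattern is that apps program against the shadow in the cloud – reading state and writing desired changes – even when the physical device is offline, with the platform reconciling the delta when the device reconnects.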

Leaving Las Vegas

There is probably plenty more to add, but in summary Amazon is in a very good place now, arguably pulling away from the chasing pack of cloud providers. The company is getting better at selling to the enterprise, and now has a number of new and compelling services with which to target those billions of dollars of enterprise spend. It is talking up hybrid, though it still has plenty of work to do in that regard. Meanwhile it continues to offer services startups can and will take advantage of. Amazon is now a platform.


Categories: Uncategorized.

Want to Encourage Diversity in Tech? Then Be Quiet. Thoughts on Ada Lovelace Day.

Ada aged 4

At Monktoberfest a couple of weeks back Justin Sheehy gave a great talk about how he, as a privileged white dude, was trying to dial it down in order to give other voices a chance to be heard. You see, we tend to show off, having been encouraged since we were kids to try to be the smartest kid in the room, which we end up equating with being the loudest. If you’re confident, you’re right, right? In adulthood a lot of us run around declaiming in stentorian tones as if we were on the London stage. I am one of these loud ones.

But on the drive home from Portland with Justin and David Pollak something suddenly crystallised for me. Justin hadn’t just given a talk about being less dominant; he had re-engineered his speaking register. He speaks far more quietly now than he did a year ago. I am sure that quiet gives others on his team a chance to shine. Nature, of course, abhors a vacuum.

I have been mulling this over. I always try to make sure my eldest son doesn’t talk over his sister, for example, and that I listen attentively to her. In working life I have a long way to go, and I only hope I can begin to emulate Justin’s quiet example. I was reading Synaptic Lee today and I am sure it’s the right thing to do.

As someone who’s interested in the computational aspects of neuroscience, I’ve experienced male voices talking over me and men favouring to talk to other men. These don’t happen often, but highlight the fact that women’s voices are often not recognised in STEM subjects. When women are heard, sometimes there is the expectation that they must perform even better in order to prove themselves as worthy as men. This brings to mind a psychology study well-known to my fellow grads that, when asked to specify their gender before a difficult maths test, women performed much more poorly than when told their gender would be undisclosed.

So, sad to say, society is still struggling to present women on equal grounds in science, technology, engineering and maths. Let’s take this one day to shine a spotlight on women in STEM.

Men need to listen more and talk less, and we might just create space to make a better industry. We need to shut the hell up and listen. And then maybe Augusta Ada King’s legacy can begin to be properly felt.


The image above is Ada, aged 4.


Categories: Uncategorized.

inflections – emc dell etc



Categories: Uncategorized.

Elasticsearch and Splunk: Google Trends

Last week, while idly pondering the rise of the Elasticsearch, Logstash, Kibana (ELK) stack, I thought I would compare Elasticsearch with Splunk on Google Trends. The results were interesting.

There are obviously some significant caveats – for one, the search terms I used were dictated by what might actually be valid on Google Trends: “Elasticsearch” vs “Splunk”, rather than ELK vs Splunk, which is closer to what I had in mind, but might have been confused with large deer.

The evolution of Elasticsearch has been interesting. From its beginnings as a full text search engine built on Lucene it has grown into something far more wide-ranging. Packaged by Elastic, a commercial company, it is now an analytics toolset bringing together Elasticsearch, Logstash as a general purpose data collector (what began as a single-purpose tool for collecting and transforming logs has become a general purpose ETL tool), and Kibana for data visualisation and exploration. ELK is an easy, developer-friendly toolkit for data analysis.
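To make the Logstash role concrete, here’s a toy version of what its grok filter does – turning a raw access-log line into a structured document ready for indexing and visualisation. The log format and field names here are illustrative, not a real Logstash configuration:

```python
import re

# A grok-style pattern for a simplified Apache-ish access log line
LOG_PATTERN = re.compile(
    r'(?P<client>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_line(line):
    """Extract structured fields from one log line, or None if it doesn't match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

line = '127.0.0.1 - - [12/Oct/2015:10:15:32 +0000] "GET /index.html HTTP/1.1" 200 512'
print(parse_line(line))
```

Once lines become named fields like this, they can be aggregated, filtered and charted – which is exactly the step that turns a log-shipping pipeline into a general purpose analytics tool.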

It’s also probably worth mentioning Graylog, another commercial company building on Elasticsearch, which has set itself up to target Splunk more directly. Meanwhile fluentd, developed by Treasure Data, is another open source project that performs similar functions to Logstash.

Splunk, however, is already a publicly traded company – it IPOed in 2012 – with significant enterprise penetration, specialising in log management for operational analytics and security. 2015 total revenues were $450.9 million, not bad for a company helping clients mine data they have traditionally thrown away or ignored. Splunk is not primarily an open source company, and it is far ahead of any open source competitor in revenues and enterprise adoption. But the ELK stack is used in some similar spaces, and ELK is finding a role in general purpose analytics, not just the analysis of logs for operational data.

Splunk vs Elasticsearch is very much an imperfect comparison, and yet Elasticsearch is clearly a funnel in its own right, and ELK is finding its own DevOps-led customer base. Enterprise ops teams are likely to choose Splunk, but cloud developers and DevOps teams at this point are likely to favour ELK.

Unfortunately I wasn’t able to attend Splunk’s user conference this week, which might have helped me parse how the company will deal with the rise of open source competitors. This post isn’t intended as a knock on Splunk, but rather to note that, in terms of Google search volume at least, Elasticsearch is in the mix.


Elastic, Splunk and Treasure Data are all RedMonk clients.

Categories: Uncategorized.