RedMonk

Monkchips in 3 Minutes Episode 2: Panama Papers, Gorilla, Financials.

The new format.

In which I talk about The Panama Papers site, which is a great piece of work. You’ve read the news story, now check out the actual source material, which is powered by Neo4j.

I also talk about The Morning Paper by Adrian Colyer, in particular his take on a scientific paper about Gorilla, Facebook’s absurd-scale time series database. It stores one trillion events per day in memory, acting as an in-memory cache over Facebook’s logs for troubleshooting, with older data archived in HBase. Gorilla is optimised for writes and high availability.
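The paper’s core trick is worth a sketch. Gorilla compresses timestamps with delta-of-delta encoding, exploiting the fact that points arrive at near-fixed intervals; here’s a toy illustration (mine, not Facebook’s code – the real system then bit-packs these values and XOR-compresses the floats):

```python
def delta_of_delta(timestamps):
    """Gorilla-style timestamp compression: store the first timestamp,
    the first delta, then only the delta-of-deltas. Regularly spaced
    points yield runs of zeros, which encode down to a single bit each."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    dods = [deltas[i] - deltas[i - 1] for i in range(1, len(deltas))]
    return timestamps[0], deltas[0], dods

# Points arriving every 60s, with one arriving early:
print(delta_of_delta([1000, 1060, 1120, 1121, 1181]))
# (1000, 60, [0, -59, 59])
```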

Then filthy lucre – recent financial news from Apple, Amazon, Facebook and Google. Today the four comma club is about trillions of events. One of these cloud companies, however, is going to be valued at over a trillion dollars at some point, possibly soon.

Anyway – hope you like Mi3M. Let me know what you think, and please do subscribe on YouTube.

 

Categories: Uncategorized.


Dockerize vs Containerize. All the things.

[Image: Google Trends chart comparing “dockerize” and “containerize”]

Of course Google Trends search volume comparisons are not exactly scientific, but they can be interesting and create useful data points. Thus it is with this search, which shows how the containers conversation today is all about Docker.

Back in 2005 Solaris Zones was getting a fair bit of interest, and “containerize” was performing well in terms of Google search volume. Has Solaris Turned the Corner? – nope, it never did. But the core technology idea was certainly relevant, which explains why and how Joyent, and in particular Bryan Cantrill, have become ever-presents on the Docker conference scene – they’ve been doing containers since before they were a thing.

But Docker is at this point the name that really matters in containers – see The Docker Pattern. It’s the environment in which people are competing.

Docker is a client.

Feel free to point out all the different searches I should have done. Like I say, present imperfect.


Financials, Cloud, Catchup: It’s Monkchips in 3 minutes!

So we thought we’d try a new YouTube video format – something snappy, hopefully informative and easy to watch. It’s called Monkchips in 3 Minutes, which is kind of self-explanatory. We’ve been doing a lot of fun stuff in video, trying to be more accessible and well packaged. This is part of that.

This week’s episode is about the recent run of financial results from the enterprise software vendors, and that was before Apple turned in some disappointing numbers this week too. My premise – the cloud isn’t just hurting sales of compute and storage any more; it’s an across-the-board phenomenon making life harder for companies with broad portfolios as much as narrow ones. Cloud’s even hurting outsourcing. Check out the show!

Regarding news of our new hire, Stephen wrote up a welcome post. I am sure Rachel Stephens will make the role her own.


7 Things I learned at Microsoft BUILD 2016

[Image: Microsoft Build 2016]

  1. I don’t care about Microsoft Ink.
  2. The Ubuntu support is lit. In the keynote Microsoft only talked about the bash shell, but basically Windows 10 now supports apt-get, so the Ubuntu userspace is right there. Node.js devs in particular were all excited about the news.
  3. I changed my mind about Microsoft Graph after a great talk by Qi Lu, EVP, Applications and Services Group. The graph of Active Directory, Exchange, Outlook, and Skype apps is an incredible, rich, sticky asset, and Microsoft’s REST, JSON, WebSockets API direction makes all the sense. One endpoint to rule them all, indeed.
  4. Tie that into bot frameworks with some machine learning and Cortana for voice and stuff’s going to get unreal.
  5. HoloLens is incredible. Tell me a specific time and then make me wait in line for 30 minutes? Grumpchips. One hour later and I was a kid full of joy and wonder. So immerse. So wow.
  6. Microsoft’s IoT play is all about B2B scenarios. I know right – where the money is.
  7. Xamarin – was like open source, nah let’s not do that anymore, then Microsoft acquired them and it’s all like MIT licensed. And now it’s free. More Test Cloud for more Azure. CI for mobile.
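Point 3 above is easy to make concrete. Microsoft Graph fronts all of those services behind a single REST root; a minimal sketch of building an authenticated request (token acquisition via Azure AD is elided, and the helper is mine, not an official SDK) might look like this:

```python
import urllib.request

GRAPH_ROOT = "https://graph.microsoft.com/v1.0"

def graph_request(path, token):
    """Build an authenticated request against the single Graph endpoint.

    Mail, calendar, directory and more all hang off one REST root;
    only the path changes (/me, /me/messages, /users, ...)."""
    req = urllib.request.Request(f"{GRAPH_ROOT}{path}")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

# Build (but don't send) a request; "<token-from-azure-ad>" is a placeholder.
req = graph_request("/me/messages", token="<token-from-azure-ad>")
print(req.full_url)  # https://graph.microsoft.com/v1.0/me/messages
```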

Obviously this is not the most extensive write up ever. But I wanted to get a placeholder down, so here it is. The good far outweighed the bad.


On Google’s Cloud Posture. GCP Next 2016.

A couple of weeks ago I was in SF at Google Cloud Platform Next 2016, an event designed to show Google is serious about competing in the public cloud market against Amazon Web Services, Microsoft Azure and IBM SoftLayer. The market and even some Googlers have been questioning the firm’s commitment to cloud as a line of business, so the company felt it needed to make a strong statement. Google duly totally over-rotated on personnel-as-proxy-of-seriousness on day one by having not just new cloud lead Diane Greene, but also Sundar Pichai, CEO of Google, and then for good measure Eric Schmidt, Chairman of Alphabet, all keynote. The session duly overran considerably. Never good on conference chairs. Maybe better to have just Diane next year.

I actually missed the day one keynote, which may be one reason I came away with a different overall impression from many commentators, who were seemingly underwhelmed.

The Day Two keynote was serious, committed, intelligent and challenging. The structure followed a nice pattern, from high level down to low level details.

Google made the clearest commitment so far by any public company, let alone major cloud vendor, to renewable energy, of which more later. But TL;DR: thanks Google. I have 3 children, and if Google scale is carbon positive that’s good news for every living thing on the planet, including my little ones.

Google’s Niels Provos then went deep on its layered approach to information security, an impressive framework that should pressure competitors to up their game. Google security provides an excellent set of approaches for delivering on security in the Cloud era. The geopolitical/industrial scene isn’t getting any easier, and information governance in the cloud is going to be a defining issue of our time for governments and businesses. On the flight out to Next I read an article in the FT about military contractors moving into the enterprise security market. Frankly, any enterprise that calls BAE or Raytheon before it looks to Google about how to better secure information assets in the Cloud era is doing it wrong.

The final talk by Eric Brewer – he of the ubiquitous CAP theorem that drives so much modern distributed systems design – made it abundantly clear that Kubernetes is not just window dressing on Google’s own internal Borg architecture, but rather represents a pretty fundamental change in Google’s cloud posture – the company is finally learning to play nicely with other children, although it still has plenty of work to do in that regard. Google now leads a project that is crushing it on GitHub, with outside contributors, and it feels good about that. See this post from Fintan for more. It wants to build a successful ecosystem. But doing so takes time, commitment, attentiveness, listening and a degree of humility (this last one doesn’t come easily to Google).

There is no company more Cloud Native than Google. The company literally invented this stuff. Defining Cloud Native, Brewer pointed to development that expects cloud resources to just be available, “an infinite number of machines”; other characteristics include containers for packaging and isolation, with a micro-services orientation.

But the question remains – can Google package up its experiences and platforms in ways that makes them easily consumable by enterprises and startups? The best packager in any tech wave wins.

It was particularly noteworthy therefore to see Google’s positioning of Kubernetes with respect to Docker during Brewer’s talk. Containers are nothing new – Google was making open source contributions as far back as 2006, in the shape of cgroups. But Google’s tooling was designed by the best engineers on the planet for the best engineers on the planet. It works, but it’s very far from easy to use.

“Docker came along and did a better job of the packaging, it does a nice job of how you handle libraries”

Which sort of sounds like faint praise but isn’t. Docker utterly killed it with making containers a first class citizen for software development, and Google knows it. More than by many other competitive vectors, Google is chastened by Docker. Google wants to make sure that it benefits from The Docker Pattern as much as, if not more than, Docker does.

That said, Google made it very clear that while it sees Docker as moving the state of the art forward with containers for development, it believes it can do a better job managing containers in production with Kubernetes. Google plans to niche Docker. The contrast was ironically enough heightened during the keynote, when Docker dropped its new Mac and PC clients, making containers even easier to use on the desktop.

Bottom line is Google wants to win those Docker workloads, but needs an ecosystem to do so. A recent Docker survey shows Amazon EC2 Container Service is already a natural target for these workloads.

[Image: chart of container orchestration tools from the Docker survey]

While Amazon has dominated the first couple of rounds of the Cloud wars, this is going to be a long game. Enterprise workloads have only migrated to the cloud at the edges. Core transaction systems remain on prem. On that note, Ron Harnick of Scalr said Next wasn’t boring enough:

Where’s the bank that runs mission critical operations on GCP? Where’s the retailer that can run transactions faster on GCE than on EC2?

It’s the infrastructure, stupid. I mentioned above that many commentators were unmoved by the event. Of those I thought this post – Google’s Scalability Day – by professional curmudgeon Charles Fitzgerald was great. As so often lessons from history are particularly valuable:

In May 1997, Microsoft held a big press event dubbed Scalability Day. Microsoft was a relatively new arrival to enterprise computing and was beset by criticism it wasn’t “enterprise ready”. The goal of the event was to once and for all refute those criticisms and get the industry to accept that Microsoft would be a major factor in the enterprise (because, of course, that was what the company wanted…).

Microsoft at the time was an extremely engineering-centric company, so it processed all the criticisms through a technical lens. Soft, cultural, customer, and go-to-market issues were discarded as they did not readily compute and the broader case against Microsoft’s enterprise maturity was distilled down to the concrete and measurable issue of scalability. The company assumed some benchmarks plus updated product roadmaps would clear up any remaining “misunderstandings” about Microsoft and the enterprise.

The event was a disaster and served to underscore that all the criticism was true. It was a technical response to non-technical issues and showed that the company didn’t even know what it didn’t know about serving enterprise customers. Internally, the event served as a painful wake-up call that helped the company realize that going after the enterprise was going to be a long slog and would require lots of hard and not very exciting work. It took over a decade of very concentrated focus and investment for Microsoft to really become a credible provider to the enterprise. Enterprise credibility is not a feature set that gets delivered in a single release, but is acquired over a long time through the experience and trust built up working with customers.

I couldn’t help but think about Scalability Day while watching Google’s #GCPNext event today. After telling us for months that this event would demonstrate a step function in their ability to compete for the enterprise, it was a technology fest oblivious to the elephant in the room: does Google have any interest in or actual focus on addressing all the boring and non-product issues required to level up and serve enterprise customers?

As is their norm, Google showed amazing technology and highlighted their unrivalled infrastructure. And they have as much as admitted they’ve been living in an Ivory Tower since Google Compute Platform was announced in 2012 and “need to talk to customers more often”. Recognizing you have a problem is always the first step, but beyond throwing the word “enterprise” and related platitudes around, they did little to convince us they are committed to travelling the long and painful road to really serving enterprise customers.

So much all of this. Google doesn’t need to change its engineering. It needs to change its posture. It needs to be open and be seen to be so. It needs to give itself permission to come across as more human. Enterprises want to work with people that are like them.

The Cloud posture isn’t collegiate enough yet, although it is of course somewhat academic. Google is about the New Applied Science.

I think one obvious solution to a perceived arrogance issue is to focus more on partners. It was noticeable that Google didn’t feature partners on the main stage, at least not on day two. It could have gained some kudos, for example, by featuring Red Hat talking about Kubernetes. We have seen a marked change at Google over the last few quarters, partly driven by infusions of new blood, but also by Google’s experiences working with outside firms – notably Red Hat. This didn’t come across quite as strongly at Next as it should have done.

But the narrative is there, waiting to be packaged. Over the next couple of years we will see Next become more about the ecosystem and less about the platform.


“A young programmer with standing desk”

A big part of the Developer Aesthetic is humour. Today via @martinlippert I came across a great Tumblr, Classic Programmer Paintings

“Painters and Hackers: nothing in common whatsoever, but these are classical painter’s depictions of software engineering (technically, might not be all classical but hey, this is just a tumblr)”

There are plenty of good examples, but I particularly liked

But of course what really makes a good joke is sharing it, which is why this instant reply was so perfectly on point

You don’t want to miss the details though

So much this. In case you’re wondering, according to @jackwmartin that’s not a Photoshopped iPhone, it’s Cupid holding a love note. The full painting is actually called A Young Woman Standing at a Virginal.

[Image: baby selfie]


Show Your Work. On Seth Godin, Google Maglev and Microsoft Sonic

After writing a post yesterday about advancing the state of the art by taking an applied science-based approach, I found this tweet interesting.

So I went to check out the post in question, and it struck a further chord. As Seth Godin says:

What works is evolving in public, with the team. Showing your work. Thinking out loud. Failing on the way to succeeding, imperfecting on your way to better than good enough.

Do people want to be stuck with the first version of the iPhone, the Ford, the Chanel dress? Do they want to read the first draft of that novel, see the rough cut of that film? Of course not.

Ship before you’re ready, because you will never be ready. Ready implies you know it’s going to work, and you can’t know that. You should ship when you’re prepared, when it’s time to show your work, but not a minute later.

The purpose isn’t to please the critics. The purpose is to make your work better.

Polish with your peers, your true fans, the market. Because when we polish together, we make better work.

This. Is how cloud computing is evolving. In further related news, I also just saw this:

At NSDI ’16, we’re revealing the details of Maglev, our software network load balancer that enables Google Compute Engine load balancing to serve a million requests per second with no pre-warming.

Google has a long history of building our own networking gear, and perhaps unsurprisingly, we build our own network load balancers as well, which have been handling most of the traffic to Google services since 2008. Unlike the custom Jupiter fabrics that carry traffic around Google’s data centers, Maglev load balancers run on ordinary servers — the same hardware that the services themselves use.

Hardware load balancers are often deployed in an active-passive configuration to provide failover, wasting at least half of the load balancing capacity. Maglev load balancers don’t run in active-passive configuration. Instead, they use Equal-Cost Multi-Path routing (ECMP) to spread incoming packets across all Maglevs, which then use consistent hashing techniques to forward packets to the correct service backend servers, no matter which Maglev receives a particular packet. All Maglevs in a cluster are active, performing useful work.
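The consistent hashing step described above is the clever bit, and the paper spells out the algorithm: each backend generates a permutation of lookup-table slots, and slots are claimed round-robin. Here’s a toy sketch of Maglev-style table population (my simplification – the hash functions, table size and backend names are illustrative, and the real table size is a large prime):

```python
import hashlib

def _h(name, seed):
    # Stand-in for the two independent hash functions in the paper.
    return int(hashlib.md5(f"{seed}:{name}".encode()).hexdigest(), 16)

def maglev_table(backends, m=13):
    """Populate a Maglev lookup table of (prime) size m.

    Each backend claims slots via its own permutation, round-robin, so
    shares stay near-equal and every Maglev that receives a packet (via
    ECMP) maps a given flow hash to the same backend."""
    perms = []
    for b in backends:
        offset = _h(b, 0) % m
        skip = _h(b, 1) % (m - 1) + 1  # coprime with prime m: full permutation
        perms.append([(offset + j * skip) % m for j in range(m)])
    table = [None] * m
    nexts = [0] * len(backends)
    filled = 0
    while filled < m:
        for i, b in enumerate(backends):
            # Advance to this backend's next unclaimed slot.
            while table[perms[i][nexts[i]]] is not None:
                nexts[i] += 1
            table[perms[i][nexts[i]]] = b
            filled += 1
            if filled == m:
                break
    return table

table = maglev_table(["be1", "be2", "be3"])
# Route a flow with: table[hash(five_tuple) % len(table)]
```

Because every Maglev builds the same table from the same backend list, packets sprayed across the cluster by ECMP still land on a consistent backend.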

It is worth noting here that this is research paper sharing, rather than a code drop. Google of course didn’t open source Borg, but did open source an implementation of it, in the shape of Kubernetes. I am wondering whether that team will build their own implementation of Maglev which will be open sourced. Load balancers like Maglev would be beyond the scale needs of most organisations.

Google though isn’t the only one opening the kimono on Cloud network architecture. Microsoft just open sourced Software for Open Networking in the Cloud (SONiC), which builds on Azure Cloud Switch, a Debian-based switch software stack.

We’re talking about ACS publicly as we believe this approach of disaggregating the switch software from the switch hardware will continue to be a growing trend in the networking industry and we would like to contribute our insights and experiences of this journey starting here.

The challenge for traditional networking gear suppliers is going to become increasingly severe as the collaborative Applied Science approach I described yesterday, underpinned by cloud-scale providers, takes hold in that market. Enterprises and Web companies, however, are going to benefit significantly from all of this innovation, in both cost and capability.


Driving the state of the art. Cloud natives and the appliance of science

[Image: 1990s Zanussi “The Appliance of Science” advert]

“Applied science is a discipline of science that applies existing scientific knowledge to develop more practical applications, like technology or inventions.” – Wikipedia.

I was really pleased last week when Fintan dropped this post The Welcome Return of Research Papers to Software Craft. At RedMonk we’ve been talking about the trend for a while so it’s good to see the idea captured as a post.

“Over the last two to three years a small, but very important and noticeable trend, has started around the world – a growing appreciation of the importance of primary research and academic papers among software practitioners. Those that are crafting software are spending more and more time understanding, learning from, and reflecting on research from the past and present.”

The post is really good. You should read it in full. But here’s a bit more before I jump in

The level of practitioner interest in research papers has risen to the point that the opening keynote at QCon London this week was delivered by Adrian Colyer, author of The Morning Paper and a venture partner at Accel. Adrian walked people through a number of his favourite papers and challenged people to think a little differently about what is coming in the future.

It is hard to pinpoint quite what caused this renewed interest, but it is safe to say that the emergence of Papers We Love, with the associated meetup groups, frequent discussions on forums such as Hacker News, and blogs such as Adrian’s has created a wonderfully curated entry point to research papers for the curious. When people such as Werner Vogels at AWS remind us of the importance of papers, people sit up and take notice. As an industry we have had, at times, a tendency to forget to look at problems in detail, and instead focus on the quickest time to getting a product out the door.

One of the most recent Papers We Love talks came from Bryan Cantrill, CTO of Joyent, where he talked about BSD Jails and Solaris Zones, and as he noted at the start of his talk, while reminiscing about soda at the journal club his Dad, a physician, hosted:

“I always felt that it was really interesting that medical practitioners did this, and I always try to draw lessons from other domains, and medicine is very good about this, and we are not very good about this. We in computer science and software engineering are not nearly as good about reading work, about discussing it.”

This. I ran a conference on exactly this theme a couple of years ago with Monki Gras 2014: Sharing Craft. My thesis is that as an industry we’re actually improving in how we learn and share across disciplines but there is a lot of work to be done. When explaining the current state of tech Netflix pretty much always features because of its leadership in multiple areas. It pays above market rate to technical staff as a matter of course, for example, in order that it only attracts top talent. Netflix has crystallized the new way of working at scale, a way of working with intellectual property not in theory but in open practice. Netflix is in effect applied science. It carries out experiments in computing at scale in order to drive the business forward. It then open sources the code it used, rinses and repeats. Netflix doesn’t theorise, file patents and then ring fence its work. On the contrary it open sources code in order to drive forward the state of the art.

[BONUS UPDATE: following an @acolyler link this morning I discovered that Netflix had made all of this utterly explicit in a talk at QCon recently: Monkeys in Lab Coats: Applying failure testing research @Netflix. I don’t believe the video is up yet but I can’t wait to see it.]

Industry and academia need each other. Far from the tire fires of production, university researchers have the time to ask big questions. Sometimes they get lucky and obtain answers that change how we think about large-scale systems! But detached from real world constraints, systems research in academia risks irrelevance: inventing and solving imaginary problems. Industry owns the data, the workloads and the know-how to realize large-scale infrastructures. They want answers to the big questions, but often fear the risks associated with research. Academics, for their part, seek real-world validation of their ideas, but are often unwilling to adapt their “beautiful” models to the gritty realities of production deployments. Collaborations between industry and academia — despite their deep interdependence — are rare.

In this talk, we present our experience: a fruitful industry/academic collaboration. We describe how a “big idea” — lineage-driven fault injection — evolved from a theoretical model into an automated failure testing system that leverages Netflix’s state-of-the-art fault injection and tracing infrastructure

Netflix isn’t alone in this approach, of course. It’s how smart companies get things done. Stephen has written extensively about the Rise and Fall of the Commercial Software Market, and the stages our industry has gone through. Software is no longer the product, but increasingly a by-product.

We’ve made significant progress since Google, rather than open sourcing its own code, published the MapReduce Paper in 2004. Yahoo got Doug Cutting to build its own implementation, Hadoop, which it did open source, and the rest is history. Here are 5 Google Projects that Changed Big Data forever. The Research at Google site is a thing of beauty.
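For anyone who hasn’t read that 2004 paper, the programming model it describes is small enough to sketch in a few lines. The paper’s real contribution was running this across thousands of machines, with distributed shuffling and fault tolerance, all of which this single-process toy (my illustration) elides:

```python
from collections import defaultdict
from itertools import chain

def map_fn(doc):
    # map: emit an intermediate (word, 1) pair for every word
    return [(word, 1) for word in doc.split()]

def reduce_fn(word, counts):
    # reduce: fold all values for one key into a single result
    return word, sum(counts)

def mapreduce(docs):
    # "shuffle": group intermediate pairs by key, then reduce each group
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map_fn(d) for d in docs):
        groups[key].append(value)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

print(mapreduce(["big data", "big deal"]))  # {'big': 2, 'data': 1, 'deal': 1}
```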

[Image: the Research at Google site]

By engineers for engineers.

Ever since Google was born in Stanford’s Computer Science department, the company has valued and maintained strong relations with universities and research institutes. In order to foster these relationships, we run a variety of programs that provide funding and resources to the academic and external research community. Google, in turn, learns from the community through exposure to a broader set of ideas and approaches.

But by 2015 Google realised that open sourcing the code itself, rather than just publishing papers about its approaches, made sense. Why watch somebody else create another Hadoop or Mesos when Google could build a community around stuff it actually built – and so Kubernetes was born. Things got really interesting when Google’s engineers met engineers at Red Hat they deeply respected. When we write the history of Google this will be seen as a seminal moment, when the appliance of science became properly a community-based activity. The decision to open source some of Google’s core machine learning technology – TensorFlow – followed naturally on the obvious and growing success of a better, more collaborative model for applied science.

So Netflix and Google do it. Twitter definitely does it. Apple got the memo and open sourced Swift. Facebook crowed about the success of React in 2015. Uber and Lyft are both adopting the model. Pivotal is picking up code like NetflixOSS and OpenZipkin from Twitter for distributed tracing. You can bet someone at one of the tech giants is currently reading this paper, Message-Passing Concurrency for Scalable, Stateful, Reconfigurable Middleware, and considering its implications. Oh look – it’s science as code, check out Kompics on GitHub. Maybe we should check it out in production. Let’s not forget that Linux began life in academia. And oh yeah Walmart… is making distributed systems contributions too.

GitHub, just mentioned, is a fundamental building block of the new applied science. The combination of open source and social coding (a little Git thrown in for forking and testing and recombining) has utterly changed the game in software and distributed systems. There is no advantage in proprietary approaches – only in advancing the state of the art. Well, Amazon might argue, but we’ll see.

Open, practical innovation isn’t just a software phenomenon – check out Facebook’s Open Compute project, which implements some computer science fundamentals. There is a reason Peter Winzer, Ph.D and Head of the Optical Transmissions Systems and Networks Research Department at Bell Labs gave a talk at its most recent meeting.

Obviously I need to be a little bit careful about Golden Era thinking, but the applied science approach of cloud technology, with associated information sharing, is so very different from other spaces, in which science seems to become ever more commodified, but not commoditised. Pharma for example wants government to fund all the research, while it keeps all the profits. Companies are trying, quite successfully, to make genetics private science – patenting genes that occur in the wild, with terrible implications if you have a marker for, say, breast cancer. The very foundations of science are being privatised. Researchers try to prevent others from replicating their work, rather than hoping for replicated experiments. It makes no sense. Tech however is showing us something important about how to advance the state of the art, and that’s good for all of us. Not everything is perfect in tech, and the industry finds ways to harvest data that should be public (or should that be private), but at least in distributed systems something very very interesting is happening. The Appliance of Science.


Throwing The Phone Around, On Mobile Ecosystems


So we made another video. No green screen this time, just straight talk. In this episode I talk about control versus openness in mobile ecosystems, and the battle between Apple iOS and Google Android. The really cool new Business Cards+ from Moo get a special mention. What, no NFC? How about some Android-first development?

Sponsored by IBM MobileFirst, again. Creative freedom for the win.


Rise of The Docker Pattern

[Image: Sharp Green by Debbie Clapper]

Once you’ve been in the industry for a while the patterns become clearer. Enterprise technology adoption has some fairly distinct shapes. In 2009 I wrote

“Amazon is the new VMware. The adoption patterns are going to be similar. Enterprises will see AWS as a test and development environment first, but over time production workloads will migrate there.”

I dubbed this “the VMware pattern”. New technologies generally don’t emerge as fully-fledged production environments. They are adopted first and grow into the role. Docker is currently on a fast track through this process.

In the same post I wrote:

“Amazon isn’t the de facto standard cloud services provider because it is complex – it is the leader because the company understands simplicity at a deep level, and minimum progress to declare victory.”

For Docker, replace “simplicity” with “convenience”. Why is Docker so hot? The answer is simple: developer-led adoption, or as Andrew Clay Shafer puts it:

“It’s the fastest path to developer dopamine”.

At RedMonk we have never seen a technology become ubiquitous so quickly. Docker makes it simple to spin up a container which contains everything needed to run an app – the code itself, the runtimes, systems tools etc. Develop on your laptop, then in theory deploy to any server. Unlike virtual machines, containers include the application and all of its dependencies, but share the kernel with other containers, an efficient model which maps cleanly to current development thinking in areas such as continuous integration and microservices. Stephen, in a thoughtful explanation of the Docker phenomenon, argues that:

The explosion of Docker’s popularity begs a more fundamental question: what is the atomic unit of infrastructure moving forward? At one point in time, this was a server: applications were conceived of, and deployed to, a given physical machine. More recently, the base element of an infrastructure was a virtual recreation of that physical machine. Whether you defined that as Amazon did or VMware might was less important than the idea that an image resembling a server, from virtualized hardware and networking interfaces to a full instance of an operating system, was the base unit from which everything else was composed.

Containers generally and Docker specifically challenge that notion, treating the operating system and everything beneath as a shared substrate, a universal foundation that’s not much more interesting than the raised floor of a datacenter. For containers, the base unit of construction is the application. That’s the only real unique element.

So alongside many of the other micro and macro trends we currently see, notably infrastructure fragmentation, Docker basically just makes sense – it feels right and represents how developers live now. Next however comes the fun part – Docker will begin to reshape how operations and IT work, just as VMware did in the virtualisation wave.
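The packaging point is easy to make concrete. A minimal Dockerfile – purely illustrative, the base image and file names are placeholders – bundles code, runtime and dependencies into one shippable image, while the kernel stays shared:

```dockerfile
# Everything the app needs travels in the image; only the kernel is shared.
FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Build it once on a laptop, and in theory the same image runs unchanged on any server with a Docker daemon.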

Enterprises need to find ways to deliver more digital services to market faster, which means not only becoming more adept at developing, but also at consuming new technology. Docker can help with that. In the age of continuous deployment, dev/test underpins the deployment process. Everything needs to be constantly tested and constantly refactored, with an eye to disposability rather than reuse. See Microservices and Disposability: On Cattle, Pets, Prize Bulls, Wildebeests and Crocodiles.

Docker is on an exceedingly well-funded mission to transform itself from developer favourite to Cloud Native production environment of choice for the enterprise, moving from Open Container format to “single virtual computer” of choice. The transition though from developer-led to enterprise production grade takes time. We’ve seen this before, from MySQL to Mongo to Spring… or for those with rather longer memories think the early versions of Oracle. Automation, backup, compliance, logging, monitoring, networking, scheduling, storage management, orchestration, security, and basic engineering solidity don’t happen overnight.

There is now an ecosystem of companies building tooling to support Docker in production – startups like ClusterHQ, Datadog, Rancher Labs, Server Density, Sysdig, Treasure Data and Weaveworks; more established players such as AppDynamics, CloudBees and New Relic; and of course the cast of existing suppliers looking to embrace and extend Docker, including Amazon Web Services, IBM, Microsoft, Oracle and Pivotal. Then there are outright competitive plays for the bigger prizes, such as Google Kubernetes. There will be negative commentary – growing pains are par for the course.

Docker is not going to have everything its own way – but the path is now set clear for Docker to become an industry standard production platform. We can call the path it’s on the Docker Pattern.

For further reference see also
IBM, Red Hat adopt “VMware Pattern” for Cloud. Disruption Strategy Emerges
Amazon Web Services: an instance of weakness as strength

Our clients include Docker itself, Amazon Web Services, IBM, New Relic, Oracle, Pivotal and Treasure Data.

Debbie Clapper made the beautiful pattern above.
