tecosystems

What AWS Tells Us About Heroku 2.0


Fork in the Road

Founded fourteen years ago in 2007 and acquired in 2010 by Salesforce for $212M, Heroku is somehow still the canonical example of a certain style of application development. When RedMonk talks about the Developer Experience Gap, conversation inevitably and swiftly turns toward the one-time startup and present-day product. And for good reason: as one of the original homes, if not the originator, of now-standard developer experience conventions like git push as a deployment mechanism, buildpacks and opinionated toolchains, Heroku blazed the trail now being trod by the likes of Dark, Netlify, Vercel or multiple AWS services we’ll get to momentarily.

And yet, for all that it got right, the Heroku self-contained developer experience never took over the world. The AWS DIY-from-primitives model did. Even granting that AWS had an eleven-month head start chronologically, the story of why IaaS triumphed over PaaS is long, complicated and will vary depending on who’s telling it. And, it must be said, is far from complete.

In the aggregate at RedMonk, we’ve seen more discussion of Heroku over the last year or two than the previous decade combined. There are many reasons for this, but most come down to the basics of the Developer Experience Gap: the more pieces that have to be wired together, the more difficult an environment is to stand up, operate and debug. Primitives were the preferred approach when there were fewer of them. In a world where there are hundreds if not thousands of services to pick from, this experience begins to break down and abstractions begin to look increasingly attractive.

Attractive or not, the market has clearly concluded that the original Heroku vision wasn’t an entirely perfect fit for mainstream compute workloads; some have put it more pithily, describing the platform as a dead end.

To be clear, one doesn’t need to subscribe to the notion that Heroku’s literally a dead end. Indeed, many if not most of its users and customers don’t believe that. It is enough to acknowledge, as above, the reality that Heroku’s trajectory fell far short of AWS and its fast followers Azure, GCP, et al. If it’s true that appetites for abstraction are increasing and that the one-size-fits-all model of Heroku hasn’t seen the traction that might have been expected, the obvious question is what comes next. What is the appropriate response?

It’s early yet, and different models are still competing, but AWS’s recent product moves have telegraphed the conclusion that they, at least, have come to: that there will not be a one-size-fits-all model.

Backing up, it’s been evident for some time that, its current market dominance aside, AWS’s rapidly growing portfolio was as much a liability in certain contexts as a strength. The company’s organizational structure lent itself to building out a very large number of primitives very quickly; it was much less adept at conceptualizing the integration of those services into a PaaS-like experience. That integrated experience was, and is, the best way to compete with AWS. But as a company that prides itself on being responsive to customer feedback – as you’ve probably heard from them, several times – it was imperative that they adapt. And adapt they have.

Integrated services, to be clear, aren’t new at AWS: Elastic Beanstalk, for one, is a decade old. But we’re seeing an increased emphasis on higher levels of abstraction from the company. The mobile-focused AWS Amplify was released more recently in 2017; the deployment-automation-oriented Proton dropped in 2020, and the web-app platform App Runner made its debut last month.

None of these services is Heroku, or even a reasonable approximation of that general purpose property. They are, rather, smaller, more constrained and domain-specific integrations of related services targeted at a particular problem or set of related problems. There are also, as you would expect from a company with 17 different options for running containers, areas where they might bleed into one another.

Individually, the services are interesting. Collectively, however, they are a statement of intent and direction. Long the Home Depot of the computing industry, AWS is slowly but surely moving in the direction of not just supplying primitives, but helping customers choose, integrate and operate those primitives.

What AWS is implicitly saying, therefore, is that there won’t be a single Heroku successor, but a dozen or more of them. This is an approach, notably, that is distinct from that of a variety of other fourth-generation PaaS-like platforms such as the aforementioned Dark, Netlify, Vercel, et al. Much as the original Heroku and the newborn AWS offered competing visions for what infrastructure should look like, today we all have great seats for watching these respective ideas and approaches compete for mainstream enterprise time, attention and dollars.

May the best approach win.

Disclosure: AWS, Azure (Microsoft), GCP, and Heroku (Salesforce) are RedMonk clients. Dark, Netlify and Vercel are not current clients.

One comment

  1. One can imagine a universe in which Heroku wasn’t a “dead end” — but that universe would have required them to be far less proprietary about the ingredients that went into making the magic user experience that developers love about interacting with Heroku. You can see why they didn’t, though: the use case for this increased transparency would likely be for customers who had outgrown Heroku and were seeking to move to more DIY IaaS-type offerings, either for scale or cost reasons. Heroku’s strength was always this laser-like focus on providing a compute platform that got out of the way of the developer — and by implication, a focus on SMB customers — but there was no partnership, say, with a cloud provider to help customers “graduate” out of Heroku. Approaching it from the other direction, though, is what the hyperscalers are doing now, as you clearly point out: creating PaaS-like offerings (though much more domain-specific) and leaving the migration out of them an exercise for the reader. Either way, the hyperscaler still wins.
