James Governor's Monkchips

The incredible shrinking time to legacy. On Time to Suck as a metric for dev and ops


“All this fuss over nothing. All this searching for something, all this reinventing the wheel, all this searching for something, that’s not real”

The Wheel, SOHN 2017

I’ve had a riff for a while about Developer Time To Suck, the half life of a product or project before developers decide it is time to find something else bright and shiny to play with. It used to be that a new technology had a few years before it was seen as boring. In tech, like in politics, you were the future once. But the window is shrinking all the time.

It used to be that a developer choice was cool for about five to seven years. Today not so much.

MongoDB is a case in point. It went from the new hotness, adopted by a bunch of 23-year-old “architects” surfing the Node.js is Bad Ass Rock Star Tech wave, to the same people saying it totally sucked two years later. For MongoDB that was OK, because its convenience meant that enterprises had also adopted it in significant numbers – so it has a market to sell into.

Web companies are doing much of the development of core infrastructure today, and they have a far higher tolerance for disposability than vendors or enterprise customers. The combination of open source, cloud deployment, and social coding means that developers are building, and exposed to, more cool new technology than ever before.

Hadoop was the bee’s knees for about five years. Then, just like that, streaming was the new thing, and everyone pivoted to Spark and Kafka stacks.

All this newness has a cost though – in terms of production deployment, but also sheer cognitive overhead. In 2014 my colleague asked: A Swing of the Pendulum: Are Fragmentation’s Days Numbered? In 2017 we can fairly safely answer – no. In every category we track we’re seeing cool new things emerge that developers want to use in production. Pony and Rust could break out any time now.

But Ops of course has always had to take up the slack, or should that be take up the stack. Running the thing in production is not the same as running things on your laptop. Except of course, when it is. Clearly the direction of travel with container technology is to collapse distinctions between development and production environments. From a testing perspective we want local and deployed environments to be as similar as possible.

As Ben Fletcher, Principal Engineer at the FT, said in his excellent talk at Jeffconf recently:

“We don’t like pre-production. We like localhost.”

The FT is very much a web-first operation, rather than a more traditional transactional enterprise shop. Not everyone is going to do multiple production deploys per day, but that is the direction of travel. It’s not just enterprises that sustain technical debt, of course. We’ve had some fascinating conversations recently with startups that are unable to keep up with Amazon Web Services (AWS) pace of innovation. While they’d like to adopt AWS Lambda for some functions, for example, they’re already bedded into a stack built on earlier AWS services.

This tweet got me thinking

Turns out of course it’s not just Developer Time To Suck that is shrinking. Operations is heading the same way. Folks at Pivotal are saying that operating systems don’t matter, as we’ve moved further up the stack. Cloud Native is a proxy for saying much the same thing. But then, something is being written right now that will supplant Kubernetes. If we’re not running our own environments in house, operations disposability becomes increasingly realistic. Cattle not pets, for everything.

But convenience and disposability always incur a cost.

Pivotal and MongoDB are both clients.
