So we were talking the other day and my colleague Rachel asked: is there any part of the software development life cycle in 2021 that is not smooshing?
“Smooshing” is the officially recognised RedMonk technical term for the phenomenon of software and systems categories overlapping, converging, fragmenting, coalescing, and re-coalescing in new forms, maybe turning a little bit cloudy and vaporous every now and then, with adjacent markets rubbing up against one another and leading to new forms, platforms and approaches.
Smooshing is really the history of the industry. We see waves of technological invention break and fragment onto the shores of the adoption landscape. The Cambrian explosion pattern of IT. That is innovation.
Over time, an understanding arises about what the packaging exercise should be: at what layer of the stack we should start simplifying things so the user can adopt the new technology more easily. At this point competition makes more sense across products, because they can be compared directly, and that allows for commoditisation. Markets then consolidate, with market leaders emerging. The best packager wins, and wins big. I apparently wrote about this 10 years ago.
Think of Compaq packaging Wintel. Or IBM packaging all of the inventions in mainframe computing as the System/360. Windows packaging the IP stack to take out Novell. SAP packaging ERP/MRP/etc for client/server in the shape of R/3. Or let’s look again at Apple for a second: Nokia and even Microsoft had delivered all of the same phone technologies Apple later did (touchscreens and so on), but they packaged them far less effectively, focusing on functionality rather than user experience. On the desktop Apple packaged FreeBSD on Intel and made it beautiful.
Things shake out; not everything is a choice amongst 5,000 different ways of doing things. Relational databases are a good example: they emerged as an effective platform for information management for transactional business applications, and came to dominate the database landscape throughout the 1980s and 90s. Relational became standardised, based as it was on SQL, a standards-based query language. Standardisation led to commoditisation. So Oracle came along and managed to outcompete rivals such as Sybase and Informix. The market shook out, but we still had a market where you had DB2 from IBM, SQL Server from Microsoft and then of course Oracle. Open source was in the mix too, with the creation of PostgreSQL and later MySQL. But basically you had a layer of database management, storage and access, and then you wanted to add all of the stuff on there: tuning and optimisation, cost-based optimisation and so on. One of the ways that Oracle became number one in the enterprise relational database market was that it could support row-level locking, so it became the natural choice for SAP R/3 implementations. As R/3 itself, another packaging exercise that we dubbed Enterprise Resource Planning, became an industry standard, so Oracle came to dominate.
In the 1990s middleware wave, we initially saw all sorts of weird and interesting platforms, with various different approaches to integration and object-based programming. But competition between Microsoft and the Java camp led to standardisation up and down the stack. In the Java ecosystem we understood what you needed from a content management server, we understood what you needed from an application server, a portal server and so on, and Java standards emerged which helped that commoditisation process along.
Of course Java middleware was overly complicated. We had full-blown J2EE application servers where perfectly capable servlet engines would have done, but at least they were clear categories in which competition allowed for lower pricing and interoperability. Straightforward comparison makes for easier consumption, which in turn allows markets to commoditise. Prices can come down, and infrastructure becomes cheaper and can therefore become pervasive. Skills become common, and automation is relevant across various platforms, which map clearly onto one another.
The early days of the personal computer revolution in the late 70s and early 80s were beautiful. We had this plethora, this beautiful explosion of so many crazy machines and form factors from so many different vendors: Osborne, Commodore, Sinclair, Apple, Atari, TRS-80, Olivetti, Texas Instruments. There were scores, hundreds of suppliers even, and then the list was whittled down as Windows won and the industry standardised around the Wintel platform, driven by IBM’s dominance of the overall landscape at the time. New vendors emerged to take advantage of this standardisation – see Dell and Compaq.
Big Technology is a packaging exercise. You have a packaging winner and then you have competitors that have to replicate its market choices. And this is something that we see happen again, and again and again in stacks.
Bursts of innovation, followed by periods of consolidation. The industry at the moment is seemingly at the peak of a fragmentation period. We do need to standardise to enable broader adoption, and a good example of an open question in one such area is: what should modern database and integration middleware look like when built on GraphQL?
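To make the GraphQL question a little more concrete, here is a minimal sketch of what a GraphQL integration layer can look like, written in TypeScript against the reference `graphql` npm package. The schema, fields and backing systems are all hypothetical; the point is simply that one typed query surface can front several different datastores and APIs.

```typescript
// A minimal sketch of GraphQL as an integration layer, using the
// reference `graphql` npm package. The schema and resolvers are
// hypothetical: one query type fronting two imaginary backends.
import { graphql, buildSchema } from "graphql";

const schema = buildSchema(`
  type Customer {
    id: ID!
    name: String!
    openTickets: Int!
  }
  type Query {
    customer(id: ID!): Customer
  }
`);

// In a real deployment these resolvers would fan out to, say, a
// relational database and a ticketing API; here they return stubs.
const rootValue = {
  customer: ({ id }: { id: string }) => ({
    id,
    name: "Ada", // would come from the database of record
    openTickets: 2, // would come from a support-system API
  }),
};

async function main() {
  const result = await graphql({
    schema,
    source: `{ customer(id: "42") { name openTickets } }`,
    rootValue,
  });
  console.log(JSON.stringify(result.data));
}

main();
```

Whether middleware like this ends up packaged as a product category in its own right is exactly the kind of standardisation question the industry has yet to settle.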
Stephen recently wrote about some of these smooshing processes in a great post – A Return to the General Purpose Database.
It’s not notable or surprising, therefore, that NoSQL companies emerged to meet demand and were rewarded by the market for that. What is interesting, however, is how many of these once specialized database providers are – as expected – gradually moving back towards more general purpose models.
This is driven by equal parts opportunism and customer demand. Each additional workload that a vendor can target, clearly, is a net new revenue opportunity. But it’s not simply a matter of vendors refusing to leave money on the table.
In many cases, enterprises are pushing for these functional expansions out of a desire to not have to context switch between different datastores, because they want the ability to perform things like analytics on a given in place dataset without having to migrate it, because they want to consolidate the sheer volume of vendors they deal with, or some combination of all of the above.
So general purpose database platforms are making a comeback, a packaging exercise after the fragmentation of the NoSQL era.
We’re seeing other markets where comparisons are becoming easier to make – we had a burst of innovation around static site generators (Jekyll, Gatsby, Hugo etc). This led to a standardisation around development processes and tools, with dynamic capabilities provided by JavaScript and calls to third party APIs. Netlify gave this approach a name – JAMstack. For a good read on SSGs, with some interesting data, you should read this post by Rachel.
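As a minimal sketch of the JAMstack pattern described above – markup prebuilt by a static site generator, with JavaScript and third-party APIs supplying the dynamic parts – here is some client-side TypeScript fetching comments at runtime. The endpoint and data shape are hypothetical; any JSON API would do.

```typescript
// JAMstack in miniature: the page itself is prebuilt by a static site
// generator, and only this dynamic fringe runs in the browser,
// calling out to a (hypothetical) third-party API.
async function renderComments(postId: string): Promise<void> {
  const response = await fetch(`https://api.example.com/comments?post=${postId}`);
  const comments: { author: string; body: string }[] = await response.json();

  const list = document.querySelector("#comments");
  if (!list) return;
  // Real code would sanitise these values before injecting HTML.
  list.innerHTML = comments
    .map((c) => `<li><strong>${c.author}</strong>: ${c.body}</li>`)
    .join("");
}

renderComments("hello-world");
```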
In the world of Kubernetes and associated technologies, as illustrated by the ever so complex Cloud Native Computing Foundation (CNCF) landscape, the industry is on the road to standardisation, partly led by major vendors and partly by customers and engineering-led open source adoption. So we’ll see platforms with guardrails that will allow for some portability and Safe Enterprise Purchasing and deployment, but also the canonisation of Grafana and Prometheus as part of these stacks. Which is to say the CNCF model seems to be working for enterprise customers.
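That canonisation of Prometheus is visible right down at the code level: expose metrics in its text format and the same scrape-and-dashboard tooling applies, whoever wrote the service. Here is a minimal sketch in TypeScript, assuming the `express` and `prom-client` npm packages; the metric name and port are illustrative.

```typescript
// A service that exposes Prometheus-format metrics. Any Prometheus
// server can scrape /metrics, and Grafana dashboards sit on top:
// the packaging win is that the interface is the same everywhere.
import express from "express";
import { Counter, register } from "prom-client";

const requests = new Counter({
  name: "http_requests_total",
  help: "Total HTTP requests handled",
  labelNames: ["route"],
});

const app = express();

app.get("/", (_req, res) => {
  requests.inc({ route: "/" });
  res.send("hello");
});

// The scrape endpoint Prometheus polls on its schedule.
app.get("/metrics", async (_req, res) => {
  res.set("Content-Type", register.contentType);
  res.send(await register.metrics());
});

app.listen(3000);
```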
Observability is a canonical example of smooshing followed by a scramble to refine and redefine categories when it came to system telemetry and how we manage it. Honeycomb blew everything up with a view of Observability as troubleshooting: a tool for developers as operators, for debugging and testing in production, in organisations seeking Production Excellence. Vendors in the APM, log management and distributed tracing markets have all been forced to rethink their platform architectures for the new use cases. We’ve seen multiple acquisitions, and companies have embarked on significant refactoring projects. RedMonk has packaged some research looking at the market for DevOps infrastructure tools, particularly around Observability and CI/CD adjacencies, which are coming together to enable Progressive Delivery patterns.
What about incident management? As Rachel writes:
The incident management space is evolving and converging upon adjacent markets. Monitoring intersects with alerting, alerting moves into the remediation space, and remediation seeks automation. There are no clean categories anymore.
As I have argued above, we’re in a standardisation phase – look no further than the public cloud, where Azure and Google Cloud are increasingly and intentionally creating closer mappings, as far as they can, to market leader AWS’s products.
We need to address the Developer Experience Gap. We can’t simply leave developers and practitioners to spend their lives in a tangle of baling wire, chewing gum and duct tape. The cognitive overheads of fragmentation are very real for individuals and organisations. Choice comes at a cost. We need choice architectures in order to be productive. Flow comes with opinionated platforms.
But here is the thing – there are of course conflicting and competing forces at work. We’re also in, and remain in, a golden era of possibility and invention, driven by the New Kingmaker trifecta of Cloud, Open Source and social (coding). Developers don’t need to ask for permission any more, and software can be a by-product, rather than a goal in itself. A software engineer in Lagos is building the hot new category killer as I write these lines.
So yeah, the industry is always complicated, but it also does stabilise from time to time. It’s a privilege to help people understand this stuff. That’s what RedMonk is for. If you’re interested in a consultation on this please contact [email protected].
Disclosure: AWS, Honeycomb, IBM, Microsoft, and Oracle are RedMonk clients.