My colleague Stephen yesterday published an interesting post, The Difference Between SOA and Microservices Isn’t Size. In it he argues that developer-led adoption of technology – the New Kingmakers thesis – is why microservices are different from SOA.
Apart from being made to feel old – we now live in a world where SOA can be described as “archaic” – I thought it was worth adding some colour about why and how things are different this time around, and why and how microservices are indeed different. There is of course more context we could have pointed to – SOA was itself a rehash of, and reaction to, concepts we’d attempted to make stick during the Object Oriented technology era and the delights of CORBA. We could also call out 4GLs and the idea of accelerated development through reuse.
Technology adoption always has a context, and when it comes to application development, we’re not in Kansas any more.
SOA was an artefact of its time. When SOA was codified, open source was not yet the commonly adopted way of developing software, the Internet was in its infancy, and Waterfall was the commonly accepted approach to developing software. Application development, testing, QA and operations were effectively separate disciplines with their own deployment environments and budgets. There was no social coding. Search-first development wasn’t a thing. Web companies were not making open source contributions as a side effect of their business operations. SOA celebrated reuse, and while there was an idea we might swap out components and services comprising a whole, we certainly weren’t aiming to make code disposable. Nobody expected to use SOA to make multiple code changes to production systems per day. There was no concept of continuous deployment. Netflix was still a hairball. Apps were stateful. Web Scale was barely a thing. Web programming meant HTML not Go. Forking was considered harmful. People were still buying and deploying on-premises Enterprise Applications. Teams were fricking enormous. Containers were some weird crap on mainframes, and maybe Solaris if you were l33t. In his post Stephen mentions that Amazon was the only successful SOA. Indeed. Back then there was no Amazon Web Services, no universal permissionless deployment mechanism.
Fast forward to today. We’re cloud native. Open source has won. Agile has won. DevOps has won. CI/CD is so obviously the right approach. Git and distributed version control systems are very much the new normal. The idea of building services based on black boxes seems faintly ludicrous, although with the rise of serverless we’re getting a taste for it again. We build with an expectation of disposability. The state of the art in testing, for systems that run on the internet, is light years ahead. I would probably go as far as to argue that testing itself is what is different. Better testing approaches, better suited to our deployment models, are why microservices are going to be broadly applicable. This excellent slide deck by Toby Clemson, with input from Martin Fowler, lays out effective microservices testing.
With SOA, on the other hand, we really hadn’t thought through how we needed to change testing at a fundamental level to make everything work in distributed systems engineering. We were still trying to make faster horses. With SOA, testing was something that other people did. With microservices, testing is foundational to the application development process. Good developers take pride in writing good tests. Testing is what you do when you do software engineering. And think of the organisations we had. Huge great unwieldy things.
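To make that concrete, one of the patterns Clemson’s deck describes is the component test: the service is exercised through its real HTTP API, but everything runs in-process, so the test executes in milliseconds with no deployed environment. A minimal sketch in Python follows – the service, endpoint and data here are invented purely for illustration, not taken from the deck:

```python
# A hypothetical component test: spin up a tiny service in-process,
# exercise it over real HTTP, assert on the contract, tear it down.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderHandler(BaseHTTPRequestHandler):
    # Illustrative microservice: serves orders as JSON from a fixed store.
    ORDERS = {"42": {"id": "42", "status": "shipped"}}

    def do_GET(self):
        order = self.ORDERS.get(self.path.rsplit("/", 1)[-1])
        if order is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(order).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging in tests
        pass

def component_test():
    # Bind to an ephemeral port so tests never collide, then call the
    # service exactly the way a real consumer would: over HTTP.
    server = HTTPServer(("127.0.0.1", 0), OrderHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{server.server_port}/orders/42"
        with urllib.request.urlopen(url) as resp:
            assert resp.status == 200
            payload = json.loads(resp.read())
        assert payload["status"] == "shipped"
        return payload
    finally:
        server.shutdown()

component_test()
```

The point isn’t the ten lines of handler code; it’s that the test boundary is the service’s published API, which is exactly the boundary you deploy and version independently.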
“Any organization that designs a system (defined more broadly here than just information systems) will inevitably produce a design whose structure is a copy of the organization’s communication structure.”
Conway’s Law is another reason SOA was so problematic. We had monolithic organisations trying to build distributed software. Today teams are small, they fit around a tapas table, and they invite the product owner to give feedback as they go along, and pay for the wine.
We’re building with an expectation of disposability. With the Docker Pattern of course you’re going to throw stuff away. Enterprises are now getting the message about minimum viable architecture. We don’t expect stuff to last the month, let alone be in production for 5 years.
With SOA and the XML Deathstar we tried to preordain every possible use case, edge case and approach. But that is not how evolution works. The new way is better, and that’s why developers are so enthusiastic about microservices.
As I sign off I should say that I haven’t mentioned any of the downsides of microservices. Distributed computing is hard. Writing good APIs and understanding service boundaries, and bottlenecks, and what you’re going to need to optimise for is hard. A monolith can be the most effective way to establish product market fit. Having come to bury SOA, it’s also important to point out that a lot of the people involved in driving microservices lived through SOA. They learned the lessons and moved the state of the art forward. Progress isn’t linear, but a lot of the ideas behind SOA were good ones. We just weren’t ready for them yet, technically or organisationally. We had trains and canals, not roads to drive on.
For now yes, microservices are on a saturation curve in terms of developer interest.
Paul Johnston says:
July 21, 2017 at 2:09 pm
I love the idea that we are now developing for disposability as a matter of course. If you take a serverless approach this is even more true. Every single piece of logic that you build is very very disposable, although maybe more “easily replaceable”. An advantage of some of the new approaches though is to add the ability to push back the need to rearchitect until later. That’s certainly what I’ve found. If you commoditise the complex parts of cloud (scaling and orchestration) then what’s left lasts longer. The move as I see it is very much towards reduction of internally managed complexity in exchange for an increase in cloud/vendor commoditised complexity. It means systems are built faster and the need to maintain those systems in the same way is reduced.
Jon Brisbin says:
July 21, 2017 at 4:14 pm
Every project I’ve been part of in the last (almost) decade has progressively adopted the century-old writing advice of “kill your darlings.” As time goes on we code more like Hemingway (hopefully without the end result!) than Henry James. The painful enumeration of particulars found in earlier work is increasingly replaced by “For sale: baby shoes. Never worn.” Call it “developing for disposability” or “agile” or “minimum viable architecture” but it amounts to the same thing: Features are King. For good or ill our work will be thrown away and replaced by the next developer so it makes little economic sense to spend “too much” effort addressing edge cases ahead of time. Having cut my chops in the Henry James School of Development it’s been a learning curve for me to adopt the Lost Generation Method of Development.
Chris Swan says:
July 21, 2017 at 5:21 pm
Before EC2 and S3 were six pagers Amazon Web Services was the name given to their SOAP services where you could look up ISBN/ASIN. Those end points gave those of us trying to do SOA things a means to test various tools, libraries and frameworks. So the original AWS was nothing to do with what we now call ‘cloud’.
No mention of REST (in either article)? In many ways the adoption of Microservices over SOA came about by the use of very basic protocols (and the languages and frameworks that spoke them) – JSON data via REST interfaces on HTTP(S) end points and DNS for discovery versus the heavyweight guff that went with SOAP, WSDL and parsing all that XML (not to mention the horrors of UDDI tmodels). Little changed in terms of architecture as SOA became Microservices, but the implementation details moved on a lot.
Of course Amazon had it right all along – they had a basic HTTP end point in addition to their SOAP interface.
July 22, 2017 at 7:08 am
Thank you. Not sure I agree with all of the points made, but it was you and others in the 2002-2004 timeframe who helped us prove that SOA, distributed computing, and now microservices make sense.
The missing concepts around IoT and private micro-clouds that fit on embedded devices are the next wave.
Take a look at testcasecentral.com, as they are pushing a similar philosophy around testing.
July 22, 2017 at 2:14 pm
hey chris. REST as an approach was definitely a big deal in the transitions we’re currently seeing. in 2005 I wrote “SOAP is boring, wake up vendors or get niched” http://redmonk.com/jgovernor/2005/02/10/soap-is-boring-wake-up-big-vendors-or-get-niched/. The eBay deprecating SOAP moment was huge in its time. i think Stephen probably treated that as implicit in his argument, done, something we’d already covered. I didn’t intend my post to be a complete history of everything, but just to provide context. Stephen was pretty clear that the SOAP stack lost because developers didn’t want to use it.
Chedy Missaoui says:
July 23, 2017 at 8:11 pm
I think that mandating the use of XML-based protocols (SOAP, WSDL) accelerated the death of SOA, especially with the fast emergence of mobile/wearable devices. The microservices movement implicitly encourages the use of REST, which is friendlier to mobile devices and more convenient when trying to get things up and running fast.
I am looking forward to the democratization of tools that will allow us to easily build custom binary protocols highly optimized for certain use cases (think Thrift and protobufs); these tools will let us handle more capacity with fewer resources.
Yan de Lima Justino says:
July 24, 2017 at 2:31 am
Why did microservices, an approach to performing independent deploys, become such a shiny object? I’d say that microservices are a good implementation of the service-orientation paradigm. They are a solution-space tool. The principles and constraints associated with the SOA paradigm are a problem-space tool.
This Week in Spring – July 25th, 2017 | Alexius DIAKOGIANNIS says:
July 24, 2017 at 9:21 pm
[…] Governer follows up on Stephen O’Grady’s post on […]