A RedMonk Conversation: Justin Reock on Why Java Devs Use Apache ActiveMQ


In this conversation, Justin Reock, Deputy CTO at DX, chats with RedMonk's Kate Holterhoff about Java development and message-oriented middleware, focusing on Apache ActiveMQ. They discuss the importance of messaging in distributed systems, Justin's experience writing middleware at EarthLink and OpenLogic, and ActiveMQ's advantages for the enterprise. The conversation also covers the history of several related projects, including Apache Geronimo, HornetQ, ServiceMix, Camel, CXF, and Karaf, and it underscores the ongoing need for reliable messaging solutions, especially in the face of emerging technologies like AI.


Transcript

Kate Holterhoff (00:09)
Hello and welcome to this RedMonk conversation. My name is Kate Holterhoff, Senior Analyst at RedMonk. And with me today, I have Justin Reock, Deputy CTO at DX. Justin is an alumnus of Cortex, Gradle, Perforce, OpenLogic, and EarthLink. Justin, thanks so much for joining me on the MonkCast.

Justin Reock (00:25)
Happy to be here.

Kate Holterhoff (00:37)
So I’m excited to have Justin on to continue my series on messaging. And today we’ll be focusing specifically on Apache ActiveMQ. I like to begin these conversations with a broad, high-level question because I am sorry to say that some folks might not be familiar with message-oriented middleware, or MOMs, and some of our audience might even be of the opinion that queues are passé and we’ve entered an era of all streaming or all events for all use cases. So first of all, how do you define messaging? And second, Justin, what do you tell folks who ask you why it is important for distributed systems in 2025?

Justin Reock (01:01)
Yeah, sure. So a couple of really good questions there. I mean, first of all, I think that you can define message oriented middleware as any application that kind of treats the message object as a first class citizen, right? So the core component of any message oriented middleware is going to be one that tries to provide routing,

even I would say infrastructure concerns to a degree, like high availability and that sort of stuff and scaling, but where the ultimate purpose is to take a message from one place and put it somewhere else, right? Which is actually like deceptively complex, but it’s the whole point of doing enterprise integration and all that type of work. Messages themselves, mean, obviously they can take a lot of forms. There are specifications that have been built around the proper anatomy of a message. Like, you know, a JMS message has like a preset set of headers as well

as a pretty flexible set of properties that can describe what’s going on, as well as an actual payload. That payload can be, in the case of JMS, or now I guess we’re all moving towards Jakarta 3.1, any number of Java objects, which can be really useful for two applications that may want to take the entirety of an object and pass it through. The second question is more difficult to answer. I was around during the whole switch from enterprise class message oriented

middleware toward stuff like Kafka. I mean, Kafka came in and definitely just disrupted this entire space. But there’s so much stuff that you can do in terms of, I would say, sort of switches and knobs that are available when you’re dealing with a specification that’s really built around messaging as opposed to just an application that’s built around streaming. Getting tons of data from one place to another with relative accuracy is a good application for streaming.

Being able to set very clear delivery policies, being able to implement a whole bunch of different types of messaging patterns based on the application, having a choice between whether I want queued messages or whether I want to be able to build subscriptions with things like topics, having an ability to scale at the endpoint with things like virtual topics, which ActiveMQ introduced. These are all things that I think are still very, very valuable despite, I think, the streaming solution taking up so much of the share of voice.
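To make those switches and knobs concrete, here is a minimal JMS producer sketch in the spirit of what Justin describes. It assumes the classic javax.jms API and the Apache ActiveMQ client library; the broker URL, queue name, and message property are hypothetical, and a real application would add error handling.

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class OrderSender {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker URL; ActiveMQ's default OpenWire port is 61616.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Point-to-point semantics: each message on the queue goes to one consumer.
        // A topic subscription would use session.createTopic(...) instead.
        Queue orders = session.createQueue("broadband.orders");
        MessageProducer producer = session.createProducer(orders);

        // Delivery policy in miniature: persist the message so it survives a
        // broker restart, and let it expire if nobody consumes it within a minute.
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.setTimeToLive(60_000);

        // JMS headers are managed by the provider; application metadata goes in
        // properties, and the body carries the payload.
        TextMessage message = session.createTextMessage("{\"orderId\": 42}");
        message.setStringProperty("carrier", "SBC");
        producer.send(message);

        connection.close();
    }
}
```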

Thanks.

Kate Holterhoff (03:09)
That’s really helpful. So let’s talk a little bit about your background here. How did you become an expert on all things, I guess, open source messaging middleware? Yeah,

why do I trust you?

Justin Reock (03:21)
Well, I didn’t say that.

It’s actually one of these like accidental directions that seems to kind of happen. I was working for EarthLink. You mentioned that as part of the background. Part of my job was building middleware for when somebody would place an order for broadband. EarthLink obviously was a private ISP. We didn’t own any of the actual copper. And so if somebody wanted a DSL line or some sort of broadband solution like that, we would have to ship that order off to the local exchange

carrier that was responsible for that region. So, like, you know, AT&T or SBC or any of these other companies that were actually capable of, you know, putting the signal on the line. We would then have a layer above that where it was like an EarthLink subscription that was going over this line. So somehow the software had to be able to place those orders. Somebody would come in and order a new line. And then if they were in, like, SBC territory, we’d have to send a message off to SBC to let them know, hey, we need to go ahead and provision this line, and get status updates and things like that. So the original solution

was built on Oracle BEA WebLogic. So, like, a SOAP and BPEL implementation, sending SOAP payloads off. And it was very hardware intensive because it was a bloated solution. I don’t know how else to really say that. And so I put together a prototype, a small proof of concept that would effectively replace the BEA WebLogic with a combination of ActiveMQ acting as the messaging layer and then Apache Camel. And those two projects

are very well aligned. In fact, you can launch Camel routes directly inside of the ActiveMQ broker if you want to, to do point-to-point connections between brokers and things like that. And Camel is a really friendly domain-specific language for implementing enterprise integration patterns. And specifically, the EIPs are from the book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf.
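As a rough illustration of the kind of Camel work described here, below is a hypothetical content-based-router route in Camel's Java DSL. It assumes a Camel runtime with the ActiveMQ component configured on the classpath; the queue names and header are made up for the example.

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class OrderRoutes {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Content-based router EIP: look at a header on each incoming
                // order and hand it to the queue for the right carrier.
                from("activemq:queue:broadband.orders")
                    .choice()
                        .when(header("carrier").isEqualTo("SBC"))
                            .to("activemq:queue:orders.sbc")
                        .when(header("carrier").isEqualTo("ATT"))
                            .to("activemq:queue:orders.att")
                        .otherwise()
                            .to("activemq:queue:orders.manual-review");
            }
        });
        context.start();
        Thread.sleep(10_000); // keep the JVM alive briefly so the route can consume
        context.stop();
    }
}
```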

So this was a big success. I put together this prototype. It worked. We were able to rip out all of the Oracle BEA WebLogic stuff that was doing that middleware and replace it with free and open source stuff, which was Apache Camel and ActiveMQ. And we reduced the code base. I mean, it was like almost half a million lines of BEA and SOAP type of code and BPEL code and logic and everything like that. We reduced that down to like 30,000 lines of Camel, maybe even less than that. So it was a big simplification of the code base and we reduced the hardware

footprint by like 90% because these solutions are way more streamlined than what you were getting from BEA WebLogic. So that was successful, and I got a chance to kind of get a team of people who would go and roll this stuff out. I taught them all Camel, I taught them how ActiveMQ worked, and I got to open basically an open source program management office, before we were calling it that, within EarthLink, which would take me to the next lily pad in the career, which was OpenLogic. So that was kind of the first foray into it and kind of proving it out and starting to learn about it.

OpenLogic’s business model was to provide a number of enterprise class services around community open source so that you could have businesses taking the community editions of these open source projects but still be able to get the same type of enterprise class services that they’d be looking for from a commercial product. So there’s things like 24-7 support with SLOs, training, and general consulting around a lot of these technologies. So I came in as kind of like a resident ActiveMQ expert acting in sort of an architect capacity.

I built OpenLogic’s training on ActiveMQ, which I was actually just thumbing through before this to see what did I really do? And it was like 550 slides about ActiveMQ. I was like, wow, I built some beefy training here. And I got a chance to go and deliver that, and it ended up being probably the most popular training that we had at OpenLogic. So I got to deliver kind of this full week course on ActiveMQ to a number of big enterprise clients, which really solidified

I guess the knowledge for me, getting to see the way that a lot of this technology was implemented across a bunch of different enterprises. I still have a 24 node cluster running in a train system that I probably shouldn’t mention which one it is, but it’s still running today. If that broker system goes down, all the trains in that entire city stop. So it was kind of a high stakes implementation. But anyway, that’s the right to play, I guess. I got accidentally very deep into ActiveMQ, but ended up really getting pretty passionate about middleware and implementing these types of patterns.

Kate Holterhoff (07:30)
I don’t think you’re the only person that stumbles into this domain; that’s a story that I’ve been hearing a lot from folks who end up being passionate about this space. Okay, so I’m interested in why you chose ActiveMQ, because in this series, we really haven’t spoken about it. There’s a lot of other MQs out there. Many of them are proprietary, others are open source. Talk to me about why you elected ActiveMQ.

Justin Reock (07:54)
Yeah, it’s the fact that it was so kind of well positioned for the enterprise, right? It really is because of the JMS specification and kind of the fact that we can then like write these implementations and brokers that match a specification. ActiveMQ…

comprehensively fills the entire spec. Every bit of the JMS spec can be realized within ActiveMQ. And it took the spec even further by adding all types of policies to the way that the routing works, adding all types of different configuration options for performance optimization. It’s really one of the most highly configurable solutions out there. And so it can adapt really well

to most enterprise needs and use cases. So that was a lot of it. It’s highly observable with a lot of different options for monitoring. It can be deployed across a lot of different types of commodity hardware. The high availability solution is super straightforward and easy to implement. And there’s sort of a ubiquity between enterprise concerns and what JMS was bringing to the table at the time. It was obvious. I mean, you know, JMS is part

of Java Enterprise Edition, or I guess it was J2EE back then, now it’s Jakarta EE. But the enterprise focus of JMS and then the full implementation of JMS by ActiveMQ is what made it kind of a no-brainer to rip out an existing enterprise solution like Oracle BEA and to replace it with something that wouldn’t fall short. Because again, it’s kind of informed by the JMS spec itself.
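For a flavor of that configurability, here is a small sketch of an embedded broker with a per-destination policy, using the org.apache.activemq broker API. The destination pattern and limits are arbitrary; real deployments usually express the same settings in the broker's XML configuration.

```java
import java.util.Collections;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class PolicyConfiguredBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("policy-demo");
        broker.addConnector("tcp://0.0.0.0:61616");

        // One of ActiveMQ's "switches and knobs": cap memory for matching queues
        // and keep producer flow control on so fast producers get throttled
        // instead of overwhelming the broker. Values are illustrative only.
        PolicyEntry orderQueues = new PolicyEntry();
        orderQueues.setQueue("orders.>");               // wildcard destination match
        orderQueues.setMemoryLimit(64L * 1024 * 1024);  // 64 MB per matching queue
        orderQueues.setProducerFlowControl(true);

        PolicyMap policies = new PolicyMap();
        policies.setPolicyEntries(Collections.singletonList(orderQueues));
        broker.setDestinationPolicy(policies);

        broker.start();
        broker.waitUntilStopped();
    }
}
```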

Kate Holterhoff (09:24)
Yeah, and I might have been able to anticipate this because Justin and I run into each other pretty often at conferences. The two most recent have been Devnexus, a Java conference, and devopsdays Atlanta. So I feel like your answer here really aligns well with what I know about your interests.

Justin Reock (09:40)
You know, again, it’s having been so enterprise focused, and my background is in Java development. You know, I mean, before kind of moving to the ActiveMQ stuff at EarthLink, I was just writing a bunch of Java EE code in general. And I think that it’s fun to solve problems for scrappy startups and stuff like that, but the real meaty challenges are writing and building solutions that can scale to something enterprise grade. I just think it’s a bigger challenge, and so for me it’s more compelling.

Kate Holterhoff (10:10)
OK, that’s a very good answer. And I’m interested in the fact that it is an enterprise solution, because, well, I want to take this a couple of different ways. But maybe we can start with the foundation model. So ActiveMQ is an Apache project. And some of the other MQs are under OASIS. So I’m curious. What does the Apache experience offer enterprise customers? And do you have any thoughts on how well they can support MQs specifically?

Justin Reock (10:39)
Yeah, that’s a great question. I mean, I think a lot of what is probably most attractive is just the Apache licensing, right? I mean, I think it’s highly permissive, and we can have a whole argument about permissive versus non-permissive licenses. In most cases, I actually prefer GPL from an innovation perspective. But in terms of just being able to freely deploy this thing any way you want, to be able to modify it, or even embed ActiveMQ brokers within existing Java applications,

which is totally a real use case. Like, you can literally have your Java app be like, okay, I need to spin up a broker inside my app and start doing either intra-app communication or acting as an internal bus for the app itself. There’s just so much flexibility in the way that enterprises can deploy this stuff. And of course they don’t have to tell anybody that they’re deploying it, right? Which is really helpful for folks who are still very sensitive about their use of open source in general. It’s interesting looking at some of these

that are backed by standards groups. I mean, if you look at what ActiveMQ can do, one of its big selling points is that it is a multi-protocol broker. So while it has its own built-in wire protocol called OpenWire, which can be used for sending JMS messages pretty easily back and forth between ActiveMQ brokers, really the only other OpenWire clients that I know of, it is also fully compatible with AMQP 1.0. It’s compatible with STOMP. It’s compatible with WebSockets. So I don’t think that it necessarily

needs the governance that’s gonna come with a standards group like OASIS. I think that as an Apache project, it can kind of stand on its own. But it integrates really, really well, even with projects that are outside of Apache’s purview. Like for instance, Hawtio is a great monitoring and console alternative for ActiveMQ. And there’s native support for ActiveMQ built into Hawtio.
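A minimal sketch of the embedding pattern Justin mentions, assuming the ActiveMQ broker and client libraries are on the classpath. The broker name, queue, and connector URIs are hypothetical, and the AMQP and STOMP connectors are shown commented out because they need the corresponding transport modules.

```java
import javax.jms.Connection;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerApp {
    public static void main(String[] args) throws Exception {
        // Spin up a broker inside this JVM; no separate install required.
        BrokerService broker = new BrokerService();
        broker.setBrokerName("in-app");
        broker.setPersistent(false);                 // in-memory only, for illustration
        broker.addConnector("tcp://0.0.0.0:61616");  // OpenWire for external clients
        // broker.addConnector("amqp://0.0.0.0:5672");   // needs activemq-amqp on the classpath
        // broker.addConnector("stomp://0.0.0.0:61613"); // needs activemq-stomp on the classpath
        broker.start();

        // The vm:// transport lets code in the same JVM treat the broker as an
        // internal bus without touching the network.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("vm://in-app");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        TextMessage ping = session.createTextMessage("hello from inside the app");
        session.createProducer(session.createQueue("in-app.events")).send(ping);

        connection.close();
        broker.stop();
    }
}
```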

Kate Holterhoff (12:21)
That’s a really good point. So OASIS is a standards group. Do you know if they’re also a foundation? I’m on their website, and it doesn’t say that. Okay.

Justin Reock (12:29)
I don’t think that they are. Unless things have changed since then, yeah, it really is more of a standards group. Which, you know, it makes sense to have something like a messaging broker backed by a standards group, because standardization is absolutely necessary when you want to be able to build, you know, heterogeneous but compatible solutions that can read, you know, messages that are maybe not originating from the same broker standard.

Kate Holterhoff (12:33)
I think you’re right. Right, so maybe there is an advantage to having an MQ through a standards consortium like OASIS. And do you know if Apache does something similar? We think of Apache as a foundation. Are they also a sort of ISO-approved standards body?

Justin Reock (13:09)
I wouldn’t say that, no. I mean, I think that besides the Apache license, which is pretty ubiquitous at this point, I don’t know how that would even really work. I mean, you’re talking about hundreds of top level Apache projects at this point, sort of loosely categorized into data solutions, messaging solutions, and obviously the Apache web server, still a highly utilized web server. But there’s such a broad spectrum of Apache projects out there.

I mean, they do set some, but see, they even break their own standards. I mean, you kind of know when you’re cracking open an Apache project, like, okay, how is the code gonna be laid out? Especially, you know, the Java-driven ones. I mean, they have kind of their own best practices that they use, and try to stick to, for actually developing these apps. But then you see them break those standards all the time. And there’s not one consistent language either, right? I mean, the Apache web server is written in C, you know, while Apache Cassandra and Apache ActiveMQ are written in Java. So I think they would struggle with that,

actually. I think a lot of what you get from the Apache Foundation is the surrounding community. You know, they used to run ApacheCon, now they call that Community Over Code, which is still a very important open source conference that’s out there. In my experience, you know, reaching out to the various sub-communities that sit over the top level projects has always been very positive.

Kate Holterhoff (14:11)
Yeah, yeah.

Justin Reock (14:30)
You definitely have very passionate people in a lot of these communities, and that can come across sometimes as being maybe a little aggressive or whatever, but I’ve never really experienced any toxicity from these communities. I think they’re very welcoming, and it was essential for me, as I was really coming up to speed, and even throughout the course of trying to implement ActiveMQ in various niche cases and use cases, to be able to speak directly to that community. They always availed themselves easily.

Kate Holterhoff (14:57)
Yeah, and you were a contributor to ActiveMQ as well, right? Can you talk at all about what you contributed?

Justin Reock (15:03)
Yeah, again, accidental, right? Like, you would run into these corner cases that enterprises often sussed out when you’re using this technology at scale, and something might not work properly. You’d run into bugs and things like that that needed to be addressed. And part of OpenLogic’s business model was to do what we called shepherding these bugs into the community. So if, you know, a customer or client ran into an actual bug and it wasn’t already addressed by the community, we had this whole process of opening up a ticket with the community

and providing a lot of context, sometimes even just providing the fix and saying, hey, community, here’s what we think will fix this. Here’s the source for it. Go and just implement it. The more you could do really to take the load off of the community, because the community wants to solve bugs. They want to close stuff out of their backlog. But when you have these popular projects, those backlogs get really big really quick. So the more you can do to reduce friction in getting those incidents resolved, the better. So a lot of it was channeling through these sources, doing the bug reporting and stuff.

like just direct things that I ran into. Like, off the top of my head, there was a limitation in the JAAS security module that came with ActiveMQ, which caused it actually to be pretty easy to break RBAC in terms of the internal ActiveMQ console. Now, it’s not good practice to actually just deploy the default console for ActiveMQ within the broker instance itself. The better practice is to actually split that console out and host it decoupled.

But nobody does that, because just out of the box when you turn up the broker you have this console, and people would just kind of do that. So I found a few bugs in that security layer that I actually pushed out to the community in the form of a pull request that got accepted. There were limitations also in the LevelDB persistence store, but they ended up deprecating that whole thing anyway. So I did make some contributions there, but then that whole part got deprecated. And then there was the JDBC persistence layer. Again, ActiveMQ has so many different switches and knobs.

A lot of these brokers are going to say, okay, this is how you must do persistence. But ActiveMQ had its own persistence store, KahaDB. It brought in LevelDB as an optional persistence store for better scale. And you could use JDBC if you wanted to use just a traditional SQL database as your backing persistence. And then you could even hybridize that. Like, there were some issues with the high availability model with KahaDB, depending on the underlying file system that you’re using. But KahaDB is a lot faster as far as a persistence store than trying to use a SQL database

like PostgreSQL or MySQL.

So you can hybridize it. You can actually create a strategy where you can create high availability by creating a row lease lock, effectively, using a JDBC database, but then still have the actual messages themselves living in the extents of KahaDB. And so I made some contributions as well to the persistence layer for JDBC, because there were some non-standard things happening there too. So it really was more ad hoc though, based on

customer and client needs. I wouldn’t consider myself a core contributor to that project, but I did get a chance to meet and work with some of the core contributors from that project.
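As a rough sketch of the persistence choices discussed above, here is an embedded broker wired to KahaDB through the org.apache.activemq API. The directory is arbitrary, and the hybrid setup Justin describes (a JDBC lease lock for high availability with the messages themselves in KahaDB) would need additional locker configuration not shown here.

```java
import java.io.File;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class PersistenceChoiceBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("store-demo");

        // KahaDB: the default file-based store, generally faster than routing
        // every message through a SQL database.
        KahaDBPersistenceAdapter kahaDb = new KahaDBPersistenceAdapter();
        kahaDb.setDirectory(new File("data/kahadb"));
        broker.setPersistenceAdapter(kahaDb);

        // A JDBC store against PostgreSQL or MySQL would instead use
        // org.apache.activemq.store.jdbc.JDBCPersistenceAdapter with a DataSource.

        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```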

Kate Holterhoff (18:04)
I mean, that’s great. At RedMonk, we’re interested in the practitioner story. You are absolutely a practitioner. So I’m excited to hear your engagement with this and how you not only impacted the project, but how it impacted the real work that you were doing. So not just hypotheticals here, not just ivory tower academic stuff going on. I mean, you were actually in the trenches making this work for actual enterprises.

Justin Reock (18:26)
Yeah, I’ve set up quite a number of instances and a whole lot of weird environments at this point. But again, ActiveMQ just like allowed for that flexibility. I mean, I think having Java as a base and being able to run inside the JVM already provides a lot of abstraction. But then on the other side, having like…

community provided system wrappers for Windows. If you needed to run it as a Windows service, you could. Letting the persistence store live in a lot of different environments, whether on-prem or cloud, dealing with different file sharing protocols like NFS and that sort of thing, allows you to deploy this in the trenches, so to speak, in a lot of different corner cases and weird use cases.

Kate Holterhoff (19:04)
Okay, so you were pushing these when you were at OpenLogic or at EarthLink?

Justin Reock (19:09)
Well, we were really, really pushing this stuff at OpenLogic, yeah. EarthLink was the first time that I had a chance to do a real enterprise implementation, and we saw great results from that. But it wasn’t until we started doing these deployments at OpenLogic, you could imagine, right? We had hundreds of customers with all types of different profiles, most of them in the enterprise, and they would come to us for this ActiveMQ training. They’d come to us for actual consulting and implementation guidance. Sometimes even just hiring us to come in and just do the implementation for

Kate Holterhoff (19:12)
Okay.

Justin Reock (19:38)
them, like literally go on-prem, stand up the brokers and just get them set up and ready to go for the organization. So yeah, I’ve gotten to see this technology running in a lot of different use cases and environments.

Kate Holterhoff (19:50)
Right.

And what years were you working at OpenLogic?

Justin Reock (19:53)

Yeah, so this would have been, gosh, mid-aughts. So like 2005, and I was there for, gosh, almost 10 years. So this is around mid-aughts to around 2015.

Kate Holterhoff (20:01)
Awesome.

Okay,

and I love this because you were in the thick of it when ActiveMQ was like the thing, right? So just to give a little bit of history here, so ActiveMQ as a project was originally created by LogicBlaze in 2004 as an open source message broker hosted by CodeHaus. Now I don’t know much about CodeHaus, are you familiar with them?

Justin Reock (20:24)
Gosh, it has been a long time. Yeah, I mean, they were certainly a community, but really more like a repository, I would say, in terms of making it pretty easy to get your hands on high quality open source.

Kate Holterhoff (20:26)
Okay.

Got it.

Very cool,

And then LogicBlaze was acquired by IONA, and of course they worked in the CORBA space, which was super popular in the late 90s. Did you also intersect with CORBA? So that’s an acronym for Common Object Request Broker Architecture. Is that something that you’re familiar with?

Justin Reock (20:59)
I’m familiar with CORBA. I never was a CORBA developer, but kind of in the context of understanding the broader messaging landscape, you still come across it every now and then. So ActiveMQ’s inception is actually very interesting. Basically, you’re familiar with Red Hat and full Java application server specifications and things like that.

Kate Holterhoff (21:00)
Yeah.

I see.

Justin Reock (21:23)
Apache wanted to create a full Java application server, a Java EE type of framework, that was going to be under the Apache license. So they created a probably not that well-known project called Apache Geronimo. And Apache Geronimo was supposed to be like JBoss, effectively, but instead of coming from the Red Hat community license, it was supposed to be coming from Apache. And they built Geronimo, they built a full Java application server.

And that included having to have a JMS engine, right? Because that’s part of the Java EE specification. So, like, JBoss had HornetQ, and then actually the Artemis project from ActiveMQ would import a ton of HornetQ’s internal logic into it, because there was sort of a reaching across the aisle between Red Hat and ActiveMQ. Like, Red Hat was supporting... so it’s all very convoluted, but FuseSource built an ESB, an enterprise service bus, off

of Apache technology, including Camel and ActiveMQ. And so they were trying to build a full enterprise service bus type of capability. Red Hat ended up buying FuseSource. And then the FuseSource technology was being maintained by Red Hat. So then Red Hat found itself in a weird scenario where it was basically supporting two different message brokers. It had HornetQ in JBoss. And it had ActiveMQ that was part of the Fuse stack. And so they said, well, why are we supporting two of these? Why don’t we just

combine the best parts of HornetQ with the best parts of ActiveMQ and that’s where the Artemis broker came from.

Kind of rewinding back to the whole Geronimo story, the web application server that was Geronimo never really took off, but ActiveMQ as a subcomponent of Geronimo got really, really popular. And so that project ended up being spun out into its own top level project, and had a much longer lifespan, I mean, it’s obviously still being used today, than the Geronimo web server. So that’s the origin story of ActiveMQ. It was really just meant to check the box for the JMS specification inside of Geronimo, but it ended up becoming way more popular than Geronimo

itself.

Kate Holterhoff (23:18)
That’s super helpful. And that’s why I’ve got you on here, to take this convoluted story and, you know, ELI5 it for people like me.

Justin Reock (23:24)
What? I’m not

Kate Holterhoff (23:25)
I am interested in hearing more about HornetQ. Can you talk about this?

Justin Reock (23:29)
Yeah. So HornetQ is the message broker that shipped with the JBoss EAP server, the JBoss Enterprise Application Platform. Yeah, because JBoss, you know, in order to be a full Java application server, you have to have a JMS component. I mean, really just, can you provide every specification that was part of, at the time, J2EE? And JMS is one of those specs. So HornetQ was the message broker that shipped

with JBoss and versions of WildFly as well. It only followed the JMS 2.0 spec and used a different internal wire protocol. You also couldn’t easily split the server out from the JBoss application server. So it was like, the nice thing about ActiveMQ is that if I wanna deploy 300 nodes of it standalone, just directly from a JVM, I can do that, because it’s a standalone project.

It’s very simple.

The internal messaging routing and stuff like that that was in HornetQ generally performed better under benchmarks than ActiveMQ did. And there was a lot of talk about splitting it out into its own instance and things like that as its own top level project. That never really happened. And then as we started to converge on this idea of Artemis, actually fun fact, the original project was called Apollo, who’s Artemis’ brother in

Greek mythology, the Apollo project didn’t really take off. They kind of abandoned a lot of the internal stuff that Apollo was supposed to do and then ended up taking some parts of that though and putting it into Artemis.

Kate Holterhoff (25:10)
Okay, super helpful. Now I am interested in, again, digging into this relationship between Java and ActiveMQ. So would you say that for anyone who has a Java application, it would behoove them to use ActiveMQ as opposed to a different message queue?

Justin Reock (25:27)
If you really want JMS, as opposed to just messaging, if you really want the full standard for JMS, yeah, definitely. But let me be clear too, the client library for ActiveMQ... if I had to sum up ActiveMQ’s benefits in one word, it’s flexibility, no question. It’s so much more configurable than any other broker out there that I know of. It’s so much more flexible in terms of what types of clients can communicate with it.

Kate Holterhoff (25:31)
Okay.

Justin Reock (25:55)
There’s native integration in Java for ActiveMQ via just the JMS specification, or through Spring Messaging or any of that. There’s also an actual ActiveMQ Java client library, an OpenWire client, that can do more. You can effectively have more flexibility in the way that you set parameters on messages and the way that you want to direct behavior like the dead letter queue and that sort of thing. In JMS, the client should be able to direct the broker, or inform it, as to the

message routing patterns and stuff like that. And so the ActiveMQ client library for Java provides more of those options. But it also has client libraries for .NET and for JavaScript and for C and for PHP and for Perl and for all these other different languages that were popular at the time. So it was like, you could in like 10 minutes spin up a simple setup that would allow you to stand up a message broker and connect a PHP application to a Java application and have

them route data to each other, you know, using ActiveMQ as a broker. So certainly, you know, there’s a lot of benefits if you want JMS and you’re writing a Java application to use ActiveMQ, but it’s not a limitation, right? I mean, if you have .NET applications and you want a good open source, spec-driven broker that’s really reliable, ActiveMQ is a great choice for that too. I definitely set up many ActiveMQ brokers that were used for .NET applications and C# applications

and that sort of thing.
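To illustrate the kind of client-side knobs mentioned here, below is a hedged sketch of a consumer that tunes the ActiveMQ redelivery policy so that repeatedly failing messages end up on the dead letter queue. The broker URL, queue name, and limits are hypothetical.

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryAwareConsumer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Beyond plain JMS: tell the ActiveMQ client how many times to retry a
        // failed message before the broker routes it to the dead letter queue.
        RedeliveryPolicy redelivery = factory.getRedeliveryPolicy();
        redelivery.setMaximumRedeliveries(3);
        redelivery.setInitialRedeliveryDelay(1_000);
        redelivery.setUseExponentialBackOff(true);

        Connection connection = factory.createConnection();
        connection.start();
        // A transacted session: rolling back counts as a failed delivery attempt.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("broadband.orders"));

        Message message = consumer.receive(5_000);
        if (message != null) {
            try {
                // ... process the message here ...
                session.commit();
            } catch (Exception e) {
                // Once the redelivery limit is exhausted, the broker moves the
                // message to the default dead letter queue (ActiveMQ.DLQ).
                session.rollback();
            }
        }
        connection.close();
    }
}
```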

Kate Holterhoff (27:20)
Excellent. And to continue with the convoluted integration story, I understand that ActiveMQ is used by Apache ServiceMix.

Can you talk at all about that?

Justin Reock (27:29)
Yeah, yes I can.

Actually, ServiceMix was the first prototype at EarthLink, because Oracle BEA WebLogic provides basically an enterprise service bus type of pattern, an ESB pattern. And my original thought was, well, why don’t we just lift and shift, you know, one ESB out for another, especially because it’s all loose coupling through API endpoints anyway. So we should theoretically be able to do that. ServiceMix is an ESB. It combines Apache Karaf, which is like the engine

that actually deploys all of the objects, like the runtime. It has Camel built into it also as its enterprise integration pattern framework, and it has ActiveMQ to actually provide messaging. It shoves all of that into one single container, which is Karaf.

To back up just a second, there’s something called OSGi, which was, before microservices were cool, kind of a way of Java having a specification for microservices. And they were very much Java style microservices. You’d have an application, and that little app could have a life cycle to it, like start, stop, and everything like that. There was a standard protocol for controlling the life cycle of that app. And it was easy, because of this sort of containerization, which really was a new type of application container that’s OSGi, to even move these workloads between multiple nodes, which was the dream

of ServiceMix, right? It was like, let’s take Apache Karaf, which is an OSGi implementation and container, let’s launch ActiveMQ inside of there, let’s put Camel inside of there, so that we can have this nice, big, open source ESB that is ServiceMix. The problem was, it’s hard to scale. So, like, when you’ve got the broker

and the Camel spec and the runtime for all the Camel routes and applications and everything like that living in one single JVM, which you had to, because at the end of the day, ServiceMix is still just a Java application that’s running inside a JVM.

Just like it was difficult to break HornetQ out of JBoss and try to scale the broker model that way, it was like, well, we can have ActiveMQ running inside ServiceMix, but most of the time, people were just interested in having an OSGi container. So the first thing they would do is turn off ActiveMQ, because once you start actually pummeling that thing with messages, it becomes very expensive, right? And it’s tough to partition resources specifically for other parts of what was running inside of Karaf.

The dream of ServiceMix was actually sort of better than the implementation. I evangelized that technology for people who were needing a relatively low traffic ESB solution and were concerned more about specification, which obviously in the enterprise is really important. It is a good solution for that. But once you start hitting more than 100 messages a second, which is pretty low throughput compared to a lot of the systems that we were building, you’d

split ActiveMQ out of there anyway. And then it was like, well, why don’t you just implement Camel separately? And why don’t you use ActiveMQ as a standalone broker, so that you don’t have to spend a ton of energy learning the OSGi specification and understanding how that was different than doing traditional Java deploys?
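For context on the OSGi lifecycle mentioned above, here is a minimal bundle activator sketch using the standard org.osgi.framework API; the class and log messages are made up, and in Karaf this would be packaged as a bundle with the activator declared in its manifest.

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// A bundle is a small Java module with explicit start/stop hooks that a
// container such as Apache Karaf manages, much like Justin describes.
public class GreetingActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) throws Exception {
        // Called when the container starts the bundle.
        System.out.println("Greeting bundle started");
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        // Called when the bundle is stopped or the container shuts down.
        System.out.println("Greeting bundle stopped");
    }
}
```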

Kate Holterhoff (30:43)
So I have so many more questions. Again, this is a very convoluted story in the sense of acquisitions and acronyms and just the evolution of these tools and specifications. But we’ve hit on the history quite a bit. And you were there from the beginning. Let’s talk about where we are today and maybe start to think a little bit about the future. So would you say that ActiveMQ is still

something that folks are looking to as a first choice? We’ve mentioned that we’re in the Kafka era, but that doesn’t mean that MQs are going anywhere. Where does ActiveMQ sit today?

Justin Reock (31:18)
So first of all, there are a ton of brokers that are just still out there, right? They’ve been out there for years. It’s very reliable tech. It’s very easy to deploy tech and it scales well. So there’s largely not a lot of reasons to rip it out. Once you’ve got it in place, it works well, it’s very robust, and it’s now supported by

many, many years of development. So it’s a rich project and I’d say a very viable one.

I think that if you are intentional about the use case and about the reason that you want to stand up a message broker in the first place, then you’ll find yourself still seeing a lot of benefits to standing up something that’s a traditional message oriented middleware following a spec like JMS, as opposed to standing up a streaming solution like Kafka. I think it just comes down to overhead, right? I mean, don’t get me wrong, Kafka

is insanely good technology for what it is, right? It’s excellent. And I mean, there’s a reason that, you know, Confluent is doing so well as a company, especially amongst fintechs and banking and stuff like that. It’s an excellent platform for streaming. But in order to get a Kafka implementation even working at the most minimal point,

you need to have at least three brokers set up, because you need at minimum a quorum, and you can’t really run a quorum with only two brokers, right? A quorum is half plus one. So in order to have an effective quorum for high availability, you need at least three Kafka brokers, because half plus one of three is two, so you still have an effective quorum even if one broker fails. With only two brokers, a single failure leaves one broker standing alone, and that’s no longer a highly available solution, right? So Kafka brokers themselves have to be set up in a way that you can

cluster them with a quorum. And then even past that, they’re really only good for streaming use cases. Which is probably too much for a lot of folks who just need to do some traditional work, putting some glue around some applications and shuffling data across the business, right? I mean, if you need to be passing 100,000 messages a second, and you need to do that with relative confidence, Kafka actually still doesn’t have

100% only-once messaging delivery. It’s still at least once because of the way that these clusters work. So you may end up redelivering messages or actually duplicating messages. Whereas you can be very precise with a non-streaming solution like ActiveMQ that will allow you to build a messaging pattern where you can guarantee only-once delivery of a message, even if you’re using a scaled solution like a network of brokers or something like that. So all that is to say

that if you’re really intentional about your use case and you’re not just doing Kafka because you want to do Kafka and you think that’s the way to go, then yes, absolutely, ActiveMQ is a great first choice for a message broker, because it’s, again, very, very flexible and adapts to most enterprise environments. It’s robust. It’s rigid. You have a lot of levers to pull in terms of what your priorities are, like speed versus delivery confidence. And there are so many different patterns that can be

extended from ActiveMQ using Apache Camel with very minimal code. So it’s very low effort to build highly sophisticated enterprise routing cases. Now, if you’re a scrappy startup and you’re trying to stream messages for some reason, maybe you’re trying to train an AI model or something like that, like everybody’s doing today, it’s not that you can’t build those same patterns with ActiveMQ. I mean, you can.

But on the flip side, all of that additional configurability and feature set and all that kind of stuff becomes ActiveMQ’s overhead, right? You don’t really need that if you’re just trying to get a bunch of data from one place to another. But if you have a whole bunch of disparate endpoints, a whole lot of different heterogeneous systems, and you’re dealing with traditional enterprise integration problems, I don’t think you need anything besides ActiveMQ and Camel as a combination. It’s very, very powerful and can give you really all that you need.

Kate Holterhoff (35:25)
I like that answer. And as we are looking ahead, of course, things are moving very fast in our domain. What would you say about the future of messaging just as something that needs to occur with computers? Do you think that the paradigm is going to shift because of AI and the fact that everyone’s going to have these sort of bespoke SaaS solutions that I’m hearing predicted? Or do you think that this is something that is tried and true?

We won’t need to reinvent this particular wheel.

Justin Reock (35:52)
It’s a great question. And I’ll say what I’ve always said, which is that, at least in the context of AI, anybody who says that they know what things are going to look like a year from now is lying. I mean, it’s just moving too quickly to try to accurately predict. But in terms of, like, will enterprises continue to have a need to take data from one place and put it somewhere else? Yeah, absolutely. I mean, again, not to oversimplify it, but that is the core,

the whole point, of any type of integration or messaging solution: getting data from one system and putting it somewhere else, and doing it reliably, and doing it in a way that can be easily observed, and where we can guarantee delivery, and where we can have fault tolerance and all these things. I don’t think that that need in the enterprise is going to go away anytime soon. And I don’t know that what we’re doing with GenAI changes that,

beyond a difference in the way that we need to get data into the model to train it, which, don’t get me wrong, don’t use ActiveMQ for that, that’s a streaming use case for sure. But I don’t think that more basic problem of being able to do ETL type patterns and to shuffle data around between disparate heterogeneous systems is going away. In fact, I think we’re seeing more of a need for that as the tool sprawl and the available

backend options that we have to build out these platforms and things like that, we’re just increasing the amount of software that’s out there. And every time you do that, you increase the overall complexity of just sort of the application landscape. And so having something that can act as sort of like a spinal cord between all of those disparate systems becomes more and more important.

Kate Holterhoff (37:21)
OK. And I certainly won’t ask you to predict anything that far down the road. I agree that you’re setting yourself up for failure in that

case. Here’s what I do like about our conversation. I’m thinking through the spectrum of conversations I’ve had about messaging so far. And when I talk to folks like Andy Stanford Clark,

Justin Reock (37:30)
Absolutely.

Kate Holterhoff (37:42)
we began by talking about the history of MQs in the airline industry. And you have brought in using MQs for trains. And I feel like the next conversation I have, I need to have someone who’s using messaging in automobiles, right? And then it’ll be like the John Candy movie from the 80s, right? Maybe these are just the jokes I tell myself in my head. Okay, so this has been an amazing conversation. Super appreciate you coming on here to give

Justin Reock (37:57)
Yep.

Kate Holterhoff (38:07)
me.

I guess just real world examples of what this looks like, ActiveMQ as an enterprise solution, also something that is deeply important to the open source community, something that is still very relevant and being used in places where people don’t always recognize it, right?

Justin Reock (38:23)
Absolutely, happy to talk about it. This technology was one of those ones that, even though I found myself sort of accidentally stumbling into it, had a huge impact on my career path and kind of where my overall career as a technologist would take me. So it’s a technology that’s very near and dear to my heart, even though I don’t do quite as much work with it these days as I used to.

Kate Holterhoff (38:46)
I love

that. Okay, so before we go, Justin, how can folks hear more from you? Are you gonna be at any more conferences this year so that I can run into you? And what are your preferred social channels?

Justin Reock (38:56)
Gosh, yeah, so I’m at conferences, it seems like, almost nonstop right now. That’s a lot of the circuit. So most of my work now is centered around developer experience and the knock-on effect on developer productivity. And there’s a lot to talk about in this space because it’s a very compelling and very difficult problem. And even though we’ve been working on it for decades, I feel like we still aren’t quite there. There’s still so many unknowns. So I tend to be quite out and about on the conference circuit. I was just at devopsdays Salt Lake City this week giving a talk on platform engineering. I’ll

be out at KCD New York on June 4th. And then a couple of weeks after that, I’ll be out at LeadDev London, and then flying from LeadDev London to devopsdays Amsterdam to give a talk with Nathan Harvey from the DORA community. So I’m all over the place in terms of that. And then LinkedIn is usually the easiest way to find me. My dad is the only other person with my last name, so I’m pretty easy

to find. So feel free to reach out on LinkedIn. Always happy to discuss really anything technology, but most recently the big focus has been on developer experience and productivity.

Kate Holterhoff (40:03)
That sounds amazing. I bet your talk with Nathan Harvey is gonna be awesome too. I love him.

Justin Reock (40:07)
He’s so great.

Yeah, I can’t wait. I’m really excited.

Kate Holterhoff (40:10)
That’s going to be

awesome. Yeah, very cool. OK, so I’ve really enjoyed speaking with you, Justin. Again, my name is Kate Holterhoff, senior analyst at RedMonk. If you enjoyed this conversation, please like, subscribe, and review the MonkCast on your podcast platform of choice. If you are watching us on RedMonk’s YouTube channel, please like, subscribe, and engage with us in the comments.
