I recently appeared as a guest on the Context Window podcast hosted by IBM’s Anant Jhingran and Ed Anuff. It inspired a couple of posts. This one is about skills relevance – namely Java and agents.
Ed said that he had talked to some people who still flatly refuse to believe that AI works in any context. Perhaps they’ve had poor results, or couldn’t get something to work. My suggestion is to try again. The models and tools are getting better all the time. We also talked about the sense that some people are saying you need to learn an entirely new stack in order to be relevant in the age of AI – trying to keep up with the achingly cool kids. As someone who has tracked programming language adoption and usage for a long time, this seems particularly wrong-headed to me.
Of course there is probably an underlying factor at work here – fear. People are naturally a little unnerved about the impact of AI on the software development jobs market. But this is where continued learning is so important. The swiftest path to irrelevance is to refuse to learn new skills or to refresh the ones you already have.
So what about Java?
Sure, new languages and frameworks continue to emerge and move to dominance. Java is no longer a top three programming language (Python, JavaScript and TypeScript are all ahead). But – and this is the important bit – that’s not to say your Java skills are not relevant. On the contrary, it’s highly likely they’re going to come into their own, particularly in enterprise contexts. Don’t think “OMG, I am a Java developer, but now I need to learn Python because it’s the language of AI.” Python may have overtaken Java in terms of the current industry conversation – it’s the language of machine learning and AI, after all. Sure, OpenAI is a huge Python shop. Python is the language of frontier models, and the Python ecosystem of libraries is just an incredible industry asset. But that doesn’t mean your Java skills aren’t relevant for developing apps that use models.
Java has incredible antibodies. Its ability to swallow and digest new innovation, to find new niches, is why it’s lasted so long in this industry and been so successful. Think about big data. For a while, people were saying, “Oh no, there’s no innovation in Java.” Then big data came along, and sure enough, we saw frameworks like Hadoop and Spark written in Java and JVM languages. Java has maintained relevance through all of the waves that we’ve seen over the last couple of decades – it is the exemplar of a general purpose programming language and runtime. With the distributed systems and cloud revolution, so many of the applications and systems that were built, so much of the infrastructure, was built in Java, new languages like Go and Rust notwithstanding. The idea that somehow Java isn’t going to play well with AI doesn’t make any sense.
Let’s look at an interesting example of innovation in the space, from an antibody in his own right. Rod Johnson founded the Spring project. Millions of developers around the world use Spring and Spring Boot every day. He’s now created Embabel, a strongly-typed agent framework written for the JVM. It’s designed to bring determinism to your project plan, using a model that isn’t an LLM, before autonomous agents generate the code that maps to that plan. Not everything is decided by the LLM. According to Rod:
The critical adjacency for building business apps with LLMs is existing business logic and infrastructure. And the critical skill set is building sophisticated business applications. In both these areas, the JVM is far ahead of Python and likely to remain so.
Honestly, this checks out. Embabel is an enterprise play, and one where Java developers’ skills are on point. Spring has proven itself for business logic, systems that are built to last, event-driven systems, transaction systems and so on. Adjacency is a thing. Rod again:
If you’re a Spring developer, you’ll find building agents with Embabel to be as natural as building a Spring MVC REST interface.
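To make that comparison concrete, here is a toy Java sketch of the general pattern described above: strongly-typed actions, a plan that is sequenced deterministically in code, and the LLM confined to individual steps. To be clear, this is not Embabel’s API – every type and method name below is hypothetical and purely illustrative.

```java
import java.util.List;
import java.util.function.Function;

public class TypedAgentSketch {

    // Hypothetical domain types, purely illustrative of the strongly-typed agent idea.
    record UserStory(String text) {}
    record ProjectPlan(List<String> steps) {}
    record GeneratedCode(String source) {}

    // Each action has typed inputs and outputs, so the plan can be reasoned about
    // and sequenced without asking an LLM what to do next.
    static final Function<UserStory, ProjectPlan> planFromStory =
            story -> new ProjectPlan(List.of("define entities", "write service", "write tests"));

    // The LLM is confined to individual steps; callLlm is a stand-in for a real model call.
    static final Function<ProjectPlan, GeneratedCode> generateCode =
            plan -> new GeneratedCode(callLlm("Implement: " + String.join(", ", plan.steps())));

    static String callLlm(String prompt) {
        return "// code generated for: " + prompt; // placeholder, not a real model call
    }

    public static void main(String[] args) {
        // Which actions run, and in what order, is decided deterministically here in code -
        // the property attributed to Embabel above - rather than by the model.
        GeneratedCode code = planFromStory.andThen(generateCode)
                .apply(new UserStory("As a user I want to track my orders"));
        System.out.println(code.source());
    }
}
```

The shape is the familiar one: plain types and composable functions, which is exactly the territory Spring developers already occupy.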
On the LLM side, folks might be thinking that surely Model Context Protocol (MCP) is all you need. The hype might make it seem so, but security concerns around the standard mean the answer is almost certainly not. MCP became an industry standard remarkably quickly – arguably too quickly. MCP became the new Hello World for every enterprise technology vendor, but there is still a great deal of work to do.
Here’s what Rod believes MCP lacks:
- Explainability: Why were choices made in solving a problem?
- Discoverability: MCP skirts this important problem. How do we find the right tools at each point, and ensure that models aren’t confused in choosing between them?
- Ability to mix models, so that we are not reliant on God models but can use local, cheaper, private models for many tasks
- Ability to inject guardrails at any point in a flow
- Ability to manage flow execution and introduce greater resilience
- Composability of flows at scale. We’ll soon be seeing not just agents running on one system, but federations of agents.
- Safer integration with sensitive existing systems such as databases, where it is dangerous to allow even the best LLM write access.
That last point is absolutely critical.
Rod is building in Kotlin, which is an interesting design choice. His argument is that Java can do a better job than Python agent frameworks like CrewAI. The proof will be in adoption, so this is a project I will be tracking with interest.
Another open source project to mention is LangChain4j, supported by vendors including Red Hat and Microsoft. Dmytro Liubarskyi, founder and project lead, says:
“Our goal with LangChain4j has always been to make advanced AI capabilities easily accessible to Java developers — without compromising on security, scalability, or developer experience.”
LangChain4j is designed to allow Java developers to easily work with LLMs, vector stores and agents. It also has a set of Kotlin extensions.
Meanwhile JetBrains has built a Kotlin-based agent framework called Koog – “an idiomatic, type-safe Kotlin DSL designed specifically for JVM and Kotlin developers”.
It’s also worth checking out this post by Markus Eisele for further context.
What makes LangChain4j attractive to architects is its alignment with established enterprise patterns. The framework offers unified interfaces for chat models (OpenAI, Anthropic, Azure, Google Gemini), embeddings, and vector stores.
Developers declare @AiService interfaces, similar in feel to REST controllers, and annotate methods with @UserMessage, @SystemMessage, or @Tool to define prompts and expose domain logic. This design keeps interactions type-safe, composable, and predictable.
LangChain4j also moves beyond simple model calls. Tool calling allows LLMs to invoke Java methods directly. This controlled bridging between models and systems turns generative AI into a first-class part of enterprise logic.
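To give a flavour of what that looks like, here is a minimal sketch using LangChain4j’s plain-Java AiServices factory (the Spring Boot integration layers an @AiService annotation over the same idea). The support-assistant prompt and order-status tool are invented for illustration, and exact builder method names can vary between LangChain4j versions.

```java
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;

public class SupportAssistantExample {

    // The interface declares the contract; LangChain4j generates the implementation.
    public interface SupportAssistant {
        @SystemMessage("You are a concise support agent for an order management system.")
        String answer(String question); // the single String parameter becomes the user message
    }

    // Plain Java methods exposed to the model as tools; the lookup is a hypothetical stand-in.
    public static class OrderTools {
        @Tool("Look up the shipping status of an order by its id")
        public String orderStatus(String orderId) {
            return "Order " + orderId + " shipped on 2024-05-01";
        }
    }

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4o-mini")
                .build();

        SupportAssistant assistant = AiServices.builder(SupportAssistant.class)
                .chatLanguageModel(model)   // may be named chatModel(...) in newer versions
                .tools(new OrderTools())
                .build();

        System.out.println(assistant.answer("Where is order 42?"));
    }
}
```

The point is the shape: annotated interfaces and ordinary methods, much closer to a REST controller than to a Python notebook.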
It’s those Java and JVM antibodies at work. I mean, sure, by all means learn some Python and/or some TypeScript. TypeScript is the current language of the day for building dev tools. It’s really exciting how much innovation is happening there; TypeScript is exploding, it seems, partly because AI is generating so much of the code written today. If you’re building dev tools or modern dev infrastructure, chances are high that you’re building in TypeScript. But one can’t learn every new thing that comes along.
Of course questions will always remain. For example – what’s the future for Spring now it’s owned by Broadcom? But there is plenty of innovation out there in Java frameworks such as Quarkus (that team is currently working on a LangChain4j extension). Oracle remains a solid steward of the core language. IBM and Red Hat continue to invest in Java, and aren’t going to give up on the AI, LLM and agent opportunity lightly.
According to my colleague Dr Kate Holterhoff, Java is cool again. Anthropic and IBM are partnering – see the great post by my colleague Stephen O’Grady about the implications here – and I believe that definitely means Java modernisation and integration with LLMs. IBM’s Project Bob IDE is explicitly being pitched for Java modernisation, with a focus on enterprise security when using agents. If you’re a Java programmer, you may be in better shape than you thought for building AI-enabled apps and integrating agents into your workflows.
Bit of a bonus update here. After I posted this on LinkedIn, Tyler Jewell, CEO of Akka, commented that I could have mentioned the company in this post. I think this is fair, given Akka literally pivoted from its historical language and framework roots to focus squarely on AI agent workflows. Akka was originally a framework written in the JVM-based Scala language for building high performance, concurrent, distributed systems. Now the platform is positioned as a safe, secure, agentic AI platform. According to Jewell:
Agents are unreliable:
– complexity with agents, memory, orchestration, streaming, endpoints, APIs, tools, integration, stochastic LLMs … all now running in a distributed system.
– distrust from unreliable systems, limited agent security protocols, lack of agent identity, transparency and explainability of LLM interactions, inconsistent outputs, and new AI security threats.
– shadow costs that extend beyond LLM fees as agentic systems require constant maintenance, integration with feedback loops, and continuous governance.

Java and the JVM are well suited to overcoming these complexity, trust, and cost issues.
Disclosure statement: IBM, Red Hat, Microsoft, Oracle and Broadcom are all RedMonk clients.
