The decline of Java application servers when using docker containers

James Strachan
Published in fabric8 io · Mar 9, 2015


For many years the Java ecosystem has used Application Servers. The basic idea of a Java Application Server (Servlet Engine, JEE or OSGi) is that it’s a JVM process you deploy and undeploy your Java code to as deployment units (jar / war / ear / bundle etc); so a JVM process mutates the code it runs over time. Often Java Application Servers have directories you drop files into, or REST/JMX APIs you can call, to modify the running deployment units (Java code).

Memory used to be very expensive, so it made lots of sense to put all your Java code into the same JVM process to minimise the RAM footprint of running lots of separate processes.

It’s been common practice for many years in the Java ecosystem that in production you never really undeploy Java code from a running JVM, since it’s far too easy to leak resources (threads, memory, database connections, sockets, running code etc). So to upgrade the version of an application in production it’s better to spin up a new Application Server process with the new code inside in parallel, move traffic to the new application server instance and drain traffic from the old instance; once the old application server has finished processing all its in-flight requests it can be stopped.

Conceptually you still undeploy the old code and deploy the new code; but in reality you boot up a new process, move traffic over to it and kill the old one.

There’s a current movement towards microservices: making each process do one thing well. In many ways the recommended best practice for application servers has for years been to run as few deployment units per JVM as you can get away with. If you put all your services (deployment units) into the same JVM, then when you need to upgrade one of those services you have to take down the JVM, which can affect all the other services. So it’s less risky and more agile to deploy each service in a different JVM process, so you can upgrade any service at any point in time without affecting the others.

Having lots of independent processes rather than one huge monolithic process also makes it much easier to monitor and understand which service is using up memory, network, disk or CPU etc.

How docker changes things

Docker containers are an ideal way to package up applications for easy deployment on linux machines; they use immutable container images containing the operating system and all the code they need; they are isolated from each other and can have cgroup limits on IO/memory/CPU usage etc. They work well with pretty much any technology that can run on linux (Java / python / ruby / nodejs / golang et al).

One of the big wins of docker containers is that you can spin up as many instances of a container as you like on any machine, and they work in a repeatable way since they are based on the same reusable immutable image. A container instance can have its own persistent state mounted on a volume; but the code (and possibly configuration) comes from an immutable image.

So the docker way to work with Java Application Servers is to make an immutable image for the application server and the deployment units you wish to run in production.

To upgrade the Java code of a service, rather than dropping a WAR in the webapps/deploy folder or calling a REST/JMX API in the application server or whatever, you just make a new image with the new deployment unit inside and run it.

Increasingly the Java Application Server doesn’t need to worry about deploying and undeploying new code at runtime; it doesn’t need to watch a deploy folder for changes or listen on a REST/JMX API for requests to change its deployment; it just starts up the code in its image on startup.

So the very idea of a Java Application Server (a dynamic JVM which you deploy and undeploy code to) is very much in decline in a docker world.

The best way to work with Application Servers in docker is to treat them more as an immutable disk image; there’s less need for Java code inside the process to mutate itself at runtime. Then the rolling upgrade to new versions of containers is done outside of the application server itself (e.g. via rolling upgrades in kubernetes, with a load balancer in front of the containers).

Configuration

One thing the Java ecosystem has done well since adopting Application Servers is creating immutable binary deployment units (jars/wars/ears/bundles etc), releasing them once and moving them between environments. To do that we often delegate to the application server to find resources (e.g. a JNDI lookup in JEE) for things like discovering where the database or message broker is. Then there would be separate clusters of independently configured application servers which you deployed your artefacts to, assuming the application servers were properly configured.
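
To make that concrete, a typical JEE style JNDI lookup looks roughly like the sketch below; the jdbc/MyDataSource name is purely illustrative and would need to match whatever resource name your application server is actually configured with:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CustomerRepository {

    // The application server binds the real connection pool into JNDI;
    // the code only knows the logical name, not where the database lives.
    private final DataSource dataSource;

    public CustomerRepository() throws NamingException {
        InitialContext context = new InitialContext();
        // "jdbc/MyDataSource" is an illustrative name configured in the
        // application server, not something the code itself defines.
        this.dataSource = (DataSource) context.lookup("java:comp/env/jdbc/MyDataSource");
    }
}
```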

Though it’s surprisingly easy to mess up: different environments can end up running different operating systems, Java versions, application server versions or mismatched configurations; things may work in your staging environment but, if you’re not very careful, fail in production.

Instead the docker approach is to extend the idea of immutable images to the operating system and application server too, so the exact same binary image of the operating system, Java runtime, application server and deployment units is run in each environment. There’s then no chance of hitting an issue with a misconfigured application server in a particular environment, since it’s the same binary image that runs everywhere.

To be able to do this well, having service discovery in each environment is extremely useful, as it makes it really easy to run the same image in every environment without messing with configuration. For example kubernetes service discovery can make it trivial to run the same binary image in all environments and have the discovery of things like databases and message brokers just work.
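
As a rough sketch of what that looks like from the Java side: kubernetes exposes each service to containers through environment variables (and DNS), so the same image can find its database in every environment without any per-environment configuration. The mydb service name and the postgres defaults below are assumptions made up for the example:

```java
public class DatabaseConfig {

    // kubernetes injects <SERVICE_NAME>_SERVICE_HOST / _SERVICE_PORT environment
    // variables for each service, so the same binary image works everywhere.
    // "mydb" is a hypothetical service name used purely for illustration.
    public static String jdbcUrl() {
        String host = System.getenv("MYDB_SERVICE_HOST");
        String port = System.getenv("MYDB_SERVICE_PORT");
        if (host == null || port == null) {
            // fall back to sensible defaults for local development outside kubernetes
            host = "localhost";
            port = "5432";
        }
        return "jdbc:postgresql://" + host + ":" + port + "/mydb";
    }
}
```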

Summary

So does that mean Java application servers are dead? In a docker world there’s really no need to ever hot deploy Java code into a running Java process in production any more. However being able to hot deploy code into a running instance can still be useful in development. (Though to be fair you can do the same thing in any Java application with tools like JRebel; most IDE debuggers do the same trick too.)

So I’d say that Java Application Servers are mutating into something more like frameworks that are baked into an immutable image and then managed externally by the cloud (e.g. via kubernetes). In many ways the cloud (e.g. kubernetes and docker) takes over many of the features that Java application servers used to provide, since rolling upgrades of new images are needed for all technologies (java/golang/nodejs/python/ruby et al).

Java folks still want the services that Java application servers provide (e.g. a servlet engine, dependency injection, transactions, messaging and so forth). However you don’t need to dynamically drop deployment units into a running application server JVM for that; e.g. you can easily embed a servlet engine into a Java application. Approaches like Spring Boot show how dependency injection plus a framework approach with a flat class loader is enough to get most of the features of an application server.
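
As a rough illustration of embedding a servlet engine rather than deploying into one, a self-contained Java application using embedded Jetty might look something like the sketch below (assuming jetty-server and jetty-servlet are on the classpath; the class and path names are made up for the example):

```java
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;

public class Main {

    // A plain servlet; nothing application-server specific about it.
    public static class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.getWriter().println("Hello from an embedded servlet engine");
        }
    }

    public static void main(String[] args) throws Exception {
        // The servlet engine lives inside the application, rather than the
        // application being deployed into a servlet engine.
        Server server = new Server(8080);
        ServletContextHandler context = new ServletContextHandler();
        context.addServlet(HelloServlet.class, "/hello");
        server.setHandler(context);
        server.start();
        server.join();
    }
}
```

Build that into a jar, bake the jar into an immutable docker image, and you get the servlet engine features without any dynamic deployment at runtime.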

As a developer, one of the biggest complexities of working with Java Application Servers is actually class loaders; I think we’ve all hit class loader issues at some point. There’s some value in hiding the implementation jars for JEE services from developers, though you can get that by just using a BOM file to import maven dependencies into your project; you don’t need a complex class loader tree or graph for that. Even with a flat classpath you can end up with conflicting versions, so having ways to isolate class loaders can be handy. Though in some ways using different versions of the same jar implies a code smell; maybe in those cases it’s time to refactor into 2 separate services so that you can keep nice simple flat class loaders?

If a Java Application Server process now only starts a statically known set of Java code, the very idea of an application server changes drastically: it becomes more about performing dependency injection and including the modular services you need, which sounds more like a framework than what we’ve come to think of as a Java Application Server.

Lots of Java developers have learnt to use Application Servers and will continue to use them in a docker world, which is fine. But at the same time I see their use declining, as lots of the things application servers have done for us over the years are not so relevant with docker & kubernetes, and frameworks can do most of the rest in a simpler, leaner way.

One thing that’s great about docker and the cloud is that they let developers choose whatever technologies they wish to use; they can pick the right tool for the job and have all their technologies provisioned and managed in the same way, whatever languages and frameworks are used. You can start off using the technologies you know and over time migrate to lighter alternatives.

In the fabric8 project we’re now pretty agnostic on what Application Servers or frameworks you wish to use; Camel Boot, CDI, Spring Boot, Karaf, Tomcat, Vertx, Wildfly are all supported in our quickstarts. Thanks to kubernetes we can provide the same provisioning, management and tooling experience whatever application server or framework you choose to use. If you’re starting a new Camel project for use in fabric8 V2 for example I highly recommend you start with Camel Boot (like these quickstarts) or try the Spring Boot quickstarts.

Increasingly I see more Java folks choosing the lighter approach of things like Camel Boot, CDI, Dropwizard, Vertx or Spring Boot, and over time using Java Application Servers less; though we’ll still need dependency injection and frameworks.


I created Groovy and Apache Camel. I’m currently working at CloudBees on Jenkins X: automated CI/CD for Kubernetes: https://jenkins-x.io/