Deploying Grails Apps To Docker

In previous posts, we discussed how to deploy PostgreSQL and Grails into Docker containers. The process used there targeted the Grails “development” setup. In this post we’ll discuss how to deploy your Grails application as a WAR file into a containerized Tomcat server.

Cats in Containers

This is easier than herding cats, I promise.

To get Tomcat running as a Docker container, create a Dockerfile similar to the following:

# Start with a nearly perfect base
FROM library/tomcat

Once this is complete, build the docker image:

$ sudo docker build -t my-tomcat .

Now let’s give it a test run to make sure everything is working as expected:

$ sudo docker run --name tomcatApp -i -p 8080:8080 my-tomcat

Now cruise on over to http://localhost:8080/ in a web browser and you’ll see something akin to the following:

Tomcat status page

Now shut it down and let’s prepare to make it more useful:

$ sudo docker stop tomcatApp # Stop the running container
$ sudo docker rm tomcatApp # Destroy it

Isn’t this great?

Now let’s create a tomcat-users.xml file granting a user the roles needed to use the manager GUI. Create a file called tomcat-users.xml alongside your Dockerfile containing the following (please don’t use admin/admin in your version…):

<tomcat-users>
  <user username="admin" password="admin" roles="manager-gui,manager-status"/>
</tomcat-users>

Now let’s update the Dockerfile to copy this file over to the container. Add the following:

# Import tomcat-users.xml file
ADD tomcat-users.xml /usr/local/tomcat/conf/

Now let’s build this new container and kick it off:

$ sudo docker build -t my-tomcat .
$ sudo docker run \
    --name tomcatApp -i -p 8080:8080 \
    my-tomcat

Now navigate over to http://localhost:8080/manager/html with your browser and you’ll be presented with this beautiful management gui:

Tomcat management gui

This interface allows us to upload the WAR generated from our grails project!

jk! lololllll

… at least it claims as much. In practice I was unable to upload a WAR file, as the connection would instantly be reset. I did a bit of digging but wasn’t able to sort this out. Instead, let’s try another path. We’ll make a volume for Tomcat’s webapps directory, map it to a local directory, and deploy our WAR by simply copying it over!

First, let’s nuke that last mistake:

$ sudo docker stop tomcatApp
$ sudo docker rm tomcatApp

Now update your Dockerfile to include the following:

# Create a volume for deploying wars
VOLUME ["/usr/local/tomcat/webapps/"]

Now we rebuild and kick it off again:

$ sudo docker build -t my-tomcat .
$ sudo docker run \
    --name tomcatApp -i -p 8080:8080 \
    -v /my/local/war/path:/usr/local/tomcat/webapps \
    my-tomcat

The run command above will bind /my/local/war/path to the tomcat webapps directory. Now we can build our Grails war (using a gradle task or whatever you prefer) and simply copy it over to this location:

$ cd grails-petclinic
$ ./grailsw war
... lots of output ...
| Compiling 144 source files

| Done creating WAR target/petclinic-0.1.war

$ cp target/petclinic-0.1.war /my/local/war/path

At this point, you can monitor the output in your tomcat container as the war is unpacked and deployed.
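If you started the container from another terminal (or detached), one way to watch that deployment happen is to tail the container’s output; this assumes the container name tomcatApp used throughout this post:

```shell
# Follow Tomcat's stdout as the WAR is unpacked and deployed
# (tomcatApp is the container name used earlier)
sudo docker logs -f tomcatApp
```

Ctrl-C stops following the log without stopping the container.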

Things are getting serious now

Oh…. wait… what?!

You may have noticed that the app isn’t loading, and on the tomcat STDOUT, there’s some logging similar to this:

12-Apr-2015 23:56:16.697 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive /usr/local/tomcat/webapps/petclinic-0.1.war
12-Apr-2015 23:56:25.288 INFO [localhost-startStop-1] org.apache.jasper.servlet.TldScanner.scanJars At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
log4j:WARN No appenders could be found for logger (org.codehaus.groovy.grails.commons.cfg.ConfigurationHelper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
12-Apr-2015 23:56:37.443 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal Error listenerStart
12-Apr-2015 23:56:37.476 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal Context [/petclinic-0.1] startup failed due to previous errors
12-Apr-2015 23:56:37.499 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive /usr/local/tomcat/webapps/petclinic-0.1.war has finished in 20,803 ms

Note that your output may be somewhat different (I’m not actually using the petclinic app for this).

To track down the root cause, peer into the ‘localhost’ log on the container:

$ sudo docker exec tomcatApp ls /usr/local/tomcat/logs/
  localhost.2015-04-13.log # <-- we want that one
$ sudo docker exec tomcatApp cat /usr/local/tomcat/logs/localhost.2015-04-13.log

Inspecting this log file, you’ll find a complaint about petclinicDatasource. The issue here is the Grails dataSource definition, which uses a JNDI name of the form:

production {
    dataSource {
        dbCreate = "update"
        jndiName = "java:comp/env/petclinicDatasource"
    }
}

To fix this, we need to define that resource in the tomcat context. Let’s create a file called context.xml alongside the Dockerfile.

<Context>
    <!-- standard stuff... -->

    <!-- new resource! -->
    <Resource name="petclinicDatasource" auth="Container"
      type="javax.sql.DataSource" driverClassName="org.postgresql.Driver"
      url="jdbc:postgresql://database:5432/petclinic"
      maxActive="100" maxIdle="30" maxWait="10000"
      username="grails" password="super secure" />
</Context>

As you can see above, we’ve added a Resource block defining the petclinicDatasource resource as a JDBC connection. You may notice that I used database as the hostname in the connection string. More on that later… First let’s update the Dockerfile:

# Start with a nearly perfect base
FROM library/tomcat
ADD tomcat-users.xml /usr/local/tomcat/conf/
ADD context.xml /usr/local/tomcat/conf/
ADD <postgresql-jdbc-driver-url> /usr/local/tomcat/lib/postgresql.jar
VOLUME ["/usr/local/tomcat/webapps/"]

Note that I’m also adding the PostgreSQL JDBC driver here (substitute the driver download URL for your setup) - we need that because the resource is defined as a PostgreSQL connection. This also demonstrates the ADD directive’s ability to fetch from a URL. Pretty handy!
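If you’d rather not fetch the driver at build time, a local copy works just as well; this sketch assumes you’ve downloaded the driver jar next to the Dockerfile (the filename here is illustrative):

```dockerfile
# Copy a locally-downloaded PostgreSQL JDBC driver into Tomcat's lib dir
COPY postgresql-jdbc.jar /usr/local/tomcat/lib/postgresql.jar
```

COPY only works with local files, but it keeps the build reproducible even if the download URL moves.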

Now we need to define the database host that we used in the JDBC info above. This can be accomplished by adding a host to the container’s /etc/hosts file, which is done via the --add-host switch.

Let’s rebuild and run again!

$ sudo docker stop tomcatApp
$ sudo docker rm tomcatApp
$ sudo docker build -t my-tomcat .
$ sudo docker run \
  --name tomcatApp -i \
  -p 8080:8080 \
  --add-host=database:<host-ip> \
  -v /my/local/war/path:/usr/local/tomcat/webapps \
  my-tomcat
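Once the container is up, you can confirm that the --add-host entry actually landed inside the container (again assuming the container name tomcatApp from above):

```shell
# The "database" hostname should now resolve inside the container
sudo docker exec tomcatApp grep database /etc/hosts
```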

The add-host switch above adds an /etc/hosts entry mapping database to the given IP address; you’ll want to substitute your machine’s IP address there. There are dozens of ways to grab the IP address when needed here, so don’t feel like you need to hardcode it.
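For example, here’s one quick (Linux-only) way to grab a usable address at run time; hostname -I prints all of the host’s addresses and we take the first. Treat this as one option among many, not the canonical answer:

```shell
# Grab the host's first IPv4 address.
# hostname -I is Linux-specific; on other systems use ip/ifconfig instead.
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
echo "Using host IP: ${HOST_IP:-unknown}"
```

You could then pass --add-host=database:$HOST_IP to docker run.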

Great! Now the database is connected!


Actually, I needed to update my local PostgreSQL to bind to the address above, edit the HBA file to accept connections from the Docker machine, and update iptables to allow traffic from the Docker containers (the bridge network’s CIDR range, 172.17.0.0/16 by default) to my local PostgreSQL instance.

I won’t detail all of that now, but here’s the condensed version:

  1. Edit $PG_HOME/postgresql.conf
  2. Change listen_addresses to include the relevant interface
  3. Add an entry in $PG_HOME/pg_hba.conf for your db user: host petclinic grails 172.17.0.0/16 md5
  4. Restart PostgreSQL
  5. Add an iptables rule (assuming Docker's default bridge range): iptables -I INPUT -p tcp --dport 5432 --source 172.17.0.0/16 -j ACCEPT
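Before restarting the container, it’s worth sanity-checking that PostgreSQL now accepts connections from outside. A generic probe (not part of the original setup) might look like this; substitute your host’s address for the placeholder and have psql installed:

```shell
# Should print a single row if listen_addresses, pg_hba.conf, and
# iptables are all in order; replace <host-ip> with your machine's address
psql -h <host-ip> -U grails -d petclinic -c 'SELECT 1;'
```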

At this point, you should be able to restart your container and achieve success!

$ sudo docker restart tomcatApp
$ sudo docker attach tomcatApp

Note - I had to fix some database migration issues on my actual project here. Check the logs on the container for more information if your application fails to start!

$ sudo docker exec tomcatApp ls /usr/local/tomcat/logs
$ sudo docker exec tomcatApp cat /usr/local/tomcat/logs/localhost.2015-04-13.log | vim -

I guess it’s a little late to say voilà! at this point.

For the final version of the files, take a look at the blog-snippets repo.