Using docker-compose for local development with AWS services

I previously blogged about being frustrated enough with slow docker stop times that I figured out how to speed them up.

I ran into that issue while using docker-compose to build an application that used several AWS services. While it's easy enough to develop 'live' against AWS, it seemed worth figuring out whether it could be done locally.

I've uploaded a small 'hello world' repo to Github, so for TL;DR purposes, take a look!

The Goals

Ultimately, I want to have a container that I can ship (without alteration) to AWS, while being able to develop and test locally.

On AWS, I will be using CloudFormation to apply EC2 IAM roles to the production hosts.

This goal led to, for example, only using S3 over SSL locally, since that's the way it works in production.

"Hello AWS World"

The script that runs in the demo container does the following (sketched in code below):

  1. Connect to S3 and SQS
  2. Send a 'Hello World!' message to SQS
  3. Read that message back from SQS
  4. Store the contents of that message in an S3 bucket.
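
The actual script lives in the repo; a rough sketch of those four steps with boto might look like this (the queue name, bucket name, and region are placeholders, and error handling is omitted):

import boto.s3
import boto.sqs
from boto.s3.key import Key
from boto.sqs.message import Message

# 'demo-queue', 'demo-bucket', and the region are placeholders; the real
# script picks these up from the container's environment, and both the
# queue and the bucket are assumed to already exist.
sqs = boto.sqs.connect_to_region('us-east-1')
queue = sqs.get_queue('demo-queue')

# 1 & 2: connect and send 'Hello World!' to SQS
message = Message()
message.set_body('Hello World!')
queue.write(message)

# 3: read the message back
received = queue.get_messages(wait_time_seconds=10)[0]
body = received.get_body()
queue.delete_message(received)

# 4: store the contents in an S3 bucket
s3 = boto.s3.connect_to_region('us-east-1')
bucket = s3.get_bucket('demo-bucket')
Key(bucket, 'hello.txt').set_contents_from_string(body)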

Clearly this code is not going to be the backbone of the next hot startup, but it demonstrates the end-to-end workflows without too much extraneous code.

And, meeting the goal ✓, the exact same script (or the whole Docker container) runs without alteration on AWS. (Provided the queue and buckets exist, and it has permissions, of course.)

To see it in action, run

$ docker-compose up -d

To see what happens at startup, you can run docker-compose logs. You should see all the containers starting up, and the demo process logging:

$ docker-compose logs
Attaching to dockercomposefakeaws_demo_1, dockercomposefakeaws_fakesqs_1, dockercomposefakeaws_fakes3ssl_1, dockercomposefakeaws_fakes3_1
demo_1      | INFO:aws_demo:Sending message
demo_1      | INFO:aws_demo:Receive the message
demo_1      | INFO:aws_demo:Got message <Hello World!>
demo_1      | INFO:aws_demo:Storing contents into s3 at hello.txt


docker-compose is an ideal tool for setting this up, as I can create one compose file for production, and another one for local dev. The production file will just be the container(s); the dev/test will also include the mock services.
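
In practice that just means keeping two YAML files side by side and telling docker-compose which one to use; the production file name below is only an example:

$ docker-compose up -d                                    # dev: docker-compose.yml, mocks included
$ docker-compose -f docker-compose-production.yml up -d   # production: just the app container(s)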

The whole dev file is on github, and it defines 4 containers:

  • demo: A demo container that runs a 'hello world' Python app
  • fakesqs: A fake SQS service, powered by the SQS-compatible ElasticMQ (Scala and Akka)
  • fakes3: A fake S3 service, running the fake-s3 Ruby gem
  • fakes3ssl: An nginx-powered SSL proxy to provide SSL to fakes3.

This polyglot stack is a great example of how powerful Docker can be. Outside of putting the container names in the configuration file, I had to do zero work to run 4 services built with completely different languages and tools. Sweet.

One notable 'hack' is in working with the endpoints.
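
The dev compose file links the demo container to the fake services using aliases that match the hostnames the AWS libraries would normally resolve. Roughly like this (the exact endpoint hostnames are placeholders and depend on your region and bucket):

demo:
  links:
    # alias each fake service to the AWS hostname the code will look up;
    # these endpoint names are placeholders
    - "fakesqs:sqs.us-east-1.amazonaws.com"
    - "fakes3ssl:demo-bucket.s3.amazonaws.com"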


Docker's link feature actually sets up local hostfile entries for each service:alias pair listed under links. So when the code tries to connect to the real AWS hostname, it's going to get the IP for the fakes3ssl container instead of the actual AWS endpoint.

If you're using Python, boto has another way to manage this (BOTO_ENDPOINTS), but the hostname-based approach has the advantage of working with nearly every tool that can talk to AWS, including awscli.

In-container integration testing

The final neat application of this approach is being able to run full integration tests against these mock services. There is a simple test file in the repo which just validates that, in fact, 'Hello World!' has been written to hello.txt in our bucket.
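
The actual test lives in the repo; in essence it's just a boto round-trip, something like this (the bucket name is again a placeholder):

import boto.s3

def test_hello_world_was_written():
    # thanks to the link aliases, this resolves to fakes3ssl locally
    # and to real S3 when run on AWS
    conn = boto.s3.connect_to_region('us-east-1')
    bucket = conn.get_bucket('demo-bucket')
    key = bucket.get_key('hello.txt')
    assert key is not None
    assert key.get_contents_as_string() == 'Hello World!'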

Again, these tests would work on production AWS if we wanted to ensure that the fake services were really equivalent, as they just use the same environment variables for configuration.

The Makefile has a useful technique for running a command in a running container attached to a fully configured docker-compose stack:

.PHONY: test
test:
        docker exec -ti `docker ps -q -f 'name=_demo_'` py.test

So this finds the running container which has _demo_ in the name, and runs py.test in it.

$ make test
======================= test session starts ========================
platform linux2 -- Python 2.7.6 -- py-1.4.27 -- pytest-2.7.1
rootdir: /demo, inifile: 
collected 1 items 

tests/ .

===================== 1 passed in 0.09 seconds =====================

(Normally I would use the $(docker...) invocation, but the default shell for make doesn't like it, even if I manually set it to bash. So, backticks work.)
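
Most likely that's because make expands $( ) itself before the recipe ever reaches the shell, so the command-substitution form has to be escaped as $$( ); if you prefer that style, this variant should work just as well:

test:
        docker exec -ti $$(docker ps -q -f 'name=_demo_') py.test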


Once you figure out some of the quirks of mocking AWS services locally, it's incredibly powerful. For some services, AWS even offers official local versions (like DynamoDB Local, which has of course already been dockerized).

Since the environment is self-contained, not only does this speed up (and cost down) local development, but it means we can even integration test AWS-inclusive solutions as part of a CI process.

(BTW, credit for figuring out some of this plumbing goes to my stellar co-workers Dan Billeci and Nate Jones. Thanks guys, it's great working with you!)

Stopping Docker Containers in a hurry

I've been working with docker-compose a lot, and it's a really great tool. I can't wait to see what they do with it!

However, I found myself doing a lot of docker-compose stop, docker-compose start. Shutting down each container was taking approximately 10 seconds, which, with my setup, meant waiting about a minute for a full shutdown. This was certainly manageable, but it added up quickly.

So what happens when you call docker stop? The container's main process is sent a signal, SIGTERM. It's then given 10 seconds to do any cleanup it wants/needs to, and then it's sent SIGKILL and forcibly killed.

By default, Python leaves SIGTERM at the default disposition (SIG_DFL), which terminates the process abruptly, without any cleanup. So, for a simple application, you can turn SIGTERM into a clean, immediate exit by just doing:

import sys
import signal

def handler(signum, frame):
    # exit cleanly (and immediately) when docker sends SIGTERM
    sys.exit(0)

def main():
    signal.signal(signal.SIGTERM, handler)
    # ... your special logic here

And that's it! Or so I thought. I started the container:

$ docker run -d --name shutdown_test shutdown_test
$ time docker stop shutdown_test
real    0m10.367s
user    0m0.136s
sys     0m0.007s

Not awesome. Checking docker ps showed the problem, though:

$ docker ps                                                               
CONTAINER ID        IMAGE                           COMMAND               
272c68f3206e        shutdown_test:latest            "/bin/sh -c ./ 

Docker is actually running sh as the primary process, which is launching my Python script. This is because of the CMD entry I was using in the Dockerfile.

CMD ./

This version of CMD, as you can see, runs the process under a shell, so it's the shell that receives the SIGTERM rather than the Python script.

Converting to the argument list format fixes this:

CMD ["./"]

There we go:

$ docker ps                                                            
CONTAINER ID        IMAGE                           COMMAND            
8e5ef05d5389        shutdown_test:latest            "./"        

and finally, we get the shutdown times we deserve.

$ time docker stop shutdown_test 
real    0m0.341s                 
user    0m0.132s                 
sys     0m0.007s                 

It took a bit more work to get all the containers in my docker-compose setup behaving. Some containers specified their command in docker-compose.yml itself, and you can make the same change there:

  command: "thescript"
  command: ["thescript"]

Also, if you're using cherrypy, their documentation reveals an important step for handling signals:

if hasattr(cherrypy.engine, 'signal_handler'):
    cherrypy.engine.signal_handler.subscribe()
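
For context, that subscription goes in during engine startup; a minimal, hypothetical CherryPy entry point might look like:

import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        return 'Hello World!'

cherrypy.tree.mount(Root(), '/')

# let the engine turn SIGTERM into a clean bus shutdown
if hasattr(cherrypy.engine, 'signal_handler'):
    cherrypy.engine.signal_handler.subscribe()

cherrypy.engine.start()
cherrypy.engine.block()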

There are a few containers left that still have slow shutdowns, but they are ones I'm grabbing from upstream sources. I'm sure some pull requests will be forthcoming!
