I previously blogged about being frustrated enough with slow docker stop times that I figured out how to speed them up.
I ran into that issue while using docker-compose to build an application that used several AWS services. While it's easy enough to develop 'live' against AWS, doing it locally seemed worth the effort, if it was possible at all.
I've uploaded a small 'hello world' repo to GitHub, so for TL;DR purposes, take a look!
Ultimately, I want to have a container that I can ship (without alteration) to AWS, while being able to develop and test locally.
This goal led to, for example, only using S3 over SSL, since that's the only way it works in production.
"Hello AWS World"
The script that runs in the demo container does the following:
- Connect to S3 and SQS
- Send a 'Hello World!' message to SQS
- Read that message back from SQS
- Store the contents of that message in an S3 bucket
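The demo app itself is Python on boto; as a rough sketch of the steps above (written here with boto3-style client calls for illustration, and with the queue URL, bucket, and key as stand-in values, not the repo's actual ones), the round trip looks something like this:

```python
def roundtrip(sqs, s3, queue_url, bucket, key, body):
    """Send `body` to SQS, read it back, and store it in S3.

    `sqs` and `s3` are boto3-style clients, passed in so the same code
    runs unchanged against the fake or the real endpoints.
    """
    sqs.send_message(QueueUrl=queue_url, MessageBody=body)
    resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=1)
    msg = resp["Messages"][0]
    # Delete the message so it isn't redelivered, then persist it to S3.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
    s3.put_object(Bucket=bucket, Key=key, Body=msg["Body"].encode("utf-8"))
    return msg["Body"]


if __name__ == "__main__":
    import boto3  # imported here so the sketch stays importable without boto3

    # Queue URL and bucket name are hypothetical placeholders.
    roundtrip(boto3.client("sqs"), boto3.client("s3"),
              queue_url="https://us-west-2.queue.amazonaws.com/123/demo",
              bucket="testbucket", key="hello.txt", body="Hello World!")
```

Because the clients are passed in rather than constructed inside, the function doesn't care whether the endpoints are the fakes or the real thing.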
Clearly this code is not going to be the backbone of the next hot startup, but it demonstrates the end-to-end workflows without too much extraneous code.
And, meeting the goal ✓, the exact same script (or the whole Docker container) runs without alteration on AWS. (Provided the queue and buckets exist, and it has permissions, of course.)
To see it in action, run:

```
$ docker-compose up -d
```
To see what happens at startup, you can run docker-compose logs. You should see all the containers starting up, and the demo process logging:

```
$ docker-compose logs
Attaching to dockercomposefakeaws_demo_1, dockercomposefakeaws_fakesqs_1, dockercomposefakeaws_fakes3ssl_1, dockercomposefakeaws_fakes3_1
demo_1      | INFO:aws_demo:Sending message
demo_1      | INFO:aws_demo:Receive the message
demo_1      | INFO:aws_demo:Got message <Hello World!>
demo_1      | INFO:aws_demo:Storing contents into s3 at hello.txt
```
docker-compose is an ideal tool for setting this up, as I can create one compose file for production, and another one for local dev. The production file will just be the container(s); the dev/test will also include the mock services.
The whole dev file is on GitHub, and it defines four containers:
demo: a demo container that runs the 'hello world' Python app
fakesqs: a fake SQS service, powered by the SQS-compatible ElasticMQ (Scala and Akka)
fakes3: a fake S3 service, running the fake-s3 Ruby gem
fakes3ssl: an nginx-powered SSL proxy to provide SSL to fakes3
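A sketch of what that dev compose file can look like (v1 compose syntax, matching the container names in the logs above; the image names are illustrative placeholders, not necessarily the ones the repo uses):

```yaml
demo:
  build: .
fakesqs:
  image: my/elasticmq      # hypothetical image: SQS-compatible ElasticMQ
fakes3:
  image: my/fakes3         # hypothetical image: the fake-s3 Ruby gem
fakes3ssl:
  image: my/nginx-ssl      # hypothetical image: nginx terminating SSL
  links:
    - fakes3               # proxies HTTPS traffic through to fakes3
```

The production compose file would keep only the demo service, dropping the three fakes.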
This polyglot stack is a great example of how powerful Docker can be. Outside of putting the container names in the configuration file, I had to do zero work to run four services built on completely different languages and tools. Sweet.
One notable 'hack' is in working with the endpoints:

```yaml
links:
  - fakesqs:us-west-2.queue.amazonaws.com
  - fakes3ssl:testbucket.s3.amazonaws.com
```

The docker-compose link feature actually sets up local hostfile entries for the pairs listed here. So when the code tries to connect to testbucket.s3.amazonaws.com, it's going to get the IP for the fakes3ssl container instead of the actual AWS endpoint.
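Concretely, inside the demo container the /etc/hosts file ends up with entries along these lines (the IPs are illustrative):

```
172.17.0.3    us-west-2.queue.amazonaws.com    fakesqs
172.17.0.4    testbucket.s3.amazonaws.com      fakes3ssl
```

Any DNS lookup for those AWS hostnames resolves to the linked containers, no application changes required.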
If you're using Python, boto has another way to manage this (BOTO_ENDPOINTS), but the hostname-based approach has the advantage of working with nearly every tool that can talk to AWS.
In-container integration testing
The final neat application of this approach is being able to run full integration tests against these mock services. There is a simple test file in the repo which just validates that, in fact, 'Hello World!' has been written to
hello.txt in our bucket.
Again, these tests would work on production AWS if we wanted to ensure that the fake services were really equivalent, as they just use the same environment variables for configuration.
The Makefile has a useful technique for running a command inside a running container, attached to a fully configured environment:

```make
.PHONY: test
test:
	docker exec -ti `docker ps -q -f 'name=_demo_'` py.test
```
So this finds the running container that has _demo_ in the name, and runs py.test in it.
```
$ make test
======================= test session starts ========================
platform linux2 -- Python 2.7.6 -- py-1.4.27 -- pytest-2.7.1
rootdir: /demo, inifile:
collected 1 items

tests/test_demo.py .

===================== 1 passed in 0.09 seconds =====================
```
(Normally I would use the $(docker ...) form of command substitution, but make expands $(...) itself as a make variable before the shell ever sees it, regardless of which shell you set. So, backticks work.)
Once you figure out some of the quirks of mocking AWS services, it's incredibly powerful. For some services, AWS even offers official local versions (like DynamoDB Local, which has of course already been dockerized).
Since the environment is self-contained, this not only speeds up (and lowers the cost of) local development, it also means we can integration test AWS-inclusive solutions as part of a CI process.
(BTW, credit for figuring out some of this plumbing goes to my stellar co-workers Dan Billeci and Nate Jones. Thanks guys, it's great working with you!)