Making the Best DeCIsions for Continuous Integration and Deployment

What's the best setup for your team's continuous integration and deployment?

I've always appreciated boats. When purchasing a boat you are presented with three options, and you can only have two: price, speed, and comfort.

Continuous integration is similar: you can optimize for speed, reliability, or cost, but rarely all three at once. CI providers like CircleCI give you fine-grained control over everything installed in your Docker container and every command that gets executed.

Which trade-off is best? We'll look at the pros and cons of each so you can make an informed decision for your team.

Speed

Build an image nightly on a cron job. CircleCI supports cron-scheduled workflows natively, so I'll use it as the example.
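A nightly build can be wired up with CircleCI's scheduled workflows. Here's a minimal sketch of a .circleci/config.yml; the image name, Docker Hub credential variables, and schedule are all placeholders for illustration:

```yaml
version: 2.1

jobs:
  build-nightly-image:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Build and push the nightly image
          command: |
            echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USER" --password-stdin
            docker build -t myorg/ci-base:nightly .
            docker push myorg/ci-base:nightly

workflows:
  nightly:
    triggers:
      - schedule:
          cron: "0 4 * * *"   # every night at 04:00 UTC
          filters:
            branches:
              only: main
    jobs:
      - build-nightly-image
```

The schedule trigger runs the workflow on the given branch without any commit, which is exactly what a nightly image rebuild needs.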

The image you build should be a private image with all of your source and dependencies included. Pulling an image that already has everything you need means you're ready to update your cache and move on to testing right away, instead of pulling an image, restoring the cache, and then updating it.
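As a sketch, here's what such an image's Dockerfile might look like for a hypothetical Ruby project (the base image and dependency manifests are assumptions; substitute your own stack):

```dockerfile
# Private CI image with dependencies and a source snapshot baked in.
FROM ruby:3.2

WORKDIR /app

# Install dependencies first so this layer caches between nightly builds.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Snapshot the source; CI jobs only need to fetch the diff since last night.
COPY . .
```

Ordering the dependency install before the source copy means a source-only change doesn't invalidate the cached dependency layer.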

This method also lets you test against the latest versions of libraries and binaries available upstream. If an updated library introduces an issue, you'll know almost immediately.

The downside to this method is that if you need to git push -f or update all of your dependencies, you'll suffer longer builds while the cache catches up. Since this cost is minimal, I advocate this approach.

Another positive aspect of building your own private images is that all of your dependencies are effectively version controlled. Anything can happen upstream (DNS outages have been known to happen), so having images that contain your dependencies and source can be very helpful.

Reliability

Build a Docker image that matches production verbatim. Installing the same libraries and binaries in both systems allows you to catch system-specific bugs before you ship anything to production.
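For instance, if production runs Debian 12 with a specific set of packages, the CI image can install the very same ones (the packages below are purely illustrative):

```dockerfile
# Mirror production: same base image, same packages.
FROM debian:12

RUN apt-get update && apt-get install -y --no-install-recommends \
      postgresql-client \
      imagemagick \
  && rm -rf /var/lib/apt/lists/*
```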

Whenever possible, aim to match production with your Docker image. Systems that are scaled out and already run the latest packages should go with the speed-focused approach above, since some reliability has already been traded away.

Cost

If money is tight, just build an image manually. You can build it locally and upload it to Docker Hub, the default Docker registry. Try using an Alpine base image, as Alpine images are much slimmer than those of traditional distributions. The image hardly needs maintenance; just make it match production, then build it and push it once.
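Here's a minimal sketch of that one-off image, with the build-and-push commands noted alongside (the image name and packages are placeholders):

```dockerfile
# Built once on a laptop and pushed to Docker Hub:
#   docker build -t myuser/ci-image:latest .
#   docker push myuser/ci-image:latest
FROM alpine:3.19

# Install only what the test suite needs, mirroring production.
RUN apk add --no-cache git build-base openssl
```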

Time

If money and time are no problem, you can build a Docker image on each commit. Push the image under a tag that won't collide with anything else (such as the commit's SHA) and pull it into all the jobs that need it.
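A sketch of what that might look like in CircleCI, tagging the image with the commit SHA and consuming it from a downstream job (image name and test command are placeholders):

```yaml
version: 2.1

jobs:
  build-commit-image:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Build and push an image tagged with this commit's SHA
          command: |
            docker build -t myorg/app-ci:"$CIRCLE_SHA1" .
            docker push myorg/app-ci:"$CIRCLE_SHA1"

  test:
    docker:
      # Pipeline values resolve to the same commit SHA at config time.
      - image: myorg/app-ci:<< pipeline.git.revision >>
    steps:
      - run: make test

workflows:
  per-commit:
    jobs:
      - build-commit-image
      - test:
          requires:
            - build-commit-image
```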

Caveat

CircleCI provides convenience images, and you're more likely to hit cached image layers if you use them. They aren't best practice for testing, however, since you should match production wherever possible.
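Using a convenience image is as simple as referencing it in a job's executor (the Ruby version and commands here are hypothetical):

```yaml
jobs:
  test:
    docker:
      - image: cimg/ruby:3.2
    steps:
      - checkout
      - run: bundle install
      - run: bundle exec rake test
```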

Tip for Beginners

If you're new to Docker, just get something working. Docker images don't need to be "maintained" per se; they just have to make sense for your use case. CircleCI provides some good tooling in their convenience images to get you started, and their Dockerfiles are open source.

Conclusion

Having your dependencies and source in your image makes everything much faster at test time, giving you quicker feedback on whether your tests passed or failed. Unless your project is open source, that image will need to be private, which requires payment. If you want to stay free, don't include closed-source code in the image; consider whether your dependencies are all public and can be uploaded publicly. For example, private Ruby gems should be kept private, but public npm modules can be shipped in a public image.

At the end of the day, if you want the best performance, you need to pay for it.