Setting up a production machine

To get the Docker stack up and running, we use the following steps on an Ubuntu 16.10 machine.

0. Basic stuff

Install the machine, and use locale-gen nl_NL.UTF-8 or a similar command to generate locale definitions. Set up automatic security updates and backups, the usual.
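
A minimal sketch of those basics; unattended-upgrades is just one common way to handle automatic security updates, and backups are too site-specific to show here:

locale-gen nl_NL.UTF-8                               # or whichever locale you need
apt-get install unattended-upgrades
dpkg-reconfigure --priority=low unattended-upgrades  # enable automatic security updates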

1. Install Docker

Install Docker itself, as described in the Docker CE for Ubuntu manual:

apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce

2. Configure Docker to use "overlay"

Configure Docker to use "overlay" instead of "aufs" for the images. This prevents segfaults in auplink.

  1. Set DOCKER_OPTS="-s overlay" in /etc/default/docker
  2. Copy /lib/systemd/system/docker.service to /etc/systemd/system/docker.service. This allows later upgrades of Docker without overwriting the changes we're about to make.
  3. Edit the [Service] section of /etc/systemd/system/docker.service (see the sketch after this list):
    1. Add EnvironmentFile=/etc/default/docker
    2. Append $DOCKER_OPTS to the ExecStart line
  4. Run systemctl daemon-reload
  5. Remove all your containers and images.
  6. Restart Docker: systemctl restart docker
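
For reference, a sketch of how the [Service] section could end up looking; the exact ExecStart line differs per Docker version, so append $DOCKER_OPTS to whatever your copy already contains:

[Service]
EnvironmentFile=/etc/default/docker
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS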

3. Pull the Blender Cloud docker image

docker pull armadillica/blender_cloud:latest

4. Get docker-compose + our repositories

See the Quick setup section for how to get those. Then run:

cd /data/git/blender-cloud/docker
docker-compose up -d

Set up permissions for the Docker volumes (example commands follow the list):

  • /data/storage/pillar: writable by www-data and root (do a chown root:www-data and chmod 2770).
  • /data/storage/db: writable by uid 999.
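
Concretely, that boils down to something like the following sketch; adjust if your paths differ:

chown root:www-data /data/storage/pillar
chmod 2770 /data/storage/pillar       # group-writable, setgid so new files stay in the www-data group
chown 999 /data/storage/db            # uid 999, as required above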

5. Set up TLS

Place TLS certificates in /data/certs/{cloud,cloudapi}.blender.org.pem. They should contain (in order) the private key, the host certificate, and the CA certificate.
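
A sketch of how such a combined file can be assembled, assuming the key, host certificate, and CA certificate exist as separate files (the input file names are hypothetical); repeat for cloudapi.blender.org.pem:

cat cloud.blender.org.key cloud.blender.org.crt ca.crt \
    > /data/certs/cloud.blender.org.pem        # private key, host cert, CA cert, in that order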

6. Create a local config

Blender Cloud expects the following files to exist:

  • /data/git/blender_cloud/config_local.py with machine-local configuration overrides
  • /data/config/google_app.json with Google Cloud Storage credentials.

When run from Docker, the docker/4_run/config_local.py file will be used. Overrides for that file can be placed in /data/config/config_secrets.py.
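
As an illustration only, such an override file could be created as below; the setting name is just an example, check docker/4_run/config_local.py for the settings that actually apply:

cat > /data/config/config_secrets.py <<'EOF'
# Machine-local overrides for docker/4_run/config_local.py.
SECRET_KEY = 'replace-with-a-long-random-string'  # example setting name only
EOF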

7. ElasticSearch & Kibana

ElasticSearch and Kibana run in our self-rolled images. This is needed because by default

  • ElasticSearch uses up to 2 GB of RAM, which is too much for our droplet, and
  • the Docker images contain the proprietary X-Pack plugin, which we don't want.

This also gives us the opportunity to let Kibana do its optimization when we build the image, rather than every time the container is recreated.

/data/storage/elasticsearch needs to be writable by UID 1000, GID 1000.
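
For example (creating the directory first if it does not exist yet):

mkdir -p /data/storage/elasticsearch
chown -R 1000:1000 /data/storage/elasticsearch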

Kibana connects to ElasticProxy, which only allows GET, HEAD, and some specific POST requests. This ensures that the public-facing Kibana cannot be used to change the ElasticSearch database.

Production Kibana can be placed in read-only mode, but this is not necessary now that we use ElasticProxy. However, I've left it here for reference.

curl -XPUT 'localhost:9200/.kibana/_settings' -H 'Content-Type: application/json' -d '{ "index.blocks.read_only" : true }'

If editing is desired, temporarily turn off read-only mode:

curl -XPUT 'localhost:9200/.kibana/_settings' -H 'Content-Type: application/json' -d '{ "index.blocks.read_only" : false }'