This article walks through scaling a Python Flask application using a multi-container Docker architecture. Leveraging Docker Compose, we will create an NGINX container that acts as a load balancer in front of two Python Flask application containers it directs traffic to. The Flask application serves a web page via a GET request and runs under Gunicorn.
Assumptions:
Quick Links:
GitHub repo with files referenced in this blog ubuntu-flask-gunicorn-nginx-docker-compose
Docker Hub repo with some of the images used in the blog here.
File Structure:
ubuntu-flask-gunicorn-nginx-docker-compose/
    docker-compose.yml
    app/
        --> Dockerfile
        --> src
            --> app01.py
            --> gunicorn_config.py
            --> wsgi.py
            --> requirements.txt
    nginx/
        --> Dockerfile
        --> nginx.conf
Step 0
Create a directory for your project.
mkdir ubuntu-flask-gunicorn-nginx-docker-compose
cd ubuntu-flask-gunicorn-nginx-docker-compose
mkdir -p app/src
mkdir nginx
Step 1
Create a file requirements.txt for your Python dependencies, such as Flask and Gunicorn. For more information about pip requirements files read here.
cd app/src
vim requirements.txt
Add the following to the requirements.txt
Click==7.0
Flask==1.0.3
gunicorn==19.9.0
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
Werkzeug==0.15.4
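If you want to sanity-check these pinned versions before building any images, one optional approach is to install them into a throwaway virtual environment on your workstation (the /tmp/flask-venv path is just an example):

python3 -m venv /tmp/flask-venv                    # throwaway virtual environment (any path works)
/tmp/flask-venv/bin/pip install -r requirements.txt
/tmp/flask-venv/bin/pip freeze                     # should list the pinned versions above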
Step 2
Create a file app01.py which will be a simple Flask web app. Flask is a microframework for Python based on Werkzeug and Jinja2. For more information about Flask read here.
vim app01.py
Add the following to app01.py
from flask import Flask

hello = Flask(__name__)

@hello.route("/")
def greeting():
    return "<h1 style='color:red'>Hello World!</h1>"

if __name__ == "__main__":
    hello.run(host='0.0.0.0')
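If you want a quick sanity check of the app before containerizing it (optional, and assuming Flask is installed locally, for example in the throwaway virtual environment above), you can run the Flask development server directly and hit it with curl:

python3 app01.py                 # Flask dev server, listens on 0.0.0.0:5000 by default
# in another terminal:
curl http://localhost:5000/      # should return the red "Hello World!" heading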
Step 3
Create a file wsgi.py which will be the gateway from the webserver to your app. For more information about WSGI read here.
vim wsgi.py
Add the following to wsgi.py
from app01 import hello

if __name__ == "__main__":
    hello.run()
Step 4
Create a file gunicorn_config.py which will hold the configuration Gunicorn will use. Gunicorn is a Python WSGI HTTP server for UNIX. For more information about Gunicorn read here.
vim gunicorn_config.py
Add the following to gunicorn_config.py
*Note: we set workers to 2 because with only one worker, a slow request can block the worker's heartbeat, causing a timeout that could get the instance removed from a load balancer. Container schedulers also expect logs on stdout and stderr, so accesslog and errorlog are set accordingly. We also use /dev/shm instead of the default /tmp for the worker heartbeat files, so they live in RAM rather than on disk and avoid heartbeat timeouts.
pidfile = 'app01.pid'
worker_tmp_dir = '/dev/shm'
worker_class = 'gthread'
workers = 2
worker_connections = 1000
timeout = 30
keepalive = 2
threads = 4
proc_name = 'app01'
bind = '0.0.0.0:8080'
backlog = 2048
accesslog = '-'
errorlog = '-'
user = 'ubuntu'
group = 'ubuntu'
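The user, group, and worker_tmp_dir values above assume the container environment (an ubuntu user and a Linux /dev/shm). If you want to smoke test Gunicorn with this config outside the container, a hedged option on a Linux host with Gunicorn installed locally is to override the user and group on the command line, since command-line flags take precedence over the config file:

gunicorn -c gunicorn_config.py --user "$(id -un)" --group "$(id -gn)" wsgi:hello
# in another terminal:
curl http://localhost:8080/      # should return the same Hello World page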
Step 5
Create a Dockerfile which contains the commands to build your Python Flask app Docker image. For more information on Dockerfiles read here.
cd ../
vim Dockerfile
Add the following to Dockerfile
*Note: we pull a base Ubuntu 18.04 image with Python 3.7.3 from my Docker Hub repo; you could instead point this to the official Ubuntu image and add a Python image or install line. All the files are copied into the container's /home/ubuntu directory and referenced from there.
FROM nethacker/ubuntu-18-04-python-3:python-3.7.3
COPY src/requirements.txt /root/
RUN pip install -r /root/requirements.txt && useradd -m ubuntu
ENV HOME=/home/ubuntu
USER ubuntu
COPY src/app01.py src/wsgi.py src/gunicorn_config.py /home/ubuntu/
WORKDIR /home/ubuntu/
EXPOSE 8080
CMD ["gunicorn", "-c", "gunicorn_config.py", "wsgi:hello"]
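Optionally, you can confirm this image builds and serves on its own before wiring it up with Compose (flask-app and flask-app-test here are just illustrative names, not something the repo defines):

docker build -t flask-app .                                  # build from the app/ directory
docker run --rm -d -p 8080:8080 --name flask-app-test flask-app
curl http://localhost:8080/                                  # should return the Hello World page
docker stop flask-app-test                                   # container is removed automatically because of --rm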
Step 6
Create an NGINX configuration file that sets up a reverse proxy load balancer in front of your Python Flask application containers.
cd ../nginx/
vim nginx.conf
Add the following to your nginx.conf file for NGINX to act as a reverse proxy load balancer.
events {
    worker_connections 1024;
}

http {
    proxy_headers_hash_max_size 1024;
    proxy_headers_hash_bucket_size 64;

    upstream localhost {
        # References to our app containers, via docker compose
        server app01:8080;
        server app02:8080;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_redirect off;
            proxy_buffers 8 24k;
            proxy_buffer_size 4k;
            proxy_pass http://localhost;
            proxy_set_header Host $host;
        }
    }
}
Step 7
Create a file Dockerfile which contains the commands to build your NGINX Docker image.
vim Dockerfile
Add the following to Dockerfile
FROM nethacker/ubuntu-18-04-nginx:1.17.1
RUN apt-get update && apt-get install -y \
    build-essential \
    curl \
    && rm -rf /var/lib/apt/lists/*
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
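Note that the nginx.conf above references the app01 and app02 hostnames that only exist on the Compose network, so this container will not run usefully on its own; you can still confirm the image builds if you like (flask-nginx is just an illustrative tag):

docker build -t flask-nginx .        # build from the nginx/ directory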
Step 8
Create your Docker Compose YAML file outlining one NGINX container and two backend Python Flask application containers to direct traffic to. It also publishes the NGINX port and increases the shared memory (shm) size for Gunicorn to use.
cd ../
vim docker-compose.yml
Add the following to docker-compose.yml file.
version: '3.7'
services:
  app01:
    shm_size: '1000000000'
    build:
      context: ./app
    tty: true
    volumes:
      - './app/src:/home/ubuntu'
  app02:
    shm_size: '1000000000'
    build:
      context: ./app
    tty: true
    volumes:
      - './app/src:/home/ubuntu'
  nginx:
    build: ./nginx
    tty: true
    links:
      - app01
      - app02
    ports:
      - '80:80'
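Before bringing anything up, you can optionally have Compose validate the file:

docker-compose config        # prints the fully resolved configuration, or an error if the YAML is invalid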
Step 9
Build and start your Docker containers using Docker Compose; you will end up with one NGINX container and two backend Python Flask application containers. The NGINX container listens on port 80 and forwards traffic to the backend apps on port 8080.
docker-compose up --build --detach
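A couple of optional checks to confirm the stack came up cleanly:

docker-compose ps            # should list app01, app02, and nginx as Up
docker-compose logs nginx    # NGINX output
docker-compose logs app01    # Gunicorn output from the first app container (on stdout/stderr per the config)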
Step 10
Test your Docker containers
If you didn’t modify the example you should be able to go to localhost port 80 in a browser and get a red “Hello World” message.
http://localhost
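You can also test from the command line. NGINX spreads repeated requests across app01 and app02, which you can verify by watching each container's Gunicorn access log:

curl -i http://localhost/                          # expect HTTP/1.1 200 OK and the Hello World HTML
for i in 1 2 3 4; do curl -s http://localhost/ > /dev/null; done
docker-compose logs app01 app02                    # both containers should show access log entries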
You can also do the following to access your containers with a bash shell to poke around.
Find the running Docker container id you wish to examine.
docker ps
Get an interactive shell on the container.
docker exec -it {id here} /bin/bash
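For example, the NGINX Dockerfile installs curl, so from inside the nginx container you can hit each backend directly over the Compose network:

curl http://app01:8080/        # first Gunicorn backend
curl http://app02:8080/        # second Gunicorn backend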
Hopefully this basic overview of Docker Compose and multi-containers on a single host/docker engine helps you scale your application.
On a side note, the
docker-compose
command appears to be transitioning to:
docker stack deploy
with the difference that docker-compose can run builds defined in docker-compose.yml and supports the 2.x/3.x spec (with caveats), while docker stack deploy cannot run build commands from docker-compose.yml and needs prebuilt images as well as the 3.x spec. This makes Docker Compose nicer for development, while docker stack deploy is a more production-oriented method for running multiple containers across multiple hosts with Swarm. In both cases docker-compose.yml is used, and Docker silently ignores directives not supported by the respective compose or stack command. I will do a post on Docker Swarm next.
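As a rough, hedged sketch of that Swarm-based flow: it assumes you add image: entries to docker-compose.yml pointing at prebuilt images in a registry your nodes can reach (since stack deploy ignores build: sections), and flaskstack is just a placeholder stack name:

docker swarm init                                     # make this engine a single-node swarm manager
docker-compose build                                  # still build the images locally with Compose
docker-compose push                                   # push them to the registry named in the image: entries
docker stack deploy -c docker-compose.yml flaskstack  # deploy the stack; build: sections are ignored
docker stack services flaskstack                      # check service status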