Deployment requirements

  • software requirements (python package dependencies, python version)
  • OS requirements (operating system, system packages, config)
  • hardware/resource requirements (CPU, RAM, storage, GPU, networking: ports, load balancing, etc.)
  1. Dockerfile: the recipe for building the image
  2. docker build . packages the code into an image
  3. docker run <image> runs the container
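
The steps above as shell commands, a minimal sketch assuming Docker is installed and a Dockerfile sits in the current directory (the image name cool-app is illustrative):

```shell
# build an image from the Dockerfile in the current directory, tagged cool-app:latest
docker build -t cool-app:latest .

# run a container from that image, mapping container port 8000 to the host
docker run -p 8000:8000 cool-app:latest

# in another terminal: list running containers to confirm it is up
docker container ls
```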

cheat sheet

command                                 task
docker build .                          build a docker image
docker build -t <image-name>:<tag> .    build with an image name and tag
docker images                           list docker images
docker pull <image-name>:<tag>          pull an image by name and tag
docker run <image-name>:<tag>           run a container from image:tag
docker container ls                     list running containers
docker-compose up                       spin up docker compose services
docker-compose up -d                    spin up in detached mode
docker-compose down                     spin down docker compose services

basic Dockerfile

FROM, COPY, RUN, CMD

FROM python:3
COPY requirements.txt . # copy from the build context into the image's working directory
RUN pip install -r requirements.txt
COPY cool.py .
CMD ["uvicorn", "cool:app", "--reload"] # add --host 0.0.0.0 to accept connections from outside the container

scaling

Docker Compose lets you run multiple containers together. For small workloads, Amazon EC2 plus Docker Compose works well.
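
A minimal docker-compose.yml sketch for the Dockerfile above (the service name and ports are illustrative assumptions):

```yaml
version: "3.8"
services:
  web:
    build: .            # build the image from the local Dockerfile
    ports:
      - "8000:8000"     # host:container port mapping
    restart: unless-stopped
```

Start it with docker-compose up -d and stop it with docker-compose down.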

Docker orchestrators handle discoverability, auto-scaling, and bin packing (distributing containers across multiple servers given CPU and RAM constraints). They range from Docker Swarm to Kubernetes (the industry standard).
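
For comparison, a minimal Kubernetes Deployment sketch (all names and the replica count are illustrative); the resource requests are what the scheduler uses for bin packing across nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cool-app
spec:
  replicas: 3                 # kubernetes keeps 3 copies running
  selector:
    matchLabels:
      app: cool-app
  template:
    metadata:
      labels:
        app: cool-app
    spec:
      containers:
        - name: cool-app
          image: cool-app:latest
          resources:
            requests:         # CPU/RAM the scheduler reserves when placing the pod
              cpu: "250m"
              memory: "256Mi"
```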

Consider Modal (blog post) as an alternative to these files for Python jobs: you specify images, cron jobs, GPUs, etc. in Python, and the code executes in the cloud while output is printed locally.