I’ve been trying to get familiar with Docker and containers recently, mostly through Brian Holt’s Complete Intro to Containers, v2 course on Frontend Masters, with the Docker documentation for reference.

I’ve always been quite intimidated by Docker but, as Brian says, containers are

more simple than you think they are

They’re just the combination of three Linux features:

  • chroot - restricts which files a process can access by changing its apparent root directory
  • Namespaces - restrict what a process can see of the rest of the system (other processes, network interfaces, etc.)
  • cgroups - limit the resources (CPU, memory, etc.) a group of processes can use
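
To make those less abstract, here’s roughly what the raw primitives look like when driven by hand on a Linux machine (a minimal sketch, assuming root privileges and cgroups v2 - the paths and values are illustrative):

# chroot: jail a shell so that /my-new-root appears as /
chroot /my-new-root sh

# namespaces: give a process its own PID, mount and network view
unshare --pid --mount --net --fork sh

# cgroups v2: cap everything in a "sandbox" group at half a CPU
mkdir /sys/fs/cgroup/sandbox
echo "50000 100000" > /sys/fs/cgroup/sandbox/cpu.max
echo $$ > /sys/fs/cgroup/sandbox/cgroup.procs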

Docker is just a platform for running and working with containers, and Docker Hub is a registry of pre-made images.

Some common Docker commands I’ve come across:

  • docker image ls shows all the images you have
  • docker container ls shows your running containers (add --all to include stopped ones)
  • docker container prune removes all stopped containers
  • docker pull downloads an image from a registry
  • docker exec runs a command inside a running container
  • docker search searches Docker Hub for images
  • docker run -it --rm node:20 cat /etc/issue shows the OS version for this Node.js container

Running JS on Docker

There are Docker images for just about every language you can imagine (Python, JavaScript etc.).

For example, docker run -it --rm node:20 pulls the node image tagged 20 from Docker Hub and puts you directly into the REPL for Node.js version 20 (the -it flags make the session interactive, and --rm instructs Docker to remove the container once it has finished running).

Docker tags

Note: the :20 above and things like :latest are Docker tags. Tags are versions. If you don’t specify a tag, Docker defaults to :latest.
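
For example, these are all valid tags of the node image on Docker Hub (exact tag availability varies over time):

docker pull node:latest      # the most recent release
docker pull node:20          # the latest Node.js 20.x release
docker pull node:20-alpine   # Node.js 20 on Alpine Linux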

Dockerfiles

A Dockerfile is simply a manifest of what is to go into an image. Each instruction in a Dockerfile creates a layer, and layers can be cached.

Steps to create your own Docker image:

  1. Create a Dockerfile (no extension)
  2. Use FROM to set a base image and what you’d like to be in it
# Base image  
FROM node:20   
# CMD is the command run when the container starts - typically the last line in a Dockerfile  
CMD ["node", "-e", "console.log('Hello your dudeness')"]  
  3. Run docker build --tag blah-app . (Now you have an image named blah-app - the image ID will be a hash)
  4. Run docker run --name boing blah-app to run your container (Note: here boing is the container name and blah-app is the name of the image to run)
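
If everything worked, that run prints Hello your dudeness and exits. You can confirm the image and the (now stopped) container exist:

docker image ls        # lists blah-app
docker container ls -a # lists the stopped boing container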

Running a simple JavaScript server in Docker

OK. Let’s start by crafting a super simple Node.js server in an index.js file.

const http = require("http");

http.createServer(function (request, response) {
    console.log("Request received");
    response.end("Hi!", "utf-8")
}).listen(3000)

console.log("Server started");

We then bring this into the image using COPY in our Dockerfile - this assumes the Dockerfile and index.js are in the same directory:

FROM node:20  
  
COPY index.js index.js  
  
CMD ["node", "index.js"]

We can then run a command to build and name our image

docker build --tag blah-app .

And, having done this, we can create a container and publish its port (--publish 3000:3000 maps port 3000 on your machine to port 3000 in the container; --init runs a small init process as PID 1 so that signals are handled properly and you can Ctrl+C out of it)

docker run --init --name blah-app --publish 3000:3000 --rm blah-app

At this point we have a Node server running on port 3000 🎉
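
You can test it from another terminal:

curl http://localhost:3000
# Hi!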

Improving security and file management

FROM node:20  

# node is a user on all Node containers but you could 
# run a useradd command before this to create your own 
# this line prevents root access (for security)
USER node  

# place the file somewhere the node user owns - COPY runs  
# as root, so --chown hands ownership to the node user  
COPY --chown=node index.js /home/node/code/index.js  
  
CMD ["node", "/home/node/code/index.js"]

Running shell commands within the container

  1. First get the container name using docker ps. It’ll be in the NAMES column
  2. Run docker exec -it [Container name or ID] bash
  3. Run your commands
  4. Exit with Ctrl+D

You could also run node for example with docker exec -it [Container name or ID] node
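
For example, with the container from earlier still running (output abbreviated, IDs will differ):

docker ps
# CONTAINER ID   IMAGE      ...   NAMES
# f3b1c2a9d8e7   blah-app   ...   blah-app

docker exec -it blah-app bash
ls /home/node/code
# index.js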

Using dependencies in Docker

Note: Don’t include your node_modules folder in your image. Dependencies installed on macOS can mismatch what Linux expects (particularly native modules), so install them inside the container instead.

Create a JavaScript file (index.js) that uses a dependency, in a folder alongside your package.json and package-lock.json:

const fastify = require("fastify")({ logger: true });

fastify.get("/", function handler(request, reply) {
    reply.send({ hello: "world" });
})

fastify.listen({ port: 8080, host: "0.0.0.0" }, (err) => {
    if (err) {
        fastify.log.error(err);
        process.exit(1);
    }
});

Create a .dockerignore file and add node_modules to it

node_modules

Then update the Dockerfile:
# Set base container  
FROM node:20  
  
# Equivalent to `cd`  
WORKDIR /home/node/code  
  
# Copy files from the current directory into the image  
COPY --chown=node . .  

# Install the dependencies
RUN npm ci  
  
# Run node from the current directory  
CMD ["node", "index.js"]

Then we build and name our image

docker build -t blah-app . 

Then we run our container

docker run -it -p 8080:8080 --name blah-app --rm --init blah-app
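
Again, we can check it from another terminal:

curl http://localhost:8080
# {"hello":"world"}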

Layers

The instructions in a Dockerfile are known as layers. Each layer can be cached. By being smart about ordering, you can prevent things like npm ci happening again unnecessarily when building images.

For example, if we only want the npm ci layer to run when our dependencies change (rather than when our project files change), we can structure our Dockerfile as below. Because package.json and package-lock.json are copied separately from the other files, the npm ci step only re-runs when those files change - edits to anything else reuse the cached layer.

FROM node:20 

USER node 

RUN mkdir /home/node/code 

WORKDIR /home/node/code 

COPY --chown=node:node package-lock.json package.json ./ 

RUN npm ci 

COPY --chown=node:node . . 

CMD ["node", "index.js"]

Making smaller containers

You should really only include the things in your container that are necessary (though don’t obsess over this - it doesn’t matter much). Alpine Linux is a Linux distribution that aims to be as small as possible, and it’s a really popular base image.

This can be as simple as using the -alpine variant of the base image tag.

# Set base container  
FROM node:20-alpine  
  
# Set the user which is provided by node containers  
USER node  
  
# Equivalent to `cd`  
WORKDIR /home/node/code  
  
# Copy the project files into the working directory in the image  
COPY --chown=node . .  
  
RUN npm ci  
  
# Run node from the current directory  
CMD ["node", "index.js"]

Making your own Node container

FROM alpine:3.19

RUN apk add --update nodejs npm

RUN addgroup -S node && adduser -S node -G node

USER node

RUN mkdir /home/node/code

WORKDIR /home/node/code

# We do this to prevent Docker from running npm ci every 
# time a project file changes
COPY --chown=node:node package-lock.json package.json ./

RUN npm ci

COPY --chown=node:node . .

CMD ["node", "index.js"]

Multi-stage builds

In Docker, it’s possible to do multi-stage builds. This is handy where, for example, we need a container with additional tools to build our dist directory and a smaller container to run the server.

It’s worth noting that only the final stage ends up in the image - anything produced in earlier stages is thrown away unless you copy it across with COPY --from.

# build stage
FROM node:20 AS node-builder
RUN mkdir /build
WORKDIR /build
COPY package-lock.json package.json ./
RUN npm ci
COPY . .

# runtime stage
FROM alpine:3.19
RUN apk add --update nodejs
RUN addgroup -S node && adduser -S node -G node
USER node
RUN mkdir /home/node/code
WORKDIR /home/node/code
COPY --from=node-builder --chown=node:node /build .
CMD ["node", "index.js"]

Other Docker features

Docker Scout

Allows you to inspect your images for vulnerabilities. It can be run either through the CLI or the Docker Desktop UI.
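
For example, from the CLI (assuming Scout is enabled for your Docker account):

docker scout quickview blah-app   # summary of known vulnerabilities
docker scout cves blah-app        # detailed CVE listing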

Bind mounts

Bind mounts allow you to access your local files from within a Docker container. They’re like a window into your local computer from Docker.

For example, let’s imagine I’m working locally and want to serve the files in my /dist directory with Nginx. I could run this command - no need to create a Dockerfile:

docker run --mount type=bind,source="$(pwd)"/dist,target=/usr/share/nginx/html -p 8080:80 nginx:latest

For example, I was able to display the HTML for my blog by pointing to the _site directory in Jekyll

docker run --mount type=bind,source="$(pwd)"/_site,target=/usr/share/nginx/html -p 8080:80 nginx:latest

Volumes

Typically, state within a container is ephemeral and is lost when the container is removed.

Volumes are intended for state that should persist between runs and can be shared between containers.
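
For example (a minimal sketch - the volume and path names are arbitrary):

docker volume create app-data
docker run -it --rm --mount type=volume,source=app-data,target=/data node:20 bash
# anything written to /data survives after the container is removed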

Multi container projects

Docker Compose

Allows us to coordinate multiple containers, all described in a single YAML file

Docker Compose is perfect for local development involving multiple containers. It can also be used in production where there are only, say, five container groups. For larger projects, you’d probably want to use Kubernetes (aka k8s, pronounced “kates”)

An example docker-compose.yml

services:
  api:
    build: api # Directory for the API (i.e. where the Dockerfile is)  
    ports:
      - "8080:8080"
    links:
      - db # api talks to db on the network  
    environment:
      MONGO_CONNECTION_STRING: mongodb://db:27017 # sets an env var  
  db:
    image: mongo:7 # run this image from Docker Hub as is 
  web:
    build: web
    environment:
      API_URL: http://api:8080 # env var  
    ports:
      - "8081:80"

Run docker compose up --build (the --build flag forces the images to be rebuilt - without it, Compose will reuse previously built images)
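
A few companion commands that are handy here:

docker compose up -d      # run in the background (detached)
docker compose logs api   # view logs for one service
docker compose down       # stop and remove the containers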

Kubernetes

Kubernetes is the next level up in terms of difficulty. It is built for Google-level stuff. I have no intention of learning this, but here are a few Kubernetes concepts.

Fundamental concepts:

  • The control plane - the brain of your cluster (sometimes referred to as the “master node”)
  • Nodes - the deploy targets in Kubernetes (usually a VM of some kind)
  • Pod - a group of containers that can’t run separately from each other (e.g. a web server and its sidecar)
  • Service - a stable network endpoint in front of a group of pods
  • Deployment - describes the desired state for a set of pods and handles rolling it out