Docker and Kubernetes: container guide

Containerisation, orchestration and when to use each approach in your infrastructure


Containers have revolutionised how software is deployed and run. Docker packages an application with all its dependencies into a portable unit that works identically across any environment: development, staging, production, cloud or local. Kubernetes orchestrates thousands of containers in production, managing scaling, networking and availability.

This guide explains the fundamental concepts of containerisation, how Docker and Kubernetes complement each other, when Kubernetes is necessary and when it is overkill, and how they compare to serverless.

What are containers?

A container is a lightweight, portable package that includes an application and everything needed to run it: code, runtime, system libraries and configuration. Unlike virtual machines, containers share the host operating system kernel, making them much lighter (MBs vs GBs) and faster to start (seconds vs minutes).

Containerisation solves the "it works on my machine" problem: if the container works in development, it will work in production. This eliminates environment discrepancies caused by different dependency versions, OS configurations or environment variables.

  • Portability: the same container runs on any machine with Docker installed
  • Isolation: each container has its own filesystem, network and processes
  • Lightweight: shares the host kernel without the overhead of a full VM
  • Reproducibility: the Dockerfile defines exactly how the environment is built

Docker: containerisation in practice

Docker is the most popular containerisation platform. A Dockerfile defines step by step how the container image is built: base image (ubuntu, node, python), dependency installation, code copying and start command. The resulting image is stored in a registry (Docker Hub, GitHub Container Registry, AWS ECR) and deployed on any machine.
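As a sketch, a minimal Dockerfile for a hypothetical Node.js service might look like this (the base image, port and file names are illustrative, not a prescription):

```dockerfile
# Base image: official Node.js runtime on Alpine
FROM node:20-alpine

# Working directory inside the image
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application code
COPY . .

# Port the application listens on
EXPOSE 3000

# Command run when the container starts
CMD ["node", "server.js"]
```

Building with `docker build -t myapp .` produces an image that runs identically on any machine with Docker installed.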

Docker Compose lets you define multi-container applications (web + database + cache) in a single YAML file and spin up the entire stack with one command. For local development it is a fundamental tool: every team member gets an identical environment in seconds.
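A web + database + cache stack of the kind described above might be sketched in a docker-compose.yml like this (service names, images and credentials are illustrative):

```yaml
services:
  web:
    build: .            # build the image from the local Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - db
      - cache
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
      REDIS_URL: redis://cache:6379
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data   # persist data across restarts
  cache:
    image: redis:7-alpine

volumes:
  db-data:
```

`docker compose up` starts the whole stack; `docker compose down` tears it down.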

  • Dockerfile: defines the image step by step (base, deps, code, CMD)
  • Docker Compose: local orchestration of multiple containers
  • Registries: Docker Hub, GitHub Container Registry, AWS ECR, GCP Artifact Registry
  • Multi-stage builds: smaller, more secure production images
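The multi-stage build mentioned above can be sketched like this, assuming a Node.js app with a compile step (stage names and paths are illustrative):

```dockerfile
# Stage 1: build — includes dev dependencies and the compiler
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: runtime — only production dependencies and compiled output
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
# Run as the non-root user built into the official Node image
USER node
CMD ["node", "dist/server.js"]
```

Build tooling never reaches the final image, which keeps it smaller and reduces the attack surface.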

Kubernetes: orchestration at scale

Kubernetes (K8s) is a container orchestrator originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It automates the deployment, scaling and management of containerised applications. When you have dozens or hundreds of containers that need to communicate with each other, scale based on demand, automatically restart when they fail and distribute across multiple servers, Kubernetes manages all that complexity.

Key concepts are: Pod (the minimum unit, one or more containers), Deployment (defines how many replicas of a Pod to run), Service (exposes Pods as a network service) and Ingress (manages external HTTP traffic). Kubernetes runs on any cloud (EKS on AWS, GKE on GCP, AKS on Azure) or on-premise.

  • Pod: minimum deployment unit (one or more containers)
  • Deployment: manages replicas, rolling updates and rollbacks
  • Service: exposes Pods internally or externally with load balancing
  • Ingress: HTTP/HTTPS routing with TLS and path rules
  • Helm: package management for Kubernetes (charts)
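As a sketch, the Deployment and Service concepts above could be expressed as manifests like these (names, labels, image and ports are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # Kubernetes keeps three Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: ghcr.io/example/web:1.0.0
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                 # routes traffic to Pods with this label
  ports:
    - port: 80
      targetPort: 3000
```

`kubectl apply -f` creates both resources; changing the image tag in the Deployment triggers a rolling update with no downtime.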

When do you need Kubernetes?

Kubernetes adds significant operational complexity: a cluster needs monitoring, version updates, networking management, security configuration and a team with specific knowledge. For a small team with few services, Kubernetes can be overkill.

Kubernetes makes sense when: you have more than 5–10 services that need to communicate, you need per-service auto-scaling, you require zero-downtime deployments (rolling updates, canary), or your team already has K8s experience. For simpler cases, Docker Compose on a server or PaaS platforms (Railway, Render) are more manageable alternatives.

  • Yes K8s: >5–10 microservices, independent scaling, experienced team
  • No K8s: <5 services, small team, limited ops budget
  • Alternatives: Docker Compose + server, Railway, Render, Fly.io, ECS Fargate

Containers vs serverless

Containers and serverless are not mutually exclusive; they solve problems at different points on the control vs abstraction spectrum. Containers give full control over the execution environment, with no time limits or cold starts, and portability across providers. Serverless eliminates infrastructure management entirely, but comes with execution constraints and less portability.

The trend is convergence: AWS Fargate runs containers without managing servers (serverless containers), GCP Cloud Run runs Docker containers in a serverless fashion, and Fly.io offers a similar model. These services combine container portability with serverless operational simplicity.

  • Containers: full control, no time limits, portable, but require management
  • Serverless: zero management, but with cold starts, limits and less portability
  • Serverless containers: Fargate, Cloud Run, Fly.io — the best of both

Containerisation best practices

Use official, minimal base images (Alpine, Distroless) to reduce the attack surface and image size. Implement multi-stage builds to separate build and runtime dependencies. Do not run processes as root inside the container: create a non-privileged user.
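The non-root recommendation can be sketched in a Dockerfile like this (the user and group names are illustrative):

```dockerfile
FROM python:3.12-slim
WORKDIR /app

# Create a non-privileged system user and group
RUN groupadd --system app && useradd --system --gid app app

# Copy the code owned by that user
COPY --chown=app:app . .

# All subsequent instructions and the running process use this user
USER app
CMD ["python", "main.py"]
```

If the container is compromised, the attacker lands in an unprivileged process rather than as root.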

Make images immutable: do not modify a running container, deploy a new version. Scan images for vulnerabilities (Trivy, Snyk) before deploying them. And keep Dockerfiles simple and documented: a well-written Dockerfile is executable documentation of your environment.

  • Minimal base images: Alpine, Distroless, official slim images
  • Multi-stage builds: separate build from runtime for smaller images
  • Non-root: run processes with a non-privileged user
  • Vulnerability scanning: Trivy, Snyk, Docker Scout
  • One concern per container: do not bundle multiple processes

Key Takeaways

  • Containers package applications with their dependencies for total portability
  • Docker is the standard tool for creating and managing containers
  • Kubernetes orchestrates containers at scale but adds significant operational complexity
  • Serverless containers (Fargate, Cloud Run) combine portability and operational simplicity
  • Use minimal images, multi-stage builds and vulnerability scanning as best practices

Need to containerise or orchestrate your applications?

We help you design the right containerisation strategy for your scale, team and budget.