Custom Images and Dockerfile Practices

This page is not about "can Docker do X" but about "which approach should I use so I don't leave a mess for the next person." Here's the conclusion up front:

  • For temporary snapshots, docker commit works
  • When you need repeatable builds, delivery, or collaboration, prefer writing a Dockerfile
  • Don't make "set a root password" your default way to get elevated access inside a container

First, Distinguish 3 Common Needs

1. Temporary Debugging Environment

You just want to spin up a container to try commands, install packages, or edit files. Run:

docker run --rm -it ubuntu:24.04 bash

2. You've Already Manually Tweaked the Container and Want to Save a Snapshot

You can use:

docker commit my-debug-container my-debug-image:temp

This is suitable for:

  • Temporarily preserving a troubleshooting state
  • Keeping a one-time experiment state
  • Quickly creating a snapshot for personal use only

This is not suitable for:

  • Team collaboration
  • Repeatable builds
  • Long-term maintenance
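If you do take a snapshot, annotating it makes the image less mysterious later. A minimal sketch, with hypothetical container and image names:

```shell
# Record why the snapshot exists; -m sets a commit message, -a an author
docker commit \
  -m "debug tools installed while investigating a crash" \
  -a "yourname" \
  my-debug-container my-debug-image:temp
```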

3. You Need to Reproduce the Environment Reliably

This is when you should use a Dockerfile.

This example leans toward an "Ubuntu tool environment" and is more reproducible than manually tweaking inside a container:

FROM ubuntu:22.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y --no-install-recommends \
    bash \
    ca-certificates \
    curl \
    git \
    vim \
    wget \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /workspace

CMD ["bash"]

Build and run:

docker build -t ubuntu-tools:22.04 .
docker run --rm -it ubuntu-tools:22.04

Also Prepare a .dockerignore

Many slow builds, oversized image contexts, and accidental inclusion of caches and large files happen because there is no .dockerignore.

A common minimal example:

.git
node_modules
dist
build
__pycache__
*.log
*.tar
*.zip
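To see why this matters: docker sends the entire build context to the daemon as a tar archive before the build even starts. This self-contained sketch (all paths hypothetical) simulates what excluding a `node_modules` directory saves:

```shell
# Build a fake project: one large dependency dir, one small source file
mkdir -p demo/node_modules demo/src
head -c 1048576 /dev/zero > demo/node_modules/big.bin
echo 'console.log("hi")' > demo/src/app.js

# full.tar is what the daemon would receive without a .dockerignore;
# slim.tar is the context after excluding node_modules
tar -cf full.tar -C demo .
tar -cf slim.tar --exclude='node_modules' -C demo .
ls -l full.tar slim.tar
```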

Dockerfile Writing Patterns I Prefer by Default

Pin the Base Image Version

Instead of:

FROM ubuntu:latest

I prefer:

FROM ubuntu:22.04

This reduces version drift and makes troubleshooting more reproducible.
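When you need even stronger pinning, a tag can be combined with a content digest. The digest below is a placeholder, not a real value; look yours up with `docker images --digests`:

```dockerfile
# Placeholder: replace <digest> with the real sha256 value
# reported by `docker images --digests`
FROM ubuntu:22.04@sha256:<digest>
```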

Combine apt-get update and Installation in a Single RUN

RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*

Combining them in a single layer means the package index is never cached separately from the install step, and deleting the apt lists in the same layer keeps them out of the final image.
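For contrast, this is the anti-pattern the combined RUN avoids:

```dockerfile
# Anti-pattern: if the update layer is cached but the install line changes,
# apt resolves against a stale package index and downloads can 404
RUN apt-get update
RUN apt-get install -y curl
```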

Use USER When You Need Non-root Execution

RUN useradd -m app
USER app
WORKDIR /home/app
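If the container will also write to bind mounts, the same pattern can pin the UID/GID so files created inside map to a predictable host user. A sketch, where 1000 is an assumption you should match to the actual host user:

```dockerfile
# 1000 is an assumed host UID/GID -- match it to the user who owns the mount
RUN groupadd -g 1000 app && useradd -m -u 1000 -g app app
USER app
WORKDIR /home/app
```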

If you only occasionally need root access to a running container, don't set a root password. Instead, just:

docker exec -u root -it my-container bash

When Multi-stage Builds Are Worth It

If the "build environment" and "runtime environment" differ, multi-stage builds are usually worthwhile.

For example, a static frontend build:

FROM node:22 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:stable-alpine
COPY --from=build /app/dist /usr/share/nginx/html

The benefits are:

  • Smaller final image
  • No node_modules in the runtime image
  • Cleaner security surface
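To confirm the size win, build and compare the final image against the build-stage base (the `site` tag is hypothetical):

```shell
docker build -t site:latest .
# The nginx-based final image should be far smaller than node:22
docker image ls site:latest
docker image ls node:22
```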

The Right Place for docker commit

If you've completed a round of debugging inside a container, you can certainly save a snapshot like this:

docker commit my-container my-image:debug
docker image ls

But afterward, you should return to a Dockerfile and write the steps you actually need to keep into the build process.

Otherwise, over time you'll only remember "this image works" but won't be able to explain how it was made.
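The difference is visible in `docker history`: an image built from a Dockerfile lists an entry per instruction, while a committed snapshot shows a single opaque layer with no record of the commands that produced it.

```shell
# Dockerfile-built image: instructions are visible layer by layer
docker history ubuntu-tools:22.04
# Committed snapshot: one anonymous layer, no useful provenance
docker history my-image:debug
```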

Exporting and Importing Images

Export

docker save -o my-image.tar my-image:1.0

Import

docker load -i my-image.tar

This workflow is suitable for:

  • Transferring images between offline machines
  • Making one-time backups
  • Moving images in environments without direct access to image registries
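When transfer size matters, the archive compresses well, and `docker load` reads compressed archives directly:

```shell
# Compress on export; image tarballs usually shrink substantially
docker save my-image:1.0 | gzip > my-image.tar.gz
# load accepts gzip/bzip2/xz archives without manual decompression
docker load -i my-image.tar.gz
```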

Copying Files Between Host and Container

docker cp ./local-file.txt my-container:/tmp/local-file.txt
docker cp my-container:/var/log/app.log ./app.log

If your goal is "long-term file retention," prefer volumes or bind mounts rather than leaving files inside the container.
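For example, a bind mount keeps the files on the host from the start, so nothing needs to be copied out afterward (the `data` path is hypothetical):

```shell
# Host directory ./data appears inside the container at /workspace/data;
# anything written there survives the container
docker run --rm -it -v "$PWD/data:/workspace/data" ubuntu:24.04 bash
```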

Re-entering a Running Container

docker exec -it my-container bash

If bash is not available:

docker exec -it my-container sh

My Default Decision Order

  1. Just experimenting: docker run --rm -it
  2. Need to preserve the state: docker commit
  3. Need reproducibility and collaboration: write a Dockerfile
  4. Need multi-service orchestration: go straight to Compose