
docker run Common Parameters and Scenario Examples

docker run can be understood simply as: create and start a new container from an image.

Minimal syntax:

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

This page does not aim to list every parameter. It only keeps the ones I use most often and the combinations I most easily forget.

High-Frequency Parameter Quick Reference

| Parameter | Purpose | Common Example |
| --- | --- | --- |
| -d | Run in background | docker run -d nginx:stable |
| -it | Interactive terminal | docker run -it ubuntu:24.04 bash |
| --rm | Auto-delete container on exit | docker run --rm ubuntu:24.04 bash |
| --name | Specify container name | docker run --name web nginx:stable |
| -p | Publish port | docker run -p 8080:80 nginx:stable |
| -e | Set environment variable | docker run -e APP_ENV=prod my-app:latest |
| --env-file | Read environment variables from file | docker run --env-file .env my-app:latest |
| -v / --mount | Mount data volume or host directory | docker run -v data:/data redis:7 |
| -w | Specify working directory | docker run -w /app node:22 npm test |
| --restart | Restart policy after container exits | docker run --restart unless-stopped ... |
| --network | Specify network | docker run --network my-net redis:7 |
| --cpus | Limit CPU | docker run --cpus 2 ... |
| --memory | Limit memory | docker run --memory 2g ... |
| --gpus all | Expose GPU to container | docker run --gpus all ... |
| -u | Specify run user | docker run -u 1000:1000 ... |

Default Habits I Recommend

  • For temporary debugging containers, prefer --rm
  • For long-running services, prefer adding --name
  • For mounts, prefer --mount in complex scenarios
  • When exposing a service externally, write the -p mapping explicitly rather than leaving it implicit
  • Long-running services typically add --restart unless-stopped

Scenario 1: Open a Temporary Interactive Shell

docker run --rm -it ubuntu:24.04 bash

Suitable for:

  • Trying commands
  • Temporarily installing packages for verification
  • Quickly inspecting the file structure inside an image

If the image does not have bash:

docker run --rm -it alpine:3.21 sh

Scenario 2: Start a Long-Running Service in the Background

docker run -d \
--name web \
--restart unless-stopped \
-p 8080:80 \
nginx:stable

This set of parameters is essentially the minimal common combination for "deploying a single service":

  • -d: Run in background
  • --name web: Give the container a stable name
  • --restart unless-stopped: Restart the container automatically after a daemon restart or machine reboot, unless it was manually stopped
  • -p 8080:80: Map host 8080 to container 80

If you only want the host itself to access it and don't want to expose it to the LAN or public network:

docker run -d --name web -p 127.0.0.1:8080:80 nginx:stable

For more detailed port binding rules, see:

Scenario 3: Mount the Current Directory for Development or Debugging

docker run --rm -it \
--mount type=bind,src="$PWD",target=/workspace \
-w /workspace \
python:3.12 \
bash

In this type of scenario, I prefer --mount. The reason is simple: the fields are clearer, and it's easier to add options like read-only or sub-paths later without making mistakes.
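One detail in the command above worth calling out: src="$PWD" is quoted. If the current path contains a space, an unquoted $PWD splits the --mount argument into several words and the command fails. A quick sketch of the single argument docker actually receives (the /tmp path is purely illustrative):

```shell
# Illustrative only: a working directory containing a space.
workdir="/tmp/demo project"
mkdir -p "$workdir"
cd "$workdir"
# Thanks to the quotes, this stays one intact argument:
printf '%s\n' "type=bind,src=$PWD,target=/workspace"
# -> type=bind,src=/tmp/demo project,target=/workspace
```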

If you only want to view code without letting the container modify host files, add read-only:

docker run --rm -it \
--mount type=bind,src="$PWD",target=/workspace,readonly \
-w /workspace \
python:3.12 \
bash

Scenario 4: Pass Environment Variables to a Container

Write individual variables directly:

docker run --rm -e APP_ENV=prod -e TZ=Asia/Shanghai busybox env

When there are many variables, using a .env file is more reliable:

docker run --rm --env-file .env my-app:latest
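The format --env-file expects is one KEY=VALUE pair per line: no export, no quotes around values (quotes are passed through literally), and lines starting with # are ignored. A hypothetical .env for my-app:latest might look like:

```shell
# Write a sample .env (hypothetical values) and show it.
cat > .env <<'EOF'
# deployment settings
APP_ENV=prod
TZ=Asia/Shanghai
EOF
cat .env
```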

Scenario 5: Mount a Data Volume for Persistence

docker run -d \
--name redis \
--restart unless-stopped \
--mount type=volume,src=redis-data,target=/data \
redis:7

This is much more reliable than leaving data in the container's writable layer. For a more systematic explanation, see:

Scenario 6: Limit Resources

docker run -d \
--name worker \
--cpus 2 \
--memory 4g \
my-worker:latest

If you run multiple services on the same machine, these limits are valuable. At the very least, they prevent a single container from consuming all resources on the machine.

Scenario 7: Using a GPU

docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu22.04 nvidia-smi

The prerequisite here is not Docker itself but the host: the NVIDIA driver and the NVIDIA Container Toolkit must be installed and configured correctly. Otherwise, the failure you see is typically not a container problem at all, but the GPU runtime simply not being hooked up.

When to Upgrade from docker run to Compose

If you start encountering the following situations, stop fighting with long docker run commands:

  • More than one service
  • Need to pin networks, environment variables, and data volumes
  • More than one person on the team needs to reproduce the setup
  • You've already started writing commands into scripts or notes for repeated copying

At that point, switch to Docker Compose.
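As a taste of the payoff, here is a minimal sketch of Scenario 2 as a compose file (the file name and service key are up to you):

```yaml
# compose.yaml -- hypothetical equivalent of Scenario 2
services:
  web:
    image: nginx:stable
    restart: unless-stopped
    ports:
      - "8080:80"
```

Start it with docker compose up -d; the long command line becomes a file you can diff, review, and share.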