docker run Common Parameters and Scenario Examples
The purpose of docker run can be simply understood as: based on an image, create and start a new container.
Minimal syntax:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
This page does not aim to list every parameter. It only keeps the ones I use most often and the combinations I most easily forget.
High-Frequency Parameter Quick Reference
| Parameter | Purpose | Common Example |
|---|---|---|
| -d | Run in background | docker run -d nginx:stable |
| -it | Interactive terminal | docker run -it ubuntu:24.04 bash |
| --rm | Auto-delete container on exit | docker run --rm ubuntu:24.04 bash |
| --name | Specify container name | docker run --name web nginx:stable |
| -p | Publish port | docker run -p 8080:80 nginx:stable |
| -e | Set environment variable | docker run -e APP_ENV=prod my-app:latest |
| --env-file | Read environment variables from a file | docker run --env-file .env my-app:latest |
| -v / --mount | Mount a volume or host directory | docker run -v data:/data redis:7 |
| -w | Set working directory | docker run -w /app node:22 npm test |
| --restart | Restart policy after container exit | docker run --restart unless-stopped ... |
| --network | Attach to a network | docker run --network my-net redis:7 |
| --cpus | Limit CPU | docker run --cpus 2 ... |
| --memory | Limit memory | docker run --memory 2g ... |
| --gpus all | Expose GPUs to the container | docker run --gpus all ... |
| -u | Run as a specific user | docker run -u 1000:1000 ... |
Default Habits I Recommend
- For temporary debugging containers, prefer --rm
- For long-running services, prefer adding --name
- For mounts, prefer --mount in complex scenarios
- When exposing services externally, explicitly write -p rather than guessing
- Long-running services typically add --restart unless-stopped
Scenario 1: Open a Temporary Interactive Shell
docker run --rm -it ubuntu:24.04 bash
Suitable for:
- Trying commands
- Temporarily installing packages for verification
- Quickly inspecting the file structure inside an image
If the image does not have bash:
docker run --rm -it alpine:3.21 sh
Scenario 2: Start a Long-Running Service in the Background
docker run -d \
--name web \
--restart unless-stopped \
-p 8080:80 \
nginx:stable
This set of parameters is essentially the minimal common combination for "deploying a single service":
- -d: Run in the background
- --name web: Give the container a stable name
- --restart unless-stopped: Restart automatically after a machine reboot
- -p 8080:80: Map host 8080 to container 80
If you only want the host itself to access it and don't want to expose it to the LAN or public network:
docker run -d --name web -p 127.0.0.1:8080:80 nginx:stable
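To confirm which interface a published port actually landed on, docker port prints the live bindings (a quick check, assuming the web container above is running):

```shell
# Show the host address/port each container port is published on
docker port web
# 80/tcp -> 127.0.0.1:8080
```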
For more detailed port binding rules, see:
Scenario 3: Mount the Current Directory for Development or Debugging
docker run --rm -it \
--mount type=bind,src="$PWD",target=/workspace \
-w /workspace \
python:3.12 \
bash
In this type of scenario, I prefer --mount. The reason is simple: the fields are clearer, and it's easier to add options like read-only or sub-paths later without making mistakes.
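For comparison, here is the same bind mount written both ways. The two commands are equivalent here; --mount just spells out every field instead of packing them into colon-separated positions:

```shell
# Short form: host path and container path separated by a colon
docker run --rm -it -v "$PWD":/workspace -w /workspace python:3.12 bash

# Long form: the same mount with explicit key=value fields
docker run --rm -it \
  --mount type=bind,src="$PWD",target=/workspace \
  -w /workspace \
  python:3.12 bash
```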
If you only want to view code without letting the container modify host files, add read-only:
docker run --rm -it \
--mount type=bind,src="$PWD",target=/workspace,readonly \
-w /workspace \
python:3.12 \
bash
Scenario 4: Pass Environment Variables to a Container
Write individual variables directly:
docker run --rm -e APP_ENV=prod -e TZ=Asia/Shanghai busybox env
When there are many variables, using a .env file is more reliable:
docker run --rm --env-file .env my-app:latest
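The file format is plain KEY=value lines, one variable per line, with no export keyword and no quoting. A minimal sketch (the variable names are just examples):

```shell
# Write a minimal .env file: one KEY=value per line
cat > .env <<'EOF'
APP_ENV=prod
TZ=Asia/Shanghai
LOG_LEVEL=info
EOF

# Each line becomes one environment variable in the container:
#   docker run --rm --env-file .env busybox env
grep -c '=' .env
```

Note that --env-file does not perform shell expansion; values are passed to the container literally.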
Scenario 5: Mount a Data Volume for Persistence
docker run -d \
--name redis \
--restart unless-stopped \
--mount type=volume,src=redis-data,target=/data \
redis:7
This is much more reliable than leaving data in the container's writable layer. For a more systematic explanation, see:
Scenario 6: Limit Resources
docker run -d \
--name worker \
--cpus 2 \
--memory 4g \
my-worker:latest
If you run multiple services on the same machine, these limits are valuable. At the very least, they prevent a single container from consuming all resources on the machine.
Scenario 7: Using GPU
docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu22.04 nvidia-smi
The prerequisite is not Docker itself, but that the host has the NVIDIA driver and the NVIDIA Container Toolkit correctly installed and configured. Otherwise, the failure you see is usually not a "container problem" but a GPU runtime that was never wired up in the first place.
When to Upgrade from docker run to Compose
If you start encountering the following situations, stop fighting with long docker run commands:
- More than one service
- Need to pin networks, environment variables, and data volumes
- More than one person on the team needs to reproduce the setup
- You've already started writing commands into scripts or notes for repeated copying
At that point, switch to Docker Compose.
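As a rough sketch, the nginx service from scenario 2 translates into a compose file like the following (saved as docker-compose.yml and started with docker compose up -d):

```yaml
services:
  web:
    image: nginx:stable
    container_name: web
    restart: unless-stopped
    ports:
      - "8080:80"
```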