
Commit 02a1149

hmellor authored and jikunshang committed
Explain where the engine args go when using Docker (vllm-project#12041)
Signed-off-by: Harry Mellor <[email protected]>
1 parent 8e074fb commit 02a1149

File tree: 1 file changed (+2, −0 lines)


docs/source/deployment/docker.md

Lines changed: 2 additions & 0 deletions
````diff
@@ -19,6 +19,8 @@ $ docker run --runtime nvidia --gpus all \
     --model mistralai/Mistral-7B-v0.1
 ```
 
+You can add any other <project:#engine-args> you need after the image tag (`vllm/vllm-openai:latest`).
+
 ```{note}
 You can either use the `ipc=host` flag or `--shm-size` flag to allow the
 container to access the host's shared memory. vLLM uses PyTorch, which uses shared
````
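The added sentence can be illustrated with a full invocation. The sketch below assumes a host with the NVIDIA Container Toolkit; the volume mount, port mapping, and `--max-model-len 4096` are illustrative additions not shown in the commit itself, and the key point is simply that engine args go after the image tag.

```shell
# Anything after the image tag (vllm/vllm-openai:latest) is passed to the
# vLLM engine as engine args, e.g. --model and --max-model-len here.
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model mistralai/Mistral-7B-v0.1 \
    --max-model-len 4096
```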
