1 parent 8e074fb commit 02a1149
docs/source/deployment/docker.md
@@ -19,6 +19,8 @@ $ docker run --runtime nvidia --gpus all \
     --model mistralai/Mistral-7B-v0.1
 ```
 
+You can add any other <project:#engine-args> you need after the image tag (`vllm/vllm-openai:latest`).
+
 ```{note}
 You can either use the `ipc=host` flag or `--shm-size` flag to allow the
 container to access the host's shared memory. vLLM uses PyTorch, which uses shared
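As a sketch of what the added sentence describes, an engine argument can be appended after the image tag in the `docker run` invocation. The `--max-model-len` flag and the `-p 8000:8000` port mapping below are illustrative additions not present in the diff; the runtime flags and model name are taken from the hunk shown above.

```shell
# Engine arguments (here --max-model-len, as an example) follow the
# image tag vllm/vllm-openai:latest, after any docker flags.
docker run --runtime nvidia --gpus all \
    --ipc=host \
    -p 8000:8000 \
    vllm/vllm-openai:latest \
    --model mistralai/Mistral-7B-v0.1 \
    --max-model-len 4096
```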