Commit 2c3618b

terrytangyuan authored and mzusman committed
[Doc] Add instructions on using Podman when SELinux is active (vllm-project#12136)
Signed-off-by: Yuan Tang <[email protected]>
1 parent ff79f99 commit 2c3618b

1 file changed: +3 −0 lines changed


docs/source/deployment/docker.md

Lines changed: 3 additions & 0 deletions
@@ -42,6 +42,9 @@ DOCKER_BUILDKIT=1 docker build . --target vllm-openai --tag vllm/vllm-openai
 By default vLLM will build for all GPU types for widest distribution. If you are just building for the
 current GPU type the machine is running on, you can add the argument `--build-arg torch_cuda_arch_list=""`
 for vLLM to find the current GPU type and build for that.
+
+If you are using Podman instead of Docker, you might need to disable SELinux labeling by
+adding `--security-opt label=disable` when running `podman build` command to avoid certain [existing issues](https://github.com/containers/buildah/discussions/4184).
 ```
 
 ## Building for Arm64/aarch64
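The added note amounts to passing one extra flag to `podman build`. A minimal sketch of the full command on an SELinux-enabled host, assuming the same `vllm-openai` target and `vllm/vllm-openai` tag used in the Docker example above; combining it with the optional `--build-arg torch_cuda_arch_list=""` argument is an illustration, not part of this commit:

```bash
# Sketch: Podman equivalent of the Docker build command shown in the hunk header,
# for a host with SELinux enforcing.
# --security-opt label=disable turns off SELinux labeling for the build containers.
# torch_cuda_arch_list="" asks vLLM to detect and build only for the local GPU type
# (optional; assumed here for illustration).
podman build . \
    --target vllm-openai \
    --tag vllm/vllm-openai \
    --security-opt label=disable \
    --build-arg torch_cuda_arch_list=""
```

Note that `label=disable` only disables SELinux label separation for the containers Podman creates during this build; it does not change the host's SELinux mode.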
