By default, vLLM builds for all GPU types for the widest distribution. If you are only building for the GPU type of the machine you are building on, you can add the argument `--build-arg torch_cuda_arch_list=""` so that vLLM detects the current GPU type and builds for that.
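For example, a build limited to the local GPU architecture might look like the following (the image tag `vllm-local` is illustrative; adjust the path to your checkout of the repository):

```shell
# Build only for the GPU architecture present on this machine.
# An empty torch_cuda_arch_list tells the build to auto-detect it.
DOCKER_BUILDKIT=1 docker build . \
  --build-arg torch_cuda_arch_list="" \
  -t vllm-local
```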
If you are using Podman instead of Docker, you might need to disable SELinux labeling by adding `--security-opt label=disable` when running the `podman build` command to avoid certain [existing issues](https://github.com/containers/buildah/discussions/4184).
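The equivalent Podman invocation, with SELinux labeling disabled, might look like this (again, the `vllm-local` tag is illustrative):

```shell
# Podman build with SELinux labeling disabled to work around the
# buildah issue linked above; auto-detect the local GPU architecture.
podman build . \
  --security-opt label=disable \
  --build-arg torch_cuda_arch_list="" \
  -t vllm-local
```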