Install vLLM failed with `pip install -e .`, PyTorch dependency confusion? #1283
Hi, I failed to install vLLM with `pip install -e .`. The error messages are shown below. I have no idea what causes this; even though I pre-installed pytorch==2.0.1+cu118, it still fails. My driver version is 520.61.05 and my CUDA version is 11.8. I can't install vLLM in editable mode, yet installation succeeds with `pip install vllm`.
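One common cause of this exact split (regular install works, editable build fails) is pip's build isolation: the editable build runs in a fresh environment that resolves its own torch, which may not match the pre-installed 2.0.1+cu118. A minimal workaround sketch, assuming build isolation is the culprit here:

```bash
# Reuse the torch already installed in the current environment
# instead of letting the isolated build fetch its own copy.
pip install -e . --no-build-isolation
```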
After updating my driver to 535.104.12 (the latest version), I got a different set of error messages. It's weird: vLLM is not supposed to support CUDA 12.1, so why does it depend on CUDA 12.1?

After installing CUDA 12.2, vLLM installed successfully but fails to run. I also tried installing vLLM in the nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04 Docker image and got the same issue.
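To see which CUDA version each layer of the stack actually reports, these standard commands help (a debugging sketch, not from the thread; note that `torch.version.cuda` is the toolkit the wheel was built against, which can differ from the driver's supported CUDA shown by nvidia-smi):

```bash
# Driver version and the highest CUDA version the driver supports
nvidia-smi
# CUDA toolkit on the PATH, if any
nvcc --version
# torch version and the CUDA toolkit the installed wheel was built with
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```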
Comments

Due to the release of PyTorch 2.1.0, the torch version has been locked to 2.0.1 in the pyproject.toml file.
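Given that pin, one way to align the environment before building is to install the matching CUDA 11.8 build of torch first (a sketch; the cu118 index URL is PyTorch's official wheel index, not something given in this thread):

```bash
# Install the pinned torch version built against CUDA 11.8,
# then build vLLM against it without build isolation.
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118
pip install -e . --no-build-isolation
```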
Landed here with the same errors as OP. What version of torch is required now for vLLM? I'm trying to use vLLM built from source, and I'm currently on torch 2.1.2.
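The authoritative answer lives in the source checkout itself; assuming the checkout contains pyproject.toml and a requirements file (as vLLM's repository does), the current pin can be read directly:

```bash
# List every torch pin in the build configuration
grep -n "torch" pyproject.toml requirements*.txt
```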