[Bug]: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE when running 0.7.3.dev57+g2ae88905.precompiled on A100 #13047
Comments
hello, I met with a similar issue.
@youkaichao any thoughts on this?
This looks like a torch and/or FA2 issue - observed it with torch=2.6, CUDA 12.8, transformers 4.48, and FA=2.7.4.post1.
I ran into the same error. Can anyone suggest an nvidia pytorch image that works with vllm 0.7.2?
@kkimmk
Any update?
+1
I accidentally fixed this issue on two different machines. I used version 2.5.8 on one machine because I needed compatibility with an older repository's code. If you want to quickly get your experiments running, you could try our approach of using the precompiled wheel with the …
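For readers trying the pinning route, it can help to confirm exactly which versions actually ended up installed before and after changing anything. A minimal sketch using only the standard library; the package names are the usual PyPI distribution names, and some of them may legitimately be absent in your environment:

```python
# Minimal version check for the packages mentioned in this thread.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("vllm", "torch", "flash-attn", "transformers"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```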
Your current environment
The output of `python collect_env.py`
🐛 Describe the bug
This is a follow-up on #12847. Using the main branch at commit 2ae889052c6d0205ca677052ddb41db96a2a2620, we are facing `ImportError: /usr/local/lib/python3.12/dist-packages/flash_attn_2_cuda.cpython-312-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE`. The details of the env/test are given below. Adding @youkaichao since I suspect #12963 may cause this(?).
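For anyone hitting the same traceback: the mangled name demangles to the `c10::Error` constructor taking a `std::__cxx11::basic_string`, i.e. a symbol exported by PyTorch's `libc10.so`, so this usually means `flash_attn_2_cuda` was built against a PyTorch with a different C++ ABI than the one installed. A minimal diagnostic sketch, assuming a standard torch wheel layout (`torch/lib/libc10.so`); adjust the path if your install differs:

```python
# Sketch: check whether the symbol from the traceback is exported by the
# installed torch's libc10.so.
import ctypes
import os

import torch

# Mangled name copied from the traceback; it demangles to
# c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, ...>).
MANGLED = (
    "_ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112"
    "basic_stringIcSt11char_traitsIcESaIcEEE"
)

print("torch:", torch.__version__)
print("torch built with CXX11 ABI:", torch.compiled_with_cxx11_abi())

libc10_path = os.path.join(os.path.dirname(torch.__file__), "lib", "libc10.so")
libc10 = ctypes.CDLL(libc10_path)

# ctypes raises AttributeError for missing symbols, so hasattr doubles as a lookup.
present = hasattr(libc10, MANGLED)
print("CXX11-ABI c10::Error constructor exported by libc10:", present)

if not present:
    print(
        "libc10 does not export the symbol flash_attn_2_cuda expects; "
        "torch and flash-attn were likely built against different C++ ABIs."
    )
```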
Note
This issue does NOT happen using the 0.7.1 release. On the same machine, same container (nvcr.io/nvidia/pytorch:24.12-py3), changing the installation to `pip install vllm` (or `pip install https://github.com/vllm-project/vllm/releases/download/v0.7.1/vllm-0.7.1-cp38-abi3-manylinux1_x86_64.whl`) works fine.