
Why isn't Pascal supported? #774


Closed
atorsvn opened this issue Aug 17, 2023 · 2 comments

Comments


atorsvn commented Aug 17, 2023

I tried to install vLLM and got a message saying that nothing under CUDA Compute 7.0 can be used.

WoosukKwon (Collaborator) commented:

Hi @atorsvn, thanks for your interest. Unfortunately, vLLM currently does not support GPU architectures before Volta.
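
For context, Volta corresponds to CUDA compute capability 7.0, which matches the "nothing under CUDA Compute 7.0" message in the original report. Below is a minimal sketch of how you could check your GPU's compute capability locally with PyTorch; this is a generic check rather than vLLM's own installation code, and the function name is made up for illustration:

```python
import torch

def meets_vllm_min_capability(min_major: int = 7) -> bool:
    """Report whether the current GPU meets a minimum CUDA compute capability.

    Volta GPUs (e.g. V100) report capability 7.0; Pascal GPUs
    (e.g. P100, GTX 10xx) report 6.x, below vLLM's stated minimum.
    """
    if not torch.cuda.is_available():
        print("No CUDA device detected.")
        return False
    major, minor = torch.cuda.get_device_capability()
    print(f"Detected compute capability: {major}.{minor}")
    return major >= min_major

if __name__ == "__main__":
    print("Supported:", meets_vllm_min_capability())
```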

aisensiy (Contributor) commented:

It seems that FlashAttention is not available on Volta. Would you mind sharing how vLLM works on a V100 GPU? Thanks.
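
For readers with the same question: vLLM ships its own PagedAttention CUDA kernels, so FlashAttention is not a hard requirement. As a hedged sketch, assuming a recent vLLM release where the VLLM_ATTENTION_BACKEND environment variable exists, a non-FlashAttention backend can be forced like this:

```python
import os

# Assumption: recent vLLM releases honor this variable; on pre-Ampere
# GPUs such as V100, vLLM falls back to a non-FlashAttention backend.
os.environ["VLLM_ATTENTION_BACKEND"] = "XFORMERS"

from vllm import LLM

# Small model used purely for illustration.
llm = LLM(model="facebook/opt-125m")
print(llm.generate("Hello, my name is")[0].outputs[0].text)
```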

yma11 pushed a commit to yma11/vllm that referenced this issue on Feb 25, 2025:

    mul scale input in factor = 448/240

    Co-authored-by: Michał Kuligowski <[email protected]>