
ExLlamaV2: exl2 support #3203


Open
pabl-o-ce opened this issue Mar 5, 2024 · 37 comments · May be fixed by #11348
Labels: feature request

Comments

@pabl-o-ce

If possible, please add support for ExLlamaV2; it is a very fast and good library for running LLMs.

ExLlamaV2 Repo

@mgoin
Member

mgoin commented Mar 5, 2024

@pabl-o-ce What specifically would you like to see? vLLM already integrates kernels from exllamav2 - see things such as the GPTQ kernels

Copied from https://github.com/turboderp/exllamav2
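
For context, a minimal sketch (not from this thread) of the path mgoin is describing: loading a GPTQ checkpoint through vLLM's Python API, which is where the exllamav2-derived GPTQ kernels are exercised. The model name below is only an illustrative placeholder.

```python
# Minimal sketch: serving a GPTQ quant with vLLM's offline Python API.
# The repo name is a placeholder; any GPTQ checkpoint should behave the same.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-7B-Chat-GPTQ",  # placeholder GPTQ repo
    quantization="gptq",                    # usually auto-detected from the model config
)
outputs = llm.generate(
    ["The exllamav2-derived kernels handle"],
    SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```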

@pabl-o-ce
Author

Hi @mgoin, thanks for the response.

So is it possible to use the exl2 format in vLLM, or does it only use the GPTQ format?
Sorry if this is a very n00b question.

@tobiajung

Hi,
it seems like vLLM 0.3.3 has exl2 support, but I'm not able to get a model up and running. I use the Docker environment with the following args: --model LoneStriker/CodeFuse-DeepSeek-33B-4.0bpw-h6-exl2 --gpu-memory-utilization 0.65 --max-model-len 2048
But it seems like vLLM tries to allocate much more memory than the given 0.65 (of 48 GB), which results in an error.

Is exl2 properly supported? How do I start the Docker container correctly to run exl2 models?
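
For reference, a hedged sketch of what those two flags control, expressed through vLLM's Python API with a checkpoint in a supported format (exl2 repos are not recognized, as the next comment explains); the parameter names are real vLLM options, the model name is a placeholder.

```python
# Hedged sketch: Python-API equivalents of --gpu-memory-utilization and --max-model-len.
# gpu_memory_utilization caps the fraction of GPU memory vLLM may claim for
# weights plus KV cache; max_model_len bounds the pre-allocated KV cache.
from vllm import LLM

llm = LLM(
    model="some-org/some-gptq-or-awq-model",  # placeholder; exl2 checkpoints are not recognized
    gpu_memory_utilization=0.65,              # use at most 65% of each GPU's memory
    max_model_len=2048,                       # maximum sequence length
)
```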

@wxupjack

Hi @mgoin, I think the feature submitted by @chu-tianxiang in #2330 and #916 just uses the shuffle and dequant functions from the exllamav2 repo for GPTQ. That doesn't mean vLLM (main branch) is compatible with the dynamic-precision exl2 format.

So it still doesn't support exl2 properly. @tobiajung

@sapountzis

We are also interested in the exl2 dynamic-precision format. What steps would be needed to support it?

@nkeilar

nkeilar commented Apr 5, 2024

Support for this would allow using exl2-quantized weights for the DBRX model, enabling inference on a dual 24 GB GPU system.

@zminer123

Hello there! Has any more thought/attention been given to the idea of exl2 support? The newest derivatives of Llama 3 (such as Dolphin 70B) use it, and it seems no one is quantizing them to AWQ or GPTQ. I love vLLM regardless! Thank you guys for all the work you put in.

@saucebing

Hi, we are also interested in the EXL2 format, which is quite flexible and fast. As for flexibility, you can use e.g. 3.2, 4.5, or 8.5 bpw (bits per weight) to quantize a model, and the inference speed of EXL2 is much faster than GPTQ at 8-bit precision.
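
To make the flexibility concrete, a rough back-of-the-envelope sketch (my own arithmetic, weights only; KV cache and runtime overhead are ignored):

```python
# Approximate weight footprint at the fractional bits-per-weight values exl2 supports.
def approx_weight_gib(n_params_billion: float, bpw: float) -> float:
    """Weights-only estimate in GiB for a model quantized to `bpw` bits per weight."""
    return n_params_billion * 1e9 * bpw / 8 / 2**30

for bpw in (3.2, 4.5, 8.5):
    print(f"70B model at {bpw} bpw ≈ {approx_weight_gib(70, bpw):.1f} GiB")
```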

@houmie

houmie commented Apr 28, 2024

Fully agree. Supporting exl2 would be perfect!

@belladoreai

I would also love to see exl2 support

@mku-wedoai

I would love to see EXL2 support in vLLM!

@DenisSergeevitch

exl2 is a needed feature, 100%.

@sparsh35

I support it too

@chopin1998

Any update?
Really hope to get perfect support for the exl2 format ASAP.

@meditans

meditans commented Jun 9, 2024

Also voicing my support!

@kulievvitaly

vLLM is great. Would like to see exl2 support too!

@Respaired

yeah this would be great to have

@paulb-seldon

Would love to see this!

@rjmehta1993

+1. EXL2 quants are unbeatable.

@fablerq

fablerq commented Sep 2, 2024

Is there any chance to see exl2? 👀

@hmellor added the feature request label on Sep 20, 2024
@DaBossCoda

Can we get this? No one is making AWQ and GPTQ quants anymore :(

@javAlborz

exl2 would be nice 😃

@alkeryn

alkeryn commented Dec 9, 2024

@javAlborz it would !

@SlapDrone

+1!

@alkeryn

alkeryn commented Dec 13, 2024

vLLM's CLI is my favorite so far because it just works, and the API is better than Tabby's.
But god, exl2 is better than AWQ.

@Originalimoc

Originalimoc commented Dec 19, 2024

+1. Most new models are released in GGUF and EXL2, and the inference quality at the same size is pretty good.

@drexample

Give exl2 support pls

@rsxdalv

rsxdalv commented Dec 19, 2024

Although I am waiting for exl2 support myself, the amount of +1 messages really doesn't help.
If you truly wish to add a +1, state your reasons so that the developers or forkers have a real incentive.
Yes, for inactive issues a +1 might amplify your voice and is better than nothing, but a stream of +1s can really be worse than nothing. The probability that the devs have unsubscribed from this issue is fairly high.

@AlpinDale linked a pull request (#11348) on Dec 20, 2024 that will close this issue
@JohnConnor123

we are waiting for exl2 support!

@dclipca

dclipca commented Feb 5, 2025

Make vLLM great again.

@zyssyz123

Make vLLM great again.

@agahEbrahimi

+1

@suparious

I think the problem here is that exl2 is a TurboDerp fork of GPTQ that allows altering the size of the head bits. This is far more complex to implement in vLLM, and it is the reason why TabbyAPI is just a Python Flask wrapper around the exllamav2 library, using torch (which is not a scalable way to host inference). This design is an anti-pattern to how vLLM serves its inference engine, and I am not sure how, or if, they will figure out the implementation.

Many people are making AWQ quants, and when you don't find one it is usually because some model engineering team decided to alter their model design and deviate from established standards, causing the AutoAWQ community to react by implementing yet another custom kernel to write and test before releasing. In defence of the exl2 project, you can quantize very large models using several NVIDIA 24 GB VRAM GPUs (still not working on ROCm), which I haven't had much success doing with AutoAWQ.

I would wholeheartedly argue that AWQ quants of models produce more precise results and faster inference than any comparable exl2/GPTQ format, and I can attest to this, as I have made several thousand AWQ quants under the solidrust org on Hugging Face. I am also an author of several evaluation frameworks (most recently Uncertainty Quantification, from arXiv:2401.12794) that have routinely compared AWQ to exl2, which informs my conclusion.
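
For anyone following along, a hedged sketch of the AWQ path described above, using vLLM's Python API (the repo name is a placeholder; tensor_parallel_size is a real vLLM option for splitting a model across GPUs such as the dual 24 GB setups mentioned earlier):

```python
# Hedged sketch: loading an AWQ quant with vLLM, sharded across two GPUs.
from vllm import LLM

llm = LLM(
    model="solidrust/some-awq-model",  # placeholder; substitute a real AWQ repo
    quantization="awq",
    tensor_parallel_size=2,            # shard weights and KV cache across two GPUs
)
```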

@alkeryn

alkeryn commented Mar 23, 2025

@suparious vLLM + AWQ takes a LOT more VRAM than ExLlama, though.

In my testing I was able to load models almost twice as big using Tabby, and the inference speed was about the same.

That's why I stopped using vLLM, even though I think it has a nicer API.

@Originalimoc

Originalimoc commented Mar 25, 2025

@suparious There are (experimental) optimizations to the layer bit distribution of the exl2 format which, in my testing, produce much better output than the quantizer in the main branch. I might upload some of the models I use to HF later so you can try them out; one is a QwQ-32B 4bpw quant which is super good, I can barely feel the quality loss.

@jsboige

jsboige commented Apr 7, 2025


@suparious What do you think of the upcoming EXL3 format? Early results seem promising, and support was just added to text-generation-webui.

@suparious

I am testing it now, and exllama3 is very exciting. For my needs I don't go lower than FP8, which vLLM can do natively. vLLM now supports Llama 4 on day 0, which was very exciting to see. I don't have the VRAM to run Llama 4, so I am looking into maybe using a 4-bit EXL3 quant with the head bits set to at least 6 bits, which is something that is missing in vLLM right now.
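
As an illustration of the FP8 path mentioned above, a hedged sketch using vLLM's native on-the-fly FP8 quantization (quantization="fp8" is a real vLLM option, though it needs FP8-capable hardware for the fast path; the model name is a placeholder):

```python
# Hedged sketch: loading an unquantized checkpoint and letting vLLM quantize it to FP8.
from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder unquantized repo
    quantization="fp8",
)
```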
