ExLlamaV2: exl2 support #3203
Comments
@pabl-o-ce What specifically would you like to see? vLLM already integrates kernels from exllamav2; see, for example, the GPTQ kernels in vllm/csrc/quantization/gptq/qdq_4.cuh (as of commit 05af6da).
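For context, a minimal sketch of how a GPTQ checkpoint, which already runs on those exllamav2-derived kernels, can be served through vLLM's Python API today; the model repository below is just one public example:

```python
from vllm import LLM, SamplingParams

# Load a GPTQ-quantized checkpoint; vLLM dispatches to its exllamav2-derived
# GPTQ kernels under the hood. The repo ID is only an illustrative example.
llm = LLM(model="TheBloke/Llama-2-7B-Chat-GPTQ", quantization="gptq")

outputs = llm.generate(
    ["What is weight quantization?"],
    SamplingParams(temperature=0.7, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```

The request in this thread is for the same kind of first-class loading path for exl2 checkpoints, which use variable (fractional) bits per weight rather than a single fixed bit width.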
Hi @mgoin, thanks for the response. So is it possible to use exl2 models with vLLM?
Hi, is exl2 properly supported? How do I start the Docker container correctly to serve exl2 models?
Hi @mgoin, I think the feature submitted by @chu-tianxiang in #2330 and #916 just reuses those kernels, so exl2 models still aren't properly supported. @tobiajung
We are also interested in the exl2 dynamic-precision format. What steps would be needed to support it?
Support for this would allow running the exl2-quantized weights of the DBRX model, enabling inference on a dual 24 GB GPU system.
Hello there! Has any more thought/attention been given to the idea of exl2 support? The newest derivatives of Llama 3 (such as Dolphin 70B) use it, and it seems no one else is quantizing them to AWQ or GPTQ. I love vLLM regardless! Thank you guys for all the work you put in.
Hi, we are also interested in the EXL2 format, which is quite flexible and fast. As for flexibility, you can use 3.2, 4.5, or 8.5 bpw (bits per weight) to quantize a model. And the inference speed of EXL2 is much faster than GPTQ at 8-bit precision.
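For comparison, a rough sketch of how exl2 checkpoints are typically run today with the exllamav2 library itself, outside vLLM; it follows the pattern of the exllamav2 example scripts, and the model path (including its bpw) is a placeholder:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Placeholder path to a local exl2 quantization (any bpw: 3.2, 4.5, 8.5, ...).
config = ExLlamaV2Config()
config.model_dir = "/models/Llama-3-70B-Instruct-4.5bpw-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the KV cache as layers load
model.load_autosplit(cache)               # split the model across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("The EXL2 format lets you pick", settings, 64))
```

Native support in vLLM would mean loading the same checkpoint through the usual engine rather than through a separate inference stack.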
Fully agree. Supporting exl2 would be perfect!
I would also love to see exl2 support
I would love to see EXL2 support in vLLM!
exl2 is a needed feature, 100%
I support it too
any update?
Also voicing my support!
vLLM is great. Would like to see exl2 support too!
yeah this would be great to have
Would love to see this!
+1. EXL2 quants are unbeatable
Is there any chance to see exl2? 👀
can we get this? no one is making AWQ and GPTQ quants anymore :(
exl2 would be nice 😃
@javAlborz it would!
+1!
vLLM's CLI is my favorite so far because it just works; the API is also better than Tabby's.
+1. Most new models are in GGUF and EXL2, and the inference quality at the same size is pretty good.
Give exl2 support pls
Although I am waiting for exl2 support myself, the flood of +1 messages really doesn't help.
we are waiting for exl2 support!
Make vLLM great again.
+1
I think the problem here is that exl2 is TurboDerp's fork of GPTQ that allows altering the size of the head bits. This is far more complex to implement in vLLM, and it is the reason why TabbyAPI is just a Python Flask wrapper around the exllamav2 library, using torch directly (which is not a scalable way to host inference). That design is an anti-pattern to how vLLM serves its inference engine, and I am not sure how, or if, they will figure out the implementation.

Many people are making AWQ quants, and when you don't find one it is usually because a model's engineering team decided to alter the model design and deviate from established standards, forcing the AutoAWQ community to write and test yet another custom kernel before releasing. In defence of the exl2 project, you can quantize very large models using several NVIDIA 24 GB VRAM GPUs (still not working on ROCm), which I haven't had much success doing with AutoAWQ.

I would wholeheartedly argue that AWQ quants of models deliver more precise results and faster inference than any comparable exl2/GPTQ format, and I can attest to this, as I have made several thousand AWQ quants under the solidrust org on Hugging Face. I am also an author of several evaluation frameworks (most recently Uncertainty Quantification, arXiv:2401.12794) that have routinely compared AWQ to exl2, which informs my conclusion.
@suparious vLLM + AWQ takes a LOT more VRAM than exllama, though. In my testing I was able to load models almost twice as big using Tabby, and the inference speed was around the same. That's why I stopped using vLLM, even though I think it has a nicer API.
@suparious There are (experimental) optimizations to the layer bit distribution of the exl2 format which, in my testing, produce output much better than the quantizer in the main branch. I might upload some of the models I use to HF later so you can try them out; one is QwQ-32B-4bpw, which is super good, I almost can't feel the quality loss.
@suparious What do you think of the upcoming EXL3 format? Early results seem promising, and support was just added to text-generation-webui.
I am testing it now, and exllama3 is very exciting. For my needs I don't go lower than FP8, which vLLM can do natively. vLLM now supports Llama 4 on day 0, which was very exciting to see. I don't have the VRAM to run Llama 4, so I am looking into maybe using a 4-bit EXL3 quant with the head bits set to at least 6 bits, which is something that is missing in vLLM right now.
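As a point of reference for the FP8 path mentioned above, a minimal sketch assuming a GPU and vLLM build with FP8 support; the model ID is only an example:

```python
from vllm import LLM, SamplingParams

# On-the-fly FP8 weight quantization of an unquantized checkpoint
# (assumes FP8-capable hardware, e.g. Hopper/Ada, and a recent vLLM build).
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct", quantization="fp8")

out = llm.generate(
    ["Briefly explain FP8 quantization."],
    SamplingParams(max_tokens=64),
)
print(out[0].outputs[0].text)
```

What the comments above are asking for is an analogous loading path for exl2/EXL3 checkpoints below 8 bits per weight (a hypothetical quantization="exl2" option, which does not exist today).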
If possible, please add support for ExLlamaV2; it is a very fast and good library for running LLMs.
ExLlamaV2 Repo