
Commit ef19e67

[Doc] Add headings to improve gptqmodel.md (#17164)
Signed-off-by: windsonsea <[email protected]>
1 parent a41351f commit ef19e67

File tree: 1 file changed, +11 -0 lines changed

docs/source/features/quantization/gptqmodel.md

Lines changed: 11 additions & 0 deletions
````diff
@@ -16,12 +16,16 @@ GPTQModel is one of the few quantization toolkits in the world that allows `Dyna
 is fully integrated into vLLM and backed up by support from the ModelCloud.AI team. Please refer to [GPTQModel readme](https://github.com/ModelCloud/GPTQModel?tab=readme-ov-file#dynamic-quantization-per-module-quantizeconfig-override)
 for more details on this and other advanced features.

+## Installation
+
 You can quantize your own models by installing [GPTQModel](https://github.com/ModelCloud/GPTQModel) or picking one of the [5000+ models on Huggingface](https://huggingface.co/models?sort=trending&search=gptq).

 ```console
 pip install -U gptqmodel --no-build-isolation -v
 ```

+## Quantizing a model
+
 After installing GPTQModel, you are ready to quantize a model. Please refer to the [GPTQModel readme](https://github.com/ModelCloud/GPTQModel/?tab=readme-ov-file#quantization) for further details.

 Here is an example of how to quantize `meta-llama/Llama-3.2-1B-Instruct`:
````
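The body of that quantization example falls outside the hunks shown here; only the `model.quantize` and `model.save` calls appear in the next hunk. For context, a minimal sketch of the full flow, assuming the `QuantizeConfig`/`GPTQModel.load` API from the GPTQModel readme and an illustrative calibration dataset:

```python
# Sketch of the quantization flow; the calibration-data choice is illustrative.
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

model_id = "meta-llama/Llama-3.2-1B-Instruct"
quant_path = "Llama-3.2-1B-Instruct-gptqmodel-4bit"

# A small calibration set; any representative text corpus works.
calibration_dataset = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train",
).select(range(1024))["text"]

# 4-bit weights with group size 128 is the common GPTQ configuration.
quant_config = QuantizeConfig(bits=4, group_size=128)

model = GPTQModel.load(model_id, quant_config)

# Increase batch_size to match available VRAM and speed up quantization.
model.quantize(calibration_dataset, batch_size=2)

model.save(quant_path)
```

The checkpoint saved to `quant_path` is what vLLM loads in the sections that follow.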
````diff
@@ -49,12 +53,16 @@ model.quantize(calibration_dataset, batch_size=2)
 model.save(quant_path)
 ```

+## Running a quantized model with vLLM
+
 To run an GPTQModel quantized model with vLLM, you can use [DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2](https://huggingface.co/ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2) with the following command:

 ```console
 python examples/offline_inference/llm_engine_example.py --model DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2
 ```

+## Using GPTQModel with vLLM's Python API
+
 GPTQModel quantized models are also supported directly through the LLM entrypoint:

 ```python
````
````diff
@@ -67,14 +75,17 @@ prompts = [
     "The capital of France is",
     "The future of AI is",
 ]
+
 # Create a sampling params object.
 sampling_params = SamplingParams(temperature=0.6, top_p=0.9)

 # Create an LLM.
 llm = LLM(model="DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2")
+
 # Generate texts from the prompts. The output is a list of RequestOutput objects
 # that contain the prompt, generated text, and other information.
 outputs = llm.generate(prompts, sampling_params)
+
 # Print the outputs.
 for output in outputs:
     prompt = output.prompt
````
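The last hunk shows the `LLM` entrypoint example only in part. A self-contained version, restricted to the prompts visible in the hunk and assuming the standard `LLM`/`SamplingParams` API from `vllm` (the final print loop is an assumption, not part of this diff):

```python
from vllm import LLM, SamplingParams

# Prompts visible in the hunk above; the original list may contain more.
prompts = [
    "The capital of France is",
    "The future of AI is",
]

# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.6, top_p=0.9)

# Create an LLM backed by the GPTQModel-quantized checkpoint.
llm = LLM(model="DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2")

# Generate texts from the prompts; each result is a RequestOutput containing
# the prompt, generated text, and other metadata.
outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```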

0 commit comments