
Commit 7cb0f4f

SachinVarghese authored and rasmith committed
Update default max_num_batch_tokens for chunked prefill (vllm-project#11694)
1 parent 6e10c9a commit 7cb0f4f

File tree

1 file changed: +4 -5 lines changed


docs/source/usage/performance.md

Lines changed: 4 additions & 5 deletions
@@ -32,8 +32,8 @@ You can enable the feature by specifying `--enable-chunked-prefill` in the comma
 ```python
 llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_chunked_prefill=True)
 # Set max_num_batched_tokens to tune performance.
-# NOTE: 512 is the default max_num_batched_tokens for chunked prefill.
-# llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_chunked_prefill=True, max_num_batched_tokens=512)
+# NOTE: 2048 is the default max_num_batched_tokens for chunked prefill.
+# llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_chunked_prefill=True, max_num_batched_tokens=2048)
 ```
 
 By default, vLLM scheduler prioritizes prefills and doesn't batch prefill and decode to the same batch.
@@ -49,13 +49,12 @@ This policy has two benefits:
 - It improves ITL and generation decode because decode requests are prioritized.
 - It helps achieve better GPU utilization by locating compute-bound (prefill) and memory-bound (decode) requests to the same batch.
 
-You can tune the performance by changing `max_num_batched_tokens`.
-By default, it is set to 512, which has the best ITL on A100 in the initial benchmark (llama 70B and mixtral 8x22B).
+You can tune the performance by changing `max_num_batched_tokens`. By default, it is set to 2048.
 Smaller `max_num_batched_tokens` achieves better ITL because there are fewer prefills interrupting decodes.
 Higher `max_num_batched_tokens` achieves better TTFT as you can put more prefill to the batch.
 
 - If `max_num_batched_tokens` is the same as `max_model_len`, that's almost the equivalent to the default scheduling policy (except that it still prioritizes decodes).
-- Note that the default value (512) of `max_num_batched_tokens` is optimized for ITL, and it may have lower throughput than the default scheduler.
+- Note that the default value (2048) of `max_num_batched_tokens` is optimized for ITL, and it may have lower throughput than the default scheduler.
 
 We recommend you set `max_num_batched_tokens > 2048` for throughput.
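For readers who want to try the updated default, here is a minimal offline-inference sketch based on the options shown in the diff (assuming vLLM is installed and the model weights are available; the value 8192 below is only an illustrative choice above the recommended 2048 threshold, not something prescribed by this commit):

```python
from vllm import LLM, SamplingParams

# Chunked prefill with the new default (2048 batched tokens) favors ITL,
# since fewer prefill chunks interrupt ongoing decodes.
# Raising max_num_batched_tokens above 2048 trades some ITL for better
# TTFT/throughput, per the updated doc's recommendation.
llm = LLM(
    model="meta-llama/Llama-2-7b-hf",
    enable_chunked_prefill=True,
    max_num_batched_tokens=8192,  # illustrative value > 2048 for throughput
)

sampling = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Explain chunked prefill in one sentence."], sampling)
print(outputs[0].outputs[0].text)
```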
