By default, the vLLM scheduler prioritizes prefills and doesn't batch prefill and decode requests into the same batch.
@@ -49,13 +49,12 @@ This policy has two benefits:
 - It improves ITL and generation decode because decode requests are prioritized.
 - It helps achieve better GPU utilization by locating compute-bound (prefill) and memory-bound (decode) requests to the same batch.

-You can tune the performance by changing `max_num_batched_tokens`.
-By default, it is set to 512, which has the best ITL on A100 in the initial benchmark (llama 70B and mixtral 8x22B).
+You can tune the performance by changing `max_num_batched_tokens`. By default, it is set to 2048.

 Smaller `max_num_batched_tokens` achieves better ITL because there are fewer prefills interrupting decodes.
 Higher `max_num_batched_tokens` achieves better TTFT as you can put more prefill to the batch.

 - If `max_num_batched_tokens` is the same as `max_model_len`, that's almost the equivalent to the default scheduling policy (except that it still prioritizes decodes).
-- Note that the default value (512) of `max_num_batched_tokens` is optimized for ITL, and it may have lower throughput than the default scheduler.
+- Note that the default value (2048) of `max_num_batched_tokens` is optimized for ITL, and it may have lower throughput than the default scheduler.

 We recommend you set `max_num_batched_tokens > 2048` for throughput.
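For reference, a minimal sketch of how these knobs are set with vLLM's offline `LLM` API; the model name and prompt are illustrative and not part of this change:

```python
# Minimal sketch of tuning chunked prefill via vLLM's offline LLM API.
# The model name is illustrative; any supported model works the same way.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-7b-hf",   # illustrative model
    enable_chunked_prefill=True,        # enable the chunked prefill scheduler
    max_num_batched_tokens=2048,        # per-step token budget (the knob discussed above)
)

# Raising max_num_batched_tokens (e.g. to 4096) trades ITL for better TTFT and
# throughput, as described in the doc change above.
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```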