
Commit 833f942

Configure the vllm deployment with best practices for startup
We want to recommend best practices for deployments of model servers under an InferencePool. Use the need to gracefully drain without client-visible errors during rollout ("hitless" updates) to annotate the YAML with strong opinions on best practices. This configuration was experimentally verified against the GKE Inference Gateway, whose drain propagation times should be longer than those of other gateways.
1 parent 03d8584 commit 833f942

File tree

1 file changed: +136 −8 lines changed

config/manifests/vllm/gpu-deployment.yaml

@@ -46,26 +46,103 @@ spec:
         - containerPort: 8000
           name: http
           protocol: TCP
+        lifecycle:
+          preStop:
+            # vLLM stops accepting connections when it receives SIGTERM, so we need to sleep
+            # to give upstream gateways a chance to take us out of rotation. The time we wait
+            # depends on how long it takes for all upstreams to completely remove us from
+            # rotation. Older or simpler load balancers might take upwards of 30s, but we expect
+            # our deployment to run behind a modern gateway like Envoy, which is designed to
+            # probe for readiness aggressively.
+            sleep:
+              # Upstream gateway health probes should be set to a low period, such as 5s,
+              # and the tighter we can make that bound the faster we release
+              # accelerators during controlled shutdowns. However, we should expect variance,
+              # as load balancers may have internal delays, and we don't want to drop requests
+              # under normal conditions, so we usually aim to set this value to the p99 propagation
+              # latency from readiness change -> load balancer taking the backend out of rotation,
+              # not the average.
+              #
+              # This value is generally stable but often must be experimentally determined
+              # for a given load balancer and health check period. We set the value here to
+              # the highest value we observe on a supported load balancer, and we recommend
+              # tuning this value down and verifying no requests are dropped.
+              #
+              # If this value is updated, be sure to update terminationGracePeriodSeconds.
+              seconds: 25
         livenessProbe:
-          failureThreshold: 240
           httpGet:
             path: /health
             port: http
             scheme: HTTP
-          initialDelaySeconds: 5
-          periodSeconds: 5
+          # vLLM's health check is simple, so we can probe it more aggressively. Liveness
+          # check endpoints should always be suitable for aggressive probing.
+          periodSeconds: 1
           successThreshold: 1
+          # vLLM has a very simple health implementation, which means that any failure is
+          # likely significant. However, any liveness-triggered restart requires the very
+          # large core model to be reloaded, so we should bias towards making sure the
+          # server is definitely unhealthy rather than restarting immediately. Use 5 attempts
+          # as evidence of a serious problem.
+          failureThreshold: 5
           timeoutSeconds: 1
         readinessProbe:
-          failureThreshold: 600
           httpGet:
             path: /health
             port: http
             scheme: HTTP
-          initialDelaySeconds: 5
-          periodSeconds: 5
+          # vLLM's health check is simple, so we can probe it more aggressively. Readiness
+          # check endpoints should always be suitable for aggressive probing, but may be
+          # slightly more expensive than liveness probes.
+          periodSeconds: 1
           successThreshold: 1
+          # vLLM has a very simple health implementation, which means that any failure is
+          # likely significant.
+          failureThreshold: 1
           timeoutSeconds: 1
+        # We set a startup probe so that we don't begin directing traffic to this instance
+        # until the model is loaded.
+        startupProbe:
+          # The failure threshold is the point at which we believe startup will not happen at
+          # all, and is set to the maximum possible time we believe loading a model will take.
+          # In our default configuration we are downloading a model from HuggingFace, which may
+          # take a long time, and then the model must load into the accelerator. We choose
+          # 10 minutes as a reasonable maximum startup time before giving up and attempting
+          # to restart the pod.
+          #
+          # IMPORTANT: If the core model takes more than 10 minutes to load, pods will crash
+          # loop forever. Be sure to set this appropriately.
+          failureThreshold: 600
+          # Keep the initial delay low so that if the base model changes to something smaller
+          # or an optimization is deployed, we don't wait unnecessarily.
+          initialDelaySeconds: 2
+          # Because a startup probe stops running once it succeeds, we can probe even a
+          # moderately complex startup more aggressively - this is a very important workload.
+          periodSeconds: 1
+          exec:
+            # Verify that our core model is loaded before we consider startup successful.
+            # /health starts returning true very early in vLLM startup, but we want to
+            # only consider ourselves started up once the model has been loaded.
+            #
+            # vLLM should implement a readiness check that is only true once the model
+            # can begin serving, and then this can be switched to an httpGet probe.
+            # https://github.com/kubernetes-sigs/gateway-api-inference-extension/issues/558
+            command:
+            - /bin/bash
+            - -c
+            - |
+              set -eu
+              if ! models="$( curl -q http://0.0.0.0:8000/v1/models )"; then
+                echo "server not responding"
+                exit 1
+              fi
+              if ! echo "${models}" | grep -q "$1"; then
+                echo "model not found"
+                exit 1
+              fi
+              echo "ok"
+            - ''
+            - '"id":"meta-llama/Llama-2-7b-hf"'
         resources:
           limits:
             nvidia.com/gpu: 1
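
The sleep lifecycle handler above relies on Kubernetes support for the native sleep preStop action (the PodLifecycleSleepAction feature). On clusters where that handler is not available, roughly the same drain window can be approximated with an exec hook. A minimal sketch, assuming the container image ships /bin/sleep:

    lifecycle:
      preStop:
        exec:
          # Approximates the 25s drain window on clusters without the native sleep
          # lifecycle handler. Assumes /bin/sleep exists in the container image.
          command: ["/bin/sleep", "25"]
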
@@ -92,8 +169,59 @@ spec:
         - name: config-volume
           mountPath: /config
       restartPolicy: Always
-      schedulerName: default-scheduler
-      terminationGracePeriodSeconds: 30
+
+      # Generally, the termination grace period needs to last longer than the slowest request
+      # we expect to serve plus any extra time spent waiting for load balancers to take the
+      # model server out of rotation.
+      #
+      # An easy starting point is the p99 or max request latency measured for your workload,
+      # although LLM request latencies vary significantly if clients send longer inputs or
+      # trigger longer outputs. Since steady state p99 will be higher than the latency
+      # to drain a server, you may wish to slightly lower this value, either experimentally
+      # or via the calculation below.
+      #
+      # For most models you can derive an upper bound for the maximum drain latency as
+      # follows:
+      #
+      # 1. Identify the maximum context length the model was trained on, or the maximum
+      #    allowed length of output tokens configured on vLLM (llama2-7b was trained to
+      #    4k context length, while llama3-8b was trained to 128k).
+      # 2. Output tokens are more compute intensive to calculate, and the accelerator
+      #    will have a maximum concurrency (batch size) - the time per output token at
+      #    maximum batch with no prompt tokens being processed is the slowest an output
+      #    token can be generated (for this model it would be about 100ms TPOT at a max
+      #    batch size around 50).
+      # 3. Calculate the worst case request duration if a request starts immediately
+      #    before the server stops accepting new connections - generally when it receives
+      #    SIGTERM (for this model that is about 4096 / 10 ~ 40s).
+      # 4. Any requests still processing prompt tokens will delay when those output tokens
+      #    start, and prompt token generation is roughly 6x faster than compute-bound
+      #    output token generation, so add 20% to the time from above (40s + 16s ~ 55s).
+      #
+      # Thus we think it will take us at worst about 55s to complete the longest possible
+      # request the model is likely to receive at maximum concurrency (highest latency)
+      # once requests stop being sent.
+      #
+      # NOTE: This number will be lower than steady state p99 latency since we stop receiving
+      #       new requests which require continuous prompt token computation.
+      # NOTE: The max timeout for backend connections from gateway to model servers should
+      #       be configured based on steady state p99 latency, not drain p99 latency.
+      #
+      # 5. Add the time the pod sleeps in its preStop hook while load balancers stop sending
+      #    us new requests (55s + 25s ~ 80s).
+      #
+      # Because the termination grace period controls when the kubelet forcibly terminates a
+      # stuck or hung process (a possibility due to a GPU crash), there is operational safety
+      # in keeping the value roughly proportional to the time to finish serving. There is also
+      # value in adding a bit of extra time to deal with unexpectedly long workloads.
+      #
+      # 6. Add a 50% safety buffer to this time since the operational impact should be low
+      #    (80s * 1.5 ~ 120s).
+      terminationGracePeriodSeconds: 120
+
       volumes:
       - name: data
         emptyDir: {}
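
Once vLLM exposes a readiness endpoint that only succeeds after the model is loaded (the issue referenced in the startup probe comment above), the exec-based check could be replaced with an httpGet probe. A minimal sketch, where the /ready path is purely a placeholder for whatever endpoint that issue produces:

    startupProbe:
      httpGet:
        # Placeholder path - a model-loaded readiness endpoint does not exist in vLLM yet.
        path: /ready
        port: http
        scheme: HTTP
      failureThreshold: 600
      initialDelaySeconds: 2
      periodSeconds: 1
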

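Hitless rollouts also depend on the Deployment's update strategy: if old replicas are removed before their replacements are Ready, the drain settings above cannot prevent client-visible errors. A minimal sketch of a conservative strategy block for this Deployment, assuming the cluster has one spare GPU available for a surge pod (this block is not part of the diff above):

    spec:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          # Bring a replacement pod up and wait for its startup probe to pass before
          # removing an old pod, so serving capacity never drops during a rollout.
          maxSurge: 1
          maxUnavailable: 0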