While we depend on upstream model servers to support proper graceful drain (switching into a mode where the server terminates once all in-flight requests complete, usually bounded by a timeout, though not always on servers handling very long-running requests), our examples and our docs should clearly document and configure the pool members for graceful drain.
I.e. the classic:
- Use a preStop hook to wait for load balancers to stop sending traffic (the right duration depends on the config of the fronting LB)
- Respond to SIGTERM in the model server process (e.g. vLLM) to begin draining, and exit once drain completes
- Optionally let the drain be unbounded for extremely long requests, or for cases where the LB may have extremely long drain periods
- Write good log messages at each stage of shutdown
- Ensure the readiness probe continues to succeed as long as the model server is accepting requests (for scenarios where the service is still taking traffic)
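The checklist above can be sketched as a pod template. Everything here is illustrative, not a tested recommendation: the image tag, sleep duration, grace period, and probe cadence are assumptions to tune per gateway implementation (`/health` on port 8000 matches vLLM's OpenAI-compatible server defaults).

```yaml
# Illustrative pod template for graceful drain; values are assumptions.
spec:
  # Allow long drains; in-flight requests can run for minutes.
  terminationGracePeriodSeconds: 600
  containers:
  - name: vllm
    image: vllm/vllm-openai:latest
    ports:
    - containerPort: 8000
    lifecycle:
      preStop:
        exec:
          # Wait for the fronting LB to stop sending traffic before the
          # kubelet delivers SIGTERM to the model server process.
          command: ["/bin/sh", "-c", "sleep 30"]
    readinessProbe:
      # Keep reporting ready while the server still accepts requests.
      httpGet:
        path: /health
        port: 8000
      periodSeconds: 2
      failureThreshold: 1
```

Note the ordering Kubernetes guarantees: the preStop hook runs to completion first, then SIGTERM is sent, and SIGKILL only arrives after `terminationGracePeriodSeconds`.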
We should work with upstream vLLM to ensure it shuts down gracefully, and the out-of-the-box examples should demonstrate it.
EDIT: vLLM does support drain on TERM:

```
INFO 03-20 14:21:01 [launcher.py:74] Shutting down FastAPI HTTP server.
INFO:     Shutting down
INFO:     Waiting for connections to close. (CTRL+C to force quit)
```
So we are missing preStop in our examples (will test).
vLLM rejects new connections immediately on shutdown, so we should be sleeping in a preStop hook until in-flight requests finish arriving.
We should recommend that gateways probe model servers aggressively, but the correct preStop sleep is that probe interval plus propagation delay (some load balancers take extra time to propagate a probe failure, and the value may change unexpectedly or require experimentation). Each gateway implementation will have to recommend the right sleep interval, but out of the box we should be correct for the set of recommended deployments.
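Under those assumptions the sleep can be computed directly. The numbers below are hypothetical, not a recommendation for any particular gateway:

```yaml
# Hypothetical sizing: readiness probed every 2s with failureThreshold 1,
# plus an assumed ~10s LB propagation delay -> preStop sleep of >= 12s.
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 12"]
```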
I will add a PR with an annotated gpu-deployment that serves as a reference for correct behavior in upstreams.