[Frontend] support AWS SageMaker inference id #12652


Closed
docs/source/serving/openai_compatible_server.md (2 changes: 1 addition & 1 deletion)
@@ -123,7 +123,7 @@ completion = client.chat.completions.create(

## Extra HTTP Headers

-Only `X-Request-Id` HTTP request header is supported for now. It can be enabled
+Only the `X-Request-Id` and `X-Amzn-SageMaker-Inference-Id` HTTP request headers are supported for now. They can be enabled
with `--enable-request-id-headers`.

> Note that enablement of the headers can impact performance significantly at high QPS
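For context on the documented behavior, a hedged client-side sketch (not part of this PR): it assumes a vLLM server launched with `--enable-request-id-headers`, and the base URL, API key, and model name below are placeholders. It sends both ids via the OpenAI Python client's `extra_headers` option and reads the echoed response headers through `with_raw_response`.

```python
# Hedged sketch: assumes a vLLM OpenAI-compatible server started with
# --enable-request-id-headers; URL, key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# with_raw_response exposes the HTTP response headers alongside the parsed body.
raw = client.chat.completions.with_raw_response.create(
    model="my-model",  # placeholder
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "X-Request-Id": "my-request-id",
        "X-Amzn-SageMaker-Inference-Id": "my-inference-id",
    },
)
completion = raw.parse()  # the usual ChatCompletion object
print(raw.headers.get("X-Request-Id"))
print(raw.headers.get("X-Amzn-SageMaker-Inference-Id"))
```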
vllm/entrypoints/openai/api_server.py (13 changes: 10 additions & 3 deletions)
@@ -706,10 +706,17 @@ async def authentication(request: Request, call_next):

@app.middleware("http")
async def add_request_id(request: Request, call_next):
request_id = request.headers.get(
"X-Request-Id") or uuid.uuid4().hex
request_id = request.headers.get("X-Request-Id")
sagemaker_request_id = request.headers.get(
"X-Amzn-SageMaker-Inference-Id")

response = await call_next(request)
response.headers["X-Request-Id"] = request_id

response.headers["X-Request-Id"] = request_id or uuid.uuid4().hex
if sagemaker_request_id is not None:
response.headers[
"X-Amzn-SageMaker-Inference-Id"] = sagemaker_request_id

return response

for middleware in args.middleware:
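To illustrate the middleware change above, a standalone sketch (an illustration, not vLLM's actual server code) that reproduces the same header handling on a bare FastAPI app and checks the round trip with FastAPI's TestClient:

```python
# Standalone illustration of the middleware behavior; not vLLM code.
import uuid

from fastapi import FastAPI, Request
from fastapi.testclient import TestClient

app = FastAPI()


@app.middleware("http")
async def add_request_id(request: Request, call_next):
    request_id = request.headers.get("X-Request-Id")
    sagemaker_request_id = request.headers.get("X-Amzn-SageMaker-Inference-Id")

    response = await call_next(request)

    # Echo the caller's request id (or mint one); echo the SageMaker
    # inference id only when the caller supplied it.
    response.headers["X-Request-Id"] = request_id or uuid.uuid4().hex
    if sagemaker_request_id is not None:
        response.headers["X-Amzn-SageMaker-Inference-Id"] = sagemaker_request_id
    return response


@app.get("/ping")
async def ping():
    return {"status": "ok"}


client = TestClient(app)
resp = client.get("/ping", headers={"X-Amzn-SageMaker-Inference-Id": "abc-123"})
assert resp.headers["X-Amzn-SageMaker-Inference-Id"] == "abc-123"
assert "X-Request-Id" in resp.headers  # always present, generated if not sent
```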
vllm/entrypoints/openai/serving_engine.py (4 changes: 3 additions & 1 deletion)
@@ -504,7 +504,9 @@ def _base_request_id(raw_request: Optional[Request],
    if raw_request is None:
        return default

-    return raw_request.headers.get("X-Request-Id", default)
+    return (raw_request.headers.get("X-Request-Id")
+            or raw_request.headers.get("X-Amzn-SageMaker-Inference-Id")
+            or default)

@staticmethod
def _get_decoded_token(logprob: Logprob,
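The fallback order in `_base_request_id` can be shown in isolation with a small sketch (a hypothetical helper, not the vLLM method itself): `X-Request-Id` wins, then `X-Amzn-SageMaker-Inference-Id`, then the caller-supplied default.

```python
# Hypothetical helper mirroring the fallback order used above.
from typing import Mapping, Optional


def base_request_id(headers: Mapping[str, str],
                    default: Optional[str] = None) -> Optional[str]:
    return (headers.get("X-Request-Id")
            or headers.get("X-Amzn-SageMaker-Inference-Id")
            or default)


print(base_request_id({"X-Request-Id": "r-1",
                       "X-Amzn-SageMaker-Inference-Id": "sm-1"}))  # r-1
print(base_request_id({"X-Amzn-SageMaker-Inference-Id": "sm-1"}))  # sm-1
print(base_request_id({}, default="generated-uuid"))               # generated-uuid
```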