Default to streaming mode #552

Merged · 1 commit · Mar 20, 2025
config/charts/inferencepool/README.md (13 additions, 3 deletions)

@@ -5,17 +5,27 @@ A chart to deploy an InferencePool and a corresponding EndpointPicker (epp) deployment.

 ## Install

-To install an InferencePool named `pool-1` that selects from endpoints with label `app: vllm-llama2-7b` and listening on port `8000`, you can run the following command:
+To install an InferencePool named `vllm-llama2-7b` that selects from endpoints with label `app: vllm-llama2-7b` listening on port `8000`, you can run the following command:

 ```txt
-$ helm install pool-1 ./config/charts/inferencepool \
---set inferencePool.name=pool-1 \
+$ helm install vllm-llama2-7b ./config/charts/inferencepool \
+--set inferencePool.name=vllm-llama2-7b \
 --set inferencePool.selector.app=vllm-llama2-7b \
 --set inferencePool.targetPortNumber=8000
 ```

 where `inferencePool.targetPortNumber` is the port on which the vLLM backends serve requests and `inferencePool.selector` is the label selector used to match the vLLM backend pods.
+
+To install via the latest published chart in staging (`--version v0` indicates the latest dev version), you can run the following command:
+
+```txt
+$ helm install vllm-llama2-7b \
+--set inferencePool.name=vllm-llama2-7b \
+--set inferencePool.selector.app=vllm-llama2-7b \
+--set inferencePool.targetPortNumber=8000 \
+oci://us-central1-docker.pkg.dev/k8s-staging-images/gateway-api-inference-extension/charts/inferencepool --version v0
+```

 ## Uninstall

 Run the following command to uninstall the chart:
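For orientation, the three `--set` values in the install command above map onto the rendered InferencePool roughly as sketched below. The `apiVersion`, the `extensionRef` field, and the `vllm-llama2-7b-epp` service name are assumptions about the chart's conventions, not values quoted from it:

```yaml
apiVersion: inference.networking.x-k8s.io/v1alpha2   # assumed API version
kind: InferencePool
metadata:
  name: vllm-llama2-7b                 # from inferencePool.name
spec:
  targetPortNumber: 8000               # port the vLLM backends listen on
  selector:
    app: vllm-llama2-7b                # label selector matching the backend pods
  extensionRef:
    name: vllm-llama2-7b-epp           # EndpointPicker service (name assumed)
```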
config/charts/inferencepool/templates/inferencepool.yaml (3 additions, 0 deletions)

@@ -49,6 +49,9 @@ spec:
 - "9003"
 - -metricsPort
 - "9090"
+env:
+- name: USE_STREAMING
+  value: "true"
 ports:
 - name: grpc
   containerPort: 9002
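The template change bakes `USE_STREAMING=true` into the epp container of the chart. A quick sanity check is to render the chart locally and look for the new variable; this is a sketch that assumes a local checkout of the repository:

```txt
$ helm template vllm-llama2-7b ./config/charts/inferencepool \
    --set inferencePool.name=vllm-llama2-7b \
    --set inferencePool.selector.app=vllm-llama2-7b \
    --set inferencePool.targetPortNumber=8000 \
  | grep -A1 USE_STREAMING
```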
config/manifests/gateway/patch_policy.yaml (31 additions, 31 deletions)

@@ -54,37 +54,37 @@ spec:
     op: replace
     path: "/virtual_hosts/0/routes/0/route/cluster"
     value: original_destination_cluster
-# Uncomment the below to enable full duplex streaming
-# - type: "type.googleapis.com/envoy.config.listener.v3.Listener"
-#   name: "default/inference-gateway/llm-gw"
-#   operation:
-#     op: add
-#     path: "/default_filter_chain/filters/0/typed_config/http_filters/0/typed_config/processing_mode/request_body_mode"
-#     value: FULL_DUPLEX_STREAMED
-# - type: "type.googleapis.com/envoy.config.listener.v3.Listener"
-#   name: "default/inference-gateway/llm-gw"
-#   operation:
-#     op: add
-#     path: "/default_filter_chain/filters/0/typed_config/http_filters/0/typed_config/processing_mode/request_trailer_mode"
-#     value: SEND
-# - type: "type.googleapis.com/envoy.config.listener.v3.Listener"
-#   name: "default/inference-gateway/llm-gw"
-#   operation:
-#     op: add
-#     path: "/default_filter_chain/filters/0/typed_config/http_filters/0/typed_config/processing_mode/response_body_mode"
-#     value: FULL_DUPLEX_STREAMED
-# - type: "type.googleapis.com/envoy.config.listener.v3.Listener"
-#   name: "default/inference-gateway/llm-gw"
-#   operation:
-#     op: replace
-#     path: "/default_filter_chain/filters/0/typed_config/http_filters/0/typed_config/processing_mode/response_trailer_mode"
-#     value: SEND
-# - type: "type.googleapis.com/envoy.config.listener.v3.Listener"
-#   name: "default/inference-gateway/llm-gw"
-#   operation:
-#     op: replace
-#     path: "/default_filter_chain/filters/0/typed_config/http_filters/0/typed_config/processing_mode/response_header_mode"
-#     value: SEND
+# Comment the below to disable full duplex streaming
+- type: "type.googleapis.com/envoy.config.listener.v3.Listener"
+  name: "default/inference-gateway/llm-gw"
+  operation:
+    op: add
+    path: "/default_filter_chain/filters/0/typed_config/http_filters/0/typed_config/processing_mode/request_body_mode"
+    value: FULL_DUPLEX_STREAMED
+- type: "type.googleapis.com/envoy.config.listener.v3.Listener"
+  name: "default/inference-gateway/llm-gw"
+  operation:
+    op: add
+    path: "/default_filter_chain/filters/0/typed_config/http_filters/0/typed_config/processing_mode/request_trailer_mode"
+    value: SEND
+- type: "type.googleapis.com/envoy.config.listener.v3.Listener"
+  name: "default/inference-gateway/llm-gw"
+  operation:
+    op: add
+    path: "/default_filter_chain/filters/0/typed_config/http_filters/0/typed_config/processing_mode/response_body_mode"
+    value: FULL_DUPLEX_STREAMED
+- type: "type.googleapis.com/envoy.config.listener.v3.Listener"
+  name: "default/inference-gateway/llm-gw"
+  operation:
+    op: replace
+    path: "/default_filter_chain/filters/0/typed_config/http_filters/0/typed_config/processing_mode/response_trailer_mode"
+    value: SEND
+- type: "type.googleapis.com/envoy.config.listener.v3.Listener"
+  name: "default/inference-gateway/llm-gw"
+  operation:
+    op: replace
+    path: "/default_filter_chain/filters/0/typed_config/http_filters/0/typed_config/processing_mode/response_header_mode"
+    value: SEND
 ---
 apiVersion: gateway.envoyproxy.io/v1alpha1
 kind: EnvoyExtensionPolicy
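Taken together, these patches switch the ext_proc filter's body handling from buffered to full duplex streaming. Hand-assembling only the fields the patches touch, the resulting `processing_mode` should look roughly like the sketch below; `request_header_mode` is not modified here and keeps whatever the base filter configuration sets:

```yaml
processing_mode:
  request_body_mode: FULL_DUPLEX_STREAMED
  request_trailer_mode: SEND
  response_header_mode: SEND
  response_body_mode: FULL_DUPLEX_STREAMED
  response_trailer_mode: SEND
```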
config/manifests/inferencepool.yaml (1 addition, 1 deletion)

@@ -56,7 +56,7 @@ spec:
 - "9003"
 env:
 - name: USE_STREAMING
-  value: "false"
+  value: "true"
 ports:
 - containerPort: 9002
 - containerPort: 9003
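With both the chart template and the reference manifest now setting `USE_STREAMING` to `"true"`, streaming becomes the default; reverting to the buffered path means flipping the value back to `"false"`. Once the epp is deployed, one rough way to check which mode it is running in is to inspect the deployed environment, assuming only the epp deployment defines this variable:

```txt
$ kubectl get deployments -A -o yaml | grep -B1 -A1 USE_STREAMING
```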