Update extension-policy to match the new epp service name #522

Merged · 1 commit · Mar 17, 2025
32 changes: 0 additions & 32 deletions config/manifests/gateway/extension_policy.yaml

This file was deleted.

6 changes: 3 additions & 3 deletions config/manifests/inferencemodel.yaml
@@ -6,7 +6,7 @@ spec:
  modelName: tweet-summary
  criticality: Critical
  poolRef:
-    name: my-pool
+    name: vllm-llama2-7b
  targetModels:
  - name: tweet-summary-1
    weight: 100
@@ -20,7 +20,7 @@ spec:
  modelName: meta-llama/Llama-2-7b-hf
  criticality: Critical
  poolRef:
-    name: my-pool
+    name: vllm-llama2-7b

---
apiVersion: inference.networking.x-k8s.io/v1alpha2
@@ -31,4 +31,4 @@ spec:
  modelName: Qwen/Qwen2.5-1.5B-Instruct
  criticality: Critical
  poolRef:
-    name: my-pool
+    name: vllm-llama2-7b
33 changes: 33 additions & 0 deletions config/manifests/inferencepool.yaml
@@ -75,6 +75,39 @@ spec:
initialDelaySeconds: 5
periodSeconds: 10
---
+apiVersion: gateway.envoyproxy.io/v1alpha1
Author (contributor) comment on this line: "I moved this here into the inferencepool yaml because it references the epp service." (A sketch of that epp Service follows this diff.)

+kind: EnvoyExtensionPolicy
+metadata:
+  name: ext-proc-policy
+  namespace: default
+spec:
+  extProc:
+  - backendRefs:
+    - group: ""
+      kind: Service
+      name: vllm-llama2-7b-epp
+      port: 9002
+    processingMode:
+      allowModeOverride: true
+      request:
+        body: Buffered
+      response:
+    # The timeouts are likely not needed here. We can experiment with removing/tuning them slowly.
+    # The connection limits are more important and will cause the opaque: ext_proc_gRPC_error_14 error in Envoy GW if not configured correctly.
+    messageTimeout: 1000s
+    backendSettings:
+      circuitBreaker:
+        maxConnections: 40000
+        maxPendingRequests: 40000
+        maxParallelRequests: 40000
+      timeout:
+        tcp:
+          connectTimeout: 24h
+  targetRef:
+    group: gateway.networking.k8s.io
+    kind: HTTPRoute
+    name: llm-route
+---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
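
For context, the EnvoyExtensionPolicy backendRef above points at the endpoint-picker (epp) Service that now lives alongside it in inferencepool.yaml. Below is a minimal sketch of what that Service could look like; only the name vllm-llama2-7b-epp and port 9002 come from the policy above, while the selector, targetPort, and type are assumptions, not copied from this PR.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vllm-llama2-7b-epp   # matches backendRefs.name in the policy above
  namespace: default
spec:
  selector:
    app: vllm-llama2-7b-epp  # assumed label on the epp Deployment's pods
  ports:
  - protocol: TCP
    port: 9002               # the port the EnvoyExtensionPolicy targets
    targetPort: 9002         # assumed container port of the ext-proc gRPC server
  type: ClusterIP
```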
3 changes: 1 addition & 2 deletions site-src/guides/index.md
@@ -88,7 +88,6 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
### Deploy Envoy Gateway Custom Policies

```bash
-kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/extension_policy.yaml
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/patch_policy.yaml
```
> **_NOTE:_** This is also per InferencePool, and will need to be configured to support the new pool should you wish to experiment further.
@@ -125,7 +124,7 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/traffic_policy.yaml --ignore-not-found
kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/extension_policy.yaml --ignore-not-found
kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/patch_policy.yaml --ignore-not-found
-kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/ext_proc.yaml --ignore-not-found
+kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/inferencepool.yaml --ignore-not-found
kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/gateway.yaml --ignore-not-found
kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/gateway/enable_patch_policy.yaml --ignore-not-found
kubectl delete -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/inferencemodel.yaml --ignore-not-found
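
The quickstart NOTE above points out that the extension policy is per InferencePool. In the manifest added by this PR, the pool-specific pieces are the epp Service backendRef and the HTTPRoute targetRef (plus the policy's own metadata.name, which must be unique per namespace). Below is a hedged fragment showing what you would adjust for a hypothetical second pool (the names are placeholders, not part of this PR):

```yaml
# Hypothetical fragment only: the pool-specific fields of an EnvoyExtensionPolicy
# for a second InferencePool. All names are placeholders.
spec:
  extProc:
  - backendRefs:
    - group: ""
      kind: Service
      name: my-second-pool-epp    # placeholder: that pool's endpoint-picker Service
      port: 9002
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: my-second-pool-route    # placeholder: the HTTPRoute serving that pool
```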