Revert name change to make pool name more descriptive. #516

Closed · 1 commit
config/manifests/ext_proc.yaml (3 additions, 3 deletions)

@@ -44,11 +44,11 @@ apiVersion: inference.networking.x-k8s.io/v1alpha2
 kind: InferencePool
 metadata:
   labels:
-  name: my-pool
+  name: vllm-llama2-7b-pool
 spec:
   targetPortNumber: 8000
   selector:
-    app: my-pool
+    app: vllm-llama2-7b-pool
   extensionRef:
     name: inference-gateway-ext-proc
 ---
@@ -75,7 +75,7 @@ spec:
         imagePullPolicy: Always
         args:
         - -poolName
-        - "my-pool"
+        - "vllm-llama2-7b-pool"
         - -v
         - "4"
         - -grpcPort
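For context, the pool name appears in three coupled places in this manifest: the InferencePool's metadata.name, its spec.selector, and the -poolName argument handed to the endpoint-picker Deployment. All three must use the same string, since the selector is what matches the vLLM pods and -poolName is how the extension finds the pool. A minimal sketch of the renamed pool, assuming only the fields visible in the hunks above:

apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: vllm-llama2-7b-pool        # same string the endpoint picker receives via -poolName
spec:
  targetPortNumber: 8000
  selector:
    app: vllm-llama2-7b-pool       # must match the vLLM Deployment's pod labels
  extensionRef:
    name: inference-gateway-ext-proc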
config/manifests/inferencemodel.yaml (3 additions, 3 deletions)

@@ -6,7 +6,7 @@ spec:
   modelName: tweet-summary
   criticality: Critical
   poolRef:
-    name: my-pool
+    name: vllm-llama2-7b-pool
   targetModels:
   - name: tweet-summary-1
     weight: 100
@@ -20,7 +20,7 @@ spec:
   modelName: meta-llama/Llama-2-7b-hf
   criticality: Critical
   poolRef:
-    name: my-pool
+    name: vllm-llama2-7b-pool
 
 ---
 apiVersion: inference.networking.x-k8s.io/v1alpha2
@@ -31,4 +31,4 @@ spec:
   modelName: Qwen/Qwen2.5-1.5B-Instruct
   criticality: Critical
   poolRef:
-    name: my-pool
+    name: vllm-llama2-7b-pool
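For context, an InferenceModel attaches to a pool through spec.poolRef.name, which must equal the InferencePool's metadata.name exactly; that is why all three poolRef entries in this file change together. A minimal sketch of one full resource, with a hypothetical metadata.name since the hunks do not show it:

apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: tweet-summary           # hypothetical resource name, not visible in the hunk above
spec:
  modelName: tweet-summary
  criticality: Critical
  poolRef:
    name: vllm-llama2-7b-pool   # must equal the InferencePool's metadata.name
  targetModels:
  - name: tweet-summary-1
    weight: 100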
config/manifests/vllm/cpu-deployment.yaml (3 additions, 3 deletions)

@@ -1,16 +1,16 @@
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: my-pool
+  name: vllm-llama2-7b-pool
 spec:
   replicas: 3
   selector:
     matchLabels:
-      app: my-pool
+      app: vllm-llama2-7b-pool
   template:
     metadata:
       labels:
-        app: my-pool
+        app: vllm-llama2-7b-pool
     spec:
       containers:
       - name: lora
config/manifests/vllm/gpu-deployment.yaml (3 additions, 3 deletions)

@@ -1,16 +1,16 @@
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: my-pool
+  name: vllm-llama2-7b-pool
 spec:
   replicas: 3
   selector:
     matchLabels:
-      app: my-pool
+      app: vllm-llama2-7b-pool
   template:
     metadata:
       labels:
-        app: my-pool
+        app: vllm-llama2-7b-pool
     spec:
       containers:
       - name: lora
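For context, these two Deployments (the CPU and GPU variants) run the model servers the pool selects, so the rename lands in three places per file: the Deployment name, spec.selector.matchLabels, and the pod template labels. The last of these is also what the InferencePool's spec.selector matches. A minimal sketch of the label plumbing, keeping only the fields shown above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-llama2-7b-pool        # conventionally matches the pool name, though labels do the matching
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vllm-llama2-7b-pool     # Deployment-to-pod matching
  template:
    metadata:
      labels:
        app: vllm-llama2-7b-pool   # matched by both the Deployment and the InferencePool selector
    spec:
      containers:
      - name: lora
        image: vllm/vllm-openai    # hypothetical image; the hunk elides container details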
test/e2e/epp/e2e_suite_test.go (1 addition, 1 deletion)

@@ -57,7 +57,7 @@ const (
 	// TODO [danehans]: Must be "default" until https://github.com/kubernetes-sigs/gateway-api-inference-extension/issues/227 is fixed
 	nsName = "default"
 	// modelServerName is the name of the model server test resources.
-	modelServerName = "my-pool"
+	modelServerName = "vllm-llama2-7b-pool"
 	// modelName is the test model name.
 	modelName = "tweet-summary"
 	// envoyName is the name of the envoy proxy test resources.