Commit 9bd136a (1 parent: 7e3cd45)

Add README.md file to the epp pkg (#386)

* Polish the epp README.md file
* Addressed comments

File tree: 3 files changed (+24 −5 lines)
File renamed without changes.

pkg/epp/README.md (+24)
# The EndPoint Picker (EPP)

This package provides the reference implementation of the Endpoint Picker (EPP). It implements the [extension protocol](../../docs/proposals/003-endpoint-picker-protocol), enabling a proxy or gateway to request endpoint hints from an extension. An EPP instance handles a single `InferencePool`, so a dedicated EPP deployment must be created for each `InferencePool`.
The Endpoint Picker performs the following core functions:

- Endpoint Selection
  - The EPP determines the appropriate Pod endpoint for the load balancer (LB) to route requests to.
  - It selects from the pool of ready Pods designated by the assigned `InferencePool`'s [Selector](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/7e3cd457cdcd01339b65861c8e472cf27e6b6e80/api/v1alpha1/inferencepool_types.go#L53) field.
  - Endpoint selection is contingent on the request's ModelName matching an `InferenceModel` that references the `InferencePool`.
  - Requests with unmatched ModelName values trigger an error response to the proxy.
- Traffic Splitting and ModelName Rewriting
  - The EPP facilitates controlled rollouts of new adapter versions by splitting traffic between adapters within the same `InferencePool`, as defined by the `InferenceModel`.
  - The EPP rewrites the model name in the request to the [target model name](https://github.com/kubernetes-sigs/gateway-api-inference-extension/blob/7e3cd457cdcd01339b65861c8e472cf27e6b6e80/api/v1alpha1/inferencemodel_types.go#L161) as defined on the `InferenceModel` object.
- Observability
  - The EPP generates metrics to enhance observability.
  - It reports `InferenceModel`-level metrics, further broken down by target model.
  - Detailed information about metrics can be found on the [website](https://gateway-api-inference-extension.sigs.k8s.io/guides/metrics/).
## The scheduling algorithm

The scheduling package implements request scheduling algorithms for load balancing requests across backend pods in an inference gateway. The scheduler ensures efficient resource utilization while maintaining low latency and prioritizing critical requests. It applies a series of filters based on metrics and heuristics to select the best pod for a given request. The following flowchart summarizes the current scheduling algorithm.
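The filter-chain idea can be sketched as follows. This is a rough illustration under assumed names: `podMetrics`, `lowQueueFilter`, and `leastKVCacheFilter` are invented for this sketch and do not match the actual scheduling package's API; the real scheduler tracks richer metrics and heuristics.

```go
package main

import "fmt"

// podMetrics is a hypothetical snapshot of a backend pod's state.
type podMetrics struct {
	Name       string
	QueueDepth int
	KVCacheUse float64
}

// filter narrows a candidate pod list; the scheduler applies a series
// of such filters and picks from whatever candidates survive.
type filter func(pods []podMetrics) []podMetrics

// lowQueueFilter keeps only pods whose request queue is below max.
func lowQueueFilter(max int) filter {
	return func(pods []podMetrics) []podMetrics {
		var out []podMetrics
		for _, p := range pods {
			if p.QueueDepth < max {
				out = append(out, p)
			}
		}
		return out
	}
}

// leastKVCacheFilter keeps the pod(s) with the lowest KV-cache utilization.
func leastKVCacheFilter(pods []podMetrics) []podMetrics {
	if len(pods) == 0 {
		return pods
	}
	best := pods[0].KVCacheUse
	for _, p := range pods[1:] {
		if p.KVCacheUse < best {
			best = p.KVCacheUse
		}
	}
	var out []podMetrics
	for _, p := range pods {
		if p.KVCacheUse == best {
			out = append(out, p)
		}
	}
	return out
}

// schedule runs the filter chain in order and returns a selected pod.
func schedule(pods []podMetrics, chain []filter) (string, bool) {
	for _, f := range chain {
		pods = f(pods)
	}
	if len(pods) == 0 {
		return "", false
	}
	return pods[0].Name, true
}

func main() {
	pods := []podMetrics{
		{Name: "pod-a", QueueDepth: 12, KVCacheUse: 0.4},
		{Name: "pod-b", QueueDepth: 3, KVCacheUse: 0.7},
		{Name: "pod-c", QueueDepth: 5, KVCacheUse: 0.2},
	}
	chain := []filter{lowQueueFilter(10), leastKVCacheFilter}
	if name, ok := schedule(pods, chain); ok {
		fmt.Println(name) // pod-c: queue below threshold, lowest KV-cache use
	}
}
```

Composing small filters this way keeps each heuristic independently testable and lets the chain be reordered or extended without touching the selection loop.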
### Flowchart

<img src="../../docs/scheduler-flowchart.png" alt="Scheduling Algorithm" width="400" />

pkg/scheduling.md (−5)

This file was deleted.
