
Commit 3c490d1

Polish the epp README.md file
1 parent 2577f63 commit 3c490d1

File tree

2 files changed: +25 / -5 lines


pkg/epp/README.md

+25
@@ -0,0 +1,25 @@
# The Endpoint Picker (EPP)

This package provides the reference implementation of the Endpoint Picker (EPP). It implements the [extension protocol](../../docs/proposals/003-endpoint-picker-protocol), which lets a proxy request endpoint hints from an extension. In its current implementation, an EPP instance handles a single `InferencePool`, so each `InferencePool` requires a dedicated EPP deployment.
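To make that exchange concrete, here is a minimal sketch of the contract the protocol implies: the proxy hands the EPP the request metadata (notably the ModelName) and gets back an endpoint hint it routes to. The header name, type names, and trivial selection logic below are illustrative assumptions, not the project's actual API; see the protocol proposal for the real contract.

```go
// Sketch only: illustrates the proxy <-> EPP exchange, not the real types.
package main

import "fmt"

// endpointHintHeader is a hypothetical header the proxy reads to choose the
// upstream pod; consult the extension protocol proposal for the real name.
const endpointHintHeader = "x-gateway-destination-endpoint"

// requestMetadata is the subset of the request the EPP needs to pick a pod.
type requestMetadata struct {
	ModelName string // model requested by the client
}

// endpointHint is what the EPP hands back to the proxy.
type endpointHint struct {
	PodAddress string // "ip:port" of the selected ready pod
}

// pickEndpoint stands in for an EPP instance serving one InferencePool.
func pickEndpoint(req requestMetadata, readyPods []string) (endpointHint, error) {
	if len(readyPods) == 0 {
		return endpointHint{}, fmt.Errorf("no ready pods for model %q", req.ModelName)
	}
	// The real EPP applies metrics-based filters (see the scheduling section);
	// this sketch simply takes the first ready pod.
	return endpointHint{PodAddress: readyPods[0]}, nil
}

func main() {
	hint, err := pickEndpoint(requestMetadata{ModelName: "tweet-summary"}, []string{"10.0.0.12:8000"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s\n", endpointHintHeader, hint.PodAddress)
}
```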
The Endpoint Picker performs the following core functions:

- Endpoint Selection
  - The EPP determines the appropriate Pod endpoint for the load balancer (LB) to route requests to.
  - It selects from the pool of ready Pods designated by the assigned `InferencePool`.
  - Endpoint selection is contingent on the request's ModelName matching an `InferenceModel` that references the `InferencePool`.
  - Requests with unmatched ModelName values trigger an error response to the proxy.
  - The endpoint selection algorithm is detailed below.
- Traffic Splitting and ModelName Rewriting
  - The EPP facilitates controlled rollouts of new adapter versions by splitting traffic between adapters within the same `InferencePool`, as defined by the `InferenceModel` (a hedged sketch follows this list).
  - The EPP rewrites the ModelName to the TargetModelName defined in the `InferenceModel` object.
- Observability
  - The EPP generates metrics to enhance observability.
  - It reports InferenceModel-level metrics, further broken down by target model.
  - Detailed information on metrics is available on the [website](https://gateway-api-inference-extension.sigs.k8s.io/guides/metrics/).
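The following is a minimal, hedged sketch of the traffic-splitting and rewrite step described above: a weighted pick among target models followed by a ModelName rewrite. The `targetModel` type, field names, and weights are illustrative stand-ins, not the `InferenceModel` API's actual Go types.

```go
// Sketch only: weighted target selection and ModelName rewriting.
package main

import (
	"fmt"
	"math/rand"
)

// targetModel mirrors the idea of an InferenceModel target: a backend model
// (for example, a LoRA adapter) plus a traffic weight.
type targetModel struct {
	Name   string
	Weight int
}

// rewriteModelName picks a TargetModelName in proportion to the weights and
// returns it as the rewritten ModelName for the backend request.
func rewriteModelName(requestModel string, targets []targetModel) (string, error) {
	total := 0
	for _, t := range targets {
		total += t.Weight
	}
	if total == 0 {
		return "", fmt.Errorf("no targets with non-zero weight for %q", requestModel)
	}
	pick := rand.Intn(total)
	for _, t := range targets {
		if pick < t.Weight {
			return t.Name, nil
		}
		pick -= t.Weight
	}
	return targets[len(targets)-1].Name, nil
}

func main() {
	// Roll out v2 of an adapter by sending it 10% of the traffic.
	targets := []targetModel{{Name: "summarizer-v1", Weight: 90}, {Name: "summarizer-v2", Weight: 10}}
	rewritten, _ := rewriteModelName("summarizer", targets)
	fmt.Println("backend model:", rewritten)
}
```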
## The scheduling algorithm
The scheduling package implements request scheduling algorithms for load balancing requests across backend pods in an inference gateway. The scheduler ensures efficient resource utilization while maintaining low latency and prioritizing critical requests. It applies a series of filters based on metrics and heuristics to select the best pod for a given request. The flowchart below summarizes the current scheduling algorithm, and a hedged code sketch of the filter-chain structure follows it.
### Flowchart
<img src="../../docs/schedular-flowchart.png" alt="Scheduling Algorithm" width="400" />
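
Below is a minimal sketch of the filter-chain idea: each filter narrows the candidate pods using per-pod metrics, falling back to the previous candidate set if a filter would eliminate every pod. The filter names, metric fields, and thresholds are illustrative assumptions, not the scheduling package's actual API.

```go
// Sketch only: a filter chain over per-pod metrics.
package main

import "fmt"

// podMetrics is a stand-in for the per-pod metrics the scheduler tracks.
type podMetrics struct {
	Address      string
	QueueDepth   int
	KVCacheUsage float64 // fraction of KV cache in use, 0.0 to 1.0
}

// filter narrows the candidate set; an empty result means "no suitable pod".
type filter func(pods []podMetrics) []podMetrics

// lowQueue keeps pods whose queue depth is below the threshold.
func lowQueue(threshold int) filter {
	return func(pods []podMetrics) []podMetrics {
		var out []podMetrics
		for _, p := range pods {
			if p.QueueDepth < threshold {
				out = append(out, p)
			}
		}
		return out
	}
}

// leastKVCache keeps the pod with the lowest KV cache usage.
func leastKVCache(pods []podMetrics) []podMetrics {
	if len(pods) == 0 {
		return nil
	}
	best := pods[0]
	for _, p := range pods[1:] {
		if p.KVCacheUsage < best.KVCacheUsage {
			best = p
		}
	}
	return []podMetrics{best}
}

// schedule applies the filters in order, keeping the last non-empty
// candidate set if a filter would eliminate every pod.
func schedule(pods []podMetrics, filters []filter) (podMetrics, error) {
	candidates := pods
	for _, f := range filters {
		next := f(candidates)
		if len(next) == 0 {
			break
		}
		candidates = next
	}
	if len(candidates) == 0 {
		return podMetrics{}, fmt.Errorf("no ready pods")
	}
	return candidates[0], nil
}

func main() {
	pods := []podMetrics{
		{Address: "10.0.0.11:8000", QueueDepth: 3, KVCacheUsage: 0.72},
		{Address: "10.0.0.12:8000", QueueDepth: 1, KVCacheUsage: 0.35},
	}
	best, _ := schedule(pods, []filter{lowQueue(5), leastKVCache})
	fmt.Println("selected pod:", best.Address)
}
```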

pkg/scheduling.md

-5
This file was deleted.
