pkg/README.md (+14 −16)
@@ -6,29 +6,30 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
 - Envoy Gateway [v1.2.1](https://gateway.envoyproxy.io/docs/install/install-yaml/#install-with-yaml) or higher
 - A cluster that has built-in support for `ServiceType=LoadBalancer`. (This can be validated by ensuring your Envoy Gateway is up and running)
   - For example, with Kind, you can follow these steps: https://kind.sigs.k8s.io/docs/user/loadbalancer
+- 3 GPUs to run the vLLM deployment. Adjust the number of replicas as needed.
 
 ### Steps
 
-1. **Deploy Sample vLLM Application**
+1. **Install the Inference Extension CRDs:**
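A quick sanity check for this new first step (the `kubectl apply -f config/crd/bases` command itself appears further down in this hunk): after the apply, the new CRDs should be listed by the API server. A minimal sketch; the grep pattern assumes the CRDs are registered under an inference-related API group.

```bash
# Confirm the Inference Extension CRDs registered with the cluster; the
# group name is an assumption based on the InferenceModel/InferencePool kinds.
kubectl get crds | grep -i inference
```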
 
-   Create a Hugging Face secret to download the model [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf). Ensure that the token grants access to this model.
-   Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
-   ```bash
-   kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN # Your Hugging Face Token with access to Llama2
    Create a Hugging Face secret to download the model [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf). Ensure that the token grants access to this model.
 
-   ```sh
-   kubectl apply -f config/crd/bases
+   Replace `$HF_TOKEN` in `./manifests/vllm/deployment.yaml` with your Hugging Face secret and then deploy the sample vLLM deployment.
+   ```bash
+   kubectl apply -f ./manifests/vllm/deployment.yaml
    ```
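On the `$HF_TOKEN` replacement introduced above: the edit can be scripted rather than done by hand. A sketch, assuming the manifest contains the literal placeholder `$HF_TOKEN` and that the token is exported in your shell; adjust if the file wires the token differently.

```bash
# Substitute the placeholder in place of manual editing, then apply.
# Assumes $HF_TOKEN is exported and appears verbatim in the manifest.
sed "s|\$HF_TOKEN|${HF_TOKEN}|g" ./manifests/vllm/deployment.yaml | kubectl apply -f -
```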
 
-1. **Deploy InferenceModel and InferencePool**
+1. **Deploy InferenceModel**
 
-   Deploy a sample InferenceModel and InferencePool configuration based on the vLLM deployments mentioned above.
+   Deploy a sample InferenceModel configuration based on the vLLM deployments mentioned above.
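For readers unfamiliar with the resource, here is a minimal sketch of what such an InferenceModel configuration might contain. The API group matches the project, but the concrete names and fields are assumptions; the repo's sample manifest is the source of truth.

```bash
# Hypothetical InferenceModel: maps a client-facing model name onto an
# InferencePool. Names below are placeholders, not the repo's manifest.
kubectl apply -f - <<EOF
apiVersion: inference.networking.x-k8s.io/v1alpha1
kind: InferenceModel
metadata:
  name: inferencemodel-sample
spec:
  modelName: meta-llama/Llama-2-7b-hf   # model name that requests will ask for
  poolRef:
    name: vllm-llama2-7b-pool           # hypothetical InferencePool name
EOF
```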
 1. **Update Envoy Gateway Config to enable Patch Policy**
@@ -46,11 +47,8 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
    kubectl apply -f ./manifests/gateway/gateway.yaml
    ```
 > **_NOTE:_** This file couples together the gateway infra and the HTTPRoute infra for a convenient, quick startup. Creating additional/different InferencePools on the same gateway will require an additional set of: `Backend`, `HTTPRoute`, the resources included in the `./manifests/gateway/ext-proc.yaml` file, and an additional `./manifests/gateway/patch_policy.yaml` file. ***Should you choose to experiment, familiarity with xDS and Envoy is very useful.***
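To make the NOTE concrete: each extra InferencePool on the shared gateway needs its own route wired to its own backend. A rough sketch with hypothetical names throughout (`inference-gateway`, `pool-two-backend`); `Backend` is Envoy Gateway's extension resource, and the real setup also needs the ext-proc and patch-policy resources the NOTE lists.

```bash
# Illustrative HTTPRoute for a second pool on the same gateway.
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: pool-two-route
spec:
  parentRefs:
    - name: inference-gateway        # hypothetical shared Gateway name
  rules:
    - backendRefs:
        - group: gateway.envoyproxy.io
          kind: Backend              # Envoy Gateway extension backend
          name: pool-two-backend     # hypothetical Backend for pool two
EOF
```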
-
-
-
 
-1. **Deploy Ext-Proc**
+1. **Deploy the Inference Extension and InferencePool**
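Since the renamed step now covers the InferencePool as well, a sketch of that resource may help orient readers. Field names here are assumptions against the project's alpha API, not the shipped manifests.

```bash
# Hypothetical InferencePool grouping the vLLM pods; the selector label and
# port are assumptions, not taken from the repo's deployment.
kubectl apply -f - <<EOF
apiVersion: inference.networking.x-k8s.io/v1alpha1
kind: InferencePool
metadata:
  name: vllm-llama2-7b-pool
spec:
  targetPortNumber: 8000      # port the vLLM server listens on
  selector:
    app: vllm-llama2-7b       # label assumed on the vLLM deployment's pods
EOF
```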