site-src/guides/index.md (+19 −19)
This quickstart guide is intended for engineers familiar with k8s and model servers.
### Deploy Sample Model Server
Two options are supported for running the model server:
1. GPU-based model server.
   Requirements: a Hugging Face access token that grants access to the model [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
1. CPU-based model server (not using GPUs).
   Requirements: a Hugging Face access token that grants access to the model [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).

Choose one of these options and follow the steps below. Please do not deploy both, as the deployments share the same name and will overwrite each other.
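The secret-creation commands in both sections below read the token from an `HF_TOKEN` shell variable. A minimal sketch of setting it up, assuming a bash-compatible shell (the value shown is a hypothetical placeholder, substitute your own token):

```bash
# Export your Hugging Face access token so the `kubectl create secret`
# commands below can read it; the value here is a placeholder.
export HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxx
```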
#### GPU-Based Model Server
For this setup, you will need 3 GPUs to run the sample model server. Adjust the number of replicas in `./config/manifests/vllm/gpu-deployment.yaml` as needed.

Create a Hugging Face secret to download the model [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf). Ensure that the token grants access to this model.

Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.

```bash
kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN # Your Hugging Face Token with access to Llama2
```
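The snippet above only creates the secret; applying the sample deployment itself is not shown. A plausible continuation, assuming the manifest path mentioned earlier and a kubectl context pointed at your cluster:

```bash
# Apply the sample vLLM GPU deployment (path taken from the prose above).
kubectl apply -f ./config/manifests/vllm/gpu-deployment.yaml

# Watch the pods until all three replicas report Ready; the initial
# model download can take several minutes.
kubectl get pods -w
```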
#### CPU-Based Model Server

Create a Hugging Face secret to download the model [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). Ensure that the token grants access to this model.

Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.

```bash
kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN # Your Hugging Face Token with access to Qwen
```
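As with the GPU option, creating the secret would be followed by applying the CPU deployment manifest. The path `./config/manifests/vllm/cpu-deployment.yaml` is an assumption that mirrors the GPU manifest naming above; verify it against the repository:

```bash
# Hypothetical manifest path mirroring ./config/manifests/vllm/gpu-deployment.yaml.
kubectl apply -f ./config/manifests/vllm/cpu-deployment.yaml

# Confirm the pod becomes Ready; startup is slower without GPUs.
kubectl get pods -w
```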