Commit 1d2bedc

rebase

Signed-off-by: Nir Rozenbaum <[email protected]>

1 parent d3d0dff · commit 1d2bedc
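
The diff below deletes Git conflict markers (`<<<<<<< HEAD`, `=======`, `>>>>>>> cb83786`) that an earlier rebase left behind in `site-src/guides/index.md`, keeping a single copy of the duplicated CPU guidance. As a rough sketch of the usual cleanup flow, not this commit's exact commands (the upstream branch name is an assumption):

```shell
# Sketch only: typical cleanup when a rebase leaves conflict markers behind.
# "upstream/main" is an assumed branch name, not taken from this commit.
git rebase upstream/main
# Git pauses at the conflict; edit the file and delete the <<<<<<< HEAD,
# =======, and >>>>>>> markers, keeping exactly one copy of the text.
git add site-src/guides/index.md
git rebase --continue
```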

File tree

1 file changed: +1 −10 lines

site-src/guides/index.md (+1 −10)
````diff
@@ -35,17 +35,8 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
 kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/gpu-deployment.yaml
 ```
 
-<<<<<<< HEAD
+
 === "CPU-Based Model Server"
-=======
-This setup is using the formal `vllm-cpu` image, which according to the documentation can run vLLM on x86 CPU platform.
-For this setup, we use approximately 9.5GB of memory and 12 CPUs for each replica.
-While it is possible to deploy the model server with less resources, this is not recommended.
-For example, in our tests, loading the model using 8GB of memory and 1 CPU was possible but took almost 3.5 minutes and inference requests took unreasonable time.
-In general, there is a tradeoff between the memory and CPU we allocate to our pods and the performance. The more memory and CPU we allocate the better performance we can get.
-After running multiple configurations of these values we decided in this sample to use 9.5GB of memory and 12 CPUs for each replica, which gives reasonable response times. You can increase those numbers and potentially may even get better response times.
-For modifying the allocated resources, adjust the numbers in `./config/manifests/vllm/cpu-deployment.yaml` as needed.
->>>>>>> cb83786 (documentation cpu platform)
 
 This setup is using the formal `vllm-cpu` image, which according to the documentation can run vLLM on x86 CPU platform.
 For this setup, we use approximately 9.5GB of memory and 12 CPUs for each replica.
````
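
The surviving lines say each replica wants roughly 9.5GB of memory and 12 CPUs, tuned via `./config/manifests/vllm/cpu-deployment.yaml`. A minimal sketch of applying and re-sizing that deployment, assuming the CPU manifest lives at the same path pattern as the GPU one above and that the Deployment is named `vllm-cpu` (neither is confirmed by this diff):

```shell
# Assumed URL: mirrors the gpu-deployment.yaml path quoted in the diff above.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml

# Alternative to editing the YAML by hand: set the ~9.5GB / 12 CPU values on
# the live object. The Deployment name "vllm-cpu" is a guess for illustration.
kubectl set resources deployment vllm-cpu \
  --requests=cpu=12,memory=9.5Gi \
  --limits=cpu=12,memory=9.5Gi
```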
