# Benchmark

This user guide shows how to run benchmarks against a vLLM deployment, using both the Gateway API
inference extension and a plain Kubernetes Service as the load balancing strategy. The
benchmark uses the [Latency Profile Generator](https://github.com/AI-Hypercomputer/inference-benchmark) (LPG)
tool to generate load and collect results.
| 7 | + |
## Prerequisites

### Deploy the inference extension and sample model server

Follow the [getting started guide](https://gateway-api-inference-extension.sigs.k8s.io/guides/) to deploy the
sample vLLM application and the inference extension.
| 14 | + |
### [Optional] Scale the sample vLLM deployment

You are more likely to see the benefits of the inference extension when there are enough replicas for it to make meaningful routing decisions.

```bash
kubectl scale deployment my-pool --replicas=8
```
| 22 | + |
### Expose the model server via a k8s service

As the baseline, also expose the vLLM deployment as a regular k8s service by applying the yaml:

```bash
kubectl apply -f ./manifests/ModelServerService.yaml
```
| 30 | + |
## Run benchmark

### Run benchmark using the inference extension as the load balancing strategy

1. Get the gateway IP:

    ```bash
    IP=$(kubectl get gateway/inference-gateway -o jsonpath='{.status.addresses[0].value}')
    echo "Update the <gateway-ip> in ./manifests/BenchmarkInferenceExtension.yaml to: $IP"
    ```

1. Update the `<gateway-ip>` in `./manifests/BenchmarkInferenceExtension.yaml` to the IP
   of the gateway. Feel free to adjust other parameters, such as `request_rates`, as well.

1. Start the benchmark tool: `kubectl apply -f ./manifests/BenchmarkInferenceExtension.yaml`

1. Wait for the benchmark to finish, then download the results. Use the `benchmark_id` environment
   variable to label what this benchmark is for. In this case, the result is for the
   `inference-extension`; you can use any id you like.

    ```bash
    benchmark_id='inference-extension' ./download-benchmark-results.bash
    ```

1. After the script finishes, you should see the benchmark results under the `./output/default-run/inference-extension/results/json` folder.
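The manual placeholder edit above can also be scripted; a minimal sketch, assuming GNU `sed` (on macOS/BSD use `sed -i ''`) and that `IP` was captured in the first step:

```shell
# Fill in the <gateway-ip> placeholder with the captured gateway IP.
# Falls back to a documentation IP so the sketch runs standalone.
IP="${IP:-203.0.113.10}"
MANIFEST=./manifests/BenchmarkInferenceExtension.yaml
# Only rewrite when the manifest is present (GNU sed in-place syntax).
if [ -f "$MANIFEST" ]; then
  sed -i "s/<gateway-ip>/${IP}/g" "$MANIFEST"
fi
```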
| 56 | + |
### Run benchmark using k8s service as the load balancing strategy

1. Get the service IP:

    ```bash
    IP=$(kubectl get service/my-pool-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "Update the <svc-ip> in ./manifests/BenchmarkK8sService.yaml to: $IP"
    ```

1. Update the `<svc-ip>` in `./manifests/BenchmarkK8sService.yaml` to the IP
   of the service. Feel free to adjust other parameters, such as `request_rates`, as well.

1. Start the benchmark tool: `kubectl apply -f ./manifests/BenchmarkK8sService.yaml`

1. Wait for the benchmark to finish, then download the results.

    ```bash
    benchmark_id='k8s-svc' ./download-benchmark-results.bash
    ```

1. After the script finishes, you should see the benchmark results under the `./output/default-run/k8s-svc/results/json` folder.
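With both runs downloaded, a quick sanity check that the result files landed where expected (a sketch; the paths assume the default run id and the two `benchmark_id` values used above):

```shell
# List the downloaded result files for each benchmark id; print a
# notice when a directory has not been populated yet.
for id in inference-extension k8s-svc; do
  dir="./output/default-run/${id}/results/json"
  if [ -d "$dir" ]; then
    echo "== ${id} =="
    ls "$dir"
  else
    echo "No results yet for ${id} (expected ${dir})"
  fi
done
```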
| 78 | + |
### Tips

* You can set the `run_id="runX"` environment variable when running the `./download-benchmark-results.bash` script.
  This is useful when you run benchmarks multiple times and want to group the results accordingly.
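For example, to group a repeated run under its own id (a sketch; the guard makes it a no-op when the script is not in the current directory):

```shell
# Download results for a second benchmark run, grouped under run_id "run2".
if [ -x ./download-benchmark-results.bash ]; then
  run_id="run2" benchmark_id='inference-extension' ./download-benchmark-results.bash
fi
```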
| 83 | + |
## Analyze the results

This section shows how to run the Jupyter notebook using VS Code.

1. Create a Python virtual environment:

    ```bash
    python3 -m venv .venv
    source .venv/bin/activate
    ```

1. Install the dependencies:

    ```bash
    pip install -r requirements.txt
    ```

1. Open the notebook `Inference_Extension_Benchmark.ipynb` and run each cell. At the end you should
   see a bar chart like the one below:

![alt text](example-bar-chart.png)