diff --git a/docs/source/deployment/frameworks/bentoml.md b/docs/source/deployment/frameworks/bentoml.md
index ea0b5d1d4c9..2bf435bda83 100644
--- a/docs/source/deployment/frameworks/bentoml.md
+++ b/docs/source/deployment/frameworks/bentoml.md
@@ -2,6 +2,6 @@
 
 # BentoML
 
-[BentoML](https://github.com/bentoml/BentoML) allows you to deploy a large language model (LLM) server with vLLM as the backend, which exposes OpenAI-compatible endpoints. You can serve the model locally or containerize it as an OCI-complicant image and deploy it on Kubernetes.
+[BentoML](https://github.com/bentoml/BentoML) allows you to deploy a large language model (LLM) server with vLLM as the backend, which exposes OpenAI-compatible endpoints. You can serve the model locally or containerize it as an OCI-compliant image and deploy it on Kubernetes.
 
 For details, see the tutorial [vLLM inference in the BentoML documentation](https://docs.bentoml.com/en/latest/use-cases/large-language-models/vllm.html).
diff --git a/docs/source/deployment/frameworks/index.md b/docs/source/deployment/frameworks/index.md
index 6a59131d366..964782763f6 100644
--- a/docs/source/deployment/frameworks/index.md
+++ b/docs/source/deployment/frameworks/index.md
@@ -8,6 +8,7 @@ cerebrium
 dstack
 helm
 lws
+modal
 skypilot
 triton
 ```
diff --git a/docs/source/deployment/frameworks/modal.md b/docs/source/deployment/frameworks/modal.md
new file mode 100644
index 00000000000..e7c42088e36
--- /dev/null
+++ b/docs/source/deployment/frameworks/modal.md
@@ -0,0 +1,7 @@
+(deployment-modal)=
+
+# Modal
+
+vLLM can be run on cloud GPUs with [Modal](https://modal.com), a serverless computing platform designed for fast auto-scaling.
+
+For details on how to deploy vLLM on Modal, see [this tutorial in the Modal documentation](https://modal.com/docs/examples/vllm_inference).
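The BentoML page above notes that the deployed server exposes OpenAI-compatible endpoints. As a rough sketch of what that means for a client (the helper function, model name, and base URL here are placeholders chosen for illustration, not values from these docs), a chat-completions request body can be built and sent like this:

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 64) -> dict:
    """Build a request body following the OpenAI chat-completions schema,
    which OpenAI-compatible servers such as these deployments accept."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Placeholder model name; a real deployment would use the served model's ID.
body = build_chat_request("my-model", "Hello!")
print(json.dumps(body))

# The body would be POSTed to <base_url>/v1/chat/completions on the deployed
# server with any HTTP client; the path follows the OpenAI API convention.
```

Because the endpoint follows the OpenAI API shape, existing OpenAI client libraries can usually be pointed at the deployment by overriding the base URL.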