
Commit 43f3d9e

[CI/Build] Add markdown linter (#11857)
Signed-off-by: Rafael Vasquez <[email protected]>
1 parent b25cfab commit 43f3d9e


49 files changed: +596 / -571 lines

.github/workflows/sphinx-lint.yml renamed to .github/workflows/doc-lint.yml

Lines changed: 2 additions & 2 deletions

@@ -13,7 +13,7 @@ on:
       - "docs/**"

 jobs:
-  sphinx-lint:
+  doc-lint:
    runs-on: ubuntu-latest
    strategy:
      matrix:
@@ -29,4 +29,4 @@ jobs:
          python -m pip install --upgrade pip
          pip install -r requirements-lint.txt
      - name: Linting docs
-        run: tools/sphinx-lint.sh
+        run: tools/doc-lint.sh
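
The renamed workflow now invokes `tools/doc-lint.sh`, which this commit adds but whose contents are not shown in this excerpt. As a rough sketch only, assuming the markdown linter installed via `requirements-lint.txt` is PyMarkdown, such a wrapper might look like this (the tool name and flags are assumptions, not the commit's actual script):

```bash
#!/bin/bash
# Hypothetical sketch of tools/doc-lint.sh; the real script added by this
# commit is not shown above. Assumes PyMarkdown is the linter pulled in
# through requirements-lint.txt.
set -e

# Recursively lint all markdown files under docs/
pymarkdown scan -r docs/
```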

docs/README.md

Lines changed: 1 addition & 0 deletions

@@ -16,4 +16,5 @@ make html
 ```bash
 python -m http.server -d build/html/
 ```
+
 Launch your browser and open localhost:8000.

docs/source/api/model/index.md

Lines changed: 0 additions & 1 deletion

@@ -9,4 +9,3 @@ interfaces_base
 interfaces
 adapters
 ```
-

docs/source/community/sponsors.md

Lines changed: 2 additions & 0 deletions

@@ -6,13 +6,15 @@ vLLM is a community project. Our compute resources for development and testing a
 <!-- Note: Please keep these consistent with README.md. -->

 Cash Donations:
+
 - a16z
 - Dropbox
 - Sequoia Capital
 - Skywork AI
 - ZhenFund

 Compute Resources:
+
 - AMD
 - Anyscale
 - AWS

docs/source/contributing/model/multimodal.md

Lines changed: 4 additions & 0 deletions

@@ -200,6 +200,7 @@ def get_mm_max_tokens_per_item(self, seq_len: int) -> Mapping[str, int]:
 ```{note}
 Our [actual code](gh-file:vllm/model_executor/models/llava.py) is more abstracted to support vision encoders other than CLIP.
 ```
+
 :::
 ::::

@@ -248,6 +249,7 @@ def get_dummy_processor_inputs(
     mm_data=mm_data,
 )
 ```
+
 :::
 ::::

@@ -312,6 +314,7 @@ def _get_mm_fields_config(
 Our [actual code](gh-file:vllm/model_executor/models/llava.py) additionally supports
 pre-computed image embeddings, which can be passed to be model via the `image_embeds` argument.
 ```
+
 :::
 ::::

@@ -369,6 +372,7 @@ def _get_prompt_replacements(
     ),
 ]
 ```
+
 :::
 ::::

docs/source/contributing/overview.md

Lines changed: 0 additions & 2 deletions

@@ -37,8 +37,6 @@ pytest tests/
 Currently, the repository is not fully checked by `mypy`.
 ```

-# Contribution Guidelines
-
 ## Issues

 If you encounter a bug or have a feature request, please [search existing issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue) first to see if it has already been reported. If not, please [file a new issue](https://github.com/vllm-project/vllm/issues/new/choose), providing as much relevant information as possible.

docs/source/deployment/docker.md

Lines changed: 2 additions & 2 deletions

@@ -28,8 +28,8 @@ memory to share data between processes under the hood, particularly for tensor p
 You can build and run vLLM from source via the provided <gh-file:Dockerfile>. To build vLLM:

 ```console
-$ # optionally specifies: --build-arg max_jobs=8 --build-arg nvcc_threads=2
-$ DOCKER_BUILDKIT=1 docker build . --target vllm-openai --tag vllm/vllm-openai
+# optionally specifies: --build-arg max_jobs=8 --build-arg nvcc_threads=2
+DOCKER_BUILDKIT=1 docker build . --target vllm-openai --tag vllm/vllm-openai
 ```

 ```{note}
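
This hunk only drops the `$` prompts from the build command. For context, the resulting `vllm/vllm-openai` image is typically started along the following lines; this is a sketch based on the surrounding Docker docs, not part of this diff, and the model name is just a placeholder:

```console
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 \
    vllm/vllm-openai \
    --model NousResearch/Llama-2-7b-chat-hf
```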

docs/source/deployment/frameworks/cerebrium.md

Lines changed: 5 additions & 5 deletions

@@ -13,14 +13,14 @@ vLLM can be run on a cloud based GPU machine with [Cerebrium](https://www.cerebr
 To install the Cerebrium client, run:

 ```console
-$ pip install cerebrium
-$ cerebrium login
+pip install cerebrium
+cerebrium login
 ```

 Next, create your Cerebrium project, run:

 ```console
-$ cerebrium init vllm-project
+cerebrium init vllm-project
 ```

 Next, to install the required packages, add the following to your cerebrium.toml:
@@ -58,10 +58,10 @@ def run(prompts: list[str], temperature: float = 0.8, top_p: float = 0.95):
 Then, run the following code to deploy it to the cloud:

 ```console
-$ cerebrium deploy
+cerebrium deploy
 ```

-If successful, you should be returned a CURL command that you can call inference against. Just remember to end the url with the function name you are calling (in our case` /run`)
+If successful, you should be returned a CURL command that you can call inference against. Just remember to end the url with the function name you are calling (in our case`/run`)

 ```python
 curl -X POST https://api.cortex.cerebrium.ai/v4/p-xxxxxx/vllm/run \

docs/source/deployment/frameworks/dstack.md

Lines changed: 5 additions & 5 deletions

@@ -13,16 +13,16 @@ vLLM can be run on a cloud based GPU machine with [dstack](https://dstack.ai/),
 To install dstack client, run:

 ```console
-$ pip install "dstack[all]
-$ dstack server
+pip install "dstack[all]
+dstack server
 ```

 Next, to configure your dstack project, run:

 ```console
-$ mkdir -p vllm-dstack
-$ cd vllm-dstack
-$ dstack init
+mkdir -p vllm-dstack
+cd vllm-dstack
+dstack init
 ```

 Next, to provision a VM instance with LLM of your choice (`NousResearch/Llama-2-7b-chat-hf` for this example), create the following `serve.dstack.yml` file for the dstack `Service`:

docs/source/deployment/frameworks/skypilot.md

Lines changed: 6 additions & 6 deletions

@@ -334,12 +334,12 @@ run: |

 1. Start the chat web UI:

-```console
-sky launch -c gui ./gui.yaml --env ENDPOINT=$(sky serve status --endpoint vllm)
-```
+   ```console
+   sky launch -c gui ./gui.yaml --env ENDPOINT=$(sky serve status --endpoint vllm)
+   ```

 2. Then, we can access the GUI at the returned gradio link:

-```console
-| INFO | stdout | Running on public URL: https://6141e84201ce0bb4ed.gradio.live
-```
+   ```console
+   | INFO | stdout | Running on public URL: https://6141e84201ce0bb4ed.gradio.live
+   ```

docs/source/deployment/integrations/llamastack.md

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ vLLM is also available via [Llama Stack](https://github.com/meta-llama/llama-sta
 To install Llama Stack, run

 ```console
-$ pip install llama-stack -q
+pip install llama-stack -q
 ```

 ## Inference using OpenAI Compatible API
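
The file touched above documents inference through an OpenAI-compatible API. For reference only (not part of this diff), querying a locally running vLLM OpenAI-compatible server usually looks something like this; the host, port, and model name are placeholders:

```console
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "NousResearch/Llama-2-7b-chat-hf",
          "prompt": "San Francisco is a",
          "max_tokens": 32
        }'
```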
