
Commit 07ee665

docs: update code blocks to use sh syntax highlighting
1 parent: 4d5fef6

File tree

4 files changed: +23 −23 lines


docs/getting_started/installation-guide.md

+17 −17
@@ -10,7 +10,7 @@ This guide walks you through the following steps to install NeMo Guardrails:
 
 ## Prerequisites
 
-Python 3.9, 3.10 or 3.11.
+- Python 3.9, 3.10, or 3.11
 
 ## Additional dependencies
 
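As an aside on the prerequisite in the hunk above, a quick way to confirm that the interpreter is in the supported range (3.9, 3.10, or 3.11) is a one-line version check. This is only an illustrative sketch, not part of the commit:

```sh
# Print the interpreter's major.minor version so it can be checked
# against the supported range (3.9, 3.10, or 3.11).
python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])'
```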
@@ -35,21 +35,21 @@ To experiment with NeMo Guardrails from scratch, use a fresh virtual environment
 
 1. Create a folder, such as *my_assistant*, for your project.
 
-```bash
-> mkdir my_assistant
-> cd my_assistant
+```sh
+mkdir my_assistant
+cd my_assistant
 ```
 
 2. Create a virtual environment.
 
-```bash
-> python3 -m venv venv
+```sh
+python3 -m venv venv
 ```
 
 3. Activate the virtual environment.
 
-```bash
-> source venv/bin/activate
+```sh
+source venv/bin/activate
 ```
 
 ### Setting up a virtual environment on Windows
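Taken together, the commands in the hunk above (in their new, prompt-free form) read as a single setup sequence. The sketch below simply strings them together; it is not itself part of the commit:

```sh
# Create a project folder, then create and activate a fresh
# virtual environment inside it (commands as in the updated docs).
mkdir my_assistant
cd my_assistant
python3 -m venv venv
source venv/bin/activate
```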
@@ -65,8 +65,8 @@ Use the `mkvirtualenv` *name* command to activate a new virtual environment call
 
 Install NeMo Guardrails using **pip**:
 
-```bash
-> pip install nemoguardrails
+```sh
+pip install nemoguardrails
 ```
 
 ## Installing from source code
@@ -75,13 +75,13 @@ NeMo Guardrails is under active development and the main branch always contains
 
 1. Clone the repository:
 
-```
+```sh
 git clone https://github.com/NVIDIA/NeMo-Guardrails.git
 ```
 
 2. Install the package locally:
 
-```
+```sh
 cd NeMo-Guardrails
 pip install -e .
 ```
@@ -98,7 +98,7 @@ The `nemoguardrails` package also defines the following extra dependencies:
 
 To keep the footprint of `nemoguardrails` as small as possible, these are not installed by default. To install any of the extra dependency you can use **pip** as well. For example, to install the `dev` extra dependencies, run the following command:
 
-```bash
+```sh
 > pip install nemoguardrails[dev]
 ```
 
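A side note on the unquoted `nemoguardrails[dev]` spec in the hunk above: in shells such as zsh, square brackets are glob characters, so extras specs are safest when quoted (as the later `nemoguardrails[all]` example does). The sketch below only builds and prints the command rather than contacting PyPI:

```sh
# Quote the extras spec so shells like zsh do not expand the
# brackets as a glob pattern. This only prints the command.
spec="nemoguardrails[dev]"
cmd="pip install \"$spec\""
echo "$cmd"   # prints: pip install "nemoguardrails[dev]"
```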
@@ -107,7 +107,7 @@ To keep the footprint of `nemoguardrails` as small as possible, these are not in
 ```{warning}
 If pip fails to resolve dependencies when running `pip install nemoguardrails[all]`, you should specify additional constraints directly in the `pip install` command.
 
-**Example Command**:
+Example Command:
 
 ```sh
 pip install "nemoguardrails[all]" "pandas>=1.4.0,<3"
@@ -117,9 +117,9 @@ To use OpenAI, just use the `openai` extra dependency that ensures that all requ
 Make sure the `OPENAI_API_KEY` environment variable is set,
 as shown in the following example, where *YOUR_KEY* is your OpenAI key.
 
-```bash
-> pip install nemoguardrails[openai]
-> export OPENAI_API_KEY=YOUR_KEY
+```zsh
+pip install nemoguardrails[openai]
+export OPENAI_API_KEY=YOUR_KEY
 ```
 
 Some NeMo Guardrails LLMs and features have specific installation requirements, including a more complex set of steps. For example, [AlignScore](../user_guides/advanced/align_score_deployment.md) fact-checking, using [Llama-2](../../examples/configs/llm/hf_pipeline_llama2/README.md) requires two additional packages.

docs/user_guides/advanced/llama-guard-deployment.md

+2 −2
@@ -5,13 +5,13 @@ Detailed below are steps to self-host Llama Guard using vLLM and HuggingFace. Al
 1. Get access to the Llama Guard model from Meta on HuggingFace. See [this page](https://huggingface.co/meta-llama/LlamaGuard-7b) for more details.
 
 2. Log in to Hugging Face with your account token
-```
+```sh
 huggingface-cli login
 ```
 
 3. Here, we use vLLM to host a Llama Guard inference endpoint in the OpenAI-compatible mode.
 
-```
+```sh
 pip install vllm
 python -m vllm.entrypoints.openai.api_server --port 5123 --model meta-llama/LlamaGuard-7b
 ```

docs/user_guides/jailbreak_detection_heuristics/README.md

+3 −3
@@ -16,19 +16,19 @@ Make sure to check that the prerequisites for the ABC bot are satisfied.
 
 1. Install the `openai` package:
 
-```bash
+```sh
 pip install openai
 ```
 
 2. Set the `OPENAI_API_KEY` environment variable:
 
-```bash
+```sh
 export OPENAI_API_KEY=$OPENAI_API_KEY # Replace with your own key
 ```
 
 3. Install the following packages to test the jailbreak detection heuristics locally:
 
-```bash
+```sh
 pip install transformers torch
 ```
 
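The `export OPENAI_API_KEY=$OPENAI_API_KEY` line in the hunk above is a placeholder pattern; scripts that depend on the key can fail fast when it is missing using POSIX parameter expansion. A minimal sketch, where the key value is a made-up placeholder:

```sh
# Placeholder value for illustration only; use your real key.
export OPENAI_API_KEY="sk-example-key"

# Abort with a message if the variable is unset or empty.
: "${OPENAI_API_KEY:?OPENAI_API_KEY is not set}"
echo "OPENAI_API_KEY is set"
```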
docs/user_guides/multi_config_api/README.md

+1 −1
@@ -41,7 +41,7 @@ nest_asyncio.apply()
 
 In this guide, the server is started programmatically, as shown below. This is equivalent to (from the root of the project):
 
-```bash
+```sh
 nemoguardrails server --config=examples/server_configs/atomic
 ```
 