
Commit c0fcdf5

docs(fix): replace img tag with Markdown images
Signed-off-by: Mike McKiernan <[email protected]>
1 parent 34a94c6 commit c0fcdf5

3 files changed (+8 −25 lines)

docs/getting-started/2-core-colang-concepts/README.md (+2 −6)
@@ -273,9 +273,7 @@ In our "Hello World" example, the predefined messages "Hello world!" and "How ar
 
 In the previous example, the LLM is prompted once. The following figure provides a summary of the outlined sequence of steps:
 
-<div align="center">
-<img src="../../_static/puml/core_colang_concepts_fig_1.png" width="486">
-</div>
+![Sequence diagram showing the three main steps of processing a user greeting: 1) Computing the canonical form of the user message, 2) Determining the next step using flows, and 3) Generating the bot's response message](../../_static/puml/core_colang_concepts_fig_1.png){w=486px align=center}
 
 Let's examine the same process for the follow-up question "What is the capital of France?".
 
@@ -321,9 +319,7 @@ Summary: 3 LLM call(s) took 1.79 seconds and used 1374 tokens.
 
 Based on these steps, we can see that the `ask general question` canonical form is predicted for the user utterance "What is the capital of France?". Since there is no flow that matches it, the LLM is asked to predict the next step, which in this case is `bot response for general question`. Also, since there is no predefined response, the LLM is asked a third time to predict the final message.
 
-<div align="center">
-<img src="../../_static/puml/core_colang_concepts_fig_2.png" width="686">
-</div>
+![Sequence diagram showing the three main steps of processing a follow-up question in NeMo Guardrails: 1) Computing the canonical form of the user message, such as 'ask general question' for 'What is the capital of France?', 2) Determining the next step using the LLM, such as 'bot response for general question', and 3) Generating the bot's response message. These are the steps to handle a question that doesn't have a predefined flow.](../../_static/puml/core_colang_concepts_fig_2.png){w=686px align=center}
 
 ## Wrapping up
 
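For reference, the three LLM calls summarized in the hunk above (`Summary: 3 LLM call(s) took 1.79 seconds and used 1374 tokens.`) can be reproduced and inspected with the NeMo Guardrails Python API. A minimal sketch, assuming the getting-started configuration lives in a local `./config` directory (the path is an assumption, not part of this commit):

```python
from nemoguardrails import LLMRails, RailsConfig

# Load the guardrails configuration; "./config" is an assumed path pointing
# at the getting-started example configuration.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# A question with no predefined flow triggers three LLM calls:
# 1) generate the canonical form, 2) predict the next step,
# 3) generate the final bot message.
response = rails.generate(messages=[
    {"role": "user", "content": "What is the capital of France?"}
])
print(response["content"])

# Inspect the LLM calls behind the last generation.
info = rails.explain()
info.print_llm_calls_summary()
print(info.llm_calls[0].completion)  # the predicted canonical form
```
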
docs/getting-started/4-input-rails/README.md (+2 −6)
@@ -283,9 +283,7 @@ print(info.llm_calls[0].completion)
 
 The following figure depicts in more details how the self-check input rail works:
 
-<div align="center">
-<img src="../../_static/puml/input_rails_fig_1.png" width="815">
-</div>
+![Sequence diagram showing how the self-check input rail works in NeMo Guardrails: 1) Application code sends a user message to the Programmable Guardrails system, 2) The message is passed to the Input Rails component, 3) Input Rails calls the self_check_input action, 4) The action uses an LLM to evaluate the message, 5) If the LLM returns 'Yes' indicating inappropriate content, the input is blocked and the bot responds with 'I am not able to respond to this.'](../../_static/puml/input_rails_fig_1.png){w=815px align=center}
 
 The `self check input` rail calls the `self_check_input` action, which in turn calls the LLM using the `self_check_input` task prompt.
 
@@ -327,9 +325,7 @@ print(info.llm_calls[0].completion)
 
 Because the input rail was not triggered, the flow continued as usual.
 
-<div align="center">
-<img src="../../_static/puml/input_rails_fig_2.png" width="740">
-</div>
+![Sequence diagram showing how the self-check input rail works in NeMo Guardrails when processing a valid user message: 1) Application code sends a user message to the Programmable Guardrails system, 2) The message is passed to the Input Rails component, 3) Input Rails calls the self_check_input action, 4) The action uses an LLM to evaluate the message, 5) If the LLM returns 'No' (indicating appropriate content), the input is allowed to continue, 6) The system then proceeds to generate a bot response using the general task prompt](../../_static/puml/input_rails_fig_2.png){w=740px align=center}
 
 Note that the final answer is not correct.
 
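The Yes/No verdict described in this file can be observed directly from the recorded LLM calls. A minimal sketch, assuming a `./config` directory that enables the `self check input` flow and defines the `self_check_input` prompt, as in the input-rails guide:

```python
from nemoguardrails import LLMRails, RailsConfig

# Assumes ./config enables the input rail, e.g. in config.yml:
#   rails:
#     input:
#       flows:
#         - self check input
# and that a `self_check_input` prompt is defined.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

response = rails.generate(messages=[
    {"role": "user", "content": "What can you do for me?"}
])
print(response["content"])

# The first LLM call is the self_check_input task; its completion is the
# Yes/No verdict that decides whether the message is blocked.
info = rails.explain()
print(info.llm_calls[0].completion)
```
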
docs/user-guides/guardrails-process.md (+4 −13)
@@ -6,9 +6,7 @@ This guide provides an overview of the main types of rails supported in NeMo Gua
 
 NeMo Guardrails has support for five main categories of rails: input, dialog, output, retrieval, and execution. The diagram below provides an overview of the high-level flow through these categories of flows.
 
-<p style="text-align: center;">
-<img src="../_static/images/programmable_guardrails_flow.png" style="max-width: 913px; width: 75%;">
-</p>
+![High-level flow through the five main categories of guardrails in NeMo Guardrails: input rails for validating user input, dialog rails for controlling conversation flow, output rails for validating bot responses, retrieval rails for handling retrieved information, and execution rails for managing custom actions.](../_static/images/programmable_guardrails_flow.png){w=75% align=center}
 
 ## Categories of Rails
 
@@ -28,24 +26,19 @@ There are five types of rails supported in NeMo Guardrails:
 
 The diagram below depicts the guardrails process in detail:
 
-<div aling="center">
-<img src="../_static/puml/master_rails_flow.png" width="75%">
-</div>
+![Sequence diagram showing the complete guardrails process in NeMo Guardrails: 1) Input Validation stage where user messages are processed by input rails that can use actions and LLM to validate or alter input, 2) Dialog stage where messages are processed by dialog rails that can interact with a knowledge base, use retrieval rails to filter retrieved information, and use execution rails to perform custom actions, 3) Output Validation stage where bot responses are processed by output rails that can use actions and LLM to validate or alter output. The diagram shows all optional components and their interactions, including knowledge base queries, custom actions, and LLM calls at various stages.](../_static/puml/master_rails_flow.png){w="75%" align=center}
 
 The guardrails process has multiple stages that a user message goes through:
 
 1. **Input Validation stage**: The user input is first processed by the input rails. The input rails decide if the input is allowed, whether it should be altered or rejected.
 2. **Dialog stage**: If the input is allowed and the configuration contains dialog rails (i.e., at least one user message is defined), then the user message is processed by the dialog flows. This will ultimately result in a bot message.
 3. **Output Validation stage**: After a bot message is generated by the dialog rails, it is processed by the output rails. The Output rails decide if the output is allowed, whether it should be altered, or rejected.
 
-
 ## The Dialog Rails Flow
 
 The diagram below depicts the dialog rails flow in detail:
 
-<p align="center">
-<img src="../_static/puml/dialog_rails_flow.png" width="500">
-</p>
+![Sequence diagram showing the detailed dialog rails flow in NeMo Guardrails: 1) User Intent Generation stage where the system first searches for similar canonical form examples in a vector database, then either uses the closest match if embeddings_only is enabled, or asks the LLM to generate the user's intent. 2) Next Step Prediction stage where the system either uses a matching flow if one exists, or searches for similar flow examples and asks the LLM to generate the next step. 3) Bot Message Generation stage where the system either uses a predefined message if one exists, or searches for similar bot message examples and asks the LLM to generate an appropriate response. The diagram shows all the interactions between the application code, LLM Rails system, vector database, and LLM, with clear branching paths based on configuration options and available predefined content.](../_static/puml/dialog_rails_flow.png){w=500px align=center}
 
 The dialog rails flow has multiple stages that a user message goes through:
 
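For illustration, the "at least one user message is defined" condition from the Dialog stage above corresponds to a dialog rail such as the one sketched below with the Python API. The greeting content mirrors the "Hello World" example referenced earlier in this commit; the model settings are placeholders, not part of this commit:

```python
from nemoguardrails import LLMRails, RailsConfig

# Colang content defining a user message, a predefined bot message, and a flow.
colang_content = """
define user express greeting
  "Hello"
  "Hi there"

define bot express greeting
  "Hello world!"

define flow greeting
  user express greeting
  bot express greeting
"""

# Minimal YAML config; the engine/model values are placeholders.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# Because a matching flow and a predefined bot message exist, only the
# user-intent (canonical form) step requires an LLM call.
response = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(response["content"])
```
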
@@ -59,6 +52,4 @@ The dialog rails flow has multiple stages that a user message goes through:
 
 When the `single_llm_call.enabled` is set to `True`, the dialog rails flow will be simplified to a single LLM call that predicts all the steps at once. The diagram below depicts the simplified dialog rails flow:
 
-<p align="center">
-<img src="../_static/puml/single_llm_call_flow.png" width="500">
-</p>
+![Sequence diagram showing the simplified dialog rails flow in NeMo Guardrails when single LLM call is enabled: 1) The system first searches for similar examples in the vector database for canonical forms, flows, and bot messages. 2) A single LLM call is made using the generate_intent_steps_message task prompt to predict the user's canonical form, next step, and bot message all at once. 3) The system then either uses the next step from a matching flow if one exists, or uses the LLM-generated next step. 4) Finally, the system either uses a predefined bot message if available, uses the LLM-generated message if the next step came from the LLM, or makes one additional LLM call to generate the bot message. This simplified flow reduces the number of LLM calls needed to process a user message.](../_static/puml/single_llm_call_flow.png){w=500px align=center}
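As a rough sketch of enabling the single-call mode referenced above: the option is assumed here to live under `rails.dialog.single_call` in `config.yml`, following the configuration guide; check the configuration reference for the exact key in your release, since it is not part of this commit.

```python
from nemoguardrails import RailsConfig

# config.yml content enabling the single-call dialog flow; the
# `rails.dialog.single_call` key is an assumption for this sketch,
# and the model settings are placeholders.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct

rails:
  dialog:
    single_call:
      enabled: True
"""

config = RailsConfig.from_content(yaml_content=yaml_content)
```

With the option enabled, the canonical form, next step, and bot message are predicted by a single `generate_intent_steps_message` call, as described in the diagram's alt text.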
