0.3.0 #490


Closed · wants to merge 162 commits

Changes from all commits · 162 commits
2f28c0a
current state
tthoraldson Oct 29, 2023
709ca38
Merge branch 'main' into feature/claude
tthoraldson Oct 29, 2023
74872a0
working anthropic llm_provider
tthoraldson Oct 29, 2023
9e3f9fa
add Anthropic example to docs
tthoraldson Oct 29, 2023
4293344
Add Anthropic Completions instance check
tthoraldson Oct 29, 2023
1519f6e
description, var clean up
tthoraldson Oct 29, 2023
06fb32c
run linting/autoformatting
tthoraldson Oct 31, 2023
afd808e
Add ToxicLanguage validator
thekaranacharya Nov 2, 2023
84a9b5e
Add walkthrough notebook
thekaranacharya Nov 2, 2023
e0d2e8f
Update model name and truncation/padding logic
thekaranacharya Nov 2, 2023
dbed25a
Add unit tests and integration tests
thekaranacharya Nov 2, 2023
3cafdb2
Modify dev requirements for unit test
thekaranacharya Nov 2, 2023
80c9dd3
Update default threshold based on new experiments
thekaranacharya Nov 3, 2023
beb080a
Update docstring with link to W&B project
thekaranacharya Nov 3, 2023
7cd32df
Merge branch 'main' into karan/sensitive-language
thekaranacharya Nov 3, 2023
d22da73
Update setup.py
thekaranacharya Nov 3, 2023
b3a2717
Bugfix
thekaranacharya Nov 3, 2023
664aa75
Strong type value to str and handle empty value
thekaranacharya Nov 3, 2023
5b86bbb
Add check for nltk in validate_each_sentence and else condition in va…
thekaranacharya Nov 3, 2023
ae36c1d
Add check for non-empty value in get_toxicity
thekaranacharya Nov 3, 2023
933f347
Add check for empty results from pipeline
thekaranacharya Nov 3, 2023
e93e628
Type cast to list
thekaranacharya Nov 3, 2023
95a555f
Convert results to list
thekaranacharya Nov 3, 2023
5e85ffd
Remove list type casting
thekaranacharya Nov 3, 2023
d489ed1
Remove extra unnecessary check
thekaranacharya Nov 3, 2023
fa79e8f
Merge branch 'main' into karan/sensitive-language
thekaranacharya Nov 13, 2023
f8828bc
Delete setup.py
thekaranacharya Nov 13, 2023
4d91e59
Update pyproject
thekaranacharya Nov 13, 2023
28b3f5b
Update both poetry files
thekaranacharya Nov 13, 2023
ebde652
Revert "Update both poetry files"
thekaranacharya Nov 13, 2023
12e497f
Only update pyproject without poetry.lock
thekaranacharya Nov 13, 2023
9c415f1
Run poetry lock --no-update
thekaranacharya Nov 13, 2023
e6e4b13
Remove torch dependency
thekaranacharya Nov 13, 2023
cc20815
Probable fix for torch dependency
thekaranacharya Nov 13, 2023
ea0929f
Remove spacing
thekaranacharya Nov 13, 2023
e1ac6fd
Add strong type casting for results from transformers model
thekaranacharya Nov 13, 2023
815dd07
Fix merge conflicts
thekaranacharya Nov 13, 2023
92e99cb
Fix linting
thekaranacharya Nov 13, 2023
8aae6fb
Change casting
thekaranacharya Nov 13, 2023
3221256
merge main
irgolic Nov 20, 2023
f1ad9ed
llm_api_wrappers: Fixup anthropic docs
irgolic Nov 20, 2023
91d3b52
llm_providers: Correctly import anthropic resources
irgolic Nov 20, 2023
eecbeb2
Add anthropic optional dependency
irgolic Nov 20, 2023
3e58834
format
irgolic Nov 20, 2023
6e098e6
pyproject.toml: Clamp pydantic to ">=1.10.9, <2.5"
irgolic Nov 20, 2023
a954535
pyproject.toml: Clamp pyright to 1.1.334
irgolic Nov 20, 2023
7781485
Update lockfile
irgolic Nov 20, 2023
087271d
implement stack
CalebCourier Nov 20, 2023
aa136e5
autoformat
CalebCourier Nov 21, 2023
8bdb664
Merge branch 'main' into history-and-logs
CalebCourier Nov 21, 2023
1b43fc8
typing
CalebCourier Nov 21, 2023
04f202d
inputs, start outputs
CalebCourier Nov 21, 2023
e80ba5c
base_prompt: Remove warning on old schema
irgolic Nov 21, 2023
9cae16f
Merge branch 'main' into feature/claude
irgolic Nov 22, 2023
a00810d
default everything to allow chaining
CalebCourier Nov 22, 2023
e268caf
Merge branch 'main' into history-and-logs
CalebCourier Nov 22, 2023
c0176c1
tests and lint
CalebCourier Nov 27, 2023
6e7a959
Validation Outcome (#431)
CalebCourier Nov 28, 2023
b78505f
impl and refactor
CalebCourier Nov 28, 2023
8b1b60e
autoformat and lint fixes
CalebCourier Nov 28, 2023
e35019a
use copies for trimming
CalebCourier Nov 28, 2023
6dd7149
Validation Outcome (#431)
CalebCourier Nov 28, 2023
6cccedd
impl and refactor
CalebCourier Nov 28, 2023
5d93d4a
autoformat and lint fixes
CalebCourier Nov 28, 2023
e10c767
lint fixes
CalebCourier Nov 28, 2023
e20a171
fix reask merging, harmonize Call and ValidationOutcome
CalebCourier Nov 29, 2023
51bba00
mark test as TODO
CalebCourier Nov 29, 2023
e071de4
force clean merge
CalebCourier Nov 29, 2023
64d55b8
Merge branch '0.3.0' into history-and-logs
CalebCourier Nov 29, 2023
73a1d09
Internal plumbing for input validation
irgolic Nov 29, 2023
ad2bc9f
Input validation via composition
irgolic Nov 29, 2023
bdba151
Updated pydantic instructions for openai v1
jamesbraza Nov 29, 2023
3a888b5
Moved to gpt-3.5-turbo-instruct since text-davinci-003 is deprecated
jamesbraza Nov 29, 2023
e238b7e
Added docstring to escape, explaining its purpose
jamesbraza Nov 29, 2023
1ac6553
Reverting back to text-gen model, since newer OpenAI model had differ…
jamesbraza Nov 29, 2023
ad1d977
Expanded test case of escape to account for unclosed curly
jamesbraza Nov 29, 2023
b8707a3
many test fixes, many more to come
CalebCourier Nov 29, 2023
c4b2901
Moved LangChain docs to use openai>=1
jamesbraza Nov 30, 2023
8fb9bcf
msg_history validation
irgolic Nov 30, 2023
96c321c
add prompt and instructions shortcuts
CalebCourier Nov 30, 2023
01589e7
show shortcut usage in tests
CalebCourier Nov 30, 2023
ad35773
start clean up, lift n shift rich print properties
CalebCourier Nov 30, 2023
615e7e2
remove old log classes
CalebCourier Nov 30, 2023
c99c89d
lint fixes
CalebCourier Nov 30, 2023
7051e3c
type fixes
CalebCourier Nov 30, 2023
faa15f0
lint fixes and cleanup
CalebCourier Nov 30, 2023
11e7550
fix errors from typing
CalebCourier Nov 30, 2023
fa120c1
ignore pyright bc it doesn't understand pydantic
CalebCourier Nov 30, 2023
39932b8
try to run CI
CalebCourier Nov 30, 2023
8e48155
fix the same tests again
CalebCourier Nov 30, 2023
cdc2269
update poetry lock
CalebCourier Nov 30, 2023
3aa6043
update example notebooks
CalebCourier Nov 30, 2023
cb0f15b
update supplemental docs
CalebCourier Nov 30, 2023
a1024d1
initial docs update
CalebCourier Nov 30, 2023
709c8d4
track initial prompt source, add samples to logs doc
CalebCourier Dec 1, 2023
e7021f2
lint fix
CalebCourier Dec 1, 2023
d59b58f
cleanup
CalebCourier Dec 1, 2023
ee18b8c
Merge pull request #460 from guardrails-ai/history-and-logs
CalebCourier Dec 1, 2023
787ab7c
Use safe get for async validate_dependents
ShreyaR Dec 1, 2023
6a3ddaa
raise on msg_history/prompt validation mismatch
irgolic Dec 4, 2023
be87cd0
Merge remote-tracking branch 'upstream/0.3.0' into input-validation
irgolic Dec 4, 2023
01a9cca
amend input validation call logging
irgolic Dec 4, 2023
3524f2d
tests, format
irgolic Dec 4, 2023
a0a693e
add async tests
irgolic Dec 4, 2023
33428c6
Validator refactor (#478)
zsimjee Dec 4, 2023
59057f0
fix one-line validator docstring (#484)
nefertitirogers Dec 4, 2023
fce8266
Add LlamaIndex example with GuardrailsOutputParser
thekaranacharya Nov 20, 2023
5f199ef
use updated notebook
zsimjee Dec 4, 2023
a1349e3
ignore llamaindex notebook
zsimjee Dec 4, 2023
0874c8b
Merge branch '0.3.0' into feature/claude
zsimjee Dec 4, 2023
f71d2dd
fix merge conflicts
Dec 4, 2023
8ee9d61
lint fix
Dec 4, 2023
8022c6f
Capture logs (#485)
CalebCourier Dec 4, 2023
eff5e5a
Merge pull request #467 from jamesbraza/escape-docs
CalebCourier Dec 4, 2023
70af18d
update from bad merge
Dec 4, 2023
baf92f1
Merge pull request #469 from jamesbraza/modernizing-langchain
CalebCourier Dec 4, 2023
cfed130
add validator to init file
Dec 4, 2023
c409794
Merge pull request #466 from jamesbraza/fixing-pydantic-examples
CalebCourier Dec 4, 2023
87ba4c7
add pipeline
Dec 4, 2023
f1cbb2e
merge 0.3.0
zsimjee Dec 4, 2023
89f89b3
re-add competitor check
zsimjee Dec 5, 2023
8694b00
lint
zsimjee Dec 5, 2023
c3dd838
Remove string formatting deprecation warning from prompt.py and instr…
irgolic Dec 5, 2023
79bbbd4
Merge remote-tracking branch 'upstream/0.3.0' into input-validation
irgolic Dec 5, 2023
28355fa
always push iteration to stack first
irgolic Dec 5, 2023
b437d85
wrap user facing exceptions
irgolic Dec 5, 2023
5bce4de
Merge branch 'main' into 0.3.0
CalebCourier Dec 5, 2023
07a4def
Merge branch '0.3.0' into 0.3.x-docs-updates
CalebCourier Dec 5, 2023
7a66f0d
lint fix
CalebCourier Dec 5, 2023
050c01a
Use field names in message history instead of explanations
zsimjee Dec 5, 2023
bf8286b
Merge pull request #487 from guardrails-ai/0.3.x-docs-updates
zsimjee Dec 6, 2023
321d87f
lint
zsimjee Dec 6, 2023
89b6d53
Merge branch '0.3.0' into karan/sensitive-language
zsimjee Dec 6, 2023
3500799
ref validated outputs in toxic lang tests
zsimjee Dec 7, 2023
5de4cb6
Merge pull request #422 from guardrails-ai/karan/sensitive-language
zsimjee Dec 7, 2023
5ec43b0
merge 0.3.0
zsimjee Dec 7, 2023
a4ab2ab
correct typing of safeget function call
zsimjee Dec 7, 2023
2edbffd
Merge pull request #481 from guardrails-ai/shreya/safe-get-async-vali…
zsimjee Dec 7, 2023
7963e3c
Merge pull request #444 from irgolic/resolve-string-formatting-deprec…
zsimjee Dec 7, 2023
1e280c7
merge in 0.3.0
zsimjee Dec 7, 2023
0509f80
Merge branch 'tthoraldson-feature/claude' into 0.3.0
zsimjee Dec 7, 2023
645c7c6
Merge branch '0.3.0' into input-validation
irgolic Dec 7, 2023
b5d9f6e
run: Split long control flow into submethods
irgolic Dec 7, 2023
6d03e84
Add input validation notebook
irgolic Dec 7, 2023
43f974c
Merge pull request #488 from guardrails-ai/err-msg-ref-fields
CalebCourier Dec 7, 2023
837e1d2
Merge branch '0.3.0' into input-validation
irgolic Dec 7, 2023
037cb40
datatypes: Handle missing optional lists and objects
irgolic Dec 7, 2023
8c4a3da
Merge pull request #494 from guardrails-ai/fix-optional-list-object
zsimjee Dec 7, 2023
9c1dfb3
Merge pull request #493 from guardrails-ai/input-validation
zsimjee Dec 7, 2023
bcf8066
Exclude competitor check notebook (#495)
thekaranacharya Dec 7, 2023
f136eb8
fix deps and tests
CalebCourier Dec 7, 2023
09c1c4c
feat Add Enum datatype
emekaokoli19 Dec 5, 2023
9a7d117
Enum: fix functionality
irgolic Dec 6, 2023
9072c85
Enum: add tests
irgolic Dec 6, 2023
8723375
fix test
CalebCourier Dec 7, 2023
7718b18
Merge branch 'main' into 0.3.0
CalebCourier Dec 7, 2023
f9315ae
fix error expectation
CalebCourier Dec 7, 2023
0278c84
Validation Outcome (#431)
CalebCourier Nov 28, 2023
192047b
parent 7890d7b6b21d0270405947db29388a3317a93a20
CalebCourier Nov 30, 2023
4e3d34a
Add LlamaIndex example with GuardrailsOutputParser
thekaranacharya Nov 20, 2023
4281836
parent ee1a8dc9bc74e0c358749acf555f93cd46eb781e
thekaranacharya Nov 2, 2023
d8a7d8b
current state
tthoraldson Oct 29, 2023
1 change: 1 addition & 0 deletions .github/workflows/ci.yml
Original file line number Diff line number Diff line change
@@ -9,6 +9,7 @@ on:
branches:
- main
- dev
- '0.3.0'

# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
3 changes: 2 additions & 1 deletion .github/workflows/scripts/run_notebooks.sh
@@ -9,7 +9,8 @@ cd docs/examples
# Function to process a notebook
process_notebook() {
notebook="$1"
if [ "$notebook" != "valid_chess_moves.ipynb" ] && [ "$notebook" != "translation_with_quality_check.ipynb" ] && [ "$notebook" != "competitors_check.ipynb" ]; then
invalid_notebooks=("valid_chess_moves.ipynb" "translation_with_quality_check.ipynb" "llamaindex-output-parsing.ipynb" "competitors_check.ipynb")
if [[ ! " ${invalid_notebooks[@]} " =~ " ${notebook} " ]]; then
echo "Processing $notebook..."
poetry run jupyter nbconvert --to notebook --execute "$notebook"
if [ $? -ne 0 ]; then
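The hunk above swaps a chain of `!=` comparisons for a Bash array-membership test. A standalone sketch of that idiom (the function wrapper and file names are illustrative, not part of the actual script):

```shell
#!/usr/bin/env bash
# Space-padding both the expanded array and the candidate name makes the
# (quoted, therefore literal) pattern match whole filenames only.
invalid_notebooks=("valid_chess_moves.ipynb" "competitors_check.ipynb")

check() {
  local notebook="$1"
  if [[ ! " ${invalid_notebooks[@]} " =~ " ${notebook} " ]]; then
    echo "process"
  else
    echo "skip"
  fi
}

check "competitors_check.ipynb"   # prints "skip"
check "toxic_language.ipynb"      # prints "process"
```

Note the quoting: because the right-hand side of `=~` is quoted, it is matched literally, so this is effectively a delimited substring test rather than a regex.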
1 change: 1 addition & 0 deletions .gitignore
@@ -20,6 +20,7 @@ dist/*
.cache
scratch/
.coverage*
coverage.xml
test.db
test.index
htmlcov
6 changes: 6 additions & 0 deletions Makefile
@@ -47,6 +47,9 @@ test-cov:
view-test-cov:
poetry run pytest tests/ --cov=./guardrails/ --cov-report html && open htmlcov/index.html

view-test-cov-file:
poetry run pytest tests/unit_tests/test_logger.py --cov=./guardrails/ --cov-report html && open htmlcov/index.html

docs-serve:
poetry run mkdocs serve -a $(MKDOCS_SERVE_ADDR)

@@ -59,6 +62,9 @@ dev:
full:
poetry install --all-extras

self-install:
pip install -e .

all: autoformat type lint docs test

precommit:
2 changes: 1 addition & 1 deletion README.md
@@ -154,7 +154,7 @@ Call the `Guard` object with the LLM API call as the first argument and add any
import openai

# Wrap the OpenAI API call with the `guard` object
raw_llm_output, validated_output = guard(
raw_llm_output, validated_output, *rest = guard(
openai.Completion.create,
engine="text-davinci-003",
max_tokens=1024,
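The change above unpacks the guard's return with a starred target. A small sketch of why that matters (the return shape here is a stand-in, not the real `guard()` signature):

```python
# Stand-in for guard(): older versions returned (raw, validated); newer ones
# may append extra values. Extended unpacking tolerates both shapes.
def fake_guard():
    return ("raw llm text", {"validated": True}, "extra_metadata")

raw_llm_output, validated_output, *rest = fake_guard()
print(raw_llm_output)    # raw llm text
print(validated_output)  # {'validated': True}
print(rest)              # ['extra_metadata']
```

With a plain two-name unpack, the third return value would raise `ValueError: too many values to unpack`; `*rest` absorbs any extras, so old call sites keep working.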
4 changes: 4 additions & 0 deletions docs/api_reference/helper_classes.md
@@ -0,0 +1,4 @@
::: guardrails.classes.generic
options:
members:
- "Stack"
8 changes: 8 additions & 0 deletions docs/api_reference/history_and_logs.md
@@ -0,0 +1,8 @@
::: guardrails.classes.history
options:
members:
- "Call"
- "CallInputs"
- "Inputs"
- "Iteration"
- "Outputs"
2 changes: 1 addition & 1 deletion docs/concepts/guard.md
@@ -19,7 +19,7 @@ from guardrails import Guard

guard = Guard.from_rail(...)

raw_output, validated_output = guard(
raw_output, validated_output, *rest = guard(
openai.Completion.create,
engine="text-davinci-003",
max_tokens=1024,
215 changes: 164 additions & 51 deletions docs/concepts/logs.md
@@ -1,78 +1,191 @@
# Inspecting logs

All `gd.Guard` calls are logged internally, and can be accessed via two methods, `gd.Guard.guard_state` or `guardrails.log`.
All `Guard` calls are logged internally, and can be accessed via the guard history.

## 🪵 Accessing logs via `guardrails.log`
## 🇻🇦 Accessing logs via `Guard.history`

This is the simplest way to access logs. It returns a list of all `gd.Guard` calls, in the order they were made.
`history` is an attribute of the `Guard` class. It implements a standard `Stack` interface with a few extra helper methods and properties. For more information on our `Stack` implementation see the [Helper Classes](/api_reference/helper_classes) page.
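As a rough illustration, the `Stack` interface described here can be sketched as a thin `list` subclass; the member names below (`push`, `first`, `last`) are assumptions based on this page, not the actual guardrails implementation:

```python
class Stack(list):
    """Minimal sketch of a stack with the helper accessors described above."""

    def push(self, item):
        self.append(item)

    @property
    def first(self):
        # Oldest entry, or None when the stack is empty.
        return self[0] if self else None

    @property
    def last(self):
        # Most recent entry, or None when the stack is empty.
        return self[-1] if self else None


history = Stack()
history.push("Call log for my_guard(...)")
history.push("Call log for my_guard.parse(...)")
print(history.first)  # Call log for my_guard(...)
print(history.last)   # Call log for my_guard.parse(...)
```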

In order to access logs, run:
Each entry in the history stack is a `Call` log containing information specific to a particular `Guard.__call__` or `Guard.parse` call; entries appear in the order the calls were executed within the current session.

```bash
eliot-tree --output-format=ascii guardrails.log
```

For example, if you have a guard:

```py
my_guard = Guard.from_rail(...)
```

## 🇻🇦 Accessing logs via `gd.Guard.guard_state`

and you call it multiple times:

```py
response_1 = my_guard(...)

response_2 = my_guard.parse(...)
```

`guard_state` is an attribute of the `gd.Guard` class. It contains:

1. A list of all `gd.Guard` calls, in the order they were made.
2. For each call, reasks needed and their results.
Then `guard.history` will have two call logs with the first representing the first call `response_1 = my_guard(...)` and the second representing the following `parse` call `response_2 = my_guard.parse(...)`.

To pretty print logs, run:
To pretty print logs for the latest call, run:

```python
from rich import print

print(guard.state.most_recent_call.tree)
print(guard.history.last.tree)
```
--8<--
docs/html/single-step-history.html
--8<--

![guard_state](../img/guard_history.png)

To access fine-grained logs on field validation, see the FieldValidationLogs object:

```python
validation_logs = guard.guard_state.all_histories[0].history[0].field_validation_logs
print(validation_logs.json(indent=2))
```

The `Call` log will contain initial and final information about a particular guard call.

```py
first_call = my_guard.history.first
```

For example, it tracks the initial inputs as provided:
```py
print("prompt\n-----")
print(first_call.prompt)
print("prompt params\n------------- ")
print(first_call.prompt_params)
```
```log
prompt
-----

You are a human in an enchanted forest. You come across opponents of different types. You should fight smaller opponents, run away from bigger ones, and freeze if the opponent is a bear.

You run into a ${opp_type}. What do you do?

${gr.complete_json_suffix_v2}


Here are a few examples

goblin: {"action": {"chosen_action": "fight", "weapon": "crossbow"}}
troll: {"action": {"chosen_action": "fight", "weapon": "sword"}}
giant: {"action": {"chosen_action": "flight", "flight_direction": "north", "distance": 1}}
dragon: {"action": {"chosen_action": "flight", "flight_direction": "south", "distance": 4}}
black bear: {"action": {"chosen_action": "freeze", "duration": 3}}
beets: {"action": {"chosen_action": "fight", "weapon": "fork"}}

prompt params
-------------
{'opp_type': 'grizzly'}
```

as well as the final outputs:
```py
print("status: ", first_call.status) # The final status of this guard call
print("validated response:", first_call.validated_output) # The final valid output of this guard call
```
```log
status: pass
validated response: {'action': {'chosen_action': 'freeze', 'duration': 3}}
```


The `Call` log also tracks cumulative values from any iterations that happen within the call.

For example, if the first response from the LLM fails validation and a reask occurs, the `Call` log can provide total tokens consumed (*currently only for OpenAI models), as well as access to all of the raw outputs from the LLM:
```py
print("prompt token usage: ", first_call.prompt_tokens_consumed) # Total number of prompt tokens consumed across iterations within this call
print("completion token usage: ", first_call.completion_tokens_consumed) # Total number of completion tokens consumed across iterations within this call
print("total token usage: ",first_call.tokens_consumed) # Total number of tokens consumed; equal to the sum of the two values above
print("llm responses\n-------------") # A Stack of the LLM responses, in the order they were received
for r in first_call.raw_outputs:
print(r)
```
```log
prompt token usage: 909
completion token usage: 57
total token usage: 966

llm responses
-------------
{"action": {"chosen_action": "freeze"}}
{
  "action": {
    "chosen_action": "freeze",
    "duration": null
  }
}
{
  "action": {
    "chosen_action": "freeze",
    "duration": 1
  }
}
```

For more information on `Call`, see the [History & Logs](/api_reference/history_and_logs/#guardrails.classes.history.Call) page.

## 🇻🇦 Accessing logs from individual steps
In addition to the cumulative values available directly on the `Call` log, it also contains a `Stack` of `Iteration`s. Each `Iteration` represents the logs from within a step in the guardrails process, including the call to the LLM as well as parsing and validating the LLM's response.

Each `Iteration` is treated as a stateless entity, so it only contains information about the inputs and outputs of the particular step it represents.

For example, to see the raw LLM response as well as the logs for the specific validations that failed during the first step of a call, we can access this information via that step's `Iteration`:

```py
first_step = first_call.iterations.first

first_llm_output = first_step.raw_output
print("First LLM response\n------------------")
print(first_llm_output)
print(" ")

validation_logs = first_step.validator_logs
print("\nValidator Logs\n--------------")
for log in validation_logs:
print(log.json(indent=2))
```
```log
First LLM response
------------------
{"action": {"chosen_action": "fight", "weapon": "spoon"}}


Validator Logs
--------------
{
"validator_name": "ValidChoices",
"value_before_validation": "spoon",
"validation_result": {
"outcome": "fail",
"metadata": null,
"error_message": "Value spoon is not in choices ['crossbow', 'axe', 'sword', 'fork'].",
"fix_value": null
},
"value_after_validation": {
"incorrect_value": "spoon",
"fail_results": [
{
"outcome": "fail",
"metadata": null,
"error_message": "Value spoon is not in choices ['crossbow', 'axe', 'sword', 'fork'].",
"fix_value": null
}
],
"path": [
"action",
"weapon"
]
}
}
```
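Given validator log entries shaped like the output above, failures can be filtered out programmatically. A sketch with stand-in dictionaries (the real entries are objects, not plain dicts):

```python
# Stand-in validator log entries mirroring the structure shown above.
logs = [
    {"validator_name": "ValidChoices",
     "validation_result": {"outcome": "fail",
                           "error_message": "Value spoon is not in choices."}},
    {"validator_name": "TwoWords",
     "validation_result": {"outcome": "pass", "error_message": None}},
]

# Keep only entries whose validation outcome is a failure.
failures = [log for log in logs
            if log["validation_result"]["outcome"] == "fail"]
print([f["validator_name"] for f in failures])  # ['ValidChoices']
```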

Similar to the `Call` log, we can also see the token usage for just this step:
```py
print("prompt token usage: ", first_step.prompt_tokens_consumed)
print("completion token usage: ", first_step.completion_tokens_consumed)
print("token usage for this step: ",first_step.tokens_consumed)
```
```log
prompt token usage: 617
completion token usage: 16
token usage for this step: 633
```
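The relationship between step-level and call-level counts can be sketched with stand-in numbers taken from the outputs above; the assumption (not confirmed by this page) is that per-step totals sum to the call totals, and the classes here are fakes rather than the guardrails ones:

```python
class FakeIteration:
    """Stand-in for an Iteration's token-usage fields."""
    def __init__(self, prompt_tokens, completion_tokens):
        self.prompt_tokens_consumed = prompt_tokens
        self.completion_tokens_consumed = completion_tokens
        self.tokens_consumed = prompt_tokens + completion_tokens

# First step (617 + 16) plus a hypothetical reask step (292 + 41).
iterations = [FakeIteration(617, 16), FakeIteration(292, 41)]

prompt_total = sum(i.prompt_tokens_consumed for i in iterations)
completion_total = sum(i.completion_tokens_consumed for i in iterations)
total = sum(i.tokens_consumed for i in iterations)
print(prompt_total, completion_total, total)  # 909 57 966
```

These sums match the call-level figures shown earlier (909 prompt, 57 completion, 966 total).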

For more information on the properties available on `Iteration`, see the [History & Logs](/api_reference/history_and_logs/#guardrails.classes.history.Iteration) page.
4 changes: 2 additions & 2 deletions docs/concepts/validators.md
@@ -35,7 +35,7 @@ Sometimes validators need additional parameters that are only available during run
```python
guard = Guard.from_rail("my_railspec.rail")

raw_output, guarded_output = guard(
raw_output, guarded_output, *rest = guard(
llm_api=openai.ChatCompletion.create,
model="gpt-3.5-turbo",
num_reasks=3,
@@ -134,7 +134,7 @@ ${guardrails.complete_json_suffix}

guard = Guard.from_rail_string(rail_string=rail_str)

raw_output, guarded_output = guard(
raw_output, guarded_output, *rest = guard(
llm_api=openai.ChatCompletion.create,
model="gpt-3.5-turbo"
)