Guardrails Langchain integration with streamable AgentExecutor #651
Hi, I'm looking into this one now.
I've got reason to believe this is actually not a Guardrails-specific problem but rather a chaining one in general: when I swap out the […], I get the same RunnableSequence error. I'm diving deeper to see what the actual issue is.
OK, the second use seems to be the right place to chain in (where we got the 'API must be provided' error). I attached a debugger and found that when the guard […]
When I chain the guard after the OpenAIToolsAgentOutputParser(), a value is finally passed to the guard; however, the value includes the function calls. The problem here is that the agent has not executed by the time the results are passed to the guard. Figuring out if there's a way to fix that:

```
[OpenAIToolAgentAction(tool='get_retriever_docs', tool_input={'query': 'secret'}, log="\nInvoking: […]
```
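To make the mismatch concrete, here is a stdlib-only sketch of the two output shapes involved. `ToolAction`, `parser_output`, and `executor_output` are illustrative stand-ins I'm introducing, not the real LangChain classes: the output parser emits pending tool actions, while the executor emits the finished answer, which is the only thing a text-validating guard can act on.

```python
from dataclasses import dataclass

# Hypothetical stand-in for OpenAIToolAgentAction: what the output parser
# emits *before* any tool has actually run.
@dataclass
class ToolAction:
    tool: str
    tool_input: dict

# Shape of the parser's output: a pending tool call, not a final answer.
parser_output = [ToolAction(tool="get_retriever_docs", tool_input={"query": "secret"})]

# Shape of the executor's output after the tools have run: a finished answer
# that a guard validating text can actually check.
executor_output = {"output": "the secret is 'blue-green-apricot-brownie-cake-mousepad'"}
```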
Where we chain is very important: the place to chain the guard with output validation is after the agent executes. See the code sample below.

```python
from langchain import hub
from langchain.agents import AgentExecutor
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain.tools import tool
from langchain_openai import ChatOpenAI
from langchain_core.documents.base import Document
from guardrails.hub import RegexMatch
from guardrails import Guard
from langchain_core.runnables import RunnablePassthrough
from langchain.agents.format_scratchpad.openai_tools import (
    format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser

prompt = hub.pull("hwchase17/openai-tools-agent")


@tool
def get_retriever_docs(query: str) -> list[Document]:
    """Returns a list of documents from the retriever."""
    return [
        Document(
            page_content="# test file\n\nThis is a test file with a secret code of 'blue-green-apricot-brownie-cake-mousepad'.",
            metadata={"source": "./test.md"},
        )
    ]


# Set up a Guard
topic = "apricot"
guard = Guard().use(RegexMatch(topic, match_type="search", on_fail="filter"))

model = ChatOpenAI(temperature=0, streaming=False)
llm = model
tools = [get_retriever_docs]

############################################ this is a copy-paste from langchain.agents.create_openai_tools_agent
llm_with_tools = llm.bind(tools=[convert_to_openai_tool(tool) for tool in tools])
agent = (
    RunnablePassthrough.assign(
        agent_scratchpad=lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        )
    )
    | prompt
    | llm_with_tools
    | OpenAIToolsAgentOutputParser()
)
############################################

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True).with_config(
    {"run_name": "Agent"}
)

# Chain the guard *after* the executor so it validates the final output.
chain = agent_executor | guard

query = "call get_retriever_docs and tell me a secret from the docs"
print(chain.invoke({"input": query}))
```
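For clarity, here is a stdlib-only sketch of the check the `RegexMatch` validator in the sample performs. This is an illustration, not the Guardrails implementation; the variable names are mine. `match_type="search"` means the pattern may appear anywhere in the output, and `on_fail="filter"` drops the value when the check fails.

```python
import re

# Illustrative stand-in for RegexMatch(topic, match_type="search", on_fail="filter").
topic = "apricot"
answer = "The secret code is 'blue-green-apricot-brownie-cake-mousepad'."

# match_type="search": the pattern only needs to occur somewhere in the string.
passed = re.search(topic, answer) is not None

# on_fail="filter": a failing value is dropped rather than raising an error.
filtered = answer if passed else None
```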
@zsimjee Confirmed that chaining the […]
Awesome! I wish there was a way to chain the actual runnables in one pipeline, like:

```python
(
    RunnablePassthrough.assign(
        agent_scratchpad=lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        )
    )
    | prompt
    | llm_with_tools
    | OpenAIToolsAgentOutputParser()
    | AgentExecutor().with_config({"run_name": "Agent"})
    | guard
    | [other output parsers]
)
```

Closing for now.
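The ordering constraint the thread settles on can be sketched with plain function composition. `fake_agent_executor` and `guard_filter` are hypothetical stand-ins I'm introducing for the LangChain executor and the Guardrails guard, not the real APIs:

```python
import re

def fake_agent_executor(inputs: dict) -> dict:
    """Stand-in for AgentExecutor: runs the tools and returns the final answer."""
    return {"output": "The secret code is 'blue-green-apricot-brownie-cake-mousepad'."}

def guard_filter(result: dict) -> dict:
    """Stand-in for the Guard: validates the *final* text, filtering on failure."""
    if result["output"] and re.search("apricot", result["output"]):
        return result
    return {"output": None}  # on_fail="filter" drops the failing value

# The guard only sees a finished answer when chained after the executor;
# chained earlier, it would receive pending tool calls instead.
final = guard_filter(fake_agent_executor({"input": "tell me a secret"}))
```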
Discussed in #644
Description
Unable to integrate Guardrails with a LangChain AgentExecutor. This issue can be reproduced by running the following code after installing RegexMatch:
Partial system info (let me know if a complete minimal example would be helpful).