[bug] llm_api function triggering twice #681
Comments
@edisontim I'm taking a look. I'll update here when I've found something.
@edisontim I found something. There was an old flow for pydantic guards where we first tried to use function calling and, if that raised an exception, called again without it. We removed this for synchronous flows, but for some reason it still exists for the async flows. I don't see any obvious reason why this still needs to happen, so I'm going to dig into it and see if we can remove it.
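The legacy flow described here can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual guardrails code; `call_llm_with_fallback` and the `tools` kwarg are illustrative names:

```python
import asyncio

async def call_llm_with_fallback(llm_api, *args, **kwargs):
    # Hypothetical reconstruction of the old pydantic-guard flow:
    # first attempt the call with function calling enabled; if that
    # raises, retry the exact same call without function calling.
    # The side effect is that a wrapped llm_api runs twice.
    tools = kwargs.pop("tools", None)
    if tools is not None:
        try:
            return await llm_api(*args, tools=tools, **kwargs)
        except Exception:
            pass  # swallow the error and fall back to a plain call
    return await llm_api(*args, **kwargs)
```

Even when the fallback succeeds, any print or logging inside `llm_api` fires on both attempts, which matches the duplicated output reported in this issue.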
Cool! Thanks. I think there's another bug in the async API as well. If you check the text of my issue in the Additional context part, I have to delete the `instructions` field from the kwargs before calling the OpenAI client.
@edisontim Our async support for OpenAI does indeed seem to be broken and not well documented. I'm escalating this accordingly. In the meantime, to get past the specific issue you're encountering, try this:

```python
async def request_prompt_completion(self, prompt: str, instructions: str, *args, **kwargs) -> str:
    print("Printing")
    response = await self.client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": prompt,
            },
            {"role": "system", "content": instructions},
        ],
        *args,
        **kwargs,
    )
    msg = response.choices[0].message.content
    return msg
```
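One way to sanity-check a wrapper like this without making a network call is to inject the client and stub it. The following is a sketch under that assumption; the `LLMWrapper` class and stub objects are illustrative, and in real use `client` would be an `openai.AsyncClient()` instance:

```python
import asyncio
from types import SimpleNamespace

class LLMWrapper:
    # Illustrative holder for the suggested wrapper; in real use,
    # pass client=openai.AsyncClient() instead of a stub.
    def __init__(self, client):
        self.client = client

    async def request_prompt_completion(self, prompt: str, instructions: str, *args, **kwargs) -> str:
        response = await self.client.chat.completions.create(
            messages=[
                {"role": "user", "content": prompt},
                {"role": "system", "content": instructions},
            ],
            *args,
            **kwargs,
        )
        return response.choices[0].message.content

class _StubCompletions:
    # Mimics the shape of the OpenAI v1 chat.completions.create response.
    async def create(self, *args, **kwargs):
        message = SimpleNamespace(content="stubbed reply")
        return SimpleNamespace(choices=[SimpleNamespace(message=message)])

def make_stub_client():
    return SimpleNamespace(chat=SimpleNamespace(completions=_StubCompletions()))
```

For example, `asyncio.run(LLMWrapper(make_stub_client()).request_prompt_completion("hi", "be terse"))` returns the stubbed reply; with a real `AsyncClient`, a `model=` kwarg would also be required.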
I submitted a PR to resolve the initial issue you reported concerning the LLM API being called twice. As for the issue with async support, we're looking at some better and more transparent ways to handle our LLM interactions in general. I may move that to a different issue if you don't mind, so we can track it in a more dedicated way.
I submitted a new issue for OpenAI v1.x AsyncClient support. The duplicate call fix has been merged into main and will be released as part of guardrails v0.4.3. I'm going to close this issue for now, but please raise another if you experience any other problems.
Describe the bug
I have a wrapper function around the async `openai.AsyncClient().chat.completions.create`. Printing inside the function after passing it to the guard makes the print happen twice, so I'm wondering whether the API call also gets triggered twice.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Whatever happens in my wrapper function should only trigger once.
Library version:
0.4.2
Additional context
By the way, I struggled hard to get the async function of OpenAI to work, and I'm wondering if I even did it the right way (see how the wrapper function needs to delete the `instructions` field from the kwargs, as otherwise OpenAI raises an exception). The documentation on guardrails' website seems to rely on an old version of the OpenAI Python API; could you update it please? It would be very much appreciated :)