
[bug] llm_api function triggering twice #681


Closed
edisontim opened this issue Mar 27, 2024 · 6 comments
Assignees
Labels
bug Something isn't working

Comments

@edisontim

Describe the bug
I have a wrapper function around the async openai.AsyncClient().chat.completions.create. After passing the wrapper to the guard, a print inside it fires twice, so I suspect the API call is also being triggered twice.

To Reproduce
Steps to reproduce the behavior:

class AsyncOpenAiClient:
    client: openai.AsyncOpenAI

    def __init__(self):
        self.client = openai.AsyncClient()

    async def request_prompt_completion(self, input_str: str, *args, **kwargs) -> str:
        # Remove "instructions" so OpenAI doesn't reject it as an unexpected kwarg
        system_prompt = kwargs.pop("instructions")

        print("Printing")
        response = await self.client.chat.completions.create(
            messages=[
                {
                    "role": "user",
                    "content": input_str,
                },
                {"role": "system", "content": system_prompt},
            ],
            *args,
            **kwargs,
        )
        msg = response.choices[0].message.content
        return msg

    async def request_embedding(self, input, **kwargs) -> list[float]:
        response = await self.client.embeddings.create(input=input, **kwargs)
        return response.data[0].embedding

class DialogueSegment(BaseModel):
    full_name: str = Field(description="Full name of the NPC speaking.")
    dialogue_segment: str = Field(description="The dialogue spoken by the NPC.")

class Thought(BaseModel):
    full_name: str = Field(description="Full name of the NPC expressing the thought.")
    value: str = Field(
        description="""The NPC's thoughts and feelings about the discussion, including nuanced emotional responses and sentiments towards the topics being discussed."""
    )

class Townhall(BaseModel):
    dialogue: list[DialogueSegment] = Field(
        description="""Discussion held by the NPCs, structured to ensure each NPC speaks twice, revealing their viewpoints and
            emotional reactions to the discussion topics."""
    )
    thoughts: list[Thought] = Field(
        description="""Collection of NPCs' thoughts post-discussion, highlighting their reflective sentiments and emotional
            responses to the topics covered."""
    )
    plotline: str = Field(description="The central theme or main storyline that unfolds throughout the dialogue.")


llm_client = AsyncOpenAiClient()
guard = Guard.from_pydantic(output_class=Townhall, instructions="whatever", num_reasks=0)

_raw_llm_response, validated_response, *_rest = await guard(
    llm_api=llm_client.request_prompt_completion,
    prompt="Hello world!",
    model="gpt-4-0125-preview",
    temperature=1,
)

Expected behavior
The code inside my wrapper function should only run once per guard call.

Library version:
0.4.2

Additional context
By the way, I struggled to get OpenAI's async client working and I'm wondering whether I even did it the right way (note how the wrapper function has to delete the instructions field from kwargs, as otherwise OpenAI raises an exception). The documentation on the Guardrails website seems to rely on an old version of the OpenAI Python API; could you update it, please? It would be very much appreciated :)

@edisontim edisontim added the bug Something isn't working label Mar 27, 2024
@CalebCourier CalebCourier self-assigned this Mar 27, 2024
@CalebCourier
Collaborator

@edisontim I'm taking a look. I'll update here when I've found something.

@CalebCourier
Collaborator

@edisontim I found something. There was an old flow for pydantic guards where we first tried to use function calling and, if that raised an exception, called the LLM again without it. We removed this for synchronous flows, but for some reason it still exists for the async flows. I don't see any obvious reason why this still needs to happen, so I'm going to dig into it and see if we can remove it.
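For illustration, the removed fallback flow behaves roughly like this sketch (the names here are illustrative stand-ins, not actual Guardrails internals): the guard awaits the user's llm_api once with function calling enabled, and on exception awaits it again without it, so any side effects in the wrapper (prints, token billing) happen twice.

```python
import asyncio

call_count = 0

async def call_with_fallback(llm_api, *args, tools=None, **kwargs):
    # First attempt uses function calling; on failure, retry without it.
    # Both attempts invoke the user's wrapper, hence the duplicate side effects.
    try:
        return await llm_api(*args, tools=tools, **kwargs)
    except Exception:
        return await llm_api(*args, **kwargs)

async def fake_llm_api(prompt, tools=None, **kwargs):
    # Stand-in for the user's wrapper: rejects function calling,
    # which forces the fallback path to fire a second call.
    global call_count
    call_count += 1
    if tools is not None:
        raise TypeError("function calling not supported")
    return f"response to {prompt!r}"

result = asyncio.run(call_with_fallback(fake_llm_api, "Hello", tools=[{"name": "f"}]))
print(call_count)  # 2 -- the wrapper ran twice for one guard call
```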

@edisontim
Author

Cool, thanks! I think there's another bug in the async API as well: as mentioned in the Additional Context section of my issue, I have to delete the instructions entry from the kwargs coming into my wrapper function in order to pass it to the OpenAI API correctly, as part of the messages array.

@CalebCourier
Collaborator

@edisontim Our async support for OpenAI does indeed seem to be broken and not well documented. I'm escalating this accordingly.

In the meantime, to get past the specific issue you're encountering try this:

async def request_prompt_completion(self, prompt: str, instructions: str, *args, **kwargs) -> str:
    print("Printing")
    response = await self.client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": prompt,
            },
            {"role": "system", "content": instructions},
        ],
        *args,
        **kwargs,
    )
    msg = response.choices[0].message.content
    return msg
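To sanity-check that signature without hitting the real API, here's a small stub (the SimpleNamespace client is a hypothetical stand-in for openai.AsyncOpenAI, not Guardrails or OpenAI code): with instructions as a named parameter, no `del kwargs["instructions"]` is needed and the remaining kwargs flow through untouched.

```python
import asyncio
from types import SimpleNamespace

async def fake_create(messages, **kwargs):
    # Stand-in for chat.completions.create: echo the system message back
    # so we can verify the instructions arrived where expected.
    system = next(m["content"] for m in messages if m["role"] == "system")
    return SimpleNamespace(
        choices=[SimpleNamespace(message=SimpleNamespace(content=f"[{system}] ok"))]
    )

client = SimpleNamespace(
    chat=SimpleNamespace(completions=SimpleNamespace(create=fake_create))
)

async def request_prompt_completion(prompt: str, instructions: str, **kwargs) -> str:
    # instructions is a named parameter, so kwargs needs no cleanup
    response = await client.chat.completions.create(
        messages=[
            {"role": "user", "content": prompt},
            {"role": "system", "content": instructions},
        ],
        **kwargs,
    )
    return response.choices[0].message.content

msg = asyncio.run(request_prompt_completion("Hello", "whatever", model="gpt-4"))
print(msg)  # [whatever] ok
```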

@CalebCourier
Collaborator

I submitted a PR to resolve the initial issue you reported concerning the LLM API being called twice. As for the issue with async support, we're looking at some better and more transparent ways to handle our LLM interactions in general. I may move that to a different issue if you don't mind so we can track it in a more dedicated way.

@CalebCourier
Collaborator

I submitted a new issue for OpenAI v1.x AsyncClient support. The duplicate-call fix has been merged into main and will be released as part of guardrails v0.4.3. I'm going to close this issue for now, but please raise another if you experience any other problems.
