
support for responses.create() with AzureOpenAI and AsyncAzureOpenAI #2280


Open
1 task done
NikGor opened this issue Apr 7, 2025 · 2 comments
Labels
Azure for issues relating to the Azure OpenAI service

Comments

@NikGor

NikGor commented Apr 7, 2025

Confirm this is a feature request for the Python library and not the underlying OpenAI API.

  • This is a feature request for the Python library

Describe the feature or improvement you're requesting

Hi OpenAI team 👋

We’ve noticed that, as of now, the AzureOpenAI and AsyncAzureOpenAI clients do not expose the .responses resource like the default OpenAI client does. This is a bit limiting, especially considering that Azure OpenAI has recently added support for the /openai/responses endpoint as part of the 2025-03-01-preview API version.

Currently, attempting to use:

client = AsyncAzureOpenAI(...)
await client.responses.create(...)

raises an AttributeError because .responses is not available on that class.

It would be great if .responses were added to AzureOpenAI and AsyncAzureOpenAI, similar to how .chat.completions is exposed.

• Azure now supports /openai/responses endpoint
• model / deployment ID is passed in the JSON body, not in the path
• No need to add it to _deployments_endpoints
• Adding this would make the SDK consistent and easier to use for Azure users
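To illustrate the second bullet above, here is a minimal sketch of the request shape (placeholder endpoint and deployment name throughout; this is not the SDK's internal code): with the Responses API, the deployment name travels in the JSON body, so the URL path needs no per-deployment segment.

```python
import json

# Hypothetical illustration: unlike the legacy
# /openai/deployments/{name}/chat/completions routes, the /openai/responses
# path carries no deployment segment...
endpoint = "https://example-resource.openai.azure.com"  # placeholder endpoint
api_version = "2025-03-01-preview"
url = f"{endpoint}/openai/responses?api-version={api_version}"

# ...because the model/deployment name is part of the JSON body instead.
body = json.dumps({
    "model": "my-gpt-4o-deployment",  # placeholder deployment name
    "input": "This is a test.",
})

print(url)
print(body)
```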

Thanks in advance! Happy to contribute a PR if you’re open to it.

Best,
Nikolai

Additional context

No response

@kiranimmadi2

Hi there,

To confirm, the code below:

client = AsyncAzureOpenAI(...)
await client.responses.create(...)
results in an AttributeError because the .responses property isn't available.

Since Azure now supports the /openai/responses endpoint (using the 2025-03-01-preview API version), it makes sense to add this feature. It would align the behavior of these clients with how .chat.completions is implemented, making everything more consistent. Plus, because the model/deployment ID is passed in the JSON body instead of the endpoint path, integrating this shouldn't require major changes.

Overall, this enhancement would make the client more user-friendly for Azure users. Thanks again for proposing this improvement, and it's great to know you're willing to help with a PR if needed!

@RobertCraigie RobertCraigie added the Azure for issues relating to the Azure OpenAI service label Apr 14, 2025
@trrwilson

@NikGor, @kiranimmadi2, which version of the openai library are you encountering this with? Azure support for /responses should certainly be present; basic Azure OpenAI use of the synchronous client is documented on learn.microsoft.com and I just validated the following quick modification to use the async client with 1.74.1 (and an earlier 1.69.0 environment for a positive comparison):

import asyncio
import os

from openai import AsyncAzureOpenAI

async def main():
    client = AsyncAzureOpenAI(
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        api_version="2025-03-01-preview",
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    )

    response = await client.responses.create(
        model="gpt-4o-mini",  # replace with your model deployment name
        input="This is a test.",
        # truncation="auto" is required when using the computer-use-preview model.
    )

    print(response.output[0])

asyncio.run(main())
