
Add support for cache control in ChatCompletionMessage #963

Closed
trashhalo wants to merge 1 commit

Conversation

trashhalo

Describe the change
Fixes #897. Many OpenAI-compatible APIs use this key to let users control token caching for models that support it.

Provide OpenAI documentation link
This is not an OpenAI feature, but it is common in OpenAI-compatible endpoints: https://docs.litellm.ai/docs/completion/prompt_caching#anthropic-example

Describe your solution
Add the extra keys to ChatCompletionMessage.
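
For illustration, here is a minimal sketch of what the change might look like, assuming a go-openai-style message struct and an Anthropic/LiteLLM-style `cache_control` object. The field and type names below (other than `ChatCompletionMessage`) are assumptions for the sketch, not the actual diff in this PR:

```go
// Sketch only: hypothetical cache_control support on the chat message struct.
package openai

// ChatMessageCacheControl mirrors the Anthropic-style cache_control object
// that LiteLLM forwards to providers, e.g. {"type": "ephemeral"}.
type ChatMessageCacheControl struct {
	Type string `json:"type"` // e.g. "ephemeral"
}

// ChatCompletionMessage with the extra, optional key.
type ChatCompletionMessage struct {
	Role         string                   `json:"role"`
	Content      string                   `json:"content"`
	CacheControl *ChatMessageCacheControl `json:"cache_control,omitempty"`
}
```

With `omitempty`, requests that never set the field serialize exactly as before, so providers that do not support caching see no change.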

trashhalo closed this Apr 12, 2025
trashhalo (Author)

Closing because I realized I would need many more LiteLLM-specific keys, such as the citations key returned by Perplexity, which seems out of scope for this project.
