openai: Don't send prompt_cache_key for OpenAI-compatible models (#36231)

Some OpenAI-compatible APIs fail when they receive this parameter.

Closes #36215

Release Notes:

- Fixed OpenAI-compatible providers that don't support prompt caching
and/or reasoning
Oleksiy Syvokon 2025-08-15 13:54:24 +03:00
parent f7fefa3406
commit 09e9ef4705
8 changed files with 29 additions and 2 deletions


@@ -359,6 +359,7 @@ impl LanguageModel for XAiLanguageModel {
             request,
             self.model.id(),
             self.model.supports_parallel_tool_calls(),
+            self.model.supports_prompt_cache_key(),
             self.max_output_tokens(),
             None,
         );
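The shape of the fix can be sketched as follows: the request builder takes a capability flag and only sets `prompt_cache_key` when the provider supports it, so the field is omitted entirely for providers that would reject it. The struct and function names below are illustrative, not Zed's actual API; in the real request type the omission would typically be handled by serializing an `Option` field only when it is `Some`.

```rust
// Minimal sketch (hypothetical names): gate an optional request
// parameter on a per-model capability flag.

#[derive(Debug)]
struct CompletionRequest {
    model: String,
    // None => the parameter is omitted from the serialized request.
    prompt_cache_key: Option<String>,
}

fn build_request(
    model: &str,
    supports_prompt_cache_key: bool,
    cache_key: &str,
) -> CompletionRequest {
    CompletionRequest {
        model: model.to_string(),
        // Only attach the key when the provider supports it; some
        // OpenAI-compatible APIs error out on unknown parameters.
        prompt_cache_key: supports_prompt_cache_key.then(|| cache_key.to_string()),
    }
}

fn main() {
    let openai = build_request("gpt-4o", true, "session-1");
    let compat = build_request("grok-2", false, "session-1");
    assert!(openai.prompt_cache_key.is_some());
    assert!(compat.prompt_cache_key.is_none());
    println!("ok");
}
```

With this guard, providers that advertise support still benefit from prompt caching, while the parameter simply never reaches providers that would fail on it.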