openai: Don't send prompt_cache_key for OpenAI-compatible models (#36231)
Some APIs fail when they receive this parameter.

Closes #36215

Release Notes:

- Fixed OpenAI-compatible providers that don't support prompt caching and/or reasoning
parent d891348442
commit 2a57b160b0

8 changed files with 29 additions and 2 deletions
@@ -355,6 +355,7 @@ impl LanguageModel for VercelLanguageModel {
             request,
             self.model.id(),
             self.model.supports_parallel_tool_calls(),
+            self.model.supports_prompt_cache_key(),
             self.max_output_tokens(),
             None,
         );
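The pattern behind the fix is a per-model capability flag threaded into the request builder, so the field is omitted from the serialized JSON entirely rather than sent with a null value. Below is a rough sketch of that pattern, not Zed's actual implementation; `CompletionRequest`, `Model`, and `build_request` are illustrative names, and the sketch assumes the `serde` and `serde_json` crates:

```rust
// Hypothetical sketch: gate prompt_cache_key on a per-model capability flag
// so strict OpenAI-compatible backends never see a parameter they reject.
use serde::Serialize;

#[derive(Serialize)]
struct CompletionRequest {
    model: String,
    // skip_serializing_if drops the key from the JSON body entirely when
    // None, instead of emitting "prompt_cache_key": null.
    #[serde(skip_serializing_if = "Option::is_none")]
    prompt_cache_key: Option<String>,
}

struct Model {
    id: String,
    // Assumption: true only for providers known to accept prompt_cache_key.
    supports_prompt_cache_key: bool,
}

fn build_request(model: &Model, cache_key: String) -> CompletionRequest {
    CompletionRequest {
        model: model.id.clone(),
        // Attach the cache key only when the model advertises support.
        prompt_cache_key: model.supports_prompt_cache_key.then_some(cache_key),
    }
}

fn main() {
    let compatible = Model {
        id: "some-openai-compatible-model".into(),
        supports_prompt_cache_key: false,
    };
    let request = build_request(&compatible, "conversation-123".into());
    // Prints {"model":"some-openai-compatible-model"} with no
    // prompt_cache_key field present.
    println!("{}", serde_json::to_string(&request).unwrap());
}
```

Omitting the key, rather than serializing it as null, is what matters here: per the commit message, some OpenAI-compatible APIs fail outright when the parameter appears in the request body at all.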