language_models: Add thinking support for ollama (#31665)
This PR updates how we handle Ollama responses, leveraging the new [v0.9.0](https://github.com/ollama/ollama/releases/tag/v0.9.0) release. Previously, thinking text was embedded within the model's main content, so it appeared directly in the agent's response. Now, thinking content is provided as a separate parameter, allowing us to display it correctly within the agent panel, as we do for other providers.

I have tested this with qwen3:8b and it works nicely. ~~We can release this once the Ollama release is stable.~~ It's released now as stable.

<img width="433" alt="image" src="https://github.com/user-attachments/assets/2983ef06-6679-4033-82c2-231ea9cd6434" />

Release Notes:

- Add thinking support for ollama

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
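The change described above boils down to reading the reasoning text from its own field on the response message rather than scraping it out of `content`. A minimal sketch of that split, using an assumed v0.9.0-style `/api/chat` response shape (the sample payload and the `split_thinking` helper are illustrative, not code from this PR):

```python
import json

# Illustrative Ollama v0.9.0 /api/chat response: the model's reasoning
# arrives in a separate "thinking" field instead of being embedded in
# "content". The payload below is a fabricated example.
raw = json.dumps({
    "model": "qwen3:8b",
    "message": {
        "role": "assistant",
        "thinking": "The user greeted me; a short reply suffices.",
        "content": "Hello! How can I help you today?",
    },
    "done": True,
})

def split_thinking(response_json: str) -> tuple[str, str]:
    """Return (thinking, content) so a UI can render them separately."""
    message = json.loads(response_json)["message"]
    # Older servers omit "thinking", so fall back to an empty string
    # rather than failing.
    return message.get("thinking", ""), message["content"]

thinking, content = split_thinking(raw)
```

With the two strings separated, the agent panel can render `thinking` in its collapsible reasoning section and `content` as the actual reply, matching how other providers are handled.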
This commit is contained in:
parent 1e1d4430c2 · commit 65e3e84cbc
3 changed files with 39 additions and 7 deletions
```
@@ -372,6 +372,7 @@ impl AgentSettingsContent
                None,
                None,
                Some(language_model.supports_tools()),
                None,
            )),
            api_url,
        });
```