Mistral just released a SOTA coding model:
https://mistral.ai/news/devstral
This PR adds support for it in both the Mistral and Ollama providers.
Release Notes:
- Add DevstralSmallLatest model to Mistral and Ollama
https://github.com/zed-industries/zed/issues/30972 brought up another
case where our error context is not enough to track down the actual
source of the issue: we get a generic top-level error without an inner
error.
The reason for this was a top-level `.ok_or_else(|| anyhow!("failed to read HEAD SHA"))?`.
The PR finally reworks the way we use `anyhow` to reduce such issues (or
at least make it simpler to bubble them up later in a fix).
On top of that, it adopts a few more `anyhow` methods for better
readability, as illustrated in the sketch after this list:
* `.ok_or_else(|| anyhow!("..."))`, `map_err`, and similar
error-conversion/option-reporting cases are replaced with `context` and
`with_context` calls
* in addition, various `anyhow!("failed to do ...")` messages are
replaced with `.context("Doing ...")` messages to remove the
parasitic `failed to` text
* `anyhow::ensure!` is used instead of `if ... { return Err(...); }`
calls
* `anyhow::bail!` is used instead of `return Err(anyhow!(...));`
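A minimal before/after sketch of these patterns, built around a
hypothetical `head_sha` helper rather than Zed's actual git code:

```rust
use anyhow::{bail, ensure, Context as _, Result};
use std::path::Path;

// Hypothetical helper for illustration only.
fn head_sha(git_dir: &Path) -> Result<String> {
    // Before: .map_err(|e| anyhow!("failed to read HEAD: {e}"))?
    // After: `with_context` keeps the io::Error as the inner error.
    let contents = std::fs::read_to_string(git_dir.join("HEAD"))
        .with_context(|| format!("reading HEAD in {git_dir:?}"))?;

    // Before: .ok_or_else(|| anyhow!("failed to read HEAD SHA"))?
    // After: `context` also works on Option, without the `failed to` prefix.
    let head = contents.split_whitespace().last().context("parsing HEAD")?;

    // Before: if contents.starts_with("ref: ") { return Err(anyhow!(...)); }
    if contents.starts_with("ref: ") {
        bail!("HEAD is a symbolic ref ({head}), not a SHA");
    }

    // Before: if head.len() != 40 { return Err(anyhow!("bad SHA")); }
    ensure!(head.len() == 40, "unexpected SHA length {}", head.len());

    Ok(head.to_string())
}
```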
Release Notes:
- N/A
Closes https://github.com/zed-industries/zed/issues/29855
Implement tool use handling in the Mistral provider, including mapping
tool call events and updating request construction. Add support for
`tool_choice` and `parallel_tool_calls` in Mistral API requests.
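A rough sketch of the resulting request shape; these structs are
illustrative rather than the exact types in Zed's `mistral` crate,
though the field names follow Mistral's chat completions API:

```rust
use serde::Serialize;

#[derive(Serialize)]
struct Message {
    role: String,
    content: String,
}

#[derive(Serialize)]
struct Tool {
    r#type: String,              // "function"
    function: serde_json::Value, // JSON schema describing the function
}

#[derive(Serialize)]
#[serde(rename_all = "lowercase")]
enum ToolChoice {
    Auto, // the model decides whether to call a tool
    Any,  // the model must call a tool
    None, // tools are disabled
}

#[derive(Serialize)]
struct ChatCompletionRequest {
    model: String,
    messages: Vec<Message>,
    #[serde(skip_serializing_if = "Option::is_none")]
    tools: Option<Vec<Tool>>,
    #[serde(skip_serializing_if = "Option::is_none")]
    tool_choice: Option<ToolChoice>,
    #[serde(skip_serializing_if = "Option::is_none")]
    parallel_tool_calls: Option<bool>,
}
```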
This works fine with all the existing models. Nothing else was touched,
but going forward, models should be fetched via the Mistral models API,
and tool call support, parallel tool calls, etc. should be deduced from
the model data in the API response.
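A hedged sketch of what that could look like, using `reqwest` for
brevity rather than Zed's own HTTP client; the
`capabilities.function_calling` field is an assumption about the
response shape, not something verified against the implementation:

```rust
use anyhow::{Context as _, Result};
use serde::Deserialize;

#[derive(Deserialize)]
struct ModelList {
    data: Vec<ModelEntry>,
}

#[derive(Deserialize)]
struct ModelEntry {
    id: String,
    #[serde(default)]
    capabilities: Capabilities,
}

#[derive(Deserialize, Default)]
struct Capabilities {
    #[serde(default)]
    function_calling: bool, // assumed field name for tool support
}

// Fetch the model list and keep only models that report tool support.
async fn tool_capable_models(api_key: &str) -> Result<Vec<String>> {
    let response = reqwest::Client::new()
        .get("https://api.mistral.ai/v1/models")
        .bearer_auth(api_key)
        .send()
        .await
        .context("requesting model list")?
        .error_for_status()
        .context("model list returned an error status")?;
    let list: ModelList = response.json().await.context("parsing model list")?;
    Ok(list
        .data
        .into_iter()
        .filter(|model| model.capabilities.function_calling)
        .map(|model| model.id)
        .collect())
}
```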
<img width="547" alt="Screenshot 2025-05-06 at 4 52 37 PM"
src="https://github.com/user-attachments/assets/4c08b544-1174-40cc-a40d-522989953448"
/>
Tasks:
- [x] Add tool call support
- [x] Auto-fetch models using the Mistral API
- [x] Add tests for the `mistral` crate
- [x] Fix Mistral configuration for LLM providers
Release Notes:
- agent: Add tool call support for existing Mistral models
---------
Co-authored-by: Peter Tripp <peter@zed.dev>
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
* Adds a fast/cheap model to providers and defaults thread
summarization to this model. The initial motivation was that
https://github.com/zed-industries/zed/pull/29099 would cause these
requests to fail when used with a thinking model, and it doesn't seem
correct to use a thinking model for summarization anyway.
* Skips the system prompt, context, and thinking segments.
* If tool use is happening, allows 2 tool uses plus one more agent
response before summarizing (see the sketch after this list).
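One possible reading of that deferral rule, as a hypothetical sketch;
the type and counters are illustrative, not Zed's actual implementation:

```rust
// Tracks how far a thread has progressed past its first tool use.
#[derive(Default)]
struct SummarizeGate {
    tool_uses: u32,
    agent_responses_after_tools: u32,
}

impl SummarizeGate {
    fn on_tool_use(&mut self) {
        self.tool_uses += 1;
    }

    fn on_agent_response(&mut self) {
        if self.tool_uses > 0 {
            self.agent_responses_after_tools += 1;
        }
    }

    /// Summarize immediately when no tools ran; otherwise wait until
    /// 2 tool uses plus one further agent response have happened.
    fn should_summarize(&self) -> bool {
        self.tool_uses == 0
            || (self.tool_uses >= 2 && self.agent_responses_after_tools >= 1)
    }
}
```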
The downside is that there was some potential for prefix-cache reuse
before, especially for title summarization (thread summarization omitted
tool results and so would not share a prefix for those). This seems fine,
as these requests should typically be fairly small. Even for full thread
summarization, skipping all tool use and context should greatly reduce
token use.
Release Notes:
- N/A