Commit graph

4 commits

Michael Sloan
0fb0059b5f
Switch from open-codestral-mamba to codestral-latest for default mistral model (#29104)
Couldn't find Mistral cloud pricing for open-codestral-mamba on the
Mistral site, but codestral-latest is newer and appears to be cheaper
based on
https://sdk.vercel.ai/playground/mistral:codestral-mamba-latest,mistral:codestral-2501

Release Notes:

- N/A
2025-04-19 23:29:36 +00:00
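
The commit above is essentially a swap of the provider's default model id. Below is a minimal, hypothetical Rust sketch of the shape of such a change; it is not Zed's actual code, and `MistralModel` and `default_model` are made-up names.

```rust
// Minimal sketch, not Zed's actual code: `MistralModel` and `default_model`
// are hypothetical names. The change amounts to swapping which model id the
// provider returns when the user has not configured one.
#[derive(Debug, Clone, PartialEq)]
pub struct MistralModel {
    pub id: &'static str,
    pub display_name: &'static str,
}

/// Default model used when no model is configured explicitly.
pub fn default_model() -> MistralModel {
    MistralModel {
        // Previously "open-codestral-mamba"; "codestral-latest" is newer and
        // appears cheaper per the pricing comparison linked in the commit.
        id: "codestral-latest",
        display_name: "Codestral",
    }
}

fn main() {
    println!("default mistral model: {}", default_model().id);
}
```
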
Michael Sloan
fbf7caf93e
Default to fast model for thread summaries and titles + don't include system prompt / context / thinking segments (#29102)
* Adds a fast / cheaper model to providers and defaults thread
summarization to this model. The initial motivation was that
https://github.com/zed-industries/zed/pull/29099 would cause these
requests to fail when used with a thinking model, and it doesn't seem
correct to use a thinking model for summarization anyway.

* Skips the system prompt, context, and thinking segments (a sketch of this filtering follows the entry).

* If tool use is happening, allows 2 tool uses + one more agent response
before summarizing.

The downside is that there was previously some potential for prefix cache
reuse, especially for title summarization (thread summarization omitted
tool results and so would not share a prefix for those). This seems fine,
as these requests should typically be fairly small. Even for full thread
summarization, skipping all tool use / context should greatly reduce
token use.

Release Notes:

- N/A
2025-04-19 23:26:29 +00:00
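
The filtering described in the commit above can be pictured as dropping everything except plain user/assistant text before the summarization request is built. The sketch below is not Zed's implementation; `Segment`, `Role`, and `build_summary_input` are hypothetical names chosen to illustrate the idea.

```rust
// Minimal sketch, not Zed's actual implementation: system prompt, attached
// context, thinking segments, and tool uses are dropped, and only the plain
// conversation text is forwarded to the cheaper summarization model.
#[derive(Debug, Clone)]
enum Segment {
    SystemPrompt(String),
    Context(String),
    Thinking(String),
    Text { role: Role, content: String },
    ToolUse { name: String, output: String },
}

#[derive(Debug, Clone, Copy)]
enum Role {
    User,
    Assistant,
}

/// Collect only the plain conversation text for the summary/title request.
fn build_summary_input(segments: &[Segment]) -> String {
    segments
        .iter()
        .filter_map(|segment| match segment {
            // Skipped: these inflate token use and are not needed for a summary.
            Segment::SystemPrompt(_) | Segment::Context(_) | Segment::Thinking(_) => None,
            // Tool uses / results are also omitted from the summarization input.
            Segment::ToolUse { .. } => None,
            Segment::Text { role, content } => Some(match role {
                Role::User => format!("User: {content}"),
                Role::Assistant => format!("Assistant: {content}"),
            }),
        })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let segments = vec![
        Segment::SystemPrompt("You are a helpful coding assistant.".into()),
        Segment::Context("fn main() { /* attached file */ }".into()),
        Segment::Text { role: Role::User, content: "Rename this module.".into() },
        Segment::Thinking("Considering the rename...".into()),
        Segment::ToolUse { name: "grep".into(), output: "3 matches".into() },
        Segment::Text { role: Role::Assistant, content: "Renamed it to `editor_core`.".into() },
    ];
    println!("{}", build_summary_input(&segments));
}
```
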
Piotr Osiewicz
dc64ec9cc8
chore: Bump Rust edition to 2024 (#27800)
Follow-up to https://github.com/zed-industries/zed/pull/27791

Release Notes:

- N/A
2025-03-31 20:55:27 +02:00
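
An edition bump is a Cargo.toml change plus a per-crate migration pass. The snippet below is a sketch of what the workspace-level bump looks like; the actual layout and version pins in the Zed repository may differ.

```toml
# Sketch of an edition bump in a workspace Cargo.toml (actual layout in the
# Zed repository may differ). Member crates inherit it via
# `edition.workspace = true`.
[workspace.package]
edition = "2024"       # previously "2021"
rust-version = "1.85"  # Rust 2024 stabilized in 1.85; exact pin here is assumed
```

Migration is typically driven by running `cargo fix --edition` on each crate before flipping the key.
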
Shidfar Hodizoda
7ee492746d
assistant: Add Mistral support (#24879)
Closes #12519.

Release Notes:

- Added support for Mistral to the Assistant.

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-02-14 13:07:41 -05:00
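
For context on what the new provider talks to: the sketch below shows a bare Mistral chat-completions call, not Zed's provider code. It assumes the `reqwest` (with the `blocking` and `json` features) and `serde_json` crates and a `MISTRAL_API_KEY` environment variable.

```rust
// Minimal sketch of a Mistral chat-completions request, not Zed's provider
// implementation. Assumes reqwest (blocking + json features), serde_json,
// and a MISTRAL_API_KEY environment variable.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("MISTRAL_API_KEY")?;
    let body = json!({
        "model": "codestral-latest",
        "messages": [
            { "role": "user", "content": "Write a haiku about code review." }
        ],
    });

    let response: serde_json::Value = reqwest::blocking::Client::new()
        .post("https://api.mistral.ai/v1/chat/completions")
        .bearer_auth(api_key)
        .json(&body)
        .send()?
        .error_for_status()?
        .json()?;

    // The assistant's reply lives under choices[0].message.content.
    println!("{}", response["choices"][0]["message"]["content"]);
    Ok(())
}
```
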