Commit graph

3 commits

Author | SHA1 | Message | Date
Michael Sloan
fbf7caf93e
Default to fast model for thread summaries and titles + don't include system prompt / context / thinking segments (#29102)
* Adds a fast / cheaper model to providers and defaults thread
summarization to this model. The initial motivation was that
https://github.com/zed-industries/zed/pull/29099 would cause these
requests to fail when used with a thinking model, and it doesn't seem
correct to use a thinking model for summarization anyway.

* Skips system prompt, context, and thinking segments (see the sketch
below).

* If tool use is happening, allows 2 tool uses + one more agent response
before summarizing.

The downside is that there was some potential for prefix cache reuse
before, especially for title summarization (thread summarization omitted
tool results and so would not share a prefix for those). This seems
fine, as these requests should typically be fairly small. Even for full
thread summarization, skipping all tool use / context should greatly
reduce token use.
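
A hypothetical Rust sketch of the two ideas above, using made-up types
rather than Zed's actual agent code: prefer a cheaper "fast" model for
summaries and titles when the provider defines one, and drop system
prompt, context, and thinking segments from the summarization input.

```rust
// Hypothetical illustration only; these are not Zed's real types.

/// Kinds of thread content, simplified for this sketch.
enum Segment {
    SystemPrompt(String),
    Context(String),
    Thinking(String),
    UserText(String),
    AgentText(String),
    ToolUse(String),
}

/// A provider with an optional cheaper model, mirroring the "fast model" idea.
struct Provider {
    default_model: String,
    fast_model: Option<String>,
}

impl Provider {
    /// Prefer the fast / cheaper model for summaries and titles, falling
    /// back to the default model when the provider does not define one.
    fn summarization_model(&self) -> &str {
        self.fast_model.as_deref().unwrap_or(&self.default_model)
    }
}

/// Build the summarization input, skipping the system prompt, attached
/// context, and thinking segments.
fn summarization_input(segments: &[Segment]) -> String {
    segments
        .iter()
        .filter_map(|segment| match segment {
            Segment::SystemPrompt(_) | Segment::Context(_) | Segment::Thinking(_) => None,
            Segment::UserText(text) | Segment::AgentText(text) | Segment::ToolUse(text) => {
                Some(text.as_str())
            }
        })
        .collect::<Vec<_>>()
        .join("\n")
}
```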

Release Notes:

- N/A
2025-04-19 23:26:29 +00:00
Piotr Osiewicz
dc64ec9cc8
chore: Bump Rust edition to 2024 (#27800)
Follow-up to https://github.com/zed-industries/zed/pull/27791

Release Notes:

- N/A
2025-03-31 20:55:27 +02:00
邻二氮杂菲
29bfb56739
Add DeepSeek support (#23551)
- Added support for DeepSeek as a new language model provider in Zed
Assistant.
- Implemented streaming API support for real-time responses from
DeepSeek models (a minimal streaming sketch follows this list).
- Added a configuration UI for DeepSeek API key management and settings.
- Updated documentation with detailed setup instructions for DeepSeek
integration.
- Added DeepSeek-specific icons and model definitions for seamless
integration into the Zed UI.
- Integrated DeepSeek into the language model registry, making it
available alongside other providers like OpenAI and Anthropic.
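
A minimal sketch of the kind of streaming request such a provider makes,
assuming DeepSeek's OpenAI-compatible chat completions endpoint and a
hypothetical DEEPSEEK_API_KEY environment variable; this is illustrative
only and is not Zed's actual provider code (it calls reqwest and
futures-util directly instead of going through the language model
registry).

```rust
// Illustrative only; assumes DeepSeek's OpenAI-compatible API and an
// API key in the DEEPSEEK_API_KEY environment variable.
use futures_util::StreamExt;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("DEEPSEEK_API_KEY")?;
    let body = json!({
        "model": "deepseek-chat",
        "stream": true,
        "messages": [{ "role": "user", "content": "Say hello." }],
    });

    // Ask for a server-sent-event stream of completion chunks.
    let response = reqwest::Client::new()
        .post("https://api.deepseek.com/chat/completions")
        .bearer_auth(api_key)
        .json(&body)
        .send()
        .await?
        .error_for_status()?;

    // Print raw SSE chunks as they arrive; a real provider would parse the
    // `data:` lines into message deltas and surface them incrementally.
    let mut stream = response.bytes_stream();
    while let Some(chunk) = stream.next().await {
        print!("{}", String::from_utf8_lossy(&chunk?));
    }
    Ok(())
}
```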

Release Notes:

- Added support for DeepSeek to the Assistant.

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-01-27 13:40:59 -05:00