Default to fast model for thread summaries and titles + don't include system prompt / context / thinking segments (#29102)

* Adds a fast / cheaper model to providers and defaults thread
summarization and title generation to this model. The initial motivation
was that https://github.com/zed-industries/zed/pull/29099 would cause
these requests to fail when used with a thinking model, and it doesn't
seem correct to use a thinking model for summarization in any case.

* Skips system prompt, context, and thinking segments.

* If tool use is in progress, allows up to 2 tool uses plus one more
agent response before summarizing.
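
The segment-skipping bullet can be sketched roughly as follows. The
types and names here are illustrative stand-ins, not the editor's actual
message representation:

```rust
// Sketch: build summarization input by keeping plain text segments and
// dropping thinking segments. `MessageSegment` and `summarization_text`
// are hypothetical names for illustration only.
#[derive(Debug, Clone, PartialEq)]
enum MessageSegment {
    Text(String),
    Thinking(String),
}

fn summarization_text(segments: &[MessageSegment]) -> String {
    segments
        .iter()
        .filter_map(|segment| match segment {
            // Keep ordinary message text verbatim.
            MessageSegment::Text(text) => Some(text.as_str()),
            // Skip chain-of-thought: it inflates token use and isn't
            // needed to produce a summary or title.
            MessageSegment::Thinking(_) => None,
        })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let segments = vec![
        MessageSegment::Thinking("internal reasoning".to_string()),
        MessageSegment::Text("Final answer.".to_string()),
    ];
    assert_eq!(summarization_text(&segments), "Final answer.");
}
```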
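The tool-use bullet amounts to a small deferral rule; a minimal sketch,
with the function name and counters invented for illustration:

```rust
// Sketch: defer (re)generating the thread summary while tools are in
// flight, allowing up to 2 tool uses plus one more agent response
// before summarizing. Names and the exact counting are illustrative.
const MAX_TOOL_USES_BEFORE_SUMMARY: u32 = 2;

fn should_defer_summary(tool_uses: u32, agent_responses_after_tools: u32) -> bool {
    // Within the tool-use budget and no follow-up agent response yet:
    // keep waiting rather than summarizing mid-round-trip.
    tool_uses <= MAX_TOOL_USES_BEFORE_SUMMARY && agent_responses_after_tools < 1
}

fn main() {
    assert!(should_defer_summary(1, 0));   // mid tool round-trip: wait
    assert!(!should_defer_summary(2, 1));  // agent responded: summarize
    assert!(!should_defer_summary(3, 0));  // budget exceeded: summarize
}
```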

The downside is that there was previously some potential for prefix
cache reuse, especially for title summarization (thread summarization
omitted tool results and so would not share a prefix for those). This
seems fine, as these requests should typically be fairly small. Even for
full thread summarization, skipping all tool use / context should
greatly reduce token use.

Release Notes:

- N/A
Michael Sloan 2025-04-19 17:26:29 -06:00 committed by GitHub
parent d48152d958
commit fbf7caf93e
25 changed files with 270 additions and 205 deletions


@@ -102,6 +102,10 @@ pub enum Model {
 }

 impl Model {
+    pub fn default_fast() -> Self {
+        Self::FourPointOneMini
+    }
+
     pub fn from_id(id: &str) -> Result<Self> {
         match id {
             "gpt-3.5-turbo" => Ok(Self::ThreePointFiveTurbo),