This PR adds a new `intent` field to completion requests to assist in
categorizing them correctly.
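Roughly, the request gains something like the following (a minimal sketch; the actual field placement and variant names in the diff may differ):

```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "snake_case")]
enum CompletionIntent {
    UserPrompt,
    ThreadSummarization,
    InlineAssist,
}

#[derive(Serialize)]
struct CompletionRequest {
    model: String,
    // ...existing request fields...
    /// Tells the server what kind of request this is, so it can be
    /// categorized correctly.
    #[serde(skip_serializing_if = "Option::is_none")]
    intent: Option<CompletionIntent>,
}
```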
Release Notes:
- N/A
---------
Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
Follow up to https://github.com/zed-industries/zed/pull/31470.
I started looking at the config and tried setting
`preferred_completion_mode` to `burn`, only to find that it still only
accepted `max`, so I made changes to align it with the rebrand, since burn
mode is in the preview build now.
This doesn't touch `zed_llm_client`; it only changes Zed's code and docs
to match the new burn mode UI. There are still more things to be
renamed, though.
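For illustration, a backward-compatible shape for the renamed setting could look like this (a sketch assuming serde-based settings; the actual enum and aliasing in the diff may differ):

```rust
use serde::Deserialize;

#[derive(Debug, Clone, Copy, Deserialize)]
#[serde(rename_all = "snake_case")]
enum CompletionMode {
    Normal,
    /// Accept the old `"max"` value so existing configs keep working
    /// after the rebrand to burn mode.
    #[serde(alias = "max")]
    Burn,
}
```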
Release Notes:
- N/A
---------
Signed-off-by: Umesh Yadav <git@umesh.dev>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
This expands our deserialization of JSON from models to be more tolerant
of different variations that the model may send, including
capitalization, wrapping things in objects vs. being plain strings, etc.
Also, when deserialization fails, the error now includes the entire JSON
payload so we can see what failed to deserialize. (Previously these errors
were very unhelpful for diagnosing the problem.)
Finally, also removes the `WrappedText` variant since the custom
deserializer just turns that style of JSON into a normal `Text` variant.
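To illustrate both changes, here is a minimal sketch (the type and function names are assumptions, not the ones in the diff):

```rust
use anyhow::Context as _;
use serde::{de, Deserialize, Deserializer};

#[derive(Debug)]
enum ContentKind {
    Text,
    Image,
}

// Case-tolerant tag parsing: accept "text", "Text", "TEXT", etc.
impl<'de> Deserialize<'de> for ContentKind {
    fn deserialize<D: Deserializer<'de>>(d: D) -> Result<Self, D::Error> {
        let raw = String::deserialize(d)?;
        match raw.to_ascii_lowercase().as_str() {
            "text" => Ok(Self::Text),
            "image" => Ok(Self::Image),
            other => Err(de::Error::custom(format!("unknown content kind: {other}"))),
        }
    }
}

// On failure, attach the entire JSON payload to the error so we can see
// exactly what the model sent.
fn parse_model_json<T: de::DeserializeOwned>(raw: &str) -> anyhow::Result<T> {
    serde_json::from_str(raw)
        .with_context(|| format!("failed to deserialize model JSON: {raw}"))
}
```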
Release Notes:
- N/A
I was surprised to see this being done for thread summaries, but not
commit messages.
I believe it's a better default, as most people would want faster commit
message generation without spending premium requests.
Considering that Copilot's default fast model is the base one, this is
ideal for me (and likely many others), as opposed to tweaking the
configuration every time the base model changes.
Release Notes:
- git: Default to the fast model for generating commit messages when no
model is configured
This PR updates the Zed LLM provider to fetch the available models from
the server instead of hard-coding them in the binary.
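Conceptually, the provider now does something like the following at startup or refresh (a sketch using `reqwest` for illustration; Zed uses its own HTTP client, and the endpoint and response shape here are assumptions):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ListModelsResponse {
    models: Vec<ModelInfo>,
}

#[derive(Deserialize)]
struct ModelInfo {
    id: String,
    display_name: String,
    max_token_count: u64,
}

async fn fetch_models(
    client: &reqwest::Client,
    base_url: &str,
) -> anyhow::Result<Vec<ModelInfo>> {
    // Ask the server which models are available instead of baking the
    // list into the binary.
    let response: ListModelsResponse = client
        .get(format!("{base_url}/models"))
        .send()
        .await?
        .error_for_status()?
        .json()
        .await?;
    Ok(response.models)
}
```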
Release Notes:
- Updated the Zed provider to fetch the list of available language
models from the server.
This PR adds support for [Claude
4](https://www.anthropic.com/news/claude-4).
Release Notes:
- Added support for Claude Opus 4 and Claude Sonnet 4.
---------
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
https://github.com/zed-industries/zed/issues/30972 brought up another
case where our context is not enough to track down the actual source of
an issue: we get a generic top-level error without an inner error.
The reason for this was `.ok_or_else(|| anyhow!("failed to read HEAD SHA"))?;`
at the top level.
This PR reworks the way we use anyhow to reduce such issues (or at least
make it simpler to bubble them up later in a fix).
On top of that, it uses a few more anyhow methods for better readability
(a before/after sketch follows the list):
* `.ok_or_else(|| anyhow!("..."))`, `map_err` and other similar error
conversion/option reporting cases are replaced with `context` and
`with_context` calls
* in addition to that, various `anyhow!("failed to do ...")` messages are
replaced with `.context("Doing ...")` to remove the parasitic
`failed to` text
* `anyhow::ensure!` is used instead of `if ... { return Err(...); }`
calls
* `anyhow::bail!` is used instead of `return Err(anyhow!(...));`
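A before/after sketch of the patterns above (the `Repo` type is a stand-in so the sketch is self-contained, not real code from the diff):

```rust
use anyhow::{bail, ensure, Context as _};

// Stand-in type for the sketch.
struct Repo;
impl Repo {
    fn head_sha(&self) -> Option<String> {
        Some("0".repeat(40))
    }
    fn is_shallow(&self) -> bool {
        false
    }
}

fn head_sha(repo: &Repo) -> anyhow::Result<String> {
    // Before: .ok_or_else(|| anyhow!("failed to read HEAD SHA"))?
    let sha = repo.head_sha().context("reading HEAD SHA")?;

    // Before: if sha.len() != 40 { return Err(anyhow!("...")); }
    ensure!(sha.len() == 40, "HEAD SHA has unexpected length: {sha}");

    // Before: return Err(anyhow!("..."));
    if repo.is_shallow() {
        bail!("shallow repositories have no full history to inspect");
    }

    Ok(sha)
}
```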
Release Notes:
- N/A
Some providers sometimes send `{ "type": "text", "text": ... }` instead
of just the text as a string. Now we accept those instead of erroring.
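Roughly, the accepting type looks like this (an illustrative sketch; the real types live in the provider code and may be named differently):

```rust
use serde::Deserialize;

/// Accepts both `"some text"` and `{ "type": "text", "text": "some text" }`.
#[derive(Deserialize)]
#[serde(untagged)]
enum TextContent {
    Plain(String),
    Wrapped { text: String },
}

impl TextContent {
    fn into_text(self) -> String {
        match self {
            Self::Plain(text) | Self::Wrapped { text } => text,
        }
    }
}
```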
Release Notes:
- N/A
This is very basic support for them. There are a number of other TODOs
before this is really a first-class supported feature, so I'm not adding
any release notes for it; for now, this PR just makes it so that if
read_file tries to read a PNG (which has come up in practice), it at
least correctly sends it to Anthropic instead of failing.
This also lays the groundwork for future PRs for more first-class
support for images in tool calls across more image file formats and LLM
providers.
Release Notes:
- N/A
---------
Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Agus Zubiaga <agus@zed.dev>
Problem Statement:
Support for image analysis (vision) is currently restricted to Anthropic
and Gemini models. This limits users who wish to leverage vision
capabilities available in other models, such as Copilot, for tasks like
attaching image context within the agent message editor.
Proposed Change:
This PR extends vision support to Copilot models that already have
vision capabilities. This integration allows users within Zed to attach
and analyze images using supported Copilot models via the agent message
editor.
Scope Limitation:
This PR does not implement controls within the message editor to ensure
that image context (e.g., through copy-paste or attachment) is enabled or
prompted only when a vision-supported model is active. Long term, the
message editor should have access to each model's vision capability and
prevent users from attaching images when the active model doesn't support
them, either by greying out the context with a note that it's not
supported, or by blocking attachment through both copy-paste and
file/directory search.
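Sketching that long-term idea (a hypothetical API; `supports_images` is an assumption here, not necessarily a real method name):

```rust
trait LanguageModel {
    /// Whether the model accepts image parts in requests.
    fn supports_images(&self) -> bool;
}

// The message editor would consult the active model before accepting an
// image, greying out or rejecting the attachment otherwise.
fn allow_image_attachment(active_model: Option<&dyn LanguageModel>) -> bool {
    active_model.is_some_and(|model| model.supports_images())
}
```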
Closes #30076
Release Notes:
- Add vision support for Copilot Chat models
---------
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
This PR adds a notice when reaching consecutive tool use limits when
using normal mode.
Here's an example with the limit artificially lowered to 2 consecutive
tool uses:
https://github.com/user-attachments/assets/32da8d38-67de-4d6b-8f24-754d2518e5d4
Release Notes:
- agent: Added a notice when reaching consecutive tool use limits when
using a model in normal mode.
This sets us up to display queue position information to the user, once
our language model backend is updated to support request queuing.
The JSON returned by the LLM backend will need to look like this:
```json
{"queue": {"status": "queued", "position": 1}}
{"queue": {"status": "started"}}
{"event": {"THE_UPSTREAM_MODEL_PROVIDER_EVENT": "..."}}
```
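On the client side, that stream could be deserialized with something like this (a sketch; the variant names mirror the JSON above, while the surrounding types are assumptions):

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
#[serde(rename_all = "snake_case")]
enum StreamEvent {
    /// Queue status updates emitted before the upstream request starts.
    Queue(QueueEvent),
    /// The upstream model provider event, passed through untouched.
    Event(serde_json::Value),
}

#[derive(Debug, Deserialize)]
#[serde(tag = "status", rename_all = "snake_case")]
enum QueueEvent {
    Queued { position: usize },
    Started,
}
```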
Release Notes:
- N/A
---------
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
Release Notes:
- agent: Add support for @mentioning images
- agent: Add support for including images via file context picker
---------
Co-authored-by: Oleksiy Syvokon <oleksiy.syvokon@gmail.com>
This PR makes it possible to use different LLM models in the agent
panels of two different projects, simultaneously. It also properly
restores a thread's original model when restoring it from the history,
rather than having it use the default model. As before, newly-created
threads will use the current default model.
Release Notes:
- Enabled different project windows to use different models in the agent
panel
- Enhanced the agent panel so that when revisiting old threads, their
original model will be used.
---------
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
This PR adds a "max mode" toggle to the Agent panel, for models that
support it.
Only visible to folks with the `new-billing` feature flag.
The icon is just a placeholder.
Release Notes:
- N/A
This is to enable alternative streaming solutions at the application
layer. I'm not sure we really should have performed parsing of the input
at this layer. Either way I want to experiment with streaming approaches
in a separate crate on a branch, and this will help.
/cc @maxdeviant @bennetbo @rtfeldman
Release Notes:
- N/A
Our provider code in `language_models` filters out messages that
`LanguageModelRequestMessage::contents_empty` reports as empty. That
filtering isn't wrong by itself, but `contents_empty` judged a message by
its first segment alone, reporting the message empty whenever that
segment contained no non-whitespace text, even if other segments were
non-empty. This caused requests to fail when a message with a tool call
didn't contain any preceding text.
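The fix amounts to considering every segment rather than just the first, along these lines (illustrative types; the real ones live in the agent code):

```rust
enum MessageSegment {
    Text(String),
    Thinking(String),
    ToolUse,
}

fn contents_empty(segments: &[MessageSegment]) -> bool {
    // A message is empty only if *all* segments are empty, so a
    // whitespace-only first segment no longer hides a later tool call.
    segments.iter().all(|segment| match segment {
        MessageSegment::Text(text) | MessageSegment::Thinking(text) => {
            text.trim().is_empty()
        }
        MessageSegment::ToolUse => false,
    })
}
```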
Release Notes:
- N/A
* Adds a fast / cheaper model to providers and defaults thread
summarization to this model. Initial motivation for this was that
https://github.com/zed-industries/zed/pull/29099 would cause these
requests to fail when used with a thinking model. It doesn't seem
correct to use a thinking model for summarization.
* Skips system prompt, context, and thinking segments.
* If tool use is happening, allows 2 tool uses + one more agent response
before summarizing.
A downside is that there was previously potential for some prefix cache
reuse, especially for title summarization (thread summarization omitted
tool results and so would not share a prefix for those). This seems fine
as these requests should typically be fairly small. Even for full thread
summarization, skipping all tool use / context should greatly reduce the
token use.
Release Notes:
- N/A
Looks like the required backend component of this was deployed.
https://github.com/zed-industries/monorepo/actions/runs/14541199197
Release Notes:
- N/A
---------
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
Co-authored-by: Nathan Sobo <nathan@zed.dev>
This PR attaches the thread ID and the new prompt ID to telemetry events
for completions in the Agent panel.
Release Notes:
- N/A
---------
Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
This PR makes it so the thread summarization also reports the model
request usage, to prevent the case where the count would appear to jump
by 2 the next time a message was sent after summarization.
Release Notes:
- N/A
This PR updates the Agent to extract the usage information from the
response headers, if they are present.
For now we just log the information, but we'll be using this soon to
populate some UI.
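For illustration, the extraction might look like this (the header names here are made up; the real ones are whatever the service sends):

```rust
use http::HeaderMap;

// Returns (used, limit) when both headers are present and well-formed.
fn parse_usage(headers: &HeaderMap) -> Option<(u32, u32)> {
    let get = |name: &str| headers.get(name)?.to_str().ok()?.parse::<u32>().ok();
    let used = get("x-model-requests-used")?;
    let limit = get("x-model-requests-limit")?;
    Some((used, limit))
}
```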
Release Notes:
- N/A
Release Notes:
- Add support for OpenAI o3 and o4-mini models via OpenAI API and
Copilot Chat providers.
---------
Co-authored-by: Peter Tripp <peter@zed.dev>
The UI was mistakenly using the cumulative token usage for the token
counter. It will now display the last request token count, plus an
estimation of the tokens in the message editor and context entries that
haven't been sent yet.
https://github.com/user-attachments/assets/0438c501-b850-4397-9135-57214ca3c07a
Additionally, when the user edits a message, we'll display the actual
token count up to it and estimate the tokens in the new message.
Note: We don't currently estimate the delta when switching profiles. In
the future, we want to use the count tokens API to measure every part of
the request and display a breakdown.
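In other words, the displayed number is composed roughly like this (names are illustrative):

```rust
// Tokens actually consumed by the last request, plus local estimates for
// everything typed or attached since then.
fn displayed_token_count(last_request_tokens: usize, unsent_estimates: &[usize]) -> usize {
    last_request_tokens + unsent_estimates.iter().sum::<usize>()
}
```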
Release Notes:
- agent: Made the token count more accurate and added back estimation of
used tokens as you type and add context.
---------
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
This PR adds an error message when the model requests limit has been
hit.
Release Notes:
- N/A
Co-authored-by: Oleksiy Syvokon <oleksiy.syvokon@gmail.com>
Release Notes:
- Add support for OpenAI GPT-4.1 via Copilot Chat and OpenAI API
---------
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
Release Notes:
- agent: Show recommended models in the agent model selector and display
the provider in the model selector's trigger.
---------
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Danilo Leal <67129314+danilo-leal@users.noreply.github.com>
This PR renames the `AssistantEvent` type to `AssistantEventData`, as it
no longer represents the event itself, just the data needed to construct
it.
Pulling out of https://github.com/zed-industries/zed/pull/25179.
Release Notes:
- N/A
Closes: https://github.com/zed-industries/zed/issues/20582
Allows users to select a specific model for each AI-powered feature:
- Agent panel
- Inline assistant
- Thread summarization
- Commit message generation
If unspecified for a given feature, it will use the `default_model`
setting.
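A sketch of that fallback (the field names are assumptions, not the exact settings keys):

```rust
use serde::Deserialize;

#[derive(Clone, Deserialize)]
struct ModelSelection {
    provider: String,
    model: String,
}

#[derive(Deserialize)]
struct AssistantSettings {
    default_model: ModelSelection,
    inline_assistant_model: Option<ModelSelection>,
    commit_message_model: Option<ModelSelection>,
    thread_summary_model: Option<ModelSelection>,
}

impl AssistantSettings {
    // Per-feature override if present, otherwise the default model.
    fn commit_message_model(&self) -> &ModelSelection {
        self.commit_message_model
            .as_ref()
            .unwrap_or(&self.default_model)
    }
}
```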
Release Notes:
- Added support for configuring a specific model for each AI-powered
feature
---------
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
This PR adds the token count to the active thread view. It doesn't
behave quite like Assistant 1, where it updates as you type, though; it
updates after you submit the message.
<img
src="https://github.com/user-attachments/assets/82d2a180-554a-43ee-b776-3743359b609b"
width="700" />
---
Release Notes:
- agent: Add token count in the thread view
---------
Co-authored-by: Agus Zubiaga <hi@aguz.me>