We'll now use the anthropic provider to get credentials for `claude` and
embed its configuration view in the panel when they are not present.
Release Notes:
- N/A
TODO
- [x] OpenAI Compatible API Icon
- [x] Docs
- [x] Link to docs in OpenAI provider section about configuring OpenAI
API compatible providers
Closes #33992
Related to #30010
Release Notes:
- agent: Add support for adding multiple OpenAI API compatible providers
---------
Co-authored-by: MrSubidubi <dev@bahn.sh>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
This includes making sure that both the agent panel and Zed's edit
prediction have a consistent narrative when it comes to onboarding users
into the AI features, considering the different possible plans and
conditions (such as being signed in/out, account age, etc.).
Release Notes:
- N/A
---------
Co-authored-by: Bennet Bo Fenner <53836821+bennetbo@users.noreply.github.com>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
As we are in the process of improving our Onboarding UX for Zed AI, I
added component previews for the Zed AI Configuration section. This
should make it easier to inspect the different states we can run into.
<img width="1198" alt="image"
src="https://github.com/user-attachments/assets/eb774f27-9091-450d-bfae-c688d533c25e"
/>
Release Notes:
- N/A
* Updates to `zed_llm_client-0.8.5`, which adds support for `retry_after`
when Anthropic provides it.
* Distinguishes upstream provider errors and rate limits from errors
that originate from Zed's servers
* Moves `LanguageModelCompletionError::BadInputJson` to
`LanguageModelCompletionEvent::ToolUseJsonParseError`. While arguably
this is an error case, the logic in thread is cleaner with this move.
There is also precedent for inclusion of errors in the event type -
`CompletionRequestStatus::Failed` is how cloud errors arrive.
* Updates `PROVIDER_ID` / `PROVIDER_NAME` constants to use proper types
instead of `&str`, since they can be constructed in a const fashion (see
the sketch after this list)
* Removes use of `CLIENT_SUPPORTS_EXA_WEB_SEARCH_PROVIDER_HEADER_NAME`
as the server no longer reads this header and just defaults to that
behavior.
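To illustrate the const-construction point, here is a minimal sketch of what
newtype-style provider constants could look like; the actual type and field
names in the crate may differ.

```rust
// Minimal sketch (names illustrative): newtype wrappers over `&'static str`
// that can be built in a const context, so the provider constants no longer
// have to be bare string slices.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub struct LanguageModelProviderId(&'static str);

#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub struct LanguageModelProviderName(&'static str);

impl LanguageModelProviderId {
    pub const fn new(id: &'static str) -> Self {
        Self(id)
    }
}

impl LanguageModelProviderName {
    pub const fn new(name: &'static str) -> Self {
        Self(name)
    }
}

// The constants can then carry a real type while remaining `const`:
pub const PROVIDER_ID: LanguageModelProviderId = LanguageModelProviderId::new("anthropic");
pub const PROVIDER_NAME: LanguageModelProviderName = LanguageModelProviderName::new("Anthropic");
```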
Release notes for this are covered by #33275
Release Notes:
- N/A
---------
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
Co-authored-by: Richard <richard@zed.dev>
cc @osyvokon
We were seeing a bunch of errors in our backend when people were using
Claude models with thinking enabled.
In the logs we would see
> an error occurred while interacting with the Anthropic API:
invalid_request_error: messages.x.content.0.type: Expected `thinking` or
`redacted_thinking`, but found `text`. When `thinking` is enabled, a
final `assistant` message must start with a thinking block (preceeding
the lastmost set of `tool_use` and `tool_result` blocks). We recommend
you include thinking blocks from previous turns. To avoid this
requirement, disable `thinking`. Please consult our documentation at
https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking
However, this issue did not occur frequently and was not easily
reproducible. Turns out it was triggered by us not correctly handling
[Redacted Thinking
Blocks](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking#thinking-redaction).
I could consistently reproduce this issue by including the magic string
`ANTHROPIC_MAGIC_STRING_TRIGGER_REDACTED_THINKING_46C9A13E193C177646C7398A98432ECCCE4C1253D5E2D82641AC0E52CC2876CB`
in the request, which forces `claude-3-7-sonnet` to emit redacted
thinking blocks (confusingly, the magic string does not seem to work for
`claude-sonnet-4`). As soon as we hit a tool call, Anthropic would
return an error.
Thanks to @osyvokon for pointing me in the right direction 😄!
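For reference, here is a minimal sketch of how the extended-thinking content
blocks look on the wire and how they might be modeled with serde; the actual
types in Zed's `anthropic` crate may be shaped differently. The essence of the
fix is that `redacted_thinking` blocks have to be preserved and sent back
verbatim on subsequent turns rather than dropped or coerced into text.

```rust
use serde::{Deserialize, Serialize};

// Sketch of the Anthropic content blocks relevant to extended thinking
// (field names follow the public API docs; Zed's internal types may differ).
#[derive(Debug, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum ContentBlock {
    Text {
        text: String,
    },
    Thinking {
        thinking: String,
        signature: String,
    },
    RedactedThinking {
        // Opaque, encrypted payload: it cannot be read, only round-tripped
        // back to the API on later turns.
        data: String,
    },
    ToolUse {
        id: String,
        name: String,
        input: serde_json::Value,
    },
}
```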
Release Notes:
- agent: Fixed an issue where Anthropic models would sometimes return an
error when thinking was enabled
This PR is in preparation for doing automatic retries for certain
errors, e.g. Overloaded. It doesn't change behavior yet (aside from some
granularity of error messages shown to the user), but rather mostly
changes some error handling to be exhaustive enum matches instead of
`anyhow` downcasts, and leaves some comments for where the behavior
change will be in a future PR.
Release Notes:
- N/A
Having `Thread::last_usage` as an override of the initially fetched
usage could cause the initial usage to be displayed when the current
thread is empty or in text threads. The fix is to just store the last
usage info in `UserStore` and not have these overrides.
Release Notes:
- Agent: Fixed request usage display to always include the most recently
known usage - there were some cases where it would show the initially
requested usage.
Previously we were using a mix of `u32` and `usize`, e.g. `max_tokens:
usize, max_output_tokens: Option<u32>` in the same `struct`.
Although [tiktoken](https://github.com/openai/tiktoken) uses `usize`,
token counts should be consistent across targets (e.g. the same model
doesn't suddenly get a smaller context window if you're compiling for
wasm32), and these token counts could end up getting serialized using a
binary protocol, so `usize` is not the right choice for token counts.
I chose to standardize on `u64` over `u32` because we don't store many
of them (so the extra size should be insignificant) and future models
may exceed `u32::MAX` tokens.
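As a rough sketch of the direction (field names here are illustrative, not
the exact structs touched by the PR):

```rust
// Before (mixed widths in the same struct, illustrative):
// pub struct ModelLimits {
//     pub max_tokens: usize,
//     pub max_output_tokens: Option<u32>,
// }

// After: a single width for every token count, identical across targets
// (wasm32 included) and safe to serialize in a binary protocol.
pub struct ModelLimits {
    pub max_tokens: u64,
    pub max_output_tokens: Option<u64>,
}
```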
Release Notes:
- N/A
Bubbles up rate limit information so that, higher up in the stack, we
can retry after a certain duration if needed.
Also caps the number of evals running concurrently, which helps as well.
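A minimal sketch of the two mechanisms, assuming a tokio runtime; the eval
harness's real types, names, and error handling will differ.

```rust
use std::{sync::Arc, time::Duration};
use tokio::sync::Semaphore;

// Hypothetical error type; `RateLimited` carries the `retry_after`
// duration bubbled up from the provider.
#[derive(Debug)]
enum EvalError {
    RateLimited { retry_after: Duration },
    Other(String),
}

// Placeholder for actually running one eval against the model.
async fn run_eval(id: usize) -> Result<(), EvalError> {
    let _ = id;
    Ok(())
}

// Cap concurrency with a semaphore and honor the provider's suggested delay
// whenever a rate limit bubbles up.
async fn run_all(eval_ids: Vec<usize>, max_concurrent: usize) {
    let semaphore = Arc::new(Semaphore::new(max_concurrent));
    let mut tasks = Vec::new();
    for id in eval_ids {
        let permit = semaphore.clone().acquire_owned().await.unwrap();
        tasks.push(tokio::spawn(async move {
            let _permit = permit; // held until this eval finishes
            loop {
                match run_eval(id).await {
                    Ok(()) => break,
                    Err(EvalError::RateLimited { retry_after }) => {
                        tokio::time::sleep(retry_after).await;
                    }
                    Err(err) => {
                        eprintln!("eval {id} failed: {err:?}");
                        break;
                    }
                }
            }
        }));
    }
    for task in tasks {
        task.await.unwrap();
    }
}
```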
Release Notes:
- N/A
Removes the `load_model` trait method and its implementations in the
Ollama and LM Studio providers, along with the associated
`preload_model` functions and unused imports.
Release Notes:
- N/A
Follow up to https://github.com/zed-industries/zed/pull/31470.
I started looking at the config and changed `preferred_completion_mode`
to `burn`, only to find it was still `max`, so I made changes to align
it better with the rebrand, as this is in the preview build now.
This doesn't touch `zed_llm_client`. It only changes Zed's code and docs
to match the new Burn Mode UI. There are still more things to be
renamed, though.
Release Notes:
- N/A
---------
Signed-off-by: Umesh Yadav <git@umesh.dev>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
This PR updates the Zed LLM provider to fetch the available models from
the server instead of hard-coding them in the binary.
Release Notes:
- Updated the Zed provider to fetch the list of available language
models from the server.
This PR adds support for [Claude
4](https://www.anthropic.com/news/claude-4).
Release Notes:
- Added support for Claude Opus 4 and Claude Sonnet 4.
---------
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
https://github.com/zed-industries/zed/issues/30972 brought up another
case where our context is not enough to track the actual source of the
issue: we get a general top-level error without inner error.
The reason for this was a top-level
`.ok_or_else(|| anyhow!("failed to read HEAD SHA"))?;`.
The PR finally reworks the way we use anyhow to reduce such issues (or
at least make it simpler to bubble them up later in a fix).
On top of that, it uses a few more anyhow methods for better
readability (a small before/after sketch follows the list):
* `.ok_or_else(|| anyhow!("..."))`, `map_err`, and other similar error
conversion/option reporting cases are replaced with `context` and
`with_context` calls
* in addition to that, various `anyhow!("failed to do ...")` calls are
replaced with `.context("Doing ...")` messages to remove the parasitic
`failed to` text
* `anyhow::ensure!` is used instead of `if ... { return Err(...); }`
calls
* `anyhow::bail!` is used instead of `return Err(anyhow!(...));`
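A small illustrative before/after of those patterns (not code from the diff):

```rust
use anyhow::{bail, ensure, Context as _, Result};

// Illustrative stand-in; not real Zed code.
struct Repo;
impl Repo {
    fn head_sha(&self) -> Option<String> {
        None // placeholder
    }
}

fn read_head_sha(repo: &Repo) -> Result<String> {
    // Before: .ok_or_else(|| anyhow!("failed to read HEAD SHA"))?
    // After: `context` supplies the message without the parasitic "failed to".
    let sha = repo.head_sha().context("reading HEAD SHA")?;

    // Before: if sha.len() != 40 { return Err(anyhow!("...")); }
    ensure!(sha.len() == 40, "unexpected HEAD SHA length: {}", sha.len());

    // Before: return Err(anyhow!("..."));
    if sha.chars().any(|c| !c.is_ascii_hexdigit()) {
        bail!("HEAD SHA is not hexadecimal: {sha}");
    }

    Ok(sha)
}
```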
Release Notes:
- N/A
This is very basic support for images. There are a number of other
TODOs before this is really a first-class supported feature, so we're
not adding any release notes for it; for now, this PR just makes it so
that if `read_file` tries to read a PNG (which has come up in practice),
it at least correctly sends the image to Anthropic instead of mangling
it.
This also lays the groundwork for future PRs for more first-class
support for images in tool calls across more image file formats and LLM
providers.
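As a rough sketch of the wire shape involved (assuming the `base64` and
`serde_json` crates; Zed's own request types will differ), a PNG read by
`read_file` ends up as a base64 image content block in the Anthropic
Messages API:

```rust
use base64::Engine as _;
use serde_json::json;

// Sketch: build the image content block the Messages API expects for
// base64-encoded PNG data.
fn png_to_anthropic_block(png_bytes: &[u8]) -> serde_json::Value {
    let data = base64::engine::general_purpose::STANDARD.encode(png_bytes);
    json!({
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": "image/png",
            "data": data,
        }
    })
}
```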
Release Notes:
- N/A
---------
Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Agus Zubiaga <agus@zed.dev>
This PR adds a notice when reaching consecutive tool use limits when
using normal mode.
Here's an example with the limit artificially lowered to 2 consecutive
tool uses:
https://github.com/user-attachments/assets/32da8d38-67de-4d6b-8f24-754d2518e5d4
Release Notes:
- agent: Added a notice when reaching consecutive tool use limits when
using a model in normal mode.
This sets us up to display queue position information to the user, once
our language model backend is updated to support request queuing.
The JSON returned by the LLM backend will need to look like this:
```json
{"queue": {"status": "queued", "position": 1}}
{"queue": {"status": "started"}}
{"event": {"THE_UPSTREAM_MODEL_PROVIDER_EVENT": "..."}}
```
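A minimal sketch of how those lines could be deserialized on the client (type
names are illustrative, not necessarily what the client crate uses):

```rust
use serde::Deserialize;

// Each streamed line is either queue status or a passed-through provider event.
#[derive(Debug, Deserialize)]
#[serde(rename_all = "snake_case")]
enum StreamLine {
    Queue(QueueStatus),
    Event(serde_json::Value),
}

#[derive(Debug, Deserialize)]
#[serde(tag = "status", rename_all = "snake_case")]
enum QueueStatus {
    Queued { position: u64 },
    Started,
}

// e.g. serde_json::from_str::<StreamLine>(r#"{"queue":{"status":"queued","position":1}}"#)
```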
Release Notes:
- N/A
---------
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
This PR makes it possible to use different LLM models in the agent
panels of two different projects, simultaneously. It also properly
restores a thread's original model when restoring it from the history,
rather than having it use the default model. As before, newly-created
threads will use the current default model.
Release Notes:
- Enabled different project windows to use different models in the agent
panel
- Enhanced the agent panel so that when revisiting old threads, their
original model will be used.
---------
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
This PR adds a "max mode" toggle to the Agent panel, for models that
support it.
Only visible to folks in the `new-billing` feature flag.
Icon is just a placeholder.
Release Notes:
- N/A
This is to enable alternative streaming solutions at the application
layer. I'm not sure we really should have performed parsing of the input
at this layer. Either way I want to experiment with streaming approaches
in a separate crate on a branch, and this will help.
/cc @maxdeviant @bennetbo @rtfeldman
Closes #ISSUE
Release Notes:
- N/A
* Adds a fast / cheaper model to providers and defaults thread
summarization to this model. Initial motivation for this was that
https://github.com/zed-industries/zed/pull/29099 would cause these
requests to fail when used with a thinking model. It doesn't seem
correct to use a thinking model for summarization.
* Skips system prompt, context, and thinking segments.
* If tool use is happening, allows 2 tool uses + one more agent response
before summarizing.
The downside of this is that there was potential for some prefix cache reuse
before, especially for title summarization (thread summarization omitted
tool results and so would not share a prefix for those). This seems fine
as these requests should typically be fairly small. Even for full thread
summarization, skipping all tool use / context should greatly reduce the
token use.
Release Notes:
- N/A
Looks like the required backend component of this was deployed.
https://github.com/zed-industries/monorepo/actions/runs/14541199197
Release Notes:
- N/A
---------
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
Co-authored-by: Nathan Sobo <nathan@zed.dev>
This PR makes it so the thread summarization also reports the model
request usage, to prevent the case where the count would appear to jump
by 2 the next time a message was sent after summarization.
Release Notes:
- N/A
This PR updates the Agent to extract the usage information from the
response headers, if they are present.
For now we just log the information, but we'll be using this soon to
populate some UI.
Release Notes:
- N/A
The UI was mistakenly using the cumulative token usage for the token
counter. It will now display the last request token count, plus an
estimation of the tokens in the message editor and context entries that
haven't been sent yet.
https://github.com/user-attachments/assets/0438c501-b850-4397-9135-57214ca3c07a
Additionally, when the user edits a message, we'll display the actual
token count up to it and estimate the tokens in the new message.
Note: We don't currently estimate the delta when switching profiles. In
the future, we want to use the count tokens API to measure every part of
the request and display a breakdown.
Release Notes:
- agent: Made the token count more accurate and added back estimation of
used tokens as you type and add context.
---------
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Release Notes:
- agent: Show recommended models in the agent model selector and display
the provider in the model selector's trigger.
---------
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Danilo Leal <67129314+danilo-leal@users.noreply.github.com>
This PR adds the token count to the active thread view. It doesn't
behave quite like Assistant 1, where it updates as you type, though; it
updates after you submit the message.
<img
src="https://github.com/user-attachments/assets/82d2a180-554a-43ee-b776-3743359b609b"
width="700" />
---
Release Notes:
- agent: Add token count in the thread view
---------
Co-authored-by: Agus Zubiaga <hi@aguz.me>
This PR removes the `use_any_tool` method from the `LanguageModel`
trait.
It was not being used anywhere, and doesn't really fit in our new tool
use story.
Release Notes:
- N/A
This PR removes the dependency on the `ui` crate from the
`language_model` crate.
We were only depending on it to import `IconName`—which now lives in
`icons`—and some re-exported GPUI items.
Release Notes:
- N/A
Closes #25671
Release Notes:
- Added support for `claude-3-7-sonnet-thinking` in the assistant panel
---------
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Agus Zubiaga <hi@aguz.me>