We'll now use the Anthropic provider to get credentials for `claude` and
embed its configuration view in the panel when they are not present.
Release Notes:
- N/A
This pull request should be idempotent, but it lays the groundwork for
avoiding having to connect to Collab in order to interact with AI
features provided by Zed.
Release Notes:
- N/A
---------
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
This PR updates the Zed Edit Prediction provider to acquire the LLM
token from Cloud instead of Collab to allow using Edit Predictions even
when disconnected from or unable to connect to the Collab server.
Release Notes:
- N/A
---------
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
TODO
- [x] OpenAI Compatible API Icon
- [x] Docs
- [x] Link to docs in OpenAI provider section about configuring OpenAI
API compatible providers
Closes #33992
Related to #30010
Release Notes:
- agent: Add support for adding multiple OpenAI API compatible providers
---------
Co-authored-by: MrSubidubi <dev@bahn.sh>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
This includes making sure that both the agent panel and Zed's edit
prediction have a consistent narrative when it comes to onboarding users
into the AI features, considering the different possible plans and
conditions (such as being signed in or out, account age, etc.).
Release Notes:
- N/A
---------
Co-authored-by: Bennet Bo Fenner <53836821+bennetbo@users.noreply.github.com>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
This introduces a new field `thinking_allowed` on `LanguageModelRequest`
which lets us control whether thinking should be enabled if the model
supports it.
We disable thinking in the Inline Assistant, the Edit File tool, and the
Git commit message generator; this should make generation faster when
using a thinking model, e.g. `claude-sonnet-4-thinking`.
Release Notes:
- N/A
As we are in the process of improving our Onboarding UX for Zed AI, I
added component previews for the Zed AI Configuration section. This
should make it easier to inspect the different states we can run into.
<img width="1198" alt="image"
src="https://github.com/user-attachments/assets/eb774f27-9091-450d-bfae-c688d533c25e"
/>
Release Notes:
- N/A
* Updates to `zed_llm_client-0.8.5`, which adds support for `retry_after`
when Anthropic provides it.
* Distinguishes upstream provider errors and rate limits from errors
that originate from Zed's servers
* Moves `LanguageModelCompletionError::BadInputJson` to
`LanguageModelCompletionEvent::ToolUseJsonParseError` (see the sketch
after this list). While this is arguably an error case, the logic in
`Thread` is cleaner with this move. There is also precedent for
including errors in the event type:
`CompletionRequestStatus::Failed` is how cloud errors arrive.
* Updates `PROVIDER_ID` / `PROVIDER_NAME` constants to use proper types
instead of `&str`, since they can be constructed in a const fashion.
* Removes use of `CLIENT_SUPPORTS_EXA_WEB_SEARCH_PROVIDER_HEADER_NAME`
as the server no longer reads this header and just defaults to that
behavior.
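As referenced above, a rough sketch of the moved variant; the field names are assumed rather than copied from the codebase:
```rust
use std::sync::Arc;

// Hypothetical shape: tool-use JSON that fails to parse now arrives as a
// completion event rather than as a completion error.
pub enum LanguageModelCompletionEvent {
    Text(String), // ...other existing variants elided...
    ToolUseJsonParseError {
        tool_name: Arc<str>,
        raw_input: Arc<str>,
        json_parse_error: String,
    },
}
```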
Release notes for this are covered by #33275
Release Notes:
- N/A
---------
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
Co-authored-by: Richard <richard@zed.dev>
cc @osyvokon
We were seeing a bunch of errors in our backend when people were using
Claude models with thinking enabled.
In the logs we would see
> an error occurred while interacting with the Anthropic API:
invalid_request_error: messages.x.content.0.type: Expected `thinking` or
`redacted_thinking`, but found `text`. When `thinking` is enabled, a
final `assistant` message must start with a thinking block (preceeding
the lastmost set of `tool_use` and `tool_result` blocks). We recommend
you include thinking blocks from previous turns. To avoid this
requirement, disable `thinking`. Please consult our documentation at
https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking
However, this issue did not occur frequently and was not easily
reproducible. Turns out it was triggered by us not correctly handling
[Redacted Thinking
Blocks](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking#thinking-redaction).
I could consistently reproduce this issue by including this magic string
`ANTHROPIC_MAGIC_STRING_TRIGGER_REDACTED_THINKING_46C9A13E193C177646C7398A98432ECCCE4C1253D5E2D82641AC0E52CC2876CB`
in the request, which forces `claude-3-7-sonnet` to emit redacted
thinking blocks (confusingly, the magic string does not seem to work
for `claude-sonnet-4`). As soon as we hit a tool call, Anthropic would
return an error.
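For reference, a sketch of how the relevant content blocks can be modeled with serde; the enum is ours, but the wire shapes follow Anthropic's extended thinking docs linked above:
```rust
use serde::{Deserialize, Serialize};

// `RedactedThinking` carries opaque data that must be replayed verbatim on
// later turns; dropping it is what triggered the API error above.
#[derive(Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum ContentBlock {
    Text { text: String },
    Thinking { thinking: String, signature: String },
    RedactedThinking { data: String },
}
```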
Thanks to @osyvokon for pointing me in the right direction 😄!
Release Notes:
- agent: Fixed an issue where Anthropic models would sometimes return an
error when thinking was enabled
This PR is in preparation for doing automatic retries for certain
errors, e.g. Overloaded. It doesn't change behavior yet (aside from some
granularity of error messages shown to the user), but rather mostly
changes some error handling to be exhaustive enum matches instead of
`anyhow` downcasts, and leaves some comments for where the behavior
change will be in a future PR.
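To illustrate the direction with a self-contained sketch (the real error enum lives in the codebase and has more variants):
```rust
use std::time::Duration;

// Stand-in for the real completion error enum.
enum CompletionError {
    Overloaded,
    RateLimitExceeded { retry_after: Duration },
    Other(String),
}

// Exhaustive matching replaces `anyhow` downcasts: adding a variant later
// forces every call site to decide how to handle it.
fn handle(err: CompletionError) {
    match err {
        CompletionError::Overloaded => { /* retry automatically, in a future PR */ }
        CompletionError::RateLimitExceeded { retry_after } => {
            println!("retrying after {retry_after:?}");
        }
        CompletionError::Other(message) => eprintln!("{message}"),
    }
}
```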
Release Notes:
- N/A
Having `Thread::last_usage` as an override of the initially fetched
usage could cause the initial usage to be displayed when the current
thread is empty or in text threads. The fix is to store the last usage
info in `UserStore` and drop these overrides.
Release Notes:
- Agent: Fixed request usage display to always show the most recently
known usage; in some cases it showed the initially fetched usage
instead.
Previously we were using a mix of `u32` and `usize`, e.g. `max_tokens:
usize, max_output_tokens: Option<u32>` in the same `struct`.
Although [tiktoken](https://github.com/openai/tiktoken) uses `usize`,
token counts should be consistent across targets (e.g. the same model
doesn't suddenly get a smaller context window if you're compiling for
wasm32), and these token counts could end up getting serialized using a
binary protocol, so `usize` is not the right choice for token counts.
I chose to standardize on `u64` over `u32` because we don't store many
of them (so the extra size should be insignificant) and future models
may exceed `u32::MAX` tokens.
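A sketch of the standardized shape (the struct name is hypothetical):
```rust
// Token counts are u64 everywhere: identical across targets (including
// wasm32) and unambiguous when serialized in a binary protocol.
pub struct TokenLimits {
    pub max_tokens: u64,
    pub max_output_tokens: Option<u64>,
}
```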
Release Notes:
- N/A
Minor refactor that I'm extracting from a branch because it can stand
alone.
- We no longer spawn an executor for `report_anthropic_event` if it's
just going to fail immediately because the API key is missing.
- `report_anthropic_event` now takes a `String` API key instead of an
`Option<String>`; the error reporting for a missing key has been moved
to the caller.
- `report_anthropic_event` is no longer coupled to `AnthropicError`,
because all it ever did was generate an `AnthropicError::Other`, which
in turn was only used for `log_err`, so it can just be an
`anyhow::Result`.
Release Notes:
- N/A
Consolidates configuration error handling by moving the error type and
logic from assistant_context_editor to language_model::registry.
The registry now provides a single method to check for configuration
errors, making the error handling more consistent across the agent panel
and context editor.
This also now checks whether the issue is that we don't have any
providers or that we just can't find the model.
Previously, an incorrect model name showed up as having no providers,
which was very confusing.
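A hypothetical sketch of the distinction; the actual enum in `language_model::registry` may be shaped differently:
```rust
// Separating "no providers at all" from "provider exists, model not found"
// lets the UI show an accurate message for a mistyped model name.
pub enum ConfigurationError {
    NoProviders,
    ModelNotFound { provider: String, model: String },
}
```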
Release Notes:
- N/A
Bubbles up rate limit information so that, higher up in the stack, we
can retry after a given duration when needed.
Also caps the number of concurrent evals running at once, which helps as
well.
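A rough sketch of what a caller higher up the stack can now do with the bubbled-up duration, assuming a `smol` timer:
```rust
use std::time::Duration;

// Sleep out the rate limit window, then let the caller retry the request.
async fn wait_out_rate_limit(retry_after: Duration) {
    smol::Timer::after(retry_after).await;
}
```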
Release Notes:
- N/A
Removes the `load_model` trait method and its implementations in the
Ollama and LM Studio providers, along with the associated
`preload_model` functions and unused imports.
Release Notes:
- N/A
This PR adds a new `intent` field to completion requests to assist in
categorizing them correctly.
Release Notes:
- N/A
---------
Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
Follow up to https://github.com/zed-industries/zed/pull/31470.
I started looking at the config and changed `preferred_completion_mode`
to `burn`, only to find that the only accepted value was still `max`, so
I made changes to align it better with the rebrand, given that this is
in the preview build now.
This doesn't touch `zed_llm_client`; it only changes Zed's code and docs
to match the new burn mode UI. There are still more things to be
renamed, though.
Release Notes:
- N/A
---------
Signed-off-by: Umesh Yadav <git@umesh.dev>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
This expands our deserialization of JSON from models to be more tolerant
of different variations that the model may send, including
capitalization, wrapping things in objects vs. being plain strings, etc.
Also, when deserialization fails, the error now includes the entire JSON
so we can see what failed to deserialize. (Previously these errors were
very unhelpful for diagnosing the problem.)
Finally, also removes the `WrappedText` variant since the custom
deserializer just turns that style of JSON into a normal `Text` variant.
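A sketch of the tolerant pattern, assuming serde; the type names are illustrative:
```rust
use serde::Deserialize;

// Accept either a plain string or a wrapped object for the same logical
// value, then normalize to `String`, so no separate `WrappedText` variant
// is needed downstream.
#[derive(Deserialize)]
#[serde(untagged)]
enum FlexibleText {
    Plain(String),
    Wrapped { text: String },
}

impl From<FlexibleText> for String {
    fn from(value: FlexibleText) -> Self {
        match value {
            FlexibleText::Plain(text) | FlexibleText::Wrapped { text } => text,
        }
    }
}
```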
Release Notes:
- N/A
I was surprised to see this being done for thread summaries, but not
commit messages.
I believe it's a better default, as most people would want faster
commit message generation without spending premium requests.
Considering how the default fast model for Copilot is set to the base
one, this is ideal for me (and likely many others), as opposed to
tweaking the configuration every time the base model changes.
Release Notes:
- git: Default to the fast model for generating commit messages when no
model is configured
This PR updates the Zed LLM provider to fetch the available models from
the server instead of hard-coding them in the binary.
Release Notes:
- Updated the Zed provider to fetch the list of available language
models from the server.
This PR adds support for [Claude
4](https://www.anthropic.com/news/claude-4).
Release Notes:
- Added support for Claude Opus 4 and Claude Sonnet 4.
---------
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
https://github.com/zed-industries/zed/issues/30972 brought up another
case where our context is not enough to track the actual source of the
issue: we get a general top-level error without inner error.
The reason for this was `.ok_or_else(|| anyhow!("failed to read HEAD
SHA"))?;` at the top level.
The PR finally reworks the way we use anyhow to reduce such issues (or
at least make it simpler to bubble them up later in a fix).
On top of that, it uses a few more anyhow methods for better readability
(see the sketch after this list):
* `.ok_or_else(|| anyhow!("..."))`, `map_err` and other similar error
conversion/option reporting cases are replaced with `context` and
`with_context` calls
* in addition to that, various `anyhow!("failed to do ...")` messages
are replaced with `.context("Doing ...")` messages to remove the
parasitic `failed to` text
* `anyhow::ensure!` is used instead of `if ... { return Err(...); }`
calls
* `anyhow::bail!` is used instead of `return Err(anyhow!(...));`
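As referenced above, an illustrative before/after of these patterns; the repository trait is a hypothetical stand-in:
```rust
use anyhow::{bail, ensure, Context as _};

trait Repository {
    fn head_sha(&self) -> Option<String>;
    fn is_shallow(&self) -> bool;
}

fn head_sha(repo: &impl Repository) -> anyhow::Result<String> {
    // `.context(...)` instead of `.ok_or_else(|| anyhow!("failed to ..."))`:
    let sha = repo.head_sha().context("reading HEAD SHA")?;
    // `ensure!` instead of `if ... { return Err(...); }`:
    ensure!(sha.len() == 40, "unexpected SHA length: {}", sha.len());
    if repo.is_shallow() {
        // `bail!` instead of `return Err(anyhow!(...))`:
        bail!("shallow clones are unsupported");
    }
    Ok(sha)
}
```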
Release Notes:
- N/A
Some providers sometimes send `{ "type": "text", "text": ... }` instead
of just the text as a string. Now we accept those instead of erroring.
Release Notes:
- N/A
This is very basic support for them. There are a number of other TODOs
before this is really a first-class supported feature, so not adding any
release notes for it; for now, this PR just makes it so that if
`read_file` tries to read a PNG (which has come up in practice), it at
least sends it to Anthropic correctly instead of mangling the request.
This also lays the groundwork for future PRs for more first-class
support for images in tool calls across more image file formats and LLM
providers.
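For context, a sketch of the content-block shape the Anthropic Messages API documents for images; the actual request-building code may differ:
```rust
// Build an Anthropic-style base64 image block for a PNG read from disk.
fn png_block(base64_png: &str) -> serde_json::Value {
    serde_json::json!({
        "type": "image",
        "source": {
            "type": "base64",
            "media_type": "image/png",
            "data": base64_png, // base64-encoded bytes of the PNG file
        }
    })
}
```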
Release Notes:
- N/A
---------
Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Agus Zubiaga <agus@zed.dev>
Problem Statement:
Support for image analysis (vision) is currently restricted to Anthropic
and Gemini models. This limits users who wish to leverage vision
capabilities available in other models, such as Copilot, for tasks like
attaching image context within the agent message editor.
Proposed Change:
This PR extends vision support to Copilot models that are already
equipped with vision capabilities. This integration allows users within
Zed to attach and analyze images using supported Copilot models via the
agent message editor.
Scope Limitation:
This PR does not implement controls within the message editor to ensure
that image context (e.g., through copy-paste or attachment) is only
enabled or prompted when a vision-supported model is active. Long term,
the message editor should have access to each model's vision capability
and stop users from attaching images when it's unsupported, either by
greying out the context with a note that it's not supported, or by
rejecting it through both copy-paste and file/directory search.
Closes #30076
Release Notes:
- Add vision support for Copilot Chat models
---------
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
This PR adds a notice when reaching consecutive tool use limits when
using normal mode.
Here's an example with the limit artificially lowered to 2 consecutive
tool uses:
https://github.com/user-attachments/assets/32da8d38-67de-4d6b-8f24-754d2518e5d4
Release Notes:
- agent: Added a notice when reaching consecutive tool use limits when
using a model in normal mode.
This sets us up to display queue position information to the user, once
our language model backend is updated to support request queuing.
The JSON returned by the LLM backend will need to look like this:
```json
{"queue": {"status": "queued", "position": 1}}
{"queue": {"status": "started"}}
{"event": {"THE_UPSTREAM_MODEL_PROVIDER_EVENT": "..."}}
```
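A sketch of deserializing that wire format; serde's externally tagged enums map `{"queue": ...}` and `{"event": ...}` onto variants, though the type names here are ours:
```rust
use serde::Deserialize;

#[derive(Deserialize)]
#[serde(rename_all = "snake_case")]
enum WireEvent {
    Queue(QueueUpdate),
    Event(serde_json::Value), // upstream provider event, passed through
}

#[derive(Deserialize)]
#[serde(tag = "status", rename_all = "snake_case")]
enum QueueUpdate {
    Queued { position: u64 },
    Started,
}
```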
Release Notes:
- N/A
---------
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
Release Notes:
- agent: Add support for @mentioning images
- agent: Add support for including images via file context picker
---------
Co-authored-by: Oleksiy Syvokon <oleksiy.syvokon@gmail.com>
This PR makes it possible to use different LLM models in the agent
panels of two different projects, simultaneously. It also properly
restores a thread's original model when restoring it from the history,
rather than having it use the default model. As before, newly-created
threads will use the current default model.
Release Notes:
- Enabled different project windows to use different models in the agent
panel
- Enhanced the agent panel so that when revisiting old threads, their
original model will be used.
---------
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
This PR adds a "max mode" toggle to the Agent panel, for models that
support it.
Only visible to folks with the `new-billing` feature flag.
Icon is just a placeholder.
Release Notes:
- N/A