* Updates to `zed_llm_client-0.8.5`, which adds support for `retry_after`
when Anthropic provides it.
* Distinguishes upstream provider errors and rate limits from errors
that originate from Zed's servers.
* Moves `LanguageModelCompletionError::BadInputJson` to
`LanguageModelCompletionEvent::ToolUseJsonParseError`. While this is
arguably an error case, the logic in `thread` is cleaner with this move.
There is also precedent for including errors in the event type:
`CompletionRequestStatus::Failed` is how cloud errors arrive.
* Updates the `PROVIDER_ID` / `PROVIDER_NAME` constants to use proper types
instead of `&str`, since they can be constructed in a const fashion
(see the sketch after this list).
* Removes use of `CLIENT_SUPPORTS_EXA_WEB_SEARCH_PROVIDER_HEADER_NAME`,
as the server no longer reads this header and just defaults to that
behavior.
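A minimal sketch of what the const-constructible types could look like, assuming newtypes over `&'static str` (Zed's actual definitions may differ):

```rust
// Sketch only: newtype IDs that can be built in a const context, so the
// constants get real types instead of bare &str.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub struct LanguageModelProviderId(pub &'static str);

#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub struct LanguageModelProviderName(pub &'static str);

pub const PROVIDER_ID: LanguageModelProviderId = LanguageModelProviderId("anthropic");
pub const PROVIDER_NAME: LanguageModelProviderName = LanguageModelProviderName("Anthropic");
```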
Release notes for this are covered by #33275.
Release Notes:
- N/A
---------
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
Co-authored-by: Richard <richard@zed.dev>
Previously, we were using a mix of `u32` and `usize` for token counts,
e.g. `max_tokens: usize, max_output_tokens: Option<u32>` in the same `struct`.
Although [tiktoken](https://github.com/openai/tiktoken) uses `usize`,
token counts should be consistent across targets (e.g. the same model
doesn't suddenly get a smaller context window if you're compiling for
wasm32), and these token counts could end up getting serialized using a
binary protocol, so `usize` is not the right choice for token counts.
I chose to standardize on `u64` over `u32` because we don't store many
of them (so the extra size should be insignificant) and future models
may exceed `u32::MAX` tokens.
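Concretely, the end state looks roughly like this (the struct name is made up; the fields are the ones quoted above):

```rust
// Token counts are u64 on every target, so they're stable across
// architectures and safe to serialize in a binary protocol.
struct ModelLimits {
    max_tokens: u64,
    max_output_tokens: Option<u64>,
}
```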
Release Notes:
- N/A
Bubbles up rate-limit information so that, higher up in the stack, we can
retry after a given duration when needed.
Also caps the number of evals running concurrently, which helps with rate
limits as well.
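A hypothetical sketch of the retry behavior this enables (`CompletionError` and `send_completion_request` are illustrative, not Zed's real API):

```rust
use std::time::Duration;

enum CompletionError {
    RateLimited { retry_after: Option<Duration> },
    Other(String),
}

async fn complete_with_retry(max_attempts: usize) -> Result<String, CompletionError> {
    for _ in 0..max_attempts {
        match send_completion_request().await {
            // Wait out the server-provided interval (or a default), then retry.
            Err(CompletionError::RateLimited { retry_after }) => {
                smol::Timer::after(retry_after.unwrap_or(Duration::from_secs(1))).await;
            }
            other => return other,
        }
    }
    Err(CompletionError::Other("rate limited on every attempt".into()))
}

async fn send_completion_request() -> Result<String, CompletionError> {
    Ok("response".into()) // stub for illustration
}
```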
Release Notes:
- N/A
For the DeepSeek provider, thinking is returned as `reasoning_content`, and
we don't have to send `reasoning_content` back in the request.
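A rough sketch of the streamed delta shape this implies (the struct name is illustrative; the field names follow DeepSeek's API):

```rust
// Thinking arrives in `reasoning_content`, separate from `content`. We can
// surface it in the UI, but we omit it when sending history back to the API.
#[derive(serde::Deserialize)]
struct StreamDelta {
    content: Option<String>,
    reasoning_content: Option<String>,
}
```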
Release Notes:
- Add thinking support to DeepSeek provider
The [DeepSeek function calling
API](https://api-docs.deepseek.com/guides/function_calling)
has been released, and it is the same as OpenAI's.
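Since the format matches OpenAI's, tool definitions can be sent unchanged; a minimal sketch (the tool name and schema are made up):

```rust
fn main() {
    // DeepSeek accepts the OpenAI-style `tools` array as-is.
    let tools = serde_json::json!([{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location.",
            "parameters": {
                "type": "object",
                "properties": { "location": { "type": "string" } },
                "required": ["location"]
            }
        }
    }]);
    println!("{tools}");
}
```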
Release Notes:
- Added tool calling support for DeepSeek models
---------
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
https://github.com/zed-industries/zed/issues/30972 brought up another
case where our context is not enough to track down the actual source of
an issue: we get a general top-level error without an inner error.
The reason for this was `.ok_or_else(|| anyhow!("failed to read HEAD
SHA"))?;` at the top level.
This PR finally reworks the way we use `anyhow` to reduce such issues (or
at least make them simpler to bubble up in a later fix).
On top of that, it uses a few more `anyhow` methods for better readability
(a before/after sketch follows the list):
* `.ok_or_else(|| anyhow!("..."))`, `map_err`, and other similar
error-conversion/option-reporting calls are replaced with `context` and
`with_context` calls
* various `anyhow!("failed to do ...")` messages are replaced with
`.context("Doing ...")` messages instead, to remove the parasitic
`failed to` text
* `anyhow::ensure!` is used instead of `if ... { return Err(...); }`
calls
* `anyhow::bail!` is used instead of `return Err(anyhow!(...));`
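A before/after sketch of these patterns (the `read_head_sha` helper is hypothetical):

```rust
use anyhow::{ensure, Context as _, Result};

fn head_sha(repo_path: &str) -> Result<String> {
    // Before: read_head_sha(repo_path).ok_or_else(|| anyhow!("failed to read HEAD SHA"))?
    let sha = read_head_sha(repo_path).context("reading HEAD SHA")?;
    // Before: if sha.len() != 40 { return Err(anyhow!("bad SHA length")); }
    ensure!(sha.len() == 40, "unexpected HEAD SHA length: {}", sha.len());
    Ok(sha)
}

fn read_head_sha(_repo_path: &str) -> Option<String> {
    Some("0123456789abcdef0123456789abcdef01234567".into())
}
```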
Release Notes:
- N/A
This is very basic support for images. There are a number of other TODOs
before this is really a first-class supported feature, so I'm not adding
any release notes for it; for now, this PR just makes it so that if
`read_file` tries to read a PNG (which has come up in practice), the image
at least gets sent to Anthropic correctly instead of being mangled.
This also lays the groundwork for future PRs for more first-class
support for images in tool calls across more image file formats and LLM
providers.
Release Notes:
- N/A
---------
Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Agus Zubiaga <agus@zed.dev>
* Adds a fast/cheaper model to providers and defaults thread
summarization to this model. The initial motivation was that
https://github.com/zed-industries/zed/pull/29099 would cause these
requests to fail when used with a thinking model, and it doesn't seem
correct to use a thinking model for summarization anyway.
* Skips system prompt, context, and thinking segments.
* If tool use is happening, allows two tool uses plus one more agent
response before summarizing.
The downside is that there was some potential for prefix-cache reuse
before, especially for title summarization (thread summarization omitted
tool results and so would not share a prefix for those). This seems fine,
as these requests should typically be fairly small. Even for full thread
summarization, skipping all tool use / context should greatly reduce
token use.
Release Notes:
- N/A
This PR removes the `use_any_tool` method from the `LanguageModel`
trait.
It was not being used anywhere, and doesn't really fit in our new tool
use story.
Release Notes:
- N/A
This is the core change:
https://github.com/zed-industries/zed/pull/26758/files#diff-044302c0d57147af17e68a0009fee3e8dcdfb4f32c27a915e70cfa80e987f765R1052
TODO:
- [x] Use `AsyncFn` instead of `Fn() -> Future` in GPUI spawn methods (see the sketch after this list)
- [x] Implement it in the whole app
- [x] Implement it in the debugger
- [x] Glance at the RPC crate, and see if those boxed-future methods can
be switched over. Answer: they can't be, at least not directly, as you
can't make an `AsyncFn*` into a trait object. There are ways around that,
but they're all more complex than just keeping the code as is.
- [ ] Fix platform specific code
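A minimal illustration of the signature change outside of GPUI (requires Rust 1.85+ for stable `AsyncFn*`; the function names are made up):

```rust
use std::future::Future;

// Before: take a closure that returns a future.
async fn spawn_with_fn<F, Fut>(f: F) -> u32
where
    F: FnOnce() -> Fut,
    Fut: Future<Output = u32>,
{
    f().await
}

// After: take an async closure directly.
async fn spawn_with_async_fn<F>(f: F) -> u32
where
    F: AsyncFnOnce() -> u32,
{
    f().await
}
```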
Release Notes:
- N/A
I've been bothered by using simple hyphens for bullet lists here for a
while; it kinda looked cheap and not well-formatted. So, in this PR, I'm
adding a new, custom UI component in the `language_models` crate, called
`InstructionListItem`. It's based on `ListItem` and somewhat mimics what
a `<li>` would be on the web.
It does have a "rigid" structure, in that it's always a label followed by
a button (which is optional), but that seems okay given that this has been
the overall shape of the copy we've been using here. Also, I never really
loved that we were pasting URLs directly; that felt kind of cheap, too. I
could see the argument that it's just clearer, but it looks too cluttered,
as URLs aren't necessarily super pretty.
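A rough sketch of the component's shape, inferred from the description above (simplified; the real version builds on `ListItem` and gpui types):

```rust
// Always a label, optionally followed by a button that opens a link.
pub struct InstructionListItem {
    label: String,
    button_label: Option<String>,
    button_link: Option<String>,
}

impl InstructionListItem {
    pub fn new(
        label: impl Into<String>,
        button_label: Option<String>,
        button_link: Option<String>,
    ) -> Self {
        Self {
            label: label.into(),
            button_label,
            button_link,
        }
    }
}
```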
| Before | After |
|--------|--------|
| <img
src="https://github.com/user-attachments/assets/ffd1ac27-b1f4-450d-abf5-079285fc9877"
width="700px" /> | <img
src="https://github.com/user-attachments/assets/28fb9d0d-205d-45d8-9e43-1aaa947adc96"
width="700px" /> |
Release Notes:
- N/A
This PR removes the dependencies on the individual model provider crates
from the `language_model` crate.
The various conversion methods for converting a `LanguageModelRequest`
into its provider-specific request type have been inlined into the
various provider modules in the `language_models` crate.
The model providers we provide via Zed's cloud offering get to stay, for
now.
Release Notes:
- N/A
While investigating #24896, I noticed two issues:
1. The default configuration for the `zed.dev` provider was using the
wrong string for Claude 3.5 Sonnet. This meant the provider would always
show up as not configured until the user selected a model from the model
picker, because we couldn't deserialize that string to a valid
`anthropic::Model` enum variant.
2. When clicking on `Open New Chat`/`Start New Thread` in the provider
configuration, we would select Claude 3.5 Haiku by default instead of
Claude 3.5 Sonnet.
Release Notes:
- Fixed some issues that caused AI providers to sometimes be
misconfigured.
This PR adds a new `CredentialsProvider` trait that abstracts over
interacting with the system keychain.
We had previously introduced a version of this scoped just to Zed auth
in https://github.com/zed-industries/zed/pull/11505.
However, after landing https://github.com/zed-industries/zed/pull/25123,
we now have a similar issue with the credentials for language model
providers, which are also stored in the keychain (and thus also produce a
flood of popups when running a development build of Zed).
This PR takes the existing approach and makes it more generic, such that
we can use it everywhere that we need to read/store credentials in the
keychain.
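For illustration, the trait might look roughly like this (signatures simplified to be synchronous; the real version goes through GPUI's async APIs):

```rust
use anyhow::Result;

// Abstracts over where credentials live: the system keychain in production,
// a local file for development builds.
pub trait CredentialsProvider {
    fn read_credentials(&self, url: &str) -> Result<Option<(String, Vec<u8>)>>;
    fn write_credentials(&self, url: &str, username: &str, password: &[u8]) -> Result<()>;
    fn delete_credentials(&self, url: &str) -> Result<()>;
}
```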
There are still two credential provider implementations:
- `KeychainCredentialsProvider` will interact with the system keychain
(using the existing GPUI APIs)
- `DevelopmentCredentialsProvider` will use a local file on the file
system
We only use the `DevelopmentCredentialsProvider` when:
1. We are running a development build of Zed
2. The `ZED_DEVELOPMENT_AUTH` environment variable is set
- I am considering removing the need for this and making it the default,
but that will be explored in a follow-up PR.
Release Notes:
- N/A
This PR updates the `LanguageModelProvider::authenticate` method to
return an `AuthenticateError` instead of an `anyhow::Error`.
This allows us to model the "credentials not found" state explicitly as
`AuthenticateError::CredentialsNotFound`, which enables the caller to
check for this state and act accordingly.
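A sketch of the error type (the `CredentialsNotFound` variant is from this PR; the catch-all variant is an assumption):

```rust
// Callers can match on the missing-credentials case explicitly instead of
// string-matching an anyhow::Error.
#[derive(Debug, thiserror::Error)]
pub enum AuthenticateError {
    #[error("credentials not found")]
    CredentialsNotFound,
    #[error(transparent)]
    Other(#[from] anyhow::Error),
}
```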
Planning to use this in #25123 to silence errors about missing
credentials when authenticating providers in the background.
Release Notes:
- N/A
Done automatically with
> ast-grep -p '$A.background_executor().spawn($B)' -r
'$A.background_spawn($B)' --update-all --globs "\!crates/gpui"
Followed by:
* `cargo fmt`
* Removing some trailing whitespace (unexpectedly needed)
* Manually adding imports of `gpui::{AppContext as _}`, which provides
`background_spawn`
* Adding `AppContext as _` to existing uses of `AppContext`
Release Notes:
- N/A
This PR makes some fairly light UI adjustments to the Assistant 2
settings view. The Prompt Library section should feature the default
prompts in the future, which is why it's been separated out that way.
<img width="800" alt="Screenshot 2025-01-29 at 2 59 59 PM"
src="https://github.com/user-attachments/assets/7b033bde-51ab-44d5-9e53-3f72b8ff5f51"
/>
Release Notes:
- N/A
- Added support for DeepSeek as a new language model provider in the Zed
Assistant.
- Implemented streaming API support for real-time responses from
DeepSeek models.
- Added a configuration UI for DeepSeek API key management and settings.
- Updated documentation with detailed setup instructions for DeepSeek
integration.
- Added DeepSeek-specific icons and model definitions for seamless
integration into the Zed UI.
- Integrated DeepSeek into the language model registry, making it
available alongside other providers like OpenAI and Anthropic.
Release Notes:
- Added support for DeepSeek to the Assistant.
---------
Co-authored-by: Marshall Bowers <git@maxdeviant.com>