This PR removes the dependency on the `ui` crate from the
`language_model` crate.
We were only depending on it to import `IconName`—which now lives in
`icons`—and some re-exported GPUI items.
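For illustration, the import swap looks like this (exact paths are assumptions):

```rust
// Illustrative import change only; exact paths are assumptions.
// Before: `language_model` reached through the `ui` crate:
// use ui::IconName;
// After: the icon type comes from the dedicated `icons` crate:
use icons::IconName;
```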
Release Notes:
- N/A
Closes #25671
Release Notes:
- Added support for `claude-3-7-sonnet-thinking` in the assistant panel
---------
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Agus Zubiaga <hi@aguz.me>
This PR updates the client-side checks to give Zed AI users access to
Claude 3.7 Sonnet.
Requires https://github.com/zed-industries/zed/pull/25576 to be
deployed.
Release Notes:
- Added support for Claude 3.7 Sonnet to Zed AI.
This PR removes nearly all of the dependents of the `language_models` crate.
The following types have been moved from `language_models` to
`language_model` to facilitate this:
- `LlmApiToken`
- `RefreshLlmTokenListener`
- `MaxMonthlySpendReachedError`
- `PaymentRequiredError`
With this change, only `zed` now depends on `language_models`.
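For illustration, call sites change along these lines (assuming the moved types are exported from the crate root):

```rust
// Before (illustrative): these lived in `language_models`.
// use language_models::{LlmApiToken, RefreshLlmTokenListener};

// After: they are imported from `language_model`, so crates other than
// `zed` no longer need `language_models` at all.
use language_model::{
    LlmApiToken, MaxMonthlySpendReachedError, PaymentRequiredError,
    RefreshLlmTokenListener,
};
```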
Release Notes:
- N/A
This PR removes the dependencies on the individual model provider crates
from the `language_model` crate.
The conversion methods for turning a `LanguageModelRequest` into each
provider-specific request type have been inlined into the corresponding
provider modules in the `language_models` crate (see the sketch below).
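Roughly, the new shape looks like this; all names below are illustrative, not the actual Zed signatures:

```rust
// Illustrative only: a provider module in `language_models` owning its
// own conversion, so `language_model` needs no provider crates.
pub struct LanguageModelRequest {
    pub messages: Vec<String>,
    pub temperature: f32,
}

mod anthropic_provider {
    use super::LanguageModelRequest;

    pub struct AnthropicRequest {
        pub messages: Vec<String>,
        pub temperature: f32,
        pub max_tokens: u64,
    }

    // Previously something like `request.into_anthropic(...)` on the
    // shared type; now inlined here, next to the code that uses it.
    pub fn into_anthropic(request: LanguageModelRequest, max_tokens: u64) -> AnthropicRequest {
        AnthropicRequest {
            messages: request.messages,
            temperature: request.temperature,
            max_tokens,
        }
    }
}
```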
The model providers we provide via Zed's cloud offering get to stay, for
now.
Release Notes:
- N/A
Add support for the newly released Gemini 2.0 models. Google announced this new family of models earlier this week (2025-02-05).
Release Notes:
- Added support for Google's new Gemini 2.0 models.
Release Notes:
- Added support for Google's Gemini 2.0 Flash experimental model.
Note:
Weirdly enough, the model is slow on small-talk responses like 'hi' (in
my tests) but very fast on prompts that need more tokens, like 'write me
a snake game in Python'. Likely an API problem.
TESTED ONLY ON WINDOWS! I would test further, but I don't have Linux
installed and don't have a Mac. It will likely work everywhere.
Why?:
I think Gemini 2.0 Flash is an incredibly good model for coding and
following instructions, and it would be nice to have it in the editor. I
made as few changes as possible while adding the model and streaming
validation. I think the commits are worth merging, as they bring good
improvements.
---------
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Release Notes:
- Added support for OpenAI o1-mini and o1-preview models.
---------
Co-authored-by: Jason Mancuso <7891333+jvmncs@users.noreply.github.com>
Co-authored-by: Bennet <bennet@zed.dev>
Add `/auto` behind a feature flag that's disabled for now, even for
staff.
We've decided on a different design for context inference, but there are
parts of /auto that will be useful for that, so we want them in the code
base even if they're unused for now.
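For context, here's a minimal sketch of how a flag like this can be declared, loosely modeled on the `feature_flags` crate (trait and method names are assumptions):

```rust
// Hypothetical flag declaration: off by default, and explicitly opted
// out of the usual "staff always get flags" behavior.
trait FeatureFlag {
    const NAME: &'static str;

    /// Most flags are enabled for staff automatically.
    fn enabled_for_staff() -> bool {
        true
    }
}

struct AutoCommandFeatureFlag;

impl FeatureFlag for AutoCommandFeatureFlag {
    const NAME: &'static str = "auto-command";

    // Disabled for now, even for staff.
    fn enabled_for_staff() -> bool {
        false
    }
}
```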
Release Notes:
- N/A
---------
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
This commit adds a custom icon for Anthropic hosted models.


- Adding a new SVG icon for Anthropic hosted models.
- The new icon is located at: `assets/icons/ai_anthropic_hosted.svg`
- Updating the `LanguageModel` trait to include an optional icon method (sketched below)
- Implementing the icon method for CloudModel to return the custom icon
for Anthropic hosted models
- Updating the UI components to use the model-specific icon when
available
- Adding a new IconName variant for the Anthropic hosted icon
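Below is a compressed, self-contained sketch of that trait change; the variant and type names are illustrative rather than Zed's exact API:

```rust
#[derive(Clone, Copy, Debug)]
enum IconName {
    AiAnthropic,
    AiAnthropicHosted, // maps to assets/icons/ai_anthropic_hosted.svg
}

trait LanguageModel {
    fn name(&self) -> String;

    /// Optional model-specific icon; `None` means "use the default".
    fn icon(&self) -> Option<IconName> {
        None
    }
}

enum CloudModel {
    AnthropicHosted,
    Other,
}

impl LanguageModel for CloudModel {
    fn name(&self) -> String {
        match self {
            CloudModel::AnthropicHosted => "anthropic-hosted".into(),
            CloudModel::Other => "other".into(),
        }
    }

    fn icon(&self) -> Option<IconName> {
        match self {
            // Anthropic models hosted by Zed get the custom icon.
            CloudModel::AnthropicHosted => Some(IconName::AiAnthropicHosted),
            CloudModel::Other => None,
        }
    }
}
```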
We should change the non-hosted icon in some small way to distinguish it
from the hosted version. I duplicated the path for now so we can
hopefully add it for the next release.
Release Notes:
- N/A
This PR updates the `LanguageModel` trait with a new method for denoting
the availability of a model.
Right now we have two variants (sketched below):
- `Public` for models that have no additional restrictions (other than
their respective setup/authentication requirements)
- `RequiresPlan` for models that require a specific Zed plan
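A minimal sketch of the new method, assuming a simplified `Plan` type (names approximate the description above):

```rust
// Hypothetical, simplified shape of the availability API.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Plan {
    Free,
    ZedPro,
}

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum LanguageModelAvailability {
    /// No restrictions other than the provider's setup/authentication.
    Public,
    /// Only available to users on the given Zed plan.
    RequiresPlan(Plan),
}

pub trait LanguageModel {
    /// Defaults to `Public`; cloud-hosted models can override this.
    fn availability(&self) -> LanguageModelAvailability {
        LanguageModelAvailability::Public
    }
}
```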
Release Notes:
- N/A
In this pull request, we change the zed.dev protocol so that we pass the
raw JSON for the specified provider directly to our server. This avoids
the need to define a protobuf message that's a superset of all these
formats.
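Conceptually, the envelope looks something like this; the type and field names below are invented for illustration:

```rust
use serde::Serialize;

// Invented names: a provider-specific body serialized client-side...
#[derive(Serialize)]
struct AnthropicRequestBody<'a> {
    model: &'a str,
    max_tokens: u32,
}

// ...and a single generic envelope instead of a protobuf message that's
// a superset of every provider's request format.
struct CompletionEnvelope {
    provider: String,     // e.g. "anthropic"
    request_json: String, // raw JSON, passed through to the provider
}

fn build_envelope(model: &str) -> serde_json::Result<CompletionEnvelope> {
    let body = AnthropicRequestBody { model, max_tokens: 200_000 };
    Ok(CompletionEnvelope {
        provider: "anthropic".into(),
        request_json: serde_json::to_string(&body)?,
    })
}
```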
@bennetbo: We also changed the settings for `available_models` under
zed.dev to be a flat format, because the nesting seemed too confusing.
Can you help us upgrade the local provider configuration to be
consistent with this? We should do whatever we need to when parsing the
settings to make this simple for users, even if it's a bit more complex
on our end. We want to use versioning to avoid breaking existing users,
but we need to keep making progress.
```json
"zed.dev": {
"available_models": [
{
"provider": "anthropic",
"name": "some-newly-released-model-we-havent-added",
"max_tokens": 200000
}
]
}
```
Release Notes:
- N/A
---------
Co-authored-by: Nathan <nathan@zed.dev>
<img width="624" alt="image"
src="https://github.com/user-attachments/assets/f492b0bd-14c3-49e2-b2ff-dc78e52b0815">
- [x] Correctly set custom model token count
- [x] How to count tokens for Gemini models?
- [x] Feature flag zed.dev provider
- [x] Figure out how to configure custom models
- [ ] Update docs
Release Notes:
- Added support for quickly switching between multiple language model
providers in the assistant panel
---------
Co-authored-by: Antonio <antonio@zed.dev>
We will soon need `semantic_index` to be able to use
`CompletionProvider`. This is currently impossible due to a cyclic crate
dependency, because `CompletionProvider` lives in the `assistant` crate,
which depends on `semantic_index`.
This PR breaks the dependency cycle by extracting two crates out of
`assistant`: `language_model` and `completion`.
Only one piece of logic changed: [this
code](https://github.com/zed-industries/zed/commit/922fcaf5a6#diff-3857b3707687a4d585f1200eec4c34a7a079eae8d303b4ce5b4fce46234ace9fR61-R69).
* As of https://github.com/zed-industries/zed/pull/13276, whenever we
asked a given completion provider for its available models, OpenAI
providers would go and ask the global assistant settings whether the
user had configured an `available_models` setting, and if so, return
that.
* This PR changes it so that instead of eagerly asking the assistant
settings for this info (the new crate must not depend on `assistant`, or
else the dependency cycle would be back), OpenAI completion providers
now store the user-configured settings as part of their struct, and
whenever the settings change, we update the provider (see the sketch
below).
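A minimal sketch of that pattern, with hypothetical names:

```rust
// The provider keeps its own copy of the user-configured models instead
// of reading global assistant settings, so this crate never has to
// depend on `assistant`.
#[derive(Clone, Default)]
struct OpenAiSettings {
    available_models: Vec<String>,
}

struct OpenAiCompletionProvider {
    settings: OpenAiSettings,
}

impl OpenAiCompletionProvider {
    fn available_models(&self) -> impl Iterator<Item = &str> {
        self.settings.available_models.iter().map(String::as_str)
    }

    /// Called from a settings-changed observer owned by the app layer.
    fn update_settings(&mut self, settings: OpenAiSettings) {
        self.settings = settings;
    }
}
```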
In theory, this change should not change user-visible behavior...but
since it's the only change in this large PR that's more than just moving
code around, I'm mentioning it here in case there's an unexpected
regression in practice! (cc @amtoaer in case you'd like to try out this
branch and verify that the feature is still working the way you expect.)
Release Notes:
- N/A
---------
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>