Closes #31438
Release Notes:
- agent: Fixed an edge case where the request would fail when using
Claude and multiple images were attached
---------
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
Removes the load_model trait method and its implementations in Ollama
and LM Studio providers, along with associated preload_model functions
and unused imports.
Release Notes:
- N/A
This pull request adds full integration with OpenRouter, allowing users
to access a wide variety of language models through a single API key.
**Implementation Details:**
* **Provider Registration:** Registers OpenRouter as a new language
model provider within the application's model registry. This includes UI
for API key authentication, token counting, streaming completions, and
tool-call handling.
* **Dedicated Crate:** Adds a new `open_router` crate to manage
interactions with the OpenRouter HTTP API, including model discovery and
streaming helpers.
* **UI & Configuration:** Extends workspace manifests, the settings
schema, icons, and default configurations to surface the OpenRouter
provider and its settings within the UI.
* **Readability:** Reformats JSON arrays within the settings files for
improved readability.
**Design Decisions & Discussion Points:**
* **Code Reuse:** I leveraged much of the existing logic from the
`openai` provider integration due to the significant similarities
between the OpenAI and OpenRouter API specifications.
* **Default Model:** I set the default model to `openrouter/auto`. This
model automatically routes user prompts to the most suitable underlying
model on OpenRouter, providing a convenient starting point.
* **Model Population Strategy:**
* <strike>I've implemented dynamic population of available models by
querying the OpenRouter API upon initialization.
* Currently, this involves three separate API calls: one for all models,
one for tool-use models, and one for models good at programming.
* The data from the tool-use API call sets a `tool_use` flag for
relevant models.
* The data from the programming models API call is used to sort the
list, prioritizing coding-focused models in the dropdown.</strike>
* <strike>**Feedback Welcome:** I acknowledge this multi-call approach
is API-intensive. I am open to feedback and alternative implementation
suggestions if the team believes this can be optimized.</strike>
* **Update: this has now been simplified to a single API call (see the sketch after this list).**
* **UI/UX Considerations:**
* <strike>Authentication Method: Currently, I've implemented the
standard API key input in settings, similar to other providers like
OpenAI/Anthropic. However, OpenRouter also supports OAuth 2.0 with PKCE.
This could offer a potentially smoother, more integrated setup
experience for users (e.g., clicking a button to authorize instead of
copy-pasting a key). Should we prioritize implementing OAuth PKCE now,
or perhaps add it as an alternative option later?</strike> (PKCE is not
straightforward, so I'm skipping it for now; we can add support for it
and work on it later.)
* <strike>To visually distinguish models better suited for programming,
I've considered adding a marker (e.g., `</>` or `🧠`) next to their
names. Thoughts on this proposal?</strike> (This would require changes
and discussion across model providers, and doesn't fall under the scope
of the current PR.)
* OpenRouter offers 300+ models. The current implementation loads all of
them. **Feedback Needed:** Should we refine this list or implement more
sophisticated filtering/categorization for better usability?
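As a rough illustration of the single-call approach mentioned above (a sketch only; the field names follow the public OpenRouter `GET /api/v1/models` response and are not Zed's actual types), one listing is enough to derive tool support per model:

```rust
use serde::Deserialize;

// Illustrative subset of the OpenRouter models response; treating a
// `supported_parameters` entry of "tools" as tool support is an assumption.
#[derive(Deserialize)]
struct ModelsResponse {
    data: Vec<Model>,
}

#[derive(Deserialize)]
struct Model {
    id: String,
    name: Option<String>,
    context_length: Option<u64>,
    #[serde(default)]
    supported_parameters: Vec<String>,
}

impl Model {
    fn supports_tools(&self) -> bool {
        self.supported_parameters.iter().any(|p| p == "tools")
    }
}
```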
**Motivation:**
This integration directly addresses one of the most highly upvoted
feature requests/discussions within the Zed community. Adding OpenRouter
support significantly expands the range of AI models accessible to
users.
I welcome feedback from the Zed team on this implementation and the
design choices made. I am eager to refine this feature and make it
available to users.
ISSUES: https://github.com/zed-industries/zed/discussions/16576
Release Notes:
- Added support for OpenRouter as a language model provider.
---------
Signed-off-by: Umesh Yadav <umesh4257@gmail.com>
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
Closes #30535
Release Notes:
- AWS Bedrock: Add support for Meta Llama 4 Scout and Maverick models.
- AWS Bedrock: Fixed cross-region inference for all regions.
- AWS Bedrock: Updated all models available through Cross Region
inference.
---------
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
Hello,
This is my first contribution so apologies if I'm not following the
proper process (I haven't seen anything special in
https://github.com/zed-industries/zed/blob/main/CONTRIBUTING.md). Also,
I have tested my changes manually, but I could not figure out an easy we
to instantiate a `LanguageModelSelector` in the unit tests, so I didn't
write a test. If you can provide some guidance I'd be happy to write a
test.
---
If the user has configured models with custom names via `display_name`,
we want the Ollama models to be sorted based on the name that is
actually displayed.
~~The original issue is only about ollama but this change will also
affect the other providers.~~
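A minimal sketch of the intended ordering (the types here are illustrative, not Zed's actual structs):

```rust
struct OllamaModel {
    name: String,
    display_name: Option<String>,
}

fn sort_models(models: &mut [OllamaModel]) {
    // Sort on the string the picker actually shows, falling back to the
    // raw model id when no `display_name` override is configured.
    models.sort_by_key(|model| {
        model
            .display_name
            .clone()
            .unwrap_or_else(|| model.name.clone())
    });
}
```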
Closes #30854
Release Notes:
- Ollama: Changed models to be sorted by name.
Closes #31243
As described in my issue, the [thinking
budget](https://ai.google.dev/gemini-api/docs/thinking) gets
automatically chosen by Gemini unless it is specifically set to
something. In order to have fast responses (inline assistant) I prefer
to set it to 0.
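For reference, the Gemini API exposes this under `generationConfig.thinkingConfig.thinkingBudget` (per the docs linked above); a request-side sketch with illustrative Rust types might look like:

```rust
use serde::Serialize;

// Illustrative types only; Zed's actual request structs differ.
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct GenerationConfig {
    #[serde(skip_serializing_if = "Option::is_none")]
    thinking_config: Option<ThinkingConfig>,
}

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct ThinkingConfig {
    // A budget of 0 disables thinking, which keeps inline-assistant
    // responses fast; leaving it unset lets Gemini choose automatically.
    thinking_budget: u32,
}
```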
Release Notes:
- ai: Added `thinking` mode for custom Google models with configurable
token budget
---------
Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
The [DeepSeek function calling
API](https://api-docs.deepseek.com/guides/function_calling)
has been released, and it is the same as OpenAI's.
Release Notes:
- Added tool calling support for DeepSeek models
---------
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
This PR updates how we handle Ollama responses, leveraging the new
[v0.9.0](https://github.com/ollama/ollama/releases/tag/v0.9.0) release.
Previously, thinking text was embedded within the model's main content,
leading to it appearing directly in the agent's response. Now, thinking
content is provided as a separate parameter, allowing us to display it
correctly within the agent panel, similar to other providers. I have
tested this with qwen3:8b and it works nicely. ~~We can release this once
the Ollama release is stable.~~ It's now released as stable.
<img width="433" alt="image"
src="https://github.com/user-attachments/assets/2983ef06-6679-4033-82c2-231ea9cd6434"
/>
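A sketch of the relevant part of the response (an illustrative subset; with Ollama v0.9.0 the thinking text arrives in its own field rather than inside `content`):

```rust
use serde::Deserialize;

// Illustrative subset of an Ollama chat response message.
#[derive(Deserialize)]
struct ChatMessage {
    role: String,
    content: String,
    // New in v0.9.0: thinking text is separated out, so the agent panel can
    // render it like other providers instead of mixing it into the reply.
    #[serde(default)]
    thinking: Option<String>,
}
```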
Release Notes:
- Added thinking support for Ollama
---------
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
We report the total number of input tokens by summing the numbers of
1. Prompt tokens
2. Cached tokens
But Google API returns prompt tokens (1) that already include cached
tokens (2), so we were double counting tokens in some cases.
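In other words, the cached count is a subset of the prompt count and must not be added on top; a simplified sketch of the corrected arithmetic (field names follow Gemini's `usageMetadata` shape):

```rust
struct UsageMetadata {
    prompt_token_count: u64,
    cached_content_token_count: u64,
}

// `prompt_token_count` already includes the cached tokens, so the total
// input is just the prompt count; adding the cached count double-counts.
fn input_tokens(usage: &UsageMetadata) -> u64 {
    usage.prompt_token_count
}

fn uncached_input_tokens(usage: &UsageMetadata) -> u64 {
    usage
        .prompt_token_count
        .saturating_sub(usage.cached_content_token_count)
}
```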
Release Notes:
- Fixed bug with double-counting tokens in Gemini
This PR adds a new `intent` field to completion requests to assist in
categorizing them correctly.
Release Notes:
- N/A
---------
Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
This expands our deserialization of JSON from models to be more tolerant
of different variations that the model may send, including
capitalization, wrapping things in objects vs. being plain strings, etc.
Also, when deserialization fails, the error now includes the entire JSON,
so we can see what failed to deserialize. (Previously these errors were
very unhelpful at diagnosing the problem.)
Finally, also removes the `WrappedText` variant since the custom
deserializer just turns that style of JSON into a normal `Text` variant.
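To illustrate the kind of tolerance involved (a minimal sketch, not the actual deserializer), an untagged enum lets serde accept either a bare string or an object-wrapped form, and the failure path carries the offending JSON:

```rust
use anyhow::Context as _;
use serde::Deserialize;

// Accepts both `"hello"` and `{ "type": "text", "text": "hello" }`.
#[derive(Deserialize)]
#[serde(untagged)]
enum TextOrWrapped {
    Plain(String),
    Wrapped { text: String },
}

impl TextOrWrapped {
    fn into_text(self) -> String {
        match self {
            TextOrWrapped::Plain(text) | TextOrWrapped::Wrapped { text } => text,
        }
    }
}

fn parse(raw: &str) -> anyhow::Result<String> {
    serde_json::from_str::<TextOrWrapped>(raw)
        .map(TextOrWrapped::into_text)
        // Include the raw JSON in the error so failures are diagnosable.
        .with_context(|| format!("failed to deserialize model output: {raw}"))
}
```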
Release Notes:
- N/A
Closes #30004
**Quick demo:**
https://github.com/user-attachments/assets/0ac93851-81d7-4128-a34b-1f3ae4bcff6d
**Additional notes:**
I've tried to stick to existing code in OpenAI provider as much as
possible without changing much to keep the diff small.
This PR is done in collaboration with @yagil from LM Studio. We agreed
upon the format in which LM Studio will return information about tool
use support for the model in the upcoming version. As of the current
stable version nothing changes for users, but once they update to a newer
LM Studio, tool use gets enabled for them automatically. I think this is
much better UX than defaulting to true right now.
Release Notes:
- Added support for tool calls to LM Studio provider
---------
Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
Fixes regression caused by:
https://github.com/zed-industries/zed/pull/30639
Assistant messages can come back with no content, and we no longer
allowed that in the deserialization.
Release Notes:
- open_ai: Fixed a deserialization issue when the assistant content was empty
This PR updates the Zed LLM provider to fetch the available models from
the server instead of hard-coding them in the binary.
Release Notes:
- Updated the Zed provider to fetch the list of available language
models from the server.
This PR updates the default/recommended models for the Anthropic and Zed
providers to be Claude Sonnet 4.
Release Notes:
- Updated default/recommended Anthropic models to Claude Sonnet 4.
This PR adds support for [Claude
4](https://www.anthropic.com/news/claude-4).
Release Notes:
- Added support for Claude Opus 4 and Claude Sonnet 4.
---------
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
https://github.com/zed-industries/zed/issues/30972 brought up another
case where our context is not enough to track the actual source of the
issue: we get a general top-level error without inner error.
The reason for this was `.ok_or_else(|| anyhow!("failed to read HEAD
SHA"))?; ` on the top level.
The PR finally reworks the way we use anyhow to reduce such issues (or
at least make it simpler to bubble them up later in a fix).
On top of that, it uses a few more anyhow methods for better readability.
* `.ok_or_else(|| anyhow!("..."))`, `map_err` and other similar error
conversion/option reporting cases are replaced with `context` and
`with_context` calls
* in addition to that, various `anyhow!("failed to do ...")` calls are
replaced with `.context("Doing ...")` messages instead, to remove the
parasitic `failed to` text
* `anyhow::ensure!` is used instead of `if ... { return Err(...); }`
calls
* `anyhow::bail!` is used instead of `return Err(anyhow!(...));`
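A small before/after sketch of these patterns (illustrative code, not a diff from the PR):

```rust
use anyhow::{bail, ensure, Context as _, Result};

fn head_sha(head: Option<String>, expected_len: usize) -> Result<String> {
    // Before: head.ok_or_else(|| anyhow!("failed to read HEAD SHA"))?
    let sha = head.context("reading HEAD SHA")?;

    // Before: if sha.len() != expected_len { return Err(anyhow!("...")); }
    ensure!(sha.len() == expected_len, "unexpected SHA length {}", sha.len());

    // Before: return Err(anyhow!("..."));
    if sha.chars().any(|c| !c.is_ascii_hexdigit()) {
        bail!("SHA contains non-hex characters");
    }

    Ok(sha)
}
```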
Release Notes:
- N/A
Some providers sometimes send `{ "type": "text", "text": ... }` instead
of just the text as a string. Now we accept those instead of erroring.
Release Notes:
- N/A
Closes https://github.com/zed-industries/zed/issues/29855
Implement tool use handling in Mistral provider, including mapping tool
call events and updating request construction. Add support for
tool_choice and parallel_tool_calls in Mistral API requests.
This works fine with all the existing models. I didn't touch anything
else, but going forward the list of models should be fetched via the
Mistral models API, and tool call support, parallel tool calls, etc.
should be derived from the model data in that response.
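For illustration, the request-side additions amount to a couple of optional fields along these lines (illustrative types; the real structs live in the mistral crate):

```rust
use serde::Serialize;

#[derive(Serialize)]
struct ChatRequest {
    model: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    tools: Option<Vec<serde_json::Value>>,
    // e.g. "auto", "any", or "none" per the Mistral API.
    #[serde(skip_serializing_if = "Option::is_none")]
    tool_choice: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    parallel_tool_calls: Option<bool>,
}
```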
<img width="547" alt="Screenshot 2025-05-06 at 4 52 37 PM"
src="https://github.com/user-attachments/assets/4c08b544-1174-40cc-a40d-522989953448"
/>
Tasks:
- [x] Add tool call support
- [x] Auto Fetch models using mistral api
- [x] Add tests for mistral crates.
- [x] Fix mistral configurations for llm providers.
Release Notes:
- agent: Added tool call support for existing Mistral models
---------
Co-authored-by: Peter Tripp <peter@zed.dev>
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
I was able to get this fix in upstream, so now we can have simpler code
paths for our model selection.
I also added a test to catch if this would cause a bug again in the
future.
Release Notes:
- N/A
This PR removes an instance of marking a local `Subscription` binding as
unused.
While we `_` the field to prevent unused warnings, the locals shouldn't
be marked as unused as we do use them (and want them to participate in
usage tracking).
Release Notes:
- N/A
Thread doesn't run pending tools when `stop_reason` is not `ToolUse`.
Perhaps we should change that so that it always runs pending tools if
there are some, but for now this change just fixes setting `stop_reason`
for Google models.
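A minimal sketch of the fix, with hypothetical names (assuming, as the PR implies, that the finish reason Gemini reports doesn't by itself indicate tool use, so it has to be inferred from the streamed function-call parts):

```rust
enum StopReason {
    EndTurn,
    ToolUse,
    MaxTokens,
}

// Hypothetical helper: derive the stop reason from what was actually streamed.
fn stop_reason(saw_function_call: bool, hit_token_limit: bool) -> StopReason {
    if saw_function_call {
        // Without this, the thread never runs the pending tools because the
        // provider-reported finish reason looks like a normal end of turn.
        StopReason::ToolUse
    } else if hit_token_limit {
        StopReason::MaxTokens
    } else {
        StopReason::EndTurn
    }
}
```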
Release Notes:
- N/A
This is very basic support for them. There are a number of other TODOs
before this is really a first-class supported feature, so not adding any
release notes for it; for now, this PR just makes it so that if
read_file tries to read a PNG (which has come up in practice), it at
least correctly sends it to Anthropic instead of messing up.
This also lays the groundwork for future PRs for more first-class
support for images in tool calls across more image file formats and LLM
providers.
Release Notes:
- N/A
---------
Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Agus Zubiaga <agus@zed.dev>
Problem Statement:
Support for image analysis (vision) is currently restricted to Anthropic
and Gemini models. This limits users who wish to leverage vision
capabilities available in other models, such as Copilot, for tasks like
attaching image context within the agent message editor.
Proposed Change:
This PR extends vision support to include Copilot models that are
already equipped with vision capabilities. This integration allows users
to attach and analyze images using supported Copilot models via the agent
message editor.
Scope Limitation:
This PR does not implement controls within the message editor to ensure
that image context (e.g., through copy-paste or attachment) is
exclusively enabled or prompted only when a vision-supported model is
active. Long term, the message editor should have access to each model's
vision capability and prevent users from attaching images when it isn't
supported, e.g. by greying out the context and explaining that it's not
supported, for both copy-paste and file/directory search.
Closes #30076
Release Notes:
- Added vision support for Copilot Chat models
---------
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
I noticed the discussion in #28881, and had thought of exactly the same
thing a few days prior.
This implementation should preserve existing functionality fairly well.
I've added a dependency (serde_with) to allow the deserializer to skip
models which cannot be deserialized, which could occur if a future
provider, for instance, is added. Without this modification, such a
change could break all models. If extra dependencies aren't desired, a
manual implementation could be used instead.
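A minimal sketch of how `serde_with` handles this (illustrative types, not the actual Copilot Chat structs):

```rust
use serde::Deserialize;
use serde_with::{serde_as, VecSkipError};

#[derive(Deserialize)]
struct Model {
    id: String,
    vendor: String,
}

#[serde_as]
#[derive(Deserialize)]
struct ModelsResponse {
    // Entries that fail to deserialize (e.g. models from a provider added in
    // the future) are skipped instead of failing the whole list.
    #[serde_as(as = "VecSkipError<_>")]
    data: Vec<Model>,
}
```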
- Closes #29369
Release Notes:
- Dynamically detect available Copilot Chat models, including all models
with tool support
---------
Co-authored-by: AidanV <aidanvanduyne@gmail.com>
Co-authored-by: imumesh18 <umesh4257@gmail.com>
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
Co-authored-by: Agus Zubiaga <hi@aguz.me>
This was a particular problem in the Amazon Bedrock section (at least
for now) where there were multiple buttons and none of them actually
worked because they all had the same id.
Release Notes:
- agent: Fixed Amazon Bedrock settings link buttons not working.
Fixes https://github.com/zed-industries/zed/issues/30346
The model can output an empty string to indicate the absence of
arguments, which can't be parsed as a `serde_json::Value`. When that
happens, we now create an empty object instead on behalf of the model.
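A sketch of the workaround (a hypothetical helper, not the exact code):

```rust
fn parse_tool_arguments(raw: &str) -> serde_json::Result<serde_json::Value> {
    if raw.trim().is_empty() {
        // The model sent no arguments at all; substitute an empty object so
        // the tool call can still be dispatched.
        Ok(serde_json::Value::Object(serde_json::Map::new()))
    } else {
        serde_json::from_str(raw)
    }
}
```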
Release Notes:
- Fixed a bug that prevented Copilot models from calling the
`diagnostic` tool.
This PR removes the individual URL overrides for the LLM service.
We initially had `ZED_PREDICT_EDITS_URL` to allow for directing traffic
to the LLM Worker back when there was still the split of the
Collab-based LLM Service and the Cloudflare-based LLM Worker.
But now that all of the LLM functionality has been moved into the
Worker, we can just direct all traffic there.
Release Notes:
- N/A
tiktoken_rs is a bit behind (and even upstream tiktoken doesn't have all
of these models)
We were incorrectly using the cl100k tokenizer for some models that
actually use the o200k tokenizers. So that is updated.
I also made the match arms specific so that we do a better job of
catching whether or not tiktoken-rs accurately supports new models we
add in.
I will also do a PR upstream to see if we can move some of this logic
back out if tiktoken better supports the newer models.
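Roughly, the selection looks like the sketch below (the model-to-encoding mapping follows OpenAI's published tokenizer assignments; the exact match arms in Zed may differ):

```rust
use anyhow::Result;
use tiktoken_rs::CoreBPE;

// Newer models (gpt-4o, gpt-4.1, o1, o3) use o200k_base, while older chat
// models (gpt-4, gpt-3.5-turbo) use cl100k_base.
fn tokenizer_for(model: &str) -> Result<CoreBPE> {
    if model.starts_with("gpt-4o")
        || model.starts_with("gpt-4.1")
        || model.starts_with("o1")
        || model.starts_with("o3")
    {
        tiktoken_rs::o200k_base()
    } else {
        tiktoken_rs::cl100k_base()
    }
}
```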
Release Notes:
- Improved tokenizer support for OpenAI models.
Copilot Chat still returns a 400 if the dummy tool uses the `{}` schema.
This is a follow-up to https://github.com/zed-industries/zed/pull/30007.
Release Notes:
- Fixed a bug where agent edits would fail when using GitHub Copilot
Chat.
Co-authored-by: Agus Zubiaga <hi@aguz.me>
This PR updates the copy around the Zed Pro description to be more
accurate.
Release Notes:
- agent: Updated some copy about Zed Pro in the configuration view.
This PR makes it so we send up an `x-zed-version` header with the
client's version when making a request to llm.zed.dev for edit
predictions and completions.
Release Notes:
- N/A
Adds a new `agent.model_parameters` setting that allows the user to
specify a custom temperature for a provider AND/OR model:
```json5
"model_parameters": [
// To set parameters for all requests to OpenAI models:
{
"provider": "openai",
"temperature": 0.5
},
// To set parameters for all requests in general:
{
"temperature": 0
},
// To set parameters for a specific provider and model:
{
"provider": "zed.dev",
"model": "claude-3-7-sonnet-latest",
"temperature": 1.0
}
],
```
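Conceptually, each request then picks up its temperature from the matching entries, something like the sketch below (simplified; the actual precedence rules live in the agent settings code):

```rust
struct ModelParameters {
    provider: Option<String>,
    model: Option<String>,
    temperature: Option<f32>,
}

// An entry matches when its provider/model filters are either absent or equal
// to the request's provider/model; here, later entries win ties.
fn temperature_for(params: &[ModelParameters], provider: &str, model: &str) -> Option<f32> {
    params
        .iter()
        .filter(|p| {
            p.provider.as_deref().map_or(true, |v| v == provider)
                && p.model.as_deref().map_or(true, |v| v == model)
        })
        .filter_map(|p| p.temperature)
        .last()
}
```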
Release Notes:
- agent: Allow customizing temperature by provider/model
---------
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
Closes #29781
Tested this with llama3, gemma3 and qwen3.
This is a breaking change: once these changes land in a future version of
Zed, we will require at least LM Studio >= 0.3.15. For context on why it's
a breaking change, check out the issue: #29781.
What this doesn't try to solve:
* Tool calling and thinking text rendering. I will raise a separate PR for
these, as they are not required to make this PR work.
https://github.com/user-attachments/assets/945f9c73-6323-4a88-92e2-2219b760a249
Release Notes:
- lmstudio: Fixed Zed support for LMStudio >= v0.3.15 (breaking change -- older versions are no longer supported).
---------
Co-authored-by: Peter Tripp <peter@zed.dev>
The API will return a Bad Request (with no error message) when tools
were used previously in the conversation but no tools are provided as
part of a new request.
Inserting a dummy tool seems to circumvent this error.
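A sketch of the workaround with hypothetical request types (the placeholder uses a minimal object schema rather than a bare `{}`, which the API can also reject, as noted in the related Copilot Chat fix above):

```rust
use serde_json::json;

// Hypothetical request shape; the real Copilot Chat types differ.
struct ChatRequest {
    tools: Vec<serde_json::Value>,
    conversation_used_tools: bool,
}

fn ensure_tools_present(request: &mut ChatRequest) {
    // The API rejects requests that contain earlier tool calls/results but
    // declare no tools, so declare a harmless placeholder that is never used.
    if request.conversation_used_tools && request.tools.is_empty() {
        request.tools.push(json!({
            "type": "function",
            "function": {
                "name": "noop",
                "description": "Placeholder tool; never invoked.",
                "parameters": { "type": "object", "properties": {} }
            }
        }));
    }
}
```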
Release Notes:
- Fixed an error that could sometimes occur when editing using Copilot
Chat.
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>