Commit graph

204 commits

Author SHA1 Message Date
Bennet Bo Fenner
59aeede50d
vercel: Use proper model identifiers and add image support (#33377)
Follow up to previous PRs:
- Return `true` in `supports_images` - v0 supports images already
- Rename model id to match the exact version of the model `v0-1.5-md`
(For now we do not expose `sm`/`lg` variants since they don't seem to be
available via the API; see the sketch below)
- Provide autocompletion in settings for using `vercel` as a `provider`
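
Below is a minimal sketch of what the first two changes amount to, using a hypothetical `VercelModel` type rather than Zed's actual provider code:

```rust
// Hypothetical sketch; the real Vercel provider types in Zed differ.
#[derive(Debug, Clone, Copy)]
enum VercelModel {
    V0_1_5Md,
}

impl VercelModel {
    /// Exact model identifier sent to the API.
    fn id(&self) -> &'static str {
        match self {
            VercelModel::V0_1_5Md => "v0-1.5-md",
        }
    }

    /// v0 already supports image inputs.
    fn supports_images(&self) -> bool {
        true
    }
}

fn main() {
    let model = VercelModel::V0_1_5Md;
    println!("{} supports images: {}", model.id(), model.supports_images());
}
```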

Release Notes:

- N/A
2025-06-25 13:26:41 +00:00
Bennet Bo Fenner
18f1221a44
vercel: Reuse existing OpenAI code (#33362)
Follow up to #33292

Since Vercel's API is OpenAI compatible, we can reuse a bunch of code.

Release Notes:

- N/A
2025-06-25 15:04:43 +02:00
Shardul Vaidya
4396ac9dd6
bedrock: DeepSeek does not support receiving Reasoning Blocks (#33326)
Closes #32341

Release Notes:

- Fixed DeepSeek R1 errors for reasoning blocks being sent back to the model.
2025-06-25 14:51:25 +03:00
Vladimir Kuznichenkov
c6ff58675f
bedrock: Fix empty tool input on project diagnostic in bedrock (#33369)
Bedrock [does not accept][1] `null` as a JSON value input for the tool
call when called back.

Instead of passing null, we will pass back an empty object, which is
accepted by the API.
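
A minimal sketch of the workaround with `serde_json` (the helper name is hypothetical, not the actual crate code):

```rust
use serde_json::{json, Value};

/// Hypothetical helper: Bedrock rejects `null` as tool input, so substitute `{}`.
fn tool_input_for_bedrock(input: Value) -> Value {
    if input.is_null() { json!({}) } else { input }
}

fn main() {
    assert_eq!(tool_input_for_bedrock(Value::Null), json!({}));
    assert_eq!(
        tool_input_for_bedrock(json!({"path": "src/main.rs"})),
        json!({"path": "src/main.rs"})
    );
}
```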

Closes #33204

Release Notes:

- Fixed project diagnostic tool call for bedrock

[1]:
https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ToolUseBlock.html
2025-06-25 14:28:36 +03:00
Umesh Yadav
108162423d
language_models: Emit UsageUpdate events for token usage in DeepSeek and OpenAI (#33242)
Closes #ISSUE

Release Notes:

- N/A
2025-06-25 09:42:30 +02:00
Vladimir Kuznichenkov
098896146e
bedrock: Fix subsequent bedrock tool calls fail (#33174)
Closes #30714

The Bedrock Converse API expects to see tool options if at least one tool
was used in past messages of the conversation.

Right now, if `LanguageModelToolChoice::None` isn't supported, the edit agent
[removes][1] tools from the request. That breaks Bedrock's Converse API. As
proposed in [the issue][2], we won't drop the tool definitions but will
instead deny any tool call if the model responds with one.
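
A rough sketch of that approach with hypothetical stand-in types (not the actual provider code): keep the tool definitions in the request, and if the model emits a tool call anyway, answer it with a denial result.

```rust
// Hypothetical stand-ins for the real tool-call types.
struct ToolUse {
    id: String,
    name: String,
}

struct ToolResult {
    tool_use_id: String,
    content: String,
    is_error: bool,
}

/// The Converse API requires tool definitions once any tool was used earlier,
/// so instead of stripping them, deny any tool call the model still makes.
fn deny_tool_use(tool_use: &ToolUse) -> ToolResult {
    ToolResult {
        tool_use_id: tool_use.id.clone(),
        content: format!("Tool `{}` is not available right now.", tool_use.name),
        is_error: true,
    }
}

fn main() {
    let result = deny_tool_use(&ToolUse { id: "toolu_123".into(), name: "edit_file".into() });
    println!("{}: {}", result.tool_use_id, result.content);
    assert!(result.is_error);
}
```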

[1]:
fceba6c795/crates/assistant_tools/src/edit_agent.rs (L703)
[2]:
https://github.com/zed-industries/zed/issues/30714#issuecomment-2886422716

Release Notes:

- Fixed bedrock tool calls in edit mode
2025-06-25 10:37:07 +03:00
Bennet Bo Fenner
7be57baef0
agent: Fix issue with Anthropic thinking models (#33317)
cc @osyvokon 

We were seeing a bunch of errors in our backend when people were using
Claude models with thinking enabled.

In the logs we would see
> an error occurred while interacting with the Anthropic API:
invalid_request_error: messages.x.content.0.type: Expected `thinking` or
`redacted_thinking`, but found `text`. When `thinking` is enabled, a
final `assistant` message must start with a thinking block (preceeding
the lastmost set of `tool_use` and `tool_result` blocks). We recommend
you include thinking blocks from previous turns. To avoid this
requirement, disable `thinking`. Please consult our documentation at
https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking

However, this issue did not occur frequently and was not easily
reproducible. Turns out it was triggered by us not correctly handling
[Redacted Thinking
Blocks](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking#thinking-redaction).

I could consistently reproduce this issue by including this magic string:
`ANTHROPIC_MAGIC_STRING_TRIGGER_REDACTED_THINKING_46C9A13E193C177646C7398A98432ECCCE4C1253D5E2D82641AC0E52CC2876CB
` in the request, which forces `claude-3-7-sonnet` to emit redacted
thinking blocks (confusingly, the magic string does not seem to work for
`claude-sonnet-4`). As soon as we hit a tool call, Anthropic would return
an error.
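
A simplified, hypothetical sketch of the failure mode and fix (the block types are illustrative, not Zed's actual Anthropic types): redacted thinking blocks must be preserved and replayed verbatim, not filtered out.

```rust
// Illustrative content blocks; the real types live in Zed's anthropic crate.
#[derive(Clone, Debug, PartialEq)]
enum ContentBlock {
    Text(String),
    Thinking { thinking: String, signature: String },
    RedactedThinking { data: String },
}

/// Buggy behavior: dropping redacted thinking meant a later assistant message
/// no longer started with a thinking block, so the API rejected the request.
fn replay_blocks_buggy(blocks: &[ContentBlock]) -> Vec<ContentBlock> {
    blocks
        .iter()
        .filter(|block| !matches!(block, ContentBlock::RedactedThinking { .. }))
        .cloned()
        .collect()
}

/// Fix: send redacted thinking blocks back to the API exactly as received.
fn replay_blocks_fixed(blocks: &[ContentBlock]) -> Vec<ContentBlock> {
    blocks.to_vec()
}

fn main() {
    let blocks = vec![
        ContentBlock::RedactedThinking { data: "<opaque>".into() },
        ContentBlock::Text("Calling a tool now.".into()),
    ];
    assert_ne!(replay_blocks_buggy(&blocks), replay_blocks_fixed(&blocks));
}
```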

Thanks to @osyvokon for pointing me in the right direction 😄!


Release Notes:

- agent: Fixed an issue where Anthropic models would sometimes return an
error when thinking was enabled
2025-06-24 16:23:59 +00:00
Danilo Leal
94735aef69
Add support for Vercel as a language model provider (#33292)
Vercel v0 is an OpenAI-compatible model, so this is mostly a dupe of the
OpenAI provider files with some adaptations for v0, including going
ahead and using the custom endpoint for the API URL field.

Release Notes:

- Added support for Vercel as a language model provider.
2025-06-24 11:02:06 -03:00
Richard Feldman
c610ebfb03
Thread Anthropic errors into LanguageModelKnownError (#33261)
This PR is in preparation for doing automatic retries for certain
errors, e.g. Overloaded. It doesn't change behavior yet (aside from some
granularity of error messages shown to the user), but rather mostly
changes some error handling to be exhaustive enum matches instead of
`anyhow` downcasts, and leaves some comments for where the behavior
change will be in a future PR.
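
A small sketch of the shape of this change; the variants here are illustrative, not the actual `LanguageModelKnownError` definition:

```rust
use std::time::Duration;

// Illustrative variants only.
enum LanguageModelKnownError {
    Overloaded,
    RateLimitExceeded { retry_after: Duration },
    ContextWindowExceeded { tokens: u64 },
}

fn handle_known_error(error: &LanguageModelKnownError) -> String {
    // Exhaustive match: adding a variant later forces every call site to handle
    // it, unlike an `anyhow` downcast that silently falls through.
    match error {
        LanguageModelKnownError::Overloaded => {
            // A future PR would retry automatically here.
            "The provider is overloaded; please retry.".into()
        }
        LanguageModelKnownError::RateLimitExceeded { retry_after } => {
            format!("Rate limited; retry after {retry_after:?}.")
        }
        LanguageModelKnownError::ContextWindowExceeded { tokens } => {
            format!("Context window exceeded at {tokens} tokens.")
        }
    }
}

fn main() {
    println!("{}", handle_known_error(&LanguageModelKnownError::Overloaded));
}
```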

Release Notes:

- N/A
2025-06-23 18:48:26 +00:00
Peter Tripp
595f61f0d6
bedrock: Use Claude 3.0 Haiku where Haiku 3.5 is not available (#33214)
Closes: https://github.com/zed-industries/zed/issues/33183

@kuzaxak Can you confirm this works for you?

Release Notes:

- bedrock: Use Anthropic Haiku 3.0 in AWS regions where Haiku 3.5 is
unavailable
2025-06-22 15:15:20 -04:00
Umesh Yadav
dfdd2b9558
language_models: Add thinking support to OpenRouter provider (#32541)
Did a bit of cleanup of the code that loads models from settings, since it
is not required: we fetch all the models from OpenRouter, so it's better to
maintain one source of truth.

Release Notes:

- Add thinking support to OpenRouter provider
2025-06-21 08:03:50 +02:00
Michael Sloan
7e801dccb0
agent: Fix issues with usage display sometimes showing initially fetched usage (#33125)
Having `Thread::last_usage` as an override of the initially fetched
usage could cause the initial usage to be displayed when the current
thread is empty or in text threads. The fix is to just store the last usage
info in `UserStore` and not have these overrides.

Release Notes:

- Agent: Fixed request usage display to always include the most recently
known usage - there were some cases where it would show the initially
requested usage.
2025-06-20 21:28:48 +00:00
Danilo Leal
2624950472
agent: Fix text wrapping in the provider set up list items (#33063)
Release Notes:

- agent: Fixed text wrapping in the provider set up list items in the
settings view.
2025-06-19 18:17:56 -03:00
Oleksiy Syvokon
3b31db1b1f
open_router: Avoid redundant model list downloads (#33033)
Previously, the OpenRouter models list (~412kb) was being downloaded
around 10 times during startup -- even when OpenRouter was not
configured.

This update addresses the issue by:

1. Fetching the models list only when OpenRouter settings change.
2. Skipping API calls if OpenRouter is not configured (see the sketch below).
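
A rough sketch of that logic with hypothetical types (not the actual `open_router` crate code):

```rust
#[derive(Clone, PartialEq)]
struct OpenRouterSettings {
    api_url: String,
    api_key: Option<String>,
}

struct OpenRouterProvider {
    last_fetched_for: Option<OpenRouterSettings>,
}

impl OpenRouterProvider {
    /// Only hit the ~412kb models endpoint when the provider is configured
    /// and the settings actually changed since the last fetch.
    fn should_fetch_models(&mut self, settings: &OpenRouterSettings) -> bool {
        if settings.api_key.is_none() {
            return false; // Not configured: skip the API call entirely.
        }
        if self.last_fetched_for.as_ref() == Some(settings) {
            return false; // Settings unchanged: keep the cached model list.
        }
        self.last_fetched_for = Some(settings.clone());
        true // Caller issues a single download now.
    }
}

fn main() {
    let settings = OpenRouterSettings {
        api_url: "https://openrouter.ai/api/v1".into(),
        api_key: Some("example-key".into()),
    };
    let mut provider = OpenRouterProvider { last_fetched_for: None };
    assert!(provider.should_fetch_models(&settings));
    assert!(!provider.should_fetch_models(&settings)); // no redundant re-download
}
```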


Release Notes:

- Avoid unnecessary requests to OpenRouter
2025-06-19 14:41:36 +00:00
Danilo Leal
ec0f2fa79a
agent: Fix button ids for resetting keys in OpenAI settings (#33032)
These "Reset API Key" and "Reset API URL" button had the same ids, so
therefore, they weren't working.

Release Notes:

- N/A
2025-06-19 14:09:53 +00:00
Bennet Bo Fenner
c34b24b5fb
open_ai: Fix issues with OpenAI compatible APIs (#32982)
Ran into this while adding support for Vercel v0 models:
- The timestamp seems to be returned in milliseconds instead of seconds,
so it breaks the bounds of `created: u32`. We did not use this field
anywhere, so we decided to remove it.
- Sometimes the `choices` field can be empty when the last chunk comes
in, because it only contains `usage` (see the sketch below).
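
A hedged sketch of the parsing side of both fixes, using simplified structs rather than the actual `open_ai` crate types:

```rust
use serde::Deserialize;

// Simplified stream-chunk shape; the real structs have more fields.
#[derive(Deserialize)]
struct StreamChunk {
    // `created` is intentionally not deserialized: some OpenAI-compatible APIs
    // send it in milliseconds, which overflows `u32`, and we never used it.
    #[serde(default)]
    choices: Vec<Choice>,
    usage: Option<Usage>,
}

#[derive(Deserialize)]
struct Choice {
    #[serde(default)]
    delta: serde_json::Value,
}

#[derive(Deserialize)]
struct Usage {
    prompt_tokens: u64,
    completion_tokens: u64,
}

fn main() -> serde_json::Result<()> {
    // The final chunk from some providers carries only `usage`, no `choices`.
    let chunk: StreamChunk =
        serde_json::from_str(r#"{"usage": {"prompt_tokens": 12, "completion_tokens": 34}}"#)?;
    assert!(chunk.choices.is_empty());
    assert_eq!(chunk.usage.map(|u| u.completion_tokens), Some(34));
    Ok(())
}
```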

Release Notes:

- N/A
2025-06-18 21:51:51 +00:00
Danilo Leal
629bd42276
agent: Add ability to change the API base URL for OpenAI via the UI (#32979)
The `api_url` setting is one that most providers already support and can
be changed via the `settings.json`. We're adding the ability to change
it via the UI for OpenAI specifically so it can be more easily connected
to v0.

Release Notes:

- agent: Added ability to change the API base URL for OpenAI via the UI

---------

Co-authored-by: Bennet Bo Fenner <53836821+bennetbo@users.noreply.github.com>
2025-06-18 18:47:43 -03:00
Bennet Bo Fenner
d2ca68bd5d
copilot chat: Remove invalid assertions (#32977)
Related to #32888, but will not fix the issue. 
Turns out these assertions are wrong (not sure if they were ever correct).
I tested with this code:
```rust
        request = LanguageModelRequest {
            messages: vec![
                LanguageModelRequestMessage {
                    role: Role::User,
                    content: vec![MessageContent::Text("Give me 10 jokes".to_string())],
                    cache: false,
                },
                LanguageModelRequestMessage {
                    role: Role::Assistant,
                    content: vec![MessageContent::Text("Sure, here are 10 jokes:".to_string())],
                    cache: false,
                },
            ],
            ..request
        };
```
The API happily accepted this and Claude proceeded to tell me 10 jokes.

Release Notes:

- N/A
2025-06-18 22:17:31 +02:00
Ben Brandt
0191f16ebc
Update Gemini Models (#32902)
Updates google_ai to use latest model information from the respective
model cards: https://ai.google.dev/gemini-api/docs/models

Release Notes:

- google: Update to latest Gemini 2.5 models
2025-06-17 20:26:27 +00:00
Richard Feldman
5405c2c2d3
Standardize on u64 for token counts (#32869)
Previously we were using a mix of `u32` and `usize`, e.g. `max_tokens:
usize, max_output_tokens: Option<u32>` in the same `struct`.

Although [tiktoken](https://github.com/openai/tiktoken) uses `usize`,
token counts should be consistent across targets (e.g. the same model
doesn't suddenly get a smaller context window if you're compiling for
wasm32), and these token counts could end up getting serialized using a
binary protocol, so `usize` is not the right choice for token counts.

I chose to standardize on `u64` over `u32` because we don't store many
of them (so the extra size should be insignificant) and future models
may exceed `u32::MAX` tokens.
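
A tiny sketch of the resulting convention; the struct and field names are illustrative:

```rust
// Illustrative only: token counts are u64 everywhere, independent of pointer width.
struct ModelLimits {
    max_tokens: u64,                // previously usize
    max_output_tokens: Option<u64>, // previously Option<u32>
}

fn main() {
    let limits = ModelLimits { max_tokens: 200_000, max_output_tokens: Some(8_192) };
    // Same value on wasm32 and x86_64, and safe to serialize in a binary protocol.
    assert_eq!(limits.max_tokens, 200_000u64);
    assert_eq!(limits.max_output_tokens, Some(8_192));
}
```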

Release Notes:

- N/A
2025-06-17 10:43:07 -04:00
Umesh Yadav
ed4b29f80c
language_models: Improve token counting for providers (#32853)
We push the usage data whenever we receive it from the provider to make
sure the counting is correct after the turn has ended.

- [x] Ollama 
- [x] Copilot 
- [x] Mistral 
- [x] OpenRouter 
- [x] LMStudio

Put all the changes into a single PR; open to moving these into separate PRs
if that makes the review and testing easier.

Release Notes:

- N/A
2025-06-17 10:46:29 +00:00
Umesh Yadav
4b88090cca
language_models: Add images support to LMStudio provider (#32741)
Tested with gemma3:4b on LM Studio beta version 0.3.17.

Release Notes:

- Add images support to LMStudio provider
2025-06-17 12:14:44 +02:00
Umesh Yadav
b13144eb1f
copilot: Allow enterprise to sign in and use copilot (#32296)
This addresses:
https://github.com/zed-industries/zed/pull/32248#issuecomment-2952060834.

This PR addresses two main things: it allows enterprise users to use
Copilot chat and completion, and it introduces a new way to handle the
Copilot URL specific to their subscription. This simplifies the UX around
GitHub Copilot and removes the burden of users figuring out which URL to
use for their subscription.

- [x] Pass enterprise_uri to copilot lsp so that it can redirect users
to their enterprise server. Ref:
https://github.com/github/copilot-language-server-release#configuration-management
- [x] Remove the old UI and the `language_models.copilot` config which allowed
users to specify their `copilot_chat`-specific endpoint. We now derive
that automatically using the Copilot token endpoint, so that we can send
requests to the specific Copilot endpoint depending on the URL
returned by the Copilot server.
- [x] Tested that both the enterprise and non-enterprise
flows work. Thanks to @theherk for the help to debug and test it.
- [ ] Update the zed.dev docs to reflect how to set up enterprise
Copilot.

What this doesn't do at the moment:

* Currently Zed doesn't allow two separate accounts, as the token
used in chat is the same as the one generated by the LSP. After these changes
this behaviour remains the same, and users can't have both enterprise
and personal Copilot set up.

P.S.: Might need to do a bit of code cleanup and other things, but
overall I felt this PR was ready for at least a first pass of review to
gather feedback on the implementation and the code itself.


Release Notes:

- Add enterprise support for GitHub Copilot

---------

Signed-off-by: Umesh Yadav <git@umesh.dev>
2025-06-17 11:36:53 +02:00
Oleksiy Syvokon
41e9f3148c
gemini: Send thought signatures back to API (#32064)
This is a follow-up to:
- #31925 
- #31902

Release Notes:

- Support Gemini thought signatures
2025-06-16 14:24:44 +00:00
Ben Brandt
2d4e427b45
OpenAI cleanups (#32597)
Release Notes:

- openai: Remove support for deprecated o1-preview and o1-mini models 
- openai: Support streaming for o1 model
2025-06-12 08:55:48 +00:00
Umesh Yadav
0852912fd6
language_models: Add image support to OpenRouter models (#32012)
- [x] Manual testing (tested with Qwen2.5 VL 32B Instruct (free),
Llama 4 Scout (free), and Llama 4 Maverick (free)). Llama models have some
issues in the write profile due to one of the built-in tools' schemas, so I
tested them with the minimal profile.

Closes #ISSUE

Release Notes:

- Add image support to OpenRouter models

---------

Signed-off-by: Umesh Yadav <umesh4257@gmail.com>
Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
2025-06-11 08:01:29 +00:00
Ben Brandt
e4bd115a63
More resilient eval (#32257)
Bubbles up rate-limit information so that, higher up in the stack, we can
retry after a certain duration if needed.

Also caps the number of evals running concurrently, which helps as well.

Release Notes:

- N/A
2025-06-09 18:07:22 +00:00
Clauses Kim
1fe10117b7
Add GitHub token environment variable support for Copilot (#31392)
Add support for an environment variable as an authentication alternative to
the OAuth flow for Copilot. Closes #31172

We can include the token in HTTPS request headers to hopefully resolve
the rate limiting issue in #9483. This change will be part of a separate
PR.
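
A minimal sketch of the fallback; `oauth_token_from_config` is a hypothetical stand-in for the normal OAuth flow, and the precedence shown is an assumption:

```rust
use std::env;

// Hypothetical stand-in for reading the token produced by the OAuth sign-in flow.
fn oauth_token_from_config() -> Option<String> {
    None
}

/// Use a manually provided token from GH_COPILOT_TOKEN, falling back to OAuth.
fn copilot_token() -> Option<String> {
    env::var("GH_COPILOT_TOKEN").ok().or_else(oauth_token_from_config)
}

fn main() {
    match copilot_token() {
        Some(_) => println!("Using a GitHub Copilot token."),
        None => println!("No Copilot token available; OAuth sign-in required."),
    }
}
```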

Release Notes:

- Added support for manually providing an OAuth token for GitHub Copilot
Chat by assigning the GH_COPILOT_TOKEN environment variable

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-06-09 12:39:44 +02:00
Umesh Yadav
0bc9478b46
language_models: Add support for images to Mistral models (#32154)
Tested with the following models. Hallucinates with white-outlined images
(like the white-outlined Zed logo) but works fine with the black-outlined Zed logo:

Pixtral 12B (pixtral-12b-latest)
Pixtral Large (pixtral-large-latest)
Mistral Medium (mistral-medium-latest)
Mistral Small (mistral-small-latest)

After this PR, almost all of Zed's LLM providers that support images
are covered. The only remaining one is LM Studio. Hopefully we will get
that one as well soon.

Release Notes:

- Add support for images to mistral models

---------

Signed-off-by: Umesh Yadav <git@umesh.dev>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
2025-06-09 10:00:02 +00:00
Umesh Yadav
4ac7935589
language_models: Add thinking support to LM Studio provider (#32337)
It works similarly to DeepSeek: the thinking is returned as
`reasoning_content`, and we don't have to send the `reasoning_content` back
in the request.

This is an experimental feature which can be enabled from settings like
this:
<img width="1381" alt="Screenshot 2025-06-08 at 4 26 06 PM"
src="https://github.com/user-attachments/assets/d2f60f3c-0f93-45fc-bae2-4ded42981820"
/>

Here is how it looks in use (tested with
`deepseek/deepseek-r1-0528-qwen3-8b`):

<img width="528" alt="Screenshot 2025-06-08 at 5 12 33 PM"
src="https://github.com/user-attachments/assets/f7716f52-5417-4f14-82b8-e853de054f63"
/>


Release Notes:

- Add thinking support to LM Studio provider
2025-06-09 11:55:34 +02:00
Umesh Yadav
c75ad2fd11
language_models: Add thinking support to DeepSeek provider (#32338)
For the DeepSeek provider, thinking is returned as `reasoning_content`, and
we don't have to send the `reasoning_content` back in the request.
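
A simplified sketch of the streaming side with illustrative structs (not the real DeepSeek response types): `reasoning_content` is surfaced as thinking for display, and only `content` is ever sent back in later requests.

```rust
use serde::Deserialize;

// Illustrative delta shape; the actual DeepSeek response types differ.
#[derive(Deserialize)]
struct Delta {
    #[serde(default)]
    content: Option<String>,
    #[serde(default)]
    reasoning_content: Option<String>,
}

#[derive(Debug)]
enum Event {
    Thinking(String),
    Text(String),
}

fn events_from_delta(delta: Delta) -> Vec<Event> {
    let mut events = Vec::new();
    if let Some(thinking) = delta.reasoning_content {
        // Displayed in the agent panel, but never echoed back to the API.
        events.push(Event::Thinking(thinking));
    }
    if let Some(text) = delta.content {
        events.push(Event::Text(text));
    }
    events
}

fn main() -> serde_json::Result<()> {
    let delta: Delta = serde_json::from_str(r#"{"reasoning_content": "Let me think..."}"#)?;
    assert!(matches!(events_from_delta(delta).as_slice(), [Event::Thinking(_)]));
    Ok(())
}
```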

Release Notes:

- Add thinking support to DeepSeek provider
2025-06-09 11:10:55 +02:00
Umesh Yadav
104f601413
language_models: Fix Copilot models not loading (#32288)
Recently, GitHub Copilot settings were introduced in this PR:
https://github.com/zed-industries/zed/pull/32248. It was missing a
subscription to settings updates inside the Copilot language model
provider, which led to GitHub Copilot models not being fetched and not
showing up at all.

cc @osiewicz 

Release Notes:

- N/A

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-06-07 09:32:01 +00:00
Elijah McMorris
52fa7ababb
lmstudio: Fill max_tokens using the response from /models (#25606)
The info for `max_tokens` for the model is included in
`{api_url}/models`.
I don't think this needs to be `.clamp`ed like `get_max_tokens` in
`crates/ollama/src/ollama.rs`, but it might. (See the sketch after the JSON
examples below.)

## Before:
Every model shows 2k

![image](https://github.com/user-attachments/assets/676075c8-0ceb-44b1-ae27-72ed6a6d783c)

## After:

![image](https://github.com/user-attachments/assets/8291535b-976e-4601-b617-1a508bf44e12)

### Json from `{api_url}/models` with model not loaded
```json
  {
      "id": "qwen2.5-coder-1.5b-instruct-mlx",
      "object": "model",
      "type": "llm",
      "publisher": "lmstudio-community",
      "arch": "qwen2",
      "compatibility_type": "mlx",
      "quantization": "4bit",
      "state": "not-loaded",
      "max_context_length": 32768
    },
```

## Notes
The response from `{api_url}/models` seems to return the `max_tokens`
for the model, not the currently configured context length, but I think
showing the `max_tokens` for the model is better than setting 2k for
everything.

`loaded_context_length` exists, but only if the model is loaded at the
startup of Zed, which usually isn't the case.

Maybe `fetch_models` should be rerun when swapping LM Studio models.

### Currently configured context
this isn't shown in `{api_url}/models`

![image](https://github.com/user-attachments/assets/8511cb9d-914b-4065-9eba-c0b086ad253b)

### Json from `{api_url}/models` with model loaded
```json
  {
     "id": "qwen2.5-coder-1.5b-instruct-mlx",
      "object": "model",
      "type": "llm",
      "publisher": "lmstudio-community",
      "arch": "qwen2",
      "compatibility_type": "mlx",
      "quantization": "4bit",
      "state": "loaded",
      "max_context_length": 32768,
      "loaded_context_length": 4096
    },
```
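
A hedged sketch of deserializing that response and picking a token limit from it (simplified structs, and the preference order is an assumption, not the actual `lmstudio` crate logic):

```rust
use serde::Deserialize;

// Simplified entry from `{api_url}/models`; the real response has more fields.
#[derive(Deserialize)]
struct ModelEntry {
    id: String,
    #[serde(default)]
    max_context_length: Option<u64>,
    #[serde(default)]
    loaded_context_length: Option<u64>,
}

impl ModelEntry {
    /// Prefer the currently loaded context length when present, otherwise the
    /// model's maximum, otherwise the old 2k fallback.
    fn max_tokens(&self) -> u64 {
        self.loaded_context_length
            .or(self.max_context_length)
            .unwrap_or(2048)
    }
}

fn main() -> serde_json::Result<()> {
    let entry: ModelEntry = serde_json::from_str(
        r#"{"id": "qwen2.5-coder-1.5b-instruct-mlx", "max_context_length": 32768}"#,
    )?;
    assert_eq!(entry.max_tokens(), 32768);
    println!("{}: {} tokens", entry.id, entry.max_tokens());
    Ok(())
}
```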

Release Notes:

- lmstudio: Fixed showing `max_tokens` in the assistant panel

---------

Co-authored-by: Peter Tripp <peter@zed.dev>
2025-06-06 20:21:23 +00:00
Piotr Osiewicz
73cd6ef92c
Add UI for configuring the API Url directly (#32248)
Closes #22901 

Release Notes:

- Copilot Chat endpoint URLs can now be configured via `settings.json`
or Configuration View.
2025-06-06 18:05:40 +02:00
Umesh Yadav
b8c1b54f9e
language_models: Fix Mistral tool->user message sequence handling (#31736)
Closes #31491

### Problem
Mistral API enforces strict conversation flow requirements that other
providers don't. Specifically, after a `tool` message, the next message
**must** be from the `assistant` role, not `user`. This causes the
error:
```
"Unexpected role 'user' after role 'tool'"
```
This can also occur in the normal conversation flow, where Mistral doesn't
return the assistant message, but that is something which can't be
reproduced reliably.

### Root Cause
When users interrupt an ongoing tool call sequence by sending a new
message, we insert a `user` message directly after a `tool` message,
violating Mistral's protocol.

**Expected Mistral flow:**
```
user → assistant (with tool_calls) → tool (results) → assistant (processes results) → user (next input)
```

**What we were doing:**
```
user → assistant (with tool_calls) → tool (results) → user (interruption) 
```

### Solution
Insert an empty `assistant` message between any `tool` → `user` sequence
in the Mistral provider's request construction. This satisfies Mistral's
API requirements without affecting other providers or requiring UX
changes.
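
A minimal sketch of the solution using a bare `Role` enum instead of the actual request types:

```rust
// Bare-bones stand-in for the real request message roles.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Role {
    User,
    Assistant,
    Tool,
}

/// Insert an empty assistant message wherever a `tool` message is followed
/// directly by a `user` message, satisfying Mistral's required flow.
fn fix_mistral_sequence(roles: &[Role]) -> Vec<Role> {
    let mut fixed = Vec::with_capacity(roles.len());
    for (i, role) in roles.iter().enumerate() {
        fixed.push(*role);
        if *role == Role::Tool && roles.get(i + 1) == Some(&Role::User) {
            fixed.push(Role::Assistant); // empty assistant message
        }
    }
    fixed
}

fn main() {
    let interrupted = [Role::User, Role::Assistant, Role::Tool, Role::User];
    assert_eq!(
        fix_mistral_sequence(&interrupted),
        vec![Role::User, Role::Assistant, Role::Tool, Role::Assistant, Role::User]
    );
}
```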

### Testing
To reproduce the original error:
1. Start agent chat with `codestral-latest`
2. Send: "Describe this project using tool call only"
3. Once tool calls begin, send: "stop this"
4. Main branch: API error
5. This fix: Works correctly

Release Notes:

- Fixed Mistral tool calling in some cases
2025-06-06 12:35:22 +03:00
Oleksiy Syvokon
04cd3fcd23
google: Add latest versions of Gemini 2.5 Pro and Flash Preview (#32183)
Release Notes:

- Added the latest versions of Gemini 2.5 Pro and Flash Preview
2025-06-05 19:30:34 +00:00
Bennet Bo Fenner
28da99cc06
anthropic: Fix error when attaching multiple images (#32092)
Closes #31438

Release Notes:

- agent: Fixed an edge case where the request would fail when using
Claude and multiple images were attached

---------

Co-authored-by: Richard Feldman <oss@rtfeldman.com>
2025-06-05 16:29:49 +00:00
Ben Brandt
4304521655
Remove unused load_model method from LanguageModelProvider (#32070)
Removes the load_model trait method and its implementations in Ollama
and LM Studio providers, along with associated preload_model functions
and unused imports.

Release Notes:

- N/A
2025-06-04 14:07:01 +00:00
Umesh Yadav
c9c603b1d1
Add support for OpenRouter as a language model provider (#29496)
This pull request adds full integration with OpenRouter, allowing users
to access a wide variety of language models through a single API key.

**Implementation Details:**

* **Provider Registration:** Registers OpenRouter as a new language
model provider within the application's model registry. This includes UI
for API key authentication, token counting, streaming completions, and
tool-call handling.
* **Dedicated Crate:** Adds a new `open_router` crate to manage
interactions with the OpenRouter HTTP API, including model discovery and
streaming helpers.
* **UI & Configuration:** Extends workspace manifests, the settings
schema, icons, and default configurations to surface the OpenRouter
provider and its settings within the UI.
* **Readability:** Reformats JSON arrays within the settings files for
improved readability.

**Design Decisions & Discussion Points:**

* **Code Reuse:** I leveraged much of the existing logic from the
`openai` provider integration due to the significant similarities
between the OpenAI and OpenRouter API specifications.
* **Default Model:** I set the default model to `openrouter/auto`. This
model automatically routes user prompts to the most suitable underlying
model on OpenRouter, providing a convenient starting point.
* **Model Population Strategy:**
* <strike>I've implemented dynamic population of available models by
querying the OpenRouter API upon initialization.
* Currently, this involves three separate API calls: one for all models,
one for tool-use models, and one for models good at programming.
* The data from the tool-use API call sets a `tool_use` flag for
relevant models.
* The data from the programming models API call is used to sort the
list, prioritizing coding-focused models in the dropdown.</strike>
* <strike>**Feedback Welcome:** I acknowledge this multi-call approach
is API-intensive. I am open to feedback and alternative implementation
suggestions if the team believes this can be optimized.</strike>
    * **Update: Now this has been simplified to one api call.**
* **UI/UX Considerations:**
* <strike>Authentication Method: Currently, I've implemented the
standard API key input in settings, similar to other providers like
OpenAI/Anthropic. However, OpenRouter also supports OAuth 2.0 with PKCE.
This could offer a potentially smoother, more integrated setup
experience for users (e.g., clicking a button to authorize instead of
copy-pasting a key). Should we prioritize implementing OAuth PKCE now,
or perhaps add it as an alternative option later?</strike>(PKCE is not
straight forward and complicated so skipping this for now. So that we
can add the support and work on this later.)
* <strike>To visually distinguish models better suited for programming,
I've considered adding a marker (e.g., `</>` or `🧠`) next to their
names. Thoughts on this proposal?</strike>. (This will require a changes
and discussion across model provider. This doesn't fall under the scope
of current PR).
* OpenRouter offers 300+ models. The current implementation loads all of
them. **Feedback Needed:** Should we refine this list or implement more
sophisticated filtering/categorization for better usability?

**Motivation:**

This integration directly addresses one of the most highly upvoted
feature requests/discussions within the Zed community. Adding OpenRouter
support significantly expands the range of AI models accessible to
users.

I welcome feedback from the Zed team on this implementation and the
design choices made. I am eager to refine this feature and make it
available to users.

ISSUES: https://github.com/zed-industries/zed/discussions/16576

Release Notes:

- Added support for OpenRouter as a language model provider.

---------

Signed-off-by: Umesh Yadav <umesh4257@gmail.com>
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-06-03 15:59:46 +00:00
Shardul Vaidya
e13b494c9e
bedrock: Fix cross-region inference (#30659)
Closes #30535

Release Notes:

- AWS Bedrock: Add support for Meta Llama 4 Scout and Maverick models.
- AWS Bedrock: Fixed cross-region inference for all regions.
- AWS Bedrock: Updated all models available through Cross Region
inference.

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-06-03 15:46:35 +00:00
little-dude
c0397727e0
language_models: Sort Ollama models by name (#31620)
Hello,

This is my first contribution so apologies if I'm not following the
proper process (I haven't seen anything special in
https://github.com/zed-industries/zed/blob/main/CONTRIBUTING.md). Also,
I have tested my changes manually, but I could not figure out an easy way
to instantiate a `LanguageModelSelector` in the unit tests, so I didn't
write a test. If you can provide some guidance I'd be happy to write a
test.

---

If the user configured the models with custom names via `display_name`,
we want the ollama models to be sorted based on the name that is
actually displayed.

~~The original issue is only about ollama but this change will also
affect the other providers.~~

Closes #30854

Release Notes:

- Ollama: Changed models to be sorted by name.
2025-06-03 15:37:08 +00:00
90aca
cf931247d0
Add thinking budget for Gemini custom models (#31251)
Closes #31243

As described in my issue, the [thinking
budget](https://ai.google.dev/gemini-api/docs/thinking) gets
automatically chosen by Gemini unless it is specifically set to
something. In order to have fast responses (inline assistant) I prefer
to set it to 0.

Release Notes:

- ai: Added `thinking` mode for custom Google models with configurable
token budget

---------

Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
2025-06-03 13:40:20 +02:00
Fernando Freire
3077abf9cf
google_ai: Parse thought parts in Gemini responses (#31925)
Fixes thinking Gemini models.

Closes #31902

Release Notes:

- Updated Google Gemini client to match the latest API
2025-06-03 10:37:06 +00:00
Umesh Yadav
59686f1f44
language_models: Add images support for Ollama vision models (#31883)
Ollama supports vision models that can process input images. This PR adds
support for that. I have tested this with gemma3:4b and have attached a
screenshot of it working.

<img width="435" alt="image"
src="https://github.com/user-attachments/assets/5f17d742-0a37-4e6c-b4d8-05b750a0a158"
/>


Release Notes:

- Add image support for [Ollama vision models](https://ollama.com/search?c=vision)
2025-06-03 11:12:59 +02:00
THELOSTSOUL
b820aa1fcd
Add tool support for DeepSeek (#30223)
The [DeepSeek function call
API](https://api-docs.deepseek.com/guides/function_calling)
has been released, and it is the same as OpenAI's.

Release Notes:

- Added tool calling support for Deepseek Models

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-06-03 10:59:36 +02:00
Umesh Yadav
65e3e84cbc
language_models: Add thinking support for ollama (#31665)
This PR updates how we handle Ollama responses, leveraging the new
[v0.9.0](https://github.com/ollama/ollama/releases/tag/v0.9.0) release.
Previously, thinking text was embedded within the model's main content,
leading to it appearing directly in the agent's response. Now, thinking
content is provided as a separate parameter, allowing us to display it
correctly within the agent panel, similar to other providers. I have
tested this with qwen3:8b and it works nicely. ~~We can release this once
the Ollama release is stable.~~ It's released as stable now.

<img width="433" alt="image"
src="https://github.com/user-attachments/assets/2983ef06-6679-4033-82c2-231ea9cd6434"
/>


Release Notes:

- Add thinking support for ollama

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-06-02 15:12:41 +00:00
Oleksiy Syvokon
ae219e9e99
agent: Fix bug with double-counting tokens in Gemini (#31885)
We report the total number of input tokens by summing the numbers of
1. Prompt tokens
2. Cached tokens

But the Google API returns prompt tokens (1) that already include cached
tokens (2), so we were double-counting tokens in some cases.
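
In concrete terms (the numbers are made up): if the API reports 1,000 prompt tokens of which 400 were cached, the old code reported 1,400 input tokens; the correct total is 1,000. A tiny sketch of the corrected arithmetic, with illustrative field names:

```rust
// Illustrative usage-metadata shape.
struct UsageMetadata {
    prompt_token_count: u64,         // already includes cached tokens
    cached_content_token_count: u64, // subset of prompt_token_count
}

fn input_tokens(usage: &UsageMetadata) -> u64 {
    // Old (buggy): usage.prompt_token_count + usage.cached_content_token_count
    // New: prompt tokens already contain the cached portion, so report them as-is.
    usage.prompt_token_count
}

fn main() {
    let usage = UsageMetadata { prompt_token_count: 1_000, cached_content_token_count: 400 };
    assert_eq!(input_tokens(&usage), 1_000);
    let _ = usage.cached_content_token_count;
}
```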

Release Notes:

- Fixed bug with double-counting tokens in Gemini
2025-06-02 10:18:44 +00:00
Marshall Bowers
a23ee61a4b
Pass up intent with completion requests (#31710)
This PR adds a new `intent` field to completion requests to assist in
categorizing them correctly.

Release Notes:

- N/A

---------

Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
2025-05-29 20:43:12 +00:00
Umesh Yadav
4e7dc37f01
language_models: Remove handling of WrappedTextContent in tool result content (#31605)
Fixes the CI pipeline

Release Notes:

- N/A
2025-05-28 16:43:08 +00:00
Richard Feldman
00fd045844
Make language model deserialization more resilient (#31311)
This expands our deserialization of JSON from models to be more tolerant
of different variations that the model may send, including
capitalization, wrapping things in objects vs. being plain strings, etc.

Also, when deserialization fails, it reports the entire error along with the
JSON, so we can see what failed to deserialize. (Previously these errors were
very unhelpful for diagnosing the problem.)

Finally, also removes the `WrappedText` variant since the custom
deserializer just turns that style of JSON into a normal `Text` variant.
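
A hedged sketch of the tolerant-deserialization idea; the type and field names are illustrative, not the crate's actual deserializer:

```rust
use serde::Deserialize;

// Illustrative: a text value that models may send either as a bare string or
// wrapped in an object such as {"text": "..."} or {"Text": "..."}.
#[derive(Debug, PartialEq)]
struct ToolText(String);

impl<'de> Deserialize<'de> for ToolText {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: serde::Deserializer<'de>,
    {
        #[derive(Deserialize)]
        #[serde(untagged)]
        enum Raw {
            Plain(String),
            Wrapped {
                #[serde(alias = "Text")]
                text: String,
            },
        }
        Ok(match Raw::deserialize(deserializer)? {
            Raw::Plain(text) | Raw::Wrapped { text } => ToolText(text),
        })
    }
}

fn main() -> serde_json::Result<()> {
    assert_eq!(serde_json::from_str::<ToolText>(r#""hello""#)?, ToolText("hello".into()));
    assert_eq!(
        serde_json::from_str::<ToolText>(r#"{"Text": "hello"}"#)?,
        ToolText("hello".into())
    );
    Ok(())
}
```

With a deserializer shaped like this, a separate wrapped-text variant becomes unnecessary, since the wrapped form already folds into plain text.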

Release Notes:

- N/A
2025-05-28 12:06:07 -04:00