This commit enables extremely verbose taffy logging and adds output of the layout IDs for the OpenRouter settings container and for the text input that has the issue. https://gist.github.com/mgsloan/28a2d8d9c9c803fc60898eae9db1ec8e is the subset of the logs starting with the first line containing the container ID and ending with the last line containing it (see the end of the logs for these IDs).
Repro for the issue:
* Set `ui_font_size` to 20 in the user `settings.json`
* Open zed agent settings
* Expand Open Router settings
* Resize panel until the API key input field becomes small
This introduces a new field, `thinking_allowed`, on `LanguageModelRequest`,
which lets us control whether thinking should be enabled if the model
supports it.
We prevent thinking in the Inline Assistant, the Edit File tool, and the Git
commit message generator; this should make generation faster when using
a thinking model, e.g. `claude-sonnet-4-thinking`.
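A minimal sketch of the new field and a call site that opts out of thinking; the surrounding fields and the builder function are simplified stand-ins, not the actual Zed code:

```rust
// Illustrative sketch: `thinking_allowed` is the new field; everything
// else here is a simplified stand-in.
pub struct LanguageModelRequest {
    pub messages: Vec<String>,
    /// When false, providers that support extended thinking should
    /// disable it, trading reasoning depth for latency.
    pub thinking_allowed: bool,
}

fn commit_message_request(diff_summary: String) -> LanguageModelRequest {
    LanguageModelRequest {
        messages: vec![diff_summary],
        // Fast paths like commit-message generation opt out of thinking.
        thinking_allowed: false,
    }
}
```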
Release Notes:
- N/A
* Updates to `zed_llm_client-0.8.5`, which adds support for `retry_after`
when Anthropic provides it.
* Distinguishes upstream provider errors and rate limits from errors
that originate from Zed's servers.
* Moves `LanguageModelCompletionError::BadInputJson` to
`LanguageModelCompletionEvent::ToolUseJsonParseError`. While this is
arguably an error case, the logic in the thread is cleaner with this move.
There is also precedent for including errors in the event type -
`CompletionRequestStatus::Failed` is how cloud errors arrive. (See the
sketch after this list.)
* Updates the `PROVIDER_ID` / `PROVIDER_NAME` constants to use proper types
instead of `&str`, since they can be constructed in a `const` fashion.
* Removes the use of `CLIENT_SUPPORTS_EXA_WEB_SEARCH_PROVIDER_HEADER_NAME`,
as the server no longer reads this header and simply defaults to that
behavior.
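A hedged sketch of the event-based parse-error handling and the typed constants; the variant payloads and the newtype below are assumptions for illustration, not code copied from the crates:

```rust
// Sketch only: payload shapes and the newtype are assumed.
enum LanguageModelCompletionEvent {
    Text(String),
    // Formerly LanguageModelCompletionError::BadInputJson; surfacing it
    // as an event keeps the thread's stream handling in a single match.
    ToolUseJsonParseError { tool_name: String, error: String },
}

fn handle(event: LanguageModelCompletionEvent) {
    match event {
        LanguageModelCompletionEvent::Text(chunk) => print!("{chunk}"),
        LanguageModelCompletionEvent::ToolUseJsonParseError { tool_name, error } => {
            eprintln!("tool `{tool_name}` produced malformed JSON: {error}");
        }
    }
}

// Typed, const-constructible provider identifiers instead of bare &str.
pub struct LanguageModelProviderId(pub &'static str);
pub const PROVIDER_ID: LanguageModelProviderId =
    LanguageModelProviderId("openrouter");
```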
Release notes for this are covered by #33275.
Release Notes:
- N/A
---------
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
Co-authored-by: Richard <richard@zed.dev>
Also did a bit of cleanup of the code that loads models from settings, as
that is no longer required: we fetch all the models from OpenRouter, so
it's better to maintain a single source of truth.
Release Notes:
- Add thinking support to OpenRouter provider
Previously, the OpenRouter models list (~412 kB) was downloaded
around 10 times during startup -- even when OpenRouter was not
configured.
This update addresses the issue by:
1. Fetching the models list only when the OpenRouter settings change.
2. Skipping API calls entirely if OpenRouter is not configured (see the
sketch below).
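A rough sketch of the guard; the state struct and function names are illustrative, not the actual Zed code:

```rust
// Illustrative sketch, not the actual implementation.
struct OpenRouterState {
    api_key: Option<String>,
    cached_models: Vec<String>,
}

impl OpenRouterState {
    // Fix 1: called only when OpenRouter settings change, rather than
    // from every code path that touches the model registry at startup.
    fn on_settings_changed(&mut self) {
        // Fix 2: skip the ~412 kB request entirely when not configured.
        if self.api_key.is_none() {
            return;
        }
        self.cached_models = fetch_models();
    }
}

fn fetch_models() -> Vec<String> {
    // Stand-in for the HTTP call to OpenRouter's models endpoint.
    Vec::new()
}
```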
Release Notes:
- Avoid unnecessary requests to OpenRouter
Previously we were using a mix of `u32` and `usize`, e.g. `max_tokens:
usize, max_output_tokens: Option<u32>` in the same `struct`.
Although [tiktoken](https://github.com/openai/tiktoken) uses `usize`,
token counts should be consistent across targets (e.g. the same model
doesn't suddenly get a smaller context window if you're compiling for
wasm32), and these token counts could end up getting serialized using a
binary protocol, so `usize` is not the right choice for token counts.
I chose to standardize on `u64` over `u32` because we don't store many
of them (so the extra size should be insignificant) and future models
may exceed `u32::MAX` tokens.
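A small sketch of the resulting shape (the struct is illustrative; the field names come from the example above):

```rust
// Token counts standardized on u64: identical across targets (usize is
// 32 bits on wasm32) and unambiguous when serialized in a binary protocol.
pub struct ModelLimits {
    pub max_tokens: u64,                // was usize
    pub max_output_tokens: Option<u64>, // was Option<u32>
}
```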
Release Notes:
- N/A
We push the usage data whenever we receive it from the provider to make
sure the counting is correct after the turn has ended (a sketch follows
the provider checklist below).
- [x] Ollama
- [x] Copilot
- [x] Mistral
- [x] OpenRouter
- [x] LMStudio
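A hedged sketch of the idea; the event and field names are assumptions, and it assumes providers report cumulative usage mid-stream:

```rust
#[derive(Clone, Copy, Default)]
struct TokenUsage {
    input_tokens: u64,
    output_tokens: u64,
}

enum StreamEvent {
    Text(String),
    Usage(TokenUsage),
}

// Record usage every time the provider reports it, keeping the latest
// (cumulative) report so the totals are correct once the turn ends.
fn final_usage(events: impl IntoIterator<Item = StreamEvent>) -> TokenUsage {
    let mut latest = TokenUsage::default();
    for event in events {
        if let StreamEvent::Usage(usage) = event {
            latest = usage;
        }
    }
    latest
}
```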
I've put all the changes into a single PR, but I'm open to moving these
into separate PRs if that makes the review and testing easier.
Release Notes:
- N/A
- [x] Manual testing (tested this with Qwen2.5 VL 32B Instruct (free),
Llama 4 Scout (free), and Llama 4 Maverick (free)). The Llama models have
some issues in the Write profile due to one of the built-in tools' schemas,
so I tested them with the Minimal profile.
Closes #ISSUE
Release Notes:
- Add image support to OpenRouter models
---------
Signed-off-by: Umesh Yadav <umesh4257@gmail.com>
Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
Bubbles up rate limit information so that, higher up in the stack, we can
retry after a certain duration if needed.
Also caps the number of concurrent evals running at once, which helps as
well. A sketch of both mechanisms follows.
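A rough sketch of both mechanisms, assuming a Tokio runtime; the error type, retry count, and concurrency limit are all illustrative:

```rust
use std::{sync::Arc, time::Duration};
use tokio::sync::Semaphore;

// Sketch: a rate-limit error carrying the provider's retry_after hint.
struct RateLimited {
    retry_after: Option<Duration>,
}

async fn run_eval(_name: &str) -> Result<(), RateLimited> {
    Ok(()) // stand-in for actually running one eval
}

// Run evals with (a) a cap on how many are in flight and (b) a retry
// loop that honors the bubbled-up retry_after duration.
async fn run_all(evals: Vec<String>) {
    let semaphore = Arc::new(Semaphore::new(8)); // cap is illustrative
    let mut handles = Vec::new();
    for eval in evals {
        let permit = semaphore.clone().acquire_owned().await.unwrap();
        handles.push(tokio::spawn(async move {
            let _permit = permit; // held until this eval finishes
            for _attempt in 0..3 {
                match run_eval(&eval).await {
                    Ok(()) => break,
                    Err(err) => {
                        let delay = err.retry_after.unwrap_or(Duration::from_secs(5));
                        tokio::time::sleep(delay).await;
                    }
                }
            }
        }));
    }
    for handle in handles {
        let _ = handle.await;
    }
}
```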
Release Notes:
- N/A
This pull request adds full integration with OpenRouter, allowing users
to access a wide variety of language models through a single API key.
**Implementation Details:**
* **Provider Registration:** Registers OpenRouter as a new language
model provider within the application's model registry. This includes UI
for API key authentication, token counting, streaming completions, and
tool-call handling.
* **Dedicated Crate:** Adds a new `open_router` crate to manage
interactions with the OpenRouter HTTP API, including model discovery and
streaming helpers (a hypothetical sketch follows this list).
* **UI & Configuration:** Extends workspace manifests, the settings
schema, icons, and default configurations to surface the OpenRouter
provider and its settings within the UI.
* **Readability:** Reformats JSON arrays within the settings files for
improved readability.
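A hypothetical sketch of what the dedicated crate's surface might look like; every name below is an assumption for illustration, not the crate's real API:

```rust
// Hypothetical surface for the `open_router` crate; names are illustrative.
pub struct Model {
    pub id: String, // e.g. "openrouter/auto"
    pub supports_tools: bool,
}

pub struct OpenRouterClient {
    api_key: String,
    base_url: String, // OpenRouter's API lives under https://openrouter.ai/api/v1
}

impl OpenRouterClient {
    pub fn new(api_key: String) -> Self {
        Self {
            api_key,
            base_url: "https://openrouter.ai/api/v1".to_string(),
        }
    }

    /// Model discovery: a single GET against the models endpoint
    /// (per the "Update" below, discovery was consolidated to one call).
    pub fn list_models(&self) -> Vec<Model> {
        // Stand-in for the real HTTP request and JSON decoding.
        let _ = (&self.api_key, &self.base_url);
        Vec::new()
    }
}
```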
**Design Decisions & Discussion Points:**
* **Code Reuse:** I leveraged much of the existing logic from the
`openai` provider integration due to the significant similarities
between the OpenAI and OpenRouter API specifications.
* **Default Model:** I set the default model to `openrouter/auto`. This
model automatically routes user prompts to the most suitable underlying
model on OpenRouter, providing a convenient starting point.
* **Model Population Strategy:**
* <strike>I've implemented dynamic population of available models by
querying the OpenRouter API upon initialization.
* Currently, this involves three separate API calls: one for all models,
one for tool-use models, and one for models good at programming.
* The data from the tool-use API call sets a `tool_use` flag for
relevant models.
* The data from the programming models API call is used to sort the
list, prioritizing coding-focused models in the dropdown.</strike>
* <strike>**Feedback Welcome:** I acknowledge this multi-call approach
is API-intensive. I am open to feedback and alternative implementation
suggestions if the team believes this can be optimized.</strike>
* **Update: This has now been simplified to one API call.**
* **UI/UX Considerations:**
* <strike>Authentication Method: Currently, I've implemented the
standard API key input in settings, similar to other providers like
OpenAI/Anthropic. However, OpenRouter also supports OAuth 2.0 with PKCE.
This could offer a potentially smoother, more integrated setup
experience for users (e.g., clicking a button to authorize instead of
copy-pasting a key). Should we prioritize implementing OAuth PKCE now,
or perhaps add it as an alternative option later?</strike> (PKCE is not
straightforward, so I'm skipping it for now; we can add support for it and
work on this later.)
* <strike>To visually distinguish models better suited for programming,
I've considered adding a marker (e.g., `</>` or `🧠`) next to their
names. Thoughts on this proposal?</strike> (This would require changes
and discussion across model providers, so it doesn't fall under the scope
of the current PR.)
* OpenRouter offers 300+ models. The current implementation loads all of
them. **Feedback Needed:** Should we refine this list or implement more
sophisticated filtering/categorization for better usability?
**Motivation:**
This integration directly addresses one of the most highly upvoted
feature requests/discussions within the Zed community. Adding OpenRouter
support significantly expands the range of AI models accessible to
users.
I welcome feedback from the Zed team on this implementation and the
design choices made. I am eager to refine this feature and make it
available to users.
ISSUES: https://github.com/zed-industries/zed/discussions/16576
Release Notes:
- Added support for OpenRouter as a language model provider.
---------
Signed-off-by: Umesh Yadav <umesh4257@gmail.com>
Co-authored-by: Marshall Bowers <git@maxdeviant.com>