
This pull request adds full integration with OpenRouter, allowing users to access a wide variety of language models through a single API key.

**Implementation Details:**

* **Provider Registration:** Registers OpenRouter as a new language model provider within the application's model registry. This includes UI for API key authentication, token counting, streaming completions, and tool-call handling.
* **Dedicated Crate:** Adds a new `open_router` crate to manage interactions with the OpenRouter HTTP API, including model discovery and streaming helpers.
* **UI & Configuration:** Extends workspace manifests, the settings schema, icons, and default configurations to surface the OpenRouter provider and its settings within the UI.
* **Readability:** Reformats JSON arrays within the settings files for improved readability.

**Design Decisions & Discussion Points:**

* **Code Reuse:** I leveraged much of the existing logic from the `openai` provider integration due to the significant similarities between the OpenAI and OpenRouter API specifications.
* **Default Model:** I set the default model to `openrouter/auto`. This model automatically routes user prompts to the most suitable underlying model on OpenRouter, providing a convenient starting point.
* **Model Population Strategy:**
  * ~~I've implemented dynamic population of available models by querying the OpenRouter API upon initialization.~~
  * ~~Currently, this involves three separate API calls: one for all models, one for tool-use models, and one for models good at programming.~~
  * ~~The data from the tool-use API call sets a `tool_use` flag for relevant models.~~
  * ~~The data from the programming models API call is used to sort the list, prioritizing coding-focused models in the dropdown.~~
  * ~~**Feedback Welcome:** I acknowledge this multi-call approach is API-intensive. I am open to feedback and alternative implementation suggestions if the team believes this can be optimized.~~
  * **Update:** This has now been simplified to a single API call.
* **UI/UX Considerations:**
  * ~~**Authentication Method:** Currently, I've implemented the standard API key input in settings, similar to other providers like OpenAI/Anthropic. However, OpenRouter also supports OAuth 2.0 with PKCE. This could offer a potentially smoother, more integrated setup experience for users (e.g., clicking a button to authorize instead of copy-pasting a key). Should we prioritize implementing OAuth PKCE now, or perhaps add it as an alternative option later?~~ (PKCE is not straightforward, so I'm skipping it for now; we can add support for it later.)
  * ~~To visually distinguish models better suited for programming, I've considered adding a marker (e.g., `</>` or `🧠`) next to their names. Thoughts on this proposal?~~ (This would require changes and discussion across model providers, which doesn't fall under the scope of the current PR.)
  * OpenRouter offers 300+ models, and the current implementation loads all of them. **Feedback Needed:** Should we refine this list or implement more sophisticated filtering/categorization for better usability?

**Motivation:**

This integration directly addresses one of the most highly upvoted feature requests/discussions within the Zed community. Adding OpenRouter support significantly expands the range of AI models accessible to users.

I welcome feedback from the Zed team on this implementation and the design choices made. I am eager to refine this feature and make it available to users.

Issues: https://github.com/zed-industries/zed/discussions/16576

Release Notes:

- Added support for OpenRouter as a language model provider.

---------

Signed-off-by: Umesh Yadav <umesh4257@gmail.com>
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
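To illustrate the model-population strategy described above, here is a minimal Rust sketch (not the actual Zed implementation) of what the post-fetch step might look like: given model entries obtained from a single OpenRouter models request, coding-focused models are sorted to the top of the dropdown list. The `ModelEntry` struct and its field names are assumptions for demonstration only.

```rust
// Hypothetical model metadata, as might be parsed from a single
// OpenRouter models response. Field names are illustrative.
#[derive(Debug, Clone)]
struct ModelEntry {
    id: String,
    supports_tools: bool,
    coding_focused: bool,
}

// Sort coding-focused models first, then alphabetically by id,
// so programming-oriented models appear at the top of the dropdown.
fn sort_models(mut models: Vec<ModelEntry>) -> Vec<ModelEntry> {
    models.sort_by(|a, b| {
        b.coding_focused
            .cmp(&a.coding_focused)
            .then_with(|| a.id.cmp(&b.id))
    });
    models
}

fn main() {
    let models = vec![
        ModelEntry {
            id: "openrouter/auto".into(),
            supports_tools: true,
            coding_focused: false,
        },
        ModelEntry {
            id: "qwen/qwen-2.5-coder".into(),
            supports_tools: true,
            coding_focused: true,
        },
        ModelEntry {
            id: "meta-llama/llama-3-8b".into(),
            supports_tools: false,
            coding_focused: false,
        },
    ];
    for m in sort_models(models) {
        println!("{} (tools: {})", m.id, m.supports_tools);
    }
}
```

This keeps the ordering deterministic while avoiding the extra per-category API calls from the earlier multi-call approach.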
```toml
[package]
name = "language_models"
version = "0.1.0"
edition.workspace = true
publish.workspace = true
license = "GPL-3.0-or-later"

[lints]
workspace = true

[lib]
path = "src/language_models.rs"

[dependencies]
anthropic = { workspace = true, features = ["schemars"] }
anyhow.workspace = true
aws-config = { workspace = true, features = ["behavior-version-latest"] }
aws-credential-types = { workspace = true, features = [
    "hardcoded-credentials",
] }
aws_http_client.workspace = true
bedrock.workspace = true
client.workspace = true
collections.workspace = true
credentials_provider.workspace = true
copilot.workspace = true
deepseek = { workspace = true, features = ["schemars"] }
editor.workspace = true
fs.workspace = true
futures.workspace = true
google_ai = { workspace = true, features = ["schemars"] }
gpui.workspace = true
gpui_tokio.workspace = true
http_client.workspace = true
language_model.workspace = true
lmstudio = { workspace = true, features = ["schemars"] }
log.workspace = true
menu.workspace = true
mistral = { workspace = true, features = ["schemars"] }
ollama = { workspace = true, features = ["schemars"] }
open_ai = { workspace = true, features = ["schemars"] }
open_router = { workspace = true, features = ["schemars"] }
partial-json-fixer.workspace = true
project.workspace = true
proto.workspace = true
release_channel.workspace = true
schemars.workspace = true
serde.workspace = true
serde_json.workspace = true
settings.workspace = true
smol.workspace = true
strum.workspace = true
theme.workspace = true
thiserror.workspace = true
tiktoken-rs.workspace = true
tokio = { workspace = true, features = ["rt", "rt-multi-thread"] }
ui.workspace = true
util.workspace = true
workspace-hack.workspace = true
zed_llm_client.workspace = true

[dev-dependencies]
editor = { workspace = true, features = ["test-support"] }
language_model = { workspace = true, features = ["test-support"] }
project = { workspace = true, features = ["test-support"] }
```