For SSH remoting of LSPs, we'll need language server support factored
out of `project`. This PR begins that work.
Release Notes:
- N/A
---------
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Mikayla <mikayla@zed.dev>
This PR updates the LLM service to include the GitHub login on its
spans.
We need to pass this information through on the LLM token, so it will
temporarily be `None` until this change is deployed and new tokens have
been issued.
Release Notes:
- N/A
This PR fixes two issues affecting non-staff users:
- a db deadlock in `GetLlmToken`
- a typo in the allowed model name
Release Notes:
- N/A
---------
Co-authored-by: Marshall <marshall@zed.dev>
Co-authored-by: Joseph <joseph@zed.dev>
This PR adds feature-flagged access to the LLM service.
We've repurposed the `language-models` feature flag to be used for
providing access to Claude 3.5 Sonnet through the Zed provider.
The remaining RPC endpoints that were previously behind the
`language-models` feature flag are now behind a staff check.
We also put some Zed Pro related messaging behind a feature flag.
Release Notes:
- N/A
---------
Co-authored-by: Max <max@zed.dev>
This PR restricts usage of the LLM service to accounts older than 30
days.
We now store the GitHub user's `created_at` timestamp to check the
GitHub account age. If this is not set—which it won't be for existing
users—then we use the `created_at` timestamp in the Zed database.
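Conceptually, the check looks something like this (a minimal sketch; the
struct and field names are assumptions, not the actual schema):
```rust
use chrono::{DateTime, Duration, Utc};

// Hypothetical user shape, for illustration only.
struct User {
    github_user_created_at: Option<DateTime<Utc>>,
    created_at: DateTime<Utc>,
}

// Prefer the GitHub account's `created_at`; fall back to the Zed row's own
// `created_at` for users stored before we captured the GitHub timestamp.
fn account_too_young(user: &User, now: DateTime<Utc>) -> bool {
    let created_at = user.github_user_created_at.unwrap_or(user.created_at);
    now - created_at < Duration::days(30)
}
```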
Release Notes:
- N/A
---------
Co-authored-by: Max <max@zed.dev>
This PR adds a check to the LLM API token issuance to ensure that we
only issue tokens to users that have accepted the terms of service.
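A minimal sketch of that guard (the parameter name is an assumption, not
collab's actual schema):
```rust
// Only mint a token if the user has accepted the terms of service.
fn can_issue_llm_token(accepted_tos_at: Option<chrono::DateTime<chrono::Utc>>) -> bool {
    accepted_tos_at.is_some()
}
```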
Release Notes:
- N/A
This adds the requirement for users to accept the terms of service the
first time they send a message with the Cloud provider.
Once this is out and in a nightly, we need to add the check to the
server side too, to authenticate access to the models.
Demo:
https://github.com/user-attachments/assets/0edebf74-8120-4fa2-b801-bb76f04e8a17
Release Notes:
- N/A
This prevents users from accessing other models, such as OpenAI's GPT-4
or Google's Gemini-Pro.
Staff members can still access all models.
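A sketch of that kind of gate (the constant and function names here are
hypothetical, not the service's actual code):
```rust
// Hypothetical allowlist; non-staff users are limited to these models.
const ALLOWED_MODELS_FOR_NON_STAFF: &[&str] = &["claude-3-5-sonnet"];

fn is_model_allowed(is_staff: bool, model: &str) -> bool {
    // Staff can use any model; everyone else is checked against the allowlist.
    is_staff || ALLOWED_MODELS_FOR_NON_STAFF.contains(&model)
}
```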
Co-authored-by: Thorsten <thorsten@zed.dev>
Release Notes:
- N/A
---------
Co-authored-by: Thorsten <thorsten@zed.dev>
This PR introduces a separate backend service for making LLM calls.
It exposes an HTTP interface that can be called by Zed clients. To call
these endpoints, the client must provide a `Bearer` token. These tokens
are issued/refreshed by the collab service over RPC.
We're adding this in a backwards-compatible way. Right now the access
tokens can only be minted for Zed staff, and calling this separate LLM
service is behind the `llm-service` feature flag (which is not
automatically enabled for Zed staff).
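As a sketch of the client side of this flow (the URL, path, and payload
shape are assumptions for illustration, not the service's actual API):
```rust
// Call the LLM service over HTTP with a token minted by collab over RPC.
async fn complete(
    llm_token: &str,
    body: serde_json::Value,
) -> anyhow::Result<reqwest::Response> {
    let client = reqwest::Client::new();
    let response = client
        .post("https://llm.example.com/completion") // hypothetical endpoint
        .bearer_auth(llm_token) // sends `Authorization: Bearer <token>`
        .json(&body)
        .send()
        .await?;
    Ok(response)
}
```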
Release Notes:
- N/A
---------
Co-authored-by: Marshall <marshall@zed.dev>
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
This PR changes how we report the `geoip_country_code` in the tracing
spans.
I wasn't seeing it come through in the logs, and I think it was because
we didn't declare the field on the initial span.
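For context, `tracing` only records fields that were declared when the span
was created; recording a field later without declaring it up front silently
drops it. A minimal sketch of the pattern:
```rust
use tracing::field::Empty;

fn handle_request() {
    // Declare the field (as `Empty`) when creating the span, so that a later
    // `record` call actually shows up in the logs.
    let span = tracing::info_span!("request", geoip_country_code = Empty);
    let _guard = span.enter();

    // ...later, once the value is known:
    span.record("geoip_country_code", "US");
}
```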
Release Notes:
- N/A
This PR updates the rate limits to adapt based on the user's current
plan.
For the Free plan rate limits, I took one-tenth of the existing rate
limits (which are now the Pro limits). We can adjust as needed.
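The shape of that logic, as a sketch (plan names and numbers are
illustrative; the real limits live in the service):
```rust
// Hypothetical per-plan limits; Free is one-tenth of Pro.
enum Plan {
    Free,
    Pro,
}

fn requests_per_minute(plan: &Plan) -> u32 {
    match plan {
        Plan::Pro => 100, // the previous (now Pro) limit, illustrative
        Plan::Free => 10, // one-tenth of the Pro limit
    }
}
```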
Release Notes:
- N/A
---------
Co-authored-by: Max <max@zed.dev>
This PR updates the user menu to show the user's current plan.
Also adds a new RPC message to send this information down to the client
when Zed starts.
This is behind a feature flag.
Release Notes:
- N/A
---------
Co-authored-by: Max <max@zed.dev>
In this pull request, we change the zed.dev protocol so that we pass the
raw JSON for the specified provider directly to our server. This avoids
the need to define a protobuf message that's a superset of all these
formats.
@bennetbo: We also changed the settings for `available_models` under
`zed.dev` to be a flat format, because the nesting seemed too confusing.
Can you help us upgrade the local provider configuration to be
consistent with this? We do whatever we need to do when parsing the
settings to make this simple for users, even if it's a bit more complex
on our end. We want to use versioning to avoid breaking existing users,
but need to keep making progress.
```json
"zed.dev": {
"available_models": [
{
"provider": "anthropic",
"name": "some-newly-released-model-we-havent-added",
"max_tokens": 200000
}
]
}
```
Release Notes:
- N/A
---------
Co-authored-by: Nathan <nathan@zed.dev>
This PR upgrades `async-tungstenite` to v17.0.3.
We previously attempted upgrading `async-tungstenite` in #15039, but
broke authentication with collab in the process.
Upon further investigation, I determined that the root cause was this
change in `tungstenite` v0.17.0:
> Overhaul of the client's request generation process. Now the users are
able to pass the constructed `http::Request` "as is" to
`tungstenite-rs`, letting the library to check the correctness of the
request and specifying their own headers (including its own key if
necessary). No changes for those ones who used the client in a normal
way by connecting using a URL/URI (most common use-case).
We _were_ relying on passing an `http::Request` directly to
`tungstenite`, meaning we did not benefit from the changes to the common
path (of passing a URL/URI).
This meant that—due to changes in `tungstenite`—we were now missing the
`Sec-WebSocket-Key` header that `tungstenite` would otherwise set for
us.
Since we were only passing a custom `http::Request` to set headers, our
approach has been adjusted to construct the initial WebSocket request
using `tungstenite`'s `IntoClientRequest::into_client_request` and then
modifying the request to set our additional desired headers.
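In code, the adjusted approach looks roughly like this (a sketch against
`tungstenite`'s public API; the header is an example):
```rust
use async_tungstenite::tungstenite::client::IntoClientRequest;
use async_tungstenite::tungstenite::handshake::client::Request;
use async_tungstenite::tungstenite::http::HeaderValue;

fn build_websocket_request(url: &str, token: &str) -> anyhow::Result<Request> {
    // Let tungstenite construct the request, so it fills in the handshake
    // headers (including `Sec-WebSocket-Key`) for us...
    let mut request = url.into_client_request()?;
    // ...then layer our additional headers on top.
    request.headers_mut().insert(
        "Authorization",
        HeaderValue::from_str(&format!("Bearer {token}"))?,
    );
    Ok(request)
}
```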
Release Notes:
- N/A
Follow-up of https://github.com/zed-industries/zed/pull/12909
* Fully preserve LSP data when sending it via collab, and only strip it
on the client.
* Avoid extra custom request handlers, and extend the multi-LSP-server
query protocol instead.
Release Notes:
- N/A
This pull request introduces collaboration for the assistant panel by
turning `Context` into a CRDT. `ContextStore` is responsible for sending
and applying operations, as well as synchronizing changes that were
missed while the connection was lost.
Contexts are shared on a per-project basis, and only the host can share
them for now. Shared contexts can be accessed via the `History` tab in
the assistant panel.
<img width="1819" alt="image"
src="https://github.com/zed-industries/zed/assets/482957/c7ae46d2-cde3-4b03-b74a-6e9b1555c154">
Please note that this doesn't implement following yet, which is
scheduled for a subsequent pull request.
Release Notes:
- N/A
Here is an image of me now getting assistant responses!

I ended up adding a function that skips serializing the `tool_calls`
response when it is either null or empty, preserving the behavior of the
existing implementation (which skipped deserializing an empty vec). I'm
sorta a noob, so I'm happy to make changes if this isn't done correctly,
although it does work and it does pass the tests!
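With serde, that pattern looks roughly like this (the struct and field
names are hypothetical; the skip logic is the point):
```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct ToolCall {
    id: String,
}

#[derive(Serialize, Deserialize)]
struct MessageDelta {
    // Skip `tool_calls` during serialization when it is null *or* empty,
    // preserving the old behavior for providers that reject `[]`.
    #[serde(default, skip_serializing_if = "is_none_or_empty")]
    tool_calls: Option<Vec<ToolCall>>,
}

fn is_none_or_empty(calls: &Option<Vec<ToolCall>>) -> bool {
    calls.as_ref().map_or(true, |calls| calls.is_empty())
}
```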
Thanks a bunch to [amtoaer](https://github.com/amtoaer) for pointing me
in the direction on how to fix it.
Release Notes:
- Fixed some responses being dropped from OpenAI-compatible providers
([#13741](https://github.com/zed-industries/zed/issues/13741)).
This PR adds support for [linked editing of
ranges](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_linkedEditingRange),
which in short means that editing one part of a file can now change
related parts in that same file. Think of automatically renaming
HTML/TSX closing tags when the opening one is changed.
TODO:
- [x] proto changes
- [x] Allow disabling linked editing ranges on a per language basis.
Fixes #4535
Release Notes:
- Added support for linked editing ranges LSP request. Editing opening
tags in HTML/TSX files (with vtsls) performs the same edit on the
closing tag as well (and vice versa). It can be turned off on a language-by-language basis with the following setting:
```
"languages": {
"HTML": {
"linked_edits": true
},
}
```
---------
Co-authored-by: Bennet <bennet@zed.dev>
#### Lazily loading channels
I've added a new RPC message called `SubscribeToChannels` that the
client now sends when it first renders the channels panel. This causes
the server to load the channels for that client and send updates to that
client as channels are updated. Previously, the server did this upon
connection.
For backwards compatibility, the server inspects each client's version
and continues to do this work immediately for old clients.
#### Optimizations
Running collab locally, I realized that upon connecting, we were running
two concurrent transactions that *both* queried the `channel_members`
table: one for loading your channels, and one for loading your channel
invites. I've combined these into one query. In addition, we now use a
join to load channels + members, as opposed to two separate queries.
Even though `where id in` is efficient, it adds an extra round trip to
the database, keeping the transaction open for slightly longer.
Release Notes:
- N/A
Release Notes:
- Added support for interacting with Claude in the assistant panel. You
can enable it by adding the following to your `settings.json`:
```json
"assistant": {
"version": "1",
"provider": {
"name": "anthropic"
}
}
```
This PR adds a setting to allow configuring the low-speed timeout for
the Assistant when using the OpenAI provider.
The `low_speed_timeout_in_seconds` setting accepts the number of seconds
the HTTP client may stay below a minimum speed limit (currently set to
100 bytes/second) before it times out.
```json
{
  "assistant": {
    "version": "1",
    "provider": { "name": "openai", "low_speed_timeout_in_seconds": 60 }
  }
}
```
This should help the case where the `openai` provider is being used with
a local model that requires higher timeouts.
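This maps directly onto curl-style HTTP clients; a sketch assuming an
`isahc`-style builder (Zed's actual client wiring may differ):
```rust
use std::time::Duration;

use isahc::config::Configurable;

// Abort a request whose throughput stays below 100 bytes/second for the
// configured number of seconds.
fn build_client(low_speed_timeout_in_seconds: u64) -> isahc::HttpClient {
    isahc::HttpClient::builder()
        .low_speed_timeout(100, Duration::from_secs(low_speed_timeout_in_seconds))
        .build()
        .expect("failed to build HTTP client")
}
```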
Issue: https://github.com/zed-industries/zed/issues/9913
Release Notes:
- Added a `low_speed_timeout_in_seconds` setting to the Assistant's
OpenAI provider
([#9913](https://github.com/zed-industries/zed/issues/9913)).