Closes [#13107](https://github.com/zed-industries/zed/issues/13107)
Enabled pull diagnostics by default for language servers that declare
support in the corresponding capabilities. The following settings can be
used to disable pulling:
```json
"diagnostics": {
  "lsp_pull_diagnostics_debounce_ms": null
}
```
Release Notes:
- Added support for the LSP `textDocument/diagnostic` command.
# Brief
This is a draft PR that implements the LSP `textDocument/diagnostic`
command. The goal is to receive your feedback and establish further
steps towards fully implementing this command. I tried to reuse
existing methods and structures to ensure:
1. The existing functionality works as before.
2. There is no interference between the diagnostics sent by a server and
the diagnostics requested by a client.
The current implementation is done via a new LSP command
`GetDocumentDiagnostics` that is sent when a buffer is saved and when a
buffer is edited. There is a new method called `pull_diagnostic` that is
called for such events. It has a debounce to ensure we don't spam a
server with commands every time the buffer is edited. We probably don't
need the debounce when the buffer is saved.
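For illustration, here is a generic debounce sketch of that flow. It
uses tokio and a placeholder `send_document_diagnostic_request` helper
rather than Zed's actual executor and LSP plumbing:
```rust
use std::time::Duration;
use tokio::{task::JoinHandle, time::sleep};

struct DiagnosticPuller {
    debounce: Duration,
    pending: Option<JoinHandle<()>>,
}

impl DiagnosticPuller {
    fn new(debounce: Duration) -> Self {
        Self { debounce, pending: None }
    }

    /// Called on every buffer edit; any previously scheduled pull is dropped,
    /// so the request is only sent once the buffer has been quiet.
    fn buffer_edited(&mut self, uri: String) {
        if let Some(task) = self.pending.take() {
            task.abort();
        }
        let debounce = self.debounce;
        self.pending = Some(tokio::spawn(async move {
            sleep(debounce).await;
            send_document_diagnostic_request(&uri).await;
        }));
    }
}

/// Placeholder for issuing the LSP `textDocument/diagnostic` request.
async fn send_document_diagnostic_request(uri: &str) {
    println!("pulling diagnostics for {uri}");
}

#[tokio::main]
async fn main() {
    let mut puller = DiagnosticPuller::new(Duration::from_millis(50));
    puller.buffer_edited("file:///example.rs".into());
    puller.buffer_edited("file:///example.rs".into()); // restarts the timer
    sleep(Duration::from_millis(100)).await; // let the debounced pull fire
}
```
On save, the same request could be issued without the timer.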
All in all, the goal is basically to get your feedback and ensure I am
on the right track. Thanks!
## References
1. https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_pullDiagnostics
## In action
You can clone any Ruby repo, since `ruby-lsp` supports only pull
diagnostics.
Steps to reproduce:
1. Clone this repo: https://github.com/vitallium/stimulus-lsp-error-zed
2. Install Ruby (via `asdf` or `mise`).
3. Install Ruby gems via `bundle install`.
4. Install Ruby LSP with `gem install ruby-lsp`.
5. Check out this PR and build Zed.
6. Open any file and start editing to see diagnostics in real time.
https://github.com/user-attachments/assets/0ef6ec41-e4fa-4539-8f2c-6be0d8be4129
---------
Co-authored-by: Kirill Bulatov <mail4score@gmail.com>
Co-authored-by: Kirill Bulatov <kirill@zed.dev>
Previously, the vision request header was only set if the last message
in a thread contained an image. This caused 400 errors from the Copilot
API when sending follow-up messages in a thread that contained images in
earlier messages.
Modified the `is_vision_request` check to scan all messages in a thread
for image content instead of just the last one, ensuring the proper
header is set for the entire conversation.
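A minimal sketch of that check, with hypothetical `Message`/`MessageContent`
types rather than the actual Copilot provider code:
```rust
enum MessageContent {
    Text(String),
    Image { data: Vec<u8> },
}

struct Message {
    content: Vec<MessageContent>,
}

/// The vision header should be set if any message in the thread contains an
/// image, not just the last one.
fn is_vision_request(messages: &[Message]) -> bool {
    messages.iter().any(|message| {
        message
            .content
            .iter()
            .any(|part| matches!(part, MessageContent::Image { .. }))
    })
}

fn main() {
    let thread = vec![
        Message { content: vec![MessageContent::Image { data: vec![0u8; 4] }] },
        Message { content: vec![MessageContent::Text("follow-up".into())] },
    ];
    // The image in the earlier message still marks the whole conversation
    // as a vision request.
    assert!(is_vision_request(&thread));
}
```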
Added a unit test to verify all cases function correctly.
Release Notes:
- Fixed a GitHub Copilot Chat provider error when sending follow-up
messages in threads containing images
Issues: #30994
I've implemented an important optimisation in response to GitHub
Copilot's recent rate limit on concurrent Vision API calls. Previously,
our system defaulted to sending the vision header as `true` for all API
calls. To prevent unnecessary calls and adhere to the new limits, I've
updated our logic: the vision header is now only sent if the current
message is a vision message, specifically when the preceding message
includes an image.
Prompt used to reproduce and verify the fix: `Give me a context for my
agent crate about. Browse my repo.`
Release Notes:
- copilot: Set Copilot-Vision-Request header based on message content
https://github.com/zed-industries/zed/issues/30972 brought up another
case where our context is not enough to track the actual source of the
issue: we get a general top-level error without an inner error.
The reason for this was `.ok_or_else(|| anyhow!("failed to read HEAD
SHA"))?;` at the top level.
The PR finally reworks the way we use anyhow to reduce such issues (or
at least make it simpler to bubble them up later in a fix).
On top of that, it uses a few more anyhow methods for better
readability; a short sketch of these patterns follows the list:
* `.ok_or_else(|| anyhow!("..."))`, `map_err`, and other similar error
conversion/option reporting cases are replaced with `context` and
`with_context` calls
* in addition, various `anyhow!("failed to do ...")` messages are
replaced with `.context("Doing ...")` messages to remove the parasitic
`failed to` text
* `anyhow::ensure!` is used instead of `if ... { return Err(...); }`
calls
* `anyhow::bail!` is used instead of `return Err(anyhow!(...));`
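The sketch below shows these patterns side by side, using a hypothetical
`read_head_sha` example rather than the actual git code:
```rust
use anyhow::{bail, ensure, Context as _, Result};

fn read_head_sha(head: Option<&str>) -> Result<String> {
    // Instead of `.ok_or_else(|| anyhow!("failed to read HEAD SHA"))?`:
    let sha = head.context("reading HEAD SHA")?;

    // Instead of `if ... { return Err(anyhow!(...)); }`:
    ensure!(sha.len() == 40, "unexpected HEAD SHA length: {}", sha.len());

    // Instead of `return Err(anyhow!(...));`:
    if sha.chars().any(|c| !c.is_ascii_hexdigit()) {
        bail!("HEAD SHA is not hexadecimal");
    }

    Ok(sha.to_string())
}

fn main() {
    // The `context` message is attached on top of the underlying error, so
    // the actual source is preserved when the error bubbles up.
    if let Err(error) = read_head_sha(None) {
        println!("{error:#}");
    }
}
```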
Release Notes:
- N/A
This is very basic support for images in tool calls. There are a number
of other TODOs before this is really a first-class supported feature, so
I'm not adding any release notes for it; for now, this PR just makes it
so that if `read_file` tries to read a PNG (which has come up in
practice), it at least correctly sends the image to Anthropic instead of
mishandling it.
This also lays the groundwork for future PRs for more first-class
support for images in tool calls across more image file formats and LLM
providers.
Release Notes:
- N/A
---------
Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Agus Zubiaga <agus@zed.dev>
Problem Statement:
Support for image analysis (vision) is currently restricted to Anthropic
and Gemini models. This limits users who wish to leverage vision
capabilities available in other models, such as Copilot, for tasks like
attaching image context within the agent message editor.
Proposed Change:
This PR extends vision support to Copilot models that already have
vision capabilities. This integration allows users within Zed to attach
and analyze images using supported Copilot models via the agent message
editor.
Scope Limitation:
This PR does not implement controls within the message editor to ensure
that image context (e.g., through copy-paste or attachment) is enabled
or prompted only when a vision-supported model is active. Long term, the
message editor should have access to each model's vision capabilities
and prevent users from attaching images, either by greying out the
context with a note that it's not supported, or by blocking it through
both copy-paste and file/directory search.
Closes #30076
Release Notes:
- Add vision support for Copilot Chat models
---------
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
I noticed the discussion in #28881, and had thought of exactly the same
thing a few days prior.
This implementation should preserve existing functionality fairly well.
I've added a dependency (`serde_with`) to allow the deserializer to skip
models which cannot be deserialized, which could occur if, for instance,
a future provider is added. Without this modification, such a change
could break deserialization of all models. If extra dependencies aren't
desired, a manual implementation could be used instead.
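A rough sketch of that skip-on-error deserialization using
`serde_with::VecSkipError`; the `ModelsResponse`/`Model` shapes below are
illustrative, not Copilot's actual API schema:
```rust
use serde::Deserialize;
use serde_with::{serde_as, VecSkipError};

#[serde_as]
#[derive(Debug, Deserialize)]
struct ModelsResponse {
    // Entries that fail to deserialize (e.g. a model from an unknown future
    // provider) are skipped instead of failing the whole response.
    #[serde_as(as = "VecSkipError<_>")]
    data: Vec<Model>,
}

#[derive(Debug, Deserialize)]
struct Model {
    id: String,
    supports_tools: bool,
}

fn main() {
    let json = r#"{"data": [
        {"id": "claude-3.7-sonnet", "supports_tools": true},
        {"id": "mystery-model"}
    ]}"#;
    let response: ModelsResponse = serde_json::from_str(json).unwrap();
    // Only the fully deserializable entry remains.
    assert_eq!(response.data.len(), 1);
    println!("{:?}", response.data[0].id);
}
```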
- Closes #29369
Release Notes:
- Dynamically detect available Copilot Chat models, including all models
with tool support
---------
Co-authored-by: AidanV <aidanvanduyne@gmail.com>
Co-authored-by: imumesh18 <umesh4257@gmail.com>
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
Co-authored-by: Agus Zubiaga <hi@aguz.me>
Also:
* Makes sign out show status notifications and errors.
* Reinstall now prompts for sign-in after start.
Addresses some of #29250, but not all of it.
Release Notes:
- N/A
* Adds a fast / cheaper model to providers and defaults thread
summarization to this model. Initial motivation for this was that
https://github.com/zed-industries/zed/pull/29099 would cause these
requests to fail when used with a thinking model. It doesn't seem
correct to use a thinking model for summarization.
* Skips system prompt, context, and thinking segments.
* If tool use is happening, allows 2 tool uses + one more agent response
before summarizing.
The downside is that there was potential for some prefix cache reuse
before, especially for title summarization (thread summarization omitted
tool results and so would not share a prefix for those). This seems
fine, as these requests should typically be fairly small. Even for full
thread summarization, skipping all tool use / context should greatly
reduce token use.
Release Notes:
- N/A
Release Notes:
- Add support for OpenAI o3 and o4-mini models via OpenAI API and
Copilot Chat providers.
---------
Co-authored-by: Peter Tripp <peter@zed.dev>
Release Notes:
- Add support for OpenAI GPT-4.1 via Copilot Chat and OpenAI API
---------
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
This PR adds tool calling support for GitHub Copilot Chat models.
Currently only supports the Claude family of models.
Release Notes:
- agent: Added tool calling support for Claude models in GitHub Copilot
Chat.
---------
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
This adds a "workspace-hack" crate, see
[mozilla's](https://hg.mozilla.org/mozilla-central/file/3a265fdc9f33e5946f0ca0a04af73acd7e6d1a39/build/workspace-hack/Cargo.toml#l7)
for a concise explanation of why this is useful. For us in practice this
means that if I were to run all the tests (`cargo nextest r
--workspace`) and then `cargo r`, all the deps from the previous cargo
command will be reused. Before this PR it would rebuild many deps due to
resolving different sets of features for them. For me this frequently
caused long rebuilds when things "should" already be cached.
To avoid manually maintaining our workspace-hack crate, we will use
[cargo hakari](https://docs.rs/cargo-hakari) to update the build files
when there's a necessary change. I've added a step to CI that checks
whether the workspace-hack crate is up to date, and instructs you to
re-run `script/update-workspace-hack` when it fails.
Finally, to make sure that people can still depend on crates in our
workspace without pulling in all the workspace deps, we use a `[patch]`
section, following [hakari's
instructions](https://docs.rs/cargo-hakari/0.9.36/cargo_hakari/patch_directive/index.html).
One possible follow-up task would be making guppy use our
`rust-toolchain.toml` instead of having to duplicate that list in its
config; I opened an issue for that upstream: guppy-rs/guppy#481.
TODO:
- [x] Fix the extension test failure
- [x] Ensure the dev dependencies aren't being unified by Hakari into
the main dependencies
- [x] Ensure that the remote-server binary continues to not depend on
LibSSL
Release Notes:
- N/A
---------
Co-authored-by: Mikayla <mikayla@zed.dev>
Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
This PR adds `completions.lsp_insert_mode` and effectively changes the
default from `"replace"` to `"replace_suffix"`, which automatically
detects whether to use the LSP `replace` range instead of the `insert`
range.
`"replace_suffix"` was chosen as the default because it's more
conservative than `"replace_subsequence"`, considering that deleting
text is usually faster and less disruptive than having to rewrite a long
replaced word.
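A rough sketch (hypothetical helper functions, not Zed's actual
completion code) of the two heuristics these modes use:
```rust
/// `"replace_suffix"`: use the LSP `replace` range when the text after the
/// cursor is a suffix of the completion.
fn use_replace_for_suffix(completion: &str, after_cursor: &str) -> bool {
    completion.ends_with(after_cursor)
}

/// `"replace_subsequence"`: use the `replace` range when the text around the
/// cursor appears, in order, inside the completion (a fuzzy-match-like check).
fn use_replace_for_subsequence(completion: &str, around_cursor: &str) -> bool {
    let mut completion_chars = completion.chars();
    around_cursor
        .chars()
        .all(|needle| completion_chars.any(|c| c == needle))
}

fn main() {
    // Text after the cursor is "name", which is a suffix of "user_name".
    assert!(use_replace_for_suffix("user_name", "name"));
    // "Name" is not a suffix, so "replace_suffix" falls back to inserting.
    assert!(!use_replace_for_suffix("user_name", "Name"));
    // "unm" is a subsequence of "user_name", so "replace_subsequence" replaces.
    assert!(use_replace_for_subsequence("user_name", "unm"));
}
```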
Fixes #27197
Fixes #23395 (again)
Fixes #4816 (again)
Release Notes:
- Added new setting `completions.lsp_insert_mode` that changes what will
be replaced when an LSP completion is accepted. The default is
`"replace_suffix"`, but it accepts 4 values: `"insert"` for replacing
only the text before the cursor, `"replace"` for replacing the whole
text, `"replace_suffix"` that acts like `"replace"` when the text after
the cursor is a suffix of the completion, and `"replace_subsequence"`
that acts like `"replace"` when the text around your cursor is a
subsequence of the completion (similar to a fuzzy match). Check [the
documentation](https://zed.dev/docs/configuring-zed#LSP-Insert-Mode) for
more information.
---------
Co-authored-by: João Marcos <marcospb19@hotmail.com>
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
This is the core change:
https://github.com/zed-industries/zed/pull/26758/files#diff-044302c0d57147af17e68a0009fee3e8dcdfb4f32c27a915e70cfa80e987f765R1052
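A simplified sketch of that signature change; the functions below are
illustrative, not GPUI's real `spawn` methods, and async closures
require Rust 1.85+:
```rust
use std::future::Future;

// Before: spawn-style methods take a closure that returns a future.
fn spawn_with_future<Fut>(f: impl FnOnce() -> Fut) -> Fut
where
    Fut: Future<Output = ()>,
{
    f()
}

// After: they take an async closure directly via the `AsyncFn*` traits.
fn spawn_with_async_fn(f: impl AsyncFnOnce()) -> impl Future<Output = ()> {
    f()
}

fn main() {
    // Both calls only construct the future here; a real spawn method would
    // hand it to an executor to be polled.
    let message = String::from("hello");
    let _old_style = spawn_with_future(move || async move {
        println!("{message}");
    });
    let _new_style = spawn_with_async_fn(async || {
        println!("spawned via an async closure");
    });
}
```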
TODO:
- [x] Use AsyncFn instead of Fn() -> Future in GPUI spawn methods
- [x] Implement it in the whole app
- [x] Implement it in the debugger
- [x] Glance at the RPC crate, and see if those boxed-future methods can
be switched over. Answer: they can't directly, as you can't make an
AsyncFn* into a trait object. There are ways around that, but they're
all more complex than just keeping the code as is.
- [ ] Fix platform specific code
Release Notes:
- N/A
When Copilot is not being used as the edit prediction provider and you
open a fresh Zed instance, we don't run the Copilot language server.
This is because Copilot Chat is handled purely via an OAuth token and
doesn't require the language server.
In this case, if you click sign out, instead of asking the language
server to sign out (which isn't running), we can manually clear the
config directory, which contains the OAuth tokens. We already watch this
directory, and if the token is not found, we update the sign-in status.
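A rough sketch of that fallback sign-out, with illustrative paths and
helper names rather than Zed's actual Copilot code:
```rust
use std::{fs, io, path::Path};

/// With no Copilot language server running, sign out by deleting the OAuth
/// token files; the existing directory watcher then updates the sign-in
/// status once the token can no longer be found.
fn sign_out_without_language_server(copilot_config_dir: &Path) -> io::Result<()> {
    for file in ["hosts.json", "apps.json"] {
        match fs::remove_file(copilot_config_dir.join(file)) {
            Ok(()) => {}
            // Either file may legitimately not exist.
            Err(error) if error.kind() == io::ErrorKind::NotFound => {}
            Err(error) => return Err(error),
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // Illustrative path; the real location is platform-dependent.
    sign_out_without_language_server(Path::new("/home/user/.config/github-copilot"))
}
```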
Release Notes:
- N/A
Closes #25883
This PR allows you to use copilot chat for assistant without setting
copilot as the edit prediction provider.
[copilot.webm](https://github.com/user-attachments/assets/fecfbde1-d72c-4c0c-b080-a07671fb846e)
Todos:
- [x] Remove redundant "copilot" key from settings
- [x] Do not disable copilot LSP when `edit_prediction_provider` is not
set to `copilot`
- [x] Start copilot LSP when:
- [x] `edit_prediction_provider` is set to `copilot`
- [x] Copilot sign in clicked from assistant settings
- [x] Handle the flicker for the frame after the LSP starts but before
sign-in completes, caused by the signed-out status
- [x] Fixed this by adding an intermediate awaiting-sign-in state to the
sign-out enum
- [x] Make the cancel button sign out from `copilot` (existing bug)
- [x] Make modal dismissal sign out if not in the signed-in state
(existing bug)
Release Notes:
- You can now sign into Copilot from assistant settings without making
it your edit prediction provider. This is useful if you want to use
Copilot chat while keeping a different provider, like Zed, for
predictions.
- Removed the `copilot` key from `features` in settings. Use
`edit_prediction_provider` instead.
Release Notes:
- Multibuffers now use less vertical space for excerpt boundaries.
Additionally, the expand up/down arrows are hidden at the start and end
of the buffers.
---------
Co-authored-by: Nate Butler <iamnbutler@gmail.com>
Co-authored-by: Zed AI <claude-3.5-sonnet@zed.dev>
Closes https://github.com/zed-industries/zed/issues/4957
https://github.com/user-attachments/assets/ff491378-376d-48ec-b552-6cc80f74200b
Adds `"completions"` language settings section, to configure LSP and
word completions per language.
Word-based completions may be turned on never, always (returned along
with the LSP ones), and as a fallback if no LSP completion items were
returned.
Future work:
* words are matched with the same fuzzy matching code as the rest of the
completions; this might worsen the completion menu's usability even
more, and will require work on better completion sorting
* completion entries currently have no icons or other ways to indicate
whether they come from LSP, from word search, or from something else
* we may work with language scopes more intelligently, grouping words by
scope and distinguishing them during completions
Release Notes:
- Added support for word-based completions
---------
Co-authored-by: Max Brunsfeld <max@zed.dev>
Closes #25594
This PR fixes an issue where signing into Copilot required restarting
Zed.
Copilot depends on an OAuth token that comes from either `hosts.json` or
`apps.json`. Initially, neither file exists. If neither file is found,
we fall back to watching `hosts.json` for updates. However, if the auth
process creates `apps.json`, we won't receive updates from it, causing
the UI to remain outdated.
This PR fixes that by watching the parent `github-copilot` directory
instead, which will always contain one of those files along with an
additional version file.
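A standalone sketch of that approach, using the `notify` crate for
illustration (Zed's actual file-watching machinery differs):
```rust
use std::path::Path;

use notify::{recommended_watcher, Event, RecursiveMode, Watcher};

fn main() -> notify::Result<()> {
    // Illustrative path; the real location is platform-dependent.
    let config_dir = Path::new("/home/user/.config/github-copilot");

    let mut watcher = recommended_watcher(|event: notify::Result<Event>| {
        if let Ok(event) = event {
            // Re-check `hosts.json` / `apps.json` whenever anything in the
            // directory changes, and update the sign-in status accordingly.
            println!("copilot config changed: {:?}", event.paths);
        }
    })?;

    // Watching the directory rather than a single file means newly created
    // files (e.g. `apps.json` appearing during auth) are picked up too.
    watcher.watch(config_dir, RecursiveMode::NonRecursive)?;

    // ... keep the watcher alive for the lifetime of the app ...
    std::thread::park();
    Ok(())
}
```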
I have tested this on macOS and Linux Wayland.
Release Notes:
- Fixed an issue where signing into Copilot required restarting Zed.
Closes: #25556
We were always comparing `disabled_globs` against the relative file
path; we'll now use the absolute path if the glob is also absolute.
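A hypothetical sketch of that rule, using the `glob` crate for
illustration:
```rust
use std::path::Path;

use glob::Pattern;

/// Match absolute patterns against the absolute path and relative patterns
/// against the worktree-relative path.
fn is_disabled(pattern: &Pattern, abs_path: &Path, rel_path: &Path) -> bool {
    let candidate = if Path::new(pattern.as_str()).is_absolute() {
        abs_path
    } else {
        rel_path
    };
    pattern.matches_path(candidate)
}

fn main() {
    let relative = Pattern::new("secrets/*.pem").unwrap();
    let absolute = Pattern::new("/home/user/project/secrets/*.pem").unwrap();
    let abs_path = Path::new("/home/user/project/secrets/key.pem");
    let rel_path = Path::new("secrets/key.pem");

    assert!(is_disabled(&relative, abs_path, rel_path));
    assert!(is_disabled(&absolute, abs_path, rel_path));
}
```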
Release Notes:
- Support absolute globs in `edit_predictions.disabled_globs`
Closes #6701 (one of the top-ranking issues as of writing)
Adds the ability to specify an HTTP/HTTPS proxy to route Copilot code
completion API requests through. This should fix copilot functionality
in restricted network environments (where such a proxy is required) but
also opens up the ability to point copilot code completion requests at
your own local LLM, using e.g.:
- https://github.com/jjleng/copilot-proxy
- https://github.com/bernardo-bruning/ollama-copilot/tree/master
External MITM-proxy tools permitting, this can serve as a stop-gap to
allow local LLM code completion in Zed until a proper OpenAI-compatible
local code completions provider is implemented. With this in mind, in
this PR I've added separate `settings.json` variables to configure a
proxy server _specific to the code completions provider_ instead of
using the global `proxy` setting, to allow for cases like this where we
_only_ want to proxy e.g. the Copilot requests, but not all outgoing
traffic from the application.
Currently, two new settings are added:
- `inline_completions.copilot.proxy`: Proxy server URL (HTTP and HTTPS
schemes supported)
- `inline_completions.copilot.proxy_no_verify`: Whether to disable
certificate verification through the proxy
Example:
```js
"features": {
"inline_completion_provider": "copilot"
},
"show_completions_on_input": true,
// New:
"inline_completions": {
"copilot": {
"proxy": "http://example.com:15432",
"proxy_no_verify": true
}
}
```
Release Notes:
- Added the ability to specify an HTTP/HTTPS proxy for Copilot.
---------
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
Done automatically with
> ast-grep -p '$A.background_executor().spawn($B)' -r
'$A.background_spawn($B)' --update-all --globs "\!crates/gpui"
Followed by:
* `cargo fmt`
* Unexpectedly needing to remove some trailing whitespace
* Manually adding imports of `gpui::{AppContext as _}`, which provides
`background_spawn`
* Adding `AppContext as _` to existing uses of `AppContext`
Release Notes:
- N/A
This PR updates the edit predictions to include the prediction ID
returned from the server on the resulting telemetry events indicating
whether the prediction was accepted or discarded.
The `prediction_id` on the events can then be correlated with the
`request_id` on the server-side prediction events.
Release Notes:
- N/A
- [x] snake_case keymap properties
- [x] flatten actions
- [x] keymap migration + notification
- [x] settings migration + notification
- [x] inline completions -> edit predictions
### future:
- keymap notification doesn't show up on startup, only on keymap save.
This is an existing bug in Zed and will be addressed in a separate PR.
Release Notes:
- Added a notification for deprecated settings and keymaps, allowing you
to migrate them with a single click. A backup of your existing keymap
and settings will be created in your home directory.
- Modified some keymap actions and settings for consistency.
---------
Co-authored-by: Piotr Osiewicz <piotr@zed.dev>
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>