This PR dedupes the `AssistantSettings` so we can use the same settings
for both Assistant1 and Assistant2.
We originally forked them so we could change the Assistant2 settings
freely, but given our rollout strategy for the new Assistant, I don't
think that makes sense.
This also fixes the issue where the JSON language server would show a
"Matches multiple schemas when only one must validate" warning in
`settings.json`.
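For context, this class of JSON-schema warning typically appears when the schema for a key is a `oneOf` whose alternatives overlap, so a valid document matches more than one of them; deduplicating the settings leaves a single schema for `assistant`. As an illustrative sketch (not the actual Zed types), a version-tagged settings enum keeps the alternatives mutually exclusive:

```rust
use serde::Deserialize;

// Illustrative only: with an internally tagged enum, the "version" field
// discriminates the variants, so a settings document can only ever match
// one schema alternative.
#[derive(Deserialize)]
#[serde(tag = "version")]
enum AssistantSettingsContent {
    #[serde(rename = "1")]
    V1 { provider: Option<String> },
    #[serde(rename = "2")]
    V2 { default_model: Option<String> },
}
```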
Closes #23171.
Release Notes:
- Fixed the "Matches multiple schemas when only one must validate"
warning for the `assistant` setting.
This reverts #20824 and #20899. After adding them last week, we concluded
that the hints are too distracting in everyday use; see #21128 for more
details.
Release Notes:
- N/A
This PR adds a new `language_models` crate to house the various language
model providers.
By extracting the provider definitions out of `language_model`, we're
able to remove `language_model`'s dependency on `editor`, which improves
incremental compilation when changing `editor`.
Release Notes:
- N/A
Co-Authored-by: Thorsten <thorsten@zed.dev>
Co-Authored-by: Antonio <antonio@zed.dev>
Screenshot:

TODO:
- [x] docs
Release Notes:
- Added inline hints that guide users on how to invoke the inline
assistant and open the assistant panel. (These hints can be disabled by
setting `{"assistant": {"show_hints": false}}`.)
---------
Co-authored-by: Thorsten <thorsten@zed.dev>
Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Thorsten Ball <mrnugget@gmail.com>
This removes the `low_speed_timeout` setting from all providers as a
response to issue #19509.
The original `low_speed_timeout` was only added as part of #9913 because
users wanted to _get rid of_ timeouts: they wanted to bump the default
timeout from 5 seconds to a lot more.
Then, in #19055, the meaning of `low_speed_timeout` changed: it became a
normal `timeout`, which is a different thing and breaks slower LLMs that
don't deliver a complete response within the configured time.
So we figured: let's remove the whole thing and replace it with a
default _connect_ timeout that makes sure we can connect to a server
within 10 seconds, but then give the server as long as it needs to
complete its response.
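As a rough sketch of the new behavior (using `reqwest` for illustration; this isn't the HTTP client wiring Zed actually uses):

```rust
use std::time::Duration;

// Fail fast if we can't reach the server at all, but place no limit on how
// long a (possibly slow) LLM takes to finish its response: reqwest applies
// no overall timeout unless one is set explicitly.
fn build_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .connect_timeout(Duration::from_secs(10))
        .build()
}
```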
Closes #19509
Release Notes:
- Removed the `low_speed_timeout` setting from LLM provider settings. It
was only ever used to _increase_ the timeout to give LLMs more time;
since we have no other use for it, we removed the setting entirely and
now give LLMs as long as they need.
---------
Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Peter Tripp <peter@zed.dev>
This PR only updates UI strings and pieces of the documentation; it
doesn't touch the actual code, which still uses terminology such as
`NewContext` for variables, actions, etc.
Release Notes:
- N/A
This changes the `/workflow` command so that instead of emitting edits
in separate steps, the user is presented with a single tab, with an
editable diff that they can apply to the buffer.
Todo
* Assistant panel
* [x] Show a patch title and a list of changed files in a block
decoration
* [x] Don't store resolved patches as state on Context. Resolve on
demand.
* [ ] Better presentation of patches in the panel
* [ ] Show a spinner while patch is streaming in
* Patches
* [x] Preserve leading whitespace in new text, auto-indent insertions
* [x] Ensure patch title is very short, to fit better in tab
* [x] Improve patch location resolution, prefer skipping whitespace over
skipping `}`
* [x] Ensure patch edits are auto-indented properly
* [ ] Apply `Update` edits via a diff between the old and new text, to
get fine-grained edits.
* Proposed changes editor
* [x] Show patch title in the tab
* [x] Add a toolbar with an "Apply all" button
* [x] Make `open excerpts` open the corresponding location in the base
buffer (https://github.com/zed-industries/zed/pull/18591)
* [x] Add an apply button above every hunk
(https://github.com/zed-industries/zed/pull/18592)
* [x] Expand all diff hunks by default
(https://github.com/zed-industries/zed/pull/18598)
* [x] Fix https://github.com/zed-industries/zed/issues/18589
* [x] Syntax highlighting doesn't work until the buffer is edited
(https://github.com/zed-industries/zed/pull/18648)
* [x] Disable LSP interaction in Proposed Changes editor
(https://github.com/zed-industries/zed/pull/18945)
* [x] No auto-indent? (https://github.com/zed-industries/zed/pull/18984)
* Prompt
* [ ] make sure old_text is unique
Release Notes:
- N/A
---------
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Richard <richard@zed.dev>
Co-authored-by: Marshall <marshall@zed.dev>
Co-authored-by: Nate Butler <iamnbutler@gmail.com>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
Release Notes:
- Added a new `assistant.inline_alternatives` setting to configure
additional models that will be used to perform inline assists in
parallel.
---------
Co-authored-by: Nathan <nathan@zed.dev>
Co-authored-by: Roy <roy@anthropic.com>
Co-authored-by: Adam <wolffiex@anthropic.com>
This is a barebones modification of the OpenAI provider code to
accommodate non-streaming completions. This is specifically for the o1
models, which do not support streaming. Tested that this is working by
running a `/workflow` with the following (arbitrarily chosen) settings:
```json
{
  "language_models": {
    "openai": {
      "version": "1",
      "available_models": [
        {
          "name": "o1-preview",
          "display_name": "o1-preview",
          "max_tokens": 128000,
          "max_completion_tokens": 30000
        },
        {
          "name": "o1-mini",
          "display_name": "o1-mini",
          "max_tokens": 128000,
          "max_completion_tokens": 20000
        }
      ]
    }
  }
}
```
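The essence of supporting a non-streaming model behind a streaming interface is to make one blocking completion request and surface the full response as a single-item stream. A minimal sketch with hypothetical types, using the `futures` crate:

```rust
use futures::future::BoxFuture;
use futures::stream::{self, BoxStream, StreamExt};

// Hypothetical: adapt a whole-response future into the streaming shape the
// rest of the assistant expects.
fn complete_as_stream(
    response: BoxFuture<'static, anyhow::Result<String>>,
) -> BoxStream<'static, anyhow::Result<String>> {
    // A single-item stream: the one-shot response is yielded as the only
    // "chunk", so downstream streaming consumers need no special casing.
    stream::once(response).boxed()
}
```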
Release Notes:
- Changed the `low_speed_timeout_in_seconds` option to `600` for the
OpenAI provider to accommodate the recent o1 model release.
---------
Co-authored-by: Peter <peter@zed.dev>
Co-authored-by: Bennet <bennet@zed.dev>
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Add `/auto` behind a feature flag that's disabled for now, even for
staff.
We've decided on a different design for context inference, but there are
parts of /auto that will be useful for that, so we want them in the code
base even if they're unused for now.
Release Notes:
- N/A
---------
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Introduce `max_output_tokens` Field for OpenAI Models
https://platform.deepseek.com/api-docs/news/news0725/#4-8k-max_tokens-betarelease-longer-possibilities
### Description
This commit introduces a new field `max_output_tokens` to the OpenAI
models, which allows specifying the maximum number of tokens that can be
generated in the output. This field is now integrated into the request
handling across multiple crates, ensuring that the output token limit is
respected during language model completions.
Changes include:
- Adding `max_output_tokens` to the `Custom` variant of the
`open_ai::Model` enum.
- Updating the `into_open_ai` method in `LanguageModelRequest` to accept
and use `max_output_tokens`.
- Modifying the `OpenAiLanguageModel` and `CloudLanguageModel`
implementations to pass `max_output_tokens` when converting requests.
- Ensuring that the `max_output_tokens` field is correctly serialized
and deserialized in relevant structures.
This enhancement provides more control over the output length of OpenAI
model responses, improving the flexibility and accuracy of language
model interactions.
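For illustration, the field rides along in the serialized request roughly like this (hypothetical struct, assuming serde; not the exact Zed types):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct CompletionRequest {
    model: String,
    // Omit the field entirely when unset, so the API's own default applies.
    #[serde(skip_serializing_if = "Option::is_none")]
    max_output_tokens: Option<u32>,
}
```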
### Related Issue
https://github.com/zed-industries/zed/pull/16358
### Screenshots / Media
N/A
### Checklist
- [x] Code compiles correctly.
- [x] All tests pass.
- [ ] Documentation has been updated accordingly.
- [ ] Additional tests have been added to cover new functionality.
### Release Notes
- Added `max_output_tokens` field to OpenAI models for controlling
output token length.
We also eliminated the `completion` crate and moved its logic into
`LanguageModelRegistry`.
Release Notes:
- N/A
---------
Co-authored-by: Nathan <nathan@zed.dev>
This adds the optional `PRESERVED_KEYS` constant to the `Settings` trait,
which allows users of the trait to specify which keys should be written
to the settings file, even if their current value matches the default
value.
That's useful for tagged settings that have, for example, a `"version"`
field that should always be present in the user settings file, so we can
then reparse the user settings based on the version.
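Roughly, the shape of the addition looks like this (a sketch, not the exact trait definition):

```rust
pub trait Settings: Sized {
    /// Keys that are always written back to the user's settings file,
    /// even when their current value equals the default. Defaults to none.
    const PRESERVED_KEYS: Option<&'static [&'static str]> = None;
    // ...other required items elided...
}

// For a versioned settings type, keep "version" in the file so the saved
// settings can be reparsed against the right schema later.
struct AssistantSettings;

impl Settings for AssistantSettings {
    const PRESERVED_KEYS: Option<&'static [&'static str]> = Some(&["version"]);
}
```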
Co-Authored-By: Thorsten <thorsten@zed.dev>
Release Notes:
- N/A
---------
Co-authored-by: Thorsten <thorsten@zed.dev>
# Summary
This commit implements GitHub Copilot Chat support within the existing
Assistant panel/framework. It required a little bit of trickery and
internal API modification, as Copilot doesn't use the same
authentication style as all of the existing providers, opting for OAuth
and a short-lived API key instead of a plain API key. All existing
Assistant features should work.
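Conceptually, the provider caches a short-lived API token minted from the long-lived OAuth credential and re-mints it on expiry. A simplified sketch with hypothetical types (the real token exchange is elided):

```rust
use std::time::Instant;

struct CopilotChatCredentials {
    oauth_token: String,         // long-lived, from the Copilot OAuth flow
    api_token: Option<ApiToken>, // short-lived, exchanged on demand
}

struct ApiToken {
    token: String,
    expires_at: Instant,
}

impl CopilotChatCredentials {
    // Return a valid short-lived token, exchanging the OAuth token for a
    // fresh one whenever the cached token is missing or expired.
    fn api_token(&mut self) -> &str {
        let expired = self
            .api_token
            .as_ref()
            .map_or(true, |t| t.expires_at <= Instant::now());
        if expired {
            self.api_token = Some(exchange_oauth_for_api_token(&self.oauth_token));
        }
        &self.api_token.as_ref().unwrap().token
    }
}

// Hypothetical: calls the token-exchange endpoint; details elided.
fn exchange_oauth_for_api_token(oauth_token: &str) -> ApiToken {
    unimplemented!("exchange {oauth_token} for a short-lived API token")
}
```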
Release Notes:
- Added GitHub Copilot Chat support
([#4673](https://github.com/zed-industries/zed/issues/4673)).
## Screenshots
<img width="1552" alt="A screenshot showing a conversation between a
user and Github Copilot Chat within the Zed editor."
src="https://github.com/user-attachments/assets/73eaf6a2-792b-4c40-a7fe-f763bd6417d7">
---------
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
In this pull request, we change the zed.dev protocol so that we pass the
raw JSON for the specified provider directly to our server. This avoids
the need to define a protobuf message that's a superset of all these
formats.
@bennetbo: We also changed the settings for available_models under
zed.dev to be a flat format, because the nesting seemed too confusing.
Can you help us upgrade the local provider configuration to be
consistent with this? We do whatever we need to do when parsing the
settings to make this simple for users, even if it's a bit more complex
on our end. We want to use versioning to avoid breaking existing users,
but need to keep making progress.
```json
"zed.dev": {
  "available_models": [
    {
      "provider": "anthropic",
      "name": "some-newly-released-model-we-havent-added",
      "max_tokens": 200000
    }
  ]
}
```
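For illustration, the flat format maps naturally onto an internally tagged enum during deserialization (a sketch with made-up variants, not the actual types):

```rust
use serde::Deserialize;

// `provider` discriminates the variants, so each entry in
// `available_models` stays a single flat object instead of nesting
// per-provider configuration. Variants here are illustrative.
#[derive(Deserialize)]
#[serde(tag = "provider", rename_all = "snake_case")]
enum AvailableModel {
    Anthropic { name: String, max_tokens: u64 },
    OpenAi { name: String, max_tokens: u64 },
}
```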
Release Notes:
- N/A
---------
Co-authored-by: Nathan <nathan@zed.dev>
<img width="624" alt="image"
src="https://github.com/user-attachments/assets/f492b0bd-14c3-49e2-b2ff-dc78e52b0815">
- [x] Correctly set custom model token count
- [x] How to count tokens for Gemini models?
- [x] Feature flag zed.dev provider
- [x] Figure out how to configure custom models
- [ ] Update docs
Release Notes:
- Added support for quickly switching between multiple language model
providers in the assistant panel
---------
Co-authored-by: Antonio <antonio@zed.dev>
We will soon need `semantic_index` to be able to use
`CompletionProvider`. This is currently impossible due to a cyclic crate
dependency, because `CompletionProvider` lives in the `assistant` crate,
which depends on `semantic_index`.
This PR breaks the dependency cycle by extracting two crates out of
`assistant`: `language_model` and `completion`.
Only one piece of logic changed: [this
code](922fcaf5a6 (diff-3857b3707687a4d585f1200eec4c34a7a079eae8d303b4ce5b4fce46234ace9fR61-R69)).
* As of https://github.com/zed-industries/zed/pull/13276, whenever we
ask a given completion provider for its available models, OpenAI
providers would go and ask the global assistant settings whether the
user had configured an `available_models` setting, and if so, return
that.
* This PR changes it so that instead of eagerly asking the assistant
settings for this info (the new crate must not depend on `assistant`, or
else the dependency cycle would be back), OpenAI completion providers
now store the user-configured settings as part of their struct, and
whenever the settings change, we update the provider.
In theory, this change should not change user-visible behavior...but
since it's the only change in this large PR that's more than just moving
code around, I'm mentioning it here in case there's an unexpected
regression in practice! (cc @amtoaer in case you'd like to try out this
branch and verify that the feature is still working the way you expect.)
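Roughly, the new arrangement looks like this (hypothetical names; the point is that the provider caches its settings and is updated when they change, rather than reading the global assistant settings on demand):

```rust
// Hypothetical sketch: the provider owns a copy of the user-configured
// models...
struct OpenAiCompletionProvider {
    available_models: Vec<String>,
}

impl OpenAiCompletionProvider {
    // ...and a settings observer pushes updates into it whenever settings
    // change, so this crate never reads `assistant`'s global settings.
    fn settings_changed(&mut self, available_models: Vec<String>) {
        self.available_models = available_models;
    }

    fn available_models(&self) -> &[String] {
        &self.available_models
    }
}
```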
Release Notes:
- N/A
---------
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
This PR refactors the completion providers to only process a maximum
number of completion requests at a time.
Also started refactoring language model providers to use traits, so it's
easier to support multiple providers in the future.
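A common way to cap in-flight requests is a semaphore around each request future. A minimal sketch using Tokio's `Semaphore` for illustration (Zed's executor machinery differs):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Illustrative limit; the provider creates the semaphore once, e.g.
// `Arc::new(Semaphore::new(MAX_CONCURRENT_COMPLETIONS))`, and clones the
// Arc into each request.
const MAX_CONCURRENT_COMPLETIONS: usize = 4;

async fn complete_rate_limited(
    semaphore: Arc<Semaphore>,
    prompt: String,
) -> anyhow::Result<String> {
    // Waits here whenever MAX_CONCURRENT_COMPLETIONS requests are in
    // flight; the permit is released when `_permit` drops at end of scope.
    let _permit = semaphore.acquire().await?;
    send_completion_request(prompt).await
}

// Hypothetical stand-in for the actual provider call.
async fn send_completion_request(prompt: String) -> anyhow::Result<String> {
    Ok(format!("completion for: {prompt}"))
}
```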
Release Notes:
- N/A
Closes #4424.
A few design decisions that may need some rethinking or later PRs:
* Other providers have a check for authentication. I use this
opportunity to fetch the models which doubles as a way of finding out if
the Ollama server is running.
* Ollama has _no_ API for getting the max tokens per model
* Ollama has _no_ API for getting the current token count
https://github.com/ollama/ollama/issues/1716
* Ollama does allow setting `num_ctx`, so I've defaulted this to 4096.
It can be overridden in settings (see the request sketch after this
list).
* Ollama models will be "slow" to start inference because they're
loading the model into memory. It's faster after that. There's no UI
affordance to show that the model is being loaded.
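For reference, the `num_ctx` default rides along in the options of Ollama's `/api/chat` request; a sketch of the request body (the endpoint and `options.num_ctx` field are Ollama's, the helper function is hypothetical):

```rust
use serde_json::json;

// Sketch of the body sent to Ollama's /api/chat endpoint.
fn chat_request(model: &str, prompt: &str, num_ctx: u32) -> serde_json::Value {
    json!({
        "model": model,
        "messages": [{ "role": "user", "content": prompt }],
        // Defaulted to 4096 because Ollama exposes no API for querying a
        // model's maximum context length.
        "options": { "num_ctx": num_ctx }
    })
}
```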
Release Notes:
- Added an Ollama Provider for the assistant. If you have
[Ollama](https://ollama.com/) running locally on your machine, you can
enable it in your settings under:
```jsonc
"assistant": {
  "version": "1",
  "provider": {
    "name": "ollama",
    // Recommended setting to allow for model startup
    "low_speed_timeout_in_seconds": 30,
  }
}
```
Chat like usual
<img width="1840" alt="image"
src="https://github.com/zed-industries/zed/assets/836375/4e0af266-4c4f-4d9e-9d74-1a91f76a12fe">
Interact with any model from the [Ollama
Library](https://ollama.com/library)
<img width="587" alt="image"
src="https://github.com/zed-industries/zed/assets/836375/87433ac6-bf87-4a99-89e1-96a93bf8de8a">
Open up the terminal to download new models via `ollama pull`:

This PR adds some ergonomic improvements when working with GPUI
`Global`s.
Two new traits have been added—`ReadGlobal` and `UpdateGlobal`—that
provide associated functions on any type that implements `Global` for
accessing and updating the global without needing to call the methods on
the `cx` directly (which generally involves qualifying the type).
I looked into adding `ObserveGlobal` as well, but this seems a bit
trickier to implement as the signatures of `cx.observe_global` vary
slightly between the different contexts.
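A rough sketch of the shape of these traits (simplified; the real signatures differ across GPUI's context types, which is exactly why `ObserveGlobal` was trickier):

```rust
use gpui::{AppContext, Global};

// Simplified sketch: helpers blanket-implemented for every Global type,
// so callers can write `MyGlobal::global(cx)` instead of qualifying the
// type on the context method.
trait ReadGlobal: Global + Sized {
    fn global(cx: &AppContext) -> &Self {
        cx.global::<Self>()
    }
}

trait UpdateGlobal: Global + Sized {
    fn update_global<R>(
        cx: &mut AppContext,
        update: impl FnOnce(&mut Self, &mut AppContext) -> R,
    ) -> R {
        cx.update_global::<Self, R>(update)
    }
}

impl<T: Global> ReadGlobal for T {}
impl<T: Global> UpdateGlobal for T {}
```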
Release Notes:
- N/A
Release Notes:
- Added support for interacting with Claude in the assistant panel. You
can enable it by adding the following to your `settings.json`:
```json
"assistant": {
  "version": "1",
  "provider": {
    "name": "anthropic"
  }
}
```
This PR adds a setting to allow configuring the low-speed timeout for
the Assistant when using the OpenAI provider.
The `low_speed_timeout_in_seconds` accepts a number of seconds that the
HTTP client can go below a minimum speed limit (currently set to 100
bytes/second) before it times out.
```json
{
  "assistant": {
    "version": "1",
    "provider": { "name": "openai", "low_speed_timeout_in_seconds": 60 }
  }
}
```
This should help the case where the `openai` provider is being used with
a local model that requires higher timeouts.
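Under the hood this maps onto the HTTP client's low-speed handling. A sketch using `isahc` for illustration (the 100 bytes/second threshold matches the description above; the wiring is illustrative, not Zed's exact code):

```rust
use std::time::Duration;

use isahc::config::Configurable;
use isahc::HttpClient;

// Abort a request if throughput stays below 100 bytes/second for longer
// than the user-configured number of seconds.
fn build_client(low_speed_timeout_in_seconds: u64) -> Result<HttpClient, isahc::Error> {
    HttpClient::builder()
        .low_speed_timeout(100, Duration::from_secs(low_speed_timeout_in_seconds))
        .build()
}
```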
Issue: https://github.com/zed-industries/zed/issues/9913
Release Notes:
- Added a `low_speed_timeout_in_seconds` setting to the Assistant's
OpenAI provider
([#9913](https://github.com/zed-industries/zed/issues/9913)).
This PR adds the ability for extensions to provide certain language
settings via the language `config.toml`.
These settings are then merged in with the rest of the settings when the
language is loaded from the extension.
The language settings that are available are:
- `tab_size`
- `hard_tabs`
- `soft_wrap`
Additionally, for bundled languages we moved these settings out of the
`settings/default.json` and into their respective `config.toml`s.
For languages currently provided by extensions, we are leaving the
values in the `settings/default.json` temporarily until all released
versions of Zed are able to load these settings from the extension.
---
Along the way we ended up refactoring the `Settings::load` method
slightly, introducing a new `SettingsSources` struct to better convey
where the settings are being loaded from.
This makes it easier to load settings from specific locations/sets of
locations in an explicit way.
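A sketch of the idea (simplified relative to the real struct, which covers more sources):

```rust
// Simplified sketch: each load sees every layer its value may come from,
// in ascending precedence, instead of an opaque pre-merged list.
struct SettingsSources<'a, T> {
    default: &'a T,
    extensions: Option<&'a T>,
    user: Option<&'a T>,
}

impl<'a, T> SettingsSources<'a, T> {
    // Yields the default first, then any extension- and user-provided
    // values, so `Settings::load` can merge them explicitly.
    fn layers(&self) -> impl Iterator<Item = &'a T> {
        std::iter::once(self.default)
            .chain(self.extensions)
            .chain(self.user)
    }
}
```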
Release Notes:
- N/A
---------
Co-authored-by: Max <max@zed.dev>
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
This pull request introduces a new `InlineCompletionProvider` trait,
which enables making `Editor` copilot-agnostic and lets us push all the
copilot functionality into the `copilot_ui` module. Long-term, I would
like to merge `copilot` and `copilot_ui`, but right now `project`
depends on `copilot`, which makes this impossible.
The reason for adding this new trait is so that we can experiment with
other inline completion providers and swap them at runtime using config
settings.
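A rough sketch of what such a trait can look like (hypothetical, simplified signatures; the real trait works with buffers and anchors):

```rust
// Hypothetical sketch of an editor-facing provider abstraction: the editor
// talks only to this trait, so providers can be swapped via settings.
trait InlineCompletionProvider: 'static {
    /// Whether completions should be requested at this position at all.
    fn is_enabled(&self, buffer_id: u64, offset: usize) -> bool;
    /// Kick off (or re-query) a completion for the current cursor position.
    fn refresh(&mut self, buffer_id: u64, offset: usize);
    /// Text of the completion currently being suggested, if any.
    fn active_completion_text(&self) -> Option<&str>;
    fn accept(&mut self);
    fn discard(&mut self);
}
```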
Please note that we also renamed some of the existing copilot actions
to be more agnostic (see release notes below). We still kept the old
actions bound for backwards compatibility, but we should probably remove
them in some later version.
Also, as a drive-by, we added new methods to the `Global` trait that let
you read or mutate a global directly, e.g.:
```rs
MyGlobal::update(cx, |global, cx| {
});
```
Release Notes:
- Renamed the `copilot::Suggest` action to
`editor::ShowInlineCompletion`
- Renamed the `copilot::NextSuggestion` action to
`editor::NextInlineCompletion`
- Renamed the `copilot::PreviousSuggestion` action to
`editor::PreviousInlineCompletion`
- Renamed the `editor::AcceptPartialCopilotSuggestion` action to
`editor::AcceptPartialInlineCompletion`
---------
Co-authored-by: Nathan <nathan@zed.dev>
Co-authored-by: Kyle <kylek@zed.dev>
Co-authored-by: Kyle Kelley <rgbkrk@gmail.com>
This PR adds a new `assistant.enabled` setting that controls whether the
Zed Assistant is enabled.
Some users have requested the ability to disable the AI-related features
in Zed if they don't use them. Changing `assistant.enabled` to `false`
will hide the Assistant icon in the status bar (taking priority over the
`assistant.button` setting) as well as filter out the `assistant:`
actions.
The Assistant is enabled by default.
Release Notes:
- Added an `assistant.enabled` setting to control whether the Assistant
is enabled.
Co-authored-by: Antonio <antonio@zed.dev>
Resurrected this from some assistant work I did in Spring of 2023.
- [x] Resurrect streaming responses
- [x] Use streaming responses to enable AI via Zed's servers by default
(but preserve API key option for now)
- [x] Simplify protobuf
- [x] Proxy to OpenAI on zed.dev
- [x] Proxy to Gemini on zed.dev
- [x] Improve UX for switching between OpenAI and Google models
- We currently disallow cycling when setting a custom model, but we need
a better solution to keep OpenAI models available while testing the
Google ones
- [x] Show remaining tokens correctly for Google models
- [x] Remove semantic index
- [x] Delete `ai` crate
- [x] CloudFront so we can ban abuse
- [x] Rate-limiting
- [x] Fix panic when using inline assistant
- [x] Double check the upgraded `AssistantSettings` are
backwards-compatible
- [x] Add hosted LLM interaction behind a `language-models` feature
flag.
Release Notes:
- We are temporarily removing the semantic index in order to redesign it
from scratch.
---------
Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Thorsten <thorsten@zed.dev>
Co-authored-by: Max <max@zed.dev>
This PR wires up support for [Azure
OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview)
as an alternative AI provider in the assistant panel.
This can be configured using the following in the settings file:
```json
{
  "assistant": {
    "provider": {
      "type": "azure_openai",
      "api_url": "https://{your-resource-name}.openai.azure.com",
      "deployment_id": "gpt-4",
      "api_version": "2023-05-15"
    }
  }
}
```
You will need to deploy a model within Azure and update the settings
accordingly.
Release Notes:
- N/A
Partially fixes #4321, since the Azure OpenAI API can be converted to
the OpenAI API.
Release Notes:
- Added `assistant.openai_api_url` setting to allow OpenAI API URL to be
configured.
---------
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>