This PR adds a new `language_models` crate to house the various language
model providers.
By extracting the provider definitions out of `language_model`, we're
able to remove `language_model`'s dependency on `editor`, which improves
incremental compilation when changing `editor`.
Release Notes:
- N/A
This removes the `low_speed_timeout` setting from all providers as a
response to issue #19509.
The reason is that the original `low_speed_timeout` was only added as
part of #9913 because users wanted to _get rid of_ timeouts. They wanted
to bump the default timeout from 5 seconds to a lot more.
Then, in the meantime, #19055 changed the meaning of `low_speed_timeout`
into a normal `timeout`, which is a different thing and breaks slower
LLMs that don't reply with a complete response within the configured
timeout.
So we figured: let's remove the whole thing and replace it with a
default _connect_ timeout that makes sure we can connect to a server
within 10 seconds, but then gives the server as long as it wants to
complete its response.
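For illustration only (this is not Zed's actual HTTP stack; `reqwest` stands in for whatever client the providers use), the difference between a connect timeout and a whole-request timeout looks like this:

```rust
use std::time::Duration;

// Illustrative sketch: `connect_timeout` bounds how long we wait to
// reach the server; once connected, the response may take as long as
// it needs because no whole-request `.timeout(...)` is set.
fn build_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .connect_timeout(Duration::from_secs(10))
        // Deliberately no `.timeout(...)`, so slow LLMs are free to
        // produce their response at their own pace.
        .build()
}
```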
Closes #19509
Release Notes:
- Removed the `low_speed_timeout` setting from LLM provider settings.
It was only ever used to _increase_ the timeout and give LLMs more
time, and since we have no other use for it, we removed the setting
entirely and now give LLMs as long as they need.
---------
Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Peter Tripp <peter@zed.dev>
This PR touches up the copywriting in the assistant configuration UI.
Friends reported to me that some of the writing could be clearer, and
hopefully this is a step in that direction!
Release Notes:
- N/A
Release Notes:
- Added support for OpenAI o1-mini and o1-preview models.
---------
Co-authored-by: Jason Mancuso <7891333+jvmncs@users.noreply.github.com>
Co-authored-by: Bennet <bennet@zed.dev>
This is a barebones modification of the OpenAI provider code to
accommodate non-streaming completions. This is specifically for the o1
models, which do not support streaming. Tested that this is working by
running a `/workflow` with the following (arbitrarily chosen) settings:
```json
{
  "language_models": {
    "openai": {
      "version": "1",
      "available_models": [
        {
          "name": "o1-preview",
          "display_name": "o1-preview",
          "max_tokens": 128000,
          "max_completion_tokens": 30000
        },
        {
          "name": "o1-mini",
          "display_name": "o1-mini",
          "max_tokens": 128000,
          "max_completion_tokens": 20000
        }
      ]
    }
  }
}
```
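A minimal sketch of the adaptation, with hypothetical request/response types standing in for the real provider code: models that reject streaming get a single one-shot request whose result is wrapped into a one-item stream, so downstream consumers keep a uniform interface.

```rust
use futures::{stream, Stream};

// Hypothetical stand-ins for the provider's request/response types.
struct CompletionRequest { prompt: String }
struct CompletionChunk { text: String }

// One-shot call for models like o1 that reject `"stream": true`.
async fn request_completion(req: CompletionRequest) -> CompletionChunk {
    CompletionChunk { text: format!("(response to: {})", req.prompt) }
}

// Adapt the single response into the same `Stream` shape the streaming
// code path produces, so callers don't need a special case.
fn completion_stream(req: CompletionRequest) -> impl Stream<Item = CompletionChunk> {
    stream::once(request_completion(req))
}
```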
Release Notes:
- Changed the `low_speed_timeout_in_seconds` option to `600` for the
OpenAI provider to accommodate the recent o1 model release.
---------
Co-authored-by: Peter <peter@zed.dev>
Co-authored-by: Bennet <bennet@zed.dev>
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Add `/auto` behind a feature flag that's disabled for now, even for
staff.
We've decided on a different design for context inference, but parts of
`/auto` will be useful for that, so we want them in the code base even
if they're unused for now.
Release Notes:
- N/A
---------
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
This PR adjusts the approach we use to encoding tool uses in the
completion response to use a structured format rather than simply
injecting it into the response stream as text.
In #17170 we encoded the tool uses as XML and inserted them as text,
which then required re-parsing the tool uses out of the buffer in
order to use them.
The approach taken in this PR is to make `stream_completion` return a
stream of `LanguageModelCompletionEvent`s. Each of these events can be
either text, or a tool use.
A new `stream_completion_text` method has been added to `LanguageModel`
for scenarios where we only care about textual content (currently,
everywhere that isn't the Assistant context editor).
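Sketched roughly, the event type looks something like the following (field names are illustrative, not Zed's exact definitions):

```rust
use serde_json::Value;

// Each completion event is either a chunk of text or a structured tool
// use, so no XML round-trip through the response buffer is needed.
pub enum LanguageModelCompletionEvent {
    Text(String),
    ToolUse {
        id: String,   // provider-assigned id for matching tool results
        name: String, // which tool the model wants to invoke
        input: Value, // tool arguments as structured JSON
    },
}
```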
Release Notes:
- N/A
### Pull Request Title
Introduce `max_output_tokens` Field for OpenAI Models
https://platform.deepseek.com/api-docs/news/news0725/#4-8k-max_tokens-betarelease-longer-possibilities
### Description
This commit introduces a new field `max_output_tokens` to the OpenAI
models, which allows specifying the maximum number of tokens that can be
generated in the output. This field is now integrated into the request
handling across multiple crates, ensuring that the output token limit is
respected during language model completions.
This enhancement provides more control over the output length of OpenAI
model responses, improving the flexibility and accuracy of language
model interactions.
### Changes
- Added `max_output_tokens` to the `Custom` variant of the
`open_ai::Model` enum.
- Updated the `into_open_ai` method in `LanguageModelRequest` to accept
and use `max_output_tokens`.
- Modified the `OpenAiLanguageModel` and `CloudLanguageModel`
implementations to pass `max_output_tokens` when converting requests.
- Ensured that the `max_output_tokens` field is correctly serialized and
deserialized in relevant structures.
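For illustration, the `Custom` variant change might look roughly like this (the real `open_ai::Model` enum has more variants and fields than shown):

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
pub enum Model {
    // ...existing named model variants elided...
    Custom {
        name: String,
        max_tokens: usize,
        // Optional so existing settings files keep deserializing;
        // skipped in serialized requests when unset.
        #[serde(skip_serializing_if = "Option::is_none")]
        max_output_tokens: Option<u32>,
    },
}
```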
### Related Issue
https://github.com/zed-industries/zed/pull/16358
### Screenshots / Media
N/A
### Checklist
- [x] Code compiles correctly.
- [x] All tests pass.
- [ ] Documentation has been updated accordingly.
- [ ] Additional tests have been added to cover new functionality.
- [ ] Relevant documentation has been updated or added.
### Release Notes
- Added `max_output_tokens` field to OpenAI models for controlling
output token length.
This makes it easier to debug why resetting a key doesn't work. We now
show when the key is set via an environment variable and, if so, disable
the reset-key button and give instructions instead.
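A sketch of the detection, assuming an `OPENAI_API_KEY`-style variable (the variable name differs per provider):

```rust
use std::env;

// If the key came from the environment, the settings UI can't reset it,
// so we disable the button and explain where the key comes from instead.
fn api_key_from_env(var_name: &str) -> Option<String> {
    env::var(var_name).ok().filter(|key| !key.is_empty())
}

fn main() {
    let from_env = api_key_from_env("OPENAI_API_KEY").is_some();
    println!("disable reset button: {from_env}");
}
```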

Release Notes:
- N/A
Co-authored-by: Bennet <bennet@zed.dev>
This PR is a refactor to pave the way for allowing the user to view and
edit workflow step resolutions. I've made tool calls work more like
normal streaming completions for all providers. The `use_any_tool`
method returns a stream of strings (which contain chunks of JSON). I've
also done some minor cleanup of language model providers in general,
removing the duplication around handling streaming responses.
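A sketch of consuming such a stream (types simplified): because chunk boundaries are arbitrary, the pieces are concatenated before the JSON is parsed.

```rust
use futures::{Stream, StreamExt};
use serde_json::Value;

// Collect the chunked-JSON stream produced by a tool call and parse it
// once complete; partial chunks are rarely valid JSON on their own.
async fn collect_tool_input(
    chunks: impl Stream<Item = String> + Unpin,
) -> serde_json::Result<Value> {
    let joined: String = chunks.collect::<Vec<String>>().await.concat();
    serde_json::from_str(&joined)
}
```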
Release Notes:
- N/A
For future reference: WIP branch of copy/pasting a mixture of images and
text: https://github.com/zed-industries/zed/tree/copy-paste-images -
we'll come back to that one after landing this one.
Release Notes:
- You can now paste images into the Assistant Panel to include them as
context. Currently works only on Mac, and with Anthropic models. Future
support is planned for more models, operating systems, and image
clipboard operations.
---------
Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Mikayla <mikayla@zed.dev>
Co-authored-by: Jason <jason@zed.dev>
Co-authored-by: Kyle <kylek@zed.dev>
This PR polishes elements around setting up LLM providers on the
Assistant panel, including:
- [x] Adding banners to promote Zed AI and to handle the "No
provider set up" scenario
- [x] Tweaking the error popover whenever there's no API key added
- [ ] Making configuration panel scrollable
---
Release Notes:
- N/A
---------
Co-authored-by: Thorsten Ball <mrnugget@gmail.com>
Co-authored-by: Bennet Bo Fenner <53836821+bennetbo@users.noreply.github.com>
Co-authored-by: Marshall Bowers <1486634+maxdeviant@users.noreply.github.com>
- [x] OpenAI
- [ ] ~Google~ Moved into a separate branch at:
https://github.com/zed-industries/zed/tree/tool-calls-in-google-ai
I've run into issues getting the API to digest our schema without
tripping over itself: the function call parameters come back malformed
and so on. We can resume from that branch if needed.
- [x] Ollama
- [x] Cloud
- [ ] ~Copilot Chat (?)~
Release Notes:
- Added tool calling capabilities to OpenAI and Ollama models.
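For context, declaring a tool in the OpenAI-style wire format looks roughly like this (the tool name and schema below are made up for illustration):

```rust
fn example_tool_definition() -> serde_json::Value {
    // A function tool the model may call; the provider returns the
    // call's arguments as JSON matching this schema.
    serde_json::json!({
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Return the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": { "type": "string" }
                },
                "required": ["city"]
            }
        }
    })
}
```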
TODOs for follow-up:
- [ ] When opening panel: nudge user to sign in if they're not signed-in
and have no provider configured (or if they're not signed-in and have
Zed AI configured)
- [ ] Configuration page is not scrollable
- [ ] Design tweaks
Current status:
https://github.com/user-attachments/assets/d26d65ea-43e8-481b-81a3-b3cba01704a8
Release Notes:
- N/A
This is an intermediate fix for the focus problems in the assistant
panel. Intermediate because I'm going to shred the whole
`ConfigurationView` next and replace the tabs inside with a list.
Release Notes:
- N/A
- [x] bug: setting a key doesn't update anything
- [x] show high-level text on configuration page to explain what it is
- [x] show "everything okay!" status when credentials are set
- [x] maybe: add "verify" button to check credentials
- [x] open configuration page when opening panel for first time and
nothing is configured
- [x] BUG: need to fix empty assistant panel if provider is `zed.dev`
but not logged in
Co-Authored-By: Thorsten <thorsten@zed.dev>
Release Notes:
- N/A
---------
Co-authored-by: Thorsten <thorsten@zed.dev>
Co-authored-by: Nate Butler <iamnbutler@gmail.com>
Co-authored-by: Thorsten Ball <mrnugget@gmail.com>
This is the revised version of #15527.
We also added new events to notify subscribers when new providers are
added or removed.
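Hedged sketch of what those events might look like (names are illustrative, not the exact Zed definitions):

```rust
// Subscribers (e.g. the assistant panel) refresh their provider list
// when one of these fires.
pub struct ProviderId(pub String);

pub enum RegistryEvent {
    ProviderAdded(ProviderId),
    ProviderRemoved(ProviderId),
}
```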
Co-Authored-by: Thorsten <thorsten@zed.dev>
Release Notes:
- N/A
---------
Co-authored-by: Thorsten <thorsten@zed.dev>
Co-authored-by: Thorsten Ball <mrnugget@gmail.com>
We also eliminated the `completion` crate and moved its logic into
`LanguageModelRegistry`.
Release Notes:
- N/A
---------
Co-authored-by: Nathan <nathan@zed.dev>
In this pull request, we change the zed.dev protocol so that we pass the
raw JSON for the specified provider directly to our server. This avoids
the need to define a protobuf message that's a superset of all these
formats.
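A sketch of the pass-through shape (field names are assumptions, not the exact protocol types):

```rust
use serde::{Deserialize, Serialize};
use serde_json::value::RawValue; // requires serde_json's `raw_value` feature

#[derive(Serialize, Deserialize)]
struct PerformCompletionParams {
    provider: String,
    // The provider-specific request body is forwarded verbatim, so the
    // server needs no protobuf superset of every provider's format.
    provider_request: Box<RawValue>,
}
```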
@bennetbo: We also changed the settings for available_models under
zed.dev to be a flat format, because the nesting seemed too confusing.
Can you help us upgrade the local provider configuration to be
consistent with this? We do whatever we need to do when parsing the
settings to make this simple for users, even if it's a bit more complex
on our end. We want to use versioning to avoid breaking existing users,
but need to keep making progress.
```json
"zed.dev": {
"available_models": [
{
"provider": "anthropic",
"name": "some-newly-released-model-we-havent-added",
"max_tokens": 200000
}
]
}
```
Release Notes:
- N/A
---------
Co-authored-by: Nathan <nathan@zed.dev>
Supersedes https://github.com/zed-industries/zed/pull/12090. Fixes #5180, fixes #5055.
See original PR for an example of the feature at work.
This PR changes the settings interface to be backwards compatible, and
adds the `ui_font_fallbacks`, `buffer_font_fallbacks`, and
`terminal.font_fallbacks` settings.
Release Notes:
- Added support for font fallbacks via three new settings:
`ui_font_fallbacks`, `buffer_font_fallbacks`, and
`terminal.font_fallbacks` (#5180, #5055).
---------
Co-authored-by: Junkui Zhang <364772080@qq.com>
<img width="624" alt="image"
src="https://github.com/user-attachments/assets/f492b0bd-14c3-49e2-b2ff-dc78e52b0815">
- [x] Correctly set custom model token count
- [x] How to count tokens for Gemini models?
- [x] Feature flag zed.dev provider
- [x] Figure out how to configure custom models
- [ ] Update docs
Release Notes:
- Added support for quickly switching between multiple language model
providers in the assistant panel
---------
Co-authored-by: Antonio <antonio@zed.dev>