This PR changes how we encode tool uses in the completion response,
using a structured format rather than simply injecting them into the
response stream as text.
In #17170 we would encode the tool uses as XML and insert them as text.
This then required re-parsing the tool uses back out of the buffer in
order to use them.
The approach taken in this PR is to make `stream_completion` return a
stream of `LanguageModelCompletionEvent`s. Each of these events can be
either text or a tool use.
A new `stream_completion_text` method has been added to `LanguageModel`
for scenarios where we only care about textual content (currently,
everywhere that isn't the Assistant context editor).
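
As a rough sketch (the names mirror the description above, but this is not the exact Zed code), the event type and the text-only helper might look like this:

```rust
use futures::{Stream, StreamExt};
use serde_json::Value;

// Each event in the completion stream is either plain text or a tool use.
pub enum LanguageModelCompletionEvent {
    Text(String),
    ToolUse(LanguageModelToolUse),
}

pub struct LanguageModelToolUse {
    pub id: String,
    pub name: String,
    pub input: Value,
}

// `stream_completion_text` can then be a thin wrapper that keeps only the
// textual events and drops the tool uses.
fn stream_completion_text(
    events: impl Stream<Item = LanguageModelCompletionEvent>,
) -> impl Stream<Item = String> {
    events.filter_map(|event| async move {
        match event {
            LanguageModelCompletionEvent::Text(text) => Some(text),
            LanguageModelCompletionEvent::ToolUse(_) => None,
        }
    })
}
```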
Release Notes:
- N/A
This PR updates the Assistant with support for receiving tool uses from
Anthropic models and capturing them as text in the context editor.
This just lays the foundation for tool use. We don't yet fulfill the
tool uses or define any tools for the model to use.
Here's an example of what it looks like using the example `get_weather`
tool from the Anthropic docs:
<img width="644" alt="Screenshot 2024-08-30 at 1 51 13 PM"
src="https://github.com/user-attachments/assets/3614f953-0689-423c-8955-b146729ea638">
Release Notes:
- N/A
Release Notes:
- Adds support for Prompt Caching in Anthropic. For models that support
it, this can dramatically lower cost while improving performance.
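
For context, the Anthropic API opts into caching per content block via a `cache_control` marker. A minimal sketch of such a request body (the exact request Zed constructs may differ):

```rust
use serde_json::json;

fn main() {
    // Marking a large, rarely-changing prefix (here, the system prompt) as
    // cacheable lets Anthropic reuse it across subsequent requests.
    let request = json!({
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 1024,
        "system": [{
            "type": "text",
            "text": "<large, rarely-changing system prompt>",
            "cache_control": { "type": "ephemeral" }
        }],
        "messages": [
            { "role": "user", "content": "What does this PR change?" }
        ]
    });
    println!("{}", serde_json::to_string_pretty(&request).unwrap());
}
```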
For future reference: WIP branch of copy/pasting a mixture of images and
text: https://github.com/zed-industries/zed/tree/copy-paste-images -
we'll come back to that one after landing this one.
Release Notes:
- You can now paste images into the Assistant Panel to include them as
context. Currently this works only on macOS and with Anthropic models.
Future support is planned for more models, operating systems, and image
clipboard operations.
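
For the curious, Anthropic models receive pasted images as base64-encoded content blocks alongside the text. A minimal sketch of the message shape (not the exact code path in this PR):

```rust
use serde_json::json;

fn main() {
    // An Anthropic message mixing an image content block with text.
    // The base64 payload is elided here for brevity.
    let message = json!({
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": "<base64-encoded image data>"
                }
            },
            { "type": "text", "text": "What's in this screenshot?" }
        ]
    });
    println!("{}", serde_json::to_string_pretty(&message).unwrap());
}
```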
---------
Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Mikayla <mikayla@zed.dev>
Co-authored-by: Jason <jason@zed.dev>
Co-authored-by: Kyle <kylek@zed.dev>
I need this to refine our prompts on the fly as I work.
Release Notes:
- Templates for the prompts driving inline transformations in editors
and the terminal can now be overridden in the
`~/.config/zed/prompts/templates` directory. This is an advanced
feature: overriding a template prevents you from receiving upstream
changes to it. It's intended for use by Zed developers.
We also eliminated the `completion` crate and moved its logic into
`LanguageModelRegistry`.
Release Notes:
- N/A
---------
Co-authored-by: Nathan <nathan@zed.dev>
Supersedes https://github.com/zed-industries/zed/pull/12090
Fixes #5180, fixes #5055
See original PR for an example of the feature at work.
This PR changes the settings interface to be backwards compatible, and
adds the `ui_font_fallbacks`, `buffer_font_fallbacks`, and
`terminal.font_fallbacks` settings.
Release Notes:
- Added support for font fallbacks via three new settings:
`ui_font_fallbacks`, `buffer_font_fallbacks`, and
`terminal.font_fallbacks` (#5180, #5055).
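
For example, in `settings.json` (the font names here are purely illustrative):

```json
{
  "ui_font_fallbacks": ["PingFang SC"],
  "buffer_font_fallbacks": ["Noto Sans CJK SC"],
  "terminal": {
    "font_fallbacks": ["Symbols Nerd Font Mono"]
  }
}
```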
---------
Co-authored-by: Junkui Zhang <364772080@qq.com>
This PR increases the size of the icon buttons within the inline
editors, both in the buffer and in the terminal. I also added
properties to make sure they always render as a square, tweaked the
stop icon SVG, added an alternative sparkle icon that fits the same
14x14 grid as the close icon, and added a bit more right padding in the
buffer case so the buttons don't collide with the scrollbar.
The end result is that the buttons have a slightly larger hit target
and normalized sizes.
---
Release Notes:
- N/A
This PR mostly refines the model selector popover design by formatting
the model names and adjusting spacing/alignment in the list-related
items. The list component changes could've been made in a separate PR,
but it was practical to do them here as I was already in-context.
Either way, I'm happy to separate them if that's better!
One thing I couldn't figure out, though, is why the order changed
(e.g., Anthropic now appears last). I wonder if that was somehow caused
by the separator logic? I'd love guidance here, as I'm new to Rust!
| Before | After |
|--------|-------|
| <img width="228" alt="Screenshot 2024-07-23 at 21 02 33" src="https://github.com/user-attachments/assets/3372c6c9-08dc-4d71-9265-26f015e2dbc2"> | <img width="228" alt="Screenshot 2024-07-23 at 21 01 45" src="https://github.com/user-attachments/assets/624cc7db-a3d9-48e3-99d7-c29829501130"> |
---
Release Notes:
- N/A
---------
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
<img width="624" alt="image"
src="https://github.com/user-attachments/assets/f492b0bd-14c3-49e2-b2ff-dc78e52b0815">
- [x] Correctly set custom model token count
- [x] How to count tokens for Gemini models?
- [x] Feature flag zed.dev provider
- [x] Figure out how to configure custom models
- [ ] Update docs
Release Notes:
- Added support for quickly switching between multiple language model
providers in the assistant panel
---------
Co-authored-by: Antonio <antonio@zed.dev>
This PR updates a number of spots where we were setting all of the
`TextStyle` fields even if we were not changing the values from the
defaults.
We now use `..Default::default()`.
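
A minimal before/after sketch of the pattern, using an illustrative stand-in for gpui's `TextStyle` (the real struct has many more fields):

```rust
#[derive(Debug, Default)]
struct TextStyle {
    font_family: String,
    font_size: f32,
    line_height: f32,
    underline: bool,
}

fn main() {
    // Before: every field spelled out, even the ones matching the defaults.
    let verbose = TextStyle {
        font_family: "Zed Plex Mono".into(),
        font_size: 14.0,
        line_height: 0.0,
        underline: false,
    };

    // After: only the overrides; `..Default::default()` fills in the rest.
    let concise = TextStyle {
        font_family: "Zed Plex Mono".into(),
        font_size: 14.0,
        ..Default::default()
    };

    println!("{verbose:?}\n{concise:?}");
}
```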
Release Notes:
- N/A
We will soon need `semantic_index` to be able to use
`CompletionProvider`. This is currently impossible due to a cyclic crate
dependency, because `CompletionProvider` lives in the `assistant` crate,
which depends on `semantic_index`.
This PR breaks the dependency cycle by extracting two crates out of
`assistant`: `language_model` and `completion`.
Only one piece of logic changed: [this
code](922fcaf5a6 (diff-3857b3707687a4d585f1200eec4c34a7a079eae8d303b4ce5b4fce46234ace9fR61-R69)).
* As of https://github.com/zed-industries/zed/pull/13276, whenever we
asked a given completion provider for its available models, OpenAI
providers would go and ask the global assistant settings whether the
user had configured an `available_models` setting, and if so, return
that.
* This PR changes it so that instead of eagerly asking the assistant
settings for this info (the new crate must not depend on `assistant`, or
else the dependency cycle would be back), OpenAI completion providers
now store the user-configured settings as part of their struct, and
whenever the settings change, we update the provider, as sketched below.
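
A minimal sketch of the new shape (names illustrative, not the exact Zed code): the provider caches the user-configured models and is refreshed when settings change, rather than reading assistant settings on demand:

```rust
// The provider owns a cached copy of the user-configured models.
struct OpenAiCompletionProvider {
    available_models: Vec<String>,
}

impl OpenAiCompletionProvider {
    // Called from a settings observer whenever the user's settings change.
    fn settings_changed(&mut self, available_models: Vec<String>) {
        self.available_models = available_models;
    }

    // No call back into the `assistant` crate here; we just return the cache.
    fn available_models(&self) -> &[String] {
        &self.available_models
    }
}

fn main() {
    let mut provider = OpenAiCompletionProvider {
        available_models: Vec::new(),
    };
    provider.settings_changed(vec!["gpt-4o".to_string()]);
    println!("{:?}", provider.available_models());
}
```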
In theory, this change should not change user-visible behavior...but
since it's the only change in this large PR that's more than just moving
code around, I'm mentioning it here in case there's an unexpected
regression in practice! (cc @amtoaer in case you'd like to try out this
branch and verify that the feature is still working the way you expect.)
Release Notes:
- N/A
---------
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Note that this shouldn't result in any user-visible behavior changes
yet. The feature is incomplete, but we want to merge it early to avoid
a long-running branch.
Release Notes:
- N/A
---------
Co-authored-by: Nathan <nathan@zed.dev>
This PR refactors the completion providers to process only a bounded
number of completion requests at a time.
It also starts refactoring the language model providers to use traits,
making it easier to support multiple providers in the future.
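
A minimal sketch of the request-bounding pattern, using `tokio`'s semaphore purely for illustration (Zed has its own executor, and this `CompletionProvider` is a hypothetical stand-in):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Illustrative cap on how many completion requests may run concurrently.
const MAX_CONCURRENT_COMPLETIONS: usize = 4;

struct CompletionProvider {
    semaphore: Arc<Semaphore>,
}

impl CompletionProvider {
    fn new() -> Self {
        Self {
            semaphore: Arc::new(Semaphore::new(MAX_CONCURRENT_COMPLETIONS)),
        }
    }

    async fn complete(&self, prompt: &str) -> String {
        // Each in-flight request holds a permit; the (n+1)th caller waits
        // here until an earlier request finishes and releases its permit.
        let _permit = self.semaphore.acquire().await.expect("semaphore closed");
        run_completion(prompt).await
    }
}

// Stand-in for the actual model call.
async fn run_completion(prompt: &str) -> String {
    format!("completion for: {prompt}")
}

#[tokio::main]
async fn main() {
    let provider = CompletionProvider::new();
    println!("{}", provider.complete("hello").await);
}
```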
Release Notes:
- N/A