Commit graph

6 commits

Marshall Bowers
0acd556106
language_model: Remove dependencies on individual model provider crates (#25503)
This PR removes the dependencies on the individual model provider crates
from the `language_model` crate.

The various conversion methods for converting a `LanguageModelRequest`
into its provider-specific request type have been inlined into the
various provider modules in the `language_models` crate.
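
For illustration, here is a minimal sketch of the kind of conversion that now lives in a provider module; the type and function names are hypothetical, not Zed's actual API:

```rust
// Hedged sketch: hypothetical request types, not Zed's actual definitions.
enum Role {
    User,
    Assistant,
    System,
}

struct LanguageModelRequest {
    messages: Vec<(Role, String)>,
    temperature: Option<f32>,
}

// A provider-specific request shape, e.g. for an OpenAI-compatible API.
struct OpenAiMessage {
    role: &'static str,
    content: String,
}

struct OpenAiRequest {
    model: String,
    messages: Vec<OpenAiMessage>,
    temperature: f32,
}

// The conversion formerly exposed by `language_model`, now inlined in the
// provider module: map the generic request onto the provider's wire format.
fn into_open_ai(request: LanguageModelRequest, model: &str) -> OpenAiRequest {
    OpenAiRequest {
        model: model.to_string(),
        messages: request
            .messages
            .into_iter()
            .map(|(role, content)| OpenAiMessage {
                role: match role {
                    Role::User => "user",
                    Role::Assistant => "assistant",
                    Role::System => "system",
                },
                content,
            })
            .collect(),
        temperature: request.temperature.unwrap_or(1.0),
    }
}
```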

The model providers we provide via Zed's cloud offering get to stay, for
now.

Release Notes:

- N/A
2025-02-24 16:41:35 -05:00
邻二氮杂菲
29bfb56739
Add DeepSeek support (#23551)
- Added support for DeepSeek as a new language model provider in the Zed
Assistant.
- Implemented streaming API support for real-time responses from
DeepSeek models.
- Added a configuration UI for DeepSeek API key management and settings.
- Updated documentation with detailed setup instructions for DeepSeek
integration.
- Added DeepSeek-specific icons and model definitions for seamless
integration into the Zed UI (model definitions are sketched after this list).
- Integrated DeepSeek into the language model registry, making it
available alongside other providers like OpenAI and Anthropic.
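
As an illustration of the model-definition side, here is a hedged sketch; the model ids and token limit are assumptions for the example, not values taken from this PR:

```rust
// Illustrative sketch only: ids and limits are assumptions, not from the PR.
#[derive(Clone, Copy, Debug)]
enum DeepSeekModel {
    Chat,
    Reasoner,
}

impl DeepSeekModel {
    fn id(self) -> &'static str {
        match self {
            DeepSeekModel::Chat => "deepseek-chat",
            DeepSeekModel::Reasoner => "deepseek-reasoner",
        }
    }

    fn display_name(self) -> &'static str {
        match self {
            DeepSeekModel::Chat => "DeepSeek Chat",
            DeepSeekModel::Reasoner => "DeepSeek Reasoner",
        }
    }

    // Context window size; the value here is illustrative.
    fn max_token_count(self) -> usize {
        64_000
    }
}
```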

Release Notes:

- Added support for DeepSeek to the Assistant.

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-01-27 13:40:59 -05:00
Yagil Burowski
c038696aa8
Add LM Studio support to the Assistant (#23097)
#### Release Notes:

- Added support for [LM Studio](https://lmstudio.ai/) to the Assistant.

#### Quick demo:


https://github.com/user-attachments/assets/af58fc13-1abc-4898-9747-3511016da86a

#### Future enhancements:
- wire up tool calling (new in [LM Studio
0.3.6](https://lmstudio.ai/blog/lmstudio-v0.3.6))
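
As background for trying this out: LM Studio exposes an OpenAI-compatible HTTP server locally, which is what the Assistant integration talks to. A minimal sketch, assuming the server is running on its default port (1234) and the `reqwest` crate is available with its blocking feature:

```rust
// List the models the local LM Studio server exposes via its
// OpenAI-compatible REST API. Assumes the server is already running.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = reqwest::blocking::get("http://localhost:1234/v1/models")?.text()?;
    println!("{body}");
    Ok(())
}
```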

---------

Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
2025-01-14 20:41:58 +00:00
Antonio Scandurra
0ec29d6866
Restructure workflow step resolution and fix inserting newlines (#15720)
Release Notes:

- N/A

---------

Co-authored-by: Nathan <nathan@zed.dev>
2024-08-05 09:18:06 +02:00
Antonio Scandurra
d6bdaa8a91
Simplify LLM protocol (#15366)
In this pull request, we change the zed.dev protocol so that we pass the
raw JSON for the specified provider directly to our server. This avoids
the need to define a protobuf message that's a superset of all these
formats.

@bennetbo: We also changed the `available_models` setting under
zed.dev to a flat format, because the nesting seemed too confusing.
Can you help us upgrade the local provider configuration to be
consistent with this? We're happy to do whatever is needed when parsing the
settings to keep this simple for users, even if it's a bit more complex
on our end. We want to use versioning to avoid breaking existing users,
but we need to keep making progress.

```json
"zed.dev": {
  "available_models": [
    {
      "provider": "anthropic",
        "name": "some-newly-released-model-we-havent-added",
        "max_tokens": 200000
      }
  ]
}
```
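
On the wire, the change boils down to an envelope that names the provider and carries its request verbatim. A hedged sketch with hypothetical type and field names, not the actual zed.dev protocol types:

```rust
use serde::{Deserialize, Serialize};
use serde_json::Value;

// Hypothetical envelope: instead of a protobuf superset of every provider's
// format, the client sends the provider name plus the raw provider JSON.
#[derive(Serialize, Deserialize)]
struct CompletionRequestEnvelope {
    provider: String,        // e.g. "anthropic"
    model: String,           // provider-specific model id
    provider_request: Value, // raw provider JSON, forwarded untouched
}
```

The server then only deserializes the envelope and forwards `provider_request` as-is, so no per-provider message definitions are needed.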

Release Notes:

- N/A

---------

Co-authored-by: Nathan <nathan@zed.dev>
2024-07-28 11:07:10 +02:00
Richard Feldman
ec487d8f64
Extract completion provider crate (#14823)
We will soon need `semantic_index` to be able to use
`CompletionProvider`. This is currently impossible due to a cyclic crate
dependency, because `CompletionProvider` lives in the `assistant` crate,
which depends on `semantic_index`.

This PR breaks the dependency cycle by extracting two crates out of
`assistant`: `language_model` and `completion`.

Only one piece of logic changed: the code at commit
`922fcaf5a6` (diff-3857b3707687a4d585f1200eec4c34a7a079eae8d303b4ce5b4fce46234ace9fR61-R69).
* As of https://github.com/zed-industries/zed/pull/13276, whenever we
ask a given completion provider for its available models, OpenAI
providers would go and ask the global assistant settings whether the
user had configured an `available_models` setting, and if so, return
that.
* This PR changes it so that instead of eagerly asking the assistant
settings for this info (the new crate must not depend on `assistant`, or
else the dependency cycle would be back), OpenAI completion providers
now store the user-configured settings as part of their struct, and
whenever the settings change, we update the provider (see the sketch
after this list).
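
A hedged sketch of that shape, with hypothetical names rather than Zed's actual types:

```rust
// The provider caches the user-configured model list instead of reading the
// global assistant settings on every call; a settings observer refreshes it.
#[derive(Clone, Default)]
struct OpenAiSettings {
    available_models: Vec<String>,
}

struct OpenAiCompletionProvider {
    settings: OpenAiSettings, // stored on the struct, not fetched eagerly
}

impl OpenAiCompletionProvider {
    fn available_models(&self) -> &[String] {
        &self.settings.available_models
    }

    // Called from a settings-changed observer.
    fn settings_changed(&mut self, new_settings: OpenAiSettings) {
        self.settings = new_settings;
    }
}
```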

In theory, this change should not affect user-visible behavior...but
since it's the only change in this large PR that's more than just moving
code around, I'm mentioning it here in case there's an unexpected
regression in practice! (cc @amtoaer in case you'd like to try out this
branch and verify that the feature is still working the way you expect.)

Release Notes:

- N/A

---------

Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
2024-07-19 13:35:34 -04:00