Commit graph

13 commits

jvmncs
c71f052276
Add ability to use o1-preview and o1-mini as custom models (#17804)
This is a barebones modification of the OpenAI provider code to
accommodate non-streaming completions, specifically for the o1 models,
which do not support streaming. Verified that this works by running a
`/workflow` with the following (arbitrarily chosen) settings:

```json
{
  "language_models": {
    "openai": {
      "version": "1",
      "available_models": [
        {
          "name": "o1-preview",
          "display_name": "o1-preview",
          "max_tokens": 128000,
          "max_completion_tokens": 30000
        },
        {
          "name": "o1-mini",
          "display_name": "o1-mini",
          "max_tokens": 128000,
          "max_completion_tokens": 20000
        }
      ]
    }
  },
}
```

Release Notes:

- Changed the `low_speed_timeout_in_seconds` option to `600` for the
OpenAI provider to accommodate the recent o1 model release.
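
A sketch of where that option might live, assuming it sits alongside the other provider fields under `language_models` (the placement is an assumption, not something this commit confirms):

```json
{
  "language_models": {
    "openai": {
      "low_speed_timeout_in_seconds": 600
    }
  }
}
```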

---------

Co-authored-by: Peter <peter@zed.dev>
Co-authored-by: Bennet <bennet@zed.dev>
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
2024-09-13 15:42:15 -04:00
Peter Tripp
fb9d01b0d5
assistant: Add display_name for OpenAI and Gemini (#17508) 2024-09-10 13:41:06 -04:00
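
For the `display_name` addition above, a sketch of a custom Gemini entry using the field (following the `available_models` shape used elsewhere in this log; the names are illustrative):

```json
{
  "language_models": {
    "google": {
      "available_models": [
        {
          "name": "gemini-1.5-pro",
          "display_name": "Gemini 1.5 Pro",
          "max_tokens": 2000000
        }
      ]
    }
  }
}
```
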
Peter Tripp
b62e63349b
Ollama max_tokens settings (#17025)
- Support `available_models` for Ollama (a sketch follows below).
- Clamp default max tokens (context length) to 16384.
- Add documentation for Ollama context configuration.
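
A minimal sketch of an Ollama `available_models` entry, assuming it follows the same flat shape as the other providers in this log (the model name is illustrative; 16384 matches the clamp above):

```json
{
  "language_models": {
    "ollama": {
      "available_models": [
        {
          "name": "llama3.1:latest",
          "max_tokens": 16384
        }
      ]
    }
  }
}
```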
2024-08-30 08:52:00 -04:00
邻二氮杂菲
f1778dd9de
Add max_output_tokens to OpenAI models and integrate into requests (#16381)
Related: DeepSeek's 8K `max_tokens` beta release announcement:
https://platform.deepseek.com/api-docs/news/news0725/#4-8k-max_tokens-betarelease-longer-possibilities

### Description
This commit introduces a new field `max_output_tokens` to the OpenAI
models, which allows specifying the maximum number of tokens that can be
generated in the output. This field is now integrated into the request
handling across multiple crates, ensuring that the output token limit is
respected during language model completions.

Changes include:
- Adding `max_output_tokens` to the `Custom` variant of the
`open_ai::Model` enum.
- Updating the `into_open_ai` method in `LanguageModelRequest` to accept
and use `max_output_tokens`.
- Modifying the `OpenAiLanguageModel` and `CloudLanguageModel`
implementations to pass `max_output_tokens` when converting requests.
- Ensuring that the `max_output_tokens` field is correctly serialized
and deserialized in relevant structures.

This enhancement provides more control over the output length of OpenAI
model responses, improving the flexibility and accuracy of language
model interactions.
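
Concretely, a custom model entry using the new field might look like this (a sketch based on the fields named above; the model name and limits are illustrative):

```json
{
  "language_models": {
    "openai": {
      "available_models": [
        {
          "name": "my-custom-openai-model",
          "max_tokens": 128000,
          "max_output_tokens": 4096
        }
      ]
    }
  }
}
```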


### Related Issue
https://github.com/zed-industries/zed/pull/16358


### Checklist
- [x] Code compiles correctly.
- [x] All tests pass.
- [ ] Documentation has been updated accordingly.
- [ ] Additional tests have been added to cover new functionality.

### Release Notes

- Added `max_output_tokens` field to OpenAI models for controlling
output token length.
2024-08-21 00:39:10 -04:00
Nathan Sobo
907d76208d
Allow display name of custom Anthropic models to be customized (#16376)
Also added some docs for our settings.
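
For example, a custom Anthropic model with a customized display name might be declared like this (a sketch following the Anthropic `available_models` shape shown elsewhere in this log; the names are illustrative):

```json
{
  "language_models": {
    "anthropic": {
      "version": "1",
      "available_models": [
        {
          "name": "claude-3-5-sonnet-20240620",
          "display_name": "Claude 3.5 Sonnet (work)",
          "max_tokens": 200000
        }
      ]
    }
  }
}
```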

Release Notes:

- N/A
2024-08-16 14:02:37 -06:00
Roy Williams
b4f5f5024e
Support 8192 output tokens for Claude Sonnet 3.5 (#16358)
Release Notes:

- Added support for 8192 output tokens from Claude Sonnet 3.5
(https://x.com/alexalbert__/status/1812921642143900036)
2024-08-16 11:47:39 -04:00
Roy Williams
46fb917e02
Implement Anthropic prompt caching (#16274)
Release Notes:

- Adds support for Prompt Caching in Anthropic. For models that support
it, this can dramatically lower cost while improving performance.
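
At the API level, Anthropic prompt caching works by marking a long, stable prefix (e.g. the system prompt) with a `cache_control` breakpoint so later requests can reuse it. A sketch of a request body in the public Anthropic API shape (not necessarily Zed's internal representation):

```json
{
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "system": [
    {
      "type": "text",
      "text": "<large, stable instructions reused across requests>",
      "cache_control": { "type": "ephemeral" }
    }
  ],
  "messages": [
    { "role": "user", "content": "First question against the cached prefix" }
  ]
}
```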
2024-08-15 22:21:06 -05:00
Antonio Scandurra
99bc90a372
Allow customization of the model used for tool calling (#15479)
We also eliminated the `completion` crate and moved its logic into
`LanguageModelRegistry`.

Release Notes:

- N/A

---------

Co-authored-by: Nathan <nathan@zed.dev>
2024-07-30 16:18:53 +02:00
Bennet Bo Fenner
2ada2964c5
assistant: Make it easier to define custom models (#15442)
This PR makes it easier to specify custom models for the Google, OpenAI,
and Anthropic providers:

Before (google):

```json
{
  "language_models": {
    "google": {
      "available_models": [
        {
          "custom": {
            "name": "my-custom-google-model",
            "max_tokens": 12345
          }
        }
      ]
    }
  }
}
```

After (google):

```json
{
  "language_models": {
    "google": {
      "available_models": [
        {
          "name": "my-custom-google-model",
          "max_tokens": 12345
        }
      ]
    }
  }
}
```

Before (anthropic):

```json
{
  "language_models": {
    "anthropic": {
      "available_models": [
        {
          "custom": {
            "name": "my-custom-anthropic-model",
            "max_tokens": 12345
          }
        }
      ]
    }
  }
}
```

After (anthropic):

```json
{
  "language_models": {
    "anthropic": {
      "version": "1",
      "available_models": [
        {
          "name": "my-custom-anthropic-model",
          "max_tokens": 12345
        }
      ]
    }
  }
}
```

The settings will be auto-upgraded, so the old versions will continue to
work (except for Google, since that provider has not been released yet).

/cc @as-cii 

Release Notes:

- N/A

---------

Co-authored-by: Thorsten <thorsten@zed.dev>
2024-07-30 15:46:39 +02:00
Ryan Hawkins
6f0655810e
Add GitHub Copilot Chat Support (#14842)
# Summary

This commit implements GitHub Copilot Chat support within the existing
Assistant panel/framework. It required a bit of trickery and internal
API modification, as Copilot doesn't use the same authentication style
as the existing providers, opting for OAuth and a short-lived API key
instead of a plain API key. All existing Assistant features should work.

Release Notes:
- Added GitHub Copilot Chat support
([#4673](https://github.com/zed-industries/zed/issues/4673)).

## Screenshots
<img width="1552" alt="A screenshot showing a conversation between a
user and Github Copilot Chat within the Zed editor."
src="https://github.com/user-attachments/assets/73eaf6a2-792b-4c40-a7fe-f763bd6417d7">

---------

Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
2024-07-30 09:32:58 +02:00
Antonio Scandurra
d6bdaa8a91
Simplify LLM protocol (#15366)
In this pull request, we change the zed.dev protocol so that we pass the
raw JSON for the specified provider directly to our server. This avoids
the need to define a protobuf message that's a superset of all these
formats.

@bennetbo: We also changed the settings for available_models under
zed.dev to be a flat format, because the nesting seemed too confusing.
Can you help us upgrade the local provider configuration to be
consistent with this? We do whatever we need to do when parsing the
settings to make this simple for users, even if it's a bit more complex
on our end. We want to use versioning to avoid breaking existing users,
but need to keep making progress.

```json
"zed.dev": {
  "available_models": [
    {
      "provider": "anthropic",
      "name": "some-newly-released-model-we-havent-added",
      "max_tokens": 200000
    }
  ]
}
```

Release Notes:

- N/A

---------

Co-authored-by: Nathan <nathan@zed.dev>
2024-07-28 11:07:10 +02:00
Bennet Bo Fenner
af4b9805c9
assistant: Fix issues when configuring different providers (#15072)
Release Notes:

- N/A

---------

Co-authored-by: Antonio Scandurra <me@as-cii.com>
2024-07-24 11:21:31 +02:00
Bennet Bo Fenner
d0f52e90e6
assistant: Overhaul provider infrastructure (#14929)
<img width="624" alt="image"
src="https://github.com/user-attachments/assets/f492b0bd-14c3-49e2-b2ff-dc78e52b0815">

- [x] Correctly set custom model token count
- [x] How to count tokens for Gemini models?
- [x] Feature flag zed.dev provider
- [x] Figure out how to configure custom models
- [ ] Update docs

Release Notes:

- Added support for quickly switching between multiple language model
providers in the assistant panel
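
Switching providers presupposes more than one entry under `language_models`; a sketch combining shapes that appear elsewhere in this log (version fields and model names are illustrative):

```json
{
  "language_models": {
    "openai": {
      "version": "1",
      "available_models": [
        { "name": "my-openai-model", "max_tokens": 128000 }
      ]
    },
    "anthropic": {
      "version": "1",
      "available_models": [
        { "name": "my-anthropic-model", "max_tokens": 200000 }
      ]
    }
  }
}
```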

---------

Co-authored-by: Antonio <antonio@zed.dev>
2024-07-23 19:48:41 +02:00