assistant: Remove low_speed_timeout (#20681)

This removes the `low_speed_timeout` setting from all providers as a
response to issue #19509.

The original `low_speed_timeout` was only added in #9913 because users
wanted to _get rid of_ timeouts: they wanted to bump the default timeout
from 5 seconds to something much higher.

Then, in #19055, the meaning of `low_speed_timeout` changed: it became a
plain `timeout`, which is a different thing entirely and breaks slower
LLMs that don't send a complete response within the configured window.
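
To make the difference concrete: a low-speed timeout aborts a request only if throughput stays below a threshold for a sustained period, while a plain timeout caps the total wall-clock time of the request, streaming or not. A minimal sketch of the two semantics using isahc's builder API (illustrative only, not the actual client code in this patch):

```rust
use std::time::Duration;

use isahc::{prelude::*, HttpClient};

fn main() -> Result<(), isahc::Error> {
    // Low-speed timeout: abort only if throughput stays below
    // 1 byte/sec for 120 consecutive seconds. A slow-but-steady
    // streaming response is never cut off.
    let _low_speed_client = HttpClient::builder()
        .low_speed_timeout(1, Duration::from_secs(120))
        .build()?;

    // Plain timeout: abort any request that takes longer than
    // 120 seconds in total, even while it is still streaming.
    let _hard_timeout_client = HttpClient::builder()
        .timeout(Duration::from_secs(120))
        .build()?;

    Ok(())
}
```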

So we figured: let's remove the whole thing and replace it with a
default _connect_ timeout that ensures we can connect to a server within
10 seconds, but then give the server as long as it needs to complete its
response.
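
In sketch form (same caveat: illustrative builder calls, not the actual patch):

```rust
use std::time::Duration;

use isahc::{prelude::*, HttpClient};

// Bound only the connection phase: establishing the TCP/TLS
// connection must finish within 10 seconds, but no total timeout
// is set, so the server may stream its response for as long as
// it needs.
fn build_client() -> Result<HttpClient, isahc::Error> {
    HttpClient::builder()
        .connect_timeout(Duration::from_secs(10))
        .build()
}
```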

Closes #19509

Release Notes:

- Removed the `low_speed_timeout` setting from LLM provider settings. It
  was only ever used to _increase_ the timeout and give LLMs more time to
  respond; since it served no other purpose, the setting is gone entirely
  and LLMs now get as long as they need.

---------

Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Peter Tripp <peter@zed.dev>
Thorsten Ball 2024-11-15 07:37:31 +01:00 committed by GitHub
parent c9546070ac
commit aee01f2c50
19 changed files with 109 additions and 345 deletions

@@ -124,8 +124,6 @@ Download and install Ollama from [ollama.com/download](https://ollama.com/downlo
 3. In the assistant panel, select one of the Ollama models using the model dropdown.
-4. (Optional) Specify an [`api_url`](#custom-endpoint) or [`low_speed_timeout_in_seconds`](#provider-timeout) if required.
-
 #### Ollama Context Length {#ollama-context}
 
 Zed has pre-configured maximum context lengths (`max_tokens`) to match the capabilities of common models. Zed API requests to Ollama include this as `num_ctx` parameter, but the default values do not exceed `16384` so users with ~16GB of ram are able to use most models out of the box. See [get_max_tokens in ollama.rs](https://github.com/zed-industries/zed/blob/main/crates/ollama/src/ollama.rs) for a complete set of defaults.
@@ -139,7 +137,6 @@ Depending on your hardware or use-case you may wish to limit or increase the con
   "language_models": {
     "ollama": {
       "api_url": "http://localhost:11434",
-      "low_speed_timeout_in_seconds": 120,
       "available_models": [
         {
           "name": "qwen2.5-coder",
@@ -233,22 +230,6 @@ To do so, add the following to your Zed `settings.json`:
 Where `some-provider` can be any of the following values: `anthropic`, `google`, `ollama`, `openai`.
-
-#### Custom timeout {#provider-timeout}
-
-You can customize the timeout that's used for LLM requests, by adding the following to your Zed `settings.json`:
-
-```json
-{
-  "language_models": {
-    "some-provider": {
-      "low_speed_timeout_in_seconds": 10
-    }
-  }
-}
-```
-
-Where `some-provider` can be any of the following values: `anthropic`, `copilot_chat`, `google`, `ollama`, `openai`.
 
 #### Configuring the default model {#default-model}
 
 The default model can be set via the model dropdown in the assistant panel's top-right corner. Selecting a model saves it as the default.
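
For reference, the default model can also be pinned directly in `settings.json`; a minimal sketch, assuming the `assistant.default_model` shape used by Zed at the time (the provider and model values are placeholders):

```json
{
  "assistant": {
    "default_model": {
      "provider": "ollama",
      "model": "qwen2.5-coder"
    }
  }
}
```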