If you have the VRAM, you can increase the context window by adding this to your settings.json:

```json
"language_models": {
  "ollama": {
    "available_models": [
      {
        "max_tokens": 65536,
        "name": "qwen3",
        "display_name": "Qwen3-64k"
      }
    ]
  }
},
```

Release Notes:

- ollama: Add support for Qwen3. Defaults to 16K token context. See: [Assistant Configuration Docs](https://zed.dev/docs/assistant/configuration#ollama-context) to increase.
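For reference, the `"language_models"` key lives at the top level of settings.json alongside your other settings. A minimal sketch of a complete file (the surrounding keys here are illustrative, not required):

```json
{
  "language_models": {
    "ollama": {
      "available_models": [
        {
          "max_tokens": 65536,
          "name": "qwen3",
          "display_name": "Qwen3-64k"
        }
      ]
    }
  }
}
```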