Add LM Studio support to the Assistant (#23097)
#### Release Notes:

- Added support for [LM Studio](https://lmstudio.ai/) to the Assistant.

#### Quick demo:

https://github.com/user-attachments/assets/af58fc13-1abc-4898-9747-3511016da86a

#### Future enhancements:

- Wire up tool calling (new in [LM Studio 0.3.6](https://lmstudio.ai/blog/lmstudio-v0.3.6))

---

Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Parent: 4445679f3c
Commit: c038696aa8

24 changed files with 1153 additions and 2 deletions
@@ -8,7 +8,7 @@ This section covers various aspects of the Assistant:

- [Inline Assistant](./inline-assistant.md): Discover how to use the Assistant to power inline transformations directly within your code editor and terminal.
- [Providers & Configuration](./configuration.md): Configure the Assistant, and set up different language model providers like Anthropic, OpenAI, Ollama, Google Gemini, and GitHub Copilot Chat.
- [Providers & Configuration](./configuration.md): Configure the Assistant, and set up different language model providers like Anthropic, OpenAI, Ollama, LM Studio, Google Gemini, and GitHub Copilot Chat.
- [Introducing Contexts](./contexts.md): Learn about contexts (similar to conversations), and how they power the interactions between you, your project, and the assistant/model.
@@ -10,6 +10,7 @@ The following providers are supported:

- [Google AI](#google-ai) [^1]
- [Ollama](#ollama)
- [OpenAI](#openai)
- [LM Studio](#lmstudio)

To configure different providers, run `assistant: show configuration` in the command palette, or click on the hamburger menu at the top-right of the assistant panel and select "Configure".
@@ -236,6 +237,25 @@ Example configuration for using X.ai Grok with Zed:
}
```

### LM Studio {#lmstudio}

1. Download and install the latest version of LM Studio from https://lmstudio.ai/download
2. In the app press ⌘/Ctrl + Shift + M and download at least one model, e.g. qwen2.5-coder-7b

   You can also get models via the LM Studio CLI:

   ```sh
   lms get qwen2.5-coder-7b
   ```

3. Start the LM Studio API server by running:

   ```sh
   lms server start
   ```
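Once the server is up, you can sanity-check it from the terminal. LM Studio serves an OpenAI-compatible API on port 1234 by default; the exact check below is an illustration, not part of the official setup steps:

```shell
# List the models known to the local LM Studio server.
# Assumes the default LM Studio port (1234); prints a message
# instead of failing if the server is not running yet.
curl -s http://localhost:1234/v1/models || echo "LM Studio server is not running"
```

If the server is running, this returns a JSON list of available models.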

Tip: Set [LM Studio as a login item](https://lmstudio.ai/docs/advanced/headless#run-the-llm-service-on-machine-login) to automate running the LM Studio server.
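If you want to pin the endpoint or default model explicitly rather than relying on auto-detection, a `settings.json` sketch might look like the following. The key names here (`language_models.lm_studio.api_url` and the `assistant.default_model` shape) are assumptions patterned after how Zed configures its other OpenAI-compatible providers; check the configuration docs for the authoritative schema:

```jsonc
{
  // Hypothetical sketch; key names follow the pattern used for
  // Zed's other providers and may differ for LM Studio.
  "language_models": {
    "lm_studio": {
      // Default LM Studio server address.
      "api_url": "http://localhost:1234/api/v0"
    }
  },
  "assistant": {
    "default_model": {
      "provider": "lm_studio",
      "model": "qwen2.5-coder-7b"
    }
  }
}
```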
#### Custom endpoints {#custom-endpoint}

You can use a custom API endpoint for different providers, as long as it's compatible with the provider's API structure.
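As a hedged illustration of the pattern: pointing a provider at a different OpenAI-compatible server would mean overriding its `api_url` in `settings.json`. The provider name and URL below are placeholders, not values from this change:

```jsonc
{
  "language_models": {
    "openai": {
      // Placeholder: any server exposing a compatible API surface.
      "api_url": "http://localhost:8080/v1"
    }
  }
}
```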