parent c2feffac9d
commit 72007c9a62
12 changed files with 245 additions and 113 deletions
@@ -27,7 +27,7 @@ enable = false
 "/conversations.html" = "/community-links"
 "/ai.html" = "/docs/ai/overview.html"
 "/assistant/assistant.html" = "/docs/ai/overview.html"
-"/assistant/configuration.html" = "/docs/ai/custom-api-keys.html"
+"/assistant/configuration.html" = "/docs/ai/configuration.html"
 "/assistant/assistant-panel.html" = "/docs/ai/agent-panel.html"
 "/assistant/contexts.html" = "/docs/ai/text-threads.html"
 "/assistant/inline-assistant.html" = "/docs/ai/inline-assistant.html"
@@ -48,11 +48,11 @@
 - [Text Threads](./ai/text-threads.md)
 - [Rules](./ai/rules.md)
 - [Model Context Protocol](./ai/mcp.md)
+- [Configuration](./ai/configuration.md)
 - [Subscription](./ai/subscription.md)
 - [Plans and Usage](./ai/plans-and-usage.md)
 - [Billing](./ai/billing.md)
 - [Models](./ai/models.md)
-- [Use Your Own API Keys](./ai/custom-api-keys.md)
 - [Privacy and Security](./ai/privacy-and-security.md)
 - [AI Improvement](./ai/ai-improvement.md)
@@ -5,7 +5,7 @@ Signing in to Zed is not a requirement. You can use most features you'd expect i
 ## What Features Require Signing In?
 
 1. All real-time [collaboration features](./collaboration.md).
-2. [LLM-powered features](./ai/overview.md), if you are using Zed as the provider of your LLM models. Alternatively, you can [bring and configure your own API keys](./ai/custom-api-keys.md) if you'd prefer, and avoid having to sign in.
+2. [LLM-powered features](./ai/overview.md), if you are using Zed as the provider of your LLM models. Alternatively, you can [bring and configure your own API keys](./ai/configuration.md#use-your-own-keys) if you'd prefer, and avoid having to sign in.
 
 ## Signing In
 
@@ -5,16 +5,14 @@ You can use it for various tasks, such as generating code, asking questions abou
 
 To open the Agent Panel, use the `agent: new thread` action in [the Command Palette](../getting-started.md#command-palette) or click the ✨ (sparkles) icon in the status bar.
 
-If you're using the Agent Panel for the first time, you'll need to [configure at least one LLM provider](./custom-api-keys.md#providers).
+If you're using the Agent Panel for the first time, you'll need to [configure at least one LLM provider](./ai/configuration.md).
 
 ## Overview {#overview}
 
-After you've configured some LLM providers, you're ready to start working with the Agent Panel.
-
-Type at the message editor and hit `enter` to submit your prompt to the LLM.
+After you've configured an LLM provider, type at the message editor and hit `enter` to submit your prompt.
 If you need extra room to type, you can expand the message editor with {#kb agent::ExpandMessageEditor}.
 
-You should start to see the responses stream in with indications of which [tools](./tools.md) the AI is using to fulfill your prompt.
+You should start to see the responses stream in with indications of [which tools](./tools.md) the AI is using to fulfill your prompt.
 For example, if the AI chooses to perform an edit, you will see a card with the diff.
 
 ### Editing Messages {#editing-messages}
@@ -1 +0,0 @@
-# Overview
@@ -1,10 +1,13 @@
-# Configuring Custom API Keys
+# Configuration
 
-While Zed offers hosted versions of models through our various plans, we're always happy to support users wanting to supply their own API keys for LLM providers.
+There are various aspects of the Agent Panel that you can customize.
+All of them can be seen by either visiting [the Configuring Zed page](/configuring-zed.md#agent) or by running the `zed: open default settings` action and searching for `"agent"`.
+Alternatively, you can also visit the panel's Settings view by running the `agent: open configuration` action or going to the top-right menu and hitting "Settings".
 
-> Using your own API keys is **_free_** - you do not need to subscribe to a Zed plan to use our AI features with your own keys.
-
-## LLM Providers
+## Supported LLM Providers
+
+Zed supports multiple large language model providers.
+Here's an overview of the supported providers and tool call support:
 
 | Provider | Tool Use Supported |
 | ----------------------------------------------- | ------------------ |
@@ -17,21 +20,21 @@ While Zed offers hosted versions of models through our various plans, we're alwa
 | [OpenAI API Compatible](#openai-api-compatible) | 🚫 |
 | [LM Studio](#lmstudio) | 🚫 |
 
-## Providers {#providers}
+## Use Your Own Keys {#use-your-own-keys}
 
-To access the Assistant configuration view, run `assistant: show configuration` in the command palette, or click on the hamburger menu at the top-right of the Assistant Panel and select "Configure".
+While Zed offers hosted versions of models through [our various plans](/ai/plans-and-usage), we're always happy to support users wanting to supply their own API keys for LLM providers. Below, you can learn how to do that for each provider.
 
-Below you can find all the supported providers available so far.
+> Using your own API keys is _free_ - you do not need to subscribe to a Zed plan to use our AI features with your own keys.
 
 ### Anthropic {#anthropic}
 
-> 🔨 Supports tool use
+> ✅ Supports tool use
 
-You can use Anthropic models with the Zed assistant by choosing it via the model dropdown in the assistant panel.
+You can use Anthropic models by choosing one via the model dropdown in the Agent Panel.
 
 1. Sign up for Anthropic and [create an API key](https://console.anthropic.com/settings/keys)
 2. Make sure that your Anthropic account has credits
-3. Open the configuration view (`assistant: show configuration`) and navigate to the Anthropic section
+3. Open the settings view (`agent: open configuration`) and go to the Anthropic section
 4. Enter your Anthropic API key
 
 Even if you pay for Claude Pro, you will still have to [pay for additional credits](https://console.anthropic.com/settings/plans) to use it via the API.
@@ -65,7 +68,7 @@ You can add custom models to the Anthropic provider by adding the following to y
 }
 ```
 
-Custom models will be listed in the model dropdown in the assistant panel.
+Custom models will be listed in the model dropdown in the Agent Panel.
 
 You can configure a model to use [extended thinking](https://docs.anthropic.com/en/docs/about-claude/models/extended-thinking-models) (if it supports it),
 by changing the mode of your model's configuration to `thinking`, for example:
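The `thinking` settings example itself falls outside this hunk. As a sketch, it presumably mirrors the `language_models` shape used by the other provider examples in this document; the model name, token limit, and thinking budget below are illustrative assumptions, not taken from the diff:

```json
{
  "language_models": {
    "anthropic": {
      "available_models": [
        {
          "name": "claude-3-7-sonnet-latest",
          "display_name": "claude-3-7-sonnet-thinking",
          "max_tokens": 200000,
          "mode": {
            "type": "thinking",
            "budget_tokens": 4096
          }
        }
      ]
    }
  }
}
```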
@@ -84,19 +87,19 @@ by changing the mode of your model's configuration to `thinking`, for example:
 
 ### GitHub Copilot Chat {#github-copilot-chat}
 
-> 🔨 Supports tool use in some cases.
-> See [here](https://github.com/zed-industries/zed/blob/9e0330ba7d848755c9734bf456c716bddf0973f3/crates/language_models/src/provider/copilot_chat.rs#L189-L198) for the supported subset
+> ✅ Supports tool use in some cases.
+> Visit [the Copilot Chat code](https://github.com/zed-industries/zed/blob/9e0330ba7d848755c9734bf456c716bddf0973f3/crates/language_models/src/provider/copilot_chat.rs#L189-L198) for the supported subset.
 
-You can use GitHub Copilot Chat with the Zed assistant by choosing it via the model dropdown in the assistant panel.
+You can use GitHub Copilot Chat with the Zed assistant by choosing it via the model dropdown in the Agent Panel.
 
 ### Google AI {#google-ai}
 
-> 🔨 Supports tool use
+> ✅ Supports tool use
 
-You can use Gemini 1.5 Pro/Flash with the Zed assistant by choosing it via the model dropdown in the assistant panel.
+You can use Gemini 1.5 Pro/Flash with the Zed assistant by choosing it via the model dropdown in the Agent Panel.
 
 1. Go to the Google AI Studio site and [create an API key](https://aistudio.google.com/app/apikey).
-2. Open the configuration view (`assistant: show configuration`) and navigate to the Google AI section
+2. Open the settings view (`agent: open configuration`) and go to the Google AI section
 3. Enter your Google AI API key and press enter.
 
 The Google AI API key will be saved in your keychain.
@@ -123,11 +126,11 @@ By default Zed will use `stable` versions of models, but you can use specific ve
 }
 ```
 
-Custom models will be listed in the model dropdown in the assistant panel.
+Custom models will be listed in the model dropdown in the Agent Panel.
 
 ### Ollama {#ollama}
 
-> 🔨 Supports tool use
+> ✅ Supports tool use
 
 Download and install Ollama from [ollama.com/download](https://ollama.com/download) (Linux or macOS) and ensure it's running with `ollama --version`.
 
@@ -137,19 +140,21 @@ Download and install Ollama from [ollama.com/downlo
    ollama pull mistral
    ```
 
-2. Make sure that the Ollama server is running. You can start it either via running Ollama.app (MacOS) or launching:
+2. Make sure that the Ollama server is running. You can start it either via running Ollama.app (macOS) or launching:
 
    ```sh
    ollama serve
    ```
 
-3. In the assistant panel, select one of the Ollama models using the model dropdown.
+3. In the Agent Panel, select one of the Ollama models using the model dropdown.
 
 #### Ollama Context Length {#ollama-context}
 
-Zed has pre-configured maximum context lengths (`max_tokens`) to match the capabilities of common models. Zed API requests to Ollama include this as `num_ctx` parameter, but the default values do not exceed `16384` so users with ~16GB of ram are able to use most models out of the box. See [get_max_tokens in ollama.rs](https://github.com/zed-industries/zed/blob/main/crates/ollama/src/ollama.rs) for a complete set of defaults.
+Zed has pre-configured maximum context lengths (`max_tokens`) to match the capabilities of common models.
+Zed API requests to Ollama include this as the `num_ctx` parameter, but the default values do not exceed `16384`, so users with ~16GB of RAM are able to use most models out of the box.
+See [get_max_tokens in ollama.rs](https://github.com/zed-industries/zed/blob/main/crates/ollama/src/ollama.rs) for a complete set of defaults.
 
-**Note**: Tokens counts displayed in the assistant panel are only estimates and will differ from the models native tokenizer.
+> **Note**: Token counts displayed in the Agent Panel are only estimates and will differ from the model's native tokenizer.
 
 Depending on your hardware or use-case you may wish to limit or increase the context length for a specific model via settings.json:
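The settings example itself is elided at the hunk boundary. A sketch of such an override, assuming the same `language_models` structure as the other provider examples in this document (the model name and `max_tokens` value are illustrative):

```json
{
  "language_models": {
    "ollama": {
      "available_models": [
        {
          "name": "qwen2.5-coder",
          "display_name": "qwen2.5-coder-32k",
          "max_tokens": 32768
        }
      ]
    }
  }
}
```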
@@ -176,11 +181,11 @@ You may also optionally specify a value for `keep_alive` for each available mode
 
 ### OpenAI {#openai}
 
-> 🔨 Supports tool use
+> ✅ Supports tool use
 
 1. Visit the OpenAI platform and [create an API key](https://platform.openai.com/account/api-keys)
 2. Make sure that your OpenAI account has credits
-3. Open the configuration view (`assistant: show configuration`) and navigate to the OpenAI section
+3. Open the settings view (`agent: open configuration`) and go to the OpenAI section
 4. Enter your OpenAI API key
 
 The OpenAI API key will be saved in your keychain.
@@ -214,14 +219,14 @@ The Zed Assistant comes pre-configured to use the latest version for common mode
 }
 ```
 
-You must provide the model's Context Window in the `max_tokens` parameter, this can be found [OpenAI Model Docs](https://platform.openai.com/docs/models). OpenAI `o1` models should set `max_completion_tokens` as well to avoid incurring high reasoning token costs. Custom models will be listed in the model dropdown in the assistant panel.
+You must provide the model's context window in the `max_tokens` parameter; this can be found in the [OpenAI model docs](https://platform.openai.com/docs/models). OpenAI `o1` models should set `max_completion_tokens` as well to avoid incurring high reasoning token costs. Custom models will be listed in the model dropdown in the Agent Panel.
 
 ### DeepSeek {#deepseek}
 
-> 🚫 Does not support tool use 🚫
+> 🚫 Does not support tool use
 
 1. Visit the DeepSeek platform and [create an API key](https://platform.deepseek.com/api_keys)
-2. Open the configuration view (`assistant: show configuration`) and navigate to the DeepSeek section
+2. Open the settings view (`agent: open configuration`) and go to the DeepSeek section
 3. Enter your DeepSeek API key
 
 The DeepSeek API key will be saved in your keychain.
@@ -255,7 +260,7 @@ The Zed Assistant comes pre-configured to use the latest version for common mode
 }
 ```
 
-Custom models will be listed in the model dropdown in the assistant panel. You can also modify the `api_url` to use a custom endpoint if needed.
+Custom models will be listed in the model dropdown in the Agent Panel. You can also modify the `api_url` to use a custom endpoint if needed.
 
 ### OpenAI API Compatible {#openai-api-compatible}
 
@@ -283,7 +288,7 @@ Example configuration for using X.ai Grok with Zed:
 
 ### LM Studio {#lmstudio}
 
-> 🚫 Does not support tool use 🚫
+> 🚫 Does not support tool use
 
 1. Download and install the latest version of LM Studio from https://lmstudio.ai/download
 2. In the app press ⌘/Ctrl + Shift + M and download at least one model, e.g. qwen2.5-coder-7b
@@ -301,3 +306,102 @@ Example configuration for using X.ai Grok with Zed:
 ```
 
 Tip: Set [LM Studio as a login item](https://lmstudio.ai/docs/advanced/headless#run-the-llm-service-on-machine-login) to automate running the LM Studio server.
+
+## Advanced Configuration {#advanced-configuration}
+
+### Custom Provider Endpoints {#custom-provider-endpoint}
+
+You can use a custom API endpoint for different providers, as long as it's compatible with the provider's API structure.
+To do so, add the following to your `settings.json`:
+
+```json
+{
+  "language_models": {
+    "some-provider": {
+      "api_url": "http://localhost:11434"
+    }
+  }
+}
+```
+
+Where `some-provider` can be any of the following values: `anthropic`, `google`, `ollama`, `openai`.
+
+### Default Model {#default-model}
+
+Zed's hosted LLM service sets `claude-3-7-sonnet-latest` as the default model.
+However, you can change it either via the model dropdown in the Agent Panel's bottom-right corner or by manually editing the `default_model` object in your settings:
+
+```json
+{
+  "assistant": {
+    "version": "2",
+    "default_model": {
+      "provider": "zed.dev",
+      "model": "gpt-4o"
+    }
+  }
+}
+```
+
+### Feature-specific Models {#feature-specific-models}
+
+If a feature-specific model is not set, it will fall back to using the default model, which is the one you set in the Agent Panel.
+
+You can configure the following feature-specific models:
+
+- Thread summary model: Used for generating thread summaries
+- Inline assistant model: Used for the inline assistant feature
+- Commit message model: Used for generating Git commit messages
+
+Example configuration:
+
+```json
+{
+  "assistant": {
+    "version": "2",
+    "default_model": {
+      "provider": "zed.dev",
+      "model": "claude-3-7-sonnet"
+    },
+    "inline_assistant_model": {
+      "provider": "anthropic",
+      "model": "claude-3-5-sonnet"
+    },
+    "commit_message_model": {
+      "provider": "openai",
+      "model": "gpt-4o-mini"
+    },
+    "thread_summary_model": {
+      "provider": "google",
+      "model": "gemini-2.0-flash"
+    }
+  }
+}
+```
+
+### Alternative Models for Inline Assists {#alternative-assists}
+
+You can configure additional models that will be used to perform inline assists in parallel.
+When you do this, the inline assist UI will surface controls to cycle between the alternatives generated by each model.
+
+The models you specify here are always used in _addition_ to your [default model](#default-model).
+For example, the following configuration will generate two outputs for every assist:
+one with Claude 3.7 Sonnet, and one with GPT-4o.
+
+```json
+{
+  "assistant": {
+    "default_model": {
+      "provider": "zed.dev",
+      "model": "claude-3-7-sonnet"
+    },
+    "inline_alternatives": [
+      {
+        "provider": "zed.dev",
+        "model": "gpt-4o"
+      }
+    ],
+    "version": "2"
+  }
+}
+```
@@ -276,4 +276,4 @@ You should be able to sign-in to Supermaven by clicking on the Supermaven icon i
 
 ## See also
 
-You may also use the [Agent Panel](./agent-panel.md) or the [Inline Assistant](./inline-assistant.md) to interact with language models, see the [AI documentation](./ai.md) for more information on the other AI features in Zed.
+You may also use the [Agent Panel](./agent-panel.md) or the [Inline Assistant](./inline-assistant.md) to interact with language models; see the [AI documentation](./overview.md) for more information on the other AI features in Zed.
@@ -6,9 +6,12 @@ Zed uses the [Model Context Protocol](https://modelcontextprotocol.io/) to inter
 
 Check out the [Anthropic news post](https://www.anthropic.com/news/model-context-protocol) and the [Zed blog post](https://zed.dev/blog/mcp) for an introduction to MCP.
 
-## Try it out
+## MCP Servers as Extensions
 
-Want to try it for yourself? Here are some MCP servers available as Zed extensions:
+Zed supports exposing MCP servers as extensions.
+You can check which servers are currently available in a few ways: through [the Zed website](https://zed.dev/extensions?filter=context-servers) or directly through the app by running the `zed: extensions` action or by going to the Agent Panel's top-right menu and looking for "View Server Extensions".
+
+In any case, here are some of the ones available:
 
 - [Postgres](https://github.com/zed-extensions/postgres-context-server)
 - [GitHub](https://github.com/LoamStudios/zed-mcp-server-github)
@@ -19,13 +22,11 @@ Want to try it for yourself? Here are some MCP servers available as Zed extensio
 - [Framelink Figma](https://github.com/LoamStudios/zed-mcp-server-figma)
 - [Linear](https://github.com/LoamStudios/zed-mcp-server-linear)
 
-Browse all available MCP extensions either on [Zed's website](https://zed.dev/extensions?filter=context-servers) or directly in Zed via the `zed: extensions` action in the Command Palette.
-
 If there's an existing MCP server you'd like to bring to Zed, check out the [context server extension docs](../extensions/context-servers.md) for how to make it available as an extension.
 
-## Bring your own context server
+## Bring your own MCP server
 
-You can bring your own context server by adding something like this to your settings:
+You can bring your own MCP server by adding something like this to your settings:
 
 ```json
 {
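The settings block above is truncated at the hunk boundary. It presumably continues along these lines, where the server name, binary path, and arguments are illustrative placeholders rather than values from the diff:

```json
{
  "context_servers": {
    "my-mcp-server": {
      "command": {
        "path": "/path/to/server-binary",
        "args": ["--stdio"],
        "env": {}
      }
    }
  }
}
```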
@@ -6,9 +6,7 @@ Zed offers various features that integrate LLMs smoothly into the editor.
 
 - [Models](./models.md): Information about the various language models available in Zed.
 
-- [Configuration](./custom-api-keys.md): Configure the Agent, and set up different language model providers like Anthropic, OpenAI, Ollama, Google AI, and more.
-
-- [Custom API Keys](./custom-api-keys.md): How to use your own API keys with the AI features.
+- [Configuration](./configuration.md): Configure the Agent, and set up different language model providers like Anthropic, OpenAI, Ollama, Google AI, and more.
 
 - [Subscription](./subscription.md): Information about Zed's subscriptions and other billing related information.
 
@@ -22,7 +20,7 @@ Zed offers various features that integrate LLMs smoothly into the editor.
 
 - [Tools](./tools.md): Explore the tools that enhance the AI's capabilities to interact with your codebase.
 
-- [Model Context Protocol](./mcp.md): Learn about context servers that enhance the Assistant's capabilities.
+- [Model Context Protocol](./mcp.md): Learn about context servers that enhance the Agent's capabilities.
 
 - [Inline Assistant](./inline-assistant.md): Discover how to use the agent to power inline transformations directly within your code editor and terminal.
 
@@ -1,23 +1,12 @@
 # Using Rules {#using-rules}
 
-Rules are an essential part of interacting with AI assistants in Zed. They help guide the AI's responses and ensure you get the most relevant and useful information.
-
-Every new chat will start with the [default rules](#default-rules), which can be customized and is where your model prompting will stored.
-
-Remember that effective prompting is an iterative process. Experiment with different prompt structures and wordings to find what works best for your specific needs and the model you're using.
-
-Here are some tips for creating effective rules:
-
-1. Be specific: Clearly state what you want the AI to do or explain.
-2. Provide context: Include relevant information about your project or problem.
-3. Use examples: If applicable, provide examples to illustrate your request.
-4. Break down complex tasks: For multi-step problems, consider breaking them into smaller, more manageable rules.
+A rule is essentially a prompt that is inserted at the beginning of each interaction with the Agent.
+Currently, Zed supports `.rules` files at the directory's root and the Rules Library, which allows you to store multiple rules for on-demand usage.
 
 ## `.rules` files
 
-Zed supports including `.rules` files at the top level of worktrees. Here, you can include project-level instructions you'd like to have included in all of your interactions with the agent panel. Other names for this file are also supported - the first file which matches in this list will be used: `.rules`, `.cursorrules`, `.windsurfrules`, `.clinerules`, `.github/copilot-instructions.md`, or `CLAUDE.md`.
-
-Zed also supports creating rules (`Rules Library`) that can be included in any interaction with the agent panel.
+Zed supports including `.rules` files at the top level of worktrees, which act as project-level instructions you'd like to have included in all of your interactions with the Agent Panel.
+Other names for this file are also supported - the first file which matches in this list will be used: `.rules`, `.cursorrules`, `.windsurfrules`, `.clinerules`, `.github/copilot-instructions.md`, or `CLAUDE.md`.
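For illustration, a minimal `.rules` file with project-level instructions might contain something like the following (these contents are hypothetical, not from the diff):

```plaintext
Always use TypeScript strict mode in examples.
Prefer small, focused functions with descriptive names.
Do not add code comments unless asked.
```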
 ## Rules Library {#rules-library}
 
@@ -27,11 +16,11 @@ You can use the inline assistant right in the rules editor, allowing you to auto
 
 ### Opening the Rules Library
 
-1. Open the agent panel.
-2. Click on the `Agent Menu` (`...`) in the top right corner.
+1. Open the Agent Panel.
+2. Click on the Agent menu (`...`) in the top right corner.
 3. Select `Rules...` from the dropdown.
 
-You can also use the `assistant: open rules library` command while in the agent panel.
+You can also use the `agent: open rules library` command while in the Agent Panel.
 
 ### Managing Rules
 
@@ -39,50 +28,38 @@ Once a rules file is selected, you can edit it directly in the built-in editor.
 
 Rules can be duplicated, deleted, or added to the default rules using the buttons in the rules editor.
 
-## Creating Rules {#creating-rules}
+### Creating Rules {#creating-rules}
 
 To create a rule file, simply open the `Rules Library` and click the `+` button. Rules files are stored locally and can be accessed from the library at any time.
 
-Having a series of rules files specifically tailored to prompt engineering can also help you write consistent and effective rules.
-
-The process of writing and refining prompts is commonly referred to as "prompt engineering."
-
-More on rule engineering:
+Here are a couple of helpful resources for writing better rules:
 
 - [Anthropic: Prompt Engineering](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview)
 - [OpenAI: Prompt Engineering](https://platform.openai.com/docs/guides/prompt-engineering)
 
-## Editing the Default Rules {#default-rules}
+### Editing the Default Rules {#default-rules}
 
-Zed allows you to customize the default rules used when interacting with LLMs. Or to be more precise, it uses a series of rules that are combined to form the default rules.
+Zed allows you to customize the default rules used when interacting with LLMs.
+Or to be more precise, it uses a series of rules that are combined to form the default rules.
 
 To edit rules, select `Rules...` from the `Agent Menu` icon (`...`) in the upper right hand corner or using the {#kb assistant::OpenRulesLibrary} keyboard shortcut.
 
 A default set of rules might look something like:
 
 ```plaintext
 [-] Default
   [+] Today's date
   [+] You are an expert
   [+] Don't add comments
 ```
 
-Default rules are included in the context of new threads automatically.
-
-Default rules will show at the top of the rules list, and will be included with every new conversation.
-
-You can manually add other rules as context using the `@rule` command.
-
-> **Note:** Remember, commands are only evaluated when the context is created, so a command like `@file` won't continuously update.
+Default rules are included in the context of every new thread automatically.
+You can also manually add other rules (that are not flagged as default) as context using the `@rule` command.
 
 ## Migrating from Prompt Library
 
-Previously, the Rules Library was called the Prompt Library. The new rules system replaces the Prompt Library except in a few specific cases, which are outlined below.
+Previously, the Rules Library was called the "Prompt Library".
+The new rules system replaces the Prompt Library except in a few specific cases, which are outlined below.
 
 ### Slash Commands in Rules
 
-Previously, it was possible to use slash commands (now @-mentions) in custom prompts (now rules). There is currently no support for using @-mentions in rules files, however, slash commands are supported in rules files when used with text threads. See the documentation for using [slash commands in rules](./text-threads.md#slash-commands-in-rules) for more information.
+Previously, it was possible to use slash commands (now @-mentions) in custom prompts (now rules).
+There is currently no support for using @-mentions in rules files; however, slash commands are supported in rules files when used with text threads.
+See the documentation for using [slash commands in rules](./text-threads.md#slash-commands-in-rules) for more information.
 
 ### Prompt templates
 
-Zed maintains backwards compatibility with its original template system, which allows you to customize prompts used throughout the application, including the inline assistant. While the Rules Library is now the primary way to manage prompts, you can still use these legacy templates to override default prompts. For more details, see the [Rules Templates](./text-threads.md#rule-templates) section under [Text Threads](./text-threads.md).
+Zed maintains backwards compatibility with its original template system, which allows you to customize prompts used throughout the application, including the inline assistant.
+While the Rules Library is now the primary way to manage prompts, you can still use these legacy templates to override default prompts.
+For more details, see the [Rules Templates](./text-threads.md#rule-templates) section under [Text Threads](./text-threads.md).
@@ -1,20 +1,75 @@
 # Tools
 
-Zed's Agent has access to a variety of tools that allow it to interact with your codebase and perform tasks:
+Zed's Agent has access to a variety of tools that allow it to interact with your codebase and perform tasks.
 
-- **`copy_path`**: Copies a file or directory recursively in the project, more efficient than manually reading and writing files when duplicating content.
-- **`create_directory`**: Creates a new directory at the specified path within the project, creating all necessary parent directories (similar to `mkdir -p`).
-- **`create_file`**: Creates a new file at a specified path with given text content, the most efficient way to create new files or completely replace existing ones.
-- **`delete_path`**: Deletes a file or directory (including contents recursively) at the specified path and confirms the deletion.
-- **`diagnostics`**: Gets errors and warnings for either a specific file or the entire project, useful after making edits to determine if further changes are needed.
-- **`edit_file`**: Edits files by replacing specific text with new content.
-- **`fetch`**: Fetches a URL and returns the content as Markdown. Useful for providing docs as context.
-- **`list_directory`**: Lists files and directories in a given path, providing an overview of filesystem contents.
-- **`move_path`**: Moves or renames a file or directory in the project, performing a rename if only the filename differs.
-- **`now`**: Returns the current date and time.
-- **`find_path`**: Quickly finds files by matching glob patterns (like `**/*.js`), returning matching file paths alphabetically.
-- **`read_file`**: Reads the content of a specified file in the project, allowing access to file contents.
-- **`grep`**: Searches file contents across the project using regular expressions, preferred for finding symbols in code without knowing exact file paths.
-- **`terminal`**: Executes shell commands and returns the combined output, creating a new shell process for each invocation.
-- **`thinking`**: Allows the Agent to work through problems, brainstorm ideas, or plan without executing actions, useful for complex problem-solving.
-- **`web_search`**: Searches the web for information, providing results with snippets and links from relevant web pages, useful for accessing real-time information.
+## Read & Search Tools
+
+### `diagnostics`
+
+Gets errors and warnings for either a specific file or the entire project, useful after making edits to determine if further changes are needed.
+
+### `fetch`
+
+Fetches a URL and returns the content as Markdown. Useful for providing docs as context.
+
+### `find_path`
+
+Quickly finds files by matching glob patterns (like `**/*.js`), returning matching file paths alphabetically.
+
+### `grep`
+
+Searches file contents across the project using regular expressions, preferred for finding symbols in code without knowing exact file paths.
+
+### `list_directory`
+
+Lists files and directories in a given path, providing an overview of filesystem contents.
+
+### `now`
+
+Returns the current date and time.
+
+### `open`
+
+Opens a file or URL with the default application associated with it on the user's operating system.
+
+### `read_file`
+
+Reads the content of a specified file in the project, allowing access to file contents.
+
+### `thinking`
+
+Allows the Agent to work through problems, brainstorm ideas, or plan without executing actions, useful for complex problem-solving.
+
+### `web_search`
+
+Searches the web for information, providing results with snippets and links from relevant web pages, useful for accessing real-time information.
+
+## Edit Tools
+
+### `copy_path`
+
+Copies a file or directory recursively in the project, more efficient than manually reading and writing files when duplicating content.
+
+### `create_directory`
+
+Creates a new directory at the specified path within the project, creating all necessary parent directories (similar to `mkdir -p`).
+
+### `create_file`
+
+Creates a new file at a specified path with given text content, the most efficient way to create new files or completely replace existing ones.
+
+### `delete_path`
+
+Deletes a file or directory (including contents recursively) at the specified path and confirms the deletion.
+
+### `edit_file`
+
+Edits files by replacing specific text with new content.
+
+### `move_path`
+
+Moves or renames a file or directory in the project, performing a rename if only the filename differs.
+
+### `terminal`
+
+Executes shell commands and returns the combined output, creating a new shell process for each invocation.
@@ -74,7 +74,7 @@ In there, you can use the "Uncommit" button, which performs the `git reset HEAD
 Zed currently supports LLM-powered commit message generation.
 You can ask AI to generate a commit message by focusing on the message editor within the Git Panel and either clicking on the pencil icon in the bottom left, or reaching for the {#action git::GenerateCommitMessage} ({#kb git::GenerateCommitMessage}) keybinding.
 
-> Note that you need to have an LLM provider configured. Visit [the Assistant configuration page](./ai/custom-api-keys.md) to learn how to do so.
+> Note that you need to have an LLM provider configured. Visit [the AI configuration page](./ai/configuration.md) to learn how to do so.
 
 <!-- Add media -->
 