docs: Refine a few AI pages (#31381)

Mostly sharpening the words, simplifying, and removing duplicate
content.

Release Notes:

- N/A
Danilo Leal 2025-05-25 11:03:14 -03:00 committed by GitHub
parent 4c28d2c2e2
commit 1b3f20bdf4
GPG key ID: B5690EEEBB952194
9 changed files with 62 additions and 50 deletions


@@ -21,9 +21,7 @@ You can click on the card that contains your message and re-submit it with an ad
### Checkpoints {#checkpoints}
Every time the AI performs an edit, you should see a "Restore Checkpoint" button to the top of your message.
This allows you to return your codebase to the state it was in prior to that message.
This is usually valuable if the AI's edit doesn't go in the right direction.
Every time the AI performs an edit, you should see a "Restore Checkpoint" button to the top of your message, allowing you to return your codebase to the state it was in prior to that message.
### Navigating History {#navigating-history}
@@ -31,27 +29,27 @@ To quickly navigate through recently opened threads, use the {#kb agent::ToggleN
The items in this menu function similarly to tabs, and closing them doesn't delete the thread; instead, it simply removes them from the recent list.
You can also view all historical conversations with the `View All` option from within the same menu or by reaching for the {#kb agent::OpenHistory} binding.
To view all historical conversations, reach for the `View All` option from within the same menu or via the {#kb agent::OpenHistory} binding.
### Following the Agent {#following-the-agent}
Zed is built with collaboration natively integrated.
This approach extends to collaboration with AI as well.
To follow the agent navigating across your codebase and performing edits, click on the "crosshair" icon button at the bottom left of the panel.
To follow the agent reading through your codebase and performing edits, click on the "crosshair" icon button at the bottom left of the panel.
### Get Notified {#get-notified}
If you send a prompt to the Agent and then move elsewhere, thus putting Zed in the background, a notification will pop up at the top right of your monitor indicating that the Agent has completed its work.
If you send a prompt to the Agent and then move elsewhere, thus putting Zed in the background, a notification will pop up at the top right of your screen indicating that the Agent has completed its work.
You can customize the notification behavior or turn it off entirely by using the `agent.notify_when_agent_waiting` settings key.
You can customize the notification behavior, including the option to turn it off entirely, by using the `agent.notify_when_agent_waiting` settings key.
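For example, a minimal settings entry that turns the notification off entirely might look like the following (the `"never"` value here is an assumption for illustration; check the default settings for the accepted values):

```json
{
  "agent": {
    "notify_when_agent_waiting": "never"
  }
}
```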
### Reviewing Changes {#reviewing-changes}
If you are using a profile that includes write tools, and the agent has made changes to your project, you'll notice the Agent Panel surfaces the fact that edits (and how many of them) have been applied.
Once the agent has made changes to your project, the panel will surface which files, and how many of them, have been edited.
To see which files have been edited, expand the accordion bar that shows up right above the message editor or click the `Review Changes` button ({#kb agent::OpenAgentDiff}), which opens a multi-buffer tab with all changes.
To see which files specifically have been edited, expand the accordion bar that shows up right above the message editor or click the `Review Changes` button ({#kb agent::OpenAgentDiff}), which opens a multi-buffer tab with all changes.
Reviewing includes the option to accept or reject each or all edits.
You're able to reject or accept each individual change hunk, or the whole set of changes made by the agent.
Edit diffs also appear in individual buffers.
So, if your active tab had edits made by the AI, you'll see diffs with the same accept/reject controls as in the multi-buffer.
@@ -63,16 +61,16 @@ Although Zed's agent is very efficient at reading through your codebase to auton
If you have a tab open when opening the Agent Panel, that tab appears as suggested context in the form of a dashed button.
You can also add other forms of context by either mentioning them with `@` or hitting the `+` icon button.
You can even add previous threads as context by mentioning them with `@thread`, or by selecting the "Start New From Summary" option from the top-right menu to continue a longer conversation and keep it within the context window.
You can even add previous threads as context by mentioning them with `@thread`, or by selecting the "New From Summary" option from the top-right menu to continue a longer conversation, keeping it within the context window.
Images are also supported, and pasting them over in the panel's editor works.
Pasting images as context is also supported by the Agent Panel.
### Token Usage {#token-usage}
Zed surfaces how many tokens you are consuming for your currently active thread in the panel's toolbar.
Depending on how many pieces of context you add, your token consumption can grow rapidly.
With that in mind, once you get close to the model's context window, a banner appears on the bottom of the message editor suggesting to start a new thread with the current one summarized and added as context.
With that in mind, once you get close to the model's context window, a banner appears below the message editor suggesting you start a new thread with the current one summarized and added as context.
You can also do this at any time with an ongoing thread via the "Agent Options" menu on the top right.
## Changing Models {#changing-models}
@@ -94,15 +92,15 @@ Zed offers three built-in profiles and you can create as many custom ones as you
#### Built-in Profiles {#built-in-profiles}
- `Write`: A profile with tools to allow the LLM to write to your files and run terminal commands. This one essentially has all built-in tools turned on.
- `Ask`: A profile with read-only tools. Best for asking questions about your code base without the fear of the agent making changes.
- `Minimal`: A profile with no tools. Best for general conversations with the LLM where no knowledge of your code is necessary.
- `Ask`: A profile with read-only tools. Best for asking questions about your code base without the concern of the agent making changes.
- `Minimal`: A profile with no tools. Best for general conversations with the LLM where no knowledge of your code base is necessary.
You can explore the exact tools enabled in each profile by clicking on the profile selector button > `Configure Profiles…` > the one you want to check out.
#### Custom Profiles {#custom-profiles}
You can create a custom profile via the `Configure Profiles…` option in the profile selector.
From here, you can choose to `Add New Profile` or fork an existing one with your choice of tools and a custom profile name.
From here, you can choose to `Add New Profile` or fork an existing one with a custom name and your preferred set of tools.
You can also override built-in profiles.
With a built-in profile selected, in the profile selector, navigate to `Configure Tools`, and select the tools you'd like.
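As a sketch, a custom profile defined by hand might look like the following in your settings (the profile name, tool names, and exact schema below are assumptions for illustration; the `assistant.profiles` key is referenced further down):

```json
{
  "assistant": {
    "profiles": {
      "docs-only": {
        "name": "Docs Only",
        "tools": {
          "read_file": true,
          "grep": true
        }
      }
    }
  }
}
```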
@@ -115,10 +113,10 @@ All custom profiles can be edited via the UI or by hand under the `assistant.pro
Tool calling needs to be individually supported by each model and model provider.
Therefore, despite the presence of tools, some models may not have the ability to pick them up yet in Zed.
You should see a "No tools" disabled button if you select a model that falls into this case.
You should see a "No tools" label if you select a model that falls into this case.
We want to support all of them, though!
We may prioritize which ones to focus on based on popularity and user feedback, so feel free to help and contribute.
We may prioritize which ones to focus on based on popularity and user feedback, so feel free to help and contribute to fast-track the ones that aren't supported yet.
All [Zed's hosted models](./models.md) support tool calling out-of-the-box.


@@ -4,7 +4,7 @@
### Opt-In
When using the Zed Agent Panel, whether through Zed's hosted AI service or via connecting a non-Zed AI service via API key, Zed does not persistently store user content or use user content to evaluate and/or improve our AI features, unless it is explicitly shared with Zed. Each share is opt-in, and sharing once will not cause future content or data to be shared again.
When using the Agent Panel, whether through Zed's hosted AI service or via connecting a non-Zed AI service via API key, Zed does not persistently store user content or use user content to evaluate and/or improve our AI features, unless it is explicitly shared with Zed. Each share is opt-in, and sharing once will not cause future content or data to be shared again.
> Note that rating responses will send your data related to that response to Zed's servers.
> **_If you don't want data persisted on Zed's servers, don't rate_**. We will not collect data for improving our Agentic offering without you explicitly rating responses.
@@ -13,7 +13,8 @@ When using upstream services through Zed AI, we require assurances from our serv
> "Anthropic may not train models on Customer Content from paid Services."
When you directly connect the Zed Assistant with a non Zed AI service (e.g. via API key) Zed does not have control over how your data is used by that service provider. You should reference your agreement with each service provider to understand what terms and conditions apply.
When you directly connect Zed with a non-Zed AI service (e.g., via API key), Zed does not have control over how your data is used by that service provider.
You should reference your agreement with each service provider to understand what terms and conditions apply.
### Data we collect
@@ -90,7 +91,7 @@ This data includes:
### Data Handling
Collected data is stored in Snowflake, a private database where we track other metrics. We periodically review this data to select training samples for inclusion in our model training dataset. We ensure any included data is anonymized and contains no sensitive information (access tokens, user IDs, email addresses, etc). This training dataset is publicly available at: [huggingface.co/datasets/zed-industries/zeta](https://huggingface.co/datasets/zed-industries/zeta).
Collected data is stored in Snowflake, a private database where we track other metrics. We periodically review this data to select training samples for inclusion in our model training dataset. We ensure any included data is anonymized and contains no sensitive information (access tokens, user IDs, email addresses, etc). This training dataset is publicly available at [huggingface.co/datasets/zed-industries/zeta](https://huggingface.co/datasets/zed-industries/zeta).
### Model Output


@@ -23,7 +23,8 @@ Here's an overview of the supported providers and tool call support:
## Use Your Own Keys {#use-your-own-keys}
While Zed offers hosted versions of models through [our various plans](/ai/plans-and-usage), we're always happy to support users wanting to supply their own API keys for LLM providers. Below, you can learn how to do that for each provider.
While Zed offers hosted versions of models through [our various plans](/ai/plans-and-usage), we're always happy to support users wanting to supply their own API keys.
Below, you can learn how to do that for each provider.
> Using your own API keys is _free_—you do not need to subscribe to a Zed plan to use our AI features with your own keys.
@@ -345,7 +346,7 @@ Example configuration for using X.ai Grok with Zed:
lms get qwen2.5-coder-7b
```
3. Make sure the LM Studio API server by running:
3. Make sure the LM Studio API server is running by executing:
```sh
lms server start
```

@@ -1,22 +1,20 @@
# Inline Assistant
## Using the Inline Assistant
## Usage Overview
You can use `ctrl-enter` to open the Inline Assistant nearly anywhere you can enter text: editors, the agent panel, the prompt library, channel notes, and even within the terminal panel.
Use `ctrl-enter` to open the Inline Assistant nearly anywhere you can enter text: editors, text threads, the rules library, channel notes, and even within the terminal panel.
The Inline Assistant allows you to send the current selection (or the current line) to a language model and modify the selection with the language model's response.
You can use `ctrl-enter` to open the inline assistant nearly anywhere you can write text: editors, the Agent Panel, the Rules Library, channel notes, and even within the terminal panel.
You can also perform multiple generation requests in parallel by pressing `ctrl-enter` with multiple cursors, or by pressing `ctrl-enter` with a selection that spans multiple excerpts in a multibuffer.
You can also perform multiple generation requests in parallel by pressing `ctrl-enter` with multiple cursors, or by pressing the same binding with a selection that spans multiple excerpts in a multibuffer.
## Context
You can give the Inline Assistant context the same way you can in the agent panel, allowing you to provide additional instructions or rules for code transformations with @-mentions.
Give the Inline Assistant context the same way you can in [the Agent Panel](./agent-panel.md), allowing you to provide additional instructions or rules for code transformations with @-mentions.
A useful pattern here is to create a thread in the [Agent Panel](./agent-panel.md), and then use the `@thread` command in the Inline Assistant to include the thread as context for the Inline Assistant transformation.
A useful pattern here is to create a thread in the Agent Panel, and then mention that thread with `@thread` in the Inline Assistant to include it as context.
The Inline Assistant is limited to normal mode context windows (see [Models](./models.md) for more).
> The Inline Assistant is limited to normal mode context windows ([see Models](./models.md) for more).
## Prefilling Prompts


@@ -8,8 +8,8 @@ Check out the [Anthropic news post](https://www.anthropic.com/news/model-context
## MCP Servers as Extensions
Zed supports exposing MCP servers as extensions.
You can check which servers are currently available in a few ways: through [the Zed website](https://zed.dev/extensions?filter=context-servers) or directly through the app by running the `zed: extensions` action or by going to the Agent Panel's top-right menu and looking for "View Server Extensions".
One of the ways you can use MCP servers in Zed is by exposing them as extensions.
Check the servers that are already available in Zed's extension store via either [the Zed website](https://zed.dev/extensions?filter=context-servers) or directly through the app by running the `zed: extensions` action or by going to the Agent Panel's top-right menu and looking for "View Server Extensions".
In any case, here are some of the ones available:
@@ -26,7 +26,7 @@ If there's an existing MCP server you'd like to bring to Zed, check out the [con
## Bring your own MCP server
You can bring your own MCP server by adding something like this to your settings:
Alternatively, you can connect to MCP servers in Zed by adding their commands directly to your `settings.json`, like so:
```json
{
@@ -43,4 +43,4 @@ You can bring your own MCP server by adding something like this to your settings
}
```
If you are interested in building your own MCP server, check out the [Model Context Protocol docs](https://modelcontextprotocol.io/introduction#get-started-with-mcp) to get started.
You can also add a custom server by reaching for the Agent Panel's Settings view (also accessible via the `agent: open configuration` action) and adding the desired server through the modal that appears when clicking the "Add Custom Server" button.
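For illustration, a minimal custom-server entry can look like the sketch below (the server name, binary path, and arguments are placeholders, and the exact schema is an assumption; consult the default settings for the authoritative shape):

```json
{
  "context_servers": {
    "my-mcp-server": {
      "command": {
        "path": "/path/to/server-binary",
        "args": ["--stdio"]
      }
    }
  }
}
```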


@@ -1,6 +1,7 @@
# Models
Zed's plans offer hosted versions of major LLMs, generally with higher rate limits than individual API keys. We're working hard to expand the models supported by Zed's subscription offerings, so please check back often.
Zed's plans offer hosted versions of major LLMs, generally with higher rate limits than individual API keys.
We're working hard to expand the models supported by Zed's subscription offerings, so please check back often.
| Model | Provider | Max Mode | Context Window | Price per Prompt | Price per Request |
| ----------------- | --------- | -------- | -------------- | ---------------- | ----------------- |
@@ -16,15 +17,18 @@ The models above can be used with the prompts included in your plan. For models
If you've exceeded your limit for the month, and are on a paid plan, you can enable usage-based pricing to continue using models for the rest of the month. See [Plans and Usage](./plans-and-usage.md) for more information.
Non-[Max Mode](#max-mode) will use up to 25 tool calls per one prompt. If your prompt extends beyond 25 tool calls, Zed will ask if you'd like to continue which will consume a second prompt. See [Max Mode](#max-mode) for more information on tool calls in [Max Mode](#max-mode).
Non-Max Mode usage will use up to 25 tool calls per one prompt. If your prompt extends beyond 25 tool calls, Zed will ask if you'd like to continue, which will consume a second prompt.
## Max Mode {#max-mode}
In Max Mode, we enable models to use [large context windows](#context-windows), unlimited tool calls, and other capabilities for expanded reasoning, to allow an unfettered agentic experience. Because of the increased cost to Zed, each subsequent request beyond the initial user prompt in [Max Mode](#max-mode) models is counted as a prompt for metering. In addition, usage-based pricing per request is slightly more expensive for [Max Mode](#max-mode) models than usage-based pricing per prompt for regular models.
In Max Mode, we enable models to use [large context windows](#context-windows), unlimited tool calls, and other capabilities for expanded reasoning, to allow an unfettered agentic experience.
Note that the Agent Panel using a Max Mode model may consume a good bit of your monthly prompt capacity, if many tool calls are used. We encourage you to think through what model is best for your needs before leaving the Agent Panel to work.
Because of the increased cost to Zed, each subsequent request beyond the initial user prompt in Max Mode models is counted as a prompt for metering.
In addition, usage-based pricing per request is slightly more expensive for Max Mode models than usage-based pricing per prompt for regular models.
By default, all Agent threads start in normal mode, however you can use the agent setting `preferred_completion_mode` to start new Agent threads in max mode.
> Note that the Agent Panel using a Max Mode model may consume a good bit of your monthly prompt capacity, if many tool calls are used. We encourage you to think through what model is best for your needs before leaving the Agent Panel to work.
By default, all Agent threads start in normal mode; however, you can use the agent setting `preferred_completion_mode` to start new Agent threads in Max Mode.
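As a sketch, assuming the setting accepts a mode name (the `"max"` value here is an assumption based on the mode's name; check the default settings for the accepted values), that might look like:

```json
{
  "agent": {
    "preferred_completion_mode": "max"
  }
}
```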
## Context Windows {#context-windows}
@@ -32,10 +36,13 @@ A context window is the maximum span of text and code an LLM can consider at onc
In [Max Mode](#max-mode), we increase context window size to allow models to have enhanced reasoning capabilities.
Each Agent thread in Zed maintains its own context window. The more prompts, attached files, and responses included in a session, the larger the context window grows.
Each Agent thread in Zed maintains its own context window.
The more prompts, attached files, and responses included in a session, the larger the context window grows.
For best results, its recommended you take a purpose-based approach to Agent thread management, starting a new thread for each unique task.
## Tool Calls {#tool-calls}
Models can use [tools](./tools.md) to interface with your code, search the web, and perform other useful functions. In [Max Mode](#max-mode), models can use an unlimited number of tools per prompt, with each tool call counting as a prompt for metering purposes. For non-Max Mode models, you'll need to interact with the model every 25 tool calls to continue, at which point a new prompt will be counted against your plan limit.
Models can use [tools](./tools.md) to interface with your code, search the web, and perform other useful functions.
In [Max Mode](#max-mode), models can use an unlimited number of tools per prompt, with each tool call counting as a prompt for metering purposes.
For non-Max Mode models, you'll need to interact with the model every 25 tool calls to continue, at which point a new prompt will be counted against your plan limit.


@@ -1,14 +1,15 @@
# AI
Zed offers various features that integrate LLMs smoothly into the editor.
Zed smoothly integrates LLMs in multiple ways across the editor.
Learn how to get started with AI in Zed and explore all of its capabilities.
## Setting up AI in Zed
- [Configuration](./configuration.md): Configure the Agent, and set up different language model providers like Anthropic, OpenAI, Ollama, Google AI, and more.
- [Configuration](./configuration.md): Learn how to set up different language model providers like Anthropic, OpenAI, Ollama, Google AI, and more.
- [Models](./models.md): Information about the various language models available in Zed.
- [Models](./models.md): Learn about the various language models available in Zed.
- [Subscription](./subscription.md): Information about Zed's subscriptions and other billing-related information.
- [Subscription](./subscription.md): Learn about Zed's subscriptions and other billing-related information.
- [Privacy and Security](./privacy-and-security.md): Understand how Zed handles privacy and security with AI features.
@@ -22,11 +23,11 @@ Zed offers various features that integrate LLMs smoothly into the editor.
- [Model Context Protocol](./mcp.md): Learn about how to install and configure MCP servers.
- [Inline Assistant](./inline-assistant.md): Discover how to use the agent to power inline transformations directly within a file and terminal.
- [Inline Assistant](./inline-assistant.md): Discover how to use the agent to power inline transformations directly within a file or terminal.
## Edit Prediction
- [Edit Prediction](./edit-prediction.md): Learn about Zed's Edit Prediction feature that helps autocomplete your code.
- [Edit Prediction](./edit-prediction.md): Learn about Zed's AI prediction feature that helps autocomplete your code.
## Text Threads


@@ -2,7 +2,7 @@
Zed's hosted models are offered via subscription to Zed Pro or Zed Business.
> Using your own API keys is **_free_** - you do not need to subscribe to a Zed plan to use our AI features with your own keys.
> Using your own API keys is _free_—you do not need to subscribe to a Zed plan to use our AI features with your own keys.
See the following pages for specific aspects of our subscription offering:


@@ -54,7 +54,13 @@ Any time you see instructions that include commands of the form `zed: ...` or `e
To open your custom settings to set things like fonts, formatting settings, per-language settings, and more, use the {#kb zed::OpenSettings} keybinding.
To see all available settings, open the Command Palette with {#kb command_palette::Toggle} and search for "zed: open default settings". You can also check them all out in the [Configuring Zed](./configuring-zed.md) documentation.
To see all available settings, open the Command Palette with {#kb command_palette::Toggle} and search for `zed: open default settings`.
You can also check them all out in the [Configuring Zed](./configuring-zed.md) documentation.
## Configure AI in Zed
Zed smoothly integrates LLMs in multiple ways across the editor.
Visit [the AI overview page](./ai/overview.md) to learn how to quickly get started with LLMs on Zed.
## Set up your key bindings