Because we instantiated `ContextServerManager` both in `agent` and
`assistant-context-editor`, and these two entities track the running MCP
servers separately, we were effectively running every MCP server twice.
This PR moves the `ContextServerManager` into the project crate (now
called `ContextServerStore`). The store can be accessed via a project
instance. This ensures that we only instantiate one `ContextServerStore`
per project.
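To illustrate the shape of this (with purely hypothetical stand-in types, not Zed's actual GPUI-based definitions): the store is owned by the project and every consumer asks the project for it, so there is only ever one set of running servers per project.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical stand-ins for illustration only; not Zed's real definitions.
#[derive(Default)]
struct ContextServerStore {
    // Server id -> stand-in for a handle to the running MCP server.
    running: Mutex<HashMap<String, String>>,
}

struct Project {
    context_server_store: Arc<ContextServerStore>,
}

impl Project {
    // Both the agent and the context editor go through this accessor,
    // so they share the same set of running servers.
    fn context_server_store(&self) -> Arc<ContextServerStore> {
        Arc::clone(&self.context_server_store)
    }
}

fn main() {
    let project = Project { context_server_store: Arc::default() };
    let a = project.context_server_store();
    let b = project.context_server_store();
    a.running.lock().unwrap().insert("my-mcp-server".into(), "handle".into());
    // Both consumers observe the same single store, so the server runs once.
    assert_eq!(b.running.lock().unwrap().len(), 1);
}
```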
Also, this PR adds a set of tests to ensure that the
`ContextServerStore` behaves correctly (previously there were none).
Closes #28714
Closes #29530
Release Notes:
- N/A
This change:
1. Catches attempts to use missing tools. If this happens, we now send
Agent a message listing the available tools, after which Agent can
gracefully recover. Previously, the thread would stop in a broken state.
Example of a hallucinated call and the message we send back (a rough
sketch of such a recovery message also follows this list):

2. Adds evals for hallucinated tool use and imagined edits.
3. Adds the ability to configure a profile name in evals.
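For illustration, here is a rough sketch of that recovery message; the tool names and the wording are made up rather than taken from Zed's actual prompt:

```rust
// Illustrative only: roughly the kind of corrective message described above,
// naming the unknown tool and listing the ones that actually exist.
fn missing_tool_message(requested: &str, available: &[&str]) -> String {
    format!(
        "No tool named `{requested}` exists. The available tools are: {}. \
         Please retry your request using one of these tools.",
        available.join(", ")
    )
}

fn main() {
    // Hypothetical tool names, purely for demonstration.
    let message = missing_tool_message("edit_files", &["read_file", "edit_file", "grep"]);
    println!("{message}");
}
```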
Release Notes:
- N/A
To-dos:
- [x] Expose the command to defend against cases where that's just super
long
- [x] Tackle the vertical scroll conflict with panel scroll
- [x] Reduce default font-size
Release Notes:
- N/A
---------
Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
Co-authored-by: Agus Zubiaga <hi@aguz.me>
The goal of this PR is to support tool calls using Ollama. A lot of the
serialization work was done in
https://github.com/zed-industries/zed/pull/15803; however, the
abstraction over language models always disables tools.
## Changelog:
- Use `serde_json::Value` inside `OllamaFunctionCall`; this fixes
deserialization of Ollama tool calls (see the sketch after this list).
- Added deserialization tests using JSON from the official Ollama API docs.
- Fetch model capabilities from the Ollama provider during model enumeration.
- Added a `supports_tools` setting to manually configure whether a model
supports tools.
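A minimal sketch of the deserialization side, assuming struct shapes that mirror the tool-call object in Ollama's chat API docs (the type names are approximate and the payload is hand-written, not copied from Zed):

```rust
use serde::Deserialize;
use serde_json::Value;

// Approximate shapes for illustration; keeping `arguments` as a raw
// `serde_json::Value` is what lets arbitrary tool arguments deserialize.
#[derive(Debug, Deserialize)]
struct OllamaToolCall {
    function: OllamaFunctionCall,
}

#[derive(Debug, Deserialize)]
struct OllamaFunctionCall {
    name: String,
    arguments: Value,
}

fn main() {
    // Payload shaped like the tool-call object in the Ollama chat API docs.
    let json = r#"{
        "function": {
            "name": "get_current_weather",
            "arguments": { "location": "Paris", "format": "celsius" }
        }
    }"#;
    let call: OllamaToolCall = serde_json::from_str(json).expect("tool call should deserialize");
    println!("{} {}", call.function.name, call.function.arguments);
}
```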
## TODO:
- [x] Fix tool call serialization/deserialization
- [x] Fetch model capabilities from ollama api
- [x] Add tests for parsing model capabilities
- [ ] Documentation for the `supports_tools` field in the Ollama language
model config
- [ ] Convert between generic language model types
- [x] Pass tools to ollama
Release Notes:
- N/A
---------
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Nathan Sobo <nathan@zed.dev>
Completely subjective, but I just like it better.
Release Notes:
- N/A
---------
Co-authored-by: Danilo Leal <67129314+danilo-leal@users.noreply.github.com>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Release Notes:
- `r enter` now maintains indentation, matching vim
Useful info for this implementation can be found here:
c3f48e3a76/src/normal.c (L4865)
Closes #29725
Adds three more tests covering the Rust `into` and `await` cases and the
Python `__init__` case, and tweaks the sort logic to accommodate them.
Release Notes:
- Improved code completion sort order, handling more cases with Rust and
Python.
This PR adds prompt usage information, and easy access to managing your
account, to the agent overflow menu:

Currently, this UI only shows after making a request. We'll work on
fetching the usage info eagerly later.
Release Notes:
- Added current prompt usage information to the agent menu (`...`) for
Zed AI users
Bypass our terminal subsystem and just run a shell in a pty (a rough
sketch follows the to-do list below).
- [x] make sure we use the same working directory
- [x] strip control chars from the pty output (?)
- [x] tests
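Here is that sketch, using the `portable-pty` crate purely for illustration (Zed's real implementation goes through its own pty plumbing): spawn the command in a pty rooted at the project's working directory, read the output, and strip control characters before surfacing it.

```rust
use std::io::Read;
use std::path::Path;

use portable_pty::{native_pty_system, CommandBuilder, PtySize};

// Illustrative only: run `sh -c <command>` in a pty rooted at `working_dir`,
// then return the output with control characters (except \n and \t) removed.
fn run_in_pty(command: &str, working_dir: &Path) -> anyhow::Result<String> {
    let pty_system = native_pty_system();
    let pair = pty_system.openpty(PtySize { rows: 24, cols: 80, pixel_width: 0, pixel_height: 0 })?;

    let mut cmd = CommandBuilder::new("sh");
    cmd.arg("-c");
    cmd.arg(command);
    cmd.cwd(working_dir); // same working directory as the project

    let mut child = pair.slave.spawn_command(cmd)?;
    let mut reader = pair.master.try_clone_reader()?;

    let mut raw = Vec::new();
    let mut buf = [0u8; 4096];
    loop {
        match reader.read(&mut buf) {
            Ok(0) => break,
            Ok(n) => raw.extend_from_slice(&buf[..n]),
            Err(_) => break, // reading the master can fail with EIO once the child exits
        }
    }
    child.wait()?;

    let text = String::from_utf8_lossy(&raw);
    Ok(text.chars().filter(|c| !c.is_control() || matches!(c, '\n' | '\t')).collect())
}

fn main() -> anyhow::Result<()> {
    println!("{}", run_in_pty("ls", Path::new("."))?);
    Ok(())
}
```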
Release Notes:
- N/A
Otherwise the panel keeps scrolling as new tokens come in, and it is
almost impossible to keep the scroll position in the right place.
Also, if the user is editing, it is likely that the currently generated
tokens will need to be regenerated anyway, so we may as well stop the
current progress.
Release Notes:
- Agent Beta: Stop generating tokens if previous messages are edited.
GitHub Copilot currently supports the following models for agent mode
with tool calls. At the moment we only support the Anthropic models, not
the OpenAI and Gemini ones. This PR adds support for the OpenAI models;
I have tested it and it works for all of them. For the Gemini models
there seems to be an issue on Copilot's side, so they are not added in
this PR, since enabling a Gemini model breaks it in ask mode as well.
<img width="392" alt="image"
src="https://github.com/user-attachments/assets/fb7a4148-e48c-45c5-9ff9-c02f71217dfb"
/>
- [x] GPT-4.1
- [x] GPT-4o
- [x] o4-mini
Release Notes:
- agent: Add tool calling support for gpt-4.1, gpt-4o, o4-mini when
using Copilot Chat as a provider
Signed-off-by: Umesh Yadav <umesh4257@gmail.com>
Closes #29858
This PR fixes the alignment issue in the project search for cases where
the horizontally available space is large.
The issue arose because the two smaller editors within one line were
allowed to grow as much as the other editors on separate lines, up to
1200 pixels. However, these two editors should together only take up
1200 pixels at maximum, including the gap between them. To fix this, the
editors now live within one container element that grows at the same
rate as the other editors while allowing both editors to flex-grow as
needed in the available space.
Current main:
https://github.com/user-attachments/assets/622016dc-70e5-455f-a7ba-5b69405d7e1e
This PR:
https://github.com/user-attachments/assets/5244abf7-f0c0-4781-acb7-b774638d8a17
Release Notes:
- Improved project search input field alignment.
This PR improves the `GET /billing/usage` endpoint.
We now return the usage with the default plan limits when there is no
usage record.
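A minimal sketch of that fallback, with hypothetical types and limits rather than the actual billing schema: when no usage record exists, synthesize a zero-usage response against the default plan limits.

```rust
// Hypothetical types for illustration only; not the real billing models.
#[derive(Debug, Clone)]
struct Usage {
    requests_used: u32,
    request_limit: u32,
}

// If there is no usage record yet, fall back to zero usage against the
// default plan limit instead of returning nothing.
fn usage_response(record: Option<Usage>, default_limit: u32) -> Usage {
    record.unwrap_or(Usage { requests_used: 0, request_limit: default_limit })
}

fn main() {
    println!("{:?}", usage_response(None, 50));
}
```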
Release Notes:
- N/A
Kinda feel like the way that makes the most sense to sort profiles in
the dropdown is by relevance/impact. "Write" is the default profile and
contains all built-in tools turned on by default, thus it should be the
first. "Ask" contains read-only tools, one step down from Write. And
"Manual" is totally empty, the least "powerful" profile, thus the last.
Release Notes:
- N/A
I am currently setting the font size correctly by using a custom
EditorStyle and building an element. However, I need to use the same
properties as a normal editor for everything but the font size.
Release Notes:
- N/A
* `CountTokensRequest` now takes a full `GenerateContentRequest` instead
of just content.
* Fixes use of the `models/` prefix in the `model` field of
`GenerateContentRequest`, since it is required for use in
`CountTokensRequest`. This didn't cause issues before because the field
was always cleared and the model was used in the path instead (see the
sketch below).
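As a hedged sketch, the resulting request body looks roughly like this (the model name and contents are placeholders; the point is the nested `generateContentRequest` whose `model` field carries the `models/` prefix):

```rust
fn main() {
    // Illustrative countTokens body per the change described above; the model
    // and contents are placeholder values, not taken from Zed's code.
    let body = serde_json::json!({
        "generateContentRequest": {
            "model": "models/gemini-2.0-flash",
            "contents": [
                { "role": "user", "parts": [{ "text": "Hello" }] }
            ]
        }
    });
    println!("{}", serde_json::to_string_pretty(&body).unwrap());
}
```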
Release Notes:
- N/A
This PR sets the billing-related fields in the LLM token claims for Zed
staff.
Staff members are automatically on the Zed Pro plan with a subscription
period that spans the entirety of each month.
Release Notes:
- N/A
I don't think this makes much of a difference in current use, but it
more closely matches other providers and cleans up the "Response"
section of the eval markdown output.
Release Notes:
- N/A
This is a purely cosmetic change: renamed `@rules` to `@rule`, which
unifies the @mention experience (for files, threads, etc. we also use
`@file` and `@thread`, not `@files` and `@threads`). It would also make
sense to rename the rules picker to the rule picker, but I do not want
to introduce conflicts just for the purpose of renaming.
Release Notes:
- N/A
This PR adds a migration to drop the `subscription_usages` and
`subscription_usage_meters` tables from the database.
We're now using `subscription_usages_v2` and
`subscription_usage_meters_v2` everywhere.
Release Notes:
- N/A
I also introduced a new eval to prove the encouragement actually makes a
difference.
Release Notes:
- Improved agent behavior when streaming edits, encouraging it to edit
files as opposed to creating them from scratch.