Release Notes:
- Added channel reordering for administrators (use `cmd-up` and
`cmd-down` on macOS, or `ctrl-up` and `ctrl-down` on Linux, to move
channels up or down within their parent)
## Summary
This PR introduces the ability for channel administrators to reorder
channels within their parent context, providing better organizational
control over channel hierarchies. Users can now move channels up or down
relative to their siblings using keyboard shortcuts.
## Problem
Previously, channels were displayed in alphabetical order with no way to
customize their arrangement. This made it difficult for teams to
organize channels in a logical order that reflected their workflow or
importance, forcing users to prefix channel names with numbers or
special characters as a workaround.
## Solution
The implementation adds a persistent `channel_order` field to channels
that determines their display order within their parent. Channels with
the same parent are sorted by this field rather than alphabetically.
## Implementation Details
### Database Schema
Added a new column and index to support efficient ordering:
```sql
-- crates/collab/migrations/20250530175450_add_channel_order.sql
ALTER TABLE channels ADD COLUMN channel_order INTEGER NOT NULL DEFAULT 1;
CREATE INDEX CONCURRENTLY "index_channels_on_parent_path_and_order" ON "channels" ("parent_path", "channel_order");
```
### RPC Protocol
Extended the channel proto with ordering support:
```proto
// crates/proto/proto/channel.proto
message Channel {
    uint64 id = 1;
    string name = 2;
    ChannelVisibility visibility = 3;
    int32 channel_order = 4;
    repeated uint64 parent_path = 5;
}

message ReorderChannel {
    uint64 channel_id = 1;

    enum Direction {
        Up = 0;
        Down = 1;
    }

    Direction direction = 2;
}
```
### Server-side Logic
The reordering is handled by swapping `channel_order` values between
adjacent channels:
```rust
// crates/collab/src/db/queries/channels.rs
pub async fn reorder_channel(
    &self,
    channel_id: ChannelId,
    direction: proto::reorder_channel::Direction,
    user_id: UserId,
) -> Result<Vec<Channel>> {
    // ... (admin permission check for `user_id` and loading of `channel`
    //      for `channel_id` within the transaction `tx` elided) ...

    // Find the sibling channel to swap with
    let sibling_channel = match direction {
        proto::reorder_channel::Direction::Up => {
            // Find the channel with the highest order less than the current one
            channel::Entity::find()
                .filter(
                    channel::Column::ParentPath
                        .eq(&channel.parent_path)
                        .and(channel::Column::ChannelOrder.lt(channel.channel_order)),
                )
                .order_by_desc(channel::Column::ChannelOrder)
                .one(&*tx)
                .await?
        }
        // Similar logic for Down...
    };

    // Swap the channel_order values
    let temp_order = channel.channel_order;
    channel.channel_order = sibling_channel.channel_order;
    sibling_channel.channel_order = temp_order;

    // ... (persisting both rows and returning the updated channels elided) ...
}
```
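For completeness, here is a sketch of what the symmetric `Down` arm might look like. This is an assumption based on the `Up` arm above, not code quoted from the PR:

```rust
proto::reorder_channel::Direction::Down => {
    // Find the channel with the lowest order greater than the current one
    channel::Entity::find()
        .filter(
            channel::Column::ParentPath
                .eq(&channel.parent_path)
                .and(channel::Column::ChannelOrder.gt(channel.channel_order)),
        )
        .order_by_asc(channel::Column::ChannelOrder)
        .one(&*tx)
        .await?
}
```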
### Client-side Sorting
Optimized the sorting algorithm to avoid O(n²) complexity:
```rust
// crates/collab/src/db/queries/channels.rs
// Pre-compute sort keys for efficient O(n log n) sorting
// `channel_order_map` (built earlier) maps each channel id to its `channel_order`.
let mut channels_with_keys: Vec<(Vec<i32>, Channel)> = channels
    .into_iter()
    .map(|channel| {
        let mut sort_key = Vec::with_capacity(channel.parent_path.len() + 1);
        // Build sort key from parent path orders
        for parent_id in &channel.parent_path {
            sort_key.push(channel_order_map.get(parent_id).copied().unwrap_or(i32::MAX));
        }
        sort_key.push(channel.channel_order);
        (sort_key, channel)
    })
    .collect();

channels_with_keys.sort_by(|a, b| a.0.cmp(&b.0));
```
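To make the sort keys concrete, here is a tiny illustrative example (channel names and order values are made up): each key is the chain of ancestor `channel_order` values followed by the channel's own order, and lexicographic comparison keeps every subtree grouped under its parent:

```rust
// Illustrative only: keys for a parent (order 2), its two children, and an
// unrelated sibling (order 5). Lexicographic Vec comparison sorts them as listed.
let keys: Vec<Vec<i32>> = vec![
    vec![2],    // #parent         (order 2)
    vec![2, 1], // #parent/child-a (parent order 2, own order 1)
    vec![2, 3], // #parent/child-b (parent order 2, own order 3)
    vec![5],    // #other-parent   (order 5)
];
assert!(keys.windows(2).all(|pair| pair[0] <= pair[1]));
```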
### User Interface
Added keyboard shortcuts and proper context handling:
```json
// assets/keymaps/default-macos.json
{
  "context": "CollabPanel && not_editing",
  "bindings": {
    "cmd-up": "collab_panel::MoveChannelUp",
    "cmd-down": "collab_panel::MoveChannelDown"
  }
}
```
The CollabPanel now properly sets context to distinguish between editing
and navigation modes:
```rust
// crates/collab_ui/src/collab_panel.rs
fn dispatch_context(&self, window: &Window, cx: &Context<Self>) -> KeyContext {
    let mut dispatch_context = KeyContext::new_with_defaults();
    dispatch_context.add("CollabPanel");
    dispatch_context.add("menu");

    let identifier = if self.channel_name_editor.focus_handle(cx).is_focused(window) {
        "editing"
    } else {
        "not_editing"
    };
    dispatch_context.add(identifier);

    dispatch_context
}
```
## Testing
Comprehensive tests were added to verify:
- Basic reordering functionality (up/down movement)
- Boundary conditions (first/last channels)
- Permission checks (non-admins cannot reorder)
- Ordering persistence across server restarts
- Correct broadcasting of changes to channel members
## Migration Strategy
Existing channels are assigned initial `channel_order` values based on
their current alphabetical sorting to maintain the familiar order users
expect:
```sql
WITH ordered AS (
    SELECT
        id,
        ROW_NUMBER() OVER (
            PARTITION BY parent_path
            ORDER BY name, id
        ) AS new_order
    FROM channels
)
UPDATE channels
SET channel_order = ordered.new_order
FROM ordered
WHERE channels.id = ordered.id;
```
## Future Enhancements
While this PR provides basic reordering functionality, potential future
improvements could include:
- Drag-and-drop reordering in the UI
- Bulk reordering operations
- Custom sorting strategies (by activity, creation date, etc.)
## Checklist
- [x] Database migration included
- [x] Tests added for new functionality
- [x] Keybindings work on macOS and Linux
- [x] Permissions properly enforced
- [x] Error handling implemented throughout
- [x] Manual testing completed
- [x] Documentation updated
---------
Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
https://github.com/user-attachments/assets/78db908e-cfe5-4803-b0dc-4f33bc457840
* starts to extract usernames out of `users/` GitHub API responses, and
pass those along with e-mails in the collab sessions as part of the
`User` data
* adjusts various prefill and seed test methods so that the new data can
be retrieved from GitHub properly
* if there's an active call where guests have write permissions and
e-mails, allows triggering the `FillCoAuthors` action in the context of
the git panel, which will fill in `co-authored-by:` lines using e-mails
and names (or GitHub handle names if a name is absent)
* the action tries not to duplicate such entries if any are already
present, and adds the new lines below the rest of the commit input's
text (see the sketch after this list)
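A minimal sketch of the trailer-building and dedup behavior described above; the function, its signature, and the collaborator tuple shape are hypothetical, not the actual `FillCoAuthors` implementation:

```rust
// Hypothetical sketch only: builds `co-authored-by:` trailers from call
// participants that have write access and an e-mail, skipping entries that are
// already present in the commit message.
fn append_co_authors(commit_message: &mut String, collaborators: &[(String, String)]) {
    for (name_or_handle, email) in collaborators {
        let trailer = format!("co-authored-by: {name_or_handle} <{email}>");
        // Avoid duplicating trailers that are already in the message.
        if commit_message
            .lines()
            .any(|line| line.eq_ignore_ascii_case(&trailer))
        {
            continue;
        }
        if !commit_message.is_empty() && !commit_message.ends_with('\n') {
            commit_message.push('\n');
        }
        // New trailers go below the rest of the commit input's text.
        commit_message.push_str(&trailer);
        commit_message.push('\n');
    }
}
```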
Concerns:
* users with write permissions and no e-mails will be silently omitted;
adding odd entries to indicate this or raising pop-ups would be very
intrusive (maybe we could add `#`-prefixed comments?), and logging seems
pointless
* it's not clear whether the data prefill will run properly for existing
users; this seems tolerable for now, since we already seem to get
e-mails properly, so in the worst case we'll see GitHub handles instead
of names. This can be prefilled better later.
* e-mails and names for a particular project may not be what the user
wants.
E.g. my `.gitconfig` has
```
[user]
    email = mail4score@gmail.com
    # .....snip
[includeif "gitdir:**/work/zed/**/.git"]
    path = ~/.gitconfig.work
```
and that one has
```
[user]
    email = kirill@zed.dev
```
while my GitHub profile is configured so that `mail4score@gmail.com` is
the public commit e-mail.
So when I'm a participant in a Zed session, the wrong e-mail will be
picked.
The problem is that it's impossible for the host to get a remote
collaborator's git metadata for a particular project, as that might not
even exist on disk for the client.
It seems we might want to add some "project git URL <-> user name and
email" mapping in the settings(?).
The design of this is not very clear, so the PR concentrates on the
basics for now.
When https://github.com/zed-industries/zed/pull/23308 lands, most of the
issues can be solved by collaborators manually, before committing.
Release Notes:
- N/A
This adds the requirement for users to accept the terms of service the
first time they send a message with the Cloud provider.
Once this is out and in a nightly, we need to add the check to the
server side too, to authenticate access to the models.
Demo:
https://github.com/user-attachments/assets/0edebf74-8120-4fa2-b801-bb76f04e8a17
Release Notes:
- N/A
This PR removes the unused `ignore_checksum_mismatch` parameter to
`run_database_migrations`.
We were always passing `false`, which meant the behavior didn't need to
be parameterized.
Release Notes:
- N/A
This PR puts the initial infrastructure for the LLM service's database
in place.
The LLM service will be using a separate Postgres database, with its own
set of migrations.
Currently we only connect to the database in development, as we don't
yet have the database set up for the staging/production environments.
Release Notes:
- N/A
This PR reworks how we process Stripe events for reconciliation
purposes.
The previous approach in #15480 turned out not to be workable, because
Stripe event IDs are not strictly ordered. This meant we couldn't
reliably compare two arbitrary event IDs and determine which one was
more recent.
This new approach leans on the guidance that Stripe provides for
webhooks events:
> Webhook endpoints might occasionally receive the same event more than
once. You can guard against duplicated event receipts by logging the
[event IDs](https://docs.stripe.com/api/events/object#event_object-id)
you’ve processed, and then not processing already-logged events.
>
> https://docs.stripe.com/webhooks#handle-duplicate-events
We now record processed Stripe events in the `processed_stripe_events`
table and use this to filter out events that have already been
processed, so we do not process them again.
When retrieving events from the Stripe events API we now buffer the
unprocessed events so that we can sort them by their `created` timestamp
and process them in (roughly) the order they occurred.
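A minimal sketch of the dedup-and-sort flow described above; `StripeEvent` and the in-memory set are stand-ins for the real event type and the `processed_stripe_events` table, not the actual collab code:

```rust
use std::collections::HashSet;

// Hypothetical stand-in for a Stripe event: only the fields the flow needs.
struct StripeEvent {
    id: String,
    created: i64,
}

fn process_events(events: Vec<StripeEvent>, processed: &mut HashSet<String>) {
    // Drop events whose IDs we've already recorded (duplicate webhook deliveries).
    let mut unprocessed: Vec<StripeEvent> = events
        .into_iter()
        .filter(|event| !processed.contains(&event.id))
        .collect();

    // Sort by `created` so events are handled in (roughly) the order they occurred.
    unprocessed.sort_by_key(|event| event.created);

    for event in unprocessed {
        // ... reconcile billing state for `event` here ...
        processed.insert(event.id);
    }
}
```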
Release Notes:
- N/A
This PR adds a new `billing_subscriptions` table to the database, as
well as some accompanying models/queries.
In this table we store a minimal amount of data from Stripe:
- The Stripe customer ID
- The Stripe subscription ID
- The status of the Stripe subscription
This should be enough for interactions with the Stripe API (e.g., to
[create a customer portal
session](https://docs.stripe.com/api/customer_portal/sessions/create)),
as well as to determine whether a subscription is active (based on the
`status`).
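For illustration, the stored record roughly amounts to the following shape; the field names here are assumptions, not copied from the actual migration:

```rust
// Illustrative only: the minimal per-subscription data described above.
struct BillingSubscription {
    stripe_customer_id: String,
    stripe_subscription_id: String,
    // Mirrors Stripe's subscription status, e.g. "active", "past_due", "canceled";
    // an "active" status is what marks the subscription as usable.
    stripe_subscription_status: String,
}
```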
Release Notes:
- N/A
Note:
- We have disabled all tests that rely on Postgres in the Linux CI. We
only really need to test these once, and as macOS is our team's primary
platform, we'll only enable them on macOS for local reproduction.
- We have disabled all tests that rely on font metrics. We standardized
on Zed Mono in many of them, but our CoreText text system and Cosmic
text system proved to behave very differently. We should revisit this if
we decide to standardize our text system across platforms (e.g. using
HarfBuzz everywhere)
- Extended the condition timeout significantly. Our CI machines are slow
enough that this is causing spurious errors in random tests.
Release Notes:
- N/A
---------
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
This introduces semantic indexing in Zed based on chunking text from
files in the developer's workspace and creating vector embeddings using
an embedding model. As part of this, we've created an embeddings
provider trait that allows us to work with OpenAI, a local Ollama model,
or a Zed hosted embedding.
The semantic index is built by breaking down text for known
(programming) languages into manageable chunks that are smaller than the
max token size. Each chunk is then fed to a language model to create a
high dimensional vector which is then normalized to a unit vector to
allow fast comparison with other vectors with a simple dot product.
Alongside the vector, we store the path of the file and the range within
the document where the vector was sourced from.
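As a concrete illustration of the math described above (not the actual semantic-index code): once embeddings are normalized to unit length, cosine similarity reduces to a plain dot product:

```rust
// Normalize an embedding to unit length so similarity is a single dot product.
fn normalize(mut embedding: Vec<f32>) -> Vec<f32> {
    let norm = embedding.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm > 0.0 {
        for value in &mut embedding {
            *value /= norm;
        }
    }
    embedding
}

// For unit vectors, this dot product equals the cosine of the angle between them.
fn similarity(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}
```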
Zed will soon grok contextual similarity across different text snippets,
allowing for natural language search beyond keyword matching. This is
being put together both for human-based search as well as providing
results to Large Language Models to allow them to refine how they help
developers.
Remaining todo:
* [x] Change `provider` to `model` within the Zed-hosted embeddings
database (as it's currently a combo of the provider and the model in one
name)
Release Notes:
- N/A
---------
Co-authored-by: Nathan Sobo <nathan@zed.dev>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Conrad Irwin <conrad@zed.dev>
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Co-authored-by: Antonio <antonio@zed.dev>
This PR adds a REST API to the collab server for searching and
downloading extensions. Previously, we had implemented this API in
zed.dev directly, but this implementation is better, because we use the
collab database to store the download counts for extensions.
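A rough sketch of the route shape implied by the description; the paths, parameters, and handlers are assumptions, not the actual collab routes:

```rust
use axum::{routing::get, Router};

// Hypothetical handlers: the real ones query the collab database, filter by the
// search text, and record download counts.
async fn list_extensions() -> &'static str {
    "[]"
}

async fn download_extension() -> &'static str {
    "ok"
}

fn extension_routes() -> Router {
    Router::new()
        .route("/extensions", get(list_extensions))
        .route("/extensions/:extension_id/download", get(download_extension))
}
```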
Release Notes:
- N/A
---------
Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
Co-authored-by: Marshall <marshall@zed.dev>
Co-authored-by: Conrad <conrad@zed.dev>