
Release Notes:

- Added channel reordering for administrators (use `cmd-up` and `cmd-down` on macOS, or `ctrl-up` and `ctrl-down` on Linux, to move channels up or down within their parent)

## Summary

This PR introduces the ability for channel administrators to reorder channels within their parent context, providing better organizational control over channel hierarchies. Users can now move channels up or down relative to their siblings using keyboard shortcuts.

## Problem

Previously, channels were displayed in alphabetical order with no way to customize their arrangement. This made it difficult for teams to organize channels in a logical order that reflected their workflow or importance, forcing users to prefix channel names with numbers or special characters as a workaround.

## Solution

The implementation adds a persistent `channel_order` field to channels that determines their display order within their parent. Channels with the same parent are sorted by this field rather than alphabetically.

## Implementation Details

### Database Schema

Added a new column and index to support efficient ordering:

```sql
-- crates/collab/migrations/20250530175450_add_channel_order.sql
ALTER TABLE channels ADD COLUMN channel_order INTEGER NOT NULL DEFAULT 1;

CREATE INDEX CONCURRENTLY "index_channels_on_parent_path_and_order"
ON "channels" ("parent_path", "channel_order");
```

### RPC Protocol

Extended the channel proto with ordering support:

```proto
// crates/proto/proto/channel.proto
message Channel {
  uint64 id = 1;
  string name = 2;
  ChannelVisibility visibility = 3;
  int32 channel_order = 4;
  repeated uint64 parent_path = 5;
}

message ReorderChannel {
  uint64 channel_id = 1;
  enum Direction {
    Up = 0;
    Down = 1;
  }
  Direction direction = 2;
}
```

### Server-side Logic

The reordering is handled by swapping `channel_order` values between adjacent channels:

```rust
// crates/collab/src/db/queries/channels.rs
pub async fn reorder_channel(
    &self,
    channel_id: ChannelId,
    direction: proto::reorder_channel::Direction,
    user_id: UserId,
) -> Result<Vec<Channel>> {
    // (excerpted; `channel` and `tx` are bound earlier in this function)

    // Find the sibling channel to swap with
    let sibling_channel = match direction {
        proto::reorder_channel::Direction::Up => {
            // Find channel with highest order less than current
            channel::Entity::find()
                .filter(
                    channel::Column::ParentPath
                        .eq(&channel.parent_path)
                        .and(channel::Column::ChannelOrder.lt(channel.channel_order)),
                )
                .order_by_desc(channel::Column::ChannelOrder)
                .one(&*tx)
                .await?
        }
        // Similar logic for Down...
    };

    // Swap the channel_order values
    let temp_order = channel.channel_order;
    channel.channel_order = sibling_channel.channel_order;
    sibling_channel.channel_order = temp_order;
}
```
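For intuition, here is a minimal, self-contained sketch of the same swap strategy applied to an in-memory slice rather than the database; `SimpleChannel` and `move_channel_up` are illustrative names only and are not part of this PR:

```rust
// Illustrative only: models "swap channel_order with the nearest preceding
// sibling" on a plain slice instead of via SeaORM queries.
#[derive(Debug)]
struct SimpleChannel {
    id: u64,
    parent_path: Vec<u64>,
    channel_order: i32,
}

fn move_channel_up(channels: &mut [SimpleChannel], id: u64) {
    let Some(current) = channels.iter().position(|c| c.id == id) else {
        return;
    };
    let parent = channels[current].parent_path.clone();
    let order = channels[current].channel_order;

    // The sibling with the highest order that is still below the current channel.
    let sibling = channels
        .iter()
        .enumerate()
        .filter(|(_, c)| c.parent_path == parent && c.channel_order < order)
        .max_by_key(|(_, c)| c.channel_order)
        .map(|(index, _)| index);

    // Moving the first child up is a no-op; "down" is the symmetric case.
    if let Some(sibling) = sibling {
        let sibling_order = channels[sibling].channel_order;
        channels[sibling].channel_order = order;
        channels[current].channel_order = sibling_order;
    }
}
```

The server-side version performs the equivalent swap inside a database transaction and returns the updated channels so the change can be broadcast to channel members.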
### Client-side Sorting

Optimized the sorting algorithm to avoid O(n²) complexity:

```rust
// crates/collab/src/db/queries/channels.rs
// Pre-compute sort keys for efficient O(n log n) sorting
let mut channels_with_keys: Vec<(Vec<i32>, Channel)> = channels
    .into_iter()
    .map(|channel| {
        let mut sort_key = Vec::with_capacity(channel.parent_path.len() + 1);
        // Build sort key from parent path orders
        for parent_id in &channel.parent_path {
            sort_key.push(channel_order_map.get(parent_id).copied().unwrap_or(i32::MAX));
        }
        sort_key.push(channel.channel_order);
        (sort_key, channel)
    })
    .collect();

channels_with_keys.sort_by(|a, b| a.0.cmp(&b.0));
```

### User Interface

Added keyboard shortcuts and proper context handling:

```json
// assets/keymaps/default-macos.json
{
  "context": "CollabPanel && not_editing",
  "bindings": {
    "cmd-up": "collab_panel::MoveChannelUp",
    "cmd-down": "collab_panel::MoveChannelDown"
  }
}
```

The CollabPanel now properly sets context to distinguish between editing and navigation modes:

```rust
// crates/collab_ui/src/collab_panel.rs
fn dispatch_context(&self, window: &Window, cx: &Context<Self>) -> KeyContext {
    let mut dispatch_context = KeyContext::new_with_defaults();
    dispatch_context.add("CollabPanel");
    dispatch_context.add("menu");

    let identifier = if self.channel_name_editor.focus_handle(cx).is_focused(window) {
        "editing"
    } else {
        "not_editing"
    };
    dispatch_context.add(identifier);

    dispatch_context
}
```

## Testing

Comprehensive tests were added to verify:

- Basic reordering functionality (up/down movement)
- Boundary conditions (first/last channels)
- Permission checks (non-admins cannot reorder)
- Ordering persistence across server restarts
- Correct broadcasting of changes to channel members

## Migration Strategy

Existing channels are assigned initial `channel_order` values based on their current alphabetical sorting to maintain the familiar order users expect:

```sql
UPDATE channels
SET channel_order = ranked.new_order
FROM (
    SELECT id,
           ROW_NUMBER() OVER (PARTITION BY parent_path ORDER BY name, id) AS new_order
    FROM channels
) AS ranked
WHERE channels.id = ranked.id;
```

## Future Enhancements

While this PR provides basic reordering functionality, potential future improvements could include:

- Drag-and-drop reordering in the UI
- Bulk reordering operations
- Custom sorting strategies (by activity, creation date, etc.)

## Checklist

- [x] Database migration included
- [x] Tests added for new functionality
- [x] Keybindings work on macOS and Linux
- [x] Permissions properly enforced
- [x] Error handling implemented throughout
- [x] Manual testing completed
- [x] Documentation updated

---------

Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
mod billing_subscription_tests;
mod buffer_tests;
mod channel_tests;
mod contributor_tests;
mod db_tests;
// we only run postgres tests on macos right now
#[cfg(target_os = "macos")]
mod embedding_tests;
mod extension_tests;
mod feature_flag_tests;
mod message_tests;
mod processed_stripe_event_tests;
mod user_tests;

use crate::migrations::run_database_migrations;

use super::*;
use gpui::BackgroundExecutor;
use parking_lot::Mutex;
use sea_orm::ConnectionTrait;
use sqlx::migrate::MigrateDatabase;
use std::sync::{
    Arc,
    atomic::{AtomicI32, AtomicU32, Ordering::SeqCst},
};

pub struct TestDb {
    pub db: Option<Arc<Database>>,
    pub connection: Option<sqlx::AnyConnection>,
}

impl TestDb {
    /// Creates an in-memory SQLite test database and applies the bundled test schema.
    pub fn sqlite(executor: BackgroundExecutor) -> Self {
        let url = "sqlite::memory:";
        let runtime = tokio::runtime::Builder::new_current_thread()
            .enable_io()
            .enable_time()
            .build()
            .unwrap();

        let mut db = runtime.block_on(async {
            let mut options = ConnectOptions::new(url);
            options.max_connections(5);
            let mut db = Database::new(options, Executor::Deterministic(executor.clone()))
                .await
                .unwrap();
            let sql = include_str!(concat!(
                env!("CARGO_MANIFEST_DIR"),
                "/migrations.sqlite/20221109000000_test_schema.sql"
            ));
            db.pool
                .execute(sea_orm::Statement::from_string(
                    db.pool.get_database_backend(),
                    sql,
                ))
                .await
                .unwrap();
            db.initialize_notification_kinds().await.unwrap();
            db
        });

        db.test_options = Some(DatabaseTestOptions {
            runtime,
            query_failure_probability: parking_lot::Mutex::new(0.0),
        });

        Self {
            db: Some(Arc::new(db)),
            connection: None,
        }
    }

    /// Creates a uniquely named local Postgres database and runs the full migration suite.
    pub fn postgres(executor: BackgroundExecutor) -> Self {
        // Serialize Postgres test-database creation across concurrently running tests.
        static LOCK: Mutex<()> = Mutex::new(());

        let _guard = LOCK.lock();
        let mut rng = StdRng::from_entropy();
        let url = format!(
            "postgres://postgres@localhost/zed-test-{}",
            rng.r#gen::<u128>()
        );
        let runtime = tokio::runtime::Builder::new_current_thread()
            .enable_io()
            .enable_time()
            .build()
            .unwrap();

        let mut db = runtime.block_on(async {
            sqlx::Postgres::create_database(&url)
                .await
                .expect("failed to create test db");
            let mut options = ConnectOptions::new(url);
            options
                .max_connections(5)
                .idle_timeout(Duration::from_secs(0));
            let mut db = Database::new(options, Executor::Deterministic(executor.clone()))
                .await
                .unwrap();
            let migrations_path = concat!(env!("CARGO_MANIFEST_DIR"), "/migrations");
            run_database_migrations(db.options(), migrations_path)
                .await
                .unwrap();
            db.initialize_notification_kinds().await.unwrap();
            db
        });

        db.test_options = Some(DatabaseTestOptions {
            runtime,
            query_failure_probability: parking_lot::Mutex::new(0.0),
        });

        Self {
            db: Some(Arc::new(db)),
            connection: None,
        }
    }

    pub fn db(&self) -> &Arc<Database> {
        self.db.as_ref().unwrap()
    }

    pub fn set_query_failure_probability(&self, probability: f64) {
        let database = self.db.as_ref().unwrap();
        let test_options = database.test_options.as_ref().unwrap();
        *test_options.query_failure_probability.lock() = probability;
    }
}

/// Runs the given async test body against both Postgres (macOS only) and SQLite.
#[macro_export]
macro_rules! test_both_dbs {
    ($test_name:ident, $postgres_test_name:ident, $sqlite_test_name:ident) => {
        #[cfg(target_os = "macos")]
        #[gpui::test]
        async fn $postgres_test_name(cx: &mut gpui::TestAppContext) {
            let test_db = $crate::db::TestDb::postgres(cx.executor().clone());
            $test_name(test_db.db()).await;
        }

        #[gpui::test]
        async fn $sqlite_test_name(cx: &mut gpui::TestAppContext) {
            let test_db = $crate::db::TestDb::sqlite(cx.executor().clone());
            $test_name(test_db.db()).await;
        }
    };
}
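
// Usage sketch (hypothetical test body; the real callers are presumably the
// test modules declared at the top of this file):
//
//     async fn test_example(db: &Arc<Database>) {
//         let _user_id = new_test_user(db, "someone@example.com").await;
//     }
//     test_both_dbs!(test_example, test_example_postgres, test_example_sqlite);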

impl Drop for TestDb {
    fn drop(&mut self) {
        let db = self.db.take().unwrap();
        if let sea_orm::DatabaseBackend::Postgres = db.pool.get_database_backend() {
            db.test_options.as_ref().unwrap().runtime.block_on(async {
                use util::ResultExt;
                // Terminate any other connections so the test database can be dropped.
                let query = "
                    SELECT pg_terminate_backend(pg_stat_activity.pid)
                    FROM pg_stat_activity
                    WHERE
                        pg_stat_activity.datname = current_database() AND
                        pid <> pg_backend_pid();
                ";
                db.pool
                    .execute(sea_orm::Statement::from_string(
                        db.pool.get_database_backend(),
                        query,
                    ))
                    .await
                    .log_err();
                sqlx::Postgres::drop_database(db.options.get_url())
                    .await
                    .log_err();
            })
        }
    }
}

#[track_caller]
fn assert_channel_tree_matches(actual: Vec<Channel>, expected: Vec<Channel>) {
    let expected_channels = expected.into_iter().collect::<HashSet<_>>();
    let actual_channels = actual.into_iter().collect::<HashSet<_>>();
    pretty_assertions::assert_eq!(expected_channels, actual_channels);
}

fn channel_tree(channels: &[(ChannelId, &[ChannelId], &'static str)]) -> Vec<Channel> {
    use std::collections::HashMap;

    let mut result = Vec::new();
    let mut order_by_parent: HashMap<Vec<ChannelId>, i32> = HashMap::new();

    for (id, parent_path, name) in channels {
        let parent_key = parent_path.to_vec();
        // Root channels keep the default order of 1; siblings under the same
        // parent are numbered 1, 2, 3, ... in declaration order.
        let order = if parent_key.is_empty() {
            1
        } else {
            *order_by_parent
                .entry(parent_key.clone())
                .and_modify(|e| *e += 1)
                .or_insert(1)
        };

        result.push(Channel {
            id: *id,
            name: name.to_string(),
            visibility: ChannelVisibility::Members,
            parent_path: parent_key,
            channel_order: order,
        });
    }

    result
}
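
// Usage sketch (hypothetical ids and channel list): build the expected tree
// and compare it with what the database returned, e.g.
//
//     let expected = channel_tree(&[
//         (root_id, &[], "root"),
//         (child_id, &[root_id], "child"),
//     ]);
//     assert_channel_tree_matches(actual_channels, expected);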

static GITHUB_USER_ID: AtomicI32 = AtomicI32::new(5);

async fn new_test_user(db: &Arc<Database>, email: &str) -> UserId {
    db.create_user(
        email,
        None,
        false,
        NewUserParams {
            github_login: email[0..email.find('@').unwrap()].to_string(),
            github_user_id: GITHUB_USER_ID.fetch_add(1, SeqCst),
        },
    )
    .await
    .unwrap()
    .user_id
}

static TEST_CONNECTION_ID: AtomicU32 = AtomicU32::new(1);

fn new_test_connection(server: ServerId) -> ConnectionId {
    ConnectionId {
        id: TEST_CONNECTION_ID.fetch_add(1, SeqCst),
        owner_id: server.0 as u32,
    }
}