Increase the number of parallel request handlers per connection (#35046)

Discussed this with @ConradIrwin. The problem we're having with collab
is that a bunch of LSP requests take a really long time to resolve,
particularly `RefreshCodeLens`. But those requests share a limited
amount of concurrency that we've allocated for all message traffic on
one connection. That said, there aren't _that_ many concurrent requests
made at any one time. The burst traffic seems to top out in the low
hundreds for these requests. So let's just expand the amount of space in
the queue to accommodate these long-running requests while we work on
upgrading our cloud infrastructure.

Release Notes:

- N/A

Co-authored-by: finn <finn@zed.dev>
Mikayla Maki 2025-07-24 11:44:26 -07:00 committed by GitHub
parent 1f7ff956bc
commit 13df1dd5ff
No known key found for this signature in database
GPG key ID: B5690EEEBB952194


@@ -829,7 +829,7 @@ impl Server {
         // This arrangement ensures we will attempt to process earlier messages first, but fall
         // back to processing messages arrived later in the spirit of making progress.
         let mut foreground_message_handlers = FuturesUnordered::new();
-        let concurrent_handlers = Arc::new(Semaphore::new(256));
+        let concurrent_handlers = Arc::new(Semaphore::new(512));
         loop {
             let next_message = async {
                 let permit = concurrent_handlers.clone().acquire_owned().await.unwrap();
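For illustration, here is a self-contained, std-only sketch of the pattern the diff relies on: a counting semaphore caps how many handlers run at once, and raising the permit count widens that cap. The real server uses `tokio::sync::Semaphore` with `acquire_owned` inside an async loop; the hand-rolled `Semaphore`, thread-based workers, and the limit of 4 below are simplified stand-ins, not the actual implementation.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Minimal counting semaphore built from a Mutex and a Condvar.
// (Stand-in for tokio::sync::Semaphore, which the server actually uses.)
struct Semaphore {
    permits: Mutex<usize>,
    cvar: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Semaphore { permits: Mutex::new(n), cvar: Condvar::new() }
    }

    // Block until a permit is available, then take it.
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            p = self.cvar.wait(p).unwrap();
        }
        *p -= 1;
    }

    // Return a permit and wake one waiter.
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cvar.notify_one();
    }
}

fn main() {
    let sem = Arc::new(Semaphore::new(4)); // cap: at most 4 handlers in flight
    // Track (current in-flight, max ever observed) to show the cap holds.
    let in_flight = Arc::new(Mutex::new((0usize, 0usize)));

    let handles: Vec<_> = (0..32)
        .map(|_| {
            let sem = Arc::clone(&sem);
            let in_flight = Arc::clone(&in_flight);
            thread::spawn(move || {
                sem.acquire();
                {
                    let mut f = in_flight.lock().unwrap();
                    f.0 += 1;
                    f.1 = f.1.max(f.0);
                }
                // Simulate a slow handler (e.g. a long-running LSP request).
                thread::sleep(Duration::from_millis(5));
                in_flight.lock().unwrap().0 -= 1;
                sem.release();
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    let max = in_flight.lock().unwrap().1;
    // Despite 32 queued requests, concurrency never exceeded the permit count.
    assert!(max <= 4);
    println!("max concurrent handlers observed: {}", max);
}
```

Doubling `Semaphore::new(256)` to `Semaphore::new(512)` in the server is exactly this knob: queued messages still wait their turn, but twice as many slow requests can make progress simultaneously.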