Use an unbounded channel for peer's outgoing messages
Using a bounded channel may have blocked the collaboration server from making progress while handling RPC traffic. There's no need to apply backpressure to calling code within the same process: suspending a task that is attempting to call `send` has an even greater memory cost than just buffering a protobuf message. We do still want a bounded channel for incoming messages, so that we apply backpressure to noisy peers, blocking their writes rather than letting them buffer arbitrarily many messages in our server.

Co-Authored-By: Antonio Scandurra <me@as-cii.com>
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
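The tradeoff described above can be summarized in a short sketch. This is not the actual `Peer` implementation touched by this commit, only an illustration using the `futures` crate's `mpsc` channels; the `Connection` and `Envelope` names and the buffer size of 64 are assumptions made for the example.

```rust
// Minimal sketch of the channel topology the commit message describes.
// Outgoing: unbounded, so callers in the same process never suspend on send.
// Incoming: bounded, so a noisy peer stalls once the buffer fills instead of
// growing server memory without limit.

use futures::channel::mpsc;
use futures::{SinkExt, StreamExt};

/// Hypothetical stand-in for a serialized protobuf envelope.
type Envelope = Vec<u8>;

/// Hypothetical per-connection state, loosely modeled on the idea above.
struct Connection {
    /// Messages queued for the network writer task; queuing never suspends.
    outgoing_tx: mpsc::UnboundedSender<Envelope>,
    /// Messages read off the socket; the reader task (and therefore the
    /// remote peer's writes) pauses when we fall behind.
    incoming_tx: mpsc::Sender<Envelope>,
}

fn main() {
    futures::executor::block_on(async {
        let (outgoing_tx, mut outgoing_rx) = mpsc::unbounded::<Envelope>();
        // Buffer size chosen arbitrarily for the example.
        let (incoming_tx, mut incoming_rx) = mpsc::channel::<Envelope>(64);
        let conn = Connection { outgoing_tx, incoming_tx };

        // Sending outbound never awaits: it either enqueues or fails if the
        // writer side has been dropped.
        conn.outgoing_tx
            .unbounded_send(b"hello".to_vec())
            .expect("writer task has shut down");

        // Feeding inbound *does* await once the buffer is full, which is the
        // backpressure applied to a noisy peer's connection.
        let mut incoming_tx = conn.incoming_tx.clone();
        incoming_tx.send(b"from the wire".to_vec()).await.unwrap();

        // Downstream consumers simply drain the receivers.
        assert_eq!(outgoing_rx.next().await, Some(b"hello".to_vec()));
        assert_eq!(incoming_rx.next().await, Some(b"from the wire".to_vec()));
    });
}
```

Because enqueueing an outgoing message no longer suspends, test helpers such as `FakeServer::send` and `FakeServer::respond` in the diff below no longer need to be `async`.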
parent 82afacd33d
commit d4fe1115e7

7 changed files with 341 additions and 472 deletions
@@ -118,8 +118,8 @@ impl FakeServer {
         self.forbid_connections.store(false, SeqCst);
     }
 
-    pub async fn send<T: proto::EnvelopedMessage>(&self, message: T) {
-        self.peer.send(self.connection_id(), message).await.unwrap();
+    pub fn send<T: proto::EnvelopedMessage>(&self, message: T) {
+        self.peer.send(self.connection_id(), message).unwrap();
     }
 
     pub async fn receive<M: proto::EnvelopedMessage>(&self) -> Result<TypedEnvelope<M>> {
@@ -148,7 +148,7 @@ impl FakeServer {
         receipt: Receipt<T>,
         response: T::Response,
     ) {
-        self.peer.respond(receipt, response).await.unwrap()
+        self.peer.respond(receipt, response).unwrap()
     }
 
     fn connection_id(&self) -> ConnectionId {