The local server that we spin up to receive OAuth callbacks isn't
invoked when an error occurs, and making next-auth invoke it in that
case is non-trivial. Besides, the user could explicitly close the
browser window before the callback is ever invoked.
With this commit, the user can start a new sign-in even while a
previous authentication attempt is still in progress. Instead of
waiting up to 10 minutes before killing the local HTTP server when no
callback arrives, we now check for a response once per second, up to
100 times. This gives us a chance to detect that a new authentication
has started in the meantime and, if so, abort the current
authentication flow.
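A minimal sketch of that polling loop, assuming the `smol` runtime for
timers; `poll_for_callback` and the generation counter are hypothetical
names for however the real code tracks the latest authentication
attempt and the received response:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::time::Duration;

// Hypothetical names throughout; this is a sketch, not the actual code.
async fn wait_for_callback(
    generation: Arc<AtomicUsize>,
    my_generation: usize,
    poll_for_callback: impl Fn() -> Option<String>,
) -> Option<String> {
    for _ in 0..100 {
        // Abort if a newer authentication flow started in the meantime.
        if generation.load(Ordering::SeqCst) != my_generation {
            return None;
        }
        if let Some(response) = poll_for_callback() {
            return Some(response);
        }
        // Check again in one second (assuming `smol`'s timer).
        smol::Timer::after(Duration::from_secs(1)).await;
    }
    None // timed out after ~100 seconds; tear down the local HTTP server
}
```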
* Ensure we send updates to multiple clients for the same user
* Ensure we send a full contacts update on initial connection
As part of this commit, I fixed an issue where we couldn't disconnect and reconnect in tests. The first disconnect made the I/O future terminate asynchronously, which caused us to sign out even though the active connection no longer belonged to that future. I added a guard to ensure that we only sign out if the terminating I/O future is associated with the current connection.
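A minimal sketch of that guard, with hypothetical names; the real
client tracks its connection state differently:

```rust
// Sketch only: `Client` and `on_io_future_terminated` are illustrative.
struct Client {
    current_connection_id: Option<u64>,
}

impl Client {
    fn on_io_future_terminated(&mut self, connection_id: u64) {
        // After a reconnect, the first connection's I/O future may still
        // terminate asynchronously; ignore it unless it belongs to the
        // connection that is currently active.
        if self.current_connection_id == Some(connection_id) {
            self.current_connection_id = None;
            self.sign_out();
        }
    }

    fn sign_out(&mut self) {
        // ...
    }
}
```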
Also, allow arbitrary types to be used as Actions via the impl_actions macro
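For illustration, a hedged sketch of what that enables, assuming the
`impl_actions!(namespace, [types])` form; the exact macro signature in
this revision may differ, and `SelectToSymbol` is a made-up action:

```rust
use gpui::impl_actions;
use serde::Deserialize;

// Hypothetical action type: any deserializable struct can back an action.
#[derive(Clone, Deserialize, PartialEq)]
pub struct SelectToSymbol {
    pub symbol: String,
}

// Registers the type as an action in the `editor` namespace (assumed form).
impl_actions!(editor, [SelectToSymbol]);
```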
Co-authored-by: Nathan Sobo <nathan@zed.dev>
Co-authored-by: Keith Simmons <keith@zed.dev>
Previously, we achieved this by deleting the keychain item, but that deletion can sometimes fail, leading to an infinite loop. Now, we explicitly skip the keychain when reattempting authentication after a failure.
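A minimal sketch of the new control flow, with hypothetical names and
stubbed helpers:

```rust
struct Credentials;

// Retries after a failure pass `try_keychain: false`, so a stale keychain
// item can no longer cause an infinite authentication loop.
async fn authenticate(try_keychain: bool) -> Result<Credentials, String> {
    if try_keychain {
        // First attempt: reuse saved credentials if the keychain has them.
        if let Some(credentials) = read_keychain_credentials() {
            return Ok(credentials);
        }
    }
    authenticate_with_browser().await
}

fn read_keychain_credentials() -> Option<Credentials> {
    None // stub for illustration
}

async fn authenticate_with_browser() -> Result<Credentials, String> {
    Ok(Credentials) // stub for illustration
}
```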
Co-Authored-By: Max Brunsfeld <maxbrunsfeld@gmail.com>
* Make advance_clock more realistic by waking timers in order,
instead of all at once (see the sketch after this list).
* Don't advance the clock when simulating random delays.
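A minimal sketch of the first point, using a hypothetical `FakeClock`
in place of the real deterministic executor:

```rust
use std::time::{Duration, Instant};

// Timers are woken one deadline at a time, in order, not all at once.
struct FakeClock {
    now: Instant,
    timers: Vec<(Instant, Box<dyn FnOnce()>)>,
}

impl FakeClock {
    fn advance_clock(&mut self, duration: Duration) {
        let target = self.now + duration;
        loop {
            // Find the earliest timer expiring at or before the target.
            let next = self
                .timers
                .iter()
                .enumerate()
                .filter(|(_, (deadline, _))| *deadline <= target)
                .min_by_key(|(_, (deadline, _))| *deadline)
                .map(|(index, _)| index);
            let Some(index) = next else { break };
            let (deadline, wake) = self.timers.remove(index);
            // Advance to each deadline in turn before waking its timer, so
            // callbacks observe the same ordering they would in real time.
            self.now = deadline;
            wake();
        }
        self.now = target;
    }
}
```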
Co-Authored-By: Keith Simmons <keith@zed.dev>
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
This allows us to drop the context *after* we've run all futures to
completion, which is crucial: otherwise we'd never drop entities or
flush effects.
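A minimal sketch of the ordering, assuming a futures `LocalPool` in
place of the real executor, with a stand-in `Context`:

```rust
use futures::executor::LocalPool;

// Stand-in for the app context; its Drop impl is where entities are
// dropped and effects are flushed.
struct Context;

impl Drop for Context {
    fn drop(&mut self) {
        // drop entities / flush effects
    }
}

fn shut_down(mut pool: LocalPool, cx: Context) {
    pool.run_until_stalled(); // 1. run all futures to completion
    drop(cx); // 2. only then drop the context
}
```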
Using a bounded channel may have blocked the collaboration server
from making progress while handling RPC traffic.
There's no need to apply backpressure to calling code within the
same process: suspending a task that is attempting to call `send` costs
even more memory than just buffering a protobuf message.
We do still want a bounded channel for incoming messages, so that
we apply backpressure to noisy peers, blocking their writes rather than
letting them buffer arbitrarily many messages in our server.
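A sketch of the two channel choices using `futures::channel::mpsc`; the
incoming capacity below is illustrative:

```rust
use futures::channel::mpsc;

// Illustrative capacity for the bounded incoming channel.
const INCOMING_BUFFER: usize = 64;

fn message_channels<M>() -> (
    mpsc::UnboundedSender<M>,
    mpsc::UnboundedReceiver<M>,
    mpsc::Sender<M>,
    mpsc::Receiver<M>,
) {
    // Outgoing: unbounded, so in-process callers never suspend on `send`;
    // a suspended task costs more memory than a buffered protobuf message.
    let (outgoing_tx, outgoing_rx) = mpsc::unbounded();
    // Incoming: bounded, so once the buffer fills we stop reading from the
    // socket and a noisy peer's writes block.
    let (incoming_tx, incoming_rx) = mpsc::channel(INCOMING_BUFFER);
    (outgoing_tx, outgoing_rx, incoming_tx, incoming_rx)
}
```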
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
We hold these locks only briefly anyway, and using an async lock
could cause parallel sends to complete in an order different from the
order in which `send`/`request` was called.
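A sketch of the reasoning, with a hypothetical outgoing buffer:

```rust
use std::sync::Mutex;

// A synchronous mutex is held only for the duration of the push and never
// across an `await`, so messages land in the buffer in `send`-call order.
struct Peer {
    outgoing: Mutex<Vec<Vec<u8>>>,
}

impl Peer {
    fn send(&self, message: Vec<u8>) {
        // An async lock could suspend two concurrent senders and resume
        // them in a different order, reordering their messages; contention
        // on a sync lock is resolved immediately instead.
        self.outgoing.lock().unwrap().push(message);
    }
}
```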
Co-Authored-By: Nathan Sobo <nathan@zed.dev>