# Zed Server
This crate is what we run at https://collab.zed.dev.
It contains our back-end logic for collaboration, to which we connect from the Zed client via a websocket after authenticating via https://zed.dev, which is a separate repo running on Vercel.
## Local Development

### Database setup
Before you can run the collab server locally, you'll need to set up a `zed` Postgres database.

```
script/bootstrap
```
This script will set up the `zed` Postgres database, and populate it with some users. It requires internet access, because it fetches some users from the GitHub API.
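If you want to confirm that the database exists after bootstrapping, a quick `psql` check works. This assumes Postgres is running locally and that your current user can connect with default credentials; adjust flags such as `-h` or `-U` for your setup.

```
# List the tables in the freshly created `zed` database.
# Assumes a local Postgres instance reachable with default credentials.
psql zed -c '\dt'
```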
The script will create several admin users, who you'll sign in as by default when developing locally. The GitHub logins for the default users are specified in the `seed.default.json` file.
To use a different set of admin users, create `crates/collab/seed.json`:

```json
{
  "admins": ["yourgithubhere"],
  "channels": ["zed"],
  "number_of_users": 20
}
```
### Testing collaborative features locally
In one terminal, run Zed's collaboration server and the livekit dev server:

```
foreman start
```
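`foreman` starts the processes defined in the repo's `Procfile`. As a rough sketch only (the process names, commands, and file location below are illustrative assumptions, not the actual contents), it amounts to something like:

```
# Illustrative Procfile sketch, not the repository's actual file.
# foreman runs one long-lived process per line: a name, a colon, then a command.
collab: cargo run --package collab    # assumption: the real entry may pass additional arguments
livekit: livekit-server --dev         # assumption: the "livekit dev server" mentioned above
```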
In a second terminal, run two or more instances of Zed.

```
script/zed-local -2
```
This script starts one to four instances of Zed, depending on the `-2`, `-3`, or `-4` flags. Each instance will be connected to the local `collab` server, signed in as a different user from `seed.json` or `seed.default.json`.
## Deployment
We run two instances of collab:
- Staging (https://staging-collab.zed.dev)
- Production (https://collab.zed.dev)
Both of these run on the Kubernetes cluster hosted in DigitalOcean.

Deployment is triggered by pushing to the `collab-staging` (or `collab-production`) tag on GitHub. The best way to do this is:

```
./script/deploy-collab staging
./script/deploy-collab production
```
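Under the hood, this amounts to moving the corresponding tag and pushing it. Roughly (a sketch of the idea, not the script's exact contents):

```
# Deploy to staging by pointing the collab-staging tag at the commit you
# want to ship and force-pushing it (production uses collab-production).
git tag -f collab-staging
git push -f origin collab-staging
```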
You can tell what is currently deployed with `./script/what-is-deployed`.
## Database Migrations
To create a new migration:

```
./script/create-migration <name>
```
Migrations are run automatically on service start, so run `foreman start` again. The service will crash if the migrations fail.
When you create a new migration, you also need to update the SQLite schema that is used for testing.
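Putting it together, a typical flow looks like this (the migration name below is made up for illustration):

```
# Create a new SQL migration for the collab database, then edit it and
# mirror the change in the SQLite schema used by the tests.
./script/create-migration add_widgets_table
```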