zed/crates/collab
Upgrade async-tungstenite to v17 and update usage accordingly (#15219)
Marshall Bowers · 9d736fe80c · 2024-07-25 15:53:22 -04:00
This PR upgrades `async-tungstenite` to v17.0.3.

We previously attempted upgrading `async-tungstenite` in #15039, but
broke authentication with collab in the process.

Upon further investigation, I determined that the root cause was this change in
`tungstenite` v0.17.0:

> Overhaul of the client's request generation process. Now the users are
able to pass the constructed `http::Request` "as is" to
`tungstenite-rs`, letting the library to check the correctness of the
request and specifying their own headers (including its own key if
necessary). No changes for those ones who used the client in a normal
way by connecting using a URL/URI (most common use-case).

We _were_ relying on passing an `http::Request` directly to
`tungstenite`, meaning we did not benefit from the changes to the common
path (of passing a URL/URI).

As a result of that change, our requests were missing the
`Sec-WebSocket-Key` header that `tungstenite` would otherwise set for
us.

Since we were only passing a custom `http::Request` in order to set extra
headers, the approach has been adjusted: we now construct the initial
WebSocket request using `tungstenite`'s
`IntoClientRequest::into_client_request` and then set our additional
headers on the resulting request.
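
A minimal sketch of this pattern (assuming anyhow for error handling; the URL, header, and helper function here are illustrative, not collab's actual code):

use async_tungstenite::tungstenite::client::IntoClientRequest;
use async_tungstenite::tungstenite::http::{HeaderValue, Request};

// Hypothetical helper: build a client request carrying a custom header.
fn build_request(url: &str, token: &str) -> anyhow::Result<Request<()>> {
    // `into_client_request` fills in the handshake headers that a
    // hand-built `http::Request` would now be missing, including
    // `Sec-WebSocket-Key`.
    let mut request = url.into_client_request()?;
    // Then layer our own headers on top of the generated request.
    request
        .headers_mut()
        .insert("Authorization", HeaderValue::from_str(token)?);
    Ok(request)
}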

Release Notes:

- N/A

Zed Server

This crate is what we run at https://collab.zed.dev.

It contains our back-end logic for collaboration. The Zed client connects to it over a WebSocket after authenticating via https://zed.dev, which is a separate repo running on Vercel.

Local Development

Database setup

Before you can run the collab server locally, you'll need to set up a zed Postgres database.

script/bootstrap

This script will set up the zed Postgres database, and populate it with some users. It requires internet access, because it fetches some users from the GitHub API.

The script will create several admin users, who you'll sign in as by default when developing locally. The GitHub logins for the default users are specified in the seed.default.json file.

To use a different set of admin users, create crates/collab/seed.json.

{
  "admins": ["yourgithubhere"],
  "channels": ["zed"],
  "number_of_users": 20
}
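
Here, admins lists the GitHub logins to seed as admin users; channels and number_of_users appear to control which channels get created and how many additional users are fetched from the GitHub API.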

Testing collaborative features locally

In one terminal, run Zed's collaboration server and the LiveKit dev server:

foreman start

In a second terminal, run two or more instances of Zed.

script/zed-local -2

This script starts one to four instances of Zed, depending on the -2, -3 or -4 flags. Each instance will be connected to the local collab server, signed in as a different user from seed.json or seed.default.json.

Deployment

We run two instances of collab:

  • staging, deployed from the collab-staging tag
  • production, deployed from the collab-production tag

Both of these run on the Kubernetes cluster hosted in DigitalOcean.

Deployment is triggered by pushing the collab-staging (or collab-production) tag to GitHub. The best way to do this is:

  • ./script/deploy-collab staging
  • ./script/deploy-collab production

You can tell what is currently deployed with ./script/what-is-deployed.

Database Migrations

To create a new migration:

./script/create-migration <name>
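
For example, ./script/create-migration add_widgets (a hypothetical name) should create a timestamped SQL file under crates/collab/migrations/ for you to fill in.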

Migrations are run automatically on service start, so run foreman start again. The service will crash if the migrations fail.

When you create a new migration, you also need to update the SQLite schema that is used for testing.