The major change here is a refactoring to allow controlling the save
behavior when closing items, which is pre-work needed for the vim
command palette.
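Roughly the shape of the refactor, as a minimal sketch; `SaveBehavior` and its variants are illustrative names for this description, not necessarily the actual API:

```rust
// Closing an item now takes an explicit save behavior instead of
// always prompting, so callers (like vim commands) can choose.
#[derive(Clone, Copy, Debug)]
enum SaveBehavior {
    /// Ask the user before saving dirty items (the old default).
    PromptOnWrite,
    /// Save dirty items without prompting (e.g. vim's `:wq`).
    SilentlyOverwrite,
    /// Close without saving (e.g. vim's `:q!`).
    DontSave,
}

struct Item {
    is_dirty: bool,
}

fn close_item(item: &mut Item, save_behavior: SaveBehavior) {
    if item.is_dirty {
        match save_behavior {
            SaveBehavior::PromptOnWrite => println!("prompting user to save..."),
            SaveBehavior::SilentlyOverwrite => {
                item.is_dirty = false;
                println!("saved without prompting");
            }
            SaveBehavior::DontSave => println!("discarding unsaved changes"),
        }
    }
    println!("item closed");
}

fn main() {
    // A vim command can now pick the behavior per invocation.
    close_item(&mut Item { is_dirty: true }, SaveBehavior::SilentlyOverwrite);
}
```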
For zed-industries/community#1868
We fixed this while brainstorming a better approach to handling server
binaries. Since we already have a fix for this one, we might as well
keep it from being broken while the new mechanism is built.
Release Notes:
- Fixed Python language server not launching without a network
connection.
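A minimal sketch of the fix's likely shape, assuming the bug was that a failed network fetch aborted the launch even when a previously downloaded binary existed; the function names and path here are illustrative, not Zed's actual adapter code:

```rust
use std::path::PathBuf;

// Stand-in for the network fetch of the latest language server release.
fn fetch_latest_binary() -> Result<PathBuf, std::io::Error> {
    Err(std::io::Error::new(
        std::io::ErrorKind::NotConnected,
        "no network connection",
    ))
}

// Stand-in for looking up a previously downloaded binary on disk.
fn cached_binary() -> Option<PathBuf> {
    Some(PathBuf::from("path/to/cached/language-server"))
}

fn server_binary() -> Option<PathBuf> {
    match fetch_latest_binary() {
        Ok(path) => Some(path),
        // Previously an error here could prevent the launch entirely;
        // falling back to the cached copy keeps the server working offline.
        Err(_) => cached_binary(),
    }
}

fn main() {
    println!("{:?}", server_binary());
}
```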
This fixes a bug that was preventing keystrokes from being shown in the
tooltips for the "Buffer Search" and "Inline Assist" buttons in the toolbar.
This commit makes the behavior of `keystrokes_for_action` more consistent with
the behavior of `available_actions`. It seems reasonable that, if a child view
defines a keystroke for an action and that action is handled on a parent, we
should show the child's keystroke.
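A simplified model of that lookup rule, using toy types rather than GPUI's real ones: search for a binding starting at the focused (child) view and walk up the ancestor chain, so a child-defined keystroke is shown even when the action is handled by a parent.

```rust
struct View {
    keymap: Vec<(&'static str, &'static str)>, // (action, keystroke)
}

fn keystrokes_for_action(action: &str, view_stack: &[View]) -> Option<&'static str> {
    // `view_stack` is ordered child-first, mirroring how available_actions
    // collects actions from the focused view up to the root.
    view_stack.iter().find_map(|view| {
        view.keymap
            .iter()
            .find(|(a, _)| *a == action)
            .map(|(_, keystroke)| *keystroke)
    })
}

fn main() {
    let child = View { keymap: vec![("buffer_search::Deploy", "cmd-f")] };
    let parent = View { keymap: vec![] };
    // The parent handles the action, but the child's binding is displayed.
    assert_eq!(
        keystrokes_for_action("buffer_search::Deploy", &[child, parent]),
        Some("cmd-f")
    );
}
```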
* increase the horizontal padding of the toolbar itself; remove the padding
that was added to individual toolbar items like the feedback button.
* give the feedback info text and breadcrumbs the same additional padding as
the quick action buttons.
Allows us to trigger Tailwind completions within template literals in
JSX elements.
Release Notes:
- Fixed Tailwind autocomplete not appearing in template literals.
* [x] Re-send operations that weren't sent while disconnected (see the sketch after this list)
* [x] Apply other clients' operations that were missed while
disconnected
* [x] Update collaborators that joined / left while disconnected
* [x] Inform current collaborators that your peer id has changed
* [x] Refresh channel buffer collaborators on server restart
* [x] Randomized test
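A toy sketch of the resync flow behind the first two items, with hypothetical types standing in for the real collab protocol: pending operations are queued locally during the disconnect, replayed on reconnect, and the client asks for anything it missed since its last synced version.

```rust
#[derive(Debug)]
struct Operation {
    lamport: u32,
    text: String,
}

struct ChannelBufferClient {
    synced_version: u32,
    pending: Vec<Operation>, // ops not acknowledged before the disconnect
}

impl ChannelBufferClient {
    fn reconnect(&mut self) {
        // 1. Re-send operations that weren't sent while disconnected.
        for op in self.pending.drain(..) {
            println!("re-sending {:?}", op);
        }
        // 2. Request operations (and collaborator joins/leaves) missed
        //    while away, identified by the last-known version.
        println!("requesting ops since version {}", self.synced_version);
    }
}

fn main() {
    let mut client = ChannelBufferClient {
        synced_version: 42,
        pending: vec![Operation { lamport: 43, text: "hi".into() }],
    };
    client.reconnect();
}
```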
This PR updates our client code to connect to preview whenever we're not
on stable. This will make it more likely that we'll be able to
collaborate on a dev build, but obviously won't work if there's a
protocol change on main that hasn't made its way to preview yet.
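In sketch form, assuming a release-channel enum and illustrative URLs (not the real endpoints): only the stable channel talks to the production server, while preview, nightly, and dev builds all share the preview environment.

```rust
enum ReleaseChannel {
    Stable,
    Preview,
    Nightly,
    Dev,
}

fn collab_url(channel: ReleaseChannel) -> &'static str {
    match channel {
        ReleaseChannel::Stable => "https://collab.zed.example/stable",
        // Everything that isn't stable connects to preview, so a dev build
        // can collaborate with preview users -- as long as the protocol on
        // main hasn't diverged from preview yet.
        _ => "https://collab.zed.example/preview",
    }
}

fn main() {
    assert_eq!(
        collab_url(ReleaseChannel::Dev),
        collab_url(ReleaseChannel::Preview)
    );
}
```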
This PR ships a series of optimizations for the semantic search engine.
Mostly focused on removing invalid states, optimizing requests to
OpenAI, and reducing token usage.
Release Notes (Preview-Only):
- Added eager incremental indexing in the background on a debounce.
- Added a local embeddings cache for reducing redundant calls to OpenAI.
- Moved to an Embeddings Queue model that ensures optimal batch sizes
at the token level and atomic file & document writes (sketched below).
- Adjusted OpenAI Embedding API requests to respect the backoff delays
provided when rate limited.
- Removed flush races between the file-parsing step and the embedding
queue steps.
- Moved truncation to the parsing step, reducing the probability that
OpenAI encounters bad data.
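A sketch of the token-level batching idea from the queue bullet above; the budget, types, and names are assumptions, not the actual implementation:

```rust
struct Span {
    token_count: usize,
    text: String,
}

struct EmbeddingQueue {
    max_batch_tokens: usize, // budget chosen to stay under the model's limit
    batch: Vec<Span>,
    batch_tokens: usize,
}

impl EmbeddingQueue {
    fn push(&mut self, span: Span) {
        // Spans are already truncated upstream (at the parsing step), so a
        // single span always fits within the budget on its own.
        if self.batch_tokens + span.token_count > self.max_batch_tokens {
            self.flush();
        }
        self.batch_tokens += span.token_count;
        self.batch.push(span);
    }

    fn flush(&mut self) {
        if self.batch.is_empty() {
            return;
        }
        // One request per full batch keeps the request count (and rate-limit
        // pressure) low; file and document writes happen only after this
        // point, so they stay atomic with respect to the batch.
        println!(
            "embedding {} spans ({} tokens)",
            self.batch.len(),
            self.batch_tokens
        );
        self.batch.clear();
        self.batch_tokens = 0;
    }
}

fn main() {
    let mut queue = EmbeddingQueue { max_batch_tokens: 8, batch: vec![], batch_tokens: 0 };
    queue.push(Span { token_count: 5, text: "fn main() {}".into() });
    queue.push(Span { token_count: 5, text: "struct Foo;".into() }); // triggers a flush
    queue.flush();
}
```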