Compare commits


51 commits

Author SHA1 Message Date
gcp-cherry-pick-bot[bot]
53bc5714d6
Fix racy leaked extension server adapters handling (cherry-pick #35319) (#35321)
Cherry-picked Kb/wasm panics (#35319)

Follow-up of https://github.com/zed-industries/zed/pull/34208
Closes https://github.com/zed-industries/zed/issues/35185

Previous code assumed that extensions' language server wrappers may leak
only in static data (e.g. fields that were not cleared on deinit), but
we seem to have a race that breaks this assumption.

1. We do clear the `all_lsp_adapters` field after
https://github.com/zed-industries/zed/pull/34334, and it is cleared for
every extension that is unregistered.
2. `LspStore::maintain_workspace_config` ->
`LspStore::refresh_workspace_configurations` chain is triggered
independently, apparently on the `ToolchainStoreEvent::ToolchainActivated`
event, which means there is potentially Python code executed somewhere
behind it to activate the toolchain, making the start timing of
`refresh_workspace_configurations` unpredictable.
3. It seems that toolchain activation overlaps with the plugin reload, as
the issue logs suggest: `2025-07-28T12:16:19+03:00 INFO [extension_host]
extensions updated. loading 0, reloading 1, unloading 0`.

The plugin reload seems to happen faster than the workspace configuration
refresh in



c65da547c9/crates/project/src/lsp_store.rs (L7426-L7456)

as the language servers are just starting and take extra time to respond
to the notification.

At least one of the `.clone()`d `adapter`s there is the adapter that got
removed during plugin reload and has its channel closed, which causes a
panic later.

----------------------------

A good fix would be to re-architect the workspace refresh approach, along
with the other accesses to the language server collections.
One option would be `Weak`-based structures, since the extension server
data definitely belongs to the extension, not the `LspStore`.
That is quite a large undertaking near the extension core, though, so it
is not done yet.

Currently, to stop the excessive panics, we no longer `.expect` on the
channel result, since the channel can indeed now be closed at any moment.
This will result in more errors (and, presumably, backtraces) printed in
the logs, but no panics.

More logging and comments are added, and the workspace querying is now
concurrent: there is no need to wait for a previous server to process the
notification before sending the same one to the next.

Release Notes:

- Fixed a wasm-related panic happening during startup

Co-authored-by: Kirill Bulatov <kirill@zed.dev>
2025-07-30 12:56:03 +03:00
Joseph T. Lyons
e1ae2a5334 zed 0.196.7 2025-07-29 14:38:01 -04:00
Kirill Bulatov
ff9da9d2d9 Add more data to see which extension got leaked (#35272)
Part of https://github.com/zed-industries/zed/issues/35185

Release Notes:

- N/A
2025-07-29 14:32:46 -04:00
gcp-cherry-pick-bot[bot]
271055a9bf
client: Send User-Agent header on WebSocket connection requests (cherry-pick #35280) (#35284)
Cherry-picked client: Send `User-Agent` header on WebSocket connection
requests (#35280)

This PR makes it so we send the `User-Agent` header on the WebSocket
connection requests when connecting to Collab.

We use the user agent set on the parent HTTP client.

Release Notes:

- N/A

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-07-29 13:26:02 -04:00
gcp-cherry-pick-bot[bot]
a9d8558b63
Cache LSP code lens requests (cherry-pick #35207) (#35258) 2025-07-29 10:21:51 +03:00
gcp-cherry-pick-bot[bot]
6589ce9b9b
Fix tasks leaked despite workspace window close (cherry-pick #35246) (#35251) 2025-07-29 10:21:25 +03:00
gcp-cherry-pick-bot[bot]
43fc3bdaa0
keymap_ui: Fix bug introduced in #35208 (cherry-pick #35237) (#35239)
Cherry-picked keymap_ui: Fix bug introduced in #35208 (#35237)

Closes #ISSUE

Fixes a bug that was cherry picked onto stable and preview branches
introduced in #35208 whereby modifier keys would show up and not be
removable when editing a keybind

Release Notes:

- (preview only) Keymap Editor: Fixed an issue introduced in v0.197.2
whereby modifier keys would show up and not be removable while recording
keystrokes in the keybind edit modal

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-28 18:02:05 -04:00
gcp-cherry-pick-bot[bot]
b4d6629de2
keymap_ui: Additional keystroke input polish (cherry-pick #35208) (#35219)
Cherry-picked keymap_ui: Additional keystroke input polish (#35208)

Closes #ISSUE

Fixed various issues and improved UX around the keystroke input
primarily when used for keystroke search.

Release Notes:

- Keymap Editor: Fixed an issue where the modifiers used to activate
keystroke search would appear in the keystroke search
- Keymap Editor: Made it possible to search for repeat modifiers, such
as a binding with `cmd-shift cmd`
- Keymap Editor: Made keystroke search matches match based on ordered
(not necessarily contiguous) runs. For example, searching for `cmd
shift-j` will match `cmd-k cmd-shift-j alt-q` and `cmd-i g shift-j` but
not `alt-k shift-j` or `cmd-k alt-j`
- Keymap Editor: Fixed the clear keystrokes binding (`delete` by
default) not working in the keystroke input

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-28 16:11:28 -04:00
Joseph T. Lyons
f80bd3a66b zed 0.196.6 2025-07-24 11:54:19 -04:00
Richard Feldman
6f04cb7296 Don't auto-retry in certain circumstances (#35037)
Someone encountered this in production, which should not happen:

<img width="1266" height="623" alt="Screenshot 2025-07-24 at 10 38
40 AM"
src="https://github.com/user-attachments/assets/40f3f977-5110-4808-a456-7e708d953b3b"
/>

This moves certain errors into the category of "never retry" and reduces
the number of retries for some others. It also adds some diagnostic
logging for the retry policy.

It's not a complete fix for the above, because the underlying issue is
that the server is sending an HTTP 403 response, and although we were
already treating 403s as "do not retry", it was deciding to retry with 2
attempts anyway. Further debugging is needed to figure out why the
request wasn't going down the 403 branch by the time it got here.

Release Notes:

- N/A
2025-07-24 11:52:10 -04:00
Richard Feldman
92e7d84710 Auto-retry agent errors by default (#34842)
Now we explicitly carve out the HTTP responses we do *not* retry, and
retry at least once on all others.

Release Notes:

- The Agent panel now automatically retries failed requests under more
circumstances.
2025-07-24 11:51:49 -04:00
Oleksiy Syvokon
1c0bc89664
linux: Fix ctrl-0..9, ctrl-[, ctrl-^ (#35028)
There were two different underlying reasons for the issues with
ctrl-number and ctrl-punctuation:

1. Some keys in the ctrl-0..9 range send codes in the 0x1b..0x1f range.
For example, `ctrl-2` sends the keycode for `ctrl-[` (0x1b), but we want
to map it to `2`, not to `[`.

2. `ctrl-[` and four other ctrl-punctuation keys were incorrectly mapped;
the expected conversion is to add 0x40.

Closes #35012

Release Notes:

- N/A
2025-07-24 10:11:04 -04:00
gcp-cherry-pick-bot[bot]
3720b6f908
agent: Fix double-lease panic when clicking on thread to jump (cherry-pick #34843) (#34874)
Cherry-picked agent: Fix double-lease panic when clicking on thread to
jump (#34843)

Release Notes:

- N/A

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-07-24 00:30:07 +02:00
Joseph T. Lyons
cc82f1eacd v0.196.x stable 2025-07-23 13:48:22 -04:00
Anthony Eid
acd9ab460c keymap ui: Improve resize columns on double click (#34961)
This PR splits the resize logic into separate left/right propagation
methods and improves the code organization around column width
adjustments. It also allows resizing to work on both the left and right
sides, instead of only checking the right side for room.

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-23 13:46:22 -04:00
gcp-cherry-pick-bot[bot]
6badbf0369
keymap ui: Resizable column follow up (cherry-pick #34955) (#34956)
Cherry-picked keymap ui: Resizable column follow up (#34955)

I cherry picked a small fix that didn't get into the original column
resizable branch PR because I turned on auto merge.

Release Notes:

- N/A

Co-authored-by: Anthony Eid <56899983+Anthony-Eid@users.noreply.github.com>
2025-07-23 18:44:48 +02:00
Mikayla Maki
5ed98bfd9d gpui: Add use state APIs (#34741)
This PR adds a component-level state API to GPUI, as well as a few
utilities for simplified interactions with entities.

Release Notes:

- N/A
2025-07-23 12:16:25 -04:00
Finn Evers
12d6ddef16 keymap_ui: Dim keybinds that are overridden by other keybinds (#34952)
This change dims rows in the keymap editor for which the corresponding
keybind is overridden by other keybinds coming from higher priority
sources.

Release Notes:

- N/A
2025-07-23 12:08:24 -04:00
Mikayla Maki
b7fb970929 Resizable columns (#34794)
This PR adds resizable columns to the keymap editor and the ability to
double-click on a resizable column to set a column back to its default
size.

The table uses the columns' widths to calculate the position each column
should be laid out at: `column[i]`'s x position is the sum of the widths
of `column[..i]`. When resizing `column[i]`, `column[i+1]`'s size is
adjusted to keep all columns' relative positions the same. If
`column[i+1]` is at its minimum size, we keep seeking to the right to
find a column with space left to take.

An improvement to resizing behavior and double-clicking could be made by
checking both column ranges `0..i-1` and `i+1..COLS`, since only one
range of columns is checked for resize capacity.

Release Notes:

- N/A

---------

Co-authored-by: Anthony <anthony@zed.dev>
Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-07-23 12:08:20 -04:00
gcp-cherry-pick-bot[bot]
780db4ce76
Fix redo after noop format (cherry-pick #34898) (#34903)
Cherry-picked Fix redo after noop format (#34898)

Closes #31917

Previously, as of #28457, we used a hack: we created an empty transaction
in the history and then merged formatting changes into it, in order to
correctly identify concurrent edits to the buffer while formatting was
happening. This caused issues with noop formatting, however, since using
the normal buffer-history API (in an admittedly odd way) cleared the redo
stack regardless of whether the formatting transaction included edits,
which is the correct behavior in all other contexts.

This PR fixes the redo issue by codifying the behavior formatting wants:
the ability to push an empty transaction to the history with no other
side effects (i.e. without clearing the redo stack) in order to detect
concurrent edits, with the tradeoff that it must then manually remove the
transaction later if the formatting produced no changes. The redo stack
is still cleared when there are formatting edits, as the individual
format steps use the normal `{start,end}_transaction` methods, which
clear the redo stack if the finished transaction isn't empty.

Release Notes:

- Fixed an issue where redo would not work after buffer formatting
(including formatting on save) when the formatting did not result in any
changes

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-22 13:56:45 -04:00
gcp-cherry-pick-bot[bot]
dfcf9a2b16
keymap_ui: Fix panic in clear keystrokes (cherry-pick #34909) (#34913)
Cherry-picked keymap_ui: Fix panic in clear keystrokes (#34909)

Closes #ISSUE

Release Notes:

- N/A

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-22 13:56:37 -04:00
gcp-cherry-pick-bot[bot]
9eeb7a325e
theme: Add panel.overlay_background and panel.overlay_hover (cherry-pick #34655) (#34878)
Cherry-picked theme: Add `panel.overlay_background` and
`panel.overlay_hover` (#34655)

In https://github.com/zed-industries/zed/pull/33994 sticky scroll was
added to project_panel.

I love this feature! 

This introduces a new element layering not seen before. On themes that
use transparency, the overlapping elements can make it difficult to read
project panel entries. This PR introduces a new selector,
~~panel.sticky_entry.background~~ `panel.overlay_background`, which lets
you set the background of entries when they become sticky.

Closes https://github.com/zed-industries/zed/issues/34654

Before:

<img width="373" height="104" alt="Screenshot 2025-07-17 at 10 19 11 AM"

src="https://github.com/user-attachments/assets/d5bab065-53ca-4b27-b5d8-3b3f8d1f7a81"
/>

After:

<img width="292" height="445" alt="Screenshot 2025-07-17 at 11 46 57 AM"

src="https://github.com/user-attachments/assets/4cd2b87b-2989-4489-972f-872d2dc13a33"
/>

<img width="348" height="390" alt="Screenshot 2025-07-17 at 11 39 57 AM"

src="https://github.com/user-attachments/assets/49c0757f-2c50-4e01-92c6-2ae7e4132a53"
/>

<img width="668" height="187" alt="Screenshot 2025-07-17 at 11 39 29 AM"

src="https://github.com/user-attachments/assets/167536c2-5872-4306-90c6-c6b68276b618"
/>

Release Notes:

- Added the `panel.overlay_background` theme selector for styling project
panel entries when they become sticky while scrolling and overlap the
entries below them.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>

Co-authored-by: Bret Comnes <166301+bcomnes@users.noreply.github.com>
Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-07-22 20:55:09 +05:30
gcp-cherry-pick-bot[bot]
dc4fc962f0
Fix an issue where xkb defined hotkeys for arrows would not work (cherry-pick #34823) (#34858)
Cherry-picked Fix an issue where xkb defined hotkeys for arrows would
not work (#34823)

Addresses
https://github.com/zed-industries/zed/pull/34053#issuecomment-3096447601
where custom-defined arrows would stop working in Zed.

How to reproduce:

1. Define custom keyboard layout

```bash
cd /usr/share/X11/xkb/symbols/
sudo nano mykbd
```

```
default partial alphanumeric_keys
xkb_symbols "custom" {

    name[Group1]= "Custom Layout";

    key <AD01> { [ q,  Q,  Escape,     Escape      ] };
    key <AD02> { [ w,  W,  Home,       Home        ] };
    key <AD03> { [ e,  E,  Up,         Up          ] };
    key <AD04> { [ r,  R,  End,        End         ] };
    key <AD05> { [ t,  T,  Tab,        Tab         ] };

    key <AC01> { [ a,  A,  Return,     Return      ] };
    key <AC02> { [ s,  S,  Left,       Left        ] };
    key <AC03> { [ d,  D,  Down,       Down        ] };
    key <AC04> { [ f,  F,  Right,      Right       ] };
    key <AC05> { [ g,  G,  BackSpace,  BackSpace   ] };

    // include a base layout to inherit the rest
    include "us(basic)"
};
```

2. Activate custom layout with win-key as AltGr

```bash
setxkbmap mykbd -variant custom -option lv3:win_switch
```

3. Now Win-S should produce left arrow, Win-F right arrow
4. Test whether it works in Zed

Release Notes:

- linux: xkb-defined hotkeys for arrow keys should behave as expected.

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>

Co-authored-by: Sergei Surovtsev <97428129+stillonearth@users.noreply.github.com>
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-07-21 19:19:17 -06:00
Peter Tripp
fab9da0b93
zed 0.196.5 2025-07-21 10:04:09 -04:00
Oleksandr Mykhailenko
44946231aa
agent: Fix Mistral tool use error message (#34692)
Closes #32675

Exactly the same changes as in #33640 by @sviande

The PR has been in WIP state for 3 weeks with no activity, and the issue
basically makes Mistral models unusable. I have tested the changes
locally, and it does indeed work. Full credit goes to @sviande, I just
want this feature to be finished.

Release Notes:

- agent: Fixed an issue with tool calling with the Mistral provider
(thanks [@sviande](https://github.com/sviande) and
[@armyhaylenko](https://github.com/armyhaylenko))

Co-authored-by: sviande <sviande@gmail.com>
2025-07-21 09:22:18 -04:00
gcp-cherry-pick-bot[bot]
8da6604165
keymap_ui: Auto complete action arguments (cherry-pick #34785) (#34790)
Cherry-picked keymap_ui: Auto complete action arguments (#34785)

Supersedes: #34242

Creates an `ActionArgumentsEditor` that implements the required logic to
have a JSON language server run when editing keybinds so that there is
auto-complete for action arguments.

This is the first time action argument schemas are required by
themselves rather than inlined in the keymap schema. Rather than add all
action schemas to the configuration options we send to the JSON LSP on
startup, this PR implements support for the
`vscode-json-language-server` extension to the LSP whereby the server
will request the client (Zed) to resolve URLs with URI schemes it does
not recognize, in our case `zed://`. This limits the impact on the size
of the configuration options to ~1KB as we send URLs for the language
server to resolve on demand rather than the schema itself. My
understanding is that this is how VSCode handles JSON schemas as well. I
plan to investigate converting the rest of our schema generation logic
to this method in a follow up PR.
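The resolve-on-demand flow can be pictured as a JSON-RPC request from the server back to the client (a sketch under assumptions: `vscode/content` is the custom request the `vscode-json-language-server` family uses to fetch schema text for protocols it cannot resolve itself, and the `zed://` path shown here is invented):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "vscode/content",
  "params": ["zed://schemas/action_arguments"]
}
```

Zed would answer with the schema body for that URI, so the configuration sent at startup only needs to carry the URLs, keeping it around the ~1KB described above.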

Co-Authored-By: Cole <cole@zed.dev>

Release Notes:

- N/A

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-20 17:07:39 -04:00
Vitaly Slobodin
1c95a2ccee
Fix Tailwind support for HTML/ERB files (#34743)
Closes #27118
Closes #34165

Fixes a small issue introduced after we landed
https://github.com/zed-extensions/ruby/pull/113, where we added the
`HTML/ERB` and `YAML/ERB` language IDs to improve the user experience.
Sorry about that. Thanks!

Release Notes:

- N/A
2025-07-19 11:08:41 -04:00
gcp-cherry-pick-bot[bot]
234a4f86ba
keymap ui: Fix remove key mapping bug (cherry-pick #34683) (#34730)
Cherry-picked keymap ui: Fix remove key mapping bug (#34683)

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>

Co-authored-by: Anthony Eid <56899983+Anthony-Eid@users.noreply.github.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-18 15:52:36 -04:00
Peter Tripp
3f305fa805
ci: Skip generating Windows release artifacts (#34704)
Release Notes:

- N/A
2025-07-18 14:32:01 -04:00
Finn Evers
b83965285d editor: Ensure topmost buffer header can be properly folded (#34721)
This PR fixes an issue where the topmost header in a multibuffer would
jump when the corresponding buffer was folded.
The issue arose because, for the topmost header, the offset within the
scroll anchor is negative: the corresponding buffer only starts below the
header itself, so the offset for the scroll position has to be negative.
However, upon collapsing that buffer, we end up with a negative vertical
scroll position, which causes all kinds of different problems. The issue
has been present for a long time, but became more visible after
https://github.com/zed-industries/zed/pull/34295 landed, as that change
removed the case distinction for buffers scrolled all the way to the
top.

This PR fixes this by clamping just the vertical scroll position upon
return, which ensures the negative offset works as expected when the
buffer is expanded, but the vertical scroll position does not turn
negative once the buffer is folded.

Release Notes:

- Fixed an issue where folding the topmost buffer in a multibuffer would
cause the header to jump slightly.
2025-07-18 13:38:32 -04:00
Danilo Leal
0acd108e7f keymap_ui: Add some design refinements (#34673)
Mostly small stuff over here.

Release Notes:

- N/A
2025-07-18 13:04:46 -04:00
Joseph T. Lyons
5deb404135 zed 0.196.4 2025-07-18 12:40:43 -04:00
Joseph T. Lyons
619282a8ed Revert "gpui: Improve path rendering & global multisample anti-aliasing" (#34722)
Reverts zed-industries/zed#29718

We've noticed some issues with Zed on Intel-based Macs where typing has
become sluggish, and git bisect seemed to point towards this PR.
Reverting for now, until we can understand why it is causing this issue.
2025-07-18 12:36:45 -04:00
gcp-cherry-pick-bot[bot]
3f32020785
keymap_ui: Don't panic on KeybindSource::from_meta (cherry-pick #34652) (#34677)
Cherry-picked keymap_ui: Don't panic on `KeybindSource::from_meta`
(#34652)

Closes #ISSUE

Log error instead of panicking when `from_meta` is passed an invalid
value

Release Notes:

- N/A

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-18 12:45:44 +03:00
gcp-cherry-pick-bot[bot]
ce0de10147
keymap_ui: Fix various keymap editor issues (cherry-pick #34647) (#34670)
Cherry-picked keymap_ui: Fix various keymap editor issues (#34647)

This PR tackles miscellaneous nits for the new keymap editor UI.

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>

Co-authored-by: Finn Evers <finn@zed.dev>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-17 20:03:03 -04:00
Richard Feldman
c9b9b3194e
zed 0.196.3 2025-07-17 19:25:14 -04:00
Richard Feldman
eeb9e242b4
Retry on burn mode (#34669)
Now we only auto-retry if burn mode is enabled. We also show a "Retry"
button (so you don't have to type "continue") if you think that's the
right remedy, and additionally we show a "Retry and Enable Burn Mode"
button if you don't have it enabled.

<img width="484" height="260" alt="Screenshot 2025-07-17 at 6 25 27 PM"
src="https://github.com/user-attachments/assets/dc5bf1f6-8b11-4041-87aa-4f37c95ea9f0"
/>

<img width="478" height="307" alt="Screenshot 2025-07-17 at 6 22 36 PM"
src="https://github.com/user-attachments/assets/1ed6578a-1696-449d-96d1-e447d11959fa"
/>


Release Notes:

- Only auto-retry Agent requests when Burn Mode is enabled
2025-07-17 19:23:38 -04:00
Richard Feldman
f9c498318d
Improve upstream error reporting (#34668)
Now we handle more upstream error cases using the same auto-retry logic.

Release Notes:

- N/A
2025-07-17 19:23:34 -04:00
gcp-cherry-pick-bot[bot]
cb40bb755e
keymap ui: Fix keymap editor search bugs (cherry-pick #34579) (#34588)
Cherry-picked keymap ui: Fix keymap editor search bugs (#34579)

The keystroke input is now cleared when toggling to normal search mode,
and the main search bar is focused when doing so.

This also removes the highlight-on-focus from the keystroke editor, since
it also matched the `search` bool field and was redundant.

Release Notes:

- N/A

Co-authored-by: Anthony Eid <56899983+Anthony-Eid@users.noreply.github.com>
2025-07-17 18:16:33 -04:00
Umesh Yadav
991887a3ea keymap_ui: Open Keymap editor from settings dropdown (#34576)
@probably-neb I guess we should be opening the keymap editor from title
bar and menu as well. I believe this got missed in this: #34568.

Release Notes:

- The Keymap editor can now be opened from the settings menu and the title bar.
2025-07-17 13:37:58 -04:00
Anthony Eid
f249ee481d keymap ui: Fix keymap editor search bugs (#34579)
The keystroke input is now cleared when toggling to normal search mode,
and the main search bar is focused when doing so.

This also removes the highlight-on-focus from the keystroke editor, since
it also matched the `search` bool field and was redundant.

Release Notes:

- N/A
2025-07-17 13:37:43 -04:00
gcp-cherry-pick-bot[bot]
484e39dcba
keymap_ui: Show edit icon on hovered and selected row (cherry-pick #34630) (#34635)
Cherry-picked keymap_ui: Show edit icon on hovered and selected row
(#34630)

Closes #ISSUE

Improves the behavior of the edit icon in the far left column of the
keymap UI table. It is now shown in both the selected and the hovered
row, as an indicator that the row is editable in this configuration. A
hovered row can be double-clicked or its edit icon clicked, and a
selected row can be edited via keyboard shortcuts. Additionally, the
edit icon and all other hover tooltips now disappear when the table is
navigated via keyboard shortcuts.

<details><summary>Video</summary>




https://github.com/user-attachments/assets/6584810f-4c6d-4e6f-bdca-25b16c920cfc

</details>

Release Notes:

- N/A

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-17 12:21:49 -04:00
gcp-cherry-pick-bot[bot]
ec7d6631a4
Add keymap editor UI telemetry events (cherry-pick #34571) (#34589)
Cherry-picked Add keymap editor UI telemetry events (#34571)

- Search queries
- Keybinding update or removed
- Copy action name
- Copy context name

cc @katie-z-geer 

Release Notes:

- N/A

Co-authored-by: Ben Kunkle <ben@zed.dev>

Co-authored-by: Anthony Eid <56899983+Anthony-Eid@users.noreply.github.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-17 11:58:31 -04:00
gcp-cherry-pick-bot[bot]
27691613c1
keymap_ui: Improve keybind display in menus (cherry-pick #34587) (#34632)
Cherry-picked keymap_ui: Improve keybind display in menus (#34587)

Closes #ISSUE

Defines keybindings for `keymap_editor::EditBinding` and
`keymap_editor::CreateBinding`, making sure those actions are used in
tooltips.

Release Notes:

- N/A

---------

Co-authored-by: Finn <dev@bahn.sh>

Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Finn <dev@bahn.sh>
2025-07-17 11:34:44 -04:00
Zed Bot
5f11e09a4b Bump to 0.196.2 for @osyvokon 2025-07-17 12:14:36 +00:00
gcp-cherry-pick-bot[bot]
34e63f9e55
agent: Disable project_notifications by default (cherry-pick #34615) (#34619)
Cherry-picked agent: Disable `project_notifications` by default (#34615)

This tool needs more polishing before being generally available.

Release Notes:

- agent: Disabled `project_notifications` tool by default for the time
being

Co-authored-by: Oleksiy Syvokon <oleksiy@zed.dev>
2025-07-17 15:09:12 +03:00
gcp-cherry-pick-bot[bot]
cbdca4e090
Fix shortcuts with Shift (cherry-pick #34614) (#34616)
Cherry-picked Fix shortcuts with `Shift` (#34614)

Closes #34605, #34606, #34609

Release Notes:

- (Preview only) Fixed shortcuts involving Shift

Co-authored-by: Oleksiy Syvokon <oleksiy@zed.dev>
2025-07-17 14:31:57 +03:00
Conrad Irwin
92105e92c3 Fix ctrl-q on AZERTY on Linux (#34597)
Closes #ISSUE

Release Notes:

- N/A
2025-07-16 21:28:51 -06:00
Zed Bot
632f09efd6 Bump to 0.196.1 for @ConradIrwin 2025-07-17 02:18:34 +00:00
gcp-cherry-pick-bot[bot]
192e0e32dd
Don't override ascii graphical shortcuts (cherry-pick #34592) (#34595)
Cherry-picked Don't override ascii graphical shortcuts (#34592)

Closes #34536

Release Notes:

- (preview only) Fix shortcuts on Extended Latin keyboards on Linux

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-07-16 20:16:34 -06:00
Joseph T. Lyons
30cc8bd824 v0.196.x preview 2025-07-16 14:30:48 -04:00
73 changed files with 4413 additions and 1542 deletions


@@ -748,7 +748,7 @@ jobs:
timeout-minutes: 120
name: Create a Windows installer
runs-on: [self-hosted, Windows, X64]
if: ${{ startsWith(github.ref, 'refs/tags/v') || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
if: false && (startsWith(github.ref, 'refs/tags/v') || contains(github.event.pull_request.labels.*.name, 'run-bundling'))
needs: [windows_tests]
env:
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
@@ -787,7 +787,7 @@ jobs:
- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
# Re-enable when we are ready to publish windows preview releases
if: false && ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) && env.RELEASE_CHANNEL == 'preview' }} # upload only preview
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) && env.RELEASE_CHANNEL == 'preview' }} # upload only preview
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}

Cargo.lock generated

@@ -2148,7 +2148,7 @@ dependencies = [
[[package]]
name = "blade-graphics"
version = "0.6.0"
source = "git+https://github.com/kvark/blade?rev=416375211bb0b5826b3584dccdb6a43369e499ad#416375211bb0b5826b3584dccdb6a43369e499ad"
source = "git+https://github.com/kvark/blade?rev=e0ec4e720957edd51b945b64dd85605ea54bcfe5#e0ec4e720957edd51b945b64dd85605ea54bcfe5"
dependencies = [
"ash",
"ash-window",
@@ -2181,7 +2181,7 @@ dependencies = [
[[package]]
name = "blade-macros"
version = "0.3.0"
source = "git+https://github.com/kvark/blade?rev=416375211bb0b5826b3584dccdb6a43369e499ad#416375211bb0b5826b3584dccdb6a43369e499ad"
source = "git+https://github.com/kvark/blade?rev=e0ec4e720957edd51b945b64dd85605ea54bcfe5#e0ec4e720957edd51b945b64dd85605ea54bcfe5"
dependencies = [
"proc-macro2",
"quote",
@@ -2191,7 +2191,7 @@ dependencies = [
[[package]]
name = "blade-util"
version = "0.2.0"
source = "git+https://github.com/kvark/blade?rev=416375211bb0b5826b3584dccdb6a43369e499ad#416375211bb0b5826b3584dccdb6a43369e499ad"
source = "git+https://github.com/kvark/blade?rev=e0ec4e720957edd51b945b64dd85605ea54bcfe5#e0ec4e720957edd51b945b64dd85605ea54bcfe5"
dependencies = [
"blade-graphics",
"bytemuck",
@@ -14709,6 +14709,7 @@ dependencies = [
"fs",
"fuzzy",
"gpui",
"itertools 0.14.0",
"language",
"log",
"menu",
@@ -14720,6 +14721,8 @@ dependencies = [
"serde",
"serde_json",
"settings",
"telemetry",
"tempfile",
"theme",
"tree-sitter-json",
"tree-sitter-rust",
@@ -16451,6 +16454,7 @@ dependencies = [
"schemars",
"serde",
"settings",
"settings_ui",
"smallvec",
"story",
"telemetry",
@@ -20095,7 +20099,7 @@ dependencies = [
[[package]]
name = "zed"
version = "0.196.0"
version = "0.196.7"
dependencies = [
"activity_indicator",
"agent",


@@ -434,9 +434,9 @@ aws-smithy-runtime-api = { version = "1.7.4", features = ["http-1x", "client"] }
aws-smithy-types = { version = "1.3.0", features = ["http-body-1-x"] }
base64 = "0.22"
bitflags = "2.6.0"
blade-graphics = { git = "https://github.com/kvark/blade", rev = "416375211bb0b5826b3584dccdb6a43369e499ad" }
blade-macros = { git = "https://github.com/kvark/blade", rev = "416375211bb0b5826b3584dccdb6a43369e499ad" }
blade-util = { git = "https://github.com/kvark/blade", rev = "416375211bb0b5826b3584dccdb6a43369e499ad" }
blade-graphics = { git = "https://github.com/kvark/blade", rev = "e0ec4e720957edd51b945b64dd85605ea54bcfe5" }
blade-macros = { git = "https://github.com/kvark/blade", rev = "e0ec4e720957edd51b945b64dd85605ea54bcfe5" }
blade-util = { git = "https://github.com/kvark/blade", rev = "e0ec4e720957edd51b945b64dd85605ea54bcfe5" }
blake3 = "1.5.3"
bytes = "1.0"
cargo_metadata = "0.19"
@@ -489,7 +489,7 @@ json_dotpath = "1.1"
jsonschema = "0.30.0"
jsonwebtoken = "9.3"
jupyter-protocol = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
jupyter-websocket-client = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
jupyter-websocket-client = { git = "https://github.com/ConradIrwin/runtimed" ,rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
libc = "0.2"
libsqlite3-sys = { version = "0.30.1", features = ["bundled"] }
linkify = "0.10.0"
@@ -500,7 +500,7 @@ metal = "0.29"
moka = { version = "0.12.10", features = ["sync"] }
naga = { version = "25.0", features = ["wgsl-in"] }
nanoid = "0.4"
nbformat = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
nbformat = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
nix = "0.29"
num-format = "0.4.4"
objc = "0.2"
@@ -541,7 +541,7 @@ reqwest = { git = "https://github.com/zed-industries/reqwest.git", rev = "951c77
"stream",
] }
rsa = "0.9.6"
runtimelib = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734", default-features = false, features = [
runtimelib = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734", default-features = false, features = [
"async-dispatcher-runtime",
] }
rust-embed = { version = "8.4", features = ["include-exclude"] }


@@ -1118,7 +1118,12 @@
"ctrl-f": "search::FocusSearch",
"alt-find": "keymap_editor::ToggleKeystrokeSearch",
"alt-ctrl-f": "keymap_editor::ToggleKeystrokeSearch",
"alt-c": "keymap_editor::ToggleConflictFilter"
"alt-c": "keymap_editor::ToggleConflictFilter",
"enter": "keymap_editor::EditBinding",
"alt-enter": "keymap_editor::CreateBinding",
"ctrl-c": "keymap_editor::CopyAction",
"ctrl-shift-c": "keymap_editor::CopyContext",
"ctrl-t": "keymap_editor::ShowMatchingKeybinds"
}
},
{


@ -1216,8 +1216,14 @@
"context": "KeymapEditor",
"use_key_equivalents": true,
"bindings": {
"cmd-f": "search::FocusSearch",
"cmd-alt-f": "keymap_editor::ToggleKeystrokeSearch",
"cmd-alt-c": "keymap_editor::ToggleConflictFilter"
"cmd-alt-c": "keymap_editor::ToggleConflictFilter",
"enter": "keymap_editor::EditBinding",
"alt-enter": "keymap_editor::CreateBinding",
"cmd-c": "keymap_editor::CopyAction",
"cmd-shift-c": "keymap_editor::CopyContext",
"cmd-t": "keymap_editor::ShowMatchingKeybinds"
}
},
{


@ -817,7 +817,7 @@
"edit_file": true,
"fetch": true,
"list_directory": true,
"project_notifications": true,
"project_notifications": false,
"move_path": true,
"now": true,
"find_path": true,
@ -837,7 +837,7 @@
"diagnostics": true,
"fetch": true,
"list_directory": true,
"project_notifications": true,
"project_notifications": false,
"now": true,
"find_path": true,
"read_file": true,


@ -51,7 +51,7 @@ use util::{ResultExt as _, debug_panic, post_inc};
use uuid::Uuid;
use zed_llm_client::{CompletionIntent, CompletionRequestStatus, UsageLimit};
const MAX_RETRY_ATTEMPTS: u8 = 3;
const MAX_RETRY_ATTEMPTS: u8 = 4;
const BASE_RETRY_DELAY: Duration = Duration::from_secs(5);
#[derive(Debug, Clone)]
@ -396,6 +396,7 @@ pub struct Thread {
remaining_turns: u32,
configured_model: Option<ConfiguredModel>,
profile: AgentProfile,
last_error_context: Option<(Arc<dyn LanguageModel>, CompletionIntent)>,
}
#[derive(Clone, Debug)]
@ -489,10 +490,11 @@ impl Thread {
retry_state: None,
message_feedback: HashMap::default(),
last_auto_capture_at: None,
last_error_context: None,
last_received_chunk_at: None,
request_callback: None,
remaining_turns: u32::MAX,
configured_model,
configured_model: configured_model.clone(),
profile: AgentProfile::new(profile_id, tools),
}
}
@ -613,6 +615,7 @@ impl Thread {
feedback: None,
message_feedback: HashMap::default(),
last_auto_capture_at: None,
last_error_context: None,
last_received_chunk_at: None,
request_callback: None,
remaining_turns: u32::MAX,
@ -1264,9 +1267,58 @@ impl Thread {
self.flush_notifications(model.clone(), intent, cx);
let request = self.to_completion_request(model.clone(), intent, cx);
let _checkpoint = self.finalize_pending_checkpoint(cx);
self.stream_completion(
self.to_completion_request(model.clone(), intent, cx),
model,
intent,
window,
cx,
);
self.stream_completion(request, model, intent, window, cx);
}
pub fn retry_last_completion(
&mut self,
window: Option<AnyWindowHandle>,
cx: &mut Context<Self>,
) {
// Clear any existing error state
self.retry_state = None;
// Use the last error context if available, otherwise fall back to configured model
let (model, intent) = if let Some((model, intent)) = self.last_error_context.take() {
(model, intent)
} else if let Some(configured_model) = self.configured_model.as_ref() {
let model = configured_model.model.clone();
let intent = if self.has_pending_tool_uses() {
CompletionIntent::ToolResults
} else {
CompletionIntent::UserPrompt
};
(model, intent)
} else if let Some(configured_model) = self.get_or_init_configured_model(cx) {
let model = configured_model.model.clone();
let intent = if self.has_pending_tool_uses() {
CompletionIntent::ToolResults
} else {
CompletionIntent::UserPrompt
};
(model, intent)
} else {
return;
};
self.send_to_model(model, intent, window, cx);
}
pub fn enable_burn_mode_and_retry(
&mut self,
window: Option<AnyWindowHandle>,
cx: &mut Context<Self>,
) {
self.completion_mode = CompletionMode::Burn;
cx.emit(ThreadEvent::ProfileChanged);
self.retry_last_completion(window, cx);
}
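The `retry_last_completion` fallback above can be sketched standalone: prefer the `(model, intent)` pair captured at error time, and only then derive the intent from whether tool uses are still pending. The types below are simplified stand-ins, not Zed's actual `CompletionIntent` or model handles.

```rust
// Simplified stand-in for zed's CompletionIntent.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Intent {
    UserPrompt,
    ToolResults,
}

// Mirrors the fallback order in retry_last_completion: the intent stored in
// last_error_context wins; otherwise pending tool uses imply ToolResults.
fn pick_retry_intent(last_error_intent: Option<Intent>, has_pending_tool_uses: bool) -> Intent {
    last_error_intent.unwrap_or(if has_pending_tool_uses {
        Intent::ToolResults
    } else {
        Intent::UserPrompt
    })
}
```

The `take()` in the real code additionally clears the stored context, so a second manual retry falls through to the configured model.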
pub fn used_tools_since_last_user_message(&self) -> bool {
@ -1987,6 +2039,12 @@ impl Thread {
if let Some(retry_strategy) =
Thread::get_retry_strategy(completion_error)
{
log::info!(
"Retrying with {:?} for language model completion error {:?}",
retry_strategy,
completion_error
);
retry_scheduled = thread
.handle_retryable_error_with_delay(
&completion_error,
@ -2130,8 +2188,8 @@ impl Thread {
// General strategy here:
// - If retrying won't help (e.g. invalid API key or payload too large), return None so we don't retry at all.
// - If it's a time-based issue (e.g. server overloaded, rate limit exceeded), try multiple times with exponential backoff.
// - If it's an issue that *might* be fixed by retrying (e.g. internal server error), just retry once.
// - If it's a time-based issue (e.g. server overloaded, rate limit exceeded), retry up to 4 times with exponential backoff.
// - If it's an issue that *might* be fixed by retrying (e.g. internal server error), retry up to 3 times.
match error {
HttpResponseError {
status_code: StatusCode::TOO_MANY_REQUESTS,
@ -2146,16 +2204,48 @@ impl Thread {
max_attempts: MAX_RETRY_ATTEMPTS,
})
}
UpstreamProviderError {
status,
retry_after,
..
} => match *status {
StatusCode::TOO_MANY_REQUESTS | StatusCode::SERVICE_UNAVAILABLE => {
Some(RetryStrategy::Fixed {
delay: retry_after.unwrap_or(BASE_RETRY_DELAY),
max_attempts: MAX_RETRY_ATTEMPTS,
})
}
StatusCode::INTERNAL_SERVER_ERROR => Some(RetryStrategy::Fixed {
delay: retry_after.unwrap_or(BASE_RETRY_DELAY),
// Internal Server Error could be anything, retry up to 3 times.
max_attempts: 3,
}),
status => {
// There is no StatusCode variant for the unofficial HTTP 529 ("The service is overloaded"),
// but we frequently get them in practice. See https://http.dev/529
if status.as_u16() == 529 {
Some(RetryStrategy::Fixed {
delay: retry_after.unwrap_or(BASE_RETRY_DELAY),
max_attempts: MAX_RETRY_ATTEMPTS,
})
} else {
Some(RetryStrategy::Fixed {
delay: retry_after.unwrap_or(BASE_RETRY_DELAY),
max_attempts: 2,
})
}
}
},
ApiInternalServerError { .. } => Some(RetryStrategy::Fixed {
delay: BASE_RETRY_DELAY,
max_attempts: 1,
max_attempts: 3,
}),
ApiReadResponseError { .. }
| HttpSend { .. }
| DeserializeResponse { .. }
| BadRequestFormat { .. } => Some(RetryStrategy::Fixed {
delay: BASE_RETRY_DELAY,
max_attempts: 1,
max_attempts: 3,
}),
// Retrying these errors definitely shouldn't help.
HttpResponseError {
@ -2163,24 +2253,30 @@ impl Thread {
StatusCode::PAYLOAD_TOO_LARGE | StatusCode::FORBIDDEN | StatusCode::UNAUTHORIZED,
..
}
| SerializeRequest { .. }
| BuildRequestBody { .. }
| PromptTooLarge { .. }
| AuthenticationError { .. }
| PermissionError { .. }
| NoApiKey { .. }
| ApiEndpointNotFound { .. }
| NoApiKey { .. } => None,
| PromptTooLarge { .. } => None,
// These errors might be transient, so retry them
SerializeRequest { .. } | BuildRequestBody { .. } => Some(RetryStrategy::Fixed {
delay: BASE_RETRY_DELAY,
max_attempts: 1,
}),
// Retry all other 4xx and 5xx errors once.
HttpResponseError { status_code, .. }
if status_code.is_client_error() || status_code.is_server_error() =>
{
Some(RetryStrategy::Fixed {
delay: BASE_RETRY_DELAY,
max_attempts: 1,
max_attempts: 3,
})
}
// Conservatively assume that any other errors are non-retryable
HttpResponseError { .. } | Other(..) => None,
HttpResponseError { .. } | Other(..) => Some(RetryStrategy::Fixed {
delay: BASE_RETRY_DELAY,
max_attempts: 2,
}),
}
}
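The strategy table in the hunk above can be sketched as a standalone function. The error and strategy types here are simplified stand-ins (not Zed's `LanguageModelCompletionError` or `RetryStrategy`), but the decision logic follows the diff: time-based failures (429/503 and the unofficial 529 "overloaded") get the full retry budget, other upstream errors get fewer attempts, and a too-large prompt is never retried.

```rust
use std::time::Duration;

const BASE_RETRY_DELAY: Duration = Duration::from_secs(5);
const MAX_RETRY_ATTEMPTS: u8 = 4;

// Simplified stand-ins for the real error/strategy types.
#[derive(Debug)]
enum CompletionError {
    RateLimited { retry_after: Option<Duration> },
    UpstreamProvider { status: u16, retry_after: Option<Duration> },
    PromptTooLarge,
    Other,
}

#[derive(Debug, PartialEq)]
struct RetryStrategy {
    delay: Duration,
    max_attempts: u8,
}

fn get_retry_strategy(error: &CompletionError) -> Option<RetryStrategy> {
    match error {
        // Rate limits are time-based: honor retry-after and use the full budget.
        CompletionError::RateLimited { retry_after } => Some(RetryStrategy {
            delay: retry_after.unwrap_or(BASE_RETRY_DELAY),
            max_attempts: MAX_RETRY_ATTEMPTS,
        }),
        // 429/503, plus the unofficial 529 ("service overloaded", https://http.dev/529).
        CompletionError::UpstreamProvider { status, retry_after }
            if matches!(*status, 429 | 503 | 529) =>
        {
            Some(RetryStrategy {
                delay: retry_after.unwrap_or(BASE_RETRY_DELAY),
                max_attempts: MAX_RETRY_ATTEMPTS,
            })
        }
        // Other upstream statuses *might* be transient; retry with a smaller budget.
        CompletionError::UpstreamProvider { retry_after, .. } => Some(RetryStrategy {
            delay: retry_after.unwrap_or(BASE_RETRY_DELAY),
            max_attempts: 2,
        }),
        // Retrying a too-large prompt cannot help.
        CompletionError::PromptTooLarge => None,
        // Conservatively retry unknown errors a couple of times.
        CompletionError::Other => Some(RetryStrategy {
            delay: BASE_RETRY_DELAY,
            max_attempts: 2,
        }),
    }
}
```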
@ -2193,6 +2289,23 @@ impl Thread {
window: Option<AnyWindowHandle>,
cx: &mut Context<Self>,
) -> bool {
// Store context for the Retry button
self.last_error_context = Some((model.clone(), intent));
// Only auto-retry if Burn Mode is enabled
if self.completion_mode != CompletionMode::Burn {
// Show error with retry options
cx.emit(ThreadEvent::ShowError(ThreadError::RetryableError {
message: format!(
"{}\n\nTo automatically retry when similar errors happen, enable Burn Mode.",
error
)
.into(),
can_enable_burn_mode: true,
}));
return false;
}
let Some(strategy) = strategy.or_else(|| Self::get_retry_strategy(error)) else {
return false;
};
@ -2273,6 +2386,13 @@ impl Thread {
// Stop generating since we're giving up on retrying.
self.pending_completions.clear();
// Show error alongside a Retry button, but no
// Enable Burn Mode button (since it's already enabled)
cx.emit(ThreadEvent::ShowError(ThreadError::RetryableError {
message: format!("Failed after retrying: {}", error).into(),
can_enable_burn_mode: false,
}));
false
}
}
@ -3183,6 +3303,11 @@ pub enum ThreadError {
header: SharedString,
message: SharedString,
},
#[error("Retryable error: {message}")]
RetryableError {
message: SharedString,
can_enable_burn_mode: bool,
},
}
#[derive(Debug, Clone)]
@ -3583,6 +3708,7 @@ fn main() {{
}
#[gpui::test]
#[ignore] // turn this test on when project_notifications tool is re-enabled
async fn test_stale_buffer_notification(cx: &mut TestAppContext) {
init_test_settings(cx);
@ -4137,6 +4263,11 @@ fn main() {{
let project = create_test_project(cx, json!({})).await;
let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
// Enable Burn Mode to allow retries
thread.update(cx, |thread, _| {
thread.set_completion_mode(CompletionMode::Burn);
});
// Create model that returns overloaded error
let model = Arc::new(ErrorInjector::new(TestError::Overloaded));
@ -4210,6 +4341,11 @@ fn main() {{
let project = create_test_project(cx, json!({})).await;
let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
// Enable Burn Mode to allow retries
thread.update(cx, |thread, _| {
thread.set_completion_mode(CompletionMode::Burn);
});
// Create model that returns internal server error
let model = Arc::new(ErrorInjector::new(TestError::InternalServerError));
@ -4231,7 +4367,7 @@ fn main() {{
let retry_state = thread.retry_state.as_ref().unwrap();
assert_eq!(retry_state.attempt, 1, "Should be first retry attempt");
assert_eq!(
retry_state.max_attempts, 1,
retry_state.max_attempts, 3,
"Should have correct max attempts"
);
});
@ -4247,8 +4383,9 @@ fn main() {{
if let MessageSegment::Text(text) = seg {
text.contains("internal")
&& text.contains("Fake")
&& text.contains("Retrying in")
&& !text.contains("attempt")
&& text.contains("Retrying")
&& text.contains("attempt 1 of 3")
&& text.contains("seconds")
} else {
false
}
@ -4286,6 +4423,11 @@ fn main() {{
let project = create_test_project(cx, json!({})).await;
let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
// Enable Burn Mode to allow retries
thread.update(cx, |thread, _| {
thread.set_completion_mode(CompletionMode::Burn);
});
// Create model that returns internal server error
let model = Arc::new(ErrorInjector::new(TestError::InternalServerError));
@ -4338,8 +4480,8 @@ fn main() {{
let retry_state = thread.retry_state.as_ref().unwrap();
assert_eq!(retry_state.attempt, 1, "Should be first retry attempt");
assert_eq!(
retry_state.max_attempts, 1,
"Internal server errors should only retry once"
retry_state.max_attempts, 3,
"Internal server errors should retry up to 3 times"
);
});
@ -4347,7 +4489,15 @@ fn main() {{
cx.executor().advance_clock(BASE_RETRY_DELAY);
cx.run_until_parked();
// Should have scheduled second retry - count retry messages
// Advance clock for second retry
cx.executor().advance_clock(BASE_RETRY_DELAY);
cx.run_until_parked();
// Advance clock for third retry
cx.executor().advance_clock(BASE_RETRY_DELAY);
cx.run_until_parked();
// Should have completed all retries - count retry messages
let retry_count = thread.update(cx, |thread, _| {
thread
.messages
@ -4365,24 +4515,24 @@ fn main() {{
.count()
});
assert_eq!(
retry_count, 1,
"Should have only one retry for internal server errors"
retry_count, 3,
"Should have 3 retries for internal server errors"
);
// For internal server errors, we only retry once and then give up
// Check that retry_state is cleared after the single retry
// For internal server errors, we retry 3 times and then give up
// Check that retry_state is cleared after all retries
thread.read_with(cx, |thread, _| {
assert!(
thread.retry_state.is_none(),
"Retry state should be cleared after single retry"
"Retry state should be cleared after all retries"
);
});
// Verify total attempts (1 initial + 1 retry)
// Verify total attempts (1 initial + 3 retries)
assert_eq!(
*completion_count.lock(),
2,
"Should have attempted once plus 1 retry"
4,
"Should have attempted once plus 3 retries"
);
}
@ -4393,6 +4543,11 @@ fn main() {{
let project = create_test_project(cx, json!({})).await;
let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
// Enable Burn Mode to allow retries
thread.update(cx, |thread, _| {
thread.set_completion_mode(CompletionMode::Burn);
});
// Create model that returns overloaded error
let model = Arc::new(ErrorInjector::new(TestError::Overloaded));
@ -4479,6 +4634,11 @@ fn main() {{
let project = create_test_project(cx, json!({})).await;
let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
// Enable Burn Mode to allow retries
thread.update(cx, |thread, _| {
thread.set_completion_mode(CompletionMode::Burn);
});
// We'll use a wrapper to switch behavior after first failure
struct RetryTestModel {
inner: Arc<FakeLanguageModel>,
@ -4647,6 +4807,11 @@ fn main() {{
let project = create_test_project(cx, json!({})).await;
let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
// Enable Burn Mode to allow retries
thread.update(cx, |thread, _| {
thread.set_completion_mode(CompletionMode::Burn);
});
// Create a model that fails once then succeeds
struct FailOnceModel {
inner: Arc<FakeLanguageModel>,
@ -4808,6 +4973,11 @@ fn main() {{
let project = create_test_project(cx, json!({})).await;
let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
// Enable Burn Mode to allow retries
thread.update(cx, |thread, _| {
thread.set_completion_mode(CompletionMode::Burn);
});
// Create a model that returns rate limit error with retry_after
struct RateLimitModel {
inner: Arc<FakeLanguageModel>,
@ -5081,6 +5251,79 @@ fn main() {{
);
}
#[gpui::test]
async fn test_no_retry_without_burn_mode(cx: &mut TestAppContext) {
init_test_settings(cx);
let project = create_test_project(cx, json!({})).await;
let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
// Ensure we're in Normal mode (not Burn mode)
thread.update(cx, |thread, _| {
thread.set_completion_mode(CompletionMode::Normal);
});
// Track error events
let error_events = Arc::new(Mutex::new(Vec::new()));
let error_events_clone = error_events.clone();
let _subscription = thread.update(cx, |_, cx| {
cx.subscribe(&thread, move |_, _, event: &ThreadEvent, _| {
if let ThreadEvent::ShowError(error) = event {
error_events_clone.lock().push(error.clone());
}
})
});
// Create model that returns overloaded error
let model = Arc::new(ErrorInjector::new(TestError::Overloaded));
// Insert a user message
thread.update(cx, |thread, cx| {
thread.insert_user_message("Hello!", ContextLoadResult::default(), None, vec![], cx);
});
// Start completion
thread.update(cx, |thread, cx| {
thread.send_to_model(model.clone(), CompletionIntent::UserPrompt, None, cx);
});
cx.run_until_parked();
// Verify no retry state was created
thread.read_with(cx, |thread, _| {
assert!(
thread.retry_state.is_none(),
"Should not have retry state in Normal mode"
);
});
// Check that a retryable error was reported
let errors = error_events.lock();
assert!(!errors.is_empty(), "Should have received an error event");
if let ThreadError::RetryableError {
message: _,
can_enable_burn_mode,
} = &errors[0]
{
assert!(
*can_enable_burn_mode,
"Error should indicate burn mode can be enabled"
);
} else {
panic!("Expected RetryableError, got {:?}", errors[0]);
}
// Verify the thread is no longer generating
thread.read_with(cx, |thread, _| {
assert!(
!thread.is_generating(),
"Should not be generating after error without retry"
);
});
}
#[gpui::test]
async fn test_retry_cancelled_on_stop(cx: &mut TestAppContext) {
init_test_settings(cx);
@ -5088,6 +5331,11 @@ fn main() {{
let project = create_test_project(cx, json!({})).await;
let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
// Enable Burn Mode to allow retries
thread.update(cx, |thread, _| {
thread.set_completion_mode(CompletionMode::Burn);
});
// Create model that returns overloaded error
let model = Arc::new(ErrorInjector::new(TestError::Overloaded));


@ -1036,7 +1036,7 @@ impl ActiveThread {
.collect::<Vec<_>>()
.join("\n");
self.last_error = Some(ThreadError::Message {
header: "Error interacting with language model".into(),
header: "Error".into(),
message: error_message.into(),
});
}
@ -3722,8 +3722,11 @@ pub(crate) fn open_context(
AgentContextHandle::Thread(thread_context) => workspace.update(cx, |workspace, cx| {
if let Some(panel) = workspace.panel::<AgentPanel>(cx) {
panel.update(cx, |panel, cx| {
panel.open_thread(thread_context.thread.clone(), window, cx);
let thread = thread_context.thread.clone();
window.defer(cx, move |window, cx| {
panel.update(cx, |panel, cx| {
panel.open_thread(thread, window, cx);
});
});
}
}),
@ -3731,8 +3734,11 @@ pub(crate) fn open_context(
AgentContextHandle::TextThread(text_thread_context) => {
workspace.update(cx, |workspace, cx| {
if let Some(panel) = workspace.panel::<AgentPanel>(cx) {
panel.update(cx, |panel, cx| {
panel.open_prompt_editor(text_thread_context.context.clone(), window, cx)
let context = text_thread_context.context.clone();
window.defer(cx, move |window, cx| {
panel.update(cx, |panel, cx| {
panel.open_prompt_editor(context, window, cx)
});
});
}
})


@ -64,8 +64,9 @@ use theme::ThemeSettings;
use time::UtcOffset;
use ui::utils::WithRemSize;
use ui::{
Banner, Callout, CheckboxWithLabel, ContextMenu, ElevationIndex, KeyBinding, PopoverMenu,
PopoverMenuHandle, ProgressBar, Tab, Tooltip, Vector, VectorName, prelude::*,
Banner, Button, Callout, CheckboxWithLabel, ContextMenu, ElevationIndex, IconPosition,
KeyBinding, PopoverMenu, PopoverMenuHandle, ProgressBar, Tab, Tooltip, Vector, VectorName,
prelude::*,
};
use util::ResultExt as _;
use workspace::{
@ -2913,6 +2914,21 @@ impl AgentPanel {
.size(IconSize::Small)
.color(Color::Error);
let retry_button = Button::new("retry", "Retry")
.icon(IconName::RotateCw)
.icon_position(IconPosition::Start)
.on_click({
let thread = thread.clone();
move |_, window, cx| {
thread.update(cx, |thread, cx| {
thread.clear_last_error();
thread.thread().update(cx, |thread, cx| {
thread.retry_last_completion(Some(window.window_handle()), cx);
});
});
}
});
div()
.border_t_1()
.border_color(cx.theme().colors().border)
@ -2921,13 +2937,72 @@ impl AgentPanel {
.icon(icon)
.title(header)
.description(message.clone())
.primary_action(self.dismiss_error_button(thread, cx))
.secondary_action(self.create_copy_button(message_with_header))
.primary_action(retry_button)
.secondary_action(self.dismiss_error_button(thread, cx))
.tertiary_action(self.create_copy_button(message_with_header))
.bg_color(self.error_callout_bg(cx)),
)
.into_any_element()
}
fn render_retryable_error(
&self,
message: SharedString,
can_enable_burn_mode: bool,
thread: &Entity<ActiveThread>,
cx: &mut Context<Self>,
) -> AnyElement {
let icon = Icon::new(IconName::XCircle)
.size(IconSize::Small)
.color(Color::Error);
let retry_button = Button::new("retry", "Retry")
.icon(IconName::RotateCw)
.icon_position(IconPosition::Start)
.on_click({
let thread = thread.clone();
move |_, window, cx| {
thread.update(cx, |thread, cx| {
thread.clear_last_error();
thread.thread().update(cx, |thread, cx| {
thread.retry_last_completion(Some(window.window_handle()), cx);
});
});
}
});
let mut callout = Callout::new()
.icon(icon)
.title("Error")
.description(message.clone())
.bg_color(self.error_callout_bg(cx))
.primary_action(retry_button);
if can_enable_burn_mode {
let burn_mode_button = Button::new("enable_burn_retry", "Enable Burn Mode and Retry")
.icon(IconName::ZedBurnMode)
.icon_position(IconPosition::Start)
.on_click({
let thread = thread.clone();
move |_, window, cx| {
thread.update(cx, |thread, cx| {
thread.clear_last_error();
thread.thread().update(cx, |thread, cx| {
thread.enable_burn_mode_and_retry(Some(window.window_handle()), cx);
});
});
}
});
callout = callout.secondary_action(burn_mode_button);
}
div()
.border_t_1()
.border_color(cx.theme().colors().border)
.child(callout)
.into_any_element()
}
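`render_retryable_error` above uses a conditional-builder pattern: the "Enable Burn Mode and Retry" secondary action is only attached when `can_enable_burn_mode` is true. A minimal sketch, with `Callout` as a toy stand-in rather than `ui::Callout`:

```rust
// Toy stand-in for ui::Callout's builder API; actions are just labels here.
#[derive(Default, Debug)]
struct Callout {
    title: &'static str,
    actions: Vec<&'static str>,
}

impl Callout {
    fn title(mut self, t: &'static str) -> Self {
        self.title = t;
        self
    }
    fn primary_action(mut self, a: &'static str) -> Self {
        self.actions.push(a);
        self
    }
    fn secondary_action(mut self, a: &'static str) -> Self {
        self.actions.push(a);
        self
    }
}

// The secondary action is attached only when burn mode is still available,
// mirroring the `if can_enable_burn_mode` branch in the diff above.
fn retryable_error_callout(can_enable_burn_mode: bool) -> Callout {
    let mut callout = Callout::default().title("Error").primary_action("Retry");
    if can_enable_burn_mode {
        callout = callout.secondary_action("Enable Burn Mode and Retry");
    }
    callout
}
```

Rebinding `callout` in the `if` is the usual way to branch inside a consuming builder chain.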
fn render_prompt_editor(
&self,
context_editor: &Entity<TextThreadEditor>,
@ -3169,6 +3244,15 @@ impl Render for AgentPanel {
ThreadError::Message { header, message } => {
self.render_error_message(header, message, thread, cx)
}
ThreadError::RetryableError {
message,
can_enable_burn_mode,
} => self.render_retryable_error(
message,
can_enable_burn_mode,
thread,
cx,
),
})
.into_any(),
)


@ -12,6 +12,7 @@ use collections::HashMap;
use fs::FakeFs;
use futures::{FutureExt, future::LocalBoxFuture};
use gpui::{AppContext, TestAppContext, Timer};
use http_client::StatusCode;
use indoc::{formatdoc, indoc};
use language_model::{
LanguageModelRegistry, LanguageModelRequestTool, LanguageModelToolResult,
@ -1675,6 +1676,30 @@ async fn retry_on_rate_limit<R>(mut request: impl AsyncFnMut() -> Result<R>) ->
Timer::after(retry_after + jitter).await;
continue;
}
LanguageModelCompletionError::UpstreamProviderError {
status,
retry_after,
..
} => {
// Only retry for specific status codes
let should_retry = matches!(
*status,
StatusCode::TOO_MANY_REQUESTS | StatusCode::SERVICE_UNAVAILABLE
) || status.as_u16() == 529;
if !should_retry {
return Err(err.into());
}
// Use server-provided retry_after if available, otherwise use default
let retry_after = retry_after.unwrap_or(Duration::from_secs(5));
let jitter = retry_after.mul_f64(rand::thread_rng().gen_range(0.0..1.0));
eprintln!(
"Attempt #{attempt}: {err}. Retry after {retry_after:?} + jitter of {jitter:?}"
);
Timer::after(retry_after + jitter).await;
continue;
}
_ => return Err(err.into()),
},
Err(err) => return Err(err),
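The delay computation in the retry helper above (server-provided `retry_after`, defaulting to 5s, plus a random fraction of it as jitter) can be sketched as a pure function. Here `jitter_fraction` stands in for `rand::thread_rng().gen_range(0.0..1.0)` so the result is deterministic:

```rust
use std::time::Duration;

// Full jitter on top of the suggested delay: wait retry_after plus a random
// fraction (0..1) of it, so concurrent clients don't retry in lockstep.
fn delay_with_jitter(retry_after: Option<Duration>, jitter_fraction: f64) -> Duration {
    let base = retry_after.unwrap_or(Duration::from_secs(5));
    base + base.mul_f64(jitter_fraction)
}
```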

View file

@ -21,7 +21,7 @@ use futures::{
channel::oneshot, future::BoxFuture,
};
use gpui::{App, AsyncApp, Entity, Global, Task, WeakEntity, actions};
use http_client::{AsyncBody, HttpClient, HttpClientWithUrl};
use http_client::{AsyncBody, HttpClient, HttpClientWithUrl, http};
use parking_lot::RwLock;
use postage::watch;
use proxy::connect_proxy_stream;
@ -1123,6 +1123,7 @@ impl Client {
let http = self.http.clone();
let proxy = http.proxy().cloned();
let user_agent = http.user_agent().cloned();
let credentials = credentials.clone();
let rpc_url = self.rpc_url(http, release_channel);
let system_id = self.telemetry.system_id();
@ -1174,7 +1175,7 @@ impl Client {
// We then modify the request to add our desired headers.
let request_headers = request.headers_mut();
request_headers.insert(
"Authorization",
http::header::AUTHORIZATION,
HeaderValue::from_str(&credentials.authorization_header())?,
);
request_headers.insert(
@ -1186,6 +1187,9 @@ impl Client {
"x-zed-release-channel",
HeaderValue::from_str(release_channel.map(|r| r.dev_name()).unwrap_or("unknown"))?,
);
if let Some(user_agent) = user_agent {
request_headers.insert(http::header::USER_AGENT, user_agent);
}
if let Some(system_id) = system_id {
request_headers.insert("x-zed-system-id", HeaderValue::from_str(&system_id)?);
}
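The header-assembly change above can be sketched with std types only (the real code uses the `http` crate's `HeaderMap` and constants like `http::header::AUTHORIZATION` and `http::header::USER_AGENT`): Authorization and the release-channel header are always set, while User-Agent is only attached when the HTTP client reports one.

```rust
// Std-only sketch; header names are lowercased strings instead of the
// http crate's typed HeaderName constants.
fn build_headers(
    auth: &str,
    release_channel: Option<&str>,
    user_agent: Option<&str>,
) -> Vec<(String, String)> {
    let mut headers = vec![
        ("authorization".to_string(), auth.to_string()),
        (
            "x-zed-release-channel".to_string(),
            release_channel.unwrap_or("unknown").to_string(),
        ),
    ];
    // Optional headers are appended only when a value is available.
    if let Some(ua) = user_agent {
        headers.push(("user-agent".to_string(), ua.to_string()));
    }
    headers
}
```

Using the crate's header constants instead of string literals (as the diff does) avoids typos and gets canonical casing for free.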

View file

@ -1772,7 +1772,7 @@ impl Editor {
) -> Self {
debug_assert!(
display_map.is_none() || mode.is_minimap(),
"Providing a display map for a new editor is only intended for the minimap and might have unindended side effects otherwise!"
"Providing a display map for a new editor is only intended for the minimap and might have unintended side effects otherwise!"
);
let full_mode = mode.is_full();
@ -8193,8 +8193,7 @@ impl Editor {
return;
};
// Try to find a closest, enclosing node using tree-sitter that has a
// task
// Try to find a closest, enclosing node using tree-sitter that has a task
let Some((buffer, buffer_row, tasks)) = self
.find_enclosing_node_task(cx)
// Or find the task that's closest in row-distance.
@ -21732,11 +21731,11 @@ impl CodeActionProvider for Entity<Project> {
cx: &mut App,
) -> Task<Result<Vec<CodeAction>>> {
self.update(cx, |project, cx| {
let code_lens = project.code_lens(buffer, range.clone(), cx);
let code_lens_actions = project.code_lens_actions(buffer, range.clone(), cx);
let code_actions = project.code_actions(buffer, range, None, cx);
cx.background_spawn(async move {
let (code_lens, code_actions) = join(code_lens, code_actions).await;
Ok(code_lens
let (code_lens_actions, code_actions) = join(code_lens_actions, code_actions).await;
Ok(code_lens_actions
.context("code lens fetch")?
.into_iter()
.chain(code_actions.context("code action fetch")?)


@ -9570,6 +9570,74 @@ async fn test_document_format_during_save(cx: &mut TestAppContext) {
}
}
#[gpui::test]
async fn test_redo_after_noop_format(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.ensure_final_newline_on_save = Some(false);
});
let fs = FakeFs::new(cx.executor());
fs.insert_file(path!("/file.txt"), "foo".into()).await;
let project = Project::test(fs, [path!("/file.txt").as_ref()], cx).await;
let buffer = project
.update(cx, |project, cx| {
project.open_local_buffer(path!("/file.txt"), cx)
})
.await
.unwrap();
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
let (editor, cx) = cx.add_window_view(|window, cx| {
build_editor_with_project(project.clone(), buffer, window, cx)
});
editor.update_in(cx, |editor, window, cx| {
editor.change_selections(SelectionEffects::default(), window, cx, |s| {
s.select_ranges([0..0])
});
});
assert!(!cx.read(|cx| editor.is_dirty(cx)));
editor.update_in(cx, |editor, window, cx| {
editor.handle_input("\n", window, cx)
});
cx.run_until_parked();
save(&editor, &project, cx).await;
assert_eq!("\nfoo", editor.read_with(cx, |editor, cx| editor.text(cx)));
editor.update_in(cx, |editor, window, cx| {
editor.undo(&Default::default(), window, cx);
});
save(&editor, &project, cx).await;
assert_eq!("foo", editor.read_with(cx, |editor, cx| editor.text(cx)));
editor.update_in(cx, |editor, window, cx| {
editor.redo(&Default::default(), window, cx);
});
cx.run_until_parked();
assert_eq!("\nfoo", editor.read_with(cx, |editor, cx| editor.text(cx)));
async fn save(editor: &Entity<Editor>, project: &Entity<Project>, cx: &mut VisualTestContext) {
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(
SaveOptions {
format: true,
autosave: false,
},
project.clone(),
window,
cx,
)
})
.unwrap();
cx.executor().start_waiting();
save.await;
assert!(!cx.read(|cx| editor.is_dirty(cx)));
}
}
#[gpui::test]
async fn test_multibuffer_format_during_save(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@ -9955,8 +10023,14 @@ async fn test_autosave_with_dirty_buffers(cx: &mut TestAppContext) {
);
}
#[gpui::test]
async fn test_range_format_during_save(cx: &mut TestAppContext) {
async fn setup_range_format_test(
cx: &mut TestAppContext,
) -> (
Entity<Project>,
Entity<Editor>,
&mut gpui::VisualTestContext,
lsp::FakeLanguageServer,
) {
init_test(cx, |_| {});
let fs = FakeFs::new(cx.executor());
@ -9971,9 +10045,9 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
FakeLspAdapter {
capabilities: lsp::ServerCapabilities {
document_range_formatting_provider: Some(lsp::OneOf::Left(true)),
..Default::default()
..lsp::ServerCapabilities::default()
},
..Default::default()
..FakeLspAdapter::default()
},
);
@ -9988,14 +10062,22 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
let (editor, cx) = cx.add_window_view(|window, cx| {
build_editor_with_project(project.clone(), buffer, window, cx)
});
cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();
(project, editor, cx, fake_server)
}
#[gpui::test]
async fn test_range_format_on_save_success(cx: &mut TestAppContext) {
let (project, editor, cx, fake_server) = setup_range_format_test(cx).await;
editor.update_in(cx, |editor, window, cx| {
editor.set_text("one\ntwo\nthree\n", window, cx)
});
assert!(cx.read(|cx| editor.is_dirty(cx)));
cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(
@ -10030,13 +10112,18 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
"one, two\nthree\n"
);
assert!(!cx.read(|cx| editor.is_dirty(cx)));
}
#[gpui::test]
async fn test_range_format_on_save_timeout(cx: &mut TestAppContext) {
let (project, editor, cx, fake_server) = setup_range_format_test(cx).await;
editor.update_in(cx, |editor, window, cx| {
editor.set_text("one\ntwo\nthree\n", window, cx)
});
assert!(cx.read(|cx| editor.is_dirty(cx)));
// Ensure we can still save even if formatting hangs.
// Test that save still works when formatting hangs
fake_server.set_request_handler::<lsp::request::RangeFormatting, _, _>(
move |params, _| async move {
assert_eq!(
@ -10068,8 +10155,13 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
"one\ntwo\nthree\n"
);
assert!(!cx.read(|cx| editor.is_dirty(cx)));
}
// For non-dirty buffer, no formatting request should be sent
#[gpui::test]
async fn test_range_format_not_called_for_clean_buffer(cx: &mut TestAppContext) {
let (project, editor, cx, fake_server) = setup_range_format_test(cx).await;
// Buffer starts clean, no formatting should be requested
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(
@ -10090,6 +10182,12 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
.next();
cx.executor().start_waiting();
save.await;
cx.run_until_parked();
}
#[gpui::test]
async fn test_range_format_respects_language_tab_size_override(cx: &mut TestAppContext) {
let (project, editor, cx, fake_server) = setup_range_format_test(cx).await;
// Set Rust language override and assert overridden tabsize is sent to language server
update_test_language_settings(cx, |settings| {
@ -10103,7 +10201,7 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
});
editor.update_in(cx, |editor, window, cx| {
editor.set_text("somehting_new\n", window, cx)
editor.set_text("something_new\n", window, cx)
});
assert!(cx.read(|cx| editor.is_dirty(cx)));
let save = editor
@ -21188,16 +21286,32 @@ async fn test_apply_code_lens_actions_with_commands(cx: &mut gpui::TestAppContex
},
);
let (buffer, _handle) = project
.update(cx, |p, cx| {
p.open_local_buffer_with_lsp(path!("/dir/a.ts"), cx)
let editor = workspace
.update(cx, |workspace, window, cx| {
workspace.open_abs_path(
PathBuf::from(path!("/dir/a.ts")),
OpenOptions::default(),
window,
cx,
)
})
.unwrap()
.await
.unwrap()
.downcast::<Editor>()
.unwrap();
cx.executor().run_until_parked();
let fake_server = fake_language_servers.next().await.unwrap();
let buffer = editor.update(cx, |editor, cx| {
editor
.buffer()
.read(cx)
.as_singleton()
.expect("have opened a single file by path")
});
let buffer_snapshot = buffer.update(cx, |buffer, _| buffer.snapshot());
let anchor = buffer_snapshot.anchor_at(0, text::Bias::Left);
drop(buffer_snapshot);
@ -21255,7 +21369,7 @@ async fn test_apply_code_lens_actions_with_commands(cx: &mut gpui::TestAppContex
assert_eq!(
actions.len(),
1,
"Should have only one valid action for the 0..0 range"
"Should have only one valid action for the 0..0 range, got: {actions:#?}"
);
let action = actions[0].clone();
let apply = project.update(cx, |project, cx| {
@ -21301,7 +21415,7 @@ async fn test_apply_code_lens_actions_with_commands(cx: &mut gpui::TestAppContex
.into_iter()
.collect(),
),
..Default::default()
..lsp::WorkspaceEdit::default()
},
},
)
@ -21324,6 +21438,38 @@ async fn test_apply_code_lens_actions_with_commands(cx: &mut gpui::TestAppContex
buffer.undo(cx);
assert_eq!(buffer.text(), "a");
});
let actions_after_edits = cx
.update_window(*workspace, |_, window, cx| {
project.code_actions(&buffer, anchor..anchor, window, cx)
})
.unwrap()
.await
.unwrap();
assert_eq!(
actions, actions_after_edits,
"For the same selection, same code lens actions should be returned"
);
let _responses =
fake_server.set_request_handler::<lsp::request::CodeLensRequest, _, _>(|_, _| async move {
panic!("No more code lens requests are expected");
});
editor.update_in(cx, |editor, window, cx| {
editor.select_all(&SelectAll, window, cx);
});
cx.executor().run_until_parked();
let new_actions = cx
.update_window(*workspace, |_, window, cx| {
project.code_actions(&buffer, anchor..anchor, window, cx)
})
.unwrap()
.await
.unwrap();
assert_eq!(
actions, new_actions,
"Code lens actions are queried for the same range and should return the same set, now without additional LSP queries"
);
}
#[gpui::test]
@ -22708,7 +22854,7 @@ pub(crate) fn init_test(cx: &mut TestAppContext, f: fn(&mut AllLanguageSettingsC
workspace::init_settings(cx);
crate::init(cx);
});
zlog::init_test();
update_test_language_settings(cx, f);
}


@ -6,7 +6,7 @@ use gpui::{Hsla, Rgba};
use itertools::Itertools;
use language::point_from_lsp;
use multi_buffer::Anchor;
use project::{DocumentColor, lsp_store::ColorFetchStrategy};
use project::{DocumentColor, lsp_store::LspFetchStrategy};
use settings::Settings as _;
use text::{Bias, BufferId, OffsetRangeExt as _};
use ui::{App, Context, Window};
@ -180,9 +180,9 @@ impl Editor {
.filter_map(|buffer| {
let buffer_id = buffer.read(cx).remote_id();
let fetch_strategy = if ignore_cache {
ColorFetchStrategy::IgnoreCache
LspFetchStrategy::IgnoreCache
} else {
ColorFetchStrategy::UseCache {
LspFetchStrategy::UseCache {
known_cache_version: self.colors.as_ref().and_then(|colors| {
Some(colors.buffer_colors.get(&buffer_id)?.cache_version_used)
}),
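The rename from `ColorFetchStrategy` to `LspFetchStrategy` generalizes the cache-or-fetch decision this hunk feeds with a `known_cache_version`. A minimal sketch of that decision (the enum shape mirrors the diff; `should_query` and the version counter are illustrative):

```rust
// Cache-or-fetch decision modeled after the renamed `LspFetchStrategy`:
// skip the LSP request entirely when the cached version still matches.
#[derive(Debug, PartialEq)]
enum LspFetchStrategy {
    IgnoreCache,
    UseCache { known_cache_version: Option<usize> },
}

// Illustrative helper: query only when the cache cannot satisfy the request.
fn should_query(strategy: &LspFetchStrategy, current_version: usize) -> bool {
    match strategy {
        LspFetchStrategy::IgnoreCache => true,
        LspFetchStrategy::UseCache { known_cache_version } => {
            *known_cache_version != Some(current_version)
        }
    }
}

fn main() {
    assert!(should_query(&LspFetchStrategy::IgnoreCache, 3));
    // A matching cache version means no new request is issued.
    assert!(!should_query(
        &LspFetchStrategy::UseCache { known_cache_version: Some(3) },
        3
    ));
    assert!(should_query(
        &LspFetchStrategy::UseCache { known_cache_version: None },
        3
    ));
}
```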


@ -12,7 +12,7 @@ use crate::{
};
pub use autoscroll::{Autoscroll, AutoscrollStrategy};
use core::fmt::Debug;
use gpui::{App, Axis, Context, Global, Pixels, Task, Window, point, px};
use gpui::{Along, App, Axis, Context, Global, Pixels, Task, Window, point, px};
use language::language_settings::{AllLanguageSettings, SoftWrap};
use language::{Bias, Point};
pub use scroll_amount::ScrollAmount;
@ -49,14 +49,14 @@ impl ScrollAnchor {
}
pub fn scroll_position(&self, snapshot: &DisplaySnapshot) -> gpui::Point<f32> {
let mut scroll_position = self.offset;
if self.anchor == Anchor::min() {
scroll_position.y = 0.;
} else {
let scroll_top = self.anchor.to_display_point(snapshot).row().as_f32();
scroll_position.y += scroll_top;
}
scroll_position
self.offset.apply_along(Axis::Vertical, |offset| {
if self.anchor == Anchor::min() {
0.
} else {
let scroll_top = self.anchor.to_display_point(snapshot).row().as_f32();
(offset + scroll_top).max(0.)
}
})
}
pub fn top_row(&self, buffer: &MultiBufferSnapshot) -> u32 {
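The `scroll_position` rewrite above routes the vertical offset through `Along::apply_along` and clamps it at zero. A self-contained sketch of the same clamping logic, with a hypothetical `Point2` standing in for gpui's `Point<f32>`:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct Point2 {
    x: f32,
    y: f32,
}

impl Point2 {
    // Hypothetical analogue of gpui's `apply_along(Axis::Vertical, ..)`:
    // transform only the vertical component, leaving x untouched.
    fn apply_vertical(self, f: impl FnOnce(f32) -> f32) -> Point2 {
        Point2 { x: self.x, y: f(self.y) }
    }
}

fn scroll_position(offset: Point2, anchor_is_min: bool, scroll_top: f32) -> Point2 {
    offset.apply_vertical(|y| {
        if anchor_is_min {
            0.
        } else {
            // Never scroll above the top of the buffer.
            (y + scroll_top).max(0.)
        }
    })
}

fn main() {
    // Anchored at the buffer start: vertical position pinned to 0.
    assert_eq!(
        scroll_position(Point2 { x: 3., y: -5. }, true, 10.),
        Point2 { x: 3., y: 0. }
    );
    // Anchored mid-buffer: offset plus anchor row, clamped at zero.
    assert_eq!(
        scroll_position(Point2 { x: 3., y: 2. }, false, 10.),
        Point2 { x: 3., y: 12. }
    );
}
```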


@ -422,6 +422,13 @@ impl AppContext for ExampleContext {
self.app.update_entity(handle, update)
}
fn as_mut<'a, T>(&'a mut self, handle: &Entity<T>) -> Self::Result<gpui::GpuiBorrow<'a, T>>
where
T: 'static,
{
self.app.as_mut(handle)
}
fn read_entity<T, R>(
&self,
handle: &Entity<T>,


@ -102,7 +102,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn language_server_initialization_options(
@ -127,7 +127,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn language_server_workspace_configuration(
@ -150,7 +150,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn language_server_additional_initialization_options(
@ -175,7 +175,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn language_server_additional_workspace_configuration(
@ -200,7 +200,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn labels_for_completions(
@ -226,7 +226,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn labels_for_symbols(
@ -252,7 +252,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn complete_slash_command_argument(
@ -271,7 +271,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn run_slash_command(
@ -297,7 +297,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn context_server_command(
@ -316,7 +316,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn context_server_configuration(
@ -343,7 +343,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn suggest_docs_packages(&self, provider: Arc<str>) -> Result<Vec<String>> {
@ -358,7 +358,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn index_docs(
@ -384,7 +384,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn get_dap_binary(
@ -406,7 +406,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn dap_request_kind(
&self,
@ -423,7 +423,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn dap_config_to_scenario(&self, config: ZedDebugConfig) -> Result<DebugScenario> {
@ -437,7 +437,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn dap_locator_create_scenario(
@ -461,7 +461,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
async fn run_dap_locator(
&self,
@ -477,7 +477,7 @@ impl extension::Extension for WasmExtension {
}
.boxed()
})
.await
.await?
}
}
@ -739,7 +739,7 @@ impl WasmExtension {
.with_context(|| format!("failed to load wasm extension {}", manifest.id))
}
pub async fn call<T, Fn>(&self, f: Fn) -> T
pub async fn call<T, Fn>(&self, f: Fn) -> Result<T>
where
T: 'static + Send,
Fn: 'static
@ -755,8 +755,19 @@ impl WasmExtension {
}
.boxed()
}))
.expect("wasm extension channel should not be closed yet");
return_rx.await.expect("wasm extension channel")
.map_err(|_| {
anyhow!(
"wasm extension channel should not be closed yet, extension {} (id {})",
self.manifest.name,
self.manifest.id,
)
})?;
return_rx.await.with_context(|| {
format!(
"wasm extension channel, extension {} (id {})",
self.manifest.name, self.manifest.id,
)
})
}
}
@ -777,8 +788,19 @@ impl WasmState {
}
.boxed_local()
}))
.expect("main thread message channel should not be closed yet");
async move { return_rx.await.expect("main thread message channel") }
.unwrap_or_else(|_| {
panic!(
"main thread message channel should not be closed yet, extension {} (id {})",
self.manifest.name, self.manifest.id,
)
});
let name = self.manifest.name.clone();
let id = self.manifest.id.clone();
async move {
return_rx.await.unwrap_or_else(|_| {
panic!("main thread message channel, extension {name} (id {id})")
})
}
}
fn work_dir(&self) -> PathBuf {
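The `.await` → `.await?` changes above work because `call` now returns `Result<T>` and the channel `.expect(..)` calls became errors naming the extension. A standalone sketch of that panic-to-error conversion using `std::sync::mpsc` (all names here are illustrative, not the zed/wasmtime APIs):

```rust
use std::sync::mpsc;
use std::thread;

type Job = Box<dyn FnOnce() + Send>;

// Instead of panicking when the worker channel is closed (e.g. the extension
// was unloaded in a race), return a descriptive error to the caller.
fn call_worker(tx: &mpsc::Sender<Job>, name: &str) -> Result<i32, String> {
    let (return_tx, return_rx) = mpsc::channel();
    tx.send(Box::new(move || {
        // The "extension work" runs on the worker thread.
        let _ = return_tx.send(42);
    }))
    .map_err(|_| format!("wasm extension channel closed, extension {name}"))?;
    return_rx
        .recv()
        .map_err(|_| format!("worker dropped the reply channel, extension {name}"))
}

fn main() {
    let (tx, rx) = mpsc::channel::<Job>();
    let worker = thread::spawn(move || {
        for job in rx {
            job();
        }
    });
    assert_eq!(call_worker(&tx, "demo"), Ok(42));

    // A channel whose receiver is gone now yields an Err instead of a panic.
    let (dead_tx, _) = mpsc::channel::<Job>();
    assert!(call_worker(&dead_tx, "demo").is_err());

    drop(tx);
    worker.join().unwrap();
}
```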


@ -126,7 +126,7 @@ mod macos {
"ContentMask".into(),
"Uniforms".into(),
"AtlasTile".into(),
"PathInputIndex".into(),
"PathRasterizationInputIndex".into(),
"PathVertex_ScaledPixels".into(),
"ShadowInputIndex".into(),
"Shadow".into(),


@ -1,13 +1,9 @@
use gpui::{
Application, Background, Bounds, ColorSpace, Context, MouseDownEvent, Path, PathBuilder,
PathStyle, Pixels, Point, Render, SharedString, StrokeOptions, Window, WindowBounds,
WindowOptions, canvas, div, linear_color_stop, linear_gradient, point, prelude::*, px, rgb,
size,
PathStyle, Pixels, Point, Render, SharedString, StrokeOptions, Window, WindowOptions, canvas,
div, linear_color_stop, linear_gradient, point, prelude::*, px, rgb, size,
};
const DEFAULT_WINDOW_WIDTH: Pixels = px(1024.0);
const DEFAULT_WINDOW_HEIGHT: Pixels = px(768.0);
struct PaintingViewer {
default_lines: Vec<(Path<Pixels>, Background)>,
lines: Vec<Vec<Point<Pixels>>>,
@ -151,6 +147,8 @@ impl PaintingViewer {
px(320.0 + (i as f32 * 10.0).sin() * 40.0),
));
}
let path = builder.build().unwrap();
lines.push((path, gpui::green().into()));
Self {
default_lines: lines.clone(),
@ -185,13 +183,9 @@ fn button(
}
impl Render for PaintingViewer {
fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
window.request_animation_frame();
fn render(&mut self, _: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let default_lines = self.default_lines.clone();
let lines = self.lines.clone();
let window_size = window.bounds().size;
let scale = window_size.width / DEFAULT_WINDOW_WIDTH;
let dashed = self.dashed;
div()
@ -228,7 +222,7 @@ impl Render for PaintingViewer {
move |_, _, _| {},
move |_, _, window, _| {
for (path, color) in default_lines {
window.paint_path(path.clone().scale(scale), color);
window.paint_path(path, color);
}
for points in lines {
@ -304,11 +298,6 @@ fn main() {
cx.open_window(
WindowOptions {
focus: true,
window_bounds: Some(WindowBounds::Windowed(Bounds::centered(
None,
size(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT),
cx,
))),
..Default::default()
},
|window, cx| cx.new(|cx| PaintingViewer::new(window, cx)),


@ -448,15 +448,23 @@ impl App {
}
pub(crate) fn update<R>(&mut self, update: impl FnOnce(&mut Self) -> R) -> R {
self.pending_updates += 1;
self.start_update();
let result = update(self);
self.finish_update();
result
}
pub(crate) fn start_update(&mut self) {
self.pending_updates += 1;
}
pub(crate) fn finish_update(&mut self) {
if !self.flushing_effects && self.pending_updates == 1 {
self.flushing_effects = true;
self.flush_effects();
self.flushing_effects = false;
}
self.pending_updates -= 1;
result
}
/// Arrange a callback to be invoked when the given entity calls `notify` on its respective context.
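The split of `update` into `start_update`/`finish_update` lets `GpuiBorrow` hold an update open across its lifetime while preserving the invariant that effects flush only when the outermost update ends. A minimal sketch of that reentrancy counter (field and method names mirror the diff; the effect queue is simplified):

```rust
struct App {
    pending_updates: usize,
    flushing_effects: bool,
    pending_effects: Vec<&'static str>,
    flushed: Vec<&'static str>,
}

impl App {
    fn start_update(&mut self) {
        self.pending_updates += 1;
    }

    fn finish_update(&mut self) {
        // Flush only when this is the outermost update and we are not
        // already inside a flush.
        if !self.flushing_effects && self.pending_updates == 1 {
            self.flushing_effects = true;
            while let Some(effect) = self.pending_effects.pop() {
                self.flushed.push(effect);
            }
            self.flushing_effects = false;
        }
        self.pending_updates -= 1;
    }

    fn update<R>(&mut self, f: impl FnOnce(&mut Self) -> R) -> R {
        self.start_update();
        let result = f(self);
        self.finish_update();
        result
    }
}

fn main() {
    let mut app = App {
        pending_updates: 0,
        flushing_effects: false,
        pending_effects: Vec::new(),
        flushed: Vec::new(),
    };
    app.update(|app| {
        app.pending_effects.push("notify");
        // Nested update: the counter is 2, so nothing flushes yet.
        app.update(|app| app.pending_effects.push("emit"));
        assert!(app.flushed.is_empty());
    });
    // The outermost finish_update flushed both queued effects.
    assert_eq!(app.flushed.len(), 2);
}
```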
@ -868,7 +876,6 @@ impl App {
loop {
self.release_dropped_entities();
self.release_dropped_focus_handles();
if let Some(effect) = self.pending_effects.pop_front() {
match effect {
Effect::Notify { emitter } => {
@ -1819,6 +1826,13 @@ impl AppContext for App {
})
}
fn as_mut<'a, T>(&'a mut self, handle: &Entity<T>) -> GpuiBorrow<'a, T>
where
T: 'static,
{
GpuiBorrow::new(handle.clone(), self)
}
fn read_entity<T, R>(
&self,
handle: &Entity<T>,
@ -2007,6 +2021,10 @@ impl HttpClient for NullHttpClient {
.boxed()
}
fn user_agent(&self) -> Option<&http_client::http::HeaderValue> {
None
}
fn proxy(&self) -> Option<&Url> {
None
}
@ -2015,3 +2033,79 @@ impl HttpClient for NullHttpClient {
type_name::<Self>()
}
}
/// A mutable reference to an entity owned by GPUI
pub struct GpuiBorrow<'a, T> {
inner: Option<Lease<T>>,
app: &'a mut App,
}
impl<'a, T: 'static> GpuiBorrow<'a, T> {
fn new(inner: Entity<T>, app: &'a mut App) -> Self {
app.start_update();
let lease = app.entities.lease(&inner);
Self {
inner: Some(lease),
app,
}
}
}
impl<'a, T: 'static> std::borrow::Borrow<T> for GpuiBorrow<'a, T> {
fn borrow(&self) -> &T {
self.inner.as_ref().unwrap().borrow()
}
}
impl<'a, T: 'static> std::borrow::BorrowMut<T> for GpuiBorrow<'a, T> {
fn borrow_mut(&mut self) -> &mut T {
self.inner.as_mut().unwrap().borrow_mut()
}
}
impl<'a, T> Drop for GpuiBorrow<'a, T> {
fn drop(&mut self) {
let lease = self.inner.take().unwrap();
self.app.notify(lease.id);
self.app.entities.end_lease(lease);
self.app.finish_update();
}
}
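`GpuiBorrow` is a Drop guard: constructing it opens an update and leases the entity out of the map; dropping it notifies observers, returns the lease, and closes the update. The lease/guard shape can be sketched with a plain `HashMap` store (hypothetical types, not the gpui `EntityMap`):

```rust
use std::collections::HashMap;

struct Store {
    entities: HashMap<u64, String>,
    notifications: Vec<u64>,
}

// Guard that "leases" a value out of the store while borrowed.
struct Borrowed<'a> {
    id: u64,
    value: Option<String>,
    store: &'a mut Store,
}

impl Store {
    fn borrow_entity(&mut self, id: u64) -> Borrowed<'_> {
        let value = self.entities.remove(&id); // take the value out (the lease)
        Borrowed { id, value, store: self }
    }
}

impl Borrowed<'_> {
    fn value_mut(&mut self) -> &mut String {
        self.value.as_mut().expect("entity is leased")
    }
}

impl Drop for Borrowed<'_> {
    fn drop(&mut self) {
        // End the lease: record a notification and put the value back.
        self.store.notifications.push(self.id);
        self.store
            .entities
            .insert(self.id, self.value.take().expect("entity is leased"));
    }
}

fn main() {
    let mut store = Store {
        entities: HashMap::from([(1, "off".to_string())]),
        notifications: Vec::new(),
    };
    {
        let mut borrowed = store.borrow_entity(1);
        *borrowed.value_mut() = "on".to_string();
    } // guard dropped here: value restored, observers notified
    assert_eq!(store.entities[&1], "on");
    assert_eq!(store.notifications, vec![1]);
}
```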
#[cfg(test)]
mod test {
use std::{cell::RefCell, rc::Rc};
use crate::{AppContext, TestAppContext};
#[test]
fn test_gpui_borrow() {
let cx = TestAppContext::single();
let observation_count = Rc::new(RefCell::new(0));
let state = cx.update(|cx| {
let state = cx.new(|_| false);
cx.observe(&state, {
let observation_count = observation_count.clone();
move |_, _| {
let mut count = observation_count.borrow_mut();
*count += 1;
}
})
.detach();
state
});
cx.update(|cx| {
// Using the fully qualified call syntax so that we don't clobber the borrow_mut above
*std::borrow::BorrowMut::borrow_mut(&mut state.as_mut(cx)) = true;
});
cx.update(|cx| {
state.write(cx, false);
});
assert_eq!(*observation_count.borrow(), 2);
}
}


@ -3,7 +3,7 @@ use crate::{
Entity, EventEmitter, Focusable, ForegroundExecutor, Global, PromptButton, PromptLevel, Render,
Reservation, Result, Subscription, Task, VisualContext, Window, WindowHandle,
};
use anyhow::Context as _;
use anyhow::{Context as _, anyhow};
use derive_more::{Deref, DerefMut};
use futures::channel::oneshot;
use std::{future::Future, rc::Weak};
@ -58,6 +58,15 @@ impl AppContext for AsyncApp {
Ok(app.update_entity(handle, update))
}
fn as_mut<'a, T>(&'a mut self, _handle: &Entity<T>) -> Self::Result<super::GpuiBorrow<'a, T>>
where
T: 'static,
{
Err(anyhow!(
"Cannot call as_mut from an async context. Try calling update() first"
))
}
fn read_entity<T, R>(
&self,
handle: &Entity<T>,
@ -364,6 +373,15 @@ impl AppContext for AsyncWindowContext {
.update(self, |_, _, cx| cx.update_entity(handle, update))
}
fn as_mut<'a, T>(&'a mut self, _: &Entity<T>) -> Self::Result<super::GpuiBorrow<'a, T>>
where
T: 'static,
{
Err(anyhow!(
"Cannot use as_mut() from an async context; call `update` first"
))
}
fn read_entity<T, R>(
&self,
handle: &Entity<T>,


@ -726,6 +726,13 @@ impl<T> AppContext for Context<'_, T> {
self.app.update_entity(handle, update)
}
fn as_mut<'a, E>(&'a mut self, handle: &Entity<E>) -> Self::Result<super::GpuiBorrow<'a, E>>
where
E: 'static,
{
self.app.as_mut(handle)
}
fn read_entity<U, R>(
&self,
handle: &Entity<U>,


@ -1,4 +1,4 @@
use crate::{App, AppContext, VisualContext, Window, seal::Sealed};
use crate::{App, AppContext, GpuiBorrow, VisualContext, Window, seal::Sealed};
use anyhow::{Context as _, Result};
use collections::FxHashSet;
use derive_more::{Deref, DerefMut};
@ -105,7 +105,7 @@ impl EntityMap {
/// Move an entity to the stack.
#[track_caller]
pub fn lease<'a, T>(&mut self, pointer: &'a Entity<T>) -> Lease<'a, T> {
pub fn lease<T>(&mut self, pointer: &Entity<T>) -> Lease<T> {
self.assert_valid_context(pointer);
let mut accessed_entities = self.accessed_entities.borrow_mut();
accessed_entities.insert(pointer.entity_id);
@ -117,15 +117,14 @@ impl EntityMap {
);
Lease {
entity,
pointer,
id: pointer.entity_id,
entity_type: PhantomData,
}
}
/// Returns a leased entity to the map, ending the lease.
pub fn end_lease<T>(&mut self, mut lease: Lease<T>) {
self.entities
.insert(lease.pointer.entity_id, lease.entity.take().unwrap());
self.entities.insert(lease.id, lease.entity.take().unwrap());
}
pub fn read<T: 'static>(&self, entity: &Entity<T>) -> &T {
@ -187,13 +186,13 @@ fn double_lease_panic<T>(operation: &str) -> ! {
)
}
pub(crate) struct Lease<'a, T> {
pub(crate) struct Lease<T> {
entity: Option<Box<dyn Any>>,
pub pointer: &'a Entity<T>,
pub id: EntityId,
entity_type: PhantomData<T>,
}
impl<T: 'static> core::ops::Deref for Lease<'_, T> {
impl<T: 'static> core::ops::Deref for Lease<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
@ -201,13 +200,13 @@ impl<T: 'static> core::ops::Deref for Lease<'_, T> {
}
}
impl<T: 'static> core::ops::DerefMut for Lease<'_, T> {
impl<T: 'static> core::ops::DerefMut for Lease<T> {
fn deref_mut(&mut self) -> &mut Self::Target {
self.entity.as_mut().unwrap().downcast_mut().unwrap()
}
}
impl<T> Drop for Lease<'_, T> {
impl<T> Drop for Lease<T> {
fn drop(&mut self) {
if self.entity.is_some() && !panicking() {
panic!("Leases must be ended with EntityMap::end_lease")
@ -437,6 +436,19 @@ impl<T: 'static> Entity<T> {
cx.update_entity(self, update)
}
/// Returns a mutable borrow of the entity referenced by this handle.
pub fn as_mut<'a, C: AppContext>(&self, cx: &'a mut C) -> C::Result<GpuiBorrow<'a, T>> {
cx.as_mut(self)
}
/// Replaces the value of the entity referenced by this handle and notifies observers.
pub fn write<C: AppContext>(&self, cx: &mut C, value: T) -> C::Result<()> {
self.update(cx, |entity, cx| {
*entity = value;
cx.notify();
})
}
/// Updates the entity referenced by this handle with the given function if
/// the referenced entity still exists, within a visual context that has a window.
/// Returns an error if the entity has been released.


@ -9,6 +9,7 @@ use crate::{
};
use anyhow::{anyhow, bail};
use futures::{Stream, StreamExt, channel::oneshot};
use rand::{SeedableRng, rngs::StdRng};
use std::{cell::RefCell, future::Future, ops::Deref, rc::Rc, sync::Arc, time::Duration};
/// A TestAppContext is provided to tests created with `#[gpui::test]`, it provides
@ -63,6 +64,13 @@ impl AppContext for TestAppContext {
app.update_entity(handle, update)
}
fn as_mut<'a, T>(&'a mut self, _: &Entity<T>) -> Self::Result<super::GpuiBorrow<'a, T>>
where
T: 'static,
{
panic!("Cannot use as_mut with a test app context. Try calling update() first")
}
fn read_entity<T, R>(
&self,
handle: &Entity<T>,
@ -134,6 +142,12 @@ impl TestAppContext {
}
}
/// Create a single TestAppContext, for non-multi-client tests
pub fn single() -> Self {
let dispatcher = TestDispatcher::new(StdRng::from_entropy());
Self::build(dispatcher, None)
}
/// The name of the test function that created this `TestAppContext`
pub fn test_function_name(&self) -> Option<&'static str> {
self.fn_name
@ -914,6 +928,13 @@ impl AppContext for VisualTestContext {
self.cx.update_entity(handle, update)
}
fn as_mut<'a, T>(&'a mut self, handle: &Entity<T>) -> Self::Result<super::GpuiBorrow<'a, T>>
where
T: 'static,
{
self.cx.as_mut(handle)
}
fn read_entity<T, R>(
&self,
handle: &Entity<T>,


@ -39,7 +39,7 @@ use crate::{
use derive_more::{Deref, DerefMut};
pub(crate) use smallvec::SmallVec;
use std::{
any::Any,
any::{Any, type_name},
fmt::{self, Debug, Display},
mem, panic,
};
@ -220,14 +220,17 @@ impl<C: RenderOnce> Element for Component<C> {
window: &mut Window,
cx: &mut App,
) -> (LayoutId, Self::RequestLayoutState) {
let mut element = self
.component
.take()
.unwrap()
.render(window, cx)
.into_any_element();
let layout_id = element.request_layout(window, cx);
(layout_id, element)
window.with_global_id(ElementId::Name(type_name::<C>().into()), |_, window| {
let mut element = self
.component
.take()
.unwrap()
.render(window, cx)
.into_any_element();
let layout_id = element.request_layout(window, cx);
(layout_id, element)
})
}
fn prepaint(
@ -239,7 +242,9 @@ impl<C: RenderOnce> Element for Component<C> {
window: &mut Window,
cx: &mut App,
) {
element.prepaint(window, cx);
window.with_global_id(ElementId::Name(type_name::<C>().into()), |_, window| {
element.prepaint(window, cx);
})
}
fn paint(
@ -252,7 +257,9 @@ impl<C: RenderOnce> Element for Component<C> {
window: &mut Window,
cx: &mut App,
) {
element.paint(window, cx);
window.with_global_id(ElementId::Name(type_name::<C>().into()), |_, window| {
element.paint(window, cx);
})
}
}


@ -197,6 +197,11 @@ pub trait AppContext {
where
T: 'static;
/// Mutably borrow an entity in the app context.
fn as_mut<'a, T>(&'a mut self, handle: &Entity<T>) -> Self::Result<GpuiBorrow<'a, T>>
where
T: 'static;
/// Read an entity from the app context.
fn read_entity<T, R>(
&self,


@ -336,7 +336,10 @@ impl PathBuilder {
let v1 = buf.vertices[i1];
let v2 = buf.vertices[i2];
path.push_triangle((v0.into(), v1.into(), v2.into()));
path.push_triangle(
(v0.into(), v1.into(), v2.into()),
(point(0., 1.), point(0., 1.), point(0., 1.)),
);
}
path


@ -794,6 +794,7 @@ pub(crate) struct AtlasTextureId {
pub(crate) enum AtlasTextureKind {
Monochrome = 0,
Polychrome = 1,
Path = 2,
}
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]


@ -10,6 +10,8 @@ use etagere::BucketedAtlasAllocator;
use parking_lot::Mutex;
use std::{borrow::Cow, ops, sync::Arc};
pub(crate) const PATH_TEXTURE_FORMAT: gpu::TextureFormat = gpu::TextureFormat::R16Float;
pub(crate) struct BladeAtlas(Mutex<BladeAtlasState>);
struct PendingUpload {
@ -25,6 +27,7 @@ struct BladeAtlasState {
tiles_by_key: FxHashMap<AtlasKey, AtlasTile>,
initializations: Vec<AtlasTextureId>,
uploads: Vec<PendingUpload>,
path_sample_count: u32,
}
#[cfg(gles)]
@ -38,13 +41,13 @@ impl BladeAtlasState {
}
pub struct BladeTextureInfo {
#[allow(dead_code)]
pub size: gpu::Extent,
pub raw_view: gpu::TextureView,
pub msaa_view: Option<gpu::TextureView>,
}
impl BladeAtlas {
pub(crate) fn new(gpu: &Arc<gpu::Context>) -> Self {
pub(crate) fn new(gpu: &Arc<gpu::Context>, path_sample_count: u32) -> Self {
BladeAtlas(Mutex::new(BladeAtlasState {
gpu: Arc::clone(gpu),
upload_belt: BufferBelt::new(BufferBeltDescriptor {
@ -56,6 +59,7 @@ impl BladeAtlas {
tiles_by_key: Default::default(),
initializations: Vec::new(),
uploads: Vec::new(),
path_sample_count,
}))
}
@ -63,7 +67,6 @@ impl BladeAtlas {
self.0.lock().destroy();
}
#[allow(dead_code)]
pub(crate) fn clear_textures(&self, texture_kind: AtlasTextureKind) {
let mut lock = self.0.lock();
let textures = &mut lock.storage[texture_kind];
@ -72,6 +75,19 @@ impl BladeAtlas {
}
}
/// Allocate a rectangle and make it available for rendering immediately (without waiting for `before_frame`)
pub fn allocate_for_rendering(
&self,
size: Size<DevicePixels>,
texture_kind: AtlasTextureKind,
gpu_encoder: &mut gpu::CommandEncoder,
) -> AtlasTile {
let mut lock = self.0.lock();
let tile = lock.allocate(size, texture_kind);
lock.flush_initializations(gpu_encoder);
tile
}
pub fn before_frame(&self, gpu_encoder: &mut gpu::CommandEncoder) {
let mut lock = self.0.lock();
lock.flush(gpu_encoder);
@ -93,6 +109,7 @@ impl BladeAtlas {
depth: 1,
},
raw_view: texture.raw_view,
msaa_view: texture.msaa_view,
}
}
}
@ -183,8 +200,48 @@ impl BladeAtlasState {
format = gpu::TextureFormat::Bgra8UnormSrgb;
usage = gpu::TextureUsage::COPY | gpu::TextureUsage::RESOURCE;
}
AtlasTextureKind::Path => {
format = PATH_TEXTURE_FORMAT;
usage = gpu::TextureUsage::COPY
| gpu::TextureUsage::RESOURCE
| gpu::TextureUsage::TARGET;
}
}
// We currently only enable MSAA for path textures.
let (msaa, msaa_view) = if self.path_sample_count > 1 && kind == AtlasTextureKind::Path {
let msaa = self.gpu.create_texture(gpu::TextureDesc {
name: "msaa path texture",
format,
size: gpu::Extent {
width: size.width.into(),
height: size.height.into(),
depth: 1,
},
array_layer_count: 1,
mip_level_count: 1,
sample_count: self.path_sample_count,
dimension: gpu::TextureDimension::D2,
usage: gpu::TextureUsage::TARGET,
external: None,
});
(
Some(msaa),
Some(self.gpu.create_texture_view(
msaa,
gpu::TextureViewDesc {
name: "msaa texture view",
format,
dimension: gpu::ViewDimension::D2,
subresources: &Default::default(),
},
)),
)
} else {
(None, None)
};
let raw = self.gpu.create_texture(gpu::TextureDesc {
name: "atlas",
format,
@ -222,6 +279,8 @@ impl BladeAtlasState {
format,
raw,
raw_view,
msaa,
msaa_view,
live_atlas_keys: 0,
};
@ -281,6 +340,7 @@ impl BladeAtlasState {
struct BladeAtlasStorage {
monochrome_textures: AtlasTextureList<BladeAtlasTexture>,
polychrome_textures: AtlasTextureList<BladeAtlasTexture>,
path_textures: AtlasTextureList<BladeAtlasTexture>,
}
impl ops::Index<AtlasTextureKind> for BladeAtlasStorage {
@ -289,6 +349,7 @@ impl ops::Index<AtlasTextureKind> for BladeAtlasStorage {
match kind {
crate::AtlasTextureKind::Monochrome => &self.monochrome_textures,
crate::AtlasTextureKind::Polychrome => &self.polychrome_textures,
crate::AtlasTextureKind::Path => &self.path_textures,
}
}
}
@ -298,6 +359,7 @@ impl ops::IndexMut<AtlasTextureKind> for BladeAtlasStorage {
match kind {
crate::AtlasTextureKind::Monochrome => &mut self.monochrome_textures,
crate::AtlasTextureKind::Polychrome => &mut self.polychrome_textures,
crate::AtlasTextureKind::Path => &mut self.path_textures,
}
}
}
@ -308,6 +370,7 @@ impl ops::Index<AtlasTextureId> for BladeAtlasStorage {
let textures = match id.kind {
crate::AtlasTextureKind::Monochrome => &self.monochrome_textures,
crate::AtlasTextureKind::Polychrome => &self.polychrome_textures,
crate::AtlasTextureKind::Path => &self.path_textures,
};
textures[id.index as usize].as_ref().unwrap()
}
@ -321,6 +384,9 @@ impl BladeAtlasStorage {
for mut texture in self.polychrome_textures.drain().flatten() {
texture.destroy(gpu);
}
for mut texture in self.path_textures.drain().flatten() {
texture.destroy(gpu);
}
}
}
@ -329,6 +395,8 @@ struct BladeAtlasTexture {
allocator: BucketedAtlasAllocator,
raw: gpu::Texture,
raw_view: gpu::TextureView,
msaa: Option<gpu::Texture>,
msaa_view: Option<gpu::TextureView>,
format: gpu::TextureFormat,
live_atlas_keys: u32,
}
@ -356,6 +424,12 @@ impl BladeAtlasTexture {
fn destroy(&mut self, gpu: &gpu::Context) {
gpu.destroy_texture(self.raw);
gpu.destroy_texture_view(self.raw_view);
if let Some(msaa) = self.msaa {
gpu.destroy_texture(msaa);
}
if let Some(msaa_view) = self.msaa_view {
gpu.destroy_texture_view(msaa_view);
}
}
fn bytes_per_pixel(&self) -> u8 {


@ -1,19 +1,24 @@
// Doing `if let` gives you nice scoping with passes/encoders
#![allow(irrefutable_let_patterns)]
use super::{BladeAtlas, BladeContext};
use super::{BladeAtlas, BladeContext, PATH_TEXTURE_FORMAT};
use crate::{
Background, Bounds, ContentMask, DevicePixels, GpuSpecs, MonochromeSprite, PathVertex,
PolychromeSprite, PrimitiveBatch, Quad, ScaledPixels, Scene, Shadow, Size, Underline,
AtlasTextureKind, AtlasTile, Background, Bounds, ContentMask, DevicePixels, GpuSpecs,
MonochromeSprite, Path, PathId, PathVertex, PolychromeSprite, PrimitiveBatch, Quad,
ScaledPixels, Scene, Shadow, Size, Underline,
};
use blade_graphics::{self as gpu};
use blade_graphics as gpu;
use blade_util::{BufferBelt, BufferBeltDescriptor};
use bytemuck::{Pod, Zeroable};
use collections::HashMap;
#[cfg(target_os = "macos")]
use media::core_video::CVMetalTextureCache;
use std::{mem, sync::Arc};
const MAX_FRAME_TIME_MS: u32 = 10000;
// Use 4x MSAA; all devices support it.
// https://developer.apple.com/documentation/metal/mtldevice/1433355-supportstexturesamplecount
const DEFAULT_PATH_SAMPLE_COUNT: u32 = 4;
#[repr(C)]
#[derive(Clone, Copy, Pod, Zeroable)]
@ -61,9 +66,16 @@ struct ShaderShadowsData {
}
#[derive(blade_macros::ShaderData)]
struct ShaderPathsData {
struct ShaderPathRasterizationData {
globals: GlobalParams,
b_path_vertices: gpu::BufferPiece,
}
#[derive(blade_macros::ShaderData)]
struct ShaderPathsData {
globals: GlobalParams,
t_sprite: gpu::TextureView,
s_sprite: gpu::Sampler,
b_path_sprites: gpu::BufferPiece,
}
@ -103,27 +115,13 @@ struct ShaderSurfacesData {
struct PathSprite {
bounds: Bounds<ScaledPixels>,
color: Background,
}
/// Argument buffer layout for `draw_indirect` commands.
#[repr(C)]
#[derive(Copy, Clone, Debug, Default, Pod, Zeroable)]
pub struct DrawIndirectArgs {
/// The number of vertices to draw.
pub vertex_count: u32,
/// The number of instances to draw.
pub instance_count: u32,
/// The Index of the first vertex to draw.
pub first_vertex: u32,
/// The instance ID of the first instance to draw.
///
/// Has to be 0, unless [`Features::INDIRECT_FIRST_INSTANCE`](crate::Features::INDIRECT_FIRST_INSTANCE) is enabled.
pub first_instance: u32,
tile: AtlasTile,
}
struct BladePipelines {
quads: gpu::RenderPipeline,
shadows: gpu::RenderPipeline,
path_rasterization: gpu::RenderPipeline,
paths: gpu::RenderPipeline,
underlines: gpu::RenderPipeline,
mono_sprites: gpu::RenderPipeline,
@ -132,7 +130,7 @@ struct BladePipelines {
}
impl BladePipelines {
fn new(gpu: &gpu::Context, surface_info: gpu::SurfaceInfo, sample_count: u32) -> Self {
fn new(gpu: &gpu::Context, surface_info: gpu::SurfaceInfo, path_sample_count: u32) -> Self {
use gpu::ShaderData as _;
log::info!(
@ -180,10 +178,7 @@ impl BladePipelines {
depth_stencil: None,
fragment: Some(shader.at("fs_quad")),
color_targets,
multisample_state: gpu::MultisampleState {
sample_count,
..Default::default()
},
multisample_state: gpu::MultisampleState::default(),
}),
shadows: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "shadows",
@ -197,8 +192,26 @@ impl BladePipelines {
depth_stencil: None,
fragment: Some(shader.at("fs_shadow")),
color_targets,
multisample_state: gpu::MultisampleState::default(),
}),
path_rasterization: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "path_rasterization",
data_layouts: &[&ShaderPathRasterizationData::layout()],
vertex: shader.at("vs_path_rasterization"),
vertex_fetches: &[],
primitive: gpu::PrimitiveState {
topology: gpu::PrimitiveTopology::TriangleList,
..Default::default()
},
depth_stencil: None,
fragment: Some(shader.at("fs_path_rasterization")),
color_targets: &[gpu::ColorTargetState {
format: PATH_TEXTURE_FORMAT,
blend: Some(gpu::BlendState::ADDITIVE),
write_mask: gpu::ColorWrites::default(),
}],
multisample_state: gpu::MultisampleState {
sample_count,
sample_count: path_sample_count,
..Default::default()
},
}),
@ -208,16 +221,13 @@ impl BladePipelines {
vertex: shader.at("vs_path"),
vertex_fetches: &[],
primitive: gpu::PrimitiveState {
topology: gpu::PrimitiveTopology::TriangleList,
topology: gpu::PrimitiveTopology::TriangleStrip,
..Default::default()
},
depth_stencil: None,
fragment: Some(shader.at("fs_path")),
color_targets,
multisample_state: gpu::MultisampleState {
sample_count,
..Default::default()
},
multisample_state: gpu::MultisampleState::default(),
}),
underlines: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "underlines",
@ -231,10 +241,7 @@ impl BladePipelines {
depth_stencil: None,
fragment: Some(shader.at("fs_underline")),
color_targets,
multisample_state: gpu::MultisampleState {
sample_count,
..Default::default()
},
multisample_state: gpu::MultisampleState::default(),
}),
mono_sprites: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "mono-sprites",
@ -248,10 +255,7 @@ impl BladePipelines {
depth_stencil: None,
fragment: Some(shader.at("fs_mono_sprite")),
color_targets,
multisample_state: gpu::MultisampleState {
sample_count,
..Default::default()
},
multisample_state: gpu::MultisampleState::default(),
}),
poly_sprites: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "poly-sprites",
@ -265,10 +269,7 @@ impl BladePipelines {
depth_stencil: None,
fragment: Some(shader.at("fs_poly_sprite")),
color_targets,
multisample_state: gpu::MultisampleState {
sample_count,
..Default::default()
},
multisample_state: gpu::MultisampleState::default(),
}),
surfaces: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "surfaces",
@ -282,10 +283,7 @@ impl BladePipelines {
depth_stencil: None,
fragment: Some(shader.at("fs_surface")),
color_targets,
multisample_state: gpu::MultisampleState {
sample_count,
..Default::default()
},
multisample_state: gpu::MultisampleState::default(),
}),
}
}
@ -293,6 +291,7 @@ impl BladePipelines {
fn destroy(&mut self, gpu: &gpu::Context) {
gpu.destroy_render_pipeline(&mut self.quads);
gpu.destroy_render_pipeline(&mut self.shadows);
gpu.destroy_render_pipeline(&mut self.path_rasterization);
gpu.destroy_render_pipeline(&mut self.paths);
gpu.destroy_render_pipeline(&mut self.underlines);
gpu.destroy_render_pipeline(&mut self.mono_sprites);
@ -318,13 +317,12 @@ pub struct BladeRenderer {
last_sync_point: Option<gpu::SyncPoint>,
pipelines: BladePipelines,
instance_belt: BufferBelt,
path_tiles: HashMap<PathId, AtlasTile>,
atlas: Arc<BladeAtlas>,
atlas_sampler: gpu::Sampler,
#[cfg(target_os = "macos")]
core_video_texture_cache: CVMetalTextureCache,
sample_count: u32,
texture_msaa: Option<gpu::Texture>,
texture_view_msaa: Option<gpu::TextureView>,
path_sample_count: u32,
}
impl BladeRenderer {
@ -333,18 +331,6 @@ impl BladeRenderer {
window: &I,
config: BladeSurfaceConfig,
) -> anyhow::Result<Self> {
// workaround for https://github.com/zed-industries/zed/issues/26143
let sample_count = std::env::var("ZED_SAMPLE_COUNT")
.ok()
.or_else(|| std::env::var("ZED_PATH_SAMPLE_COUNT").ok())
.and_then(|v| v.parse().ok())
.or_else(|| {
[4, 2, 1]
.into_iter()
.find(|count| context.gpu.supports_texture_sample_count(*count))
})
.unwrap_or(1);
let surface_config = gpu::SurfaceConfig {
size: config.size,
usage: gpu::TextureUsage::TARGET,
@ -358,27 +344,22 @@ impl BladeRenderer {
.create_surface_configured(window, surface_config)
.map_err(|err| anyhow::anyhow!("Failed to create surface: {err:?}"))?;
let (texture_msaa, texture_view_msaa) = create_msaa_texture_if_needed(
&context.gpu,
surface.info().format,
config.size.width,
config.size.height,
sample_count,
)
.unzip();
let command_encoder = context.gpu.create_command_encoder(gpu::CommandEncoderDesc {
name: "main",
buffer_count: 2,
});
let pipelines = BladePipelines::new(&context.gpu, surface.info(), sample_count);
// workaround for https://github.com/zed-industries/zed/issues/26143
let path_sample_count = std::env::var("ZED_PATH_SAMPLE_COUNT")
.ok()
.and_then(|v| v.parse().ok())
.unwrap_or(DEFAULT_PATH_SAMPLE_COUNT);
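The `ZED_PATH_SAMPLE_COUNT` override above reduces to a parse-with-fallback. A sketch of that logic, factored so it can be tested without touching the process environment (`path_sample_count` taking an `Option<&str>` is an illustrative refactoring, not the code in the diff):

```rust
const DEFAULT_PATH_SAMPLE_COUNT: u32 = 4;

// Parse an optional override (e.g. the value of std::env::var), falling back
// to the default when it is absent or unparsable.
fn path_sample_count(env_override: Option<&str>) -> u32 {
    env_override
        .and_then(|v| v.parse().ok())
        .unwrap_or(DEFAULT_PATH_SAMPLE_COUNT)
}

fn main() {
    assert_eq!(path_sample_count(None), 4);
    assert_eq!(path_sample_count(Some("2")), 2);
    assert_eq!(path_sample_count(Some("not-a-number")), 4);
}
```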
let pipelines = BladePipelines::new(&context.gpu, surface.info(), path_sample_count);
let instance_belt = BufferBelt::new(BufferBeltDescriptor {
memory: gpu::Memory::Shared,
min_chunk_size: 0x1000,
alignment: 0x40, // Vulkan `minStorageBufferOffsetAlignment` on Intel Xe
});
let atlas = Arc::new(BladeAtlas::new(&context.gpu));
let atlas = Arc::new(BladeAtlas::new(&context.gpu, path_sample_count));
let atlas_sampler = context.gpu.create_sampler(gpu::SamplerDesc {
name: "atlas",
mag_filter: gpu::FilterMode::Linear,
@@ -402,13 +383,12 @@ impl BladeRenderer {
last_sync_point: None,
pipelines,
instance_belt,
path_tiles: HashMap::default(),
atlas,
atlas_sampler,
#[cfg(target_os = "macos")]
core_video_texture_cache,
sample_count,
texture_msaa,
texture_view_msaa,
path_sample_count,
})
}
@@ -461,24 +441,6 @@ impl BladeRenderer {
self.surface_config.size = gpu_size;
self.gpu
.reconfigure_surface(&mut self.surface, self.surface_config);
if let Some(texture_msaa) = self.texture_msaa {
self.gpu.destroy_texture(texture_msaa);
}
if let Some(texture_view_msaa) = self.texture_view_msaa {
self.gpu.destroy_texture_view(texture_view_msaa);
}
let (texture_msaa, texture_view_msaa) = create_msaa_texture_if_needed(
&self.gpu,
self.surface.info().format,
gpu_size.width,
gpu_size.height,
self.sample_count,
)
.unzip();
self.texture_msaa = texture_msaa;
self.texture_view_msaa = texture_view_msaa;
}
}
@@ -489,7 +451,8 @@ impl BladeRenderer {
self.gpu
.reconfigure_surface(&mut self.surface, self.surface_config);
self.pipelines.destroy(&self.gpu);
self.pipelines = BladePipelines::new(&self.gpu, self.surface.info(), self.sample_count);
self.pipelines =
BladePipelines::new(&self.gpu, self.surface.info(), self.path_sample_count);
}
}
@@ -527,6 +490,80 @@ impl BladeRenderer {
objc2::rc::Retained::as_ptr(&self.surface.metal_layer()) as *mut _
}
#[profiling::function]
fn rasterize_paths(&mut self, paths: &[Path<ScaledPixels>]) {
self.path_tiles.clear();
let mut vertices_by_texture_id = HashMap::default();
for path in paths {
let clipped_bounds = path
.bounds
.intersect(&path.content_mask.bounds)
.map_origin(|origin| origin.floor())
.map_size(|size| size.ceil());
let tile = self.atlas.allocate_for_rendering(
clipped_bounds.size.map(Into::into),
AtlasTextureKind::Path,
&mut self.command_encoder,
);
vertices_by_texture_id
.entry(tile.texture_id)
.or_insert(Vec::new())
.extend(path.vertices.iter().map(|vertex| PathVertex {
xy_position: vertex.xy_position - clipped_bounds.origin
+ tile.bounds.origin.map(Into::into),
st_position: vertex.st_position,
content_mask: ContentMask {
bounds: tile.bounds.map(Into::into),
},
}));
self.path_tiles.insert(path.id, tile);
}
for (texture_id, vertices) in vertices_by_texture_id {
let tex_info = self.atlas.get_texture_info(texture_id);
let globals = GlobalParams {
viewport_size: [tex_info.size.width as f32, tex_info.size.height as f32],
premultiplied_alpha: 0,
pad: 0,
};
let vertex_buf = unsafe { self.instance_belt.alloc_typed(&vertices, &self.gpu) };
let frame_view = tex_info.raw_view;
let color_target = if let Some(msaa_view) = tex_info.msaa_view {
gpu::RenderTarget {
view: msaa_view,
init_op: gpu::InitOp::Clear(gpu::TextureColor::OpaqueBlack),
finish_op: gpu::FinishOp::ResolveTo(frame_view),
}
} else {
gpu::RenderTarget {
view: frame_view,
init_op: gpu::InitOp::Clear(gpu::TextureColor::OpaqueBlack),
finish_op: gpu::FinishOp::Store,
}
};
if let mut pass = self.command_encoder.render(
"paths",
gpu::RenderTargetSet {
colors: &[color_target],
depth_stencil: None,
},
) {
let mut encoder = pass.with(&self.pipelines.path_rasterization);
encoder.bind(
0,
&ShaderPathRasterizationData {
globals,
b_path_vertices: vertex_buf,
},
);
encoder.draw(0, vertices.len() as u32, 0, 1);
}
}
}
pub fn destroy(&mut self) {
self.wait_for_gpu();
self.atlas.destroy();
@@ -535,26 +572,17 @@ impl BladeRenderer {
self.gpu.destroy_command_encoder(&mut self.command_encoder);
self.pipelines.destroy(&self.gpu);
self.gpu.destroy_surface(&mut self.surface);
if let Some(texture_msaa) = self.texture_msaa {
self.gpu.destroy_texture(texture_msaa);
}
if let Some(texture_view_msaa) = self.texture_view_msaa {
self.gpu.destroy_texture_view(texture_view_msaa);
}
}
pub fn draw(&mut self, scene: &Scene) {
self.command_encoder.start();
self.atlas.before_frame(&mut self.command_encoder);
self.rasterize_paths(scene.paths());
let frame = {
profiling::scope!("acquire frame");
self.surface.acquire_frame()
};
let frame_view = frame.texture_view();
if let Some(texture_msaa) = self.texture_msaa {
self.command_encoder.init_texture(texture_msaa);
}
self.command_encoder.init_texture(frame.texture());
let globals = GlobalParams {
@@ -569,25 +597,14 @@ impl BladeRenderer {
pad: 0,
};
let target = if let Some(texture_view_msaa) = self.texture_view_msaa {
gpu::RenderTarget {
view: texture_view_msaa,
init_op: gpu::InitOp::Clear(gpu::TextureColor::TransparentBlack),
finish_op: gpu::FinishOp::ResolveTo(frame_view),
}
} else {
gpu::RenderTarget {
view: frame_view,
init_op: gpu::InitOp::Clear(gpu::TextureColor::TransparentBlack),
finish_op: gpu::FinishOp::Store,
}
};
// draw to the target texture
if let mut pass = self.command_encoder.render(
"main",
gpu::RenderTargetSet {
colors: &[target],
colors: &[gpu::RenderTarget {
view: frame.texture_view(),
init_op: gpu::InitOp::Clear(gpu::TextureColor::TransparentBlack),
finish_op: gpu::FinishOp::Store,
}],
depth_stencil: None,
},
) {
@@ -622,55 +639,32 @@ impl BladeRenderer {
}
PrimitiveBatch::Paths(paths) => {
let mut encoder = pass.with(&self.pipelines.paths);
let mut vertices = Vec::new();
let mut sprites = Vec::with_capacity(paths.len());
let mut draw_indirect_commands = Vec::with_capacity(paths.len());
let mut first_vertex = 0;
for (i, path) in paths.iter().enumerate() {
draw_indirect_commands.push(DrawIndirectArgs {
vertex_count: path.vertices.len() as u32,
instance_count: 1,
first_vertex,
first_instance: i as u32,
});
first_vertex += path.vertices.len() as u32;
vertices.extend(path.vertices.iter().map(|v| PathVertex {
xy_position: v.xy_position,
content_mask: ContentMask {
bounds: path.content_mask.bounds,
// todo(linux): group by texture ID
for path in paths {
let tile = &self.path_tiles[&path.id];
let tex_info = self.atlas.get_texture_info(tile.texture_id);
let origin = path.bounds.intersect(&path.content_mask.bounds).origin;
let sprites = [PathSprite {
bounds: Bounds {
origin: origin.map(|p| p.floor()),
size: tile.bounds.size.map(Into::into),
},
}));
sprites.push(PathSprite {
bounds: path.bounds,
color: path.color,
});
}
tile: (*tile).clone(),
}];
let b_path_vertices =
unsafe { self.instance_belt.alloc_typed(&vertices, &self.gpu) };
let instance_buf =
unsafe { self.instance_belt.alloc_typed(&sprites, &self.gpu) };
let indirect_buf = unsafe {
self.instance_belt
.alloc_typed(&draw_indirect_commands, &self.gpu)
};
encoder.bind(
0,
&ShaderPathsData {
globals,
b_path_vertices,
b_path_sprites: instance_buf,
},
);
for i in 0..paths.len() {
encoder.draw_indirect(indirect_buf.buffer.at(indirect_buf.offset
+ (i * mem::size_of::<DrawIndirectArgs>()) as u64));
let instance_buf =
unsafe { self.instance_belt.alloc_typed(&sprites, &self.gpu) };
encoder.bind(
0,
&ShaderPathsData {
globals,
t_sprite: tex_info.raw_view,
s_sprite: self.atlas_sampler,
b_path_sprites: instance_buf,
},
);
encoder.draw(0, 4, 0, sprites.len() as u32);
}
}
PrimitiveBatch::Underlines(underlines) => {
@@ -823,47 +817,9 @@ impl BladeRenderer {
profiling::scope!("finish");
self.instance_belt.flush(&sync_point);
self.atlas.after_frame(&sync_point);
self.atlas.clear_textures(AtlasTextureKind::Path);
self.wait_for_gpu();
self.last_sync_point = Some(sync_point);
}
}
fn create_msaa_texture_if_needed(
gpu: &gpu::Context,
format: gpu::TextureFormat,
width: u32,
height: u32,
sample_count: u32,
) -> Option<(gpu::Texture, gpu::TextureView)> {
if sample_count <= 1 {
return None;
}
let texture_msaa = gpu.create_texture(gpu::TextureDesc {
name: "msaa",
format,
size: gpu::Extent {
width,
height,
depth: 1,
},
array_layer_count: 1,
mip_level_count: 1,
sample_count,
dimension: gpu::TextureDimension::D2,
usage: gpu::TextureUsage::TARGET,
external: None,
});
let texture_view_msaa = gpu.create_texture_view(
texture_msaa,
gpu::TextureViewDesc {
name: "msaa view",
format,
dimension: gpu::ViewDimension::D2,
subresources: &Default::default(),
},
);
Some((texture_msaa, texture_view_msaa))
}


@@ -922,23 +922,59 @@ fn fs_shadow(input: ShadowVarying) -> @location(0) vec4<f32> {
return blend_color(input.color, alpha);
}
// --- paths --- //
// --- path rasterization --- //
struct PathVertex {
xy_position: vec2<f32>,
st_position: vec2<f32>,
content_mask: Bounds,
}
var<storage, read> b_path_vertices: array<PathVertex>;
struct PathRasterizationVarying {
@builtin(position) position: vec4<f32>,
@location(0) st_position: vec2<f32>,
//TODO: use `clip_distance` once Naga supports it
@location(3) clip_distances: vec4<f32>,
}
@vertex
fn vs_path_rasterization(@builtin(vertex_index) vertex_id: u32) -> PathRasterizationVarying {
let v = b_path_vertices[vertex_id];
var out = PathRasterizationVarying();
out.position = to_device_position_impl(v.xy_position);
out.st_position = v.st_position;
out.clip_distances = distance_from_clip_rect_impl(v.xy_position, v.content_mask);
return out;
}
@fragment
fn fs_path_rasterization(input: PathRasterizationVarying) -> @location(0) f32 {
let dx = dpdx(input.st_position);
let dy = dpdy(input.st_position);
if (any(input.clip_distances < vec4<f32>(0.0))) {
return 0.0;
}
let gradient = 2.0 * input.st_position.xx * vec2<f32>(dx.x, dy.x) - vec2<f32>(dx.y, dy.y);
let f = input.st_position.x * input.st_position.x - input.st_position.y;
let distance = f / length(gradient);
return saturate(0.5 - distance);
}
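The fragment shader above is the classic implicit quadratic-curve coverage test (in the style of Loop–Blinn): with texture coordinates chosen so the curve is `v = u²`, the sign of `f = u² − v` separates inside from outside, and dividing by the gradient length approximates a pixel-space distance. A hedged scalar sketch (names ours; the caller supplies the gradient length that `dpdx`/`dpdy` would provide on the GPU):

```rust
/// Implicit function for the canonical quadratic curve v = u^2:
/// negative inside the filled region (v > u^2), positive outside.
fn implicit(u: f32, v: f32) -> f32 {
    u * u - v
}

/// Approximate coverage, mirroring `saturate(0.5 - f / length(gradient))`
/// from the shader; `gradient_len` stands in for screen-space derivatives.
fn coverage(u: f32, v: f32, gradient_len: f32) -> f32 {
    let distance = implicit(u, v) / gradient_len;
    (0.5 - distance).clamp(0.0, 1.0)
}

fn main() {
    assert_eq!(implicit(0.5, 0.25), 0.0);      // exactly on the curve
    assert!(implicit(0.5, 0.5) < 0.0);         // inside (v > u^2): more coverage
    assert!(implicit(0.5, 0.1) > 0.0);         // outside: less coverage
    assert_eq!(coverage(0.5, 0.25, 1.0), 0.5); // on-curve pixel is half covered
}
```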
// --- paths --- //
struct PathSprite {
bounds: Bounds,
color: Background,
tile: AtlasTile,
}
var<storage, read> b_path_vertices: array<PathVertex>;
var<storage, read> b_path_sprites: array<PathSprite>;
struct PathVarying {
@builtin(position) position: vec4<f32>,
@location(0) clip_distances: vec4<f32>,
@location(0) tile_position: vec2<f32>,
@location(1) @interpolate(flat) instance_id: u32,
@location(2) @interpolate(flat) color_solid: vec4<f32>,
@location(3) @interpolate(flat) color0: vec4<f32>,
@@ -947,12 +983,13 @@ struct PathVarying {
@vertex
fn vs_path(@builtin(vertex_index) vertex_id: u32, @builtin(instance_index) instance_id: u32) -> PathVarying {
let v = b_path_vertices[vertex_id];
let unit_vertex = vec2<f32>(f32(vertex_id & 1u), 0.5 * f32(vertex_id & 2u));
let sprite = b_path_sprites[instance_id];
// Don't apply content mask because it was already accounted for when rasterizing the path.
var out = PathVarying();
out.position = to_device_position_impl(v.xy_position);
out.clip_distances = distance_from_clip_rect_impl(v.xy_position, v.content_mask);
out.position = to_device_position(unit_vertex, sprite.bounds);
out.tile_position = to_tile_position(unit_vertex, sprite.tile);
out.instance_id = instance_id;
let gradient = prepare_gradient_color(
@@ -969,15 +1006,13 @@ fn vs_path(@builtin(vertex_index) vertex_id: u32, @builtin(instance_index) insta
@fragment
fn fs_path(input: PathVarying) -> @location(0) vec4<f32> {
if any(input.clip_distances < vec4<f32>(0.0)) {
return vec4<f32>(0.0);
}
let sample = textureSample(t_sprite, s_sprite, input.tile_position).r;
let mask = 1.0 - abs(1.0 - sample % 2.0);
let sprite = b_path_sprites[input.instance_id];
let background = sprite.color;
let color = gradient_color(background, input.position.xy, sprite.bounds,
input.color_solid, input.color0, input.color1);
return blend_color(color, 1.0);
return blend_color(color, mask);
}
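The even-odd mask in `fs_path` above can be checked in isolation. A plain-Rust transcription (function name ours), where `sample` is the accumulated winding value resolved from the R16Float path atlas:

```rust
/// Transcription of the WGSL mask `1.0 - abs(1.0 - sample % 2.0)`:
/// turns an accumulated winding count into even-odd coverage while
/// preserving fractional (antialiased) edge values.
fn winding_mask(sample: f32) -> f32 {
    1.0 - (1.0 - sample % 2.0).abs()
}

fn main() {
    assert_eq!(winding_mask(0.0), 0.0); // outside: even winding
    assert_eq!(winding_mask(1.0), 1.0); // inside: odd winding
    assert_eq!(winding_mask(2.0), 0.0); // doubly covered: even again
    assert_eq!(winding_mask(0.5), 0.5); // antialiased edge passes through
}
```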
// --- underlines --- //


@@ -417,17 +417,6 @@ impl Modifiers {
self.control || self.alt || self.shift || self.platform || self.function
}
/// Returns the XOR of two modifier sets
pub fn xor(&self, other: &Modifiers) -> Modifiers {
Modifiers {
control: self.control ^ other.control,
alt: self.alt ^ other.alt,
shift: self.shift ^ other.shift,
platform: self.platform ^ other.platform,
function: self.function ^ other.function,
}
}
/// Whether the semantically 'secondary' modifier key is pressed.
///
/// On macOS, this is the command key.
@@ -545,11 +534,62 @@ impl Modifiers {
/// Checks if this [`Modifiers`] is a subset of another [`Modifiers`].
pub fn is_subset_of(&self, other: &Modifiers) -> bool {
(other.control || !self.control)
&& (other.alt || !self.alt)
&& (other.shift || !self.shift)
&& (other.platform || !self.platform)
&& (other.function || !self.function)
(*other & *self) == *self
}
}
impl std::ops::BitOr for Modifiers {
type Output = Self;
fn bitor(mut self, other: Self) -> Self::Output {
self |= other;
self
}
}
impl std::ops::BitOrAssign for Modifiers {
fn bitor_assign(&mut self, other: Self) {
self.control |= other.control;
self.alt |= other.alt;
self.shift |= other.shift;
self.platform |= other.platform;
self.function |= other.function;
}
}
impl std::ops::BitXor for Modifiers {
type Output = Self;
fn bitxor(mut self, rhs: Self) -> Self::Output {
self ^= rhs;
self
}
}
impl std::ops::BitXorAssign for Modifiers {
fn bitxor_assign(&mut self, other: Self) {
self.control ^= other.control;
self.alt ^= other.alt;
self.shift ^= other.shift;
self.platform ^= other.platform;
self.function ^= other.function;
}
}
impl std::ops::BitAnd for Modifiers {
type Output = Self;
fn bitand(mut self, rhs: Self) -> Self::Output {
self &= rhs;
self
}
}
impl std::ops::BitAndAssign for Modifiers {
fn bitand_assign(&mut self, other: Self) {
self.control &= other.control;
self.alt &= other.alt;
self.shift &= other.shift;
self.platform &= other.platform;
self.function &= other.function;
}
}
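The rewritten `is_subset_of` above leans on the new `BitAnd` impl: a modifier set is a subset exactly when intersecting it with the superset gives it back. A minimal standalone sketch with our own two-field stand-in for `Modifiers`, showing the equivalence with the old expanded form:

```rust
// Simplified stand-in for `Modifiers` (two fields instead of five).
#[derive(Clone, Copy, PartialEq, Debug)]
struct Mods {
    control: bool,
    shift: bool,
}

impl std::ops::BitAnd for Mods {
    type Output = Self;
    fn bitand(self, rhs: Self) -> Self {
        // Field-wise intersection, as in the real `BitAndAssign` impl.
        Mods { control: self.control & rhs.control, shift: self.shift & rhs.shift }
    }
}

fn is_subset_of(a: Mods, b: Mods) -> bool {
    (b & a) == a
}

fn main() {
    let ctrl = Mods { control: true, shift: false };
    let ctrl_shift = Mods { control: true, shift: true };
    assert!(is_subset_of(ctrl, ctrl_shift));
    assert!(!is_subset_of(ctrl_shift, ctrl));
    // Equivalent to the old per-field check `(b.x || !a.x) && ...`:
    assert_eq!(
        is_subset_of(ctrl, ctrl_shift),
        (ctrl_shift.control || !ctrl.control) && (ctrl_shift.shift || !ctrl.shift)
    );
}
```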


@@ -822,11 +822,41 @@ impl crate::Keystroke {
Keysym::underscore => "_".to_owned(),
Keysym::equal => "=".to_owned(),
Keysym::plus => "+".to_owned(),
Keysym::space => "space".to_owned(),
Keysym::BackSpace => "backspace".to_owned(),
Keysym::Tab => "tab".to_owned(),
Keysym::Delete => "delete".to_owned(),
Keysym::Escape => "escape".to_owned(),
Keysym::Left => "left".to_owned(),
Keysym::Right => "right".to_owned(),
Keysym::Up => "up".to_owned(),
Keysym::Down => "down".to_owned(),
Keysym::Home => "home".to_owned(),
Keysym::End => "end".to_owned(),
_ => {
let name = xkb::keysym_get_name(key_sym).to_lowercase();
if key_sym.is_keypad_key() {
name.replace("kp_", "")
} else if let Some(key) = key_utf8.chars().next()
&& key_utf8.len() == 1
&& key.is_ascii()
{
if key.is_ascii_graphic() {
key_utf8.to_lowercase()
// map ctrl-a to `a`
// ctrl-0..9 may emit control codes like ctrl-[, but
// we don't want to map them to `[`
} else if key_utf32 <= 0x1f
&& !name.chars().next().is_some_and(|c| c.is_ascii_digit())
{
((key_utf32 as u8 + 0x40) as char)
.to_ascii_lowercase()
.to_string()
} else {
name
}
} else if let Some(key_en) = guess_ascii(keycode, modifiers.shift) {
String::from(key_en)
} else {

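The control-code branch above recovers the letter behind a ctrl chord: terminals deliver ctrl-a through ctrl-z as control codes 0x01..0x1A, so adding 0x40 yields the uppercase letter, which is then lowercased. A standalone sketch of just that mapping (function name ours; the real code additionally excludes control codes whose keysym name starts with a digit, which we omit here):

```rust
/// Map an ASCII control code (what ctrl-a..ctrl-z arrive as) back to
/// its letter: e.g. 0x01 + 0x40 = 0x41 = 'A', then lowercase.
fn ctrl_code_to_key(key_utf32: u32) -> Option<String> {
    if (0x01..=0x1f).contains(&key_utf32) {
        Some(((key_utf32 as u8 + 0x40) as char).to_ascii_lowercase().to_string())
    } else {
        None
    }
}

fn main() {
    assert_eq!(ctrl_code_to_key(0x01).as_deref(), Some("a")); // ctrl-a
    assert_eq!(ctrl_code_to_key(0x1a).as_deref(), Some("z")); // ctrl-z
    assert_eq!(ctrl_code_to_key(0x61), None); // plain 'a' is handled elsewhere
}
```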

@@ -13,12 +13,14 @@ use std::borrow::Cow;
pub(crate) struct MetalAtlas(Mutex<MetalAtlasState>);
impl MetalAtlas {
pub(crate) fn new(device: Device) -> Self {
pub(crate) fn new(device: Device, path_sample_count: u32) -> Self {
MetalAtlas(Mutex::new(MetalAtlasState {
device: AssertSend(device),
monochrome_textures: Default::default(),
polychrome_textures: Default::default(),
path_textures: Default::default(),
tiles_by_key: Default::default(),
path_sample_count,
}))
}
@@ -26,7 +28,10 @@ impl MetalAtlas {
self.0.lock().texture(id).metal_texture.clone()
}
#[allow(dead_code)]
pub(crate) fn msaa_texture(&self, id: AtlasTextureId) -> Option<metal::Texture> {
self.0.lock().texture(id).msaa_texture.clone()
}
pub(crate) fn allocate(
&self,
size: Size<DevicePixels>,
@@ -35,12 +40,12 @@ impl MetalAtlas {
self.0.lock().allocate(size, texture_kind)
}
#[allow(dead_code)]
pub(crate) fn clear_textures(&self, texture_kind: AtlasTextureKind) {
let mut lock = self.0.lock();
let textures = match texture_kind {
AtlasTextureKind::Monochrome => &mut lock.monochrome_textures,
AtlasTextureKind::Polychrome => &mut lock.polychrome_textures,
AtlasTextureKind::Path => &mut lock.path_textures,
};
for texture in textures.iter_mut() {
texture.clear();
@@ -52,7 +57,9 @@ struct MetalAtlasState {
device: AssertSend<Device>,
monochrome_textures: AtlasTextureList<MetalAtlasTexture>,
polychrome_textures: AtlasTextureList<MetalAtlasTexture>,
path_textures: AtlasTextureList<MetalAtlasTexture>,
tiles_by_key: FxHashMap<AtlasKey, AtlasTile>,
path_sample_count: u32,
}
impl PlatformAtlas for MetalAtlas {
@@ -87,6 +94,7 @@ impl PlatformAtlas for MetalAtlas {
let textures = match id.kind {
AtlasTextureKind::Monochrome => &mut lock.monochrome_textures,
AtlasTextureKind::Polychrome => &mut lock.polychrome_textures,
AtlasTextureKind::Path => &mut lock.polychrome_textures,
};
let Some(texture_slot) = textures
@@ -120,6 +128,7 @@ impl MetalAtlasState {
let textures = match texture_kind {
AtlasTextureKind::Monochrome => &mut self.monochrome_textures,
AtlasTextureKind::Polychrome => &mut self.polychrome_textures,
AtlasTextureKind::Path => &mut self.path_textures,
};
if let Some(tile) = textures
@@ -164,14 +173,31 @@ impl MetalAtlasState {
pixel_format = metal::MTLPixelFormat::BGRA8Unorm;
usage = metal::MTLTextureUsage::ShaderRead;
}
AtlasTextureKind::Path => {
pixel_format = metal::MTLPixelFormat::R16Float;
usage = metal::MTLTextureUsage::RenderTarget | metal::MTLTextureUsage::ShaderRead;
}
}
texture_descriptor.set_pixel_format(pixel_format);
texture_descriptor.set_usage(usage);
let metal_texture = self.device.new_texture(&texture_descriptor);
// We currently only enable MSAA for path textures.
let msaa_texture = if self.path_sample_count > 1 && kind == AtlasTextureKind::Path {
let mut descriptor = texture_descriptor.clone();
descriptor.set_texture_type(metal::MTLTextureType::D2Multisample);
descriptor.set_storage_mode(metal::MTLStorageMode::Private);
descriptor.set_sample_count(self.path_sample_count as _);
let msaa_texture = self.device.new_texture(&descriptor);
Some(msaa_texture)
} else {
None
};
let texture_list = match kind {
AtlasTextureKind::Monochrome => &mut self.monochrome_textures,
AtlasTextureKind::Polychrome => &mut self.polychrome_textures,
AtlasTextureKind::Path => &mut self.path_textures,
};
let index = texture_list.free_list.pop();
@@ -183,6 +209,7 @@ impl MetalAtlasState {
},
allocator: etagere::BucketedAtlasAllocator::new(size.into()),
metal_texture: AssertSend(metal_texture),
msaa_texture: AssertSend(msaa_texture),
live_atlas_keys: 0,
};
@@ -199,6 +226,7 @@ impl MetalAtlasState {
let textures = match id.kind {
crate::AtlasTextureKind::Monochrome => &self.monochrome_textures,
crate::AtlasTextureKind::Polychrome => &self.polychrome_textures,
crate::AtlasTextureKind::Path => &self.path_textures,
};
textures[id.index as usize].as_ref().unwrap()
}
@@ -208,6 +236,7 @@ struct MetalAtlasTexture {
id: AtlasTextureId,
allocator: BucketedAtlasAllocator,
metal_texture: AssertSend<metal::Texture>,
msaa_texture: AssertSend<Option<metal::Texture>>,
live_atlas_keys: u32,
}


@@ -1,28 +1,27 @@
use super::metal_atlas::MetalAtlas;
use crate::{
AtlasTextureId, Background, Bounds, ContentMask, DevicePixels, MonochromeSprite, PaintSurface,
Path, PathVertex, PolychromeSprite, PrimitiveBatch, Quad, ScaledPixels, Scene, Shadow, Size,
Surface, Underline, point, size,
AtlasTextureId, AtlasTextureKind, AtlasTile, Background, Bounds, ContentMask, DevicePixels,
MonochromeSprite, PaintSurface, Path, PathId, PathVertex, PolychromeSprite, PrimitiveBatch,
Quad, ScaledPixels, Scene, Shadow, Size, Surface, Underline, point, size,
};
use anyhow::Result;
use anyhow::{Context as _, Result};
use block::ConcreteBlock;
use cocoa::{
base::{NO, YES},
foundation::{NSSize, NSUInteger},
quartzcore::AutoresizingMask,
};
use collections::HashMap;
use core_foundation::base::TCFType;
use core_video::{
metal_texture::CVMetalTextureGetTexture, metal_texture_cache::CVMetalTextureCache,
pixel_buffer::kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
};
use foreign_types::{ForeignType, ForeignTypeRef};
use metal::{
CAMetalLayer, CommandQueue, MTLDrawPrimitivesIndirectArguments, MTLPixelFormat,
MTLResourceOptions, NSRange,
};
use metal::{CAMetalLayer, CommandQueue, MTLPixelFormat, MTLResourceOptions, NSRange};
use objc::{self, msg_send, sel, sel_impl};
use parking_lot::Mutex;
use smallvec::SmallVec;
use std::{cell::Cell, ffi::c_void, mem, ptr, sync::Arc};
// Exported to metal
@@ -32,6 +31,9 @@ pub(crate) type PointF = crate::Point<f32>;
const SHADERS_METALLIB: &[u8] = include_bytes!(concat!(env!("OUT_DIR"), "/shaders.metallib"));
#[cfg(feature = "runtime_shaders")]
const SHADERS_SOURCE_FILE: &str = include_str!(concat!(env!("OUT_DIR"), "/stitched_shaders.metal"));
// Use 4x MSAA, all devices support it.
// https://developer.apple.com/documentation/metal/mtldevice/1433355-supportstexturesamplecount
const PATH_SAMPLE_COUNT: u32 = 4;
pub type Context = Arc<Mutex<InstanceBufferPool>>;
pub type Renderer = MetalRenderer;
@@ -96,7 +98,8 @@ pub(crate) struct MetalRenderer {
layer: metal::MetalLayer,
presents_with_transaction: bool,
command_queue: CommandQueue,
path_pipeline_state: metal::RenderPipelineState,
paths_rasterization_pipeline_state: metal::RenderPipelineState,
path_sprites_pipeline_state: metal::RenderPipelineState,
shadows_pipeline_state: metal::RenderPipelineState,
quads_pipeline_state: metal::RenderPipelineState,
underlines_pipeline_state: metal::RenderPipelineState,
@@ -108,8 +111,6 @@ pub(crate) struct MetalRenderer {
instance_buffer_pool: Arc<Mutex<InstanceBufferPool>>,
sprite_atlas: Arc<MetalAtlas>,
core_video_texture_cache: core_video::metal_texture_cache::CVMetalTextureCache,
sample_count: u64,
msaa_texture: Option<metal::Texture>,
}
impl MetalRenderer {
@@ -168,19 +169,22 @@ impl MetalRenderer {
MTLResourceOptions::StorageModeManaged,
);
let sample_count = [4, 2, 1]
.into_iter()
.find(|count| device.supports_texture_sample_count(*count))
.unwrap_or(1);
let path_pipeline_state = build_pipeline_state(
let paths_rasterization_pipeline_state = build_path_rasterization_pipeline_state(
&device,
&library,
"paths",
"path_vertex",
"path_fragment",
"paths_rasterization",
"path_rasterization_vertex",
"path_rasterization_fragment",
MTLPixelFormat::R16Float,
PATH_SAMPLE_COUNT,
);
let path_sprites_pipeline_state = build_pipeline_state(
&device,
&library,
"path_sprites",
"path_sprite_vertex",
"path_sprite_fragment",
MTLPixelFormat::BGRA8Unorm,
sample_count,
);
let shadows_pipeline_state = build_pipeline_state(
&device,
@@ -189,7 +193,6 @@ impl MetalRenderer {
"shadow_vertex",
"shadow_fragment",
MTLPixelFormat::BGRA8Unorm,
sample_count,
);
let quads_pipeline_state = build_pipeline_state(
&device,
@@ -198,7 +201,6 @@ impl MetalRenderer {
"quad_vertex",
"quad_fragment",
MTLPixelFormat::BGRA8Unorm,
sample_count,
);
let underlines_pipeline_state = build_pipeline_state(
&device,
@@ -207,7 +209,6 @@ impl MetalRenderer {
"underline_vertex",
"underline_fragment",
MTLPixelFormat::BGRA8Unorm,
sample_count,
);
let monochrome_sprites_pipeline_state = build_pipeline_state(
&device,
@@ -216,7 +217,6 @@ impl MetalRenderer {
"monochrome_sprite_vertex",
"monochrome_sprite_fragment",
MTLPixelFormat::BGRA8Unorm,
sample_count,
);
let polychrome_sprites_pipeline_state = build_pipeline_state(
&device,
@@ -225,7 +225,6 @@ impl MetalRenderer {
"polychrome_sprite_vertex",
"polychrome_sprite_fragment",
MTLPixelFormat::BGRA8Unorm,
sample_count,
);
let surfaces_pipeline_state = build_pipeline_state(
&device,
@@ -234,21 +233,20 @@ impl MetalRenderer {
"surface_vertex",
"surface_fragment",
MTLPixelFormat::BGRA8Unorm,
sample_count,
);
let command_queue = device.new_command_queue();
let sprite_atlas = Arc::new(MetalAtlas::new(device.clone()));
let sprite_atlas = Arc::new(MetalAtlas::new(device.clone(), PATH_SAMPLE_COUNT));
let core_video_texture_cache =
CVMetalTextureCache::new(None, device.clone(), None).unwrap();
let msaa_texture = create_msaa_texture(&device, &layer, sample_count);
Self {
device,
layer,
presents_with_transaction: false,
command_queue,
path_pipeline_state,
paths_rasterization_pipeline_state,
path_sprites_pipeline_state,
shadows_pipeline_state,
quads_pipeline_state,
underlines_pipeline_state,
@@ -259,8 +257,6 @@ impl MetalRenderer {
instance_buffer_pool,
sprite_atlas,
core_video_texture_cache,
sample_count,
msaa_texture,
}
}
@@ -293,8 +289,6 @@ impl MetalRenderer {
setDrawableSize: size
];
}
self.msaa_texture = create_msaa_texture(&self.device, &self.layer, self.sample_count);
}
pub fn update_transparency(&self, _transparent: bool) {
@@ -381,23 +375,25 @@ impl MetalRenderer {
let command_queue = self.command_queue.clone();
let command_buffer = command_queue.new_command_buffer();
let mut instance_offset = 0;
let path_tiles = self
.rasterize_paths(
scene.paths(),
instance_buffer,
&mut instance_offset,
command_buffer,
)
.with_context(|| format!("rasterizing {} paths", scene.paths().len()))?;
let render_pass_descriptor = metal::RenderPassDescriptor::new();
let color_attachment = render_pass_descriptor
.color_attachments()
.object_at(0)
.unwrap();
if let Some(msaa_texture_ref) = self.msaa_texture.as_deref() {
color_attachment.set_texture(Some(msaa_texture_ref));
color_attachment.set_load_action(metal::MTLLoadAction::Clear);
color_attachment.set_store_action(metal::MTLStoreAction::MultisampleResolve);
color_attachment.set_resolve_texture(Some(drawable.texture()));
} else {
color_attachment.set_load_action(metal::MTLLoadAction::Clear);
color_attachment.set_texture(Some(drawable.texture()));
color_attachment.set_store_action(metal::MTLStoreAction::Store);
}
color_attachment.set_texture(Some(drawable.texture()));
color_attachment.set_load_action(metal::MTLLoadAction::Clear);
color_attachment.set_store_action(metal::MTLStoreAction::Store);
let alpha = if self.layer.is_opaque() { 1. } else { 0. };
color_attachment.set_clear_color(metal::MTLClearColor::new(0., 0., 0., alpha));
let command_encoder = command_buffer.new_render_command_encoder(render_pass_descriptor);
@@ -429,6 +425,7 @@ impl MetalRenderer {
),
PrimitiveBatch::Paths(paths) => self.draw_paths(
paths,
&path_tiles,
instance_buffer,
&mut instance_offset,
viewport_size,
@@ -496,6 +493,106 @@ impl MetalRenderer {
Ok(command_buffer.to_owned())
}
fn rasterize_paths(
&self,
paths: &[Path<ScaledPixels>],
instance_buffer: &mut InstanceBuffer,
instance_offset: &mut usize,
command_buffer: &metal::CommandBufferRef,
) -> Option<HashMap<PathId, AtlasTile>> {
self.sprite_atlas.clear_textures(AtlasTextureKind::Path);
let mut tiles = HashMap::default();
let mut vertices_by_texture_id = HashMap::default();
for path in paths {
let clipped_bounds = path.bounds.intersect(&path.content_mask.bounds);
let tile = self
.sprite_atlas
.allocate(clipped_bounds.size.map(Into::into), AtlasTextureKind::Path)?;
vertices_by_texture_id
.entry(tile.texture_id)
.or_insert(Vec::new())
.extend(path.vertices.iter().map(|vertex| PathVertex {
xy_position: vertex.xy_position - clipped_bounds.origin
+ tile.bounds.origin.map(Into::into),
st_position: vertex.st_position,
content_mask: ContentMask {
bounds: tile.bounds.map(Into::into),
},
}));
tiles.insert(path.id, tile);
}
for (texture_id, vertices) in vertices_by_texture_id {
align_offset(instance_offset);
let vertices_bytes_len = mem::size_of_val(vertices.as_slice());
let next_offset = *instance_offset + vertices_bytes_len;
if next_offset > instance_buffer.size {
return None;
}
let render_pass_descriptor = metal::RenderPassDescriptor::new();
let color_attachment = render_pass_descriptor
.color_attachments()
.object_at(0)
.unwrap();
let texture = self.sprite_atlas.metal_texture(texture_id);
let msaa_texture = self.sprite_atlas.msaa_texture(texture_id);
if let Some(msaa_texture) = msaa_texture {
color_attachment.set_texture(Some(&msaa_texture));
color_attachment.set_resolve_texture(Some(&texture));
color_attachment.set_load_action(metal::MTLLoadAction::Clear);
color_attachment.set_store_action(metal::MTLStoreAction::MultisampleResolve);
} else {
color_attachment.set_texture(Some(&texture));
color_attachment.set_load_action(metal::MTLLoadAction::Clear);
color_attachment.set_store_action(metal::MTLStoreAction::Store);
}
color_attachment.set_clear_color(metal::MTLClearColor::new(0., 0., 0., 1.));
let command_encoder = command_buffer.new_render_command_encoder(render_pass_descriptor);
command_encoder.set_render_pipeline_state(&self.paths_rasterization_pipeline_state);
command_encoder.set_vertex_buffer(
PathRasterizationInputIndex::Vertices as u64,
Some(&instance_buffer.metal_buffer),
*instance_offset as u64,
);
let texture_size = Size {
width: DevicePixels::from(texture.width()),
height: DevicePixels::from(texture.height()),
};
command_encoder.set_vertex_bytes(
PathRasterizationInputIndex::AtlasTextureSize as u64,
mem::size_of_val(&texture_size) as u64,
&texture_size as *const Size<DevicePixels> as *const _,
);
let buffer_contents = unsafe {
(instance_buffer.metal_buffer.contents() as *mut u8).add(*instance_offset)
};
unsafe {
ptr::copy_nonoverlapping(
vertices.as_ptr() as *const u8,
buffer_contents,
vertices_bytes_len,
);
}
command_encoder.draw_primitives(
metal::MTLPrimitiveType::Triangle,
0,
vertices.len() as u64,
);
command_encoder.end_encoding();
*instance_offset = next_offset;
}
Some(tiles)
}
fn draw_shadows(
&self,
shadows: &[Shadow],
@@ -621,6 +718,7 @@ impl MetalRenderer {
fn draw_paths(
&self,
paths: &[Path<ScaledPixels>],
tiles_by_path_id: &HashMap<PathId, AtlasTile>,
instance_buffer: &mut InstanceBuffer,
instance_offset: &mut usize,
viewport_size: Size<DevicePixels>,
@@ -630,108 +728,100 @@ impl MetalRenderer {
return true;
}
command_encoder.set_render_pipeline_state(&self.path_pipeline_state);
command_encoder.set_render_pipeline_state(&self.path_sprites_pipeline_state);
command_encoder.set_vertex_buffer(
SpriteInputIndex::Vertices as u64,
Some(&self.unit_vertices),
0,
);
command_encoder.set_vertex_bytes(
SpriteInputIndex::ViewportSize as u64,
mem::size_of_val(&viewport_size) as u64,
&viewport_size as *const Size<DevicePixels> as *const _,
);
unsafe {
let base_addr = instance_buffer.metal_buffer.contents();
let mut p = (base_addr as *mut u8).add(*instance_offset);
let mut draw_indirect_commands = Vec::with_capacity(paths.len());
let mut prev_texture_id = None;
let mut sprites = SmallVec::<[_; 1]>::new();
let mut paths_and_tiles = paths
.iter()
.map(|path| (path, tiles_by_path_id.get(&path.id).unwrap()))
.peekable();
// copy vertices
let vertices_offset = (p as usize) - (base_addr as usize);
let mut first_vertex = 0;
for (i, path) in paths.iter().enumerate() {
if (p as usize) - (base_addr as usize)
+ (mem::size_of::<PathVertex<ScaledPixels>>() * path.vertices.len())
> instance_buffer.size
{
loop {
if let Some((path, tile)) = paths_and_tiles.peek() {
if prev_texture_id.map_or(true, |texture_id| texture_id == tile.texture_id) {
prev_texture_id = Some(tile.texture_id);
let origin = path.bounds.intersect(&path.content_mask.bounds).origin;
sprites.push(PathSprite {
bounds: Bounds {
origin: origin.map(|p| p.floor()),
size: tile.bounds.size.map(Into::into),
},
color: path.color,
tile: (*tile).clone(),
});
paths_and_tiles.next();
continue;
}
}
if sprites.is_empty() {
break;
} else {
align_offset(instance_offset);
let texture_id = prev_texture_id.take().unwrap();
let texture: metal::Texture = self.sprite_atlas.metal_texture(texture_id);
let texture_size = size(
DevicePixels(texture.width() as i32),
DevicePixels(texture.height() as i32),
);
command_encoder.set_vertex_buffer(
SpriteInputIndex::Sprites as u64,
Some(&instance_buffer.metal_buffer),
*instance_offset as u64,
);
command_encoder.set_vertex_bytes(
SpriteInputIndex::AtlasTextureSize as u64,
mem::size_of_val(&texture_size) as u64,
&texture_size as *const Size<DevicePixels> as *const _,
);
command_encoder.set_fragment_buffer(
SpriteInputIndex::Sprites as u64,
Some(&instance_buffer.metal_buffer),
*instance_offset as u64,
);
command_encoder
.set_fragment_texture(SpriteInputIndex::AtlasTexture as u64, Some(&texture));
let sprite_bytes_len = mem::size_of_val(sprites.as_slice());
let next_offset = *instance_offset + sprite_bytes_len;
if next_offset > instance_buffer.size {
return false;
}
for v in &path.vertices {
*(p as *mut PathVertex<ScaledPixels>) = PathVertex {
xy_position: v.xy_position,
content_mask: ContentMask {
bounds: path.content_mask.bounds,
},
};
p = p.add(mem::size_of::<PathVertex<ScaledPixels>>());
}
draw_indirect_commands.push(MTLDrawPrimitivesIndirectArguments {
vertexCount: path.vertices.len() as u32,
instanceCount: 1,
vertexStart: first_vertex,
baseInstance: i as u32,
});
first_vertex += path.vertices.len() as u32;
}
// copy sprites
let sprites_offset = (p as u64) - (base_addr as u64);
if (p as usize) - (base_addr as usize) + (mem::size_of::<PathSprite>() * paths.len())
> instance_buffer.size
{
return false;
}
for path in paths {
*(p as *mut PathSprite) = PathSprite {
bounds: path.bounds,
color: path.color,
};
p = p.add(mem::size_of::<PathSprite>());
}
// copy indirect commands
let icb_bytes_len = mem::size_of_val(draw_indirect_commands.as_slice());
let icb_offset = (p as u64) - (base_addr as u64);
if (p as usize) - (base_addr as usize) + icb_bytes_len > instance_buffer.size {
return false;
}
ptr::copy_nonoverlapping(
draw_indirect_commands.as_ptr() as *const u8,
p,
icb_bytes_len,
);
p = p.add(icb_bytes_len);
// draw path
command_encoder.set_vertex_buffer(
PathInputIndex::Vertices as u64,
Some(&instance_buffer.metal_buffer),
vertices_offset as u64,
);
command_encoder.set_vertex_bytes(
PathInputIndex::ViewportSize as u64,
mem::size_of_val(&viewport_size) as u64,
&viewport_size as *const Size<DevicePixels> as *const _,
);
command_encoder.set_vertex_buffer(
PathInputIndex::Sprites as u64,
Some(&instance_buffer.metal_buffer),
sprites_offset,
);
command_encoder.set_fragment_buffer(
PathInputIndex::Sprites as u64,
Some(&instance_buffer.metal_buffer),
sprites_offset,
);
for i in 0..paths.len() {
command_encoder.draw_primitives_indirect(
metal::MTLPrimitiveType::Triangle,
&instance_buffer.metal_buffer,
icb_offset
+ (i * std::mem::size_of::<MTLDrawPrimitivesIndirectArguments>()) as u64,
);
}
*instance_offset = (p as usize) - (base_addr as usize);
}
true
}
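The routine above appends three sections into one shared instance buffer — path vertices, then `PathSprite`s, then `MTLDrawPrimitivesIndirectArguments` — recording each section's byte offset so it can later be bound with `set_vertex_buffer`. A simplified, hypothetical sketch of that layout arithmetic (the names `align256` and `pack_sections` are ours, not Zed's):

```rust
// Round up to a multiple of 256, mimicking the alignment Metal wants
// for buffer offsets bound on a command encoder.
fn align256(offset: usize) -> usize {
    offset.div_ceil(256) * 256
}

/// Lay out vertices, sprites, and indirect-draw arguments back to back,
/// starting from `start`. Returns (vertices, sprites, icb, end) byte
/// offsets, or None when the sections would overflow `buffer_size`.
fn pack_sections(
    start: usize,
    vertex_bytes: usize,
    sprite_bytes: usize,
    icb_bytes: usize,
    buffer_size: usize,
) -> Option<(usize, usize, usize, usize)> {
    let vertices = align256(start);
    let sprites = vertices + vertex_bytes;
    let icb = sprites + sprite_bytes;
    let end = icb + icb_bytes;
    (end <= buffer_size).then_some((vertices, sprites, icb, end))
}
```

The real code additionally bails out (`return false`) before each copy when the remaining space is insufficient, which is what the `Option` stands in for here.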
@ -1053,7 +1143,6 @@ fn build_pipeline_state(
vertex_fn_name: &str,
fragment_fn_name: &str,
pixel_format: metal::MTLPixelFormat,
sample_count: u64,
) -> metal::RenderPipelineState {
let vertex_fn = library
.get_function(vertex_fn_name, None)
@ -1066,7 +1155,6 @@ fn build_pipeline_state(
descriptor.set_label(label);
descriptor.set_vertex_function(Some(vertex_fn.as_ref()));
descriptor.set_fragment_function(Some(fragment_fn.as_ref()));
descriptor.set_sample_count(sample_count);
let color_attachment = descriptor.color_attachments().object_at(0).unwrap();
color_attachment.set_pixel_format(pixel_format);
color_attachment.set_blending_enabled(true);
@ -1082,45 +1170,50 @@ fn build_pipeline_state(
.expect("could not create render pipeline state")
}
fn build_path_rasterization_pipeline_state(
device: &metal::DeviceRef,
library: &metal::LibraryRef,
label: &str,
vertex_fn_name: &str,
fragment_fn_name: &str,
pixel_format: metal::MTLPixelFormat,
path_sample_count: u32,
) -> metal::RenderPipelineState {
let vertex_fn = library
.get_function(vertex_fn_name, None)
.expect("error locating vertex function");
let fragment_fn = library
.get_function(fragment_fn_name, None)
.expect("error locating fragment function");
let descriptor = metal::RenderPipelineDescriptor::new();
descriptor.set_label(label);
descriptor.set_vertex_function(Some(vertex_fn.as_ref()));
descriptor.set_fragment_function(Some(fragment_fn.as_ref()));
if path_sample_count > 1 {
descriptor.set_raster_sample_count(path_sample_count as _);
descriptor.set_alpha_to_coverage_enabled(true);
}
let color_attachment = descriptor.color_attachments().object_at(0).unwrap();
color_attachment.set_pixel_format(pixel_format);
color_attachment.set_blending_enabled(true);
color_attachment.set_rgb_blend_operation(metal::MTLBlendOperation::Add);
color_attachment.set_alpha_blend_operation(metal::MTLBlendOperation::Add);
color_attachment.set_source_rgb_blend_factor(metal::MTLBlendFactor::One);
color_attachment.set_source_alpha_blend_factor(metal::MTLBlendFactor::One);
color_attachment.set_destination_rgb_blend_factor(metal::MTLBlendFactor::One);
color_attachment.set_destination_alpha_blend_factor(metal::MTLBlendFactor::One);
device
.new_render_pipeline_state(&descriptor)
.expect("could not create render pipeline state")
}
// Align to multiples of 256 to make Metal happy.
fn align_offset(offset: &mut usize) {
*offset = (*offset).div_ceil(256) * 256;
}
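`align_offset` rounds a byte offset up to the next multiple of 256, since Metal requires buffer-offset bindings to be 256-byte aligned on macOS. A standalone copy showing the rounding behavior:

```rust
// Same rounding as the `align_offset` helper above: round up to the
// next multiple of 256 (offsets already aligned are left unchanged).
fn align_offset(offset: &mut usize) {
    *offset = (*offset).div_ceil(256) * 256;
}
```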
fn create_msaa_texture(
device: &metal::Device,
layer: &metal::MetalLayer,
sample_count: u64,
) -> Option<metal::Texture> {
let viewport_size = layer.drawable_size();
let width = viewport_size.width.ceil() as u64;
let height = viewport_size.height.ceil() as u64;
if width == 0 || height == 0 {
return None;
}
if sample_count <= 1 {
return None;
}
let texture_descriptor = metal::TextureDescriptor::new();
texture_descriptor.set_texture_type(metal::MTLTextureType::D2Multisample);
// MTLStorageMode defaults to `shared` only on Apple silicon GPUs. Use `private` for both Apple silicon and Intel GPUs.
// Reference: https://developer.apple.com/documentation/metal/choosing-a-resource-storage-mode-for-apple-gpus
texture_descriptor.set_storage_mode(metal::MTLStorageMode::Private);
texture_descriptor.set_width(width);
texture_descriptor.set_height(height);
texture_descriptor.set_pixel_format(layer.pixel_format());
texture_descriptor.set_usage(metal::MTLTextureUsage::RenderTarget);
texture_descriptor.set_sample_count(sample_count);
let metal_texture = device.new_texture(&texture_descriptor);
Some(metal_texture)
}
#[repr(C)]
enum ShadowInputIndex {
Vertices = 0,
@ -1162,10 +1255,9 @@ enum SurfaceInputIndex {
}
#[repr(C)]
enum PathInputIndex {
enum PathRasterizationInputIndex {
Vertices = 0,
ViewportSize = 1,
Sprites = 2,
AtlasTextureSize = 1,
}
#[derive(Clone, Debug, Eq, PartialEq)]
@ -1173,6 +1265,7 @@ enum PathInputIndex {
pub struct PathSprite {
pub bounds: Bounds<ScaledPixels>,
pub color: Background,
pub tile: AtlasTile,
}
#[derive(Clone, Debug, Eq, PartialEq)]

View file

@ -698,27 +698,76 @@ fragment float4 polychrome_sprite_fragment(
return color;
}
struct PathVertexOutput {
struct PathRasterizationVertexOutput {
float4 position [[position]];
float2 st_position;
float clip_rect_distance [[clip_distance]][4];
};
struct PathRasterizationFragmentInput {
float4 position [[position]];
float2 st_position;
};
vertex PathRasterizationVertexOutput path_rasterization_vertex(
uint vertex_id [[vertex_id]],
constant PathVertex_ScaledPixels *vertices
[[buffer(PathRasterizationInputIndex_Vertices)]],
constant Size_DevicePixels *atlas_size
[[buffer(PathRasterizationInputIndex_AtlasTextureSize)]]) {
PathVertex_ScaledPixels v = vertices[vertex_id];
float2 vertex_position = float2(v.xy_position.x, v.xy_position.y);
float2 viewport_size = float2(atlas_size->width, atlas_size->height);
return PathRasterizationVertexOutput{
float4(vertex_position / viewport_size * float2(2., -2.) +
float2(-1., 1.),
0., 1.),
float2(v.st_position.x, v.st_position.y),
{v.xy_position.x - v.content_mask.bounds.origin.x,
v.content_mask.bounds.origin.x + v.content_mask.bounds.size.width -
v.xy_position.x,
v.xy_position.y - v.content_mask.bounds.origin.y,
v.content_mask.bounds.origin.y + v.content_mask.bounds.size.height -
v.xy_position.y}};
}
fragment float4 path_rasterization_fragment(PathRasterizationFragmentInput input
[[stage_in]]) {
float2 dx = dfdx(input.st_position);
float2 dy = dfdy(input.st_position);
float2 gradient = float2((2. * input.st_position.x) * dx.x - dx.y,
(2. * input.st_position.x) * dy.x - dy.y);
float f = (input.st_position.x * input.st_position.x) - input.st_position.y;
float distance = f / length(gradient);
float alpha = saturate(0.5 - distance);
return float4(alpha, 0., 0., 1.);
}
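The fragment shader above is the Loop–Blinn style coverage computation: a fragment with interpolated texture coordinates `(s, t)` lies on the quadratic curve when `s² − t = 0`, and dividing that implicit value by the length of its screen-space gradient approximates a signed distance, mapped to alpha via `saturate(0.5 − distance)`. A CPU-side sketch of the same math (the derivatives `dx`/`dy` are passed in explicitly, since there is no hardware `dfdx`/`dfdy` here):

```rust
// Approximate antialiased coverage of a quadratic curve at a fragment,
// mirroring `path_rasterization_fragment`: f = s^2 - t, distance =
// f / |grad f|, alpha = clamp(0.5 - distance, 0, 1).
fn curve_coverage(st: (f32, f32), dx: (f32, f32), dy: (f32, f32)) -> f32 {
    let (s, t) = st;
    let gradient = (2.0 * s * dx.0 - dx.1, 2.0 * s * dy.0 - dy.1);
    let f = s * s - t;
    let distance = f / (gradient.0 * gradient.0 + gradient.1 * gradient.1).sqrt();
    (0.5 - distance).clamp(0.0, 1.0)
}
```

A point exactly on the curve gets coverage 0.5; points well inside (`t > s²`) saturate to 1.0 and points well outside to 0.0.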
struct PathSpriteVertexOutput {
float4 position [[position]];
float2 tile_position;
uint sprite_id [[flat]];
float4 solid_color [[flat]];
float4 color0 [[flat]];
float4 color1 [[flat]];
float4 clip_distance;
};
vertex PathVertexOutput path_vertex(
uint vertex_id [[vertex_id]],
constant PathVertex_ScaledPixels *vertices [[buffer(PathInputIndex_Vertices)]],
uint sprite_id [[instance_id]],
constant PathSprite *sprites [[buffer(PathInputIndex_Sprites)]],
constant Size_DevicePixels *input_viewport_size [[buffer(PathInputIndex_ViewportSize)]]) {
PathVertex_ScaledPixels v = vertices[vertex_id];
float2 vertex_position = float2(v.xy_position.x, v.xy_position.y);
float2 viewport_size = float2((float)input_viewport_size->width,
(float)input_viewport_size->height);
vertex PathSpriteVertexOutput path_sprite_vertex(
uint unit_vertex_id [[vertex_id]], uint sprite_id [[instance_id]],
constant float2 *unit_vertices [[buffer(SpriteInputIndex_Vertices)]],
constant PathSprite *sprites [[buffer(SpriteInputIndex_Sprites)]],
constant Size_DevicePixels *viewport_size
[[buffer(SpriteInputIndex_ViewportSize)]],
constant Size_DevicePixels *atlas_size
[[buffer(SpriteInputIndex_AtlasTextureSize)]]) {
float2 unit_vertex = unit_vertices[unit_vertex_id];
PathSprite sprite = sprites[sprite_id];
float4 device_position = float4(vertex_position / viewport_size * float2(2., -2.) + float2(-1., 1.), 0., 1.);
// Don't apply content mask because it was already accounted for when
// rasterizing the path.
float4 device_position =
to_device_position(unit_vertex, sprite.bounds, viewport_size);
float2 tile_position = to_tile_position(unit_vertex, sprite.tile, atlas_size);
GradientColor gradient = prepare_fill_color(
sprite.color.tag,
@ -728,32 +777,30 @@ vertex PathVertexOutput path_vertex(
sprite.color.colors[1].color
);
return PathVertexOutput{
return PathSpriteVertexOutput{
device_position,
tile_position,
sprite_id,
gradient.solid,
gradient.color0,
gradient.color1,
{v.xy_position.x - v.content_mask.bounds.origin.x,
v.content_mask.bounds.origin.x + v.content_mask.bounds.size.width -
v.xy_position.x,
v.xy_position.y - v.content_mask.bounds.origin.y,
v.content_mask.bounds.origin.y + v.content_mask.bounds.size.height -
v.xy_position.y}
gradient.color1
};
}
fragment float4 path_fragment(
PathVertexOutput input [[stage_in]],
constant PathSprite *sprites [[buffer(PathInputIndex_Sprites)]]) {
if (any(input.clip_distance < float4(0.0))) {
return float4(0.0);
}
fragment float4 path_sprite_fragment(
PathSpriteVertexOutput input [[stage_in]],
constant PathSprite *sprites [[buffer(SpriteInputIndex_Sprites)]],
texture2d<float> atlas_texture [[texture(SpriteInputIndex_AtlasTexture)]]) {
constexpr sampler atlas_texture_sampler(mag_filter::linear,
min_filter::linear);
float4 sample =
atlas_texture.sample(atlas_texture_sampler, input.tile_position);
float mask = 1. - abs(1. - fmod(sample.r, 2.));
PathSprite sprite = sprites[input.sprite_id];
Background background = sprite.color;
float4 color = fill_color(background, input.position.xy, sprite.bounds,
input.solid_color, input.color0, input.color1);
color.a *= mask;
return color;
}
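The expression `1. - abs(1. - fmod(sample.r, 2.))` in the sprite fragment converts the coverage additively accumulated in the atlas's red channel into a final mask: our reading is that overlapping path triangles cancel in even–odd fashion, so a region crossed twice ends up transparent. A small sketch of that mapping (this is an interpretation of the shader line, not code from the PR):

```rust
// Fold an additively accumulated winding/coverage value `r` into an
// even-odd style mask, matching `1. - abs(1. - fmod(sample.r, 2.))`.
fn even_odd_mask(r: f32) -> f32 {
    1.0 - (1.0 - (r % 2.0)).abs()
}
```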

View file

@ -341,7 +341,7 @@ impl PlatformAtlas for TestAtlas {
crate::AtlasTile {
texture_id: AtlasTextureId {
index: texture_id,
kind: crate::AtlasTextureKind::Polychrome,
kind: crate::AtlasTextureKind::Path,
},
tile_id: TileId(tile_id),
padding: 0,

View file

@ -6,7 +6,7 @@ use serde::{Deserialize, Serialize};
use crate::{
AtlasTextureId, AtlasTile, Background, Bounds, ContentMask, Corners, Edges, Hsla, Pixels,
Point, Radians, ScaledPixels, Size, bounds_tree::BoundsTree,
Point, Radians, ScaledPixels, Size, bounds_tree::BoundsTree, point,
};
use std::{fmt::Debug, iter::Peekable, ops::Range, slice};
@ -43,7 +43,13 @@ impl Scene {
self.surfaces.clear();
}
#[allow(dead_code)]
#[cfg_attr(
all(
any(target_os = "linux", target_os = "freebsd"),
not(any(feature = "x11", feature = "wayland"))
),
allow(dead_code)
)]
pub fn paths(&self) -> &[Path<ScaledPixels>] {
&self.paths
}
@ -683,7 +689,6 @@ pub struct Path<P: Clone + Debug + Default + PartialEq> {
start: Point<P>,
current: Point<P>,
contour_count: usize,
base_scale: f32,
}
impl Path<Pixels> {
@ -702,35 +707,25 @@ impl Path<Pixels> {
content_mask: Default::default(),
color: Default::default(),
contour_count: 0,
base_scale: 1.0,
}
}
/// Set the base scale of the path.
pub fn scale(mut self, factor: f32) -> Self {
self.base_scale = factor;
self
}
/// Apply a scale to the path.
pub(crate) fn apply_scale(&self, factor: f32) -> Path<ScaledPixels> {
/// Scale this path by the given factor.
pub fn scale(&self, factor: f32) -> Path<ScaledPixels> {
Path {
id: self.id,
order: self.order,
bounds: self.bounds.scale(self.base_scale * factor),
content_mask: self.content_mask.scale(self.base_scale * factor),
bounds: self.bounds.scale(factor),
content_mask: self.content_mask.scale(factor),
vertices: self
.vertices
.iter()
.map(|vertex| vertex.scale(self.base_scale * factor))
.map(|vertex| vertex.scale(factor))
.collect(),
start: self
.start
.map(|start| start.scale(self.base_scale * factor)),
current: self.current.scale(self.base_scale * factor),
start: self.start.map(|start| start.scale(factor)),
current: self.current.scale(factor),
contour_count: self.contour_count,
color: self.color,
base_scale: 1.0,
}
}
@ -745,7 +740,10 @@ impl Path<Pixels> {
pub fn line_to(&mut self, to: Point<Pixels>) {
self.contour_count += 1;
if self.contour_count > 1 {
self.push_triangle((self.start, self.current, to));
self.push_triangle(
(self.start, self.current, to),
(point(0., 1.), point(0., 1.), point(0., 1.)),
);
}
self.current = to;
}
@ -754,15 +752,25 @@ impl Path<Pixels> {
pub fn curve_to(&mut self, to: Point<Pixels>, ctrl: Point<Pixels>) {
self.contour_count += 1;
if self.contour_count > 1 {
self.push_triangle((self.start, self.current, to));
self.push_triangle(
(self.start, self.current, to),
(point(0., 1.), point(0., 1.), point(0., 1.)),
);
}
self.push_triangle((self.current, ctrl, to));
self.push_triangle(
(self.current, ctrl, to),
(point(0., 0.), point(0.5, 0.), point(1., 1.)),
);
self.current = to;
}
/// Push a triangle to the Path.
pub fn push_triangle(&mut self, xy: (Point<Pixels>, Point<Pixels>, Point<Pixels>)) {
pub fn push_triangle(
&mut self,
xy: (Point<Pixels>, Point<Pixels>, Point<Pixels>),
st: (Point<f32>, Point<f32>, Point<f32>),
) {
self.bounds = self
.bounds
.union(&Bounds {
@ -780,14 +788,17 @@ impl Path<Pixels> {
self.vertices.push(PathVertex {
xy_position: xy.0,
st_position: st.0,
content_mask: Default::default(),
});
self.vertices.push(PathVertex {
xy_position: xy.1,
st_position: st.1,
content_mask: Default::default(),
});
self.vertices.push(PathVertex {
xy_position: xy.2,
st_position: st.2,
content_mask: Default::default(),
});
}
@ -803,6 +814,7 @@ impl From<Path<ScaledPixels>> for Primitive {
#[repr(C)]
pub(crate) struct PathVertex<P: Clone + Debug + Default + PartialEq> {
pub(crate) xy_position: Point<P>,
pub(crate) st_position: Point<f32>,
pub(crate) content_mask: ContentMask<P>,
}
@ -810,6 +822,7 @@ impl PathVertex<Pixels> {
pub fn scale(&self, factor: f32) -> PathVertex<ScaledPixels> {
PathVertex {
xy_position: self.xy_position.scale(factor),
st_position: self.st_position,
content_mask: self.content_mask.scale(factor),
}
}

View file

@ -2424,6 +2424,53 @@ impl Window {
result
}
/// Use a piece of state that exists as long as this element is being rendered in consecutive frames.
pub fn use_keyed_state<S: 'static>(
&mut self,
key: impl Into<ElementId>,
cx: &mut App,
init: impl FnOnce(&mut Self, &mut App) -> S,
) -> Entity<S> {
let current_view = self.current_view();
self.with_global_id(key.into(), |global_id, window| {
window.with_element_state(global_id, |state: Option<Entity<S>>, window| {
if let Some(state) = state {
(state.clone(), state)
} else {
let new_state = cx.new(|cx| init(window, cx));
cx.observe(&new_state, move |_, cx| {
cx.notify(current_view);
})
.detach();
(new_state.clone(), new_state)
}
})
})
}
/// Immediately push an element ID onto the stack. Useful for simplifying IDs in lists
pub fn with_id<R>(&mut self, id: impl Into<ElementId>, f: impl FnOnce(&mut Self) -> R) -> R {
self.with_global_id(id.into(), |_, window| f(window))
}
/// Use a piece of state that exists as long as this element is being rendered in consecutive frames, without needing to specify a key
///
/// NOTE: This method uses the location of the caller to generate an ID for this state.
/// If this is not sufficient to identify your state (e.g. you're rendering a list item),
/// you can provide a custom `ElementId` using the `use_keyed_state` method.
#[track_caller]
pub fn use_state<S: 'static>(
&mut self,
cx: &mut App,
init: impl FnOnce(&mut Self, &mut App) -> S,
) -> Entity<S> {
self.use_keyed_state(
ElementId::CodeLocation(*core::panic::Location::caller()),
cx,
init,
)
}
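`use_state` keys element state by the caller's source location: `#[track_caller]` makes `Location::caller()` report the call site of `use_state` itself, so each distinct call site in the codebase gets its own state slot. A self-contained sketch of the same trick with a plain map (hypothetical `Store` type, not GPUI's):

```rust
use std::collections::HashMap;
use std::panic::Location;

// Keys entries by the source location of the `get` call site, the same
// mechanism `Window::use_state` uses via `ElementId::CodeLocation`.
struct Store {
    map: HashMap<Location<'static>, i32>,
}

impl Store {
    #[track_caller]
    fn get(&mut self, init: i32) -> i32 {
        // `Location::caller()` resolves to the *caller's* file/line/column
        // because of `#[track_caller]`.
        *self.map.entry(*Location::caller()).or_insert(init)
    }
}
```

Calling `get` twice from the same line (e.g. across loop iterations, standing in for consecutive frames) reuses one slot, while a second call site gets a fresh one.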
/// Updates or initializes state for an element with the given id that lives across multiple
/// frames. If an element with this ID existed in the rendered frame, its state will be passed
/// to the given closure. The state returned by the closure will be stored so it can be referenced
@ -2658,7 +2705,7 @@ impl Window {
path.color = color.opacity(opacity);
self.next_frame
.scene
.insert_primitive(path.apply_scale(scale_factor));
.insert_primitive(path.scale(scale_factor));
}
/// Paint an underline into the scene for the next frame at the current z-index.
@ -4577,6 +4624,8 @@ pub enum ElementId {
NamedInteger(SharedString, u64),
/// A path.
Path(Arc<std::path::Path>),
/// A code location.
CodeLocation(core::panic::Location<'static>),
}
impl ElementId {
@ -4596,6 +4645,7 @@ impl Display for ElementId {
ElementId::NamedInteger(s, i) => write!(f, "{}-{}", s, i)?,
ElementId::Uuid(uuid) => write!(f, "{}", uuid)?,
ElementId::Path(path) => write!(f, "{}", path.display())?,
ElementId::CodeLocation(location) => write!(f, "{}", location)?,
}
Ok(())

View file

@ -53,6 +53,16 @@ pub fn derive_app_context(input: TokenStream) -> TokenStream {
self.#app_variable.update_entity(handle, update)
}
fn as_mut<'y, 'z, T>(
&'y mut self,
handle: &'z gpui::Entity<T>,
) -> Self::Result<gpui::GpuiBorrow<'y, T>>
where
T: 'static,
{
self.#app_variable.as_mut(handle)
}
fn read_entity<T, R>(
&self,
handle: &gpui::Entity<T>,

View file

@ -4,6 +4,7 @@ pub mod github;
pub use anyhow::{Result, anyhow};
pub use async_body::{AsyncBody, Inner};
use derive_more::Deref;
use http::HeaderValue;
pub use http::{self, Method, Request, Response, StatusCode, Uri};
use futures::future::BoxFuture;
@ -39,6 +40,8 @@ impl HttpRequestExt for http::request::Builder {
pub trait HttpClient: 'static + Send + Sync {
fn type_name(&self) -> &'static str;
fn user_agent(&self) -> Option<&HeaderValue>;
fn send(
&self,
req: http::Request<AsyncBody>,
@ -118,6 +121,10 @@ impl HttpClient for HttpClientWithProxy {
self.client.send(req)
}
fn user_agent(&self) -> Option<&HeaderValue> {
self.client.user_agent()
}
fn proxy(&self) -> Option<&Url> {
self.proxy.as_ref()
}
@ -135,6 +142,10 @@ impl HttpClient for Arc<HttpClientWithProxy> {
self.client.send(req)
}
fn user_agent(&self) -> Option<&HeaderValue> {
self.client.user_agent()
}
fn proxy(&self) -> Option<&Url> {
self.proxy.as_ref()
}
@ -250,6 +261,10 @@ impl HttpClient for Arc<HttpClientWithUrl> {
self.client.send(req)
}
fn user_agent(&self) -> Option<&HeaderValue> {
self.client.user_agent()
}
fn proxy(&self) -> Option<&Url> {
self.client.proxy.as_ref()
}
@ -267,6 +282,10 @@ impl HttpClient for HttpClientWithUrl {
self.client.send(req)
}
fn user_agent(&self) -> Option<&HeaderValue> {
self.client.user_agent()
}
fn proxy(&self) -> Option<&Url> {
self.client.proxy.as_ref()
}
@ -314,6 +333,10 @@ impl HttpClient for BlockedHttpClient {
})
}
fn user_agent(&self) -> Option<&HeaderValue> {
None
}
fn proxy(&self) -> Option<&Url> {
None
}
@ -334,6 +357,7 @@ type FakeHttpHandler = Box<
#[cfg(feature = "test-support")]
pub struct FakeHttpClient {
handler: FakeHttpHandler,
user_agent: HeaderValue,
}
#[cfg(feature = "test-support")]
@ -348,6 +372,7 @@ impl FakeHttpClient {
client: HttpClientWithProxy {
client: Arc::new(Self {
handler: Box::new(move |req| Box::pin(handler(req))),
user_agent: HeaderValue::from_static(type_name::<Self>()),
}),
proxy: None,
},
@ -390,6 +415,10 @@ impl HttpClient for FakeHttpClient {
future
}
fn user_agent(&self) -> Option<&HeaderValue> {
Some(&self.user_agent)
}
fn proxy(&self) -> Option<&Url> {
None
}

View file

@ -2072,6 +2072,21 @@ impl Buffer {
self.text.push_transaction(transaction, now);
}
/// Differs from `push_transaction` in that it does not clear the redo
/// stack. Intended to be used to create a parent transaction to merge
/// potential child transactions into.
///
/// The caller is responsible for removing it from the undo history using
/// `forget_transaction` if no edits are merged into it. Otherwise, if edits
/// are merged into this transaction, the caller is responsible for ensuring
/// the redo stack is cleared. The easiest way to ensure the redo stack is
/// cleared is to create transactions with the usual `start_transaction` and
/// `end_transaction` methods and merge the resulting transactions into the
/// transaction created by this method.
pub fn push_empty_transaction(&mut self, now: Instant) -> TransactionId {
self.text.push_empty_transaction(now)
}
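The lifecycle described in the doc comment — push an empty parent transaction, merge child edits into it, and forget it if nothing was merged — can be modeled with a toy undo history (this mirrors the shape of the `Buffer` API, it is not Zed's code):

```rust
// Toy undo history: each transaction is a list of edit labels.
#[derive(Default)]
struct History {
    undo_stack: Vec<Vec<&'static str>>,
}

impl History {
    // Analogue of `push_empty_transaction`: reserve a parent slot.
    fn push_empty_transaction(&mut self) -> usize {
        self.undo_stack.push(Vec::new());
        self.undo_stack.len() - 1
    }
    // Analogue of merging a child transaction into the parent.
    fn merge_into(&mut self, id: usize, edit: &'static str) {
        self.undo_stack[id].push(edit);
    }
    // Analogue of `forget_transaction`: drop the parent if it stayed empty.
    fn forget_if_empty(&mut self, id: usize) {
        if self.undo_stack[id].is_empty() {
            self.undo_stack.remove(id);
        }
    }
}
```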
/// Prevent the last transaction from being grouped with any subsequent transactions,
/// even if they occur with the buffer's undo grouping duration.
pub fn finalize_last_transaction(&mut self) -> Option<&Transaction> {

View file

@ -116,6 +116,12 @@ pub enum LanguageModelCompletionError {
provider: LanguageModelProviderName,
message: String,
},
#[error("{message}")]
UpstreamProviderError {
message: String,
status: StatusCode,
retry_after: Option<Duration>,
},
#[error("HTTP response error from {provider}'s API: status {status_code} - {message:?}")]
HttpResponseError {
provider: LanguageModelProviderName,

View file

@ -644,8 +644,62 @@ struct ApiError {
headers: HeaderMap<HeaderValue>,
}
/// Represents error responses from Zed's cloud API.
///
/// Example JSON for an upstream HTTP error:
/// ```json
/// {
/// "code": "upstream_http_error",
/// "message": "Received an error from the Anthropic API: upstream connect error or disconnect/reset before headers, reset reason: connection timeout",
/// "upstream_status": 503
/// }
/// ```
#[derive(Debug, serde::Deserialize)]
struct CloudApiError {
code: String,
message: String,
#[serde(default)]
#[serde(deserialize_with = "deserialize_optional_status_code")]
upstream_status: Option<StatusCode>,
#[serde(default)]
retry_after: Option<f64>,
}
fn deserialize_optional_status_code<'de, D>(deserializer: D) -> Result<Option<StatusCode>, D::Error>
where
D: serde::Deserializer<'de>,
{
let opt: Option<u16> = Option::deserialize(deserializer)?;
Ok(opt.and_then(|code| StatusCode::from_u16(code).ok()))
}
impl From<ApiError> for LanguageModelCompletionError {
fn from(error: ApiError) -> Self {
if let Ok(cloud_error) = serde_json::from_str::<CloudApiError>(&error.body) {
if cloud_error.code.starts_with("upstream_http_") {
let status = if let Some(status) = cloud_error.upstream_status {
status
} else if cloud_error.code.ends_with("_error") {
error.status
} else {
// If the error code embeds a status (e.g. "upstream_http_429"),
// parse it from the suffix; otherwise fall back to the response status.
cloud_error
.code
.strip_prefix("upstream_http_")
.and_then(|code_str| code_str.parse::<u16>().ok())
.and_then(|code| StatusCode::from_u16(code).ok())
.unwrap_or(error.status)
};
return LanguageModelCompletionError::UpstreamProviderError {
message: cloud_error.message,
status,
retry_after: cloud_error.retry_after.map(Duration::from_secs_f64),
};
}
}
let retry_after = None;
LanguageModelCompletionError::from_http_status(
PROVIDER_NAME,
@ -1279,3 +1333,155 @@ impl Component for ZedAiConfiguration {
)
}
}
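The fallback branch of the conversion recovers a status code from error-code strings of the form `upstream_http_429` by parsing the numeric suffix. A standalone sketch of that parse (the range filter stands in for `StatusCode::from_u16` validation; `status_from_code` is our name):

```rust
// Extract a plausible HTTP status from codes like "upstream_http_429".
// Returns None for non-numeric suffixes such as "upstream_http_error".
fn status_from_code(code: &str) -> Option<u16> {
    code.strip_prefix("upstream_http_")
        .and_then(|suffix| suffix.parse::<u16>().ok())
        // Stand-in for `StatusCode::from_u16(..).ok()`.
        .filter(|&n| (100..1000).contains(&n))
}
```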
#[cfg(test)]
mod tests {
use super::*;
use http_client::http::{HeaderMap, StatusCode};
use language_model::LanguageModelCompletionError;
#[test]
fn test_api_error_conversion_with_upstream_http_error() {
// upstream_http_error with 503 status should become ServerOverloaded
let error_body = r#"{"code":"upstream_http_error","message":"Received an error from the Anthropic API: upstream connect error or disconnect/reset before headers, reset reason: connection timeout","upstream_status":503}"#;
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::UpstreamProviderError { message, .. } => {
assert_eq!(
message,
"Received an error from the Anthropic API: upstream connect error or disconnect/reset before headers, reset reason: connection timeout"
);
}
_ => panic!(
"Expected UpstreamProviderError for upstream 503, got: {:?}",
completion_error
),
}
// upstream_http_error with 500 status should become ApiInternalServerError
let error_body = r#"{"code":"upstream_http_error","message":"Received an error from the OpenAI API: internal server error","upstream_status":500}"#;
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::UpstreamProviderError { message, .. } => {
assert_eq!(
message,
"Received an error from the OpenAI API: internal server error"
);
}
_ => panic!(
"Expected UpstreamProviderError for upstream 500, got: {:?}",
completion_error
),
}
// upstream_http_error with 429 status should become RateLimitExceeded
let error_body = r#"{"code":"upstream_http_error","message":"Received an error from the Google API: rate limit exceeded","upstream_status":429}"#;
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::UpstreamProviderError { message, .. } => {
assert_eq!(
message,
"Received an error from the Google API: rate limit exceeded"
);
}
_ => panic!(
"Expected UpstreamProviderError for upstream 429, got: {:?}",
completion_error
),
}
// Regular 500 error without upstream_http_error should remain ApiInternalServerError for Zed
let error_body = "Regular internal server error";
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::ApiInternalServerError { provider, message } => {
assert_eq!(provider, PROVIDER_NAME);
assert_eq!(message, "Regular internal server error");
}
_ => panic!(
"Expected ApiInternalServerError for regular 500, got: {:?}",
completion_error
),
}
// upstream_http_429 format should be converted to UpstreamProviderError
let error_body = r#"{"code":"upstream_http_429","message":"Upstream Anthropic rate limit exceeded.","retry_after":30.5}"#;
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::UpstreamProviderError {
message,
status,
retry_after,
} => {
assert_eq!(message, "Upstream Anthropic rate limit exceeded.");
assert_eq!(status, StatusCode::TOO_MANY_REQUESTS);
assert_eq!(retry_after, Some(Duration::from_secs_f64(30.5)));
}
_ => panic!(
"Expected UpstreamProviderError for upstream_http_429, got: {:?}",
completion_error
),
}
// Invalid JSON in error body should fall back to regular error handling
let error_body = "Not JSON at all";
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::ApiInternalServerError { provider, .. } => {
assert_eq!(provider, PROVIDER_NAME);
}
_ => panic!(
"Expected ApiInternalServerError for invalid JSON, got: {:?}",
completion_error
),
}
}
}

View file

@ -410,8 +410,20 @@ pub fn into_mistral(
.push_part(mistral::MessagePart::Text { text: text.clone() });
}
MessageContent::RedactedThinking(_) => {}
MessageContent::ToolUse(_) | MessageContent::ToolResult(_) => {
// Tool content is not supported in User messages for Mistral
MessageContent::ToolUse(_) => {
// Tool use is not supported in User messages for Mistral
}
MessageContent::ToolResult(tool_result) => {
let tool_content = match &tool_result.content {
LanguageModelToolResultContent::Text(text) => text.to_string(),
LanguageModelToolResultContent::Image(_) => {
"[Tool responded with an image, but Zed doesn't support these in Mistral models yet]".to_string()
}
};
messages.push(mistral::RequestMessage::Tool {
content: tool_content,
tool_call_id: tool_result.tool_use_id.to_string(),
});
}
}
}
@ -482,24 +494,6 @@ pub fn into_mistral(
}
}
for message in &request.messages {
for content in &message.content {
if let MessageContent::ToolResult(tool_result) = content {
let content = match &tool_result.content {
LanguageModelToolResultContent::Text(text) => text.to_string(),
LanguageModelToolResultContent::Image(_) => {
"[Tool responded with an image, but Zed doesn't support these in Mistral models yet]".to_string()
}
};
messages.push(mistral::RequestMessage::Tool {
content,
tool_call_id: tool_result.tool_use_id.to_string(),
});
}
}
}
// The Mistral API requires that tool messages be followed by assistant messages,
// not user messages. When we have a tool->user sequence in the conversation,
// we need to insert a placeholder assistant message to maintain proper conversation
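The fix-up the comment describes can be sketched on a plain sequence of roles: whenever a tool message is immediately followed by a user message, insert a placeholder assistant message so the order stays tool → assistant → user (role strings here are illustrative, not Mistral's wire format):

```rust
// Insert a placeholder "assistant" between any tool -> user adjacency.
fn insert_placeholders(roles: &[&str]) -> Vec<String> {
    let mut out: Vec<String> = Vec::new();
    for &role in roles {
        if role == "user" && out.last().map(String::as_str) == Some("tool") {
            out.push("assistant".to_string());
        }
        out.push(role.to_string());
    }
    out
}
```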

View file

@ -231,6 +231,13 @@ impl JsonLspAdapter {
))
}
schemas
.as_array_mut()
.unwrap()
.extend(cx.all_action_names().into_iter().map(|&name| {
project::lsp_store::json_language_server_ext::url_schema_for_action(name)
}));
// This can be viewed via `dev: open language server logs` -> `json-language-server` ->
// `Server Info`
serde_json::json!({

View file

@ -273,6 +273,7 @@ pub fn init(languages: Arc<LanguageRegistry>, node: NodeRuntime, cx: &mut App) {
"Astro",
"CSS",
"ERB",
"HTML/ERB",
"HEEX",
"HTML",
"JavaScript",

View file

@ -179,6 +179,7 @@ impl LspAdapter for TailwindLspAdapter {
("Elixir".to_string(), "phoenix-heex".to_string()),
("HEEX".to_string(), "phoenix-heex".to_string()),
("ERB".to_string(), "erb".to_string()),
("HTML/ERB".to_string(), "erb".to_string()),
("PHP".to_string(), "php".to_string()),
("Vue.js".to_string(), "vue".to_string()),
])

View file

@ -1,4 +1,5 @@
pub mod clangd_ext;
pub mod json_language_server_ext;
pub mod lsp_ext_command;
pub mod rust_analyzer_ext;
@ -1034,6 +1035,7 @@ impl LocalLspStore {
})
.detach();
json_language_server_ext::register_requests(this.clone(), language_server);
rust_analyzer_ext::register_notifications(this.clone(), language_server);
clangd_ext::register_notifications(this, language_server, adapter);
}
@ -1272,15 +1274,11 @@ impl LocalLspStore {
// grouped with the previous transaction in the history
// based on the transaction group interval
buffer.finalize_last_transaction();
let transaction_id = buffer
buffer
.start_transaction()
.context("transaction already open")?;
let transaction = buffer
.get_transaction(transaction_id)
.expect("transaction started")
.clone();
buffer.end_transaction(cx);
buffer.push_transaction(transaction, cx.background_executor().now());
let transaction_id = buffer.push_empty_transaction(cx.background_executor().now());
buffer.finalize_last_transaction();
anyhow::Ok(transaction_id)
})??;
@ -3553,7 +3551,8 @@ pub struct LspStore {
_maintain_buffer_languages: Task<()>,
diagnostic_summaries:
HashMap<WorktreeId, HashMap<Arc<Path>, HashMap<LanguageServerId, DiagnosticSummary>>>,
lsp_data: HashMap<BufferId, DocumentColorData>,
lsp_document_colors: HashMap<BufferId, DocumentColorData>,
lsp_code_lens: HashMap<BufferId, CodeLensData>,
}
#[derive(Debug, Default, Clone)]
@ -3563,6 +3562,7 @@ pub struct DocumentColors {
}
type DocumentColorTask = Shared<Task<std::result::Result<DocumentColors, Arc<anyhow::Error>>>>;
type CodeLensTask = Shared<Task<std::result::Result<Vec<CodeAction>, Arc<anyhow::Error>>>>;
#[derive(Debug, Default)]
struct DocumentColorData {
@ -3572,8 +3572,15 @@ struct DocumentColorData {
colors_update: Option<(Global, DocumentColorTask)>,
}
#[derive(Debug, Default)]
struct CodeLensData {
lens_for_version: Global,
lens: HashMap<LanguageServerId, Vec<CodeAction>>,
update: Option<(Global, CodeLensTask)>,
}
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
pub enum ColorFetchStrategy {
pub enum LspFetchStrategy {
IgnoreCache,
UseCache { known_cache_version: Option<usize> },
}
@ -3806,7 +3813,8 @@ impl LspStore {
language_server_statuses: Default::default(),
nonce: StdRng::from_entropy().r#gen(),
diagnostic_summaries: HashMap::default(),
lsp_data: HashMap::default(),
lsp_document_colors: HashMap::default(),
lsp_code_lens: HashMap::default(),
active_entry: None,
_maintain_workspace_config,
_maintain_buffer_languages: Self::maintain_buffer_languages(languages, cx),
@ -3863,7 +3871,8 @@ impl LspStore {
language_server_statuses: Default::default(),
nonce: StdRng::from_entropy().r#gen(),
diagnostic_summaries: HashMap::default(),
lsp_data: HashMap::default(),
lsp_document_colors: HashMap::default(),
lsp_code_lens: HashMap::default(),
active_entry: None,
toolchain_store,
_maintain_workspace_config,
@ -4164,7 +4173,8 @@ impl LspStore {
*refcount
};
if refcount == 0 {
lsp_store.lsp_data.remove(&buffer_id);
lsp_store.lsp_document_colors.remove(&buffer_id);
lsp_store.lsp_code_lens.remove(&buffer_id);
let local = lsp_store.as_local_mut().unwrap();
local.registered_buffers.remove(&buffer_id);
local.buffers_opened_in_servers.remove(&buffer_id);
@ -5704,69 +5714,168 @@ impl LspStore {
}
}
pub fn code_lens(
pub fn code_lens_actions(
&mut self,
buffer_handle: &Entity<Buffer>,
buffer: &Entity<Buffer>,
cx: &mut Context<Self>,
) -> Task<Result<Vec<CodeAction>>> {
) -> CodeLensTask {
let version_queried_for = buffer.read(cx).version();
let buffer_id = buffer.read(cx).remote_id();
if let Some(cached_data) = self.lsp_code_lens.get(&buffer_id) {
if !version_queried_for.changed_since(&cached_data.lens_for_version) {
let has_different_servers = self.as_local().is_some_and(|local| {
local
.buffers_opened_in_servers
.get(&buffer_id)
.cloned()
.unwrap_or_default()
!= cached_data.lens.keys().copied().collect()
});
if !has_different_servers {
return Task::ready(Ok(cached_data.lens.values().flatten().cloned().collect()))
.shared();
}
}
}
let lsp_data = self.lsp_code_lens.entry(buffer_id).or_default();
if let Some((updating_for, running_update)) = &lsp_data.update {
if !version_queried_for.changed_since(&updating_for) {
return running_update.clone();
}
}
let buffer = buffer.clone();
let query_version_queried_for = version_queried_for.clone();
let new_task = cx
.spawn(async move |lsp_store, cx| {
cx.background_executor()
.timer(Duration::from_millis(30))
.await;
let fetched_lens = lsp_store
.update(cx, |lsp_store, cx| lsp_store.fetch_code_lens(&buffer, cx))
.map_err(Arc::new)?
.await
.context("fetching code lens")
.map_err(Arc::new);
let fetched_lens = match fetched_lens {
Ok(fetched_lens) => fetched_lens,
Err(e) => {
lsp_store
.update(cx, |lsp_store, _| {
lsp_store.lsp_code_lens.entry(buffer_id).or_default().update = None;
})
.ok();
return Err(e);
}
};
lsp_store
.update(cx, |lsp_store, _| {
let lsp_data = lsp_store.lsp_code_lens.entry(buffer_id).or_default();
if lsp_data.lens_for_version == query_version_queried_for {
lsp_data.lens.extend(fetched_lens.clone());
} else if !lsp_data
.lens_for_version
.changed_since(&query_version_queried_for)
{
lsp_data.lens_for_version = query_version_queried_for;
lsp_data.lens = fetched_lens.clone();
}
lsp_data.update = None;
lsp_data.lens.values().flatten().cloned().collect()
})
.map_err(Arc::new)
})
.shared();
lsp_data.update = Some((version_queried_for, new_task.clone()));
new_task
}
fn fetch_code_lens(
&mut self,
buffer: &Entity<Buffer>,
cx: &mut Context<Self>,
) -> Task<Result<HashMap<LanguageServerId, Vec<CodeAction>>>> {
if let Some((upstream_client, project_id)) = self.upstream_client() {
let request_task = upstream_client.request(proto::MultiLspQuery {
buffer_id: buffer_handle.read(cx).remote_id().into(),
version: serialize_version(&buffer_handle.read(cx).version()),
buffer_id: buffer.read(cx).remote_id().into(),
version: serialize_version(&buffer.read(cx).version()),
project_id,
strategy: Some(proto::multi_lsp_query::Strategy::All(
proto::AllLanguageServers {},
)),
request: Some(proto::multi_lsp_query::Request::GetCodeLens(
GetCodeLens.to_proto(project_id, buffer_handle.read(cx)),
GetCodeLens.to_proto(project_id, buffer.read(cx)),
)),
});
let buffer = buffer_handle.clone();
cx.spawn(async move |weak_project, cx| {
let Some(project) = weak_project.upgrade() else {
return Ok(Vec::new());
let buffer = buffer.clone();
cx.spawn(async move |weak_lsp_store, cx| {
let Some(lsp_store) = weak_lsp_store.upgrade() else {
return Ok(HashMap::default());
};
let responses = request_task.await?.responses;
let code_lens = join_all(
let code_lens_actions = join_all(
responses
.into_iter()
.filter_map(|lsp_response| match lsp_response.response? {
proto::lsp_response::Response::GetCodeLensResponse(response) => {
Some(response)
}
unexpected => {
debug_panic!("Unexpected response: {unexpected:?}");
None
}
.filter_map(|lsp_response| {
let response = match lsp_response.response? {
proto::lsp_response::Response::GetCodeLensResponse(response) => {
Some(response)
}
unexpected => {
debug_panic!("Unexpected response: {unexpected:?}");
None
}
}?;
let server_id = LanguageServerId::from_proto(lsp_response.server_id);
Some((server_id, response))
})
.map(|code_lens_response| {
GetCodeLens.response_from_proto(
code_lens_response,
project.clone(),
buffer.clone(),
cx.clone(),
)
.map(|(server_id, code_lens_response)| {
let lsp_store = lsp_store.clone();
let buffer = buffer.clone();
let cx = cx.clone();
async move {
(
server_id,
GetCodeLens
.response_from_proto(
code_lens_response,
lsp_store,
buffer,
cx,
)
.await,
)
}
}),
)
.await;
Ok(code_lens
let mut has_errors = false;
let code_lens_actions = code_lens_actions
.into_iter()
.collect::<Result<Vec<Vec<_>>>>()?
.into_iter()
.flatten()
.collect())
.filter_map(|(server_id, code_lens)| match code_lens {
Ok(code_lens) => Some((server_id, code_lens)),
Err(e) => {
has_errors = true;
log::error!("{e:#}");
None
}
})
.collect::<HashMap<_, _>>();
anyhow::ensure!(
!has_errors || !code_lens_actions.is_empty(),
"Failed to fetch code lens"
);
Ok(code_lens_actions)
})
} else {
let code_lens_task =
self.request_multiple_lsp_locally(buffer_handle, None::<usize>, GetCodeLens, cx);
cx.spawn(async move |_, _| {
Ok(code_lens_task
.await
.into_iter()
.flat_map(|(_, code_lens)| code_lens)
.collect())
})
let code_lens_actions_task =
self.request_multiple_lsp_locally(buffer, None::<usize>, GetCodeLens, cx);
cx.background_spawn(
async move { Ok(code_lens_actions_task.await.into_iter().collect()) },
)
}
}
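The caching scheme in `code_lens_actions` above can be summarized as: serve the cached lens if the buffer version has not advanced, join an in-flight update that already covers the queried version, and otherwise start a fresh fetch. A std-only sketch of that decision, under the simplifying assumption that versions are totally ordered (Zed's real `Global` is a CRDT version vector, and the real code also compares the set of servers the buffer is open in, which is omitted here):

```rust
// Simplified stand-in for the cache-or-refetch decision in
// `LspStore::code_lens_actions`. `Version` replaces the CRDT `Global`,
// and the in-flight `Shared<Task>` is reduced to the version it was
// started for.
#[derive(Clone, Copy, Debug, PartialEq, PartialOrd)]
struct Version(u64);

struct CodeLensCache {
    lens_for_version: Version,
    lens: Vec<&'static str>,
    // Version an in-flight update was started for, if any.
    update_for: Option<Version>,
}

#[derive(Debug)]
enum Decision {
    UseCache(Vec<&'static str>),
    JoinRunningUpdate,
    StartNewFetch,
}

fn decide(cache: &CodeLensCache, queried: Version) -> Decision {
    // Mirrors `!version_queried_for.changed_since(&lens_for_version)`.
    if queried <= cache.lens_for_version {
        return Decision::UseCache(cache.lens.clone());
    }
    // Mirrors the `lsp_data.update` check: reuse a running update that
    // already covers the queried version instead of spawning another.
    if let Some(updating_for) = cache.update_for {
        if queried <= updating_for {
            return Decision::JoinRunningUpdate;
        }
    }
    Decision::StartNewFetch
}

fn main() {
    let cache = CodeLensCache {
        lens_for_version: Version(3),
        lens: vec!["run", "debug"],
        update_for: Some(Version(5)),
    };
    assert!(matches!(decide(&cache, Version(3)), Decision::UseCache(_)));
    assert!(matches!(decide(&cache, Version(5)), Decision::JoinRunningUpdate));
    assert!(matches!(decide(&cache, Version(7)), Decision::StartNewFetch));
}
```

The same pattern (cached value keyed by version plus a shared in-flight task) is what `document_colors` below uses for `DocumentColorData`.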
@ -6599,7 +6708,7 @@ impl LspStore {
pub fn document_colors(
&mut self,
fetch_strategy: ColorFetchStrategy,
fetch_strategy: LspFetchStrategy,
buffer: Entity<Buffer>,
cx: &mut Context<Self>,
) -> Option<DocumentColorTask> {
@ -6607,11 +6716,11 @@ impl LspStore {
let buffer_id = buffer.read(cx).remote_id();
match fetch_strategy {
ColorFetchStrategy::IgnoreCache => {}
ColorFetchStrategy::UseCache {
LspFetchStrategy::IgnoreCache => {}
LspFetchStrategy::UseCache {
known_cache_version,
} => {
if let Some(cached_data) = self.lsp_data.get(&buffer_id) {
if let Some(cached_data) = self.lsp_document_colors.get(&buffer_id) {
if !version_queried_for.changed_since(&cached_data.colors_for_version) {
let has_different_servers = self.as_local().is_some_and(|local| {
local
@ -6644,7 +6753,7 @@ impl LspStore {
}
}
let lsp_data = self.lsp_data.entry(buffer_id).or_default();
let lsp_data = self.lsp_document_colors.entry(buffer_id).or_default();
if let Some((updating_for, running_update)) = &lsp_data.colors_update {
if !version_queried_for.changed_since(&updating_for) {
return Some(running_update.clone());
@ -6658,14 +6767,14 @@ impl LspStore {
.await;
let fetched_colors = lsp_store
.update(cx, |lsp_store, cx| {
lsp_store.fetch_document_colors_for_buffer(buffer.clone(), cx)
lsp_store.fetch_document_colors_for_buffer(&buffer, cx)
})?
.await
.context("fetching document colors")
.map_err(Arc::new);
let fetched_colors = match fetched_colors {
Ok(fetched_colors) => {
if fetch_strategy != ColorFetchStrategy::IgnoreCache
if fetch_strategy != LspFetchStrategy::IgnoreCache
&& Some(true)
== buffer
.update(cx, |buffer, _| {
@ -6681,7 +6790,7 @@ impl LspStore {
lsp_store
.update(cx, |lsp_store, _| {
lsp_store
.lsp_data
.lsp_document_colors
.entry(buffer_id)
.or_default()
.colors_update = None;
@ -6693,7 +6802,7 @@ impl LspStore {
lsp_store
.update(cx, |lsp_store, _| {
let lsp_data = lsp_store.lsp_data.entry(buffer_id).or_default();
let lsp_data = lsp_store.lsp_document_colors.entry(buffer_id).or_default();
if lsp_data.colors_for_version == query_version_queried_for {
lsp_data.colors.extend(fetched_colors.clone());
@ -6727,7 +6836,7 @@ impl LspStore {
fn fetch_document_colors_for_buffer(
&mut self,
buffer: Entity<Buffer>,
buffer: &Entity<Buffer>,
cx: &mut Context<Self>,
) -> Task<anyhow::Result<HashMap<LanguageServerId, HashSet<DocumentColor>>>> {
if let Some((client, project_id)) = self.upstream_client() {
@ -6742,6 +6851,7 @@ impl LspStore {
GetDocumentColor {}.to_proto(project_id, buffer.read(cx)),
)),
});
let buffer = buffer.clone();
cx.spawn(async move |project, cx| {
let Some(project) = project.upgrade() else {
return Ok(HashMap::default());
@ -6787,7 +6897,7 @@ impl LspStore {
})
} else {
let document_colors_task =
self.request_multiple_lsp_locally(&buffer, None::<usize>, GetDocumentColor, cx);
self.request_multiple_lsp_locally(buffer, None::<usize>, GetDocumentColor, cx);
cx.spawn(async move |_, _| {
Ok(document_colors_task
.await
@ -7327,21 +7437,23 @@ impl LspStore {
}
pub(crate) async fn refresh_workspace_configurations(
this: &WeakEntity<Self>,
lsp_store: &WeakEntity<Self>,
fs: Arc<dyn Fs>,
cx: &mut AsyncApp,
) {
maybe!(async move {
let servers = this
.update(cx, |this, cx| {
let Some(local) = this.as_local() else {
let mut refreshed_servers = HashSet::default();
let servers = lsp_store
.update(cx, |lsp_store, cx| {
let toolchain_store = lsp_store.toolchain_store(cx);
let Some(local) = lsp_store.as_local() else {
return Vec::default();
};
local
.language_server_ids
.iter()
.flat_map(|((worktree_id, _), server_ids)| {
let worktree = this
let worktree = lsp_store
.worktree_store
.read(cx)
.worktree_for_id(*worktree_id, cx);
@ -7357,43 +7469,54 @@ impl LspStore {
)
});
server_ids.iter().filter_map(move |server_id| {
let fs = fs.clone();
let toolchain_store = toolchain_store.clone();
server_ids.iter().filter_map(|server_id| {
let delegate = delegate.clone()? as Arc<dyn LspAdapterDelegate>;
let states = local.language_servers.get(server_id)?;
match states {
LanguageServerState::Starting { .. } => None,
LanguageServerState::Running {
adapter, server, ..
} => Some((
adapter.adapter.clone(),
server.clone(),
delegate.clone()? as Arc<dyn LspAdapterDelegate>,
)),
} => {
let fs = fs.clone();
let toolchain_store = toolchain_store.clone();
let adapter = adapter.clone();
let server = server.clone();
refreshed_servers.insert(server.name());
Some(cx.spawn(async move |_, cx| {
let settings =
LocalLspStore::workspace_configuration_for_adapter(
adapter.adapter.clone(),
fs.as_ref(),
&delegate,
toolchain_store,
cx,
)
.await
.ok()?;
server
.notify::<lsp::notification::DidChangeConfiguration>(
&lsp::DidChangeConfigurationParams { settings },
)
.ok()?;
Some(())
}))
}
}
})
}).collect::<Vec<_>>()
})
.collect::<Vec<_>>()
})
.ok()?;
let toolchain_store = this.update(cx, |this, cx| this.toolchain_store(cx)).ok()?;
for (adapter, server, delegate) in servers {
let settings = LocalLspStore::workspace_configuration_for_adapter(
adapter,
fs.as_ref(),
&delegate,
toolchain_store.clone(),
cx,
)
.await
.ok()?;
server
.notify::<lsp::notification::DidChangeConfiguration>(
&lsp::DidChangeConfigurationParams { settings },
)
.ok();
}
log::info!("Refreshing workspace configurations for servers {refreshed_servers:?}");
// TODO: this asynchronous job runs concurrently with extension (de)registration and may take long enough for an extension
// to stop and unregister its language server wrapper.
// This is racy: an extension might have already removed all `local.language_servers` state, but here we `.clone()` and hold onto it anyway.
// This now causes errors in the logs; we should find a way to exclude such servers from processing everywhere.
let _: Vec<Option<()>> = join_all(servers).await;
Some(())
})
.await;
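The commit message floats a `Weak`-based re-architecture as the longer-term fix for this race. A hypothetical std-only sketch of that direction (the names and types here are illustrative, not Zed's actual API): the refresh job holds `Weak` handles, upgrades each one just before notifying, and silently skips servers whose extension has since been unloaded.

```rust
// Illustrative sketch: a refresh loop over Weak server handles. A server
// dropped during extension reload fails to upgrade here, instead of the
// cloned strong handle panicking later on a closed channel.
use std::rc::{Rc, Weak};

struct LanguageServer {
    name: &'static str,
}

impl LanguageServer {
    // Stand-in for sending lsp::notification::DidChangeConfiguration.
    fn notify_did_change_configuration(&self) -> Result<(), ()> {
        Ok(())
    }
}

fn refresh(servers: &[Weak<LanguageServer>]) -> Vec<&'static str> {
    let mut refreshed = Vec::new();
    for weak in servers {
        let Some(server) = weak.upgrade() else { continue };
        if server.notify_did_change_configuration().is_ok() {
            refreshed.push(server.name);
        }
    }
    refreshed
}

fn main() {
    let alive = Rc::new(LanguageServer { name: "json-language-server" });
    let dropped = {
        let s = Rc::new(LanguageServer { name: "tailwind" });
        Rc::downgrade(&s)
        // `s` is dropped here, simulating an extension unload mid-refresh.
    };
    let refreshed = refresh(&[Rc::downgrade(&alive), dropped]);
    assert_eq!(refreshed, vec!["json-language-server"]);
}
```

In the real codebase the handles would need to be `Weak<lsp::LanguageServer>` (or similar) owned by the extension rather than by `LspStore`, which is the larger refactor the commit message defers.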
@ -11280,9 +11403,12 @@ impl LspStore {
}
fn cleanup_lsp_data(&mut self, for_server: LanguageServerId) {
for buffer_lsp_data in self.lsp_data.values_mut() {
buffer_lsp_data.colors.remove(&for_server);
buffer_lsp_data.cache_version += 1;
for buffer_colors in self.lsp_document_colors.values_mut() {
buffer_colors.colors.remove(&for_server);
buffer_colors.cache_version += 1;
}
for buffer_lens in self.lsp_code_lens.values_mut() {
buffer_lens.lens.remove(&for_server);
}
if let Some(local) = self.as_local_mut() {
local.buffer_pull_diagnostics_result_ids.remove(&for_server);

View file

@ -0,0 +1,101 @@
use anyhow::Context as _;
use collections::HashMap;
use gpui::WeakEntity;
use lsp::LanguageServer;
use crate::LspStore;
/// https://github.com/Microsoft/vscode/blob/main/extensions/json-language-features/server/README.md#schema-content-request
///
/// Represents a "JSON language server-specific, non-standardized, extension to the LSP" with which the vscode-json-language-server
/// can request the contents of a schema that is associated with a uri scheme it does not support.
/// In our case, we provide the URIs for actions on server startup under the `zed://schemas/action/{normalized_action_name}` scheme.
/// We can then respond to this request with the schema content on demand, greatly reducing the total size of the JSON we send to the server on startup.
struct SchemaContentRequest {}
impl lsp::request::Request for SchemaContentRequest {
type Params = Vec<String>;
type Result = String;
const METHOD: &'static str = "vscode/content";
}
pub fn register_requests(_lsp_store: WeakEntity<LspStore>, language_server: &LanguageServer) {
language_server
.on_request::<SchemaContentRequest, _, _>(|params, cx| {
// PERF: Use a cache (`OnceLock`?) to avoid recomputing the action schemas
let mut generator = settings::KeymapFile::action_schema_generator();
let all_schemas = cx.update(|cx| HashMap::from_iter(cx.action_schemas(&mut generator)));
async move {
let all_schemas = all_schemas?;
let Some(uri) = params.get(0) else {
anyhow::bail!("No URI");
};
let normalized_action_name = uri
.strip_prefix("zed://schemas/action/")
.context("Invalid URI")?;
let action_name = denormalize_action_name(normalized_action_name);
let schema = root_schema_from_action_schema(
all_schemas
.get(action_name.as_str())
.and_then(Option::as_ref),
&mut generator,
)
.to_value();
serde_json::to_string(&schema).context("Failed to serialize schema")
}
})
.detach();
}
pub fn normalize_action_name(action_name: &str) -> String {
action_name.replace("::", "__")
}
pub fn denormalize_action_name(action_name: &str) -> String {
action_name.replace("__", "::")
}
pub fn normalized_action_file_name(action_name: &str) -> String {
normalized_action_name_to_file_name(normalize_action_name(action_name))
}
pub fn normalized_action_name_to_file_name(mut normalized_action_name: String) -> String {
normalized_action_name.push_str(".json");
normalized_action_name
}
pub fn url_schema_for_action(action_name: &str) -> serde_json::Value {
let normalized_name = normalize_action_name(action_name);
let file_name = normalized_action_name_to_file_name(normalized_name.clone());
serde_json::json!({
"fileMatch": [file_name],
"url": format!("zed://schemas/action/{}", normalized_name)
})
}
fn root_schema_from_action_schema(
action_schema: Option<&schemars::Schema>,
generator: &mut schemars::SchemaGenerator,
) -> schemars::Schema {
let Some(action_schema) = action_schema else {
return schemars::json_schema!(false);
};
let meta_schema = generator
.settings()
.meta_schema
.as_ref()
.expect("meta_schema should be present in schemars settings")
.to_string();
let defs = generator.definitions();
let mut schema = schemars::json_schema!({
"$schema": meta_schema,
"allowTrailingCommas": true,
"$defs": defs,
});
schema
.ensure_object()
.extend(std::mem::take(action_schema.clone().ensure_object()));
schema
}
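The normalization helpers in this new file are plain string substitutions; their contract is that an action name survives the round trip through the `zed://schemas/action/` URI. A self-contained sketch of that round trip, reusing the three pure helpers verbatim (the action name is just an example):

```rust
// Round trip: action name -> normalized name -> schema URI -> action name,
// matching the helpers defined in json_language_server_ext.
fn normalize_action_name(action_name: &str) -> String {
    action_name.replace("::", "__")
}

fn denormalize_action_name(action_name: &str) -> String {
    action_name.replace("__", "::")
}

fn normalized_action_name_to_file_name(mut normalized_action_name: String) -> String {
    normalized_action_name.push_str(".json");
    normalized_action_name
}

fn main() {
    let action = "editor::ConvertToUpperCase";
    let normalized = normalize_action_name(action);
    assert_eq!(normalized, "editor__ConvertToUpperCase");

    // The URI the server will later hand back via the `vscode/content` request:
    let uri = format!("zed://schemas/action/{normalized}");
    assert_eq!(
        normalized_action_name_to_file_name(normalized.clone()),
        "editor__ConvertToUpperCase.json"
    );

    // The request handler strips the prefix and denormalizes to recover
    // the original action name.
    let recovered =
        denormalize_action_name(uri.strip_prefix("zed://schemas/action/").unwrap());
    assert_eq!(recovered, action);
}
```

Note the `__` separator is only unambiguous because Zed action names do not themselves contain double underscores; the helpers rely on that convention.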

View file

@ -113,7 +113,7 @@ use std::{
use task_store::TaskStore;
use terminals::Terminals;
use text::{Anchor, BufferId, Point};
use text::{Anchor, BufferId, OffsetRangeExt, Point};
use toolchain_store::EmptyToolchainStore;
use util::{
ResultExt as _,
@ -590,7 +590,7 @@ pub(crate) struct CoreCompletion {
}
/// A code action provided by a language server.
#[derive(Clone, Debug)]
#[derive(Clone, Debug, PartialEq)]
pub struct CodeAction {
/// The id of the language server that produced this code action.
pub server_id: LanguageServerId,
@ -604,7 +604,7 @@ pub struct CodeAction {
}
/// An action sent back by a language server.
#[derive(Clone, Debug)]
#[derive(Clone, Debug, PartialEq)]
pub enum LspAction {
/// An action with the full data, may have a command or may not.
/// May require resolving.
@ -3607,20 +3607,29 @@ impl Project {
})
}
pub fn code_lens<T: Clone + ToOffset>(
pub fn code_lens_actions<T: Clone + ToOffset>(
&mut self,
buffer_handle: &Entity<Buffer>,
buffer: &Entity<Buffer>,
range: Range<T>,
cx: &mut Context<Self>,
) -> Task<Result<Vec<CodeAction>>> {
let snapshot = buffer_handle.read(cx).snapshot();
let range = snapshot.anchor_before(range.start)..snapshot.anchor_after(range.end);
let snapshot = buffer.read(cx).snapshot();
let range = range.clone().to_owned().to_point(&snapshot);
let range_start = snapshot.anchor_before(range.start);
let range_end = if range.start == range.end {
range_start
} else {
snapshot.anchor_after(range.end)
};
let range = range_start..range_end;
let code_lens_actions = self
.lsp_store
.update(cx, |lsp_store, cx| lsp_store.code_lens(buffer_handle, cx));
.update(cx, |lsp_store, cx| lsp_store.code_lens_actions(buffer, cx));
cx.background_spawn(async move {
let mut code_lens_actions = code_lens_actions.await?;
let mut code_lens_actions = code_lens_actions
.await
.map_err(|e| anyhow!("code lens fetch failed: {e:#}"))?;
code_lens_actions.retain(|code_lens_action| {
range
.start

View file

@ -384,12 +384,20 @@ struct ItemColors {
focused: Hsla,
}
fn get_item_color(cx: &App) -> ItemColors {
fn get_item_color(is_sticky: bool, cx: &App) -> ItemColors {
let colors = cx.theme().colors();
ItemColors {
default: colors.panel_background,
hover: colors.element_hover,
default: if is_sticky {
colors.panel_overlay_background
} else {
colors.panel_background
},
hover: if is_sticky {
colors.panel_overlay_hover
} else {
colors.element_hover
},
marked: colors.element_selected,
focused: colors.panel_focused_border,
drag_over: colors.drop_target_background,
@ -3903,7 +3911,7 @@ impl ProjectPanel {
let filename_text_color = details.filename_text_color;
let diagnostic_severity = details.diagnostic_severity;
let item_colors = get_item_color(cx);
let item_colors = get_item_color(is_sticky, cx);
let canonical_path = details
.canonical_path

View file

@ -20,6 +20,7 @@ static REDACT_REGEX: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"key=[^&]+")
pub struct ReqwestClient {
client: reqwest::Client,
proxy: Option<Url>,
user_agent: Option<HeaderValue>,
handle: tokio::runtime::Handle,
}
@ -44,9 +45,11 @@ impl ReqwestClient {
Ok(client.into())
}
pub fn proxy_and_user_agent(proxy: Option<Url>, agent: &str) -> anyhow::Result<Self> {
pub fn proxy_and_user_agent(proxy: Option<Url>, user_agent: &str) -> anyhow::Result<Self> {
let user_agent = HeaderValue::from_str(user_agent)?;
let mut map = HeaderMap::new();
map.insert(http::header::USER_AGENT, HeaderValue::from_str(agent)?);
map.insert(http::header::USER_AGENT, user_agent.clone());
let mut client = Self::builder().default_headers(map);
let client_has_proxy;
@ -73,6 +76,7 @@ impl ReqwestClient {
.build()?;
let mut client: ReqwestClient = client.into();
client.proxy = client_has_proxy.then_some(proxy).flatten();
client.user_agent = Some(user_agent);
Ok(client)
}
}
@ -96,6 +100,7 @@ impl From<reqwest::Client> for ReqwestClient {
client,
handle,
proxy: None,
user_agent: None,
}
}
}
@ -216,6 +221,10 @@ impl http_client::HttpClient for ReqwestClient {
type_name::<Self>()
}
fn user_agent(&self) -> Option<&HeaderValue> {
self.user_agent.as_ref()
}
fn send(
&self,
req: http::Request<http_client::AsyncBody>,

View file

@ -847,6 +847,7 @@ impl KeymapFile {
}
}
#[derive(Clone)]
pub enum KeybindUpdateOperation<'a> {
Replace {
/// Describes the keybind to create
@ -865,6 +866,47 @@ pub enum KeybindUpdateOperation<'a> {
},
}
impl KeybindUpdateOperation<'_> {
pub fn generate_telemetry(
&self,
) -> (
// The keybind that is created
String,
// The keybinding that was removed
String,
// The source of the keybinding
String,
) {
let (new_binding, removed_binding, source) = match &self {
KeybindUpdateOperation::Replace {
source,
target,
target_keybind_source,
} => (Some(source), Some(target), Some(*target_keybind_source)),
KeybindUpdateOperation::Add { source, .. } => (Some(source), None, None),
KeybindUpdateOperation::Remove {
target,
target_keybind_source,
} => (None, Some(target), Some(*target_keybind_source)),
};
let new_binding = new_binding
.map(KeybindUpdateTarget::telemetry_string)
.unwrap_or("null".to_owned());
let removed_binding = removed_binding
.map(KeybindUpdateTarget::telemetry_string)
.unwrap_or("null".to_owned());
let source = source
.as_ref()
.map(KeybindSource::name)
.map(ToOwned::to_owned)
.unwrap_or("null".to_owned());
(new_binding, removed_binding, source)
}
}
impl<'a> KeybindUpdateOperation<'a> {
pub fn add(source: KeybindUpdateTarget<'a>) -> Self {
Self::Add { source, from: None }
@ -905,21 +947,33 @@ impl<'a> KeybindUpdateTarget<'a> {
keystrokes.pop();
keystrokes
}
fn telemetry_string(&self) -> String {
format!(
"action_name: {}, context: {}, action_arguments: {}, keystrokes: {}",
self.action_name,
self.context.unwrap_or("global"),
self.action_arguments.unwrap_or("none"),
self.keystrokes_unparsed()
)
}
}
#[derive(Clone, Copy, PartialEq, Eq)]
#[derive(Clone, Copy, Default, PartialEq, Eq, PartialOrd, Ord)]
pub enum KeybindSource {
User,
Default,
Base,
Vim,
Base,
#[default]
Default,
Unknown,
}
impl KeybindSource {
const BASE: KeyBindingMetaIndex = KeyBindingMetaIndex(0);
const DEFAULT: KeyBindingMetaIndex = KeyBindingMetaIndex(1);
const VIM: KeyBindingMetaIndex = KeyBindingMetaIndex(2);
const USER: KeyBindingMetaIndex = KeyBindingMetaIndex(3);
const BASE: KeyBindingMetaIndex = KeyBindingMetaIndex(KeybindSource::Base as u32);
const DEFAULT: KeyBindingMetaIndex = KeyBindingMetaIndex(KeybindSource::Default as u32);
const VIM: KeyBindingMetaIndex = KeyBindingMetaIndex(KeybindSource::Vim as u32);
const USER: KeyBindingMetaIndex = KeyBindingMetaIndex(KeybindSource::User as u32);
pub fn name(&self) -> &'static str {
match self {
@ -927,6 +981,7 @@ impl KeybindSource {
KeybindSource::Default => "Default",
KeybindSource::Base => "Base",
KeybindSource::Vim => "Vim",
KeybindSource::Unknown => "Unknown",
}
}
@ -936,6 +991,7 @@ impl KeybindSource {
KeybindSource::Default => Self::DEFAULT,
KeybindSource::Base => Self::BASE,
KeybindSource::Vim => Self::VIM,
KeybindSource::Unknown => KeyBindingMetaIndex(*self as u32),
}
}
@ -945,7 +1001,7 @@ impl KeybindSource {
Self::BASE => KeybindSource::Base,
Self::DEFAULT => KeybindSource::Default,
Self::VIM => KeybindSource::Vim,
_ => unreachable!(),
_ => KeybindSource::Unknown,
}
}
}
@ -958,7 +1014,7 @@ impl From<KeyBindingMetaIndex> for KeybindSource {
impl From<KeybindSource> for KeyBindingMetaIndex {
fn from(source: KeybindSource) -> Self {
return source.meta();
source.meta()
}
}
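The `KeybindSource` change above reorders the enum so its declaration order doubles as its `KeyBindingMetaIndex`, and replaces the `unreachable!()` in the index-to-source direction with a catch-all `Unknown`. A std-only sketch of that mapping (the free functions stand in for `meta()` and the `From` impls):

```rust
// Discriminant-based mapping between a keybind source and its u32 meta
// index; unknown indices now degrade to `Unknown` instead of panicking.
#[derive(Clone, Copy, Debug, PartialEq)]
enum KeybindSource {
    User,    // as u32 == 0
    Vim,     // 1
    Base,    // 2
    Default, // 3
    Unknown, // 4
}

fn meta(source: KeybindSource) -> u32 {
    source as u32
}

fn from_meta(index: u32) -> KeybindSource {
    match index {
        0 => KeybindSource::User,
        1 => KeybindSource::Vim,
        2 => KeybindSource::Base,
        3 => KeybindSource::Default,
        // Previously `unreachable!()`; any index no variant claims is Unknown.
        _ => KeybindSource::Unknown,
    }
}

fn main() {
    for source in [
        KeybindSource::User,
        KeybindSource::Vim,
        KeybindSource::Base,
        KeybindSource::Default,
    ] {
        assert_eq!(from_meta(meta(source)), source);
    }
    assert_eq!(from_meta(42), KeybindSource::Unknown);
}
```

Tying the constants to `KeybindSource::X as u32` (as the diff does) keeps the two directions in sync if variants are ever reordered again.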
@ -1567,4 +1623,44 @@ mod tests {
.unindent(),
);
}
#[test]
fn test_keymap_remove() {
zlog::init_test();
check_keymap_update(
r#"
[
{
"context": "Editor",
"bindings": {
"cmd-k cmd-u": "editor::ConvertToUpperCase",
"cmd-k cmd-l": "editor::ConvertToLowerCase",
"cmd-[": "pane::GoBack",
}
},
]
"#,
KeybindUpdateOperation::Remove {
target: KeybindUpdateTarget {
context: Some("Editor"),
keystrokes: &parse_keystrokes("cmd-k cmd-l"),
action_name: "editor::ConvertToLowerCase",
action_arguments: None,
},
target_keybind_source: KeybindSource::User,
},
r#"
[
{
"context": "Editor",
"bindings": {
"cmd-k cmd-u": "editor::ConvertToUpperCase",
"cmd-[": "pane::GoBack",
}
},
]
"#,
);
}
}

View file

@ -190,6 +190,7 @@ fn replace_value_in_json_text(
}
}
let mut removed_comma = false;
// Look backward for a preceding comma first
let preceding_text = text.get(0..removal_start).unwrap_or("");
if let Some(comma_pos) = preceding_text.rfind(',') {
@ -197,10 +198,12 @@ fn replace_value_in_json_text(
let between_comma_and_key = text.get(comma_pos + 1..removal_start).unwrap_or("");
if between_comma_and_key.trim().is_empty() {
removal_start = comma_pos;
removed_comma = true;
}
}
if let Some(remaining_text) = text.get(existing_value_range.end..) {
if let Some(remaining_text) = text.get(existing_value_range.end..)
&& !removed_comma
{
let mut chars = remaining_text.char_indices();
while let Some((offset, ch)) = chars.next() {
if ch == ',' {
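The `removed_comma` flag added in this hunk exists so that deleting a JSON entry consumes either the comma before it or the comma after it, never both. A simplified std-only sketch of that bookkeeping (byte offsets and whitespace handling are reduced to the minimum needed to show the flag's effect; the real function works on a parsed value range):

```rust
// Remove the span `key_start..value_end` from JSON text, swallowing at most
// one adjacent comma: prefer the preceding one (covers deleting the last
// entry), and fall back to the trailing one only if none was taken.
fn remove_key_span(text: &str, key_start: usize, value_end: usize) -> String {
    let mut removal_start = key_start;
    let mut removal_end = value_end;
    let mut removed_comma = false;

    if let Some(comma_pos) = text[..removal_start].rfind(',') {
        if text[comma_pos + 1..removal_start].trim().is_empty() {
            removal_start = comma_pos;
            removed_comma = true;
        }
    }
    if !removed_comma {
        if let Some(offset) = text[removal_end..].find(',') {
            if text[removal_end..removal_end + offset].trim().is_empty() {
                removal_end += offset + 1;
                // Also consume whitespace after the swallowed comma.
                removal_end +=
                    text[removal_end..].len() - text[removal_end..].trim_start().len();
            }
        }
    }
    format!("{}{}", &text[..removal_start], &text[removal_end..])
}

fn main() {
    let text = r#"{"a": 1, "b": 2}"#;
    // Removing the last entry takes the comma before it...
    assert_eq!(remove_key_span(text, 9, 15), r#"{"a": 1}"#);
    // ...while removing the first entry takes the comma after it.
    assert_eq!(remove_key_span(text, 1, 7), r#"{"b": 2}"#);
}
```

Without the flag, deleting a middle entry could remove both commas and leave the remaining neighbors unseparated, producing invalid JSON.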

View file

@ -23,6 +23,7 @@ feature_flags.workspace = true
fs.workspace = true
fuzzy.workspace = true
gpui.workspace = true
itertools.workspace = true
language.workspace = true
log.workspace = true
menu.workspace = true
@ -34,6 +35,8 @@ search.workspace = true
serde.workspace = true
serde_json.workspace = true
settings.workspace = true
telemetry.workspace = true
tempfile.workspace = true
theme.workspace = true
tree-sitter-json.workspace = true
tree-sitter-rust.workspace = true

File diff suppressed because it is too large

View file

@ -2,19 +2,24 @@ use std::{ops::Range, rc::Rc, time::Duration};
use editor::{EditorSettings, ShowScrollbar, scroll::ScrollbarAutoHide};
use gpui::{
AppContext, Axis, Context, Entity, FocusHandle, Length, ListHorizontalSizingBehavior,
ListSizingBehavior, MouseButton, Point, Task, UniformListScrollHandle, WeakEntity,
transparent_black, uniform_list,
AbsoluteLength, AppContext, Axis, Context, DefiniteLength, DragMoveEvent, Entity, FocusHandle,
Length, ListHorizontalSizingBehavior, ListSizingBehavior, MouseButton, Point, Stateful, Task,
UniformListScrollHandle, WeakEntity, transparent_black, uniform_list,
};
use itertools::intersperse_with;
use settings::Settings as _;
use ui::{
ActiveTheme as _, AnyElement, App, Button, ButtonCommon as _, ButtonStyle, Color, Component,
ComponentScope, Div, ElementId, FixedWidth as _, FluentBuilder as _, Indicator,
InteractiveElement as _, IntoElement, ParentElement, Pixels, RegisterComponent, RenderOnce,
Scrollbar, ScrollbarState, StatefulInteractiveElement as _, Styled, StyledExt as _,
InteractiveElement, IntoElement, ParentElement, Pixels, RegisterComponent, RenderOnce,
Scrollbar, ScrollbarState, StatefulInteractiveElement, Styled, StyledExt as _,
StyledTypography, Window, div, example_group_with_title, h_flex, px, single_example, v_flex,
};
#[derive(Debug)]
struct DraggedColumn(usize);
struct UniformListData<const COLS: usize> {
render_item_fn: Box<dyn Fn(Range<usize>, &mut Window, &mut App) -> Vec<[AnyElement; COLS]>>,
element_id: ElementId,
@ -40,6 +45,10 @@ impl<const COLS: usize> TableContents<COLS> {
TableContents::UniformList(data) => data.row_count,
}
}
fn is_empty(&self) -> bool {
self.len() == 0
}
}
pub struct TableInteractionState {
@ -187,6 +196,87 @@ impl TableInteractionState {
}
}
fn render_resize_handles<const COLS: usize>(
&self,
column_widths: &[Length; COLS],
resizable_columns: &[ResizeBehavior; COLS],
initial_sizes: [DefiniteLength; COLS],
columns: Option<Entity<ColumnWidths<COLS>>>,
window: &mut Window,
cx: &mut App,
) -> AnyElement {
let spacers = column_widths
.iter()
.map(|width| base_cell_style(Some(*width)).into_any_element());
let mut column_ix = 0;
let resizable_columns_slice = *resizable_columns;
let mut resizable_columns = resizable_columns.into_iter();
let dividers = intersperse_with(spacers, || {
window.with_id(column_ix, |window| {
let mut resize_divider = div()
// This is required because this is evaluated at a different time than the use_state call above
.id(column_ix)
.relative()
.top_0()
.w_0p5()
.h_full()
.bg(cx.theme().colors().border.opacity(0.5));
let mut resize_handle = div()
.id("column-resize-handle")
.absolute()
.left_neg_0p5()
.w(px(5.0))
.h_full();
if resizable_columns
.next()
.is_some_and(ResizeBehavior::is_resizable)
{
let hovered = window.use_state(cx, |_window, _cx| false);
resize_divider = resize_divider.when(*hovered.read(cx), |div| {
div.bg(cx.theme().colors().border_focused)
});
resize_handle = resize_handle
.on_hover(move |&was_hovered, _, cx| hovered.write(cx, was_hovered))
.cursor_col_resize()
.when_some(columns.clone(), |this, columns| {
this.on_click(move |event, window, cx| {
if event.down.click_count >= 2 {
columns.update(cx, |columns, _| {
columns.on_double_click(
column_ix,
&initial_sizes,
&resizable_columns_slice,
window,
);
})
}
cx.stop_propagation();
})
})
.on_drag(DraggedColumn(column_ix), |_, _offset, _window, cx| {
cx.new(|_cx| gpui::Empty)
})
}
column_ix += 1;
resize_divider.child(resize_handle).into_any_element()
})
});
div()
.id("resize-handles")
.h_flex()
.absolute()
.w_full()
.inset_0()
.children(dividers)
.into_any_element()
}
fn render_vertical_scrollbar_track(
this: &Entity<Self>,
parent: Div,
@ -365,6 +455,242 @@ impl TableInteractionState {
}
}
#[derive(Debug, Copy, Clone, PartialEq)]
pub enum ResizeBehavior {
None,
Resizable,
MinSize(f32),
}
impl ResizeBehavior {
pub fn is_resizable(&self) -> bool {
*self != ResizeBehavior::None
}
pub fn min_size(&self) -> Option<f32> {
match self {
ResizeBehavior::None => None,
ResizeBehavior::Resizable => Some(0.05),
ResizeBehavior::MinSize(min_size) => Some(*min_size),
}
}
}
pub struct ColumnWidths<const COLS: usize> {
widths: [DefiniteLength; COLS],
cached_bounds_width: Pixels,
initialized: bool,
}
impl<const COLS: usize> ColumnWidths<COLS> {
pub fn new(_: &mut App) -> Self {
Self {
widths: [DefiniteLength::default(); COLS],
cached_bounds_width: Default::default(),
initialized: false,
}
}
fn get_fraction(length: &DefiniteLength, bounds_width: Pixels, rem_size: Pixels) -> f32 {
match length {
DefiniteLength::Absolute(AbsoluteLength::Pixels(pixels)) => *pixels / bounds_width,
DefiniteLength::Absolute(AbsoluteLength::Rems(rems_width)) => {
rems_width.to_pixels(rem_size) / bounds_width
}
DefiniteLength::Fraction(fraction) => *fraction,
}
}
fn on_double_click(
&mut self,
double_click_position: usize,
initial_sizes: &[DefiniteLength; COLS],
resize_behavior: &[ResizeBehavior; COLS],
window: &mut Window,
) {
let bounds_width = self.cached_bounds_width;
let rem_size = window.rem_size();
let initial_sizes =
initial_sizes.map(|length| Self::get_fraction(&length, bounds_width, rem_size));
let mut widths = self
.widths
.map(|length| Self::get_fraction(&length, bounds_width, rem_size));
let diff = initial_sizes[double_click_position] - widths[double_click_position];
if diff > 0.0 {
let diff_remaining = self.propagate_resize_diff_right(
diff,
double_click_position,
&mut widths,
resize_behavior,
);
if diff_remaining > 0.0 && double_click_position > 0 {
self.propagate_resize_diff_left(
-diff_remaining,
double_click_position - 1,
&mut widths,
resize_behavior,
);
}
} else if double_click_position > 0 {
let diff_remaining = self.propagate_resize_diff_left(
diff,
double_click_position,
&mut widths,
resize_behavior,
);
if diff_remaining < 0.0 {
self.propagate_resize_diff_right(
-diff_remaining,
double_click_position,
&mut widths,
resize_behavior,
);
}
}
self.widths = widths.map(DefiniteLength::Fraction);
}
fn on_drag_move(
&mut self,
drag_event: &DragMoveEvent<DraggedColumn>,
resize_behavior: &[ResizeBehavior; COLS],
window: &mut Window,
cx: &mut Context<Self>,
) {
let drag_position = drag_event.event.position;
let bounds = drag_event.bounds;
let mut col_position = 0.0;
let rem_size = window.rem_size();
let bounds_width = bounds.right() - bounds.left();
let col_idx = drag_event.drag(cx).0;
let mut widths = self
.widths
.map(|length| Self::get_fraction(&length, bounds_width, rem_size));
for length in widths[0..=col_idx].iter() {
col_position += length;
}
let mut total_length_ratio = col_position;
for length in widths[col_idx + 1..].iter() {
total_length_ratio += length;
}
let drag_fraction = (drag_position.x - bounds.left()) / bounds_width;
let drag_fraction = drag_fraction * total_length_ratio;
let diff = drag_fraction - col_position;
let is_dragging_right = diff > 0.0;
if is_dragging_right {
self.propagate_resize_diff_right(diff, col_idx, &mut widths, resize_behavior);
} else {
// Resize behavior could be improved in the future by also borrowing space from the columns to the right when the left ones run out of room
self.propagate_resize_diff_left(diff, col_idx, &mut widths, resize_behavior);
}
self.widths = widths.map(DefiniteLength::Fraction);
}
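The drag math in `on_drag_move` can be sketched standalone (plain `f32`s and hypothetical values, without the gpui event types): the pointer position is normalized against the table bounds, rescaled by the total width ratio, and the difference from the dragged column's right edge becomes the resize diff.

```rust
// Sketch of the diff computation in `on_drag_move`.
fn drag_diff(pointer_x: f32, bounds_left: f32, bounds_width: f32, widths: &[f32], col_idx: usize) -> f32 {
    // Right edge of the dragged column, as a fraction of the table.
    let col_position: f32 = widths[..=col_idx].iter().sum();
    // Total of all column fractions (may drift from 1.0 after resizes).
    let total_ratio: f32 = widths.iter().sum();
    let drag_fraction = (pointer_x - bounds_left) / bounds_width * total_ratio;
    drag_fraction - col_position
}

fn main() {
    // Pointer at 60% of an 800px table whose first column ends at 50%:
    // the handle is being dragged right by 10% of the table width.
    let widths = [0.5_f32, 0.3, 0.2];
    let diff = drag_diff(480.0, 0.0, 800.0, &widths, 0);
    assert!((diff - 0.1).abs() < 1e-5);
}
```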
fn propagate_resize_diff_right(
&self,
diff: f32,
col_idx: usize,
widths: &mut [f32; COLS],
resize_behavior: &[ResizeBehavior; COLS],
) -> f32 {
let mut diff_remaining = diff;
let mut curr_column = col_idx + 1;
while diff_remaining > 0.0 && curr_column < COLS {
let Some(min_size) = resize_behavior[curr_column - 1].min_size() else {
curr_column += 1;
continue;
};
let mut curr_width = widths[curr_column] - diff_remaining;
diff_remaining = 0.0;
if min_size > curr_width {
diff_remaining += min_size - curr_width;
curr_width = min_size;
}
widths[curr_column] = curr_width;
curr_column += 1;
}
widths[col_idx] = widths[col_idx] + (diff - diff_remaining);
return diff_remaining;
}
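A simplified standalone sketch of this rightward propagation, using one uniform minimum instead of the per-column `ResizeBehavior` and hypothetical values: each neighbor to the right gives up width until it hits its minimum, and any leftover shrinkage is pushed further right.

```rust
// Simplified sketch of `propagate_resize_diff_right` (uniform min size).
fn shrink_right(diff: f64, col_idx: usize, widths: &mut [f64], min_size: f64) -> f64 {
    let mut remaining = diff;
    for ix in (col_idx + 1)..widths.len() {
        if remaining <= 0.0 {
            break;
        }
        // Shrink this neighbor, but never below its minimum size.
        let new_width = (widths[ix] - remaining).max(min_size);
        remaining -= widths[ix] - new_width;
        widths[ix] = new_width;
    }
    // The dragged column grows by whatever its neighbors actually gave up.
    widths[col_idx] += diff - remaining;
    remaining
}

fn main() {
    // Drag the first column's handle right by 0.4 of the table width.
    let mut widths = [0.33, 0.33, 0.34];
    let leftover = shrink_right(0.4, 0, &mut widths, 0.05);
    assert!(leftover.abs() < 1e-9); // the neighbors absorbed the whole drag
    assert!((widths[0] - 0.73).abs() < 1e-9);
    assert!((widths[1] - 0.05).abs() < 1e-9); // clamped at the minimum
    assert!((widths[2] - 0.22).abs() < 1e-9);
}
```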
fn propagate_resize_diff_left(
&mut self,
diff: f32,
mut curr_column: usize,
widths: &mut [f32; COLS],
resize_behavior: &[ResizeBehavior; COLS],
) -> f32 {
let mut diff_remaining = diff;
let col_idx = curr_column;
while diff_remaining < 0.0 {
let Some(min_size) = resize_behavior[curr_column].min_size() else {
if curr_column == 0 {
break;
}
curr_column -= 1;
continue;
};
let mut curr_width = widths[curr_column] + diff_remaining;
diff_remaining = 0.0;
if curr_width < min_size {
diff_remaining = curr_width - min_size;
curr_width = min_size;
}
widths[curr_column] = curr_width;
if curr_column == 0 {
break;
}
curr_column -= 1;
}
widths[col_idx + 1] = widths[col_idx + 1] - (diff - diff_remaining);
return diff_remaining;
}
}
pub struct TableWidths<const COLS: usize> {
initial: [DefiniteLength; COLS],
current: Option<Entity<ColumnWidths<COLS>>>,
resizable: [ResizeBehavior; COLS],
}
impl<const COLS: usize> TableWidths<COLS> {
pub fn new(widths: [impl Into<DefiniteLength>; COLS]) -> Self {
let widths = widths.map(Into::into);
TableWidths {
initial: widths,
current: None,
resizable: [ResizeBehavior::None; COLS],
}
}
fn lengths(&self, cx: &App) -> [Length; COLS] {
self.current
.as_ref()
.map(|entity| entity.read(cx).widths.map(Length::Definite))
.unwrap_or(self.initial.map(Length::Definite))
}
}
/// A table component
#[derive(RegisterComponent, IntoElement)]
pub struct Table<const COLS: usize = 3> {
@ -373,21 +699,23 @@ pub struct Table<const COLS: usize = 3> {
headers: Option<[AnyElement; COLS]>,
rows: TableContents<COLS>,
interaction_state: Option<WeakEntity<TableInteractionState>>,
column_widths: Option<[Length; COLS]>,
map_row: Option<Rc<dyn Fn((usize, Div), &mut Window, &mut App) -> AnyElement>>,
col_widths: Option<TableWidths<COLS>>,
map_row: Option<Rc<dyn Fn((usize, Stateful<Div>), &mut Window, &mut App) -> AnyElement>>,
empty_table_callback: Option<Rc<dyn Fn(&mut Window, &mut App) -> AnyElement>>,
}
impl<const COLS: usize> Table<COLS> {
/// number of headers provided.
pub fn new() -> Self {
Table {
Self {
striped: false,
width: None,
headers: None,
rows: TableContents::Vec(Vec::new()),
interaction_state: None,
column_widths: None,
map_row: None,
empty_table_callback: None,
col_widths: None,
}
}
@ -448,32 +776,68 @@ impl<const COLS: usize> Table<COLS> {
self
}
pub fn column_widths(mut self, widths: [impl Into<Length>; COLS]) -> Self {
self.column_widths = Some(widths.map(Into::into));
pub fn column_widths(mut self, widths: [impl Into<DefiniteLength>; COLS]) -> Self {
if self.col_widths.is_none() {
self.col_widths = Some(TableWidths::new(widths));
}
self
}
pub fn resizable_columns(
mut self,
resizable: [ResizeBehavior; COLS],
column_widths: &Entity<ColumnWidths<COLS>>,
cx: &mut App,
) -> Self {
if let Some(table_widths) = self.col_widths.as_mut() {
table_widths.resizable = resizable;
let column_widths = table_widths
.current
.get_or_insert_with(|| column_widths.clone());
column_widths.update(cx, |widths, _| {
if !widths.initialized {
widths.initialized = true;
widths.widths = table_widths.initial;
}
})
}
self
}
pub fn map_row(
mut self,
callback: impl Fn((usize, Div), &mut Window, &mut App) -> AnyElement + 'static,
callback: impl Fn((usize, Stateful<Div>), &mut Window, &mut App) -> AnyElement + 'static,
) -> Self {
self.map_row = Some(Rc::new(callback));
self
}
/// Provide a callback that is invoked when the table is rendered without any rows
pub fn empty_table_callback(
mut self,
callback: impl Fn(&mut Window, &mut App) -> AnyElement + 'static,
) -> Self {
self.empty_table_callback = Some(Rc::new(callback));
self
}
}
fn base_cell_style(width: Option<Length>, cx: &App) -> Div {
fn base_cell_style(width: Option<Length>) -> Div {
div()
.px_1p5()
.when_some(width, |this, width| this.w(width))
.when(width.is_none(), |this| this.flex_1())
.justify_start()
.text_ui(cx)
.whitespace_nowrap()
.text_ellipsis()
.overflow_hidden()
}
fn base_cell_style_text(width: Option<Length>, cx: &App) -> Div {
base_cell_style(width).text_ui(cx)
}
pub fn render_row<const COLS: usize>(
row_index: usize,
items: [impl IntoElement; COLS],
@ -492,33 +856,33 @@ pub fn render_row<const COLS: usize>(
.column_widths
.map_or([None; COLS], |widths| widths.map(Some));
let row = div().w_full().child(
h_flex()
.id("table_row")
.w_full()
.justify_between()
.px_1p5()
.py_1()
.when_some(bg, |row, bg| row.bg(bg))
.when(!is_striped, |row| {
row.border_b_1()
.border_color(transparent_black())
.when(!is_last, |row| row.border_color(cx.theme().colors().border))
})
.children(
items
.map(IntoElement::into_any_element)
.into_iter()
.zip(column_widths)
.map(|(cell, width)| base_cell_style(width, cx).child(cell)),
),
let mut row = h_flex()
.h_full()
.id(("table_row", row_index))
.w_full()
.justify_between()
.when_some(bg, |row, bg| row.bg(bg))
.when(!is_striped, |row| {
row.border_b_1()
.border_color(transparent_black())
.when(!is_last, |row| row.border_color(cx.theme().colors().border))
});
row = row.children(
items
.map(IntoElement::into_any_element)
.into_iter()
.zip(column_widths)
.map(|(cell, width)| base_cell_style_text(width, cx).px_1p5().py_1().child(cell)),
);
if let Some(map_row) = table_context.map_row {
let row = if let Some(map_row) = table_context.map_row {
map_row((row_index, row), window, cx)
} else {
row.into_any_element()
}
};
div().h_full().w_full().child(row).into_any_element()
}
pub fn render_header<const COLS: usize>(
@ -542,7 +906,7 @@ pub fn render_header<const COLS: usize>(
headers
.into_iter()
.zip(column_widths)
.map(|(h, width)| base_cell_style(width, cx).child(h)),
.map(|(h, width)| base_cell_style_text(width, cx).child(h)),
)
}
@ -551,15 +915,15 @@ pub struct TableRenderContext<const COLS: usize> {
pub striped: bool,
pub total_row_count: usize,
pub column_widths: Option<[Length; COLS]>,
pub map_row: Option<Rc<dyn Fn((usize, Div), &mut Window, &mut App) -> AnyElement>>,
pub map_row: Option<Rc<dyn Fn((usize, Stateful<Div>), &mut Window, &mut App) -> AnyElement>>,
}
impl<const COLS: usize> TableRenderContext<COLS> {
fn new(table: &Table<COLS>) -> Self {
fn new(table: &Table<COLS>, cx: &App) -> Self {
Self {
striped: table.striped,
total_row_count: table.rows.len(),
column_widths: table.column_widths,
column_widths: table.col_widths.as_ref().map(|widths| widths.lengths(cx)),
map_row: table.map_row.clone(),
}
}
@ -567,8 +931,13 @@ impl<const COLS: usize> TableRenderContext<COLS> {
impl<const COLS: usize> RenderOnce for Table<COLS> {
fn render(mut self, window: &mut Window, cx: &mut App) -> impl IntoElement {
let table_context = TableRenderContext::new(&self);
let table_context = TableRenderContext::new(&self, cx);
let interaction_state = self.interaction_state.and_then(|state| state.upgrade());
let current_widths = self
.col_widths
.as_ref()
.and_then(|widths| Some((widths.current.as_ref()?, widths.resizable)))
.map(|(curr, resize_behavior)| (curr.downgrade(), resize_behavior));
let scroll_track_size = px(16.);
let h_scroll_offset = if interaction_state
@ -582,6 +951,7 @@ impl<const COLS: usize> RenderOnce for Table<COLS> {
};
let width = self.width;
let no_rows_rendered = self.rows.is_empty();
let table = div()
.when_some(width, |this, width| this.w(width))
@ -590,6 +960,31 @@ impl<const COLS: usize> RenderOnce for Table<COLS> {
.when_some(self.headers.take(), |this, headers| {
this.child(render_header(headers, table_context.clone(), cx))
})
.when_some(current_widths, {
|this, (widths, resize_behavior)| {
this.on_drag_move::<DraggedColumn>({
let widths = widths.clone();
move |e, window, cx| {
widths
.update(cx, |widths, cx| {
widths.on_drag_move(e, &resize_behavior, window, cx);
})
.ok();
}
})
.on_children_prepainted(move |bounds, _, cx| {
widths
.update(cx, |widths, _| {
// This works because all children x axis bounds are the same
widths.cached_bounds_width = bounds[0].right() - bounds[0].left();
})
.ok();
})
}
})
.on_drop::<DraggedColumn>(|_, _, _| {
// Finish the resize operation
})
.child(
div()
.flex_grow()
@ -644,6 +1039,25 @@ impl<const COLS: usize> RenderOnce for Table<COLS> {
),
),
})
.when_some(
self.col_widths.as_ref().zip(interaction_state.as_ref()),
|parent, (table_widths, state)| {
parent.child(state.update(cx, |state, cx| {
let resizable_columns = table_widths.resizable;
let column_widths = table_widths.lengths(cx);
let columns = table_widths.current.clone();
let initial_sizes = table_widths.initial;
state.render_resize_handles(
&column_widths,
&resizable_columns,
initial_sizes,
columns,
window,
cx,
)
}))
},
)
.when_some(interaction_state.as_ref(), |this, interaction_state| {
this.map(|this| {
TableInteractionState::render_vertical_scrollbar_track(
@ -662,6 +1076,21 @@ impl<const COLS: usize> RenderOnce for Table<COLS> {
})
}),
)
.when_some(
no_rows_rendered
.then_some(self.empty_table_callback)
.flatten(),
|this, callback| {
this.child(
h_flex()
.size_full()
.p_3()
.items_start()
.justify_center()
.child(callback(window, cx)),
)
},
)
.when_some(
width.and(interaction_state.as_ref()),
|this, interaction_state| {


@ -320,7 +320,39 @@ impl History {
last_edit_at: now,
suppress_grouping: false,
});
self.redo_stack.clear();
}
/// Differs from `push_transaction` in that it does not clear the redo
/// stack. Intended to be used to create a parent transaction to merge
/// potential child transactions into.
///
/// The caller is responsible for removing it from the undo history using
/// `forget_transaction` if no edits are merged into it. Otherwise, if edits
/// are merged into this transaction, the caller is responsible for ensuring
/// the redo stack is cleared. The easiest way to ensure the redo stack is
/// cleared is to create transactions with the usual `start_transaction` and
/// `end_transaction` methods and merging the resulting transactions into
/// the transaction created by this method.
fn push_empty_transaction(
&mut self,
start: clock::Global,
now: Instant,
clock: &mut clock::Lamport,
) -> TransactionId {
assert_eq!(self.transaction_depth, 0);
let id = clock.tick();
let transaction = Transaction {
id,
start,
edit_ids: Vec::new(),
};
self.undo_stack.push(HistoryEntry {
transaction,
first_edit_at: now,
last_edit_at: now,
suppress_grouping: false,
});
id
}
fn push_undo(&mut self, op_id: clock::Lamport) {
@ -1495,6 +1527,24 @@ impl Buffer {
self.history.push_transaction(transaction, now);
}
/// Differs from `push_transaction` in that it does not clear the redo stack.
/// The caller responsible for
/// Differs from `push_transaction` in that it does not clear the redo
/// stack. Intended to be used to create a parent transaction to merge
/// potential child transactions into.
///
/// The caller is responsible for removing it from the undo history using
/// `forget_transaction` if no edits are merged into it. Otherwise, if edits
/// are merged into this transaction, the caller is responsible for ensuring
/// the redo stack is cleared. The easiest way to ensure the redo stack is
/// cleared is to create transactions with the usual `start_transaction` and
/// `end_transaction` methods and merging the resulting transactions into
/// the transaction created by this method.
pub fn push_empty_transaction(&mut self, now: Instant) -> TransactionId {
self.history
.push_empty_transaction(self.version.clone(), now, &mut self.lamport_clock)
}
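The protocol described in the doc comment can be modeled with a toy history (hypothetical, heavily simplified types, not the real `text` crate API): push an empty parent transaction without touching the redo stack, clear the redo stack only once edits are merged into it, and forget the parent if nothing was merged.

```rust
// Toy model of the `push_empty_transaction` protocol.
struct Txn {
    id: u32,
    edit_ids: Vec<u32>,
}

struct History {
    undo_stack: Vec<Txn>,
    redo_stack: Vec<Txn>,
    next_id: u32,
}

impl History {
    fn push_empty_transaction(&mut self) -> u32 {
        let id = self.next_id;
        self.next_id += 1;
        // Unlike `push_transaction`, the redo stack is left untouched here.
        self.undo_stack.push(Txn { id, edit_ids: Vec::new() });
        id
    }

    fn merge_into(&mut self, dest: u32, edits: Vec<u32>) {
        if let Some(txn) = self.undo_stack.iter_mut().find(|t| t.id == dest) {
            txn.edit_ids.extend(edits);
            // Once edits land in the parent, the caller must clear the redo stack.
            self.redo_stack.clear();
        }
    }

    fn forget_transaction(&mut self, id: u32) {
        self.undo_stack.retain(|t| t.id != id);
    }
}

fn main() {
    let mut history = History {
        undo_stack: Vec::new(),
        redo_stack: vec![Txn { id: 0, edit_ids: Vec::new() }],
        next_id: 1,
    };
    let parent = history.push_empty_transaction();
    assert_eq!(history.redo_stack.len(), 1); // redo history survives the push
    history.merge_into(parent, vec![42]);
    assert!(history.redo_stack.is_empty()); // cleared once edits are merged

    // A parent that never receives edits should be forgotten instead.
    let unused = history.push_empty_transaction();
    history.forget_transaction(unused);
    assert_eq!(history.undo_stack.len(), 1); // only the merged parent remains
}
```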
pub fn edited_ranges_for_transaction_id<D>(
&self,
transaction_id: TransactionId,


@ -83,6 +83,8 @@ impl ThemeColors {
panel_indent_guide: neutral().light_alpha().step_5(),
panel_indent_guide_hover: neutral().light_alpha().step_6(),
panel_indent_guide_active: neutral().light_alpha().step_6(),
panel_overlay_background: neutral().light().step_2(),
panel_overlay_hover: neutral().light_alpha().step_4(),
pane_focused_border: blue().light().step_5(),
pane_group_border: neutral().light().step_6(),
scrollbar_thumb_background: neutral().light_alpha().step_3(),
@ -206,6 +208,8 @@ impl ThemeColors {
panel_indent_guide: neutral().dark_alpha().step_4(),
panel_indent_guide_hover: neutral().dark_alpha().step_6(),
panel_indent_guide_active: neutral().dark_alpha().step_6(),
panel_overlay_background: neutral().dark().step_2(),
panel_overlay_hover: neutral().dark_alpha().step_4(),
pane_focused_border: blue().dark().step_5(),
pane_group_border: neutral().dark().step_6(),
scrollbar_thumb_background: neutral().dark_alpha().step_3(),


@ -59,6 +59,7 @@ pub(crate) fn zed_default_dark() -> Theme {
let bg = hsla(215. / 360., 12. / 100., 15. / 100., 1.);
let editor = hsla(220. / 360., 12. / 100., 18. / 100., 1.);
let elevated_surface = hsla(225. / 360., 12. / 100., 17. / 100., 1.);
let hover = hsla(225.0 / 360., 11.8 / 100., 26.7 / 100., 1.0);
let blue = hsla(207.8 / 360., 81. / 100., 66. / 100., 1.0);
let gray = hsla(218.8 / 360., 10. / 100., 40. / 100., 1.0);
@ -108,14 +109,14 @@ pub(crate) fn zed_default_dark() -> Theme {
surface_background: bg,
background: bg,
element_background: hsla(223.0 / 360., 13. / 100., 21. / 100., 1.0),
element_hover: hsla(225.0 / 360., 11.8 / 100., 26.7 / 100., 1.0),
element_hover: hover,
element_active: hsla(220.0 / 360., 11.8 / 100., 20.0 / 100., 1.0),
element_selected: hsla(224.0 / 360., 11.3 / 100., 26.1 / 100., 1.0),
element_disabled: SystemColors::default().transparent,
element_selection_background: player.local().selection.alpha(0.25),
drop_target_background: hsla(220.0 / 360., 8.3 / 100., 21.4 / 100., 1.0),
ghost_element_background: SystemColors::default().transparent,
ghost_element_hover: hsla(225.0 / 360., 11.8 / 100., 26.7 / 100., 1.0),
ghost_element_hover: hover,
ghost_element_active: hsla(220.0 / 360., 11.8 / 100., 20.0 / 100., 1.0),
ghost_element_selected: hsla(224.0 / 360., 11.3 / 100., 26.1 / 100., 1.0),
ghost_element_disabled: SystemColors::default().transparent,
@ -202,10 +203,12 @@ pub(crate) fn zed_default_dark() -> Theme {
panel_indent_guide: hsla(228. / 360., 8. / 100., 25. / 100., 1.),
panel_indent_guide_hover: hsla(225. / 360., 13. / 100., 12. / 100., 1.),
panel_indent_guide_active: hsla(225. / 360., 13. / 100., 12. / 100., 1.),
panel_overlay_background: bg,
panel_overlay_hover: hover,
pane_focused_border: blue,
pane_group_border: hsla(225. / 360., 13. / 100., 12. / 100., 1.),
scrollbar_thumb_background: gpui::transparent_black(),
scrollbar_thumb_hover_background: hsla(225.0 / 360., 11.8 / 100., 26.7 / 100., 1.0),
scrollbar_thumb_hover_background: hover,
scrollbar_thumb_active_background: hsla(
225.0 / 360.,
11.8 / 100.,


@ -352,6 +352,12 @@ pub struct ThemeColorsContent {
#[serde(rename = "panel.indent_guide_active")]
pub panel_indent_guide_active: Option<String>,
#[serde(rename = "panel.overlay_background")]
pub panel_overlay_background: Option<String>,
#[serde(rename = "panel.overlay_hover")]
pub panel_overlay_hover: Option<String>,
#[serde(rename = "pane.focused_border")]
pub pane_focused_border: Option<String>,
@ -675,6 +681,14 @@ impl ThemeColorsContent {
.scrollbar_thumb_border
.as_ref()
.and_then(|color| try_parse_color(color).ok());
let element_hover = self
.element_hover
.as_ref()
.and_then(|color| try_parse_color(color).ok());
let panel_background = self
.panel_background
.as_ref()
.and_then(|color| try_parse_color(color).ok());
ThemeColorsRefinement {
border,
border_variant: self
@ -713,10 +727,7 @@ impl ThemeColorsContent {
.element_background
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
element_hover: self
.element_hover
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
element_hover,
element_active: self
.element_active
.as_ref()
@ -833,10 +844,7 @@ impl ThemeColorsContent {
.search_match_background
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
panel_background: self
.panel_background
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
panel_background,
panel_focused_border: self
.panel_focused_border
.as_ref()
@ -853,6 +861,16 @@ impl ThemeColorsContent {
.panel_indent_guide_active
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
panel_overlay_background: self
.panel_overlay_background
.as_ref()
.and_then(|color| try_parse_color(color).ok())
.or(panel_background),
panel_overlay_hover: self
.panel_overlay_hover
.as_ref()
.and_then(|color| try_parse_color(color).ok())
.or(element_hover),
pane_focused_border: self
.pane_focused_border
.as_ref()


@ -131,6 +131,12 @@ pub struct ThemeColors {
pub panel_indent_guide: Hsla,
pub panel_indent_guide_hover: Hsla,
pub panel_indent_guide_active: Hsla,
/// The color of the overlay surface on top of panel.
pub panel_overlay_background: Hsla,
/// The color of the overlay surface on top of panel when hovered over.
pub panel_overlay_hover: Hsla,
pub pane_focused_border: Hsla,
pub pane_group_border: Hsla,
/// The color of the scrollbar thumb.
@ -326,6 +332,8 @@ pub enum ThemeColorField {
PanelIndentGuide,
PanelIndentGuideHover,
PanelIndentGuideActive,
PanelOverlayBackground,
PanelOverlayHover,
PaneFocusedBorder,
PaneGroupBorder,
ScrollbarThumbBackground,
@ -438,6 +446,8 @@ impl ThemeColors {
ThemeColorField::PanelIndentGuide => self.panel_indent_guide,
ThemeColorField::PanelIndentGuideHover => self.panel_indent_guide_hover,
ThemeColorField::PanelIndentGuideActive => self.panel_indent_guide_active,
ThemeColorField::PanelOverlayBackground => self.panel_overlay_background,
ThemeColorField::PanelOverlayHover => self.panel_overlay_hover,
ThemeColorField::PaneFocusedBorder => self.pane_focused_border,
ThemeColorField::PaneGroupBorder => self.pane_group_border,
ThemeColorField::ScrollbarThumbBackground => self.scrollbar_thumb_background,


@ -40,6 +40,7 @@ rpc.workspace = true
schemars.workspace = true
serde.workspace = true
settings.workspace = true
settings_ui.workspace = true
smallvec.workspace = true
story = { workspace = true, optional = true }
telemetry.workspace = true


@ -30,6 +30,7 @@ use onboarding_banner::OnboardingBanner;
use project::Project;
use rpc::proto;
use settings::Settings as _;
use settings_ui::keybindings;
use std::sync::Arc;
use theme::ActiveTheme;
use title_bar_settings::TitleBarSettings;
@ -683,7 +684,7 @@ impl TitleBar {
)
.separator()
.action("Settings", zed_actions::OpenSettings.boxed_clone())
.action("Key Bindings", Box::new(zed_actions::OpenKeymap))
.action("Key Bindings", Box::new(keybindings::OpenKeymapEditor))
.action(
"Themes…",
zed_actions::theme_selector::Toggle::default().boxed_clone(),
@ -727,7 +728,7 @@ impl TitleBar {
.menu(|window, cx| {
ContextMenu::build(window, cx, |menu, _, _| {
menu.action("Settings", zed_actions::OpenSettings.boxed_clone())
.action("Key Bindings", Box::new(zed_actions::OpenKeymap))
.action("Key Bindings", Box::new(keybindings::OpenKeymapEditor))
.action(
"Themes…",
zed_actions::theme_selector::Toggle::default().boxed_clone(),


@ -972,12 +972,10 @@ impl ContextMenu {
.children(action.as_ref().and_then(|action| {
self.action_context
.as_ref()
.map(|focus| {
.and_then(|focus| {
KeyBinding::for_action_in(&**action, focus, window, cx)
})
.unwrap_or_else(|| {
KeyBinding::for_action(&**action, window, cx)
})
.or_else(|| KeyBinding::for_action(&**action, window, cx))
.map(|binding| {
div().ml_4().child(binding.disabled(*disabled)).when(
*disabled && documentation_aside.is_some(),


@ -943,6 +943,8 @@ mod element {
pub struct PaneAxisElement {
axis: Axis,
basis: usize,
/// Equivalent to ColumnWidths (but in terms of flexes instead of percentages)
/// For example, flexes "1.33, 1, 1", instead of "40%, 30%, 30%"
flexes: Arc<Mutex<Vec<f32>>>,
bounding_boxes: Arc<Mutex<Vec<Option<Bounds<Pixels>>>>>,
children: SmallVec<[AnyElement; 2]>,
@ -998,6 +1000,7 @@ mod element {
let mut flexes = flexes.lock();
debug_assert!(flex_values_in_bounds(flexes.as_slice()));
// Math to convert a flex value to a pixel value
let size = move |ix, flexes: &[f32]| {
container_size.along(axis) * (flexes[ix] / flexes.len() as f32)
};
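The flex-to-pixel conversion above can be checked standalone, assuming the invariant that the flex values sum to the pane count (which the debug assertion above appears to guard) and using hypothetical values: a pane's size along the axis is `container_size * (flex / pane_count)`.

```rust
// Standalone sketch of the `size` closure's math.
fn size(ix: usize, flexes: &[f32], container: f32) -> f32 {
    container * (flexes[ix] / flexes.len() as f32)
}

fn main() {
    // Three panes over a 1000px axis: flexes [1.2, 0.9, 0.9] (summing to 3)
    // correspond to 400px, 300px, and 300px.
    let flexes = [1.2_f32, 0.9, 0.9];
    assert!((size(0, &flexes, 1000.0) - 400.0).abs() < 0.01);
    assert!((size(1, &flexes, 1000.0) - 300.0).abs() < 0.01);
    assert!((size(2, &flexes, 1000.0) - 300.0).abs() < 0.01);
}
```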
@ -1007,9 +1010,13 @@ mod element {
return;
}
// This is basically a "bucket" of pixel changes that need to be applied in response to this
// mouse event. Probably a small, fractional number like 0.5 or 1.5 pixels
let mut proposed_current_pixel_change =
(e.position - child_start).along(axis) - size(ix, flexes.as_slice());
// This takes a pixel change and computes the corresponding flex changes for both
// the target pane and its neighbor, so that the total flex stays constant
let flex_changes = |pixel_dx, target_ix, next: isize, flexes: &[f32]| {
let flex_change = pixel_dx / container_size.along(axis);
let current_target_flex = flexes[target_ix] + flex_change;
@ -1017,6 +1024,9 @@ mod element {
(current_target_flex, next_target_flex)
};
// Generate the list of flex successors, from the current index.
// If you're dragging column 3 forward, out of 6 columns, then this code will produce [4, 5, 6]
// If you're dragging column 3 backward, out of 6 columns, then this code will produce [2, 1, 0]
let mut successors = iter::from_fn({
let forward = proposed_current_pixel_change > px(0.);
let mut ix_offset = 0;
@ -1034,6 +1044,7 @@ mod element {
}
});
// Now actually loop over these, and empty our bucket of pixel changes
while proposed_current_pixel_change.abs() > px(0.) {
let Some(current_ix) = successors.next() else {
break;


@ -73,7 +73,7 @@ impl Workspace {
if let Some(terminal_provider) = self.terminal_provider.as_ref() {
let task_status = terminal_provider.spawn(spawn_in_terminal, window, cx);
cx.background_spawn(async move {
let task = cx.background_spawn(async move {
match task_status.await {
Some(Ok(status)) => {
if status.success() {
@ -82,11 +82,11 @@ impl Workspace {
log::debug!("Task spawn failed, code: {:?}", status.code());
}
}
Some(Err(e)) => log::error!("Task spawn failed: {e}"),
Some(Err(e)) => log::error!("Task spawn failed: {e:#}"),
None => log::debug!("Task spawn got cancelled"),
}
})
.detach();
});
self.scheduled_tasks.push(task);
}
}


@ -1088,6 +1088,7 @@ pub struct Workspace {
serialized_ssh_project: Option<SerializedSshProject>,
_items_serializer: Task<Result<()>>,
session_id: Option<String>,
scheduled_tasks: Vec<Task<()>>,
}
impl EventEmitter<Event> for Workspace {}
@ -1420,6 +1421,7 @@ impl Workspace {
_items_serializer,
session_id: Some(session_id),
serialized_ssh_project: None,
scheduled_tasks: Vec::new(),
}
}


@ -2,7 +2,7 @@
description = "The fast, collaborative code editor."
edition.workspace = true
name = "zed"
version = "0.196.0"
version = "0.196.7"
publish.workspace = true
license = "GPL-3.0-or-later"
authors = ["Zed Team <hi@zed.dev>"]


@ -1 +1 @@
dev
stable


@ -1,5 +1,6 @@
use collab_ui::collab_panel;
use gpui::{Menu, MenuItem, OsAction};
use settings_ui::keybindings;
use terminal_view::terminal_panel;
pub fn app_menus() -> Vec<Menu> {
@ -16,7 +17,7 @@ pub fn app_menus() -> Vec<Menu> {
name: "Settings".into(),
items: vec![
MenuItem::action("Open Settings", super::OpenSettings),
MenuItem::action("Open Key Bindings", zed_actions::OpenKeymap),
MenuItem::action("Open Key Bindings", keybindings::OpenKeymapEditor),
MenuItem::action("Open Default Settings", super::OpenDefaultSettings),
MenuItem::action(
"Open Default Key Bindings",


@ -148,7 +148,7 @@ On some systems the file `/etc/prime-discrete` can be used to enforce the use of
On others, you may be able to set the environment variable `DRI_PRIME=1` when running Zed to force the use of the discrete GPU.
If you're using an AMD GPU and Zed crashes when selecting long lines, try setting the `ZED_SAMPLE_COUNT=0` environment variable. (See [#26143](https://github.com/zed-industries/zed/issues/26143))
If you're using an AMD GPU and Zed crashes when selecting long lines, try setting the `ZED_PATH_SAMPLE_COUNT=0` environment variable. (See [#26143](https://github.com/zed-industries/zed/issues/26143))
If you're using an AMD GPU, you might get a 'Broken Pipe' error. Try using the RADV or Mesa drivers. (See [#13880](https://github.com/zed-industries/zed/issues/13880))