Compare commits


51 commits

Author SHA1 Message Date
gcp-cherry-pick-bot[bot]
53bc5714d6
Fix racy leaked extension server adapters handling (cherry-pick #35319) (#35321)
Cherry-picked Kb/wasm panics (#35319)

Follow-up of https://github.com/zed-industries/zed/pull/34208
Closes https://github.com/zed-industries/zed/issues/35185

Previous code assumed that extensions' language server wrappers may leak
only in static data (e.g. fields that were not cleared on deinit), but
we seem to have a race that breaks this assumption.

1. We do clear the `all_lsp_adapters` field after
https://github.com/zed-industries/zed/pull/34334, and that cleanup runs for
every extension that is unregistered.
2. The `LspStore::maintain_workspace_config` ->
`LspStore::refresh_workspace_configurations` chain is triggered
independently, apparently on the `ToolchainStoreEvent::ToolchainActivated`
event, which means Python code may get executed behind the scenes to
activate the toolchain, making the start timing of
`refresh_workspace_configurations` unpredictable.
3. Toolchain activation seems to overlap with the plugin reload, as
`2025-07-28T12:16:19+03:00 INFO [extension_host] extensions updated.
loading 0, reloading 1, unloading 0` in the issue logs suggests.

The plugin reload seems to happen faster than the workspace configuration
refresh in c65da547c9/crates/project/src/lsp_store.rs (L7426-L7456), as the
language servers are just starting and take extra time to respond to the
notification.

At least one of the `.clone()`d `adapter`s there is the adapter that got
removed during plugin reload and has its channel closed, which causes a
panic later.

----------------------------

A proper fix would be to re-architect the workspace refresh approach, along
with the other accesses to the language server collections.
One way could be to use `Weak`-based structures instead, since the extension
server data really belongs to the extension, not the `LspStore`.
That is quite a large undertaking near the extension core, though, so it is
not done yet.

For now, to stop the excessive panics, we no longer call `.expect` on the
channel result, since the channel can indeed be closed at any moment.
This will result in more errors (and, presumably, backtraces) printed in
the logs, but no panics.

More logging and comments are added, and the workspace configuration
querying is now concurrent: there is no need to wait until one server has
processed the notification before sending the same notification to the next one.
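
A rough, self-contained sketch of that shape (the `ServerHandle` type and its `notify_did_change_configuration` method are hypothetical stand-ins, not the actual `LspStore` code): notify every server concurrently and log a closed channel instead of `.expect`-ing on it.

```rust
use futures::{channel::mpsc, future::join_all};

struct ServerHandle {
    name: String,
    // Channel to the language server task; it may already be closed if the
    // owning extension was reloaded in the meantime.
    tx: mpsc::UnboundedSender<serde_json::Value>,
}

impl ServerHandle {
    fn notify_did_change_configuration(
        &self,
        settings: serde_json::Value,
    ) -> Result<(), mpsc::TrySendError<serde_json::Value>> {
        // Returns an error instead of panicking when the channel is closed.
        self.tx.unbounded_send(settings)
    }
}

async fn refresh_workspace_configurations(servers: &[ServerHandle], settings: serde_json::Value) {
    // Send the notification to every server at once; don't wait for one server
    // to process it before sending the same notification to the next one.
    join_all(servers.iter().map(|server| {
        let settings = settings.clone();
        async move {
            if let Err(error) = server.notify_did_change_configuration(settings) {
                // Closed channels are expected during extension reloads now,
                // so log the error rather than `.expect(..)`-ing on it.
                log::error!("{}: failed to send workspace configuration: {error}", server.name);
            }
        }
    }))
    .await;
}
```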

Release Notes:

- Fixed a wasm-related panic happening during startup

Co-authored-by: Kirill Bulatov <kirill@zed.dev>
2025-07-30 12:56:03 +03:00
Joseph T. Lyons
e1ae2a5334 zed 0.196.7 2025-07-29 14:38:01 -04:00
Kirill Bulatov
ff9da9d2d9 Add more data to see which extension got leaked (#35272)
Part of https://github.com/zed-industries/zed/issues/35185

Release Notes:

- N/A
2025-07-29 14:32:46 -04:00
gcp-cherry-pick-bot[bot]
271055a9bf
client: Send User-Agent header on WebSocket connection requests (cherry-pick #35280) (#35284)
Cherry-picked client: Send `User-Agent` header on WebSocket connection
requests (#35280)

This PR makes it so we send the `User-Agent` header on the WebSocket
connection requests when connecting to Collab.

We use the user agent set on the parent HTTP client.

Release Notes:

- N/A

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-07-29 13:26:02 -04:00
gcp-cherry-pick-bot[bot]
a9d8558b63
Cache LSP code lens requests (cherry-pick #35207) (#35258) 2025-07-29 10:21:51 +03:00
gcp-cherry-pick-bot[bot]
6589ce9b9b
Fix tasks leaked despite workspace window close (cherry-pick #35246) (#35251) 2025-07-29 10:21:25 +03:00
gcp-cherry-pick-bot[bot]
43fc3bdaa0
keymap_ui: Fix bug introduced in #35208 (cherry-pick #35237) (#35239)
Cherry-picked keymap_ui: Fix bug introduced in #35208 (#35237)

Closes #ISSUE

Fixes a bug, introduced in #35208 and cherry-picked onto the stable and
preview branches, whereby modifier keys would show up and not be removable
when editing a keybind.

Release Notes:

- (preview only) Keymap Editor: Fixed an issue introduced in v0.197.2
whereby modifier keys would show up and not be removable while recording
keystrokes in the keybind edit modal

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-28 18:02:05 -04:00
gcp-cherry-pick-bot[bot]
b4d6629de2
keymap_ui: Additional keystroke input polish (cherry-pick #35208) (#35219)
Cherry-picked keymap_ui: Additional keystroke input polish (#35208)

Closes #ISSUE

Fixed various issues and improved UX around the keystroke input
primarily when used for keystroke search.

Release Notes:

- Keymap Editor: Fixed an issue where the modifiers used to activate
keystroke search would appear in the keystroke search
- Keymap Editor: Made it possible to search for repeat modifiers, such
as a binding with `cmd-shift cmd`
- Keymap Editor: Made keystroke search matches match based on ordered
(not necessarily contiguous) runs. For example, searching for `cmd
shift-j` will match `cmd-k cmd-shift-j alt-q` and `cmd-i g shift-j` but
not `alt-k shift-j` or `cmd-k alt-j`
- Keymap Editor: Fixed the clear keystrokes binding (`delete` by
default) not working in the keystroke input

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-28 16:11:28 -04:00
Joseph T. Lyons
f80bd3a66b zed 0.196.6 2025-07-24 11:54:19 -04:00
Richard Feldman
6f04cb7296 Don't auto-retry in certain circumstances (#35037)
Someone encountered this in production, which should not happen:

<img width="1266" height="623" alt="Screenshot 2025-07-24 at 10 38
40 AM"
src="https://github.com/user-attachments/assets/40f3f977-5110-4808-a456-7e708d953b3b"
/>

This moves certain errors into the category of "never retry" and reduces
the number of retries for some others. It also adds some diagnostic
logging for the retry policy.

It's not a complete fix for the above, because the underlying issue is
that the server is sending an HTTP 403 response, and although we were
already treating 403s as "do not retry", it was deciding to retry with 2
attempts anyway. So further debugging is needed to figure out why it
wasn't going down the 403 branch by the time the request got here.

Release Notes:

- N/A
2025-07-24 11:52:10 -04:00
Richard Feldman
92e7d84710 Auto-retry agent errors by default (#34842)
Now we explicitly carve out exceptions for which HTTP responses we do
*not* retry for, and retry at least once on all others.
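
As a simplified illustration of that carve-out idea (the real logic is `Thread::get_retry_strategy`, visible in the diff further down; the status codes, delays, and attempt counts here are only illustrative):

```rust
use std::time::Duration;

enum RetryStrategy {
    Fixed { delay: Duration, max_attempts: u8 },
}

fn retry_strategy_for(status: http::StatusCode) -> Option<RetryStrategy> {
    use http::StatusCode as S;
    match status {
        // Carved-out exceptions: retrying cannot fix these.
        S::UNAUTHORIZED | S::FORBIDDEN | S::PAYLOAD_TOO_LARGE => None,
        // Time-based issues: back off and retry several times.
        S::TOO_MANY_REQUESTS | S::SERVICE_UNAVAILABLE => Some(RetryStrategy::Fixed {
            delay: Duration::from_secs(5),
            max_attempts: 3,
        }),
        // Everything else might be transient, so retry at least once.
        _ => Some(RetryStrategy::Fixed {
            delay: Duration::from_secs(5),
            max_attempts: 1,
        }),
    }
}
```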

Release Notes:

- The Agent panel now automatically retries failed requests under more
circumstances.
2025-07-24 11:51:49 -04:00
Oleksiy Syvokon
1c0bc89664
linux: Fix ctrl-0..9, ctrl-[, ctrl-^ (#35028)
There were two different underlying reasons for the issues with
ctrl-number and ctrl-punctuation:

1. Some keys in the ctrl-0..9 range send codes in the `\x1b`..`\x1f`
range. For example, `ctrl-2` sends the keycode for `ctrl-[` (0x1b), but we
want to map it to `2`, not to `[`.

2. `ctrl-[` and four other ctrl-punctuation combinations were incorrectly
mapped, since the expected conversion is to add 0x40 (see the sketch below).

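A small sketch of the mapping described above (illustrative only, not the exact GPUI keyboard code; `key_for_ctrl_char` and its parameters are made up for the example):

```rust
// Recover the key name from a ctrl-modified character code.
fn key_for_ctrl_char(ctrl_char: u8, pressed_digit: Option<char>) -> Option<char> {
    match ctrl_char {
        0x1b..=0x1f => {
            if let Some(digit) = pressed_digit {
                // e.g. ctrl-2 reports 0x1b (the code for ctrl-[), but the user
                // pressed "2", so keep the digit for keybinding purposes.
                Some(digit)
            } else {
                // ctrl-[, ctrl-\, ctrl-], ctrl-^, ctrl-_: add 0x40 to recover
                // the printable character (0x1b + 0x40 = 0x5b = '[').
                Some((ctrl_char + 0x40) as char)
            }
        }
        _ => None,
    }
}
```
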
Closes #35012

Release Notes:

- N/A
2025-07-24 10:11:04 -04:00
gcp-cherry-pick-bot[bot]
3720b6f908
agent: Fix double-lease panic when clicking on thread to jump (cherry-pick #34843) (#34874)
Cherry-picked agent: Fix double-lease panic when clicking on thread to
jump (#34843)

Release Notes:

- N/A

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-07-24 00:30:07 +02:00
Joseph T. Lyons
cc82f1eacd v0.196.x stable 2025-07-23 13:48:22 -04:00
Anthony Eid
acd9ab460c keymap ui: Improve resize columns on double click (#34961)
This PR splits the resize logic into separate left/right propagation
methods and improves code organization around column width adjustments.
It also allows resizing to work on both the left and right sides,
instead of only checking the right side for room.

Release Notes:

- N/A *or* Added/Fixed/Improved ...

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-23 13:46:22 -04:00
gcp-cherry-pick-bot[bot]
6badbf0369
keymap ui: Resizable column follow up (cherry-pick #34955) (#34956)
Cherry-picked keymap ui: Resizable column follow up (#34955)

I cherry picked a small fix that didn't get into the original column
resizable branch PR because I turned on auto merge.

Release Notes:

- N/A

Co-authored-by: Anthony Eid <56899983+Anthony-Eid@users.noreply.github.com>
2025-07-23 18:44:48 +02:00
Mikayla Maki
5ed98bfd9d gpui: Add use state APIs (#34741)
This PR adds a component level state API to GPUI, as well as a few
utilities for simplified interactions with entities

Release Notes:

- N/A
2025-07-23 12:16:25 -04:00
Finn Evers
12d6ddef16 keymap_ui: Dim keybinds that are overridden by other keybinds (#34952)
This change dims rows in the keymap editor for which the corresponding
keybind is overridden by other keybinds coming from higher priority
sources.

Release Notes:

- N/A
2025-07-23 12:08:24 -04:00
Mikayla Maki
b7fb970929 Resizable columns (#34794)
This PR adds resizable columns to the keymap editor and the ability to
double-click on a resizable column to set a column back to its default
size.

The table uses a column's width to calculate what position it should be
laid out at, so `column[i]`'s x position is the sum of the widths of
`column[..i]`. When resizing `column[i]`, `column[i+1]`'s size is
adjusted to keep all columns' relative positions the same. If
`column[i+1]` is at its minimum size, we keep seeking to the right to
find a column with space left to take.
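
A small sketch of that layout rule (illustrative only, not the keymap editor's actual code; `MIN_WIDTH`, `column_x`, and `grow_column` are made-up names): column x positions come from the widths of the preceding columns, and growing `column[i]` takes space from the first column to its right that still sits above its minimum width.

```rust
const MIN_WIDTH: f32 = 40.0; // illustrative minimum column width

fn column_x(widths: &[f32], i: usize) -> f32 {
    // column[i]'s x position is the sum of the widths of column[..i]
    widths[..i].iter().sum()
}

fn grow_column(widths: &mut [f32], i: usize, delta: f32) {
    // Only the growing direction is shown; shrinking works symmetrically.
    let mut remaining = delta;
    // Seek to the right for columns that can give up space.
    for j in (i + 1)..widths.len() {
        let taken = remaining.min(widths[j] - MIN_WIDTH).max(0.0);
        widths[j] -= taken;
        remaining -= taken;
        if remaining <= 0.0 {
            break;
        }
    }
    // Grow column[i] by whatever the columns to its right could give up, so
    // every other column keeps its relative position.
    widths[i] += delta - remaining;
}
```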

An improvement to resizing behavior and double-clicking could be made by
checking both column ranges `0..i-1` and `i+1..COLS`, since only one
range of columns is checked for resize capacity.

Release Notes:

- N/A

---------

Co-authored-by: Anthony <anthony@zed.dev>
Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-07-23 12:08:20 -04:00
gcp-cherry-pick-bot[bot]
780db4ce76
Fix redo after noop format (cherry-pick #34898) (#34903)
Cherry-picked Fix redo after noop format (#34898)

Closes #31917

Previously, as of #28457, we used a hack: we created an empty transaction
in the history and then merged formatting changes into it in order to
correctly identify concurrent edits to the buffer while formatting was
happening. However, this caused issues with noop formatting, because using
the normal API of the buffer history (in an admittedly odd way) resulted in
the redo stack being cleared regardless of whether the formatting
transaction included edits, even though clearing it is the correct behavior
in all other contexts.

This PR fixes the redo issue by codifying the behavior formatting wants:
the ability to push an empty transaction to the history, with no other
side effects (i.e. without clearing the redo stack), in order to detect
concurrent edits. The tradeoff is that the caller must then manually
remove that transaction later if no changes occurred from the formatting.
The redo stack is still cleared when there are formatting edits, as the
individual format steps use the normal `{start,end}_transaction` methods,
which clear the redo stack if the finished transaction isn't empty.
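
A self-contained sketch of that tradeoff (hypothetical `History`/`Transaction` types, not Zed's real buffer-history API): the empty transaction is pushed without touching the redo stack, and it is dropped again if formatting made no edits.

```rust
#[derive(Default)]
struct Transaction {
    edits: Vec<String>,
}

#[derive(Default)]
struct History {
    undo_stack: Vec<Transaction>,
    redo_stack: Vec<Transaction>,
}

impl History {
    // The new primitive: push an empty transaction but deliberately leave the
    // redo stack alone, unlike a normal start/end transaction pair.
    fn push_empty_transaction(&mut self) -> usize {
        self.undo_stack.push(Transaction::default());
        self.undo_stack.len() - 1
    }

    // Formatting edits still go through the normal path, which clears redo.
    fn record_edit(&mut self, index: usize, edit: String) {
        self.undo_stack[index].edits.push(edit);
        self.redo_stack.clear();
    }

    // Called when the formatter made no changes: drop the placeholder so the
    // history (and the redo stack) look exactly as they did before formatting.
    fn forget_transaction(&mut self, index: usize) {
        if self.undo_stack[index].edits.is_empty() {
            self.undo_stack.remove(index);
        }
    }
}
```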

Release Notes:

- Fixed an issue where redo would not work after buffer formatting
(including formatting on save) when the formatting did not result in any
changes

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-22 13:56:45 -04:00
gcp-cherry-pick-bot[bot]
dfcf9a2b16
keymap_ui: Fix panic in clear keystrokes (cherry-pick #34909) (#34913)
Cherry-picked keymap_ui: Fix panic in clear keystrokes (#34909)

Closes #ISSUE

Release Notes:

- N/A *or* Added/Fixed/Improved ...

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-22 13:56:37 -04:00
gcp-cherry-pick-bot[bot]
9eeb7a325e
theme: Add panel.overlay_background and panel.overlay_hover (cherry-pick #34655) (#34878)
Cherry-picked theme: Add `panel.overlay_background` and
`panel.overlay_hover` (#34655)

In https://github.com/zed-industries/zed/pull/33994 sticky scroll was
added to project_panel.

I love this feature! 

This introduces a new kind of element layering not seen before. On themes
that use transparency, the overlapping elements can make it difficult to
read project panel entries. This PR introduces a new selector:
~~`panel.sticky_entry.background`~~ `panel.overlay_background`. This
selector lets you set the background of entries when they become sticky.

Closes https://github.com/zed-industries/zed/issues/34654

(Before and after screenshots are attached in the original PR.)

Release Notes:

- Add `panel.overlay_background` theme selector for styling project panel
entries when they become sticky while scrolling and overlap the entries
below them.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>

Co-authored-by: Bret Comnes <166301+bcomnes@users.noreply.github.com>
Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-07-22 20:55:09 +05:30
gcp-cherry-pick-bot[bot]
dc4fc962f0
Fix an issue where xkb defined hotkeys for arrows would not work (cherry-pick #34823) (#34858)
Cherry-picked Fix an issue where xkb defined hotkeys for arrows would
not work (#34823)

Addresses
https://github.com/zed-industries/zed/pull/34053#issuecomment-3096447601
where custom-defined arrows would stop working in Zed.

How to reproduce:

1. Define custom keyboard layout

```bash
cd /usr/share/X11/xkb/symbols/
sudo nano mykbd
```

```
default partial alphanumeric_keys
xkb_symbols "custom" {

    name[Group1]= "Custom Layout";

    key <AD01> { [ q,  Q,  Escape,     Escape      ] };
    key <AD02> { [ w,  W,  Home,       Home        ] };
    key <AD03> { [ e,  E,  Up,         Up          ] };
    key <AD04> { [ r,  R,  End,        End         ] };
    key <AD05> { [ t,  T,  Tab,        Tab         ] };

    key <AC01> { [ a,  A,  Return,     Return      ] };
    key <AC02> { [ s,  S,  Left,       Left        ] };
    key <AC03> { [ d,  D,  Down,       Down        ] };
    key <AC04> { [ f,  F,  Right,      Right       ] };
    key <AC05> { [ g,  G,  BackSpace,  BackSpace   ] };

    // include a base layout to inherit the rest
    include "us(basic)"
};
```

2. Activate custom layout with win-key as AltGr

```bash
setxkbmap mykbd -variant custom -option lv3:win_switch
```

3. Now Win-S should produce left arrow, Win-F right arrow
4. Test whether it works in Zed

Release Notes:

 - linux: xkb-defined hotkeys for arrow keys should behave as expected.

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>

Co-authored-by: Sergei Surovtsev <97428129+stillonearth@users.noreply.github.com>
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-07-21 19:19:17 -06:00
Peter Tripp
fab9da0b93
zed 0.196.5 2025-07-21 10:04:09 -04:00
Oleksandr Mykhailenko
44946231aa
agent: Fix Mistral tool use error message (#34692)
Closes #32675

Exactly the same changes as in #33640 by @sviande

The PR has been in WIP state for 3 weeks with no activity, and the issue
basically makes Mistral models unusable. I have tested the changes
locally, and it does indeed work. Full credit goes to @sviande, I just
want this feature to be finished.

Release Notes:

- agent: Fixed an issue with tool calling with the Mistral provider
(thanks [@sviande](https://github.com/sviande) and
[@armyhaylenko](https://github.com/armyhaylenko))

Co-authored-by: sviande <sviande@gmail.com>
2025-07-21 09:22:18 -04:00
gcp-cherry-pick-bot[bot]
8da6604165
keymap_ui: Auto complete action arguments (cherry-pick #34785) (#34790)
Cherry-picked keymap_ui: Auto complete action arguments (#34785)

Supersedes: #34242

Creates an `ActionArgumentsEditor` that implements the required logic to
have a JSON language server run when editing keybinds so that there is
auto-complete for action arguments.

This is the first time action argument schemas are required by
themselves rather than inlined in the keymap schema. Rather than add all
action schemas to the configuration options we send to the JSON LSP on
startup, this PR implements support for the LSP extension used by
`vscode-json-language-server`, whereby the server asks the client (Zed) to
resolve URLs with URI schemes it does not recognize, in our case `zed://`.
This limits the impact on the size of the configuration options to ~1KB,
as we send URLs for the language server to resolve on demand rather than
the schemas themselves. My understanding is that this is how VS Code
handles JSON schemas as well. I plan to investigate converting the rest of
our schema generation logic to this method in a follow-up PR.
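
A rough sketch of that resolution flow (the `zed://schemas/action/...` URI layout and the handler below are made up for illustration, not the actual Zed code): the JSON language server asks the client for the content of any `zed://` URL it cannot fetch itself, and the client generates the schema on demand.

```rust
// Answer a schema-content request from the JSON language server.
fn handle_schema_content_request(uri: &str) -> Result<String, String> {
    // Only the zed:// scheme is served by the editor; the URI layout here is
    // hypothetical.
    let action_name = uri
        .strip_prefix("zed://schemas/action/")
        .ok_or_else(|| format!("unsupported schema URI: {uri}"))?;

    // Generate the schema for this action's arguments only when the server
    // actually asks for it, instead of inlining every schema at startup.
    generate_action_arguments_schema(action_name)
        .map(|schema| schema.to_string())
        .ok_or_else(|| format!("no schema for action: {action_name}"))
}

// Stand-in for the real schema generation.
fn generate_action_arguments_schema(action_name: &str) -> Option<serde_json::Value> {
    Some(serde_json::json!({ "title": action_name, "type": "object" }))
}
```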

Co-Authored-By: Cole <cole@zed.dev>

Release Notes:

- N/A *or* Added/Fixed/Improved ...

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-20 17:07:39 -04:00
Vitaly Slobodin
1c95a2ccee
Fix Tailwind support for HTML/ERB files (#34743)
Closes #27118
Closes #34165

Fix a small issue after we landed
https://github.com/zed-extensions/ruby/pull/113, where we introduced the
`HTML/ERB` and `YAML/ERB` language IDs to improve the user experience. Sorry
about that. Thanks!

Release Notes:

- N/A
2025-07-19 11:08:41 -04:00
gcp-cherry-pick-bot[bot]
234a4f86ba
keymap ui: Fix remove key mapping bug (cherry-pick #34683) (#34730)
Cherry-picked keymap ui: Fix remove key mapping bug (#34683)

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>

Co-authored-by: Anthony Eid <56899983+Anthony-Eid@users.noreply.github.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-18 15:52:36 -04:00
Peter Tripp
3f305fa805
ci: Skip generating Windows release artifacts (#34704)
Release Notes:

- N/A
2025-07-18 14:32:01 -04:00
Finn Evers
b83965285d editor: Ensure topmost buffer header can be properly folded (#34721)
This PR fixes an issue where the topmost header in a multibuffer would
jump when the corresponding buffer was folded.
The issue arose because for the topmost header, the offset within the
scroll anchor is negative, as the corresponding buffer only starts below
the header itself and thus the offset for the scroll position has to be
negative.
However, upon collapsing that buffer, we end up with a negative vertical
scroll position, which causes all kinds of different problems. The issue
has been present for a long time, but became more visible after
https://github.com/zed-industries/zed/pull/34295 landed, as that change
removed the case distinction for buffers scrolled all the way to the
top.

This PR fixes this by clamping just the vertical scroll position when it is
returned, which ensures the negative offset works as expected when the
buffer is expanded while the vertical scroll position does not turn
negative once the buffer is folded.
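
A minimal sketch of that clamp (illustrative names, not the editor's actual scroll code):

```rust
// The anchor's offset may legitimately be negative for the topmost header,
// but the vertical scroll position handed back to the editor must not go
// below zero once the buffer is folded.
fn vertical_scroll_position(anchor_row: f32, anchor_offset_y: f32) -> f32 {
    (anchor_row + anchor_offset_y).max(0.0)
}
```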

Release Notes:

- Fixed an issue where folding the topmost buffer in a multibuffer would
cause the header to jump slightly.
2025-07-18 13:38:32 -04:00
Danilo Leal
0acd108e7f keymap_ui: Add some design refinements (#34673)
Mostly small stuff over here.

Release Notes:

- N/A
2025-07-18 13:04:46 -04:00
Joseph T. Lyons
5deb404135 zed 0.196.4 2025-07-18 12:40:43 -04:00
Joseph T. Lyons
619282a8ed Revert "gpui: Improve path rendering & global multisample anti-aliasing" (#34722)
Reverts zed-industries/zed#29718

We've noticed some issues with Zed on Intel-based Macs where typing has
become sluggish, and git bisect seemed to point towards this PR.
Reverting for now, until we can understand why it is causing this issue.
2025-07-18 12:36:45 -04:00
gcp-cherry-pick-bot[bot]
3f32020785
keymap_ui: Don't panic on KeybindSource::from_meta (cherry-pick #34652) (#34677)
Cherry-picked keymap_ui: Don't panic on `KeybindSource::from_meta`
(#34652)

Closes #ISSUE

Log error instead of panicking when `from_meta` is passed an invalid
value

Release Notes:

- N/A *or* Added/Fixed/Improved ...

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-18 12:45:44 +03:00
gcp-cherry-pick-bot[bot]
ce0de10147
keymap_ui: Fix various keymap editor issues (cherry-pick #34647) (#34670)
Cherry-picked keymap_ui: Fix various keymap editor issues (#34647)

This PR tackles miscellaneous nits for the new keymap editor UI.

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>

Co-authored-by: Finn Evers <finn@zed.dev>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-17 20:03:03 -04:00
Richard Feldman
c9b9b3194e
zed 0.196.3 2025-07-17 19:25:14 -04:00
Richard Feldman
eeb9e242b4
Retry on burn mode (#34669)
Now we only auto-retry if burn mode is enabled. We also show a "Retry"
button (so you don't have to type "continue") if you think that's the
right remedy, and additionally we show a "Retry and Enable Burn Mode"
button if you don't have it enabled.

<img width="484" height="260" alt="Screenshot 2025-07-17 at 6 25 27 PM"
src="https://github.com/user-attachments/assets/dc5bf1f6-8b11-4041-87aa-4f37c95ea9f0"
/>

<img width="478" height="307" alt="Screenshot 2025-07-17 at 6 22 36 PM"
src="https://github.com/user-attachments/assets/1ed6578a-1696-449d-96d1-e447d11959fa"
/>


Release Notes:

- Only auto-retry Agent requests when Burn Mode is enabled
2025-07-17 19:23:38 -04:00
Richard Feldman
f9c498318d
Improve upstream error reporting (#34668)
Now we handle more upstream error cases using the same auto-retry logic.

Release Notes:

- N/A
2025-07-17 19:23:34 -04:00
gcp-cherry-pick-bot[bot]
cb40bb755e
keymap ui: Fix keymap editor search bugs (cherry-pick #34579) (#34588)
Cherry-picked keymap ui: Fix keymap editor search bugs (#34579)

Keystroke input now gets cleared when toggling to normal search mode.
The main search bar is focused when toggling to normal search mode.

This also gets rid of the highlight-on-focus in the keystroke editor, because
it also matched the search bool field and was redundant.

Release Notes:

- N/A

Co-authored-by: Anthony Eid <56899983+Anthony-Eid@users.noreply.github.com>
2025-07-17 18:16:33 -04:00
Umesh Yadav
991887a3ea keymap_ui: Open Keymap editor from settings dropdown (#34576)
@probably-neb I guess we should be opening the keymap editor from title
bar and menu as well. I believe this got missed in this: #34568.

Release Notes:

- Open the Keymap editor from the settings dropdown in the menu and the title bar.
2025-07-17 13:37:58 -04:00
Anthony Eid
f249ee481d keymap ui: Fix keymap editor search bugs (#34579)
Keystroke input now gets cleared when toggling to normal search mode.
The main search bar is focused when toggling to normal search mode.

This also gets rid of the highlight-on-focus in the keystroke editor, because
it also matched the search bool field and was redundant.

Release Notes:

- N/A
2025-07-17 13:37:43 -04:00
gcp-cherry-pick-bot[bot]
484e39dcba
keymap_ui: Show edit icon on hovered and selected row (cherry-pick #34630) (#34635)
Cherry-picked keymap_ui: Show edit icon on hovered and selected row
(#34630)

Closes #ISSUE

Improves the behavior of the edit icon in the far left column of the
keymap UI table. It is now shown in both the selected and the hovered
row as an indicator that the row is editable in this configuration. When
hovered, a row can be double-clicked or its edit icon clicked, and
when selected it can be edited via keyboard shortcuts. Additionally, the
edit icon and all other hover tooltips will now disappear when the table
is navigated via keyboard shortcuts.

(A short video demo is attached in the original PR.)

Release Notes:

- N/A *or* Added/Fixed/Improved ...

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-17 12:21:49 -04:00
gcp-cherry-pick-bot[bot]
ec7d6631a4
Add keymap editor UI telemetry events (cherry-pick #34571) (#34589)
Cherry-picked Add keymap editor UI telemetry events (#34571)

- Search queries
- Keybinding updated or removed
- Copy action name
- Copy context name

cc @katie-z-geer 

Release Notes:

- N/A

Co-authored-by: Ben Kunkle <ben@zed.dev>

Co-authored-by: Anthony Eid <56899983+Anthony-Eid@users.noreply.github.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-07-17 11:58:31 -04:00
gcp-cherry-pick-bot[bot]
27691613c1
keymap_ui: Improve keybind display in menus (cherry-pick #34587) (#34632)
Cherry-picked keymap_ui: Improve keybind display in menus (#34587)

Closes #ISSUE

Defines keybindings for `keymap_editor::EditBinding` and
`keymap_editor::CreateBinding`, making sure those actions are used in
tooltips.

Release Notes:

- N/A *or* Added/Fixed/Improved ...

---------

Co-authored-by: Finn <dev@bahn.sh>

Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Finn <dev@bahn.sh>
2025-07-17 11:34:44 -04:00
Zed Bot
5f11e09a4b Bump to 0.196.2 for @osyvokon 2025-07-17 12:14:36 +00:00
gcp-cherry-pick-bot[bot]
34e63f9e55
agent: Disable project_notifications by default (cherry-pick #34615) (#34619)
Cherry-picked agent: Disable `project_notifications` by default (#34615)

This tool needs more polishing before being generally available.

Release Notes:

- agent: Disabled `project_notifications` tool by default for the time
being

Co-authored-by: Oleksiy Syvokon <oleksiy@zed.dev>
2025-07-17 15:09:12 +03:00
gcp-cherry-pick-bot[bot]
cbdca4e090
Fix shortcuts with Shift (cherry-pick #34614) (#34616)
Cherry-picked Fix shortcuts with `Shift` (#34614)

Closes #34605, #34606, #34609

Release Notes:

- (Preview only) Fixed shortcuts involving Shift

Co-authored-by: Oleksiy Syvokon <oleksiy@zed.dev>
2025-07-17 14:31:57 +03:00
Conrad Irwin
92105e92c3 Fix ctrl-q on AZERTY on Linux (#34597)
Closes #ISSUE

Release Notes:

- N/A
2025-07-16 21:28:51 -06:00
Zed Bot
632f09efd6 Bump to 0.196.1 for @ConradIrwin 2025-07-17 02:18:34 +00:00
gcp-cherry-pick-bot[bot]
192e0e32dd
Don't override ascii graphical shortcuts (cherry-pick #34592) (#34595)
Cherry-picked Don't override ascii graphical shortcuts (#34592)

Closes #34536

Release Notes:

- (preview only) Fix shortcuts on Extended Latin keyboards on Linux

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-07-16 20:16:34 -06:00
Joseph T. Lyons
30cc8bd824 v0.196.x preview 2025-07-16 14:30:48 -04:00
73 changed files with 4413 additions and 1542 deletions


@@ -748,7 +748,7 @@ jobs:
 timeout-minutes: 120
 name: Create a Windows installer
 runs-on: [self-hosted, Windows, X64]
-if: ${{ startsWith(github.ref, 'refs/tags/v') || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
+if: false && (startsWith(github.ref, 'refs/tags/v') || contains(github.event.pull_request.labels.*.name, 'run-bundling'))
 needs: [windows_tests]
 env:
 AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
@@ -787,7 +787,7 @@ jobs:
 - name: Upload Artifacts to release
 uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
 # Re-enable when we are ready to publish windows preview releases
-if: false && ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) && env.RELEASE_CHANNEL == 'preview' }} # upload only preview
+if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) && env.RELEASE_CHANNEL == 'preview' }} # upload only preview
 with:
 draft: true
 prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}

Cargo.lock (generated)

@@ -2148,7 +2148,7 @@ dependencies = [
 [[package]]
 name = "blade-graphics"
 version = "0.6.0"
-source = "git+https://github.com/kvark/blade?rev=416375211bb0b5826b3584dccdb6a43369e499ad#416375211bb0b5826b3584dccdb6a43369e499ad"
+source = "git+https://github.com/kvark/blade?rev=e0ec4e720957edd51b945b64dd85605ea54bcfe5#e0ec4e720957edd51b945b64dd85605ea54bcfe5"
 dependencies = [
 "ash",
 "ash-window",
@@ -2181,7 +2181,7 @@ dependencies = [
 [[package]]
 name = "blade-macros"
 version = "0.3.0"
-source = "git+https://github.com/kvark/blade?rev=416375211bb0b5826b3584dccdb6a43369e499ad#416375211bb0b5826b3584dccdb6a43369e499ad"
+source = "git+https://github.com/kvark/blade?rev=e0ec4e720957edd51b945b64dd85605ea54bcfe5#e0ec4e720957edd51b945b64dd85605ea54bcfe5"
 dependencies = [
 "proc-macro2",
 "quote",
@@ -2191,7 +2191,7 @@ dependencies = [
 [[package]]
 name = "blade-util"
 version = "0.2.0"
-source = "git+https://github.com/kvark/blade?rev=416375211bb0b5826b3584dccdb6a43369e499ad#416375211bb0b5826b3584dccdb6a43369e499ad"
+source = "git+https://github.com/kvark/blade?rev=e0ec4e720957edd51b945b64dd85605ea54bcfe5#e0ec4e720957edd51b945b64dd85605ea54bcfe5"
 dependencies = [
 "blade-graphics",
 "bytemuck",
@@ -14709,6 +14709,7 @@ dependencies = [
 "fs",
 "fuzzy",
 "gpui",
+"itertools 0.14.0",
 "language",
 "log",
 "menu",
@@ -14720,6 +14721,8 @@ dependencies = [
 "serde",
 "serde_json",
 "settings",
+"telemetry",
+"tempfile",
 "theme",
 "tree-sitter-json",
 "tree-sitter-rust",
@@ -16451,6 +16454,7 @@ dependencies = [
 "schemars",
 "serde",
 "settings",
+"settings_ui",
 "smallvec",
 "story",
 "telemetry",
@@ -20095,7 +20099,7 @@ dependencies = [
 [[package]]
 name = "zed"
-version = "0.196.0"
+version = "0.196.7"
 dependencies = [
 "activity_indicator",
 "agent",


@@ -434,9 +434,9 @@ aws-smithy-runtime-api = { version = "1.7.4", features = ["http-1x", "client"] }
 aws-smithy-types = { version = "1.3.0", features = ["http-body-1-x"] }
 base64 = "0.22"
 bitflags = "2.6.0"
-blade-graphics = { git = "https://github.com/kvark/blade", rev = "416375211bb0b5826b3584dccdb6a43369e499ad" }
-blade-macros = { git = "https://github.com/kvark/blade", rev = "416375211bb0b5826b3584dccdb6a43369e499ad" }
-blade-util = { git = "https://github.com/kvark/blade", rev = "416375211bb0b5826b3584dccdb6a43369e499ad" }
+blade-graphics = { git = "https://github.com/kvark/blade", rev = "e0ec4e720957edd51b945b64dd85605ea54bcfe5" }
+blade-macros = { git = "https://github.com/kvark/blade", rev = "e0ec4e720957edd51b945b64dd85605ea54bcfe5" }
+blade-util = { git = "https://github.com/kvark/blade", rev = "e0ec4e720957edd51b945b64dd85605ea54bcfe5" }
 blake3 = "1.5.3"
 bytes = "1.0"
 cargo_metadata = "0.19"
@@ -489,7 +489,7 @@ json_dotpath = "1.1"
 jsonschema = "0.30.0"
 jsonwebtoken = "9.3"
 jupyter-protocol = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
-jupyter-websocket-client = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
+jupyter-websocket-client = { git = "https://github.com/ConradIrwin/runtimed" ,rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
 libc = "0.2"
 libsqlite3-sys = { version = "0.30.1", features = ["bundled"] }
 linkify = "0.10.0"
@@ -500,7 +500,7 @@ metal = "0.29"
 moka = { version = "0.12.10", features = ["sync"] }
 naga = { version = "25.0", features = ["wgsl-in"] }
 nanoid = "0.4"
 nbformat = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
 nix = "0.29"
 num-format = "0.4.4"
 objc = "0.2"
@@ -541,7 +541,7 @@ reqwest = { git = "https://github.com/zed-industries/reqwest.git", rev = "951c77
 "stream",
 ] }
 rsa = "0.9.6"
 runtimelib = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734", default-features = false, features = [
 "async-dispatcher-runtime",
 ] }
 rust-embed = { version = "8.4", features = ["include-exclude"] }


@@ -1118,7 +1118,12 @@
 "ctrl-f": "search::FocusSearch",
 "alt-find": "keymap_editor::ToggleKeystrokeSearch",
 "alt-ctrl-f": "keymap_editor::ToggleKeystrokeSearch",
-"alt-c": "keymap_editor::ToggleConflictFilter"
+"alt-c": "keymap_editor::ToggleConflictFilter",
+"enter": "keymap_editor::EditBinding",
+"alt-enter": "keymap_editor::CreateBinding",
+"ctrl-c": "keymap_editor::CopyAction",
+"ctrl-shift-c": "keymap_editor::CopyContext",
+"ctrl-t": "keymap_editor::ShowMatchingKeybinds"
 }
 },
 {


@@ -1216,8 +1216,14 @@
 "context": "KeymapEditor",
 "use_key_equivalents": true,
 "bindings": {
+"cmd-f": "search::FocusSearch",
 "cmd-alt-f": "keymap_editor::ToggleKeystrokeSearch",
-"cmd-alt-c": "keymap_editor::ToggleConflictFilter"
+"cmd-alt-c": "keymap_editor::ToggleConflictFilter",
+"enter": "keymap_editor::EditBinding",
+"alt-enter": "keymap_editor::CreateBinding",
+"cmd-c": "keymap_editor::CopyAction",
+"cmd-shift-c": "keymap_editor::CopyContext",
+"cmd-t": "keymap_editor::ShowMatchingKeybinds"
 }
 },
 {


@@ -817,7 +817,7 @@
 "edit_file": true,
 "fetch": true,
 "list_directory": true,
-"project_notifications": true,
+"project_notifications": false,
 "move_path": true,
 "now": true,
 "find_path": true,
@@ -837,7 +837,7 @@
 "diagnostics": true,
 "fetch": true,
 "list_directory": true,
-"project_notifications": true,
+"project_notifications": false,
 "now": true,
 "find_path": true,
 "read_file": true,


@@ -51,7 +51,7 @@ use util::{ResultExt as _, debug_panic, post_inc};
 use uuid::Uuid;
 use zed_llm_client::{CompletionIntent, CompletionRequestStatus, UsageLimit};
-const MAX_RETRY_ATTEMPTS: u8 = 3;
+const MAX_RETRY_ATTEMPTS: u8 = 4;
 const BASE_RETRY_DELAY: Duration = Duration::from_secs(5);
 #[derive(Debug, Clone)]
@@ -396,6 +396,7 @@ pub struct Thread {
 remaining_turns: u32,
 configured_model: Option<ConfiguredModel>,
 profile: AgentProfile,
+last_error_context: Option<(Arc<dyn LanguageModel>, CompletionIntent)>,
 }
 #[derive(Clone, Debug)]
@@ -489,10 +490,11 @@ impl Thread {
 retry_state: None,
 message_feedback: HashMap::default(),
 last_auto_capture_at: None,
+last_error_context: None,
 last_received_chunk_at: None,
 request_callback: None,
 remaining_turns: u32::MAX,
-configured_model,
+configured_model: configured_model.clone(),
 profile: AgentProfile::new(profile_id, tools),
 }
 }
@@ -613,6 +615,7 @@ impl Thread {
 feedback: None,
 message_feedback: HashMap::default(),
 last_auto_capture_at: None,
+last_error_context: None,
 last_received_chunk_at: None,
 request_callback: None,
 remaining_turns: u32::MAX,
@@ -1264,9 +1267,58 @@ impl Thread {
 self.flush_notifications(model.clone(), intent, cx);
-let request = self.to_completion_request(model.clone(), intent, cx);
-self.stream_completion(request, model, intent, window, cx);
+let _checkpoint = self.finalize_pending_checkpoint(cx);
+self.stream_completion(
+self.to_completion_request(model.clone(), intent, cx),
+model,
+intent,
+window,
+cx,
+);
+}
+pub fn retry_last_completion(
&mut self,
window: Option<AnyWindowHandle>,
cx: &mut Context<Self>,
) {
// Clear any existing error state
self.retry_state = None;
// Use the last error context if available, otherwise fall back to configured model
let (model, intent) = if let Some((model, intent)) = self.last_error_context.take() {
(model, intent)
} else if let Some(configured_model) = self.configured_model.as_ref() {
let model = configured_model.model.clone();
let intent = if self.has_pending_tool_uses() {
CompletionIntent::ToolResults
} else {
CompletionIntent::UserPrompt
};
(model, intent)
} else if let Some(configured_model) = self.get_or_init_configured_model(cx) {
let model = configured_model.model.clone();
let intent = if self.has_pending_tool_uses() {
CompletionIntent::ToolResults
} else {
CompletionIntent::UserPrompt
};
(model, intent)
} else {
return;
};
self.send_to_model(model, intent, window, cx);
}
pub fn enable_burn_mode_and_retry(
&mut self,
window: Option<AnyWindowHandle>,
cx: &mut Context<Self>,
) {
self.completion_mode = CompletionMode::Burn;
cx.emit(ThreadEvent::ProfileChanged);
self.retry_last_completion(window, cx);
 }
 pub fn used_tools_since_last_user_message(&self) -> bool {
@@ -1987,6 +2039,12 @@ impl Thread {
 if let Some(retry_strategy) =
 Thread::get_retry_strategy(completion_error)
 {
+log::info!(
+"Retrying with {:?} for language model completion error {:?}",
+retry_strategy,
+completion_error
+);
 retry_scheduled = thread
 .handle_retryable_error_with_delay(
 &completion_error,
@@ -2130,8 +2188,8 @@ impl Thread {
 // General strategy here:
 // - If retrying won't help (e.g. invalid API key or payload too large), return None so we don't retry at all.
-// - If it's a time-based issue (e.g. server overloaded, rate limit exceeded), try multiple times with exponential backoff.
-// - If it's an issue that *might* be fixed by retrying (e.g. internal server error), just retry once.
+// - If it's a time-based issue (e.g. server overloaded, rate limit exceeded), retry up to 4 times with exponential backoff.
+// - If it's an issue that *might* be fixed by retrying (e.g. internal server error), retry up to 3 times.
 match error {
 HttpResponseError {
 status_code: StatusCode::TOO_MANY_REQUESTS,
@@ -2146,16 +2204,48 @@ impl Thread {
 max_attempts: MAX_RETRY_ATTEMPTS,
 })
 }
UpstreamProviderError {
status,
retry_after,
..
} => match *status {
StatusCode::TOO_MANY_REQUESTS | StatusCode::SERVICE_UNAVAILABLE => {
Some(RetryStrategy::Fixed {
delay: retry_after.unwrap_or(BASE_RETRY_DELAY),
max_attempts: MAX_RETRY_ATTEMPTS,
})
}
StatusCode::INTERNAL_SERVER_ERROR => Some(RetryStrategy::Fixed {
delay: retry_after.unwrap_or(BASE_RETRY_DELAY),
// Internal Server Error could be anything, retry up to 3 times.
max_attempts: 3,
}),
status => {
// There is no StatusCode variant for the unofficial HTTP 529 ("The service is overloaded"),
// but we frequently get them in practice. See https://http.dev/529
if status.as_u16() == 529 {
Some(RetryStrategy::Fixed {
delay: retry_after.unwrap_or(BASE_RETRY_DELAY),
max_attempts: MAX_RETRY_ATTEMPTS,
})
} else {
Some(RetryStrategy::Fixed {
delay: retry_after.unwrap_or(BASE_RETRY_DELAY),
max_attempts: 2,
})
}
}
},
 ApiInternalServerError { .. } => Some(RetryStrategy::Fixed {
 delay: BASE_RETRY_DELAY,
-max_attempts: 1,
+max_attempts: 3,
 }),
 ApiReadResponseError { .. }
 | HttpSend { .. }
 | DeserializeResponse { .. }
 | BadRequestFormat { .. } => Some(RetryStrategy::Fixed {
 delay: BASE_RETRY_DELAY,
-max_attempts: 1,
+max_attempts: 3,
 }),
 // Retrying these errors definitely shouldn't help.
 HttpResponseError {
@@ -2163,24 +2253,30 @@ impl Thread {
 StatusCode::PAYLOAD_TOO_LARGE | StatusCode::FORBIDDEN | StatusCode::UNAUTHORIZED,
 ..
 }
-| SerializeRequest { .. }
-| BuildRequestBody { .. }
-| PromptTooLarge { .. }
 | AuthenticationError { .. }
 | PermissionError { .. }
+| NoApiKey { .. }
 | ApiEndpointNotFound { .. }
-| NoApiKey { .. } => None,
+| PromptTooLarge { .. } => None,
+// These errors might be transient, so retry them
+SerializeRequest { .. } | BuildRequestBody { .. } => Some(RetryStrategy::Fixed {
+delay: BASE_RETRY_DELAY,
+max_attempts: 1,
+}),
 // Retry all other 4xx and 5xx errors once.
 HttpResponseError { status_code, .. }
 if status_code.is_client_error() || status_code.is_server_error() =>
 {
 Some(RetryStrategy::Fixed {
 delay: BASE_RETRY_DELAY,
-max_attempts: 1,
+max_attempts: 3,
 })
 }
 // Conservatively assume that any other errors are non-retryable
-HttpResponseError { .. } | Other(..) => None,
+HttpResponseError { .. } | Other(..) => Some(RetryStrategy::Fixed {
+delay: BASE_RETRY_DELAY,
+max_attempts: 2,
+}),
 }
 }
@@ -2193,6 +2289,23 @@ impl Thread {
 window: Option<AnyWindowHandle>,
 cx: &mut Context<Self>,
 ) -> bool {
// Store context for the Retry button
self.last_error_context = Some((model.clone(), intent));
// Only auto-retry if Burn Mode is enabled
if self.completion_mode != CompletionMode::Burn {
// Show error with retry options
cx.emit(ThreadEvent::ShowError(ThreadError::RetryableError {
message: format!(
"{}\n\nTo automatically retry when similar errors happen, enable Burn Mode.",
error
)
.into(),
can_enable_burn_mode: true,
}));
return false;
}
 let Some(strategy) = strategy.or_else(|| Self::get_retry_strategy(error)) else {
 return false;
 };
@@ -2273,6 +2386,13 @@ impl Thread {
 // Stop generating since we're giving up on retrying.
 self.pending_completions.clear();
+// Show error alongside a Retry button, but no
+// Enable Burn Mode button (since it's already enabled)
+cx.emit(ThreadEvent::ShowError(ThreadError::RetryableError {
+message: format!("Failed after retrying: {}", error).into(),
+can_enable_burn_mode: false,
+}));
 false
 }
 }
@@ -3183,6 +3303,11 @@ pub enum ThreadError {
 header: SharedString,
 message: SharedString,
 },
+#[error("Retryable error: {message}")]
+RetryableError {
+message: SharedString,
+can_enable_burn_mode: bool,
+},
 }
 #[derive(Debug, Clone)]
@@ -3583,6 +3708,7 @@ fn main() {{
 }
 #[gpui::test]
+#[ignore] // turn this test on when project_notifications tool is re-enabled
 async fn test_stale_buffer_notification(cx: &mut TestAppContext) {
 init_test_settings(cx);
@@ -4137,6 +4263,11 @@ fn main() {{
 let project = create_test_project(cx, json!({})).await;
 let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
+// Enable Burn Mode to allow retries
+thread.update(cx, |thread, _| {
+thread.set_completion_mode(CompletionMode::Burn);
+});
 // Create model that returns overloaded error
 let model = Arc::new(ErrorInjector::new(TestError::Overloaded));
@@ -4210,6 +4341,11 @@ fn main() {{
 let project = create_test_project(cx, json!({})).await;
 let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
+// Enable Burn Mode to allow retries
+thread.update(cx, |thread, _| {
+thread.set_completion_mode(CompletionMode::Burn);
+});
 // Create model that returns internal server error
 let model = Arc::new(ErrorInjector::new(TestError::InternalServerError));
@@ -4231,7 +4367,7 @@ fn main() {{
 let retry_state = thread.retry_state.as_ref().unwrap();
 assert_eq!(retry_state.attempt, 1, "Should be first retry attempt");
 assert_eq!(
-retry_state.max_attempts, 1,
+retry_state.max_attempts, 3,
 "Should have correct max attempts"
 );
 });
@@ -4247,8 +4383,9 @@ fn main() {{
 if let MessageSegment::Text(text) = seg {
 text.contains("internal")
 && text.contains("Fake")
-&& text.contains("Retrying in")
-&& !text.contains("attempt")
+&& text.contains("Retrying")
+&& text.contains("attempt 1 of 3")
+&& text.contains("seconds")
 } else {
 false
 }
@@ -4286,6 +4423,11 @@ fn main() {{
 let project = create_test_project(cx, json!({})).await;
 let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
+// Enable Burn Mode to allow retries
+thread.update(cx, |thread, _| {
+thread.set_completion_mode(CompletionMode::Burn);
+});
 // Create model that returns internal server error
 let model = Arc::new(ErrorInjector::new(TestError::InternalServerError));
@@ -4338,8 +4480,8 @@ fn main() {{
 let retry_state = thread.retry_state.as_ref().unwrap();
 assert_eq!(retry_state.attempt, 1, "Should be first retry attempt");
 assert_eq!(
-retry_state.max_attempts, 1,
+retry_state.max_attempts, 3,
-"Internal server errors should only retry once"
+"Internal server errors should retry up to 3 times"
 );
 });
@@ -4347,7 +4489,15 @@ fn main() {{
 cx.executor().advance_clock(BASE_RETRY_DELAY);
 cx.run_until_parked();
-// Should have scheduled second retry - count retry messages
+// Advance clock for second retry
+cx.executor().advance_clock(BASE_RETRY_DELAY);
+cx.run_until_parked();
+// Advance clock for third retry
+cx.executor().advance_clock(BASE_RETRY_DELAY);
+cx.run_until_parked();
+// Should have completed all retries - count retry messages
 let retry_count = thread.update(cx, |thread, _| {
 thread
 .messages
@@ -4365,24 +4515,24 @@ fn main() {{
 .count()
 });
 assert_eq!(
-retry_count, 1,
+retry_count, 3,
-"Should have only one retry for internal server errors"
+"Should have 3 retries for internal server errors"
 );
-// For internal server errors, we only retry once and then give up
+// For internal server errors, we retry 3 times and then give up
-// Check that retry_state is cleared after the single retry
+// Check that retry_state is cleared after all retries
 thread.read_with(cx, |thread, _| {
 assert!(
 thread.retry_state.is_none(),
-"Retry state should be cleared after single retry"
+"Retry state should be cleared after all retries"
 );
 });
-// Verify total attempts (1 initial + 1 retry)
+// Verify total attempts (1 initial + 3 retries)
 assert_eq!(
 *completion_count.lock(),
-2,
+4,
-"Should have attempted once plus 1 retry"
+"Should have attempted once plus 3 retries"
 );
 }
@@ -4393,6 +4543,11 @@ fn main() {{
 let project = create_test_project(cx, json!({})).await;
 let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
+// Enable Burn Mode to allow retries
+thread.update(cx, |thread, _| {
+thread.set_completion_mode(CompletionMode::Burn);
+});
 // Create model that returns overloaded error
 let model = Arc::new(ErrorInjector::new(TestError::Overloaded));
@@ -4479,6 +4634,11 @@ fn main() {{
 let project = create_test_project(cx, json!({})).await;
 let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
+// Enable Burn Mode to allow retries
+thread.update(cx, |thread, _| {
+thread.set_completion_mode(CompletionMode::Burn);
+});
 // We'll use a wrapper to switch behavior after first failure
 struct RetryTestModel {
 inner: Arc<FakeLanguageModel>,
@@ -4647,6 +4807,11 @@ fn main() {{
 let project = create_test_project(cx, json!({})).await;
 let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
+// Enable Burn Mode to allow retries
+thread.update(cx, |thread, _| {
+thread.set_completion_mode(CompletionMode::Burn);
+});
 // Create a model that fails once then succeeds
 struct FailOnceModel {
 inner: Arc<FakeLanguageModel>,
@@ -4808,6 +4973,11 @@ fn main() {{
 let project = create_test_project(cx, json!({})).await;
 let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
+// Enable Burn Mode to allow retries
+thread.update(cx, |thread, _| {
+thread.set_completion_mode(CompletionMode::Burn);
+});
 // Create a model that returns rate limit error with retry_after
 struct RateLimitModel {
 inner: Arc<FakeLanguageModel>,
@@ -5081,6 +5251,79 @@ fn main() {{
 );
 }
#[gpui::test]
async fn test_no_retry_without_burn_mode(cx: &mut TestAppContext) {
init_test_settings(cx);
let project = create_test_project(cx, json!({})).await;
let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
// Ensure we're in Normal mode (not Burn mode)
thread.update(cx, |thread, _| {
thread.set_completion_mode(CompletionMode::Normal);
});
// Track error events
let error_events = Arc::new(Mutex::new(Vec::new()));
let error_events_clone = error_events.clone();
let _subscription = thread.update(cx, |_, cx| {
cx.subscribe(&thread, move |_, _, event: &ThreadEvent, _| {
if let ThreadEvent::ShowError(error) = event {
error_events_clone.lock().push(error.clone());
}
})
});
// Create model that returns overloaded error
let model = Arc::new(ErrorInjector::new(TestError::Overloaded));
// Insert a user message
thread.update(cx, |thread, cx| {
thread.insert_user_message("Hello!", ContextLoadResult::default(), None, vec![], cx);
});
// Start completion
thread.update(cx, |thread, cx| {
thread.send_to_model(model.clone(), CompletionIntent::UserPrompt, None, cx);
});
cx.run_until_parked();
// Verify no retry state was created
thread.read_with(cx, |thread, _| {
assert!(
thread.retry_state.is_none(),
"Should not have retry state in Normal mode"
);
});
// Check that a retryable error was reported
let errors = error_events.lock();
assert!(!errors.is_empty(), "Should have received an error event");
if let ThreadError::RetryableError {
message: _,
can_enable_burn_mode,
} = &errors[0]
{
assert!(
*can_enable_burn_mode,
"Error should indicate burn mode can be enabled"
);
} else {
panic!("Expected RetryableError, got {:?}", errors[0]);
}
// Verify the thread is no longer generating
thread.read_with(cx, |thread, _| {
assert!(
!thread.is_generating(),
"Should not be generating after error without retry"
);
});
}
 #[gpui::test]
 async fn test_retry_cancelled_on_stop(cx: &mut TestAppContext) {
 init_test_settings(cx);
@@ -5088,6 +5331,11 @@ fn main() {{
 let project = create_test_project(cx, json!({})).await;
 let (_, _, thread, _, _base_model) = setup_test_environment(cx, project.clone()).await;
+// Enable Burn Mode to allow retries
+thread.update(cx, |thread, _| {
+thread.set_completion_mode(CompletionMode::Burn);
+});
 // Create model that returns overloaded error
 let model = Arc::new(ErrorInjector::new(TestError::Overloaded));


@@ -1036,7 +1036,7 @@ impl ActiveThread {
 .collect::<Vec<_>>()
 .join("\n");
 self.last_error = Some(ThreadError::Message {
-header: "Error interacting with language model".into(),
+header: "Error".into(),
 message: error_message.into(),
 });
 }
@@ -3722,8 +3722,11 @@ pub(crate) fn open_context(
 AgentContextHandle::Thread(thread_context) => workspace.update(cx, |workspace, cx| {
 if let Some(panel) = workspace.panel::<AgentPanel>(cx) {
-panel.update(cx, |panel, cx| {
-panel.open_thread(thread_context.thread.clone(), window, cx);
+let thread = thread_context.thread.clone();
+window.defer(cx, move |window, cx| {
+panel.update(cx, |panel, cx| {
+panel.open_thread(thread, window, cx);
+});
 });
 }
 }),
@@ -3731,8 +3734,11 @@ pub(crate) fn open_context(
 AgentContextHandle::TextThread(text_thread_context) => {
 workspace.update(cx, |workspace, cx| {
 if let Some(panel) = workspace.panel::<AgentPanel>(cx) {
-panel.update(cx, |panel, cx| {
-panel.open_prompt_editor(text_thread_context.context.clone(), window, cx)
+let context = text_thread_context.context.clone();
+window.defer(cx, move |window, cx| {
+panel.update(cx, |panel, cx| {
+panel.open_prompt_editor(context, window, cx)
+});
 });
 }
 })

View file

@ -64,8 +64,9 @@ use theme::ThemeSettings;
use time::UtcOffset; use time::UtcOffset;
use ui::utils::WithRemSize; use ui::utils::WithRemSize;
use ui::{ use ui::{
Banner, Callout, CheckboxWithLabel, ContextMenu, ElevationIndex, KeyBinding, PopoverMenu, Banner, Button, Callout, CheckboxWithLabel, ContextMenu, ElevationIndex, IconPosition,
PopoverMenuHandle, ProgressBar, Tab, Tooltip, Vector, VectorName, prelude::*, KeyBinding, PopoverMenu, PopoverMenuHandle, ProgressBar, Tab, Tooltip, Vector, VectorName,
prelude::*,
}; };
use util::ResultExt as _; use util::ResultExt as _;
use workspace::{ use workspace::{
@ -2913,6 +2914,21 @@ impl AgentPanel {
.size(IconSize::Small) .size(IconSize::Small)
.color(Color::Error); .color(Color::Error);
let retry_button = Button::new("retry", "Retry")
.icon(IconName::RotateCw)
.icon_position(IconPosition::Start)
.on_click({
let thread = thread.clone();
move |_, window, cx| {
thread.update(cx, |thread, cx| {
thread.clear_last_error();
thread.thread().update(cx, |thread, cx| {
thread.retry_last_completion(Some(window.window_handle()), cx);
});
});
}
});
div() div()
.border_t_1() .border_t_1()
.border_color(cx.theme().colors().border) .border_color(cx.theme().colors().border)
@ -2921,13 +2937,72 @@ impl AgentPanel {
.icon(icon) .icon(icon)
.title(header) .title(header)
.description(message.clone()) .description(message.clone())
.primary_action(self.dismiss_error_button(thread, cx)) .primary_action(retry_button)
.secondary_action(self.create_copy_button(message_with_header)) .secondary_action(self.dismiss_error_button(thread, cx))
.tertiary_action(self.create_copy_button(message_with_header))
.bg_color(self.error_callout_bg(cx)), .bg_color(self.error_callout_bg(cx)),
) )
.into_any_element() .into_any_element()
} }
fn render_retryable_error(
&self,
message: SharedString,
can_enable_burn_mode: bool,
thread: &Entity<ActiveThread>,
cx: &mut Context<Self>,
) -> AnyElement {
let icon = Icon::new(IconName::XCircle)
.size(IconSize::Small)
.color(Color::Error);
let retry_button = Button::new("retry", "Retry")
.icon(IconName::RotateCw)
.icon_position(IconPosition::Start)
.on_click({
let thread = thread.clone();
move |_, window, cx| {
thread.update(cx, |thread, cx| {
thread.clear_last_error();
thread.thread().update(cx, |thread, cx| {
thread.retry_last_completion(Some(window.window_handle()), cx);
});
});
}
});
let mut callout = Callout::new()
.icon(icon)
.title("Error")
.description(message.clone())
.bg_color(self.error_callout_bg(cx))
.primary_action(retry_button);
if can_enable_burn_mode {
let burn_mode_button = Button::new("enable_burn_retry", "Enable Burn Mode and Retry")
.icon(IconName::ZedBurnMode)
.icon_position(IconPosition::Start)
.on_click({
let thread = thread.clone();
move |_, window, cx| {
thread.update(cx, |thread, cx| {
thread.clear_last_error();
thread.thread().update(cx, |thread, cx| {
thread.enable_burn_mode_and_retry(Some(window.window_handle()), cx);
});
});
}
});
callout = callout.secondary_action(burn_mode_button);
}
div()
.border_t_1()
.border_color(cx.theme().colors().border)
.child(callout)
.into_any_element()
}
fn render_prompt_editor( fn render_prompt_editor(
&self, &self,
context_editor: &Entity<TextThreadEditor>, context_editor: &Entity<TextThreadEditor>,
@ -3169,6 +3244,15 @@ impl Render for AgentPanel {
ThreadError::Message { header, message } => { ThreadError::Message { header, message } => {
self.render_error_message(header, message, thread, cx) self.render_error_message(header, message, thread, cx)
} }
ThreadError::RetryableError {
message,
can_enable_burn_mode,
} => self.render_retryable_error(
message,
can_enable_burn_mode,
thread,
cx,
),
}) })
.into_any(), .into_any(),
) )

View file

@ -12,6 +12,7 @@ use collections::HashMap;
use fs::FakeFs; use fs::FakeFs;
use futures::{FutureExt, future::LocalBoxFuture}; use futures::{FutureExt, future::LocalBoxFuture};
use gpui::{AppContext, TestAppContext, Timer}; use gpui::{AppContext, TestAppContext, Timer};
use http_client::StatusCode;
use indoc::{formatdoc, indoc}; use indoc::{formatdoc, indoc};
use language_model::{ use language_model::{
LanguageModelRegistry, LanguageModelRequestTool, LanguageModelToolResult, LanguageModelRegistry, LanguageModelRequestTool, LanguageModelToolResult,
@ -1675,6 +1676,30 @@ async fn retry_on_rate_limit<R>(mut request: impl AsyncFnMut() -> Result<R>) ->
Timer::after(retry_after + jitter).await; Timer::after(retry_after + jitter).await;
continue; continue;
} }
LanguageModelCompletionError::UpstreamProviderError {
status,
retry_after,
..
} => {
// Only retry for specific status codes
let should_retry = matches!(
*status,
StatusCode::TOO_MANY_REQUESTS | StatusCode::SERVICE_UNAVAILABLE
) || status.as_u16() == 529;
if !should_retry {
return Err(err.into());
}
// Use server-provided retry_after if available, otherwise use default
let retry_after = retry_after.unwrap_or(Duration::from_secs(5));
let jitter = retry_after.mul_f64(rand::thread_rng().gen_range(0.0..1.0));
eprintln!(
"Attempt #{attempt}: {err}. Retry after {retry_after:?} + jitter of {jitter:?}"
);
Timer::after(retry_after + jitter).await;
continue;
}
_ => return Err(err.into()), _ => return Err(err.into()),
}, },
Err(err) => return Err(err), Err(err) => return Err(err),
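The retry helper above sleeps for the server-provided retry_after (falling back to a default of 5 seconds) plus a full-jitter term drawn from 0.0..1.0, so that concurrent requests don't retry in lockstep against an overloaded provider. A minimal standalone sketch of that delay calculation follows; the jittered_delay function and its arguments are illustrative, not part of the actual test helper.

use std::time::Duration;

// Hypothetical helper, not the test code above: combine a server hint (or a default)
// with full jitter so concurrent retries spread out instead of stampeding the provider.
fn jittered_delay(retry_after: Option<Duration>, default: Duration, random_unit: f64) -> Duration {
    let retry_after = retry_after.unwrap_or(default);
    // `random_unit` is assumed to be drawn uniformly from 0.0..1.0 by the caller.
    retry_after + retry_after.mul_f64(random_unit)
}

fn main() {
    // With no server hint and a 5s default, the total wait lands in [5s, 10s).
    let delay = jittered_delay(None, Duration::from_secs(5), 0.5);
    assert_eq!(delay, Duration::from_millis(7500));
    println!("retrying after {delay:?}");
}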

View file

@ -21,7 +21,7 @@ use futures::{
channel::oneshot, future::BoxFuture, channel::oneshot, future::BoxFuture,
}; };
use gpui::{App, AsyncApp, Entity, Global, Task, WeakEntity, actions}; use gpui::{App, AsyncApp, Entity, Global, Task, WeakEntity, actions};
use http_client::{AsyncBody, HttpClient, HttpClientWithUrl}; use http_client::{AsyncBody, HttpClient, HttpClientWithUrl, http};
use parking_lot::RwLock; use parking_lot::RwLock;
use postage::watch; use postage::watch;
use proxy::connect_proxy_stream; use proxy::connect_proxy_stream;
@ -1123,6 +1123,7 @@ impl Client {
let http = self.http.clone(); let http = self.http.clone();
let proxy = http.proxy().cloned(); let proxy = http.proxy().cloned();
let user_agent = http.user_agent().cloned();
let credentials = credentials.clone(); let credentials = credentials.clone();
let rpc_url = self.rpc_url(http, release_channel); let rpc_url = self.rpc_url(http, release_channel);
let system_id = self.telemetry.system_id(); let system_id = self.telemetry.system_id();
@ -1174,7 +1175,7 @@ impl Client {
// We then modify the request to add our desired headers. // We then modify the request to add our desired headers.
let request_headers = request.headers_mut(); let request_headers = request.headers_mut();
request_headers.insert( request_headers.insert(
"Authorization", http::header::AUTHORIZATION,
HeaderValue::from_str(&credentials.authorization_header())?, HeaderValue::from_str(&credentials.authorization_header())?,
); );
request_headers.insert( request_headers.insert(
@ -1186,6 +1187,9 @@ impl Client {
"x-zed-release-channel", "x-zed-release-channel",
HeaderValue::from_str(release_channel.map(|r| r.dev_name()).unwrap_or("unknown"))?, HeaderValue::from_str(release_channel.map(|r| r.dev_name()).unwrap_or("unknown"))?,
); );
if let Some(user_agent) = user_agent {
request_headers.insert(http::header::USER_AGENT, user_agent);
}
if let Some(system_id) = system_id { if let Some(system_id) = system_id {
request_headers.insert("x-zed-system-id", HeaderValue::from_str(&system_id)?); request_headers.insert("x-zed-system-id", HeaderValue::from_str(&system_id)?);
} }

View file

@ -1772,7 +1772,7 @@ impl Editor {
) -> Self { ) -> Self {
debug_assert!( debug_assert!(
display_map.is_none() || mode.is_minimap(), display_map.is_none() || mode.is_minimap(),
"Providing a display map for a new editor is only intended for the minimap and might have unindended side effects otherwise!" "Providing a display map for a new editor is only intended for the minimap and might have unintended side effects otherwise!"
); );
let full_mode = mode.is_full(); let full_mode = mode.is_full();
@ -8193,8 +8193,7 @@ impl Editor {
return; return;
}; };
// Try to find a closest, enclosing node using tree-sitter that has a // Try to find a closest, enclosing node using tree-sitter that has a task
// task
let Some((buffer, buffer_row, tasks)) = self let Some((buffer, buffer_row, tasks)) = self
.find_enclosing_node_task(cx) .find_enclosing_node_task(cx)
// Or find the task that's closest in row-distance. // Or find the task that's closest in row-distance.
@ -21732,11 +21731,11 @@ impl CodeActionProvider for Entity<Project> {
cx: &mut App, cx: &mut App,
) -> Task<Result<Vec<CodeAction>>> { ) -> Task<Result<Vec<CodeAction>>> {
self.update(cx, |project, cx| { self.update(cx, |project, cx| {
let code_lens = project.code_lens(buffer, range.clone(), cx); let code_lens_actions = project.code_lens_actions(buffer, range.clone(), cx);
let code_actions = project.code_actions(buffer, range, None, cx); let code_actions = project.code_actions(buffer, range, None, cx);
cx.background_spawn(async move { cx.background_spawn(async move {
let (code_lens, code_actions) = join(code_lens, code_actions).await; let (code_lens_actions, code_actions) = join(code_lens_actions, code_actions).await;
Ok(code_lens Ok(code_lens_actions
.context("code lens fetch")? .context("code lens fetch")?
.into_iter() .into_iter()
.chain(code_actions.context("code action fetch")?) .chain(code_actions.context("code action fetch")?)

View file

@ -9570,6 +9570,74 @@ async fn test_document_format_during_save(cx: &mut TestAppContext) {
} }
} }
#[gpui::test]
async fn test_redo_after_noop_format(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.ensure_final_newline_on_save = Some(false);
});
let fs = FakeFs::new(cx.executor());
fs.insert_file(path!("/file.txt"), "foo".into()).await;
let project = Project::test(fs, [path!("/file.txt").as_ref()], cx).await;
let buffer = project
.update(cx, |project, cx| {
project.open_local_buffer(path!("/file.txt"), cx)
})
.await
.unwrap();
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
let (editor, cx) = cx.add_window_view(|window, cx| {
build_editor_with_project(project.clone(), buffer, window, cx)
});
editor.update_in(cx, |editor, window, cx| {
editor.change_selections(SelectionEffects::default(), window, cx, |s| {
s.select_ranges([0..0])
});
});
assert!(!cx.read(|cx| editor.is_dirty(cx)));
editor.update_in(cx, |editor, window, cx| {
editor.handle_input("\n", window, cx)
});
cx.run_until_parked();
save(&editor, &project, cx).await;
assert_eq!("\nfoo", editor.read_with(cx, |editor, cx| editor.text(cx)));
editor.update_in(cx, |editor, window, cx| {
editor.undo(&Default::default(), window, cx);
});
save(&editor, &project, cx).await;
assert_eq!("foo", editor.read_with(cx, |editor, cx| editor.text(cx)));
editor.update_in(cx, |editor, window, cx| {
editor.redo(&Default::default(), window, cx);
});
cx.run_until_parked();
assert_eq!("\nfoo", editor.read_with(cx, |editor, cx| editor.text(cx)));
async fn save(editor: &Entity<Editor>, project: &Entity<Project>, cx: &mut VisualTestContext) {
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(
SaveOptions {
format: true,
autosave: false,
},
project.clone(),
window,
cx,
)
})
.unwrap();
cx.executor().start_waiting();
save.await;
assert!(!cx.read(|cx| editor.is_dirty(cx)));
}
}
#[gpui::test] #[gpui::test]
async fn test_multibuffer_format_during_save(cx: &mut TestAppContext) { async fn test_multibuffer_format_during_save(cx: &mut TestAppContext) {
init_test(cx, |_| {}); init_test(cx, |_| {});
@ -9955,8 +10023,14 @@ async fn test_autosave_with_dirty_buffers(cx: &mut TestAppContext) {
); );
} }
#[gpui::test] async fn setup_range_format_test(
async fn test_range_format_during_save(cx: &mut TestAppContext) { cx: &mut TestAppContext,
) -> (
Entity<Project>,
Entity<Editor>,
&mut gpui::VisualTestContext,
lsp::FakeLanguageServer,
) {
init_test(cx, |_| {}); init_test(cx, |_| {});
let fs = FakeFs::new(cx.executor()); let fs = FakeFs::new(cx.executor());
@ -9971,9 +10045,9 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
FakeLspAdapter { FakeLspAdapter {
capabilities: lsp::ServerCapabilities { capabilities: lsp::ServerCapabilities {
document_range_formatting_provider: Some(lsp::OneOf::Left(true)), document_range_formatting_provider: Some(lsp::OneOf::Left(true)),
..Default::default() ..lsp::ServerCapabilities::default()
}, },
..Default::default() ..FakeLspAdapter::default()
}, },
); );
@ -9988,14 +10062,22 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
let (editor, cx) = cx.add_window_view(|window, cx| { let (editor, cx) = cx.add_window_view(|window, cx| {
build_editor_with_project(project.clone(), buffer, window, cx) build_editor_with_project(project.clone(), buffer, window, cx)
}); });
cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();
(project, editor, cx, fake_server)
}
#[gpui::test]
async fn test_range_format_on_save_success(cx: &mut TestAppContext) {
let (project, editor, cx, fake_server) = setup_range_format_test(cx).await;
editor.update_in(cx, |editor, window, cx| { editor.update_in(cx, |editor, window, cx| {
editor.set_text("one\ntwo\nthree\n", window, cx) editor.set_text("one\ntwo\nthree\n", window, cx)
}); });
assert!(cx.read(|cx| editor.is_dirty(cx))); assert!(cx.read(|cx| editor.is_dirty(cx)));
cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();
let save = editor let save = editor
.update_in(cx, |editor, window, cx| { .update_in(cx, |editor, window, cx| {
editor.save( editor.save(
@ -10030,13 +10112,18 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
"one, two\nthree\n" "one, two\nthree\n"
); );
assert!(!cx.read(|cx| editor.is_dirty(cx))); assert!(!cx.read(|cx| editor.is_dirty(cx)));
}
#[gpui::test]
async fn test_range_format_on_save_timeout(cx: &mut TestAppContext) {
let (project, editor, cx, fake_server) = setup_range_format_test(cx).await;
editor.update_in(cx, |editor, window, cx| { editor.update_in(cx, |editor, window, cx| {
editor.set_text("one\ntwo\nthree\n", window, cx) editor.set_text("one\ntwo\nthree\n", window, cx)
}); });
assert!(cx.read(|cx| editor.is_dirty(cx))); assert!(cx.read(|cx| editor.is_dirty(cx)));
// Ensure we can still save even if formatting hangs. // Test that save still works when formatting hangs
fake_server.set_request_handler::<lsp::request::RangeFormatting, _, _>( fake_server.set_request_handler::<lsp::request::RangeFormatting, _, _>(
move |params, _| async move { move |params, _| async move {
assert_eq!( assert_eq!(
@ -10068,8 +10155,13 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
"one\ntwo\nthree\n" "one\ntwo\nthree\n"
); );
assert!(!cx.read(|cx| editor.is_dirty(cx))); assert!(!cx.read(|cx| editor.is_dirty(cx)));
}
// For non-dirty buffer, no formatting request should be sent #[gpui::test]
async fn test_range_format_not_called_for_clean_buffer(cx: &mut TestAppContext) {
let (project, editor, cx, fake_server) = setup_range_format_test(cx).await;
// Buffer starts clean, no formatting should be requested
let save = editor let save = editor
.update_in(cx, |editor, window, cx| { .update_in(cx, |editor, window, cx| {
editor.save( editor.save(
@ -10090,6 +10182,12 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
.next(); .next();
cx.executor().start_waiting(); cx.executor().start_waiting();
save.await; save.await;
cx.run_until_parked();
}
#[gpui::test]
async fn test_range_format_respects_language_tab_size_override(cx: &mut TestAppContext) {
let (project, editor, cx, fake_server) = setup_range_format_test(cx).await;
// Set Rust language override and assert overridden tabsize is sent to language server // Set Rust language override and assert overridden tabsize is sent to language server
update_test_language_settings(cx, |settings| { update_test_language_settings(cx, |settings| {
@ -10103,7 +10201,7 @@ async fn test_range_format_during_save(cx: &mut TestAppContext) {
}); });
editor.update_in(cx, |editor, window, cx| { editor.update_in(cx, |editor, window, cx| {
editor.set_text("somehting_new\n", window, cx) editor.set_text("something_new\n", window, cx)
}); });
assert!(cx.read(|cx| editor.is_dirty(cx))); assert!(cx.read(|cx| editor.is_dirty(cx)));
let save = editor let save = editor
@ -21188,16 +21286,32 @@ async fn test_apply_code_lens_actions_with_commands(cx: &mut gpui::TestAppContex
}, },
); );
let (buffer, _handle) = project let editor = workspace
.update(cx, |p, cx| { .update(cx, |workspace, window, cx| {
p.open_local_buffer_with_lsp(path!("/dir/a.ts"), cx) workspace.open_abs_path(
PathBuf::from(path!("/dir/a.ts")),
OpenOptions::default(),
window,
cx,
)
}) })
.unwrap()
.await .await
.unwrap()
.downcast::<Editor>()
.unwrap(); .unwrap();
cx.executor().run_until_parked(); cx.executor().run_until_parked();
let fake_server = fake_language_servers.next().await.unwrap(); let fake_server = fake_language_servers.next().await.unwrap();
let buffer = editor.update(cx, |editor, cx| {
editor
.buffer()
.read(cx)
.as_singleton()
.expect("have opened a single file by path")
});
let buffer_snapshot = buffer.update(cx, |buffer, _| buffer.snapshot()); let buffer_snapshot = buffer.update(cx, |buffer, _| buffer.snapshot());
let anchor = buffer_snapshot.anchor_at(0, text::Bias::Left); let anchor = buffer_snapshot.anchor_at(0, text::Bias::Left);
drop(buffer_snapshot); drop(buffer_snapshot);
@ -21255,7 +21369,7 @@ async fn test_apply_code_lens_actions_with_commands(cx: &mut gpui::TestAppContex
assert_eq!( assert_eq!(
actions.len(), actions.len(),
1, 1,
"Should have only one valid action for the 0..0 range" "Should have only one valid action for the 0..0 range, got: {actions:#?}"
); );
let action = actions[0].clone(); let action = actions[0].clone();
let apply = project.update(cx, |project, cx| { let apply = project.update(cx, |project, cx| {
@ -21301,7 +21415,7 @@ async fn test_apply_code_lens_actions_with_commands(cx: &mut gpui::TestAppContex
.into_iter() .into_iter()
.collect(), .collect(),
), ),
..Default::default() ..lsp::WorkspaceEdit::default()
}, },
}, },
) )
@ -21324,6 +21438,38 @@ async fn test_apply_code_lens_actions_with_commands(cx: &mut gpui::TestAppContex
buffer.undo(cx); buffer.undo(cx);
assert_eq!(buffer.text(), "a"); assert_eq!(buffer.text(), "a");
}); });
let actions_after_edits = cx
.update_window(*workspace, |_, window, cx| {
project.code_actions(&buffer, anchor..anchor, window, cx)
})
.unwrap()
.await
.unwrap();
assert_eq!(
actions, actions_after_edits,
"For the same selection, same code lens actions should be returned"
);
let _responses =
fake_server.set_request_handler::<lsp::request::CodeLensRequest, _, _>(|_, _| async move {
panic!("No more code lens requests are expected");
});
editor.update_in(cx, |editor, window, cx| {
editor.select_all(&SelectAll, window, cx);
});
cx.executor().run_until_parked();
let new_actions = cx
.update_window(*workspace, |_, window, cx| {
project.code_actions(&buffer, anchor..anchor, window, cx)
})
.unwrap()
.await
.unwrap();
assert_eq!(
actions, new_actions,
"Code lens are queried for the same range and should get the same set back, but without additional LSP queries now"
);
} }
#[gpui::test] #[gpui::test]
@ -22708,7 +22854,7 @@ pub(crate) fn init_test(cx: &mut TestAppContext, f: fn(&mut AllLanguageSettingsC
workspace::init_settings(cx); workspace::init_settings(cx);
crate::init(cx); crate::init(cx);
}); });
zlog::init_test();
update_test_language_settings(cx, f); update_test_language_settings(cx, f);
} }

View file

@ -6,7 +6,7 @@ use gpui::{Hsla, Rgba};
use itertools::Itertools; use itertools::Itertools;
use language::point_from_lsp; use language::point_from_lsp;
use multi_buffer::Anchor; use multi_buffer::Anchor;
use project::{DocumentColor, lsp_store::ColorFetchStrategy}; use project::{DocumentColor, lsp_store::LspFetchStrategy};
use settings::Settings as _; use settings::Settings as _;
use text::{Bias, BufferId, OffsetRangeExt as _}; use text::{Bias, BufferId, OffsetRangeExt as _};
use ui::{App, Context, Window}; use ui::{App, Context, Window};
@ -180,9 +180,9 @@ impl Editor {
.filter_map(|buffer| { .filter_map(|buffer| {
let buffer_id = buffer.read(cx).remote_id(); let buffer_id = buffer.read(cx).remote_id();
let fetch_strategy = if ignore_cache { let fetch_strategy = if ignore_cache {
ColorFetchStrategy::IgnoreCache LspFetchStrategy::IgnoreCache
} else { } else {
ColorFetchStrategy::UseCache { LspFetchStrategy::UseCache {
known_cache_version: self.colors.as_ref().and_then(|colors| { known_cache_version: self.colors.as_ref().and_then(|colors| {
Some(colors.buffer_colors.get(&buffer_id)?.cache_version_used) Some(colors.buffer_colors.get(&buffer_id)?.cache_version_used)
}), }),

View file

@ -12,7 +12,7 @@ use crate::{
}; };
pub use autoscroll::{Autoscroll, AutoscrollStrategy}; pub use autoscroll::{Autoscroll, AutoscrollStrategy};
use core::fmt::Debug; use core::fmt::Debug;
use gpui::{App, Axis, Context, Global, Pixels, Task, Window, point, px}; use gpui::{Along, App, Axis, Context, Global, Pixels, Task, Window, point, px};
use language::language_settings::{AllLanguageSettings, SoftWrap}; use language::language_settings::{AllLanguageSettings, SoftWrap};
use language::{Bias, Point}; use language::{Bias, Point};
pub use scroll_amount::ScrollAmount; pub use scroll_amount::ScrollAmount;
@ -49,14 +49,14 @@ impl ScrollAnchor {
} }
pub fn scroll_position(&self, snapshot: &DisplaySnapshot) -> gpui::Point<f32> { pub fn scroll_position(&self, snapshot: &DisplaySnapshot) -> gpui::Point<f32> {
let mut scroll_position = self.offset; self.offset.apply_along(Axis::Vertical, |offset| {
if self.anchor == Anchor::min() { if self.anchor == Anchor::min() {
scroll_position.y = 0.; 0.
} else { } else {
let scroll_top = self.anchor.to_display_point(snapshot).row().as_f32(); let scroll_top = self.anchor.to_display_point(snapshot).row().as_f32();
scroll_position.y += scroll_top; (offset + scroll_top).max(0.)
} }
scroll_position })
} }
pub fn top_row(&self, buffer: &MultiBufferSnapshot) -> u32 { pub fn top_row(&self, buffer: &MultiBufferSnapshot) -> u32 {
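The new scroll_position implementation above uses apply_along to rewrite only the vertical component of the scroll offset, clamping it to zero when the anchor is at the buffer start. A rough, framework-free sketch of that pattern; Point2, Axis2, and apply_along here are illustrative stand-ins, not gpui's actual types.

// Illustrative stand-ins, not gpui's real types: a 2D point and an axis selector
// where `apply_along` rewrites a single component and leaves the other untouched.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point2 {
    x: f32,
    y: f32,
}

#[derive(Clone, Copy)]
enum Axis2 {
    Horizontal,
    Vertical,
}

impl Point2 {
    fn apply_along(self, axis: Axis2, f: impl FnOnce(f32) -> f32) -> Self {
        match axis {
            Axis2::Horizontal => Self { x: f(self.x), ..self },
            Axis2::Vertical => Self { y: f(self.y), ..self },
        }
    }
}

fn main() {
    let offset = Point2 { x: 3.0, y: -2.0 };
    let scroll_top = 1.5;
    // Mirrors the change above: clamp only the vertical scroll offset to be non-negative.
    let position = offset.apply_along(Axis2::Vertical, |y| (y + scroll_top).max(0.0));
    assert_eq!(position, Point2 { x: 3.0, y: 0.0 });

    // The horizontal component can be adjusted independently the same way.
    let nudged = position.apply_along(Axis2::Horizontal, |x| x + 1.0);
    assert_eq!(nudged, Point2 { x: 4.0, y: 0.0 });
}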

View file

@ -422,6 +422,13 @@ impl AppContext for ExampleContext {
self.app.update_entity(handle, update) self.app.update_entity(handle, update)
} }
fn as_mut<'a, T>(&'a mut self, handle: &Entity<T>) -> Self::Result<gpui::GpuiBorrow<'a, T>>
where
T: 'static,
{
self.app.as_mut(handle)
}
fn read_entity<T, R>( fn read_entity<T, R>(
&self, &self,
handle: &Entity<T>, handle: &Entity<T>,

View file

@ -102,7 +102,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn language_server_initialization_options( async fn language_server_initialization_options(
@ -127,7 +127,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn language_server_workspace_configuration( async fn language_server_workspace_configuration(
@ -150,7 +150,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn language_server_additional_initialization_options( async fn language_server_additional_initialization_options(
@ -175,7 +175,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn language_server_additional_workspace_configuration( async fn language_server_additional_workspace_configuration(
@ -200,7 +200,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn labels_for_completions( async fn labels_for_completions(
@ -226,7 +226,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn labels_for_symbols( async fn labels_for_symbols(
@ -252,7 +252,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn complete_slash_command_argument( async fn complete_slash_command_argument(
@ -271,7 +271,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn run_slash_command( async fn run_slash_command(
@ -297,7 +297,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn context_server_command( async fn context_server_command(
@ -316,7 +316,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn context_server_configuration( async fn context_server_configuration(
@ -343,7 +343,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn suggest_docs_packages(&self, provider: Arc<str>) -> Result<Vec<String>> { async fn suggest_docs_packages(&self, provider: Arc<str>) -> Result<Vec<String>> {
@ -358,7 +358,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn index_docs( async fn index_docs(
@ -384,7 +384,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn get_dap_binary( async fn get_dap_binary(
@ -406,7 +406,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn dap_request_kind( async fn dap_request_kind(
&self, &self,
@ -423,7 +423,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn dap_config_to_scenario(&self, config: ZedDebugConfig) -> Result<DebugScenario> { async fn dap_config_to_scenario(&self, config: ZedDebugConfig) -> Result<DebugScenario> {
@ -437,7 +437,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn dap_locator_create_scenario( async fn dap_locator_create_scenario(
@ -461,7 +461,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
async fn run_dap_locator( async fn run_dap_locator(
&self, &self,
@ -477,7 +477,7 @@ impl extension::Extension for WasmExtension {
} }
.boxed() .boxed()
}) })
.await .await?
} }
} }
@ -739,7 +739,7 @@ impl WasmExtension {
.with_context(|| format!("failed to load wasm extension {}", manifest.id)) .with_context(|| format!("failed to load wasm extension {}", manifest.id))
} }
pub async fn call<T, Fn>(&self, f: Fn) -> T pub async fn call<T, Fn>(&self, f: Fn) -> Result<T>
where where
T: 'static + Send, T: 'static + Send,
Fn: 'static Fn: 'static
@ -755,8 +755,19 @@ impl WasmExtension {
} }
.boxed() .boxed()
})) }))
.expect("wasm extension channel should not be closed yet"); .map_err(|_| {
return_rx.await.expect("wasm extension channel") anyhow!(
"wasm extension channel should not be closed yet, extension {} (id {})",
self.manifest.name,
self.manifest.id,
)
})?;
return_rx.await.with_context(|| {
format!(
"wasm extension channel, extension {} (id {})",
self.manifest.name, self.manifest.id,
)
})
} }
} }
@ -777,8 +788,19 @@ impl WasmState {
} }
.boxed_local() .boxed_local()
})) }))
.expect("main thread message channel should not be closed yet"); .unwrap_or_else(|_| {
async move { return_rx.await.expect("main thread message channel") } panic!(
"main thread message channel should not be closed yet, extension {} (id {})",
self.manifest.name, self.manifest.id,
)
});
let name = self.manifest.name.clone();
let id = self.manifest.id.clone();
async move {
return_rx.await.unwrap_or_else(|_| {
panic!("main thread message channel, extension {name} (id {id})")
})
}
} }
fn work_dir(&self) -> PathBuf { fn work_dir(&self) -> PathBuf {
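The WasmExtension::call change above replaces the .expect() on the extension's task channel with error propagation, so a channel closed by a racing extension reload produces a contextual error instead of a panic. A minimal sketch of that pattern using plain std channels; call_worker and its arguments are illustrative, not the wasm host's types.

use std::sync::mpsc;

// Hypothetical example using plain std channels (not the wasm host's task types):
// map a send failure into a descriptive error so a closed channel surfaces as an
// error to log rather than panicking the process.
fn call_worker(tx: &mpsc::Sender<u32>, extension_name: &str) -> Result<(), String> {
    tx.send(42).map_err(|_| {
        format!("wasm extension channel should not be closed yet, extension {extension_name}")
    })
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // While the receiving side is alive, the call succeeds.
    assert!(call_worker(&tx, "example-extension").is_ok());
    assert_eq!(rx.recv(), Ok(42));

    // Once the receiver is gone (e.g. the extension was reloaded), the caller
    // gets an error to propagate instead of a panic.
    drop(rx);
    assert!(call_worker(&tx, "example-extension").is_err());
}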

View file

@ -126,7 +126,7 @@ mod macos {
"ContentMask".into(), "ContentMask".into(),
"Uniforms".into(), "Uniforms".into(),
"AtlasTile".into(), "AtlasTile".into(),
"PathInputIndex".into(), "PathRasterizationInputIndex".into(),
"PathVertex_ScaledPixels".into(), "PathVertex_ScaledPixels".into(),
"ShadowInputIndex".into(), "ShadowInputIndex".into(),
"Shadow".into(), "Shadow".into(),

View file

@ -1,13 +1,9 @@
use gpui::{ use gpui::{
Application, Background, Bounds, ColorSpace, Context, MouseDownEvent, Path, PathBuilder, Application, Background, Bounds, ColorSpace, Context, MouseDownEvent, Path, PathBuilder,
PathStyle, Pixels, Point, Render, SharedString, StrokeOptions, Window, WindowBounds, PathStyle, Pixels, Point, Render, SharedString, StrokeOptions, Window, WindowOptions, canvas,
WindowOptions, canvas, div, linear_color_stop, linear_gradient, point, prelude::*, px, rgb, div, linear_color_stop, linear_gradient, point, prelude::*, px, rgb, size,
size,
}; };
const DEFAULT_WINDOW_WIDTH: Pixels = px(1024.0);
const DEFAULT_WINDOW_HEIGHT: Pixels = px(768.0);
struct PaintingViewer { struct PaintingViewer {
default_lines: Vec<(Path<Pixels>, Background)>, default_lines: Vec<(Path<Pixels>, Background)>,
lines: Vec<Vec<Point<Pixels>>>, lines: Vec<Vec<Point<Pixels>>>,
@ -151,6 +147,8 @@ impl PaintingViewer {
px(320.0 + (i as f32 * 10.0).sin() * 40.0), px(320.0 + (i as f32 * 10.0).sin() * 40.0),
)); ));
} }
let path = builder.build().unwrap();
lines.push((path, gpui::green().into()));
Self { Self {
default_lines: lines.clone(), default_lines: lines.clone(),
@ -185,13 +183,9 @@ fn button(
} }
impl Render for PaintingViewer { impl Render for PaintingViewer {
fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement { fn render(&mut self, _: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
window.request_animation_frame();
let default_lines = self.default_lines.clone(); let default_lines = self.default_lines.clone();
let lines = self.lines.clone(); let lines = self.lines.clone();
let window_size = window.bounds().size;
let scale = window_size.width / DEFAULT_WINDOW_WIDTH;
let dashed = self.dashed; let dashed = self.dashed;
div() div()
@ -228,7 +222,7 @@ impl Render for PaintingViewer {
move |_, _, _| {}, move |_, _, _| {},
move |_, _, window, _| { move |_, _, window, _| {
for (path, color) in default_lines { for (path, color) in default_lines {
window.paint_path(path.clone().scale(scale), color); window.paint_path(path, color);
} }
for points in lines { for points in lines {
@ -304,11 +298,6 @@ fn main() {
cx.open_window( cx.open_window(
WindowOptions { WindowOptions {
focus: true, focus: true,
window_bounds: Some(WindowBounds::Windowed(Bounds::centered(
None,
size(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT),
cx,
))),
..Default::default() ..Default::default()
}, },
|window, cx| cx.new(|cx| PaintingViewer::new(window, cx)), |window, cx| cx.new(|cx| PaintingViewer::new(window, cx)),

View file

@ -448,15 +448,23 @@ impl App {
} }
pub(crate) fn update<R>(&mut self, update: impl FnOnce(&mut Self) -> R) -> R { pub(crate) fn update<R>(&mut self, update: impl FnOnce(&mut Self) -> R) -> R {
self.pending_updates += 1; self.start_update();
let result = update(self); let result = update(self);
self.finish_update();
result
}
pub(crate) fn start_update(&mut self) {
self.pending_updates += 1;
}
pub(crate) fn finish_update(&mut self) {
if !self.flushing_effects && self.pending_updates == 1 { if !self.flushing_effects && self.pending_updates == 1 {
self.flushing_effects = true; self.flushing_effects = true;
self.flush_effects(); self.flush_effects();
self.flushing_effects = false; self.flushing_effects = false;
} }
self.pending_updates -= 1; self.pending_updates -= 1;
result
} }
/// Arrange a callback to be invoked when the given entity calls `notify` on its respective context. /// Arrange a callback to be invoked when the given entity calls `notify` on its respective context.
@ -868,7 +876,6 @@ impl App {
loop { loop {
self.release_dropped_entities(); self.release_dropped_entities();
self.release_dropped_focus_handles(); self.release_dropped_focus_handles();
if let Some(effect) = self.pending_effects.pop_front() { if let Some(effect) = self.pending_effects.pop_front() {
match effect { match effect {
Effect::Notify { emitter } => { Effect::Notify { emitter } => {
@ -1819,6 +1826,13 @@ impl AppContext for App {
}) })
} }
fn as_mut<'a, T>(&'a mut self, handle: &Entity<T>) -> GpuiBorrow<'a, T>
where
T: 'static,
{
GpuiBorrow::new(handle.clone(), self)
}
fn read_entity<T, R>( fn read_entity<T, R>(
&self, &self,
handle: &Entity<T>, handle: &Entity<T>,
@ -2007,6 +2021,10 @@ impl HttpClient for NullHttpClient {
.boxed() .boxed()
} }
fn user_agent(&self) -> Option<&http_client::http::HeaderValue> {
None
}
fn proxy(&self) -> Option<&Url> { fn proxy(&self) -> Option<&Url> {
None None
} }
@ -2015,3 +2033,79 @@ impl HttpClient for NullHttpClient {
type_name::<Self>() type_name::<Self>()
} }
} }
/// A mutable reference to an entity owned by GPUI
pub struct GpuiBorrow<'a, T> {
inner: Option<Lease<T>>,
app: &'a mut App,
}
impl<'a, T: 'static> GpuiBorrow<'a, T> {
fn new(inner: Entity<T>, app: &'a mut App) -> Self {
app.start_update();
let lease = app.entities.lease(&inner);
Self {
inner: Some(lease),
app,
}
}
}
impl<'a, T: 'static> std::borrow::Borrow<T> for GpuiBorrow<'a, T> {
fn borrow(&self) -> &T {
self.inner.as_ref().unwrap().borrow()
}
}
impl<'a, T: 'static> std::borrow::BorrowMut<T> for GpuiBorrow<'a, T> {
fn borrow_mut(&mut self) -> &mut T {
self.inner.as_mut().unwrap().borrow_mut()
}
}
impl<'a, T> Drop for GpuiBorrow<'a, T> {
fn drop(&mut self) {
let lease = self.inner.take().unwrap();
self.app.notify(lease.id);
self.app.entities.end_lease(lease);
self.app.finish_update();
}
}
#[cfg(test)]
mod test {
use std::{cell::RefCell, rc::Rc};
use crate::{AppContext, TestAppContext};
#[test]
fn test_gpui_borrow() {
let cx = TestAppContext::single();
let observation_count = Rc::new(RefCell::new(0));
let state = cx.update(|cx| {
let state = cx.new(|_| false);
cx.observe(&state, {
let observation_count = observation_count.clone();
move |_, _| {
let mut count = observation_count.borrow_mut();
*count += 1;
}
})
.detach();
state
});
cx.update(|cx| {
// Use the fully qualified call so that we don't clobber the borrow_mut above
*std::borrow::BorrowMut::borrow_mut(&mut state.as_mut(cx)) = true;
});
cx.update(|cx| {
state.write(cx, false);
});
assert_eq!(*observation_count.borrow(), 2);
}
}
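GpuiBorrow, added above, is a drop guard: Entity::as_mut leases the entity out of the entity map and hands back a BorrowMut-able wrapper, and on drop it notifies observers, returns the lease, and finishes the pending update. A framework-free sketch of the same guard shape; Store, BorrowGuard, and the notify counter are illustrative, not gpui's types.

// Illustrative types only, not gpui's: a guard that hands out mutable access and
// performs "notify" bookkeeping when the borrow ends.
struct Store {
    value: i32,
    notify_count: usize,
}

struct BorrowGuard<'a> {
    value: &'a mut i32,
    notify_count: &'a mut usize,
}

impl std::ops::Deref for BorrowGuard<'_> {
    type Target = i32;
    fn deref(&self) -> &i32 {
        &*self.value
    }
}

impl std::ops::DerefMut for BorrowGuard<'_> {
    fn deref_mut(&mut self) -> &mut i32 {
        &mut *self.value
    }
}

impl Drop for BorrowGuard<'_> {
    fn drop(&mut self) {
        // Like GpuiBorrow, observers are only notified once the borrow ends.
        *self.notify_count += 1;
    }
}

impl Store {
    fn as_mut(&mut self) -> BorrowGuard<'_> {
        BorrowGuard {
            value: &mut self.value,
            notify_count: &mut self.notify_count,
        }
    }
}

fn main() {
    let mut store = Store { value: 0, notify_count: 0 };
    {
        let mut guard = store.as_mut();
        *guard = 42; // mutate through DerefMut
    } // guard dropped here; this is when the "notify" happens
    assert_eq!(store.value, 42);
    assert_eq!(store.notify_count, 1);
}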

View file

@ -3,7 +3,7 @@ use crate::{
Entity, EventEmitter, Focusable, ForegroundExecutor, Global, PromptButton, PromptLevel, Render, Entity, EventEmitter, Focusable, ForegroundExecutor, Global, PromptButton, PromptLevel, Render,
Reservation, Result, Subscription, Task, VisualContext, Window, WindowHandle, Reservation, Result, Subscription, Task, VisualContext, Window, WindowHandle,
}; };
use anyhow::Context as _; use anyhow::{Context as _, anyhow};
use derive_more::{Deref, DerefMut}; use derive_more::{Deref, DerefMut};
use futures::channel::oneshot; use futures::channel::oneshot;
use std::{future::Future, rc::Weak}; use std::{future::Future, rc::Weak};
@ -58,6 +58,15 @@ impl AppContext for AsyncApp {
Ok(app.update_entity(handle, update)) Ok(app.update_entity(handle, update))
} }
fn as_mut<'a, T>(&'a mut self, _handle: &Entity<T>) -> Self::Result<super::GpuiBorrow<'a, T>>
where
T: 'static,
{
Err(anyhow!(
"Cannot as_mut with an async context. Try calling update() first"
))
}
fn read_entity<T, R>( fn read_entity<T, R>(
&self, &self,
handle: &Entity<T>, handle: &Entity<T>,
@ -364,6 +373,15 @@ impl AppContext for AsyncWindowContext {
.update(self, |_, _, cx| cx.update_entity(handle, update)) .update(self, |_, _, cx| cx.update_entity(handle, update))
} }
fn as_mut<'a, T>(&'a mut self, _: &Entity<T>) -> Self::Result<super::GpuiBorrow<'a, T>>
where
T: 'static,
{
Err(anyhow!(
"Cannot use as_mut() from an async context, call `update`"
))
}
fn read_entity<T, R>( fn read_entity<T, R>(
&self, &self,
handle: &Entity<T>, handle: &Entity<T>,

View file

@ -726,6 +726,13 @@ impl<T> AppContext for Context<'_, T> {
self.app.update_entity(handle, update) self.app.update_entity(handle, update)
} }
fn as_mut<'a, E>(&'a mut self, handle: &Entity<E>) -> Self::Result<super::GpuiBorrow<'a, E>>
where
E: 'static,
{
self.app.as_mut(handle)
}
fn read_entity<U, R>( fn read_entity<U, R>(
&self, &self,
handle: &Entity<U>, handle: &Entity<U>,

View file

@ -1,4 +1,4 @@
use crate::{App, AppContext, VisualContext, Window, seal::Sealed}; use crate::{App, AppContext, GpuiBorrow, VisualContext, Window, seal::Sealed};
use anyhow::{Context as _, Result}; use anyhow::{Context as _, Result};
use collections::FxHashSet; use collections::FxHashSet;
use derive_more::{Deref, DerefMut}; use derive_more::{Deref, DerefMut};
@ -105,7 +105,7 @@ impl EntityMap {
/// Move an entity to the stack. /// Move an entity to the stack.
#[track_caller] #[track_caller]
pub fn lease<'a, T>(&mut self, pointer: &'a Entity<T>) -> Lease<'a, T> { pub fn lease<T>(&mut self, pointer: &Entity<T>) -> Lease<T> {
self.assert_valid_context(pointer); self.assert_valid_context(pointer);
let mut accessed_entities = self.accessed_entities.borrow_mut(); let mut accessed_entities = self.accessed_entities.borrow_mut();
accessed_entities.insert(pointer.entity_id); accessed_entities.insert(pointer.entity_id);
@ -117,15 +117,14 @@ impl EntityMap {
); );
Lease { Lease {
entity, entity,
pointer, id: pointer.entity_id,
entity_type: PhantomData, entity_type: PhantomData,
} }
} }
/// Returns an entity after moving it to the stack. /// Returns an entity after moving it to the stack.
pub fn end_lease<T>(&mut self, mut lease: Lease<T>) { pub fn end_lease<T>(&mut self, mut lease: Lease<T>) {
self.entities self.entities.insert(lease.id, lease.entity.take().unwrap());
.insert(lease.pointer.entity_id, lease.entity.take().unwrap());
} }
pub fn read<T: 'static>(&self, entity: &Entity<T>) -> &T { pub fn read<T: 'static>(&self, entity: &Entity<T>) -> &T {
@ -187,13 +186,13 @@ fn double_lease_panic<T>(operation: &str) -> ! {
) )
} }
pub(crate) struct Lease<'a, T> { pub(crate) struct Lease<T> {
entity: Option<Box<dyn Any>>, entity: Option<Box<dyn Any>>,
pub pointer: &'a Entity<T>, pub id: EntityId,
entity_type: PhantomData<T>, entity_type: PhantomData<T>,
} }
impl<T: 'static> core::ops::Deref for Lease<'_, T> { impl<T: 'static> core::ops::Deref for Lease<T> {
type Target = T; type Target = T;
fn deref(&self) -> &Self::Target { fn deref(&self) -> &Self::Target {
@ -201,13 +200,13 @@ impl<T: 'static> core::ops::Deref for Lease<'_, T> {
} }
} }
impl<T: 'static> core::ops::DerefMut for Lease<'_, T> { impl<T: 'static> core::ops::DerefMut for Lease<T> {
fn deref_mut(&mut self) -> &mut Self::Target { fn deref_mut(&mut self) -> &mut Self::Target {
self.entity.as_mut().unwrap().downcast_mut().unwrap() self.entity.as_mut().unwrap().downcast_mut().unwrap()
} }
} }
impl<T> Drop for Lease<'_, T> { impl<T> Drop for Lease<T> {
fn drop(&mut self) { fn drop(&mut self) {
if self.entity.is_some() && !panicking() { if self.entity.is_some() && !panicking() {
panic!("Leases must be ended with EntityMap::end_lease") panic!("Leases must be ended with EntityMap::end_lease")
@ -437,6 +436,19 @@ impl<T: 'static> Entity<T> {
cx.update_entity(self, update) cx.update_entity(self, update)
} }
/// Returns a mutable borrow of the entity referenced by this handle.
pub fn as_mut<'a, C: AppContext>(&self, cx: &'a mut C) -> C::Result<GpuiBorrow<'a, T>> {
cx.as_mut(self)
}
/// Replaces the value of the entity referenced by this handle and notifies observers.
pub fn write<C: AppContext>(&self, cx: &mut C, value: T) -> C::Result<()> {
self.update(cx, |entity, cx| {
*entity = value;
cx.notify();
})
}
/// Updates the entity referenced by this handle with the given function if /// Updates the entity referenced by this handle with the given function if
/// the referenced entity still exists, within a visual context that has a window. /// the referenced entity still exists, within a visual context that has a window.
/// Returns an error if the entity has been released. /// Returns an error if the entity has been released.

View file

@ -9,6 +9,7 @@ use crate::{
}; };
use anyhow::{anyhow, bail}; use anyhow::{anyhow, bail};
use futures::{Stream, StreamExt, channel::oneshot}; use futures::{Stream, StreamExt, channel::oneshot};
use rand::{SeedableRng, rngs::StdRng};
use std::{cell::RefCell, future::Future, ops::Deref, rc::Rc, sync::Arc, time::Duration}; use std::{cell::RefCell, future::Future, ops::Deref, rc::Rc, sync::Arc, time::Duration};
/// A TestAppContext is provided to tests created with `#[gpui::test]`, it provides /// A TestAppContext is provided to tests created with `#[gpui::test]`, it provides
@ -63,6 +64,13 @@ impl AppContext for TestAppContext {
app.update_entity(handle, update) app.update_entity(handle, update)
} }
fn as_mut<'a, T>(&'a mut self, _: &Entity<T>) -> Self::Result<super::GpuiBorrow<'a, T>>
where
T: 'static,
{
panic!("Cannot use as_mut with a test app context. Try calling update() first")
}
fn read_entity<T, R>( fn read_entity<T, R>(
&self, &self,
handle: &Entity<T>, handle: &Entity<T>,
@ -134,6 +142,12 @@ impl TestAppContext {
} }
} }
/// Create a single TestAppContext, for non-multi-client tests
pub fn single() -> Self {
let dispatcher = TestDispatcher::new(StdRng::from_entropy());
Self::build(dispatcher, None)
}
/// The name of the test function that created this `TestAppContext` /// The name of the test function that created this `TestAppContext`
pub fn test_function_name(&self) -> Option<&'static str> { pub fn test_function_name(&self) -> Option<&'static str> {
self.fn_name self.fn_name
@ -914,6 +928,13 @@ impl AppContext for VisualTestContext {
self.cx.update_entity(handle, update) self.cx.update_entity(handle, update)
} }
fn as_mut<'a, T>(&'a mut self, handle: &Entity<T>) -> Self::Result<super::GpuiBorrow<'a, T>>
where
T: 'static,
{
self.cx.as_mut(handle)
}
fn read_entity<T, R>( fn read_entity<T, R>(
&self, &self,
handle: &Entity<T>, handle: &Entity<T>,

View file

@ -39,7 +39,7 @@ use crate::{
use derive_more::{Deref, DerefMut}; use derive_more::{Deref, DerefMut};
pub(crate) use smallvec::SmallVec; pub(crate) use smallvec::SmallVec;
use std::{ use std::{
any::Any, any::{Any, type_name},
fmt::{self, Debug, Display}, fmt::{self, Debug, Display},
mem, panic, mem, panic,
}; };
@ -220,14 +220,17 @@ impl<C: RenderOnce> Element for Component<C> {
window: &mut Window, window: &mut Window,
cx: &mut App, cx: &mut App,
) -> (LayoutId, Self::RequestLayoutState) { ) -> (LayoutId, Self::RequestLayoutState) {
let mut element = self window.with_global_id(ElementId::Name(type_name::<C>().into()), |_, window| {
.component let mut element = self
.take() .component
.unwrap() .take()
.render(window, cx) .unwrap()
.into_any_element(); .render(window, cx)
let layout_id = element.request_layout(window, cx); .into_any_element();
(layout_id, element)
let layout_id = element.request_layout(window, cx);
(layout_id, element)
})
} }
fn prepaint( fn prepaint(
@ -239,7 +242,9 @@ impl<C: RenderOnce> Element for Component<C> {
window: &mut Window, window: &mut Window,
cx: &mut App, cx: &mut App,
) { ) {
element.prepaint(window, cx); window.with_global_id(ElementId::Name(type_name::<C>().into()), |_, window| {
element.prepaint(window, cx);
})
} }
fn paint( fn paint(
@ -252,7 +257,9 @@ impl<C: RenderOnce> Element for Component<C> {
window: &mut Window, window: &mut Window,
cx: &mut App, cx: &mut App,
) { ) {
element.paint(window, cx); window.with_global_id(ElementId::Name(type_name::<C>().into()), |_, window| {
element.paint(window, cx);
})
} }
} }

View file

@ -197,6 +197,11 @@ pub trait AppContext {
where where
T: 'static; T: 'static;
/// Obtain a mutable borrow of an entity in the app context.
fn as_mut<'a, T>(&'a mut self, handle: &Entity<T>) -> Self::Result<GpuiBorrow<'a, T>>
where
T: 'static;
/// Read a entity from the app context. /// Read a entity from the app context.
fn read_entity<T, R>( fn read_entity<T, R>(
&self, &self,

View file

@ -336,7 +336,10 @@ impl PathBuilder {
let v1 = buf.vertices[i1]; let v1 = buf.vertices[i1];
let v2 = buf.vertices[i2]; let v2 = buf.vertices[i2];
path.push_triangle((v0.into(), v1.into(), v2.into())); path.push_triangle(
(v0.into(), v1.into(), v2.into()),
(point(0., 1.), point(0., 1.), point(0., 1.)),
);
} }
path path

View file

@ -794,6 +794,7 @@ pub(crate) struct AtlasTextureId {
pub(crate) enum AtlasTextureKind { pub(crate) enum AtlasTextureKind {
Monochrome = 0, Monochrome = 0,
Polychrome = 1, Polychrome = 1,
Path = 2,
} }
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)] #[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]

View file

@ -10,6 +10,8 @@ use etagere::BucketedAtlasAllocator;
use parking_lot::Mutex; use parking_lot::Mutex;
use std::{borrow::Cow, ops, sync::Arc}; use std::{borrow::Cow, ops, sync::Arc};
pub(crate) const PATH_TEXTURE_FORMAT: gpu::TextureFormat = gpu::TextureFormat::R16Float;
pub(crate) struct BladeAtlas(Mutex<BladeAtlasState>); pub(crate) struct BladeAtlas(Mutex<BladeAtlasState>);
struct PendingUpload { struct PendingUpload {
@ -25,6 +27,7 @@ struct BladeAtlasState {
tiles_by_key: FxHashMap<AtlasKey, AtlasTile>, tiles_by_key: FxHashMap<AtlasKey, AtlasTile>,
initializations: Vec<AtlasTextureId>, initializations: Vec<AtlasTextureId>,
uploads: Vec<PendingUpload>, uploads: Vec<PendingUpload>,
path_sample_count: u32,
} }
#[cfg(gles)] #[cfg(gles)]
@ -38,13 +41,13 @@ impl BladeAtlasState {
} }
pub struct BladeTextureInfo { pub struct BladeTextureInfo {
#[allow(dead_code)]
pub size: gpu::Extent, pub size: gpu::Extent,
pub raw_view: gpu::TextureView, pub raw_view: gpu::TextureView,
pub msaa_view: Option<gpu::TextureView>,
} }
impl BladeAtlas { impl BladeAtlas {
pub(crate) fn new(gpu: &Arc<gpu::Context>) -> Self { pub(crate) fn new(gpu: &Arc<gpu::Context>, path_sample_count: u32) -> Self {
BladeAtlas(Mutex::new(BladeAtlasState { BladeAtlas(Mutex::new(BladeAtlasState {
gpu: Arc::clone(gpu), gpu: Arc::clone(gpu),
upload_belt: BufferBelt::new(BufferBeltDescriptor { upload_belt: BufferBelt::new(BufferBeltDescriptor {
@ -56,6 +59,7 @@ impl BladeAtlas {
tiles_by_key: Default::default(), tiles_by_key: Default::default(),
initializations: Vec::new(), initializations: Vec::new(),
uploads: Vec::new(), uploads: Vec::new(),
path_sample_count,
})) }))
} }
@ -63,7 +67,6 @@ impl BladeAtlas {
self.0.lock().destroy(); self.0.lock().destroy();
} }
#[allow(dead_code)]
pub(crate) fn clear_textures(&self, texture_kind: AtlasTextureKind) { pub(crate) fn clear_textures(&self, texture_kind: AtlasTextureKind) {
let mut lock = self.0.lock(); let mut lock = self.0.lock();
let textures = &mut lock.storage[texture_kind]; let textures = &mut lock.storage[texture_kind];
@ -72,6 +75,19 @@ impl BladeAtlas {
} }
} }
/// Allocate a rectangle and make it available for rendering immediately (without waiting for `before_frame`)
pub fn allocate_for_rendering(
&self,
size: Size<DevicePixels>,
texture_kind: AtlasTextureKind,
gpu_encoder: &mut gpu::CommandEncoder,
) -> AtlasTile {
let mut lock = self.0.lock();
let tile = lock.allocate(size, texture_kind);
lock.flush_initializations(gpu_encoder);
tile
}
pub fn before_frame(&self, gpu_encoder: &mut gpu::CommandEncoder) { pub fn before_frame(&self, gpu_encoder: &mut gpu::CommandEncoder) {
let mut lock = self.0.lock(); let mut lock = self.0.lock();
lock.flush(gpu_encoder); lock.flush(gpu_encoder);
@ -93,6 +109,7 @@ impl BladeAtlas {
depth: 1, depth: 1,
}, },
raw_view: texture.raw_view, raw_view: texture.raw_view,
msaa_view: texture.msaa_view,
} }
} }
} }
@ -183,8 +200,48 @@ impl BladeAtlasState {
format = gpu::TextureFormat::Bgra8UnormSrgb; format = gpu::TextureFormat::Bgra8UnormSrgb;
usage = gpu::TextureUsage::COPY | gpu::TextureUsage::RESOURCE; usage = gpu::TextureUsage::COPY | gpu::TextureUsage::RESOURCE;
} }
AtlasTextureKind::Path => {
format = PATH_TEXTURE_FORMAT;
usage = gpu::TextureUsage::COPY
| gpu::TextureUsage::RESOURCE
| gpu::TextureUsage::TARGET;
}
} }
// We currently only enable MSAA for path textures.
let (msaa, msaa_view) = if self.path_sample_count > 1 && kind == AtlasTextureKind::Path {
let msaa = self.gpu.create_texture(gpu::TextureDesc {
name: "msaa path texture",
format,
size: gpu::Extent {
width: size.width.into(),
height: size.height.into(),
depth: 1,
},
array_layer_count: 1,
mip_level_count: 1,
sample_count: self.path_sample_count,
dimension: gpu::TextureDimension::D2,
usage: gpu::TextureUsage::TARGET,
external: None,
});
(
Some(msaa),
Some(self.gpu.create_texture_view(
msaa,
gpu::TextureViewDesc {
name: "msaa texture view",
format,
dimension: gpu::ViewDimension::D2,
subresources: &Default::default(),
},
)),
)
} else {
(None, None)
};
let raw = self.gpu.create_texture(gpu::TextureDesc { let raw = self.gpu.create_texture(gpu::TextureDesc {
name: "atlas", name: "atlas",
format, format,
@ -222,6 +279,8 @@ impl BladeAtlasState {
format, format,
raw, raw,
raw_view, raw_view,
msaa,
msaa_view,
live_atlas_keys: 0, live_atlas_keys: 0,
}; };
@ -281,6 +340,7 @@ impl BladeAtlasState {
struct BladeAtlasStorage { struct BladeAtlasStorage {
monochrome_textures: AtlasTextureList<BladeAtlasTexture>, monochrome_textures: AtlasTextureList<BladeAtlasTexture>,
polychrome_textures: AtlasTextureList<BladeAtlasTexture>, polychrome_textures: AtlasTextureList<BladeAtlasTexture>,
path_textures: AtlasTextureList<BladeAtlasTexture>,
} }
impl ops::Index<AtlasTextureKind> for BladeAtlasStorage { impl ops::Index<AtlasTextureKind> for BladeAtlasStorage {
@ -289,6 +349,7 @@ impl ops::Index<AtlasTextureKind> for BladeAtlasStorage {
match kind { match kind {
crate::AtlasTextureKind::Monochrome => &self.monochrome_textures, crate::AtlasTextureKind::Monochrome => &self.monochrome_textures,
crate::AtlasTextureKind::Polychrome => &self.polychrome_textures, crate::AtlasTextureKind::Polychrome => &self.polychrome_textures,
crate::AtlasTextureKind::Path => &self.path_textures,
} }
} }
} }
@ -298,6 +359,7 @@ impl ops::IndexMut<AtlasTextureKind> for BladeAtlasStorage {
match kind { match kind {
crate::AtlasTextureKind::Monochrome => &mut self.monochrome_textures, crate::AtlasTextureKind::Monochrome => &mut self.monochrome_textures,
crate::AtlasTextureKind::Polychrome => &mut self.polychrome_textures, crate::AtlasTextureKind::Polychrome => &mut self.polychrome_textures,
crate::AtlasTextureKind::Path => &mut self.path_textures,
} }
} }
} }
@ -308,6 +370,7 @@ impl ops::Index<AtlasTextureId> for BladeAtlasStorage {
let textures = match id.kind { let textures = match id.kind {
crate::AtlasTextureKind::Monochrome => &self.monochrome_textures, crate::AtlasTextureKind::Monochrome => &self.monochrome_textures,
crate::AtlasTextureKind::Polychrome => &self.polychrome_textures, crate::AtlasTextureKind::Polychrome => &self.polychrome_textures,
crate::AtlasTextureKind::Path => &self.path_textures,
}; };
textures[id.index as usize].as_ref().unwrap() textures[id.index as usize].as_ref().unwrap()
} }
@ -321,6 +384,9 @@ impl BladeAtlasStorage {
for mut texture in self.polychrome_textures.drain().flatten() { for mut texture in self.polychrome_textures.drain().flatten() {
texture.destroy(gpu); texture.destroy(gpu);
} }
for mut texture in self.path_textures.drain().flatten() {
texture.destroy(gpu);
}
} }
} }
@ -329,6 +395,8 @@ struct BladeAtlasTexture {
allocator: BucketedAtlasAllocator, allocator: BucketedAtlasAllocator,
raw: gpu::Texture, raw: gpu::Texture,
raw_view: gpu::TextureView, raw_view: gpu::TextureView,
msaa: Option<gpu::Texture>,
msaa_view: Option<gpu::TextureView>,
format: gpu::TextureFormat, format: gpu::TextureFormat,
live_atlas_keys: u32, live_atlas_keys: u32,
} }
@ -356,6 +424,12 @@ impl BladeAtlasTexture {
fn destroy(&mut self, gpu: &gpu::Context) { fn destroy(&mut self, gpu: &gpu::Context) {
gpu.destroy_texture(self.raw); gpu.destroy_texture(self.raw);
gpu.destroy_texture_view(self.raw_view); gpu.destroy_texture_view(self.raw_view);
if let Some(msaa) = self.msaa {
gpu.destroy_texture(msaa);
}
if let Some(msaa_view) = self.msaa_view {
gpu.destroy_texture_view(msaa_view);
}
} }
fn bytes_per_pixel(&self) -> u8 { fn bytes_per_pixel(&self) -> u8 {

View file

@ -1,19 +1,24 @@
// Doing `if let` gives you nice scoping with passes/encoders // Doing `if let` gives you nice scoping with passes/encoders
#![allow(irrefutable_let_patterns)] #![allow(irrefutable_let_patterns)]
use super::{BladeAtlas, BladeContext}; use super::{BladeAtlas, BladeContext, PATH_TEXTURE_FORMAT};
use crate::{ use crate::{
Background, Bounds, ContentMask, DevicePixels, GpuSpecs, MonochromeSprite, PathVertex, AtlasTextureKind, AtlasTile, Background, Bounds, ContentMask, DevicePixels, GpuSpecs,
PolychromeSprite, PrimitiveBatch, Quad, ScaledPixels, Scene, Shadow, Size, Underline, MonochromeSprite, Path, PathId, PathVertex, PolychromeSprite, PrimitiveBatch, Quad,
ScaledPixels, Scene, Shadow, Size, Underline,
}; };
use blade_graphics::{self as gpu}; use blade_graphics as gpu;
use blade_util::{BufferBelt, BufferBeltDescriptor}; use blade_util::{BufferBelt, BufferBeltDescriptor};
use bytemuck::{Pod, Zeroable}; use bytemuck::{Pod, Zeroable};
use collections::HashMap;
#[cfg(target_os = "macos")] #[cfg(target_os = "macos")]
use media::core_video::CVMetalTextureCache; use media::core_video::CVMetalTextureCache;
use std::{mem, sync::Arc}; use std::{mem, sync::Arc};
const MAX_FRAME_TIME_MS: u32 = 10000; const MAX_FRAME_TIME_MS: u32 = 10000;
// Use 4x MSAA; all devices support it.
// https://developer.apple.com/documentation/metal/mtldevice/1433355-supportstexturesamplecount
const DEFAULT_PATH_SAMPLE_COUNT: u32 = 4;
#[repr(C)] #[repr(C)]
#[derive(Clone, Copy, Pod, Zeroable)] #[derive(Clone, Copy, Pod, Zeroable)]
@ -61,9 +66,16 @@ struct ShaderShadowsData {
} }
#[derive(blade_macros::ShaderData)] #[derive(blade_macros::ShaderData)]
struct ShaderPathsData { struct ShaderPathRasterizationData {
globals: GlobalParams, globals: GlobalParams,
b_path_vertices: gpu::BufferPiece, b_path_vertices: gpu::BufferPiece,
}
#[derive(blade_macros::ShaderData)]
struct ShaderPathsData {
globals: GlobalParams,
t_sprite: gpu::TextureView,
s_sprite: gpu::Sampler,
b_path_sprites: gpu::BufferPiece, b_path_sprites: gpu::BufferPiece,
} }
@ -103,27 +115,13 @@ struct ShaderSurfacesData {
struct PathSprite { struct PathSprite {
bounds: Bounds<ScaledPixels>, bounds: Bounds<ScaledPixels>,
color: Background, color: Background,
} tile: AtlasTile,
/// Argument buffer layout for `draw_indirect` commands.
#[repr(C)]
#[derive(Copy, Clone, Debug, Default, Pod, Zeroable)]
pub struct DrawIndirectArgs {
/// The number of vertices to draw.
pub vertex_count: u32,
/// The number of instances to draw.
pub instance_count: u32,
/// The Index of the first vertex to draw.
pub first_vertex: u32,
/// The instance ID of the first instance to draw.
///
/// Has to be 0, unless [`Features::INDIRECT_FIRST_INSTANCE`](crate::Features::INDIRECT_FIRST_INSTANCE) is enabled.
pub first_instance: u32,
} }
struct BladePipelines { struct BladePipelines {
quads: gpu::RenderPipeline, quads: gpu::RenderPipeline,
shadows: gpu::RenderPipeline, shadows: gpu::RenderPipeline,
path_rasterization: gpu::RenderPipeline,
paths: gpu::RenderPipeline, paths: gpu::RenderPipeline,
underlines: gpu::RenderPipeline, underlines: gpu::RenderPipeline,
mono_sprites: gpu::RenderPipeline, mono_sprites: gpu::RenderPipeline,
@ -132,7 +130,7 @@ struct BladePipelines {
} }
impl BladePipelines { impl BladePipelines {
fn new(gpu: &gpu::Context, surface_info: gpu::SurfaceInfo, sample_count: u32) -> Self { fn new(gpu: &gpu::Context, surface_info: gpu::SurfaceInfo, path_sample_count: u32) -> Self {
use gpu::ShaderData as _; use gpu::ShaderData as _;
log::info!( log::info!(
@ -180,10 +178,7 @@ impl BladePipelines {
depth_stencil: None, depth_stencil: None,
fragment: Some(shader.at("fs_quad")), fragment: Some(shader.at("fs_quad")),
color_targets, color_targets,
multisample_state: gpu::MultisampleState { multisample_state: gpu::MultisampleState::default(),
sample_count,
..Default::default()
},
}), }),
shadows: gpu.create_render_pipeline(gpu::RenderPipelineDesc { shadows: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "shadows", name: "shadows",
@ -197,8 +192,26 @@ impl BladePipelines {
depth_stencil: None, depth_stencil: None,
fragment: Some(shader.at("fs_shadow")), fragment: Some(shader.at("fs_shadow")),
color_targets, color_targets,
multisample_state: gpu::MultisampleState::default(),
}),
path_rasterization: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "path_rasterization",
data_layouts: &[&ShaderPathRasterizationData::layout()],
vertex: shader.at("vs_path_rasterization"),
vertex_fetches: &[],
primitive: gpu::PrimitiveState {
topology: gpu::PrimitiveTopology::TriangleList,
..Default::default()
},
depth_stencil: None,
fragment: Some(shader.at("fs_path_rasterization")),
color_targets: &[gpu::ColorTargetState {
format: PATH_TEXTURE_FORMAT,
blend: Some(gpu::BlendState::ADDITIVE),
write_mask: gpu::ColorWrites::default(),
}],
multisample_state: gpu::MultisampleState { multisample_state: gpu::MultisampleState {
sample_count, sample_count: path_sample_count,
..Default::default() ..Default::default()
}, },
}), }),
@ -208,16 +221,13 @@ impl BladePipelines {
vertex: shader.at("vs_path"), vertex: shader.at("vs_path"),
vertex_fetches: &[], vertex_fetches: &[],
primitive: gpu::PrimitiveState { primitive: gpu::PrimitiveState {
topology: gpu::PrimitiveTopology::TriangleList, topology: gpu::PrimitiveTopology::TriangleStrip,
..Default::default() ..Default::default()
}, },
depth_stencil: None, depth_stencil: None,
fragment: Some(shader.at("fs_path")), fragment: Some(shader.at("fs_path")),
color_targets, color_targets,
multisample_state: gpu::MultisampleState { multisample_state: gpu::MultisampleState::default(),
sample_count,
..Default::default()
},
}), }),
underlines: gpu.create_render_pipeline(gpu::RenderPipelineDesc { underlines: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "underlines", name: "underlines",
@ -231,10 +241,7 @@ impl BladePipelines {
depth_stencil: None, depth_stencil: None,
fragment: Some(shader.at("fs_underline")), fragment: Some(shader.at("fs_underline")),
color_targets, color_targets,
multisample_state: gpu::MultisampleState { multisample_state: gpu::MultisampleState::default(),
sample_count,
..Default::default()
},
}), }),
mono_sprites: gpu.create_render_pipeline(gpu::RenderPipelineDesc { mono_sprites: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "mono-sprites", name: "mono-sprites",
@ -248,10 +255,7 @@ impl BladePipelines {
depth_stencil: None, depth_stencil: None,
fragment: Some(shader.at("fs_mono_sprite")), fragment: Some(shader.at("fs_mono_sprite")),
color_targets, color_targets,
multisample_state: gpu::MultisampleState { multisample_state: gpu::MultisampleState::default(),
sample_count,
..Default::default()
},
}), }),
poly_sprites: gpu.create_render_pipeline(gpu::RenderPipelineDesc { poly_sprites: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "poly-sprites", name: "poly-sprites",
@ -265,10 +269,7 @@ impl BladePipelines {
depth_stencil: None, depth_stencil: None,
fragment: Some(shader.at("fs_poly_sprite")), fragment: Some(shader.at("fs_poly_sprite")),
color_targets, color_targets,
multisample_state: gpu::MultisampleState { multisample_state: gpu::MultisampleState::default(),
sample_count,
..Default::default()
},
}), }),
surfaces: gpu.create_render_pipeline(gpu::RenderPipelineDesc { surfaces: gpu.create_render_pipeline(gpu::RenderPipelineDesc {
name: "surfaces", name: "surfaces",
@ -282,10 +283,7 @@ impl BladePipelines {
depth_stencil: None, depth_stencil: None,
fragment: Some(shader.at("fs_surface")), fragment: Some(shader.at("fs_surface")),
color_targets, color_targets,
multisample_state: gpu::MultisampleState { multisample_state: gpu::MultisampleState::default(),
sample_count,
..Default::default()
},
}), }),
} }
} }
@ -293,6 +291,7 @@ impl BladePipelines {
fn destroy(&mut self, gpu: &gpu::Context) { fn destroy(&mut self, gpu: &gpu::Context) {
gpu.destroy_render_pipeline(&mut self.quads); gpu.destroy_render_pipeline(&mut self.quads);
gpu.destroy_render_pipeline(&mut self.shadows); gpu.destroy_render_pipeline(&mut self.shadows);
gpu.destroy_render_pipeline(&mut self.path_rasterization);
gpu.destroy_render_pipeline(&mut self.paths); gpu.destroy_render_pipeline(&mut self.paths);
gpu.destroy_render_pipeline(&mut self.underlines); gpu.destroy_render_pipeline(&mut self.underlines);
gpu.destroy_render_pipeline(&mut self.mono_sprites); gpu.destroy_render_pipeline(&mut self.mono_sprites);
@ -318,13 +317,12 @@ pub struct BladeRenderer {
last_sync_point: Option<gpu::SyncPoint>, last_sync_point: Option<gpu::SyncPoint>,
pipelines: BladePipelines, pipelines: BladePipelines,
instance_belt: BufferBelt, instance_belt: BufferBelt,
path_tiles: HashMap<PathId, AtlasTile>,
atlas: Arc<BladeAtlas>, atlas: Arc<BladeAtlas>,
atlas_sampler: gpu::Sampler, atlas_sampler: gpu::Sampler,
#[cfg(target_os = "macos")] #[cfg(target_os = "macos")]
core_video_texture_cache: CVMetalTextureCache, core_video_texture_cache: CVMetalTextureCache,
sample_count: u32, path_sample_count: u32,
texture_msaa: Option<gpu::Texture>,
texture_view_msaa: Option<gpu::TextureView>,
} }
impl BladeRenderer { impl BladeRenderer {
@ -333,18 +331,6 @@ impl BladeRenderer {
window: &I, window: &I,
config: BladeSurfaceConfig, config: BladeSurfaceConfig,
) -> anyhow::Result<Self> { ) -> anyhow::Result<Self> {
// workaround for https://github.com/zed-industries/zed/issues/26143
let sample_count = std::env::var("ZED_SAMPLE_COUNT")
.ok()
.or_else(|| std::env::var("ZED_PATH_SAMPLE_COUNT").ok())
.and_then(|v| v.parse().ok())
.or_else(|| {
[4, 2, 1]
.into_iter()
.find(|count| context.gpu.supports_texture_sample_count(*count))
})
.unwrap_or(1);
let surface_config = gpu::SurfaceConfig { let surface_config = gpu::SurfaceConfig {
size: config.size, size: config.size,
usage: gpu::TextureUsage::TARGET, usage: gpu::TextureUsage::TARGET,
@ -358,27 +344,22 @@ impl BladeRenderer {
.create_surface_configured(window, surface_config) .create_surface_configured(window, surface_config)
.map_err(|err| anyhow::anyhow!("Failed to create surface: {err:?}"))?; .map_err(|err| anyhow::anyhow!("Failed to create surface: {err:?}"))?;
let (texture_msaa, texture_view_msaa) = create_msaa_texture_if_needed(
&context.gpu,
surface.info().format,
config.size.width,
config.size.height,
sample_count,
)
.unzip();
let command_encoder = context.gpu.create_command_encoder(gpu::CommandEncoderDesc { let command_encoder = context.gpu.create_command_encoder(gpu::CommandEncoderDesc {
name: "main", name: "main",
buffer_count: 2, buffer_count: 2,
}); });
// workaround for https://github.com/zed-industries/zed/issues/26143
let pipelines = BladePipelines::new(&context.gpu, surface.info(), sample_count); let path_sample_count = std::env::var("ZED_PATH_SAMPLE_COUNT")
.ok()
.and_then(|v| v.parse().ok())
.unwrap_or(DEFAULT_PATH_SAMPLE_COUNT);
let pipelines = BladePipelines::new(&context.gpu, surface.info(), path_sample_count);
let instance_belt = BufferBelt::new(BufferBeltDescriptor { let instance_belt = BufferBelt::new(BufferBeltDescriptor {
memory: gpu::Memory::Shared, memory: gpu::Memory::Shared,
min_chunk_size: 0x1000, min_chunk_size: 0x1000,
alignment: 0x40, // Vulkan `minStorageBufferOffsetAlignment` on Intel Xe alignment: 0x40, // Vulkan `minStorageBufferOffsetAlignment` on Intel Xe
}); });
let atlas = Arc::new(BladeAtlas::new(&context.gpu)); let atlas = Arc::new(BladeAtlas::new(&context.gpu, path_sample_count));
let atlas_sampler = context.gpu.create_sampler(gpu::SamplerDesc { let atlas_sampler = context.gpu.create_sampler(gpu::SamplerDesc {
name: "atlas", name: "atlas",
mag_filter: gpu::FilterMode::Linear, mag_filter: gpu::FilterMode::Linear,
@ -402,13 +383,12 @@ impl BladeRenderer {
last_sync_point: None, last_sync_point: None,
pipelines, pipelines,
instance_belt, instance_belt,
path_tiles: HashMap::default(),
atlas, atlas,
atlas_sampler, atlas_sampler,
#[cfg(target_os = "macos")] #[cfg(target_os = "macos")]
core_video_texture_cache, core_video_texture_cache,
sample_count, path_sample_count,
texture_msaa,
texture_view_msaa,
}) })
} }
@ -461,24 +441,6 @@ impl BladeRenderer {
self.surface_config.size = gpu_size; self.surface_config.size = gpu_size;
self.gpu self.gpu
.reconfigure_surface(&mut self.surface, self.surface_config); .reconfigure_surface(&mut self.surface, self.surface_config);
if let Some(texture_msaa) = self.texture_msaa {
self.gpu.destroy_texture(texture_msaa);
}
if let Some(texture_view_msaa) = self.texture_view_msaa {
self.gpu.destroy_texture_view(texture_view_msaa);
}
let (texture_msaa, texture_view_msaa) = create_msaa_texture_if_needed(
&self.gpu,
self.surface.info().format,
gpu_size.width,
gpu_size.height,
self.sample_count,
)
.unzip();
self.texture_msaa = texture_msaa;
self.texture_view_msaa = texture_view_msaa;
} }
} }
@ -489,7 +451,8 @@ impl BladeRenderer {
self.gpu self.gpu
.reconfigure_surface(&mut self.surface, self.surface_config); .reconfigure_surface(&mut self.surface, self.surface_config);
self.pipelines.destroy(&self.gpu); self.pipelines.destroy(&self.gpu);
self.pipelines = BladePipelines::new(&self.gpu, self.surface.info(), self.sample_count); self.pipelines =
BladePipelines::new(&self.gpu, self.surface.info(), self.path_sample_count);
} }
} }
@ -527,6 +490,80 @@ impl BladeRenderer {
objc2::rc::Retained::as_ptr(&self.surface.metal_layer()) as *mut _ objc2::rc::Retained::as_ptr(&self.surface.metal_layer()) as *mut _
} }
#[profiling::function]
fn rasterize_paths(&mut self, paths: &[Path<ScaledPixels>]) {
self.path_tiles.clear();
let mut vertices_by_texture_id = HashMap::default();
for path in paths {
let clipped_bounds = path
.bounds
.intersect(&path.content_mask.bounds)
.map_origin(|origin| origin.floor())
.map_size(|size| size.ceil());
let tile = self.atlas.allocate_for_rendering(
clipped_bounds.size.map(Into::into),
AtlasTextureKind::Path,
&mut self.command_encoder,
);
vertices_by_texture_id
.entry(tile.texture_id)
.or_insert(Vec::new())
.extend(path.vertices.iter().map(|vertex| PathVertex {
xy_position: vertex.xy_position - clipped_bounds.origin
+ tile.bounds.origin.map(Into::into),
st_position: vertex.st_position,
content_mask: ContentMask {
bounds: tile.bounds.map(Into::into),
},
}));
self.path_tiles.insert(path.id, tile);
}
for (texture_id, vertices) in vertices_by_texture_id {
let tex_info = self.atlas.get_texture_info(texture_id);
let globals = GlobalParams {
viewport_size: [tex_info.size.width as f32, tex_info.size.height as f32],
premultiplied_alpha: 0,
pad: 0,
};
let vertex_buf = unsafe { self.instance_belt.alloc_typed(&vertices, &self.gpu) };
let frame_view = tex_info.raw_view;
let color_target = if let Some(msaa_view) = tex_info.msaa_view {
gpu::RenderTarget {
view: msaa_view,
init_op: gpu::InitOp::Clear(gpu::TextureColor::OpaqueBlack),
finish_op: gpu::FinishOp::ResolveTo(frame_view),
}
} else {
gpu::RenderTarget {
view: frame_view,
init_op: gpu::InitOp::Clear(gpu::TextureColor::OpaqueBlack),
finish_op: gpu::FinishOp::Store,
}
};
if let mut pass = self.command_encoder.render(
"paths",
gpu::RenderTargetSet {
colors: &[color_target],
depth_stencil: None,
},
) {
let mut encoder = pass.with(&self.pipelines.path_rasterization);
encoder.bind(
0,
&ShaderPathRasterizationData {
globals,
b_path_vertices: vertex_buf,
},
);
encoder.draw(0, vertices.len() as u32, 0, 1);
}
}
}
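Worth calling out from the new `rasterize_paths` above: each path vertex is translated from window space into the space of the atlas tile that was allocated for it (subtract the clipped path origin, add the tile origin). A minimal sketch of that remap, using a hypothetical `Point` stand-in rather than GPUI's real types:

```rust
// Sketch only: `Point` is a stand-in for GPUI's geometry types.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point { x: f32, y: f32 }

/// Remap a vertex from window coordinates into tile coordinates:
/// subtract the clipped path origin, then add the tile's origin in the atlas.
fn window_to_tile(vertex: Point, clipped_origin: Point, tile_origin: Point) -> Point {
    Point {
        x: vertex.x - clipped_origin.x + tile_origin.x,
        y: vertex.y - clipped_origin.y + tile_origin.y,
    }
}

fn main() {
    // A vertex at (105, 210) inside a path whose clipped bounds start at (100, 200),
    // rendered into a tile allocated at (512, 0) in the atlas texture.
    let p = window_to_tile(
        Point { x: 105.0, y: 210.0 },
        Point { x: 100.0, y: 200.0 },
        Point { x: 512.0, y: 0.0 },
    );
    assert_eq!(p, Point { x: 517.0, y: 10.0 });
}
```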
pub fn destroy(&mut self) { pub fn destroy(&mut self) {
self.wait_for_gpu(); self.wait_for_gpu();
self.atlas.destroy(); self.atlas.destroy();
@ -535,26 +572,17 @@ impl BladeRenderer {
self.gpu.destroy_command_encoder(&mut self.command_encoder); self.gpu.destroy_command_encoder(&mut self.command_encoder);
self.pipelines.destroy(&self.gpu); self.pipelines.destroy(&self.gpu);
self.gpu.destroy_surface(&mut self.surface); self.gpu.destroy_surface(&mut self.surface);
if let Some(texture_msaa) = self.texture_msaa {
self.gpu.destroy_texture(texture_msaa);
}
if let Some(texture_view_msaa) = self.texture_view_msaa {
self.gpu.destroy_texture_view(texture_view_msaa);
}
} }
pub fn draw(&mut self, scene: &Scene) { pub fn draw(&mut self, scene: &Scene) {
self.command_encoder.start(); self.command_encoder.start();
self.atlas.before_frame(&mut self.command_encoder); self.atlas.before_frame(&mut self.command_encoder);
self.rasterize_paths(scene.paths());
let frame = { let frame = {
profiling::scope!("acquire frame"); profiling::scope!("acquire frame");
self.surface.acquire_frame() self.surface.acquire_frame()
}; };
let frame_view = frame.texture_view();
if let Some(texture_msaa) = self.texture_msaa {
self.command_encoder.init_texture(texture_msaa);
}
self.command_encoder.init_texture(frame.texture()); self.command_encoder.init_texture(frame.texture());
let globals = GlobalParams { let globals = GlobalParams {
@ -569,25 +597,14 @@ impl BladeRenderer {
pad: 0, pad: 0,
}; };
let target = if let Some(texture_view_msaa) = self.texture_view_msaa {
gpu::RenderTarget {
view: texture_view_msaa,
init_op: gpu::InitOp::Clear(gpu::TextureColor::TransparentBlack),
finish_op: gpu::FinishOp::ResolveTo(frame_view),
}
} else {
gpu::RenderTarget {
view: frame_view,
init_op: gpu::InitOp::Clear(gpu::TextureColor::TransparentBlack),
finish_op: gpu::FinishOp::Store,
}
};
// draw to the target texture
if let mut pass = self.command_encoder.render( if let mut pass = self.command_encoder.render(
"main", "main",
gpu::RenderTargetSet { gpu::RenderTargetSet {
colors: &[target], colors: &[gpu::RenderTarget {
view: frame.texture_view(),
init_op: gpu::InitOp::Clear(gpu::TextureColor::TransparentBlack),
finish_op: gpu::FinishOp::Store,
}],
depth_stencil: None, depth_stencil: None,
}, },
) { ) {
@ -622,55 +639,32 @@ impl BladeRenderer {
} }
PrimitiveBatch::Paths(paths) => { PrimitiveBatch::Paths(paths) => {
let mut encoder = pass.with(&self.pipelines.paths); let mut encoder = pass.with(&self.pipelines.paths);
// todo(linux): group by texture ID
let mut vertices = Vec::new(); for path in paths {
let mut sprites = Vec::with_capacity(paths.len()); let tile = &self.path_tiles[&path.id];
let mut draw_indirect_commands = Vec::with_capacity(paths.len()); let tex_info = self.atlas.get_texture_info(tile.texture_id);
let mut first_vertex = 0; let origin = path.bounds.intersect(&path.content_mask.bounds).origin;
let sprites = [PathSprite {
for (i, path) in paths.iter().enumerate() { bounds: Bounds {
draw_indirect_commands.push(DrawIndirectArgs { origin: origin.map(|p| p.floor()),
vertex_count: path.vertices.len() as u32, size: tile.bounds.size.map(Into::into),
instance_count: 1,
first_vertex,
first_instance: i as u32,
});
first_vertex += path.vertices.len() as u32;
vertices.extend(path.vertices.iter().map(|v| PathVertex {
xy_position: v.xy_position,
content_mask: ContentMask {
bounds: path.content_mask.bounds,
}, },
}));
sprites.push(PathSprite {
bounds: path.bounds,
color: path.color, color: path.color,
}); tile: (*tile).clone(),
} }];
let b_path_vertices = let instance_buf =
unsafe { self.instance_belt.alloc_typed(&vertices, &self.gpu) }; unsafe { self.instance_belt.alloc_typed(&sprites, &self.gpu) };
let instance_buf = encoder.bind(
unsafe { self.instance_belt.alloc_typed(&sprites, &self.gpu) }; 0,
let indirect_buf = unsafe { &ShaderPathsData {
self.instance_belt globals,
.alloc_typed(&draw_indirect_commands, &self.gpu) t_sprite: tex_info.raw_view,
}; s_sprite: self.atlas_sampler,
b_path_sprites: instance_buf,
encoder.bind( },
0, );
&ShaderPathsData { encoder.draw(0, 4, 0, sprites.len() as u32);
globals,
b_path_vertices,
b_path_sprites: instance_buf,
},
);
for i in 0..paths.len() {
encoder.draw_indirect(indirect_buf.buffer.at(indirect_buf.offset
+ (i * mem::size_of::<DrawIndirectArgs>()) as u64));
} }
} }
PrimitiveBatch::Underlines(underlines) => { PrimitiveBatch::Underlines(underlines) => {
@ -823,47 +817,9 @@ impl BladeRenderer {
profiling::scope!("finish"); profiling::scope!("finish");
self.instance_belt.flush(&sync_point); self.instance_belt.flush(&sync_point);
self.atlas.after_frame(&sync_point); self.atlas.after_frame(&sync_point);
self.atlas.clear_textures(AtlasTextureKind::Path);
self.wait_for_gpu(); self.wait_for_gpu();
self.last_sync_point = Some(sync_point); self.last_sync_point = Some(sync_point);
} }
} }
fn create_msaa_texture_if_needed(
gpu: &gpu::Context,
format: gpu::TextureFormat,
width: u32,
height: u32,
sample_count: u32,
) -> Option<(gpu::Texture, gpu::TextureView)> {
if sample_count <= 1 {
return None;
}
let texture_msaa = gpu.create_texture(gpu::TextureDesc {
name: "msaa",
format,
size: gpu::Extent {
width,
height,
depth: 1,
},
array_layer_count: 1,
mip_level_count: 1,
sample_count,
dimension: gpu::TextureDimension::D2,
usage: gpu::TextureUsage::TARGET,
external: None,
});
let texture_view_msaa = gpu.create_texture_view(
texture_msaa,
gpu::TextureViewDesc {
name: "msaa view",
format,
dimension: gpu::ViewDimension::D2,
subresources: &Default::default(),
},
);
Some((texture_msaa, texture_view_msaa))
}


@ -922,23 +922,59 @@ fn fs_shadow(input: ShadowVarying) -> @location(0) vec4<f32> {
return blend_color(input.color, alpha); return blend_color(input.color, alpha);
} }
// --- paths --- // // --- path rasterization --- //
struct PathVertex { struct PathVertex {
xy_position: vec2<f32>, xy_position: vec2<f32>,
st_position: vec2<f32>,
content_mask: Bounds, content_mask: Bounds,
} }
var<storage, read> b_path_vertices: array<PathVertex>;
struct PathRasterizationVarying {
@builtin(position) position: vec4<f32>,
@location(0) st_position: vec2<f32>,
//TODO: use `clip_distance` once Naga supports it
@location(3) clip_distances: vec4<f32>,
}
@vertex
fn vs_path_rasterization(@builtin(vertex_index) vertex_id: u32) -> PathRasterizationVarying {
let v = b_path_vertices[vertex_id];
var out = PathRasterizationVarying();
out.position = to_device_position_impl(v.xy_position);
out.st_position = v.st_position;
out.clip_distances = distance_from_clip_rect_impl(v.xy_position, v.content_mask);
return out;
}
@fragment
fn fs_path_rasterization(input: PathRasterizationVarying) -> @location(0) f32 {
let dx = dpdx(input.st_position);
let dy = dpdy(input.st_position);
if (any(input.clip_distances < vec4<f32>(0.0))) {
return 0.0;
}
let gradient = 2.0 * input.st_position.xx * vec2<f32>(dx.x, dy.x) - vec2<f32>(dx.y, dy.y);
let f = input.st_position.x * input.st_position.x - input.st_position.y;
let distance = f / length(gradient);
return saturate(0.5 - distance);
}
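For reference, `fs_path_rasterization` is a Loop-Blinn-style coverage computation: per-vertex (s, t) coordinates make the quadratic edge the zero set of f(s, t) = s² - t, and coverage comes from the screen-space signed distance f/|∇f|, clamped around 0.5. A hedged CPU sketch of the same arithmetic (the derivative arguments stand in for the GPU's `dpdx`/`dpdy`):

```rust
// Sketch of the curve-coverage math used by the path rasterization shader.
// `d_st_dx` / `d_st_dy` are the per-pixel changes of the interpolated (s, t),
// i.e. what dpdx/dpdy would return on the GPU.
fn path_coverage(st: [f32; 2], d_st_dx: [f32; 2], d_st_dy: [f32; 2]) -> f32 {
    // Implicit function of the quadratic curve: f(s, t) = s^2 - t.
    let f = st[0] * st[0] - st[1];
    // Screen-space gradient of f via the chain rule.
    let grad = [
        2.0 * st[0] * d_st_dx[0] - d_st_dx[1],
        2.0 * st[0] * d_st_dy[0] - d_st_dy[1],
    ];
    let len = (grad[0] * grad[0] + grad[1] * grad[1]).sqrt();
    let distance = f / len;
    (0.5 - distance).clamp(0.0, 1.0)
}

fn main() {
    // Exactly on the curve (f = 0) the pixel is half covered.
    let c = path_coverage([0.5, 0.25], [0.01, 0.0], [0.0, 0.01]);
    assert!((c - 0.5).abs() < 1e-6);
}
```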
// --- paths --- //
struct PathSprite { struct PathSprite {
bounds: Bounds, bounds: Bounds,
color: Background, color: Background,
tile: AtlasTile,
} }
var<storage, read> b_path_vertices: array<PathVertex>;
var<storage, read> b_path_sprites: array<PathSprite>; var<storage, read> b_path_sprites: array<PathSprite>;
struct PathVarying { struct PathVarying {
@builtin(position) position: vec4<f32>, @builtin(position) position: vec4<f32>,
@location(0) clip_distances: vec4<f32>, @location(0) tile_position: vec2<f32>,
@location(1) @interpolate(flat) instance_id: u32, @location(1) @interpolate(flat) instance_id: u32,
@location(2) @interpolate(flat) color_solid: vec4<f32>, @location(2) @interpolate(flat) color_solid: vec4<f32>,
@location(3) @interpolate(flat) color0: vec4<f32>, @location(3) @interpolate(flat) color0: vec4<f32>,
@ -947,12 +983,13 @@ struct PathVarying {
@vertex @vertex
fn vs_path(@builtin(vertex_index) vertex_id: u32, @builtin(instance_index) instance_id: u32) -> PathVarying { fn vs_path(@builtin(vertex_index) vertex_id: u32, @builtin(instance_index) instance_id: u32) -> PathVarying {
let v = b_path_vertices[vertex_id]; let unit_vertex = vec2<f32>(f32(vertex_id & 1u), 0.5 * f32(vertex_id & 2u));
let sprite = b_path_sprites[instance_id]; let sprite = b_path_sprites[instance_id];
// Don't apply content mask because it was already accounted for when rasterizing the path.
var out = PathVarying(); var out = PathVarying();
out.position = to_device_position_impl(v.xy_position); out.position = to_device_position(unit_vertex, sprite.bounds);
out.clip_distances = distance_from_clip_rect_impl(v.xy_position, v.content_mask); out.tile_position = to_tile_position(unit_vertex, sprite.tile);
out.instance_id = instance_id; out.instance_id = instance_id;
let gradient = prepare_gradient_color( let gradient = prepare_gradient_color(
@ -969,15 +1006,13 @@ fn vs_path(@builtin(vertex_index) vertex_id: u32, @builtin(instance_index) insta
@fragment @fragment
fn fs_path(input: PathVarying) -> @location(0) vec4<f32> { fn fs_path(input: PathVarying) -> @location(0) vec4<f32> {
if any(input.clip_distances < vec4<f32>(0.0)) { let sample = textureSample(t_sprite, s_sprite, input.tile_position).r;
return vec4<f32>(0.0); let mask = 1.0 - abs(1.0 - sample % 2.0);
}
let sprite = b_path_sprites[input.instance_id]; let sprite = b_path_sprites[input.instance_id];
let background = sprite.color; let background = sprite.color;
let color = gradient_color(background, input.position.xy, sprite.bounds, let color = gradient_color(background, input.position.xy, sprite.bounds,
input.color_solid, input.color0, input.color1); input.color_solid, input.color0, input.color1);
return blend_color(color, 1.0); return blend_color(color, mask);
} }
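One detail of the new `fs_path` worth spelling out: the path tile is filled with additive blending, so the sampled value is an accumulated winding/coverage count, and `1.0 - abs(1.0 - sample % 2.0)` folds it back into an even-odd fill mask while preserving fractional coverage at edges. A small sketch of that mapping (assuming non-negative accumulated values):

```rust
// Sketch: resolve the additively accumulated winding value stored in the path
// tile into an even-odd fill mask, mirroring `1.0 - abs(1.0 - sample % 2.0)`.
fn even_odd_mask(accumulated: f32) -> f32 {
    1.0 - (1.0 - accumulated.rem_euclid(2.0)).abs()
}

fn main() {
    assert_eq!(even_odd_mask(0.0), 0.0); // outside the path
    assert_eq!(even_odd_mask(1.0), 1.0); // covered once: filled
    assert_eq!(even_odd_mask(2.0), 0.0); // covered twice: a hole (even-odd rule)
    assert!((even_odd_mask(0.5) - 0.5).abs() < 1e-6); // partial coverage at an edge
}
```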
// --- underlines --- // // --- underlines --- //


@ -417,17 +417,6 @@ impl Modifiers {
self.control || self.alt || self.shift || self.platform || self.function self.control || self.alt || self.shift || self.platform || self.function
} }
/// Returns the XOR of two modifier sets
pub fn xor(&self, other: &Modifiers) -> Modifiers {
Modifiers {
control: self.control ^ other.control,
alt: self.alt ^ other.alt,
shift: self.shift ^ other.shift,
platform: self.platform ^ other.platform,
function: self.function ^ other.function,
}
}
/// Whether the semantically 'secondary' modifier key is pressed. /// Whether the semantically 'secondary' modifier key is pressed.
/// ///
/// On macOS, this is the command key. /// On macOS, this is the command key.
@ -545,11 +534,62 @@ impl Modifiers {
/// Checks if this [`Modifiers`] is a subset of another [`Modifiers`]. /// Checks if this [`Modifiers`] is a subset of another [`Modifiers`].
pub fn is_subset_of(&self, other: &Modifiers) -> bool { pub fn is_subset_of(&self, other: &Modifiers) -> bool {
(other.control || !self.control) (*other & *self) == *self
&& (other.alt || !self.alt) }
&& (other.shift || !self.shift) }
&& (other.platform || !self.platform)
&& (other.function || !self.function) impl std::ops::BitOr for Modifiers {
type Output = Self;
fn bitor(mut self, other: Self) -> Self::Output {
self |= other;
self
}
}
impl std::ops::BitOrAssign for Modifiers {
fn bitor_assign(&mut self, other: Self) {
self.control |= other.control;
self.alt |= other.alt;
self.shift |= other.shift;
self.platform |= other.platform;
self.function |= other.function;
}
}
impl std::ops::BitXor for Modifiers {
type Output = Self;
fn bitxor(mut self, rhs: Self) -> Self::Output {
self ^= rhs;
self
}
}
impl std::ops::BitXorAssign for Modifiers {
fn bitxor_assign(&mut self, other: Self) {
self.control ^= other.control;
self.alt ^= other.alt;
self.shift ^= other.shift;
self.platform ^= other.platform;
self.function ^= other.function;
}
}
impl std::ops::BitAnd for Modifiers {
type Output = Self;
fn bitand(mut self, rhs: Self) -> Self::Output {
self &= rhs;
self
}
}
impl std::ops::BitAndAssign for Modifiers {
fn bitand_assign(&mut self, other: Self) {
self.control &= other.control;
self.alt &= other.alt;
self.shift &= other.shift;
self.platform &= other.platform;
self.function &= other.function;
} }
} }
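With the operator impls above, `is_subset_of` reduces to masking `other` by `self` and checking that nothing was lost. A self-contained sketch, using a trimmed-down stand-in for the real `Modifiers` struct:

```rust
// Sketch only: a minimal stand-in for gpui's `Modifiers`, showing how the
// new bitwise operators make `is_subset_of` a one-liner.
#[derive(Clone, Copy, Default, PartialEq, Debug)]
struct Modifiers {
    control: bool,
    alt: bool,
    shift: bool,
    platform: bool,
    function: bool,
}

impl std::ops::BitAnd for Modifiers {
    type Output = Self;
    fn bitand(self, rhs: Self) -> Self {
        Modifiers {
            control: self.control & rhs.control,
            alt: self.alt & rhs.alt,
            shift: self.shift & rhs.shift,
            platform: self.platform & rhs.platform,
            function: self.function & rhs.function,
        }
    }
}

impl Modifiers {
    /// `self` is a subset of `other` iff masking `other` by `self` leaves `self` unchanged.
    fn is_subset_of(&self, other: &Modifiers) -> bool {
        (*other & *self) == *self
    }
}

fn main() {
    let ctrl = Modifiers { control: true, ..Default::default() };
    let ctrl_shift = Modifiers { control: true, shift: true, ..Default::default() };
    assert!(ctrl.is_subset_of(&ctrl_shift));
    assert!(!ctrl_shift.is_subset_of(&ctrl));
}
```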


@ -822,11 +822,41 @@ impl crate::Keystroke {
Keysym::underscore => "_".to_owned(), Keysym::underscore => "_".to_owned(),
Keysym::equal => "=".to_owned(), Keysym::equal => "=".to_owned(),
Keysym::plus => "+".to_owned(), Keysym::plus => "+".to_owned(),
Keysym::space => "space".to_owned(),
Keysym::BackSpace => "backspace".to_owned(),
Keysym::Tab => "tab".to_owned(),
Keysym::Delete => "delete".to_owned(),
Keysym::Escape => "escape".to_owned(),
Keysym::Left => "left".to_owned(),
Keysym::Right => "right".to_owned(),
Keysym::Up => "up".to_owned(),
Keysym::Down => "down".to_owned(),
Keysym::Home => "home".to_owned(),
Keysym::End => "end".to_owned(),
_ => { _ => {
let name = xkb::keysym_get_name(key_sym).to_lowercase(); let name = xkb::keysym_get_name(key_sym).to_lowercase();
if key_sym.is_keypad_key() { if key_sym.is_keypad_key() {
name.replace("kp_", "") name.replace("kp_", "")
} else if let Some(key) = key_utf8.chars().next()
&& key_utf8.len() == 1
&& key.is_ascii()
{
if key.is_ascii_graphic() {
key_utf8.to_lowercase()
// map ctrl-a to `a`
// ctrl-0..9 may emit control codes like ctrl-[, but
// we don't want to map them to `[`
} else if key_utf32 <= 0x1f
&& !name.chars().next().is_some_and(|c| c.is_ascii_digit())
{
((key_utf32 as u8 + 0x40) as char)
.to_ascii_lowercase()
.to_string()
} else {
name
}
} else if let Some(key_en) = guess_ascii(keycode, modifiers.shift) { } else if let Some(key_en) = guess_ascii(keycode, modifiers.shift) {
String::from(key_en) String::from(key_en)
} else { } else {
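The Ctrl fallback added above relies on how C0 control codes relate to their letters: pressing Ctrl-A delivers U+0001, and adding 0x40 recovers `A`, which is then lowercased. A sketch of just that branch (the digit-keysym exclusion from the original code is omitted here):

```rust
// Sketch of the control-code fallback: when Ctrl is held, the keyboard layer may
// deliver a C0 control byte (e.g. Ctrl-A arrives as U+0001). Adding 0x40 recovers
// the printable counterpart, which is then lowercased to get the key name.
fn control_code_to_key(key_utf32: u32) -> Option<String> {
    if (0x01..=0x1f).contains(&key_utf32) {
        Some(((key_utf32 as u8 + 0x40) as char).to_ascii_lowercase().to_string())
    } else {
        None
    }
}

fn main() {
    assert_eq!(control_code_to_key(0x01).as_deref(), Some("a")); // Ctrl-A
    assert_eq!(control_code_to_key(0x1a).as_deref(), Some("z")); // Ctrl-Z
    assert_eq!(control_code_to_key(0x61), None); // plain 'a' is not a control code
}
```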


@ -13,12 +13,14 @@ use std::borrow::Cow;
pub(crate) struct MetalAtlas(Mutex<MetalAtlasState>); pub(crate) struct MetalAtlas(Mutex<MetalAtlasState>);
impl MetalAtlas { impl MetalAtlas {
pub(crate) fn new(device: Device) -> Self { pub(crate) fn new(device: Device, path_sample_count: u32) -> Self {
MetalAtlas(Mutex::new(MetalAtlasState { MetalAtlas(Mutex::new(MetalAtlasState {
device: AssertSend(device), device: AssertSend(device),
monochrome_textures: Default::default(), monochrome_textures: Default::default(),
polychrome_textures: Default::default(), polychrome_textures: Default::default(),
path_textures: Default::default(),
tiles_by_key: Default::default(), tiles_by_key: Default::default(),
path_sample_count,
})) }))
} }
@ -26,7 +28,10 @@ impl MetalAtlas {
self.0.lock().texture(id).metal_texture.clone() self.0.lock().texture(id).metal_texture.clone()
} }
#[allow(dead_code)] pub(crate) fn msaa_texture(&self, id: AtlasTextureId) -> Option<metal::Texture> {
self.0.lock().texture(id).msaa_texture.clone()
}
pub(crate) fn allocate( pub(crate) fn allocate(
&self, &self,
size: Size<DevicePixels>, size: Size<DevicePixels>,
@ -35,12 +40,12 @@ impl MetalAtlas {
self.0.lock().allocate(size, texture_kind) self.0.lock().allocate(size, texture_kind)
} }
#[allow(dead_code)]
pub(crate) fn clear_textures(&self, texture_kind: AtlasTextureKind) { pub(crate) fn clear_textures(&self, texture_kind: AtlasTextureKind) {
let mut lock = self.0.lock(); let mut lock = self.0.lock();
let textures = match texture_kind { let textures = match texture_kind {
AtlasTextureKind::Monochrome => &mut lock.monochrome_textures, AtlasTextureKind::Monochrome => &mut lock.monochrome_textures,
AtlasTextureKind::Polychrome => &mut lock.polychrome_textures, AtlasTextureKind::Polychrome => &mut lock.polychrome_textures,
AtlasTextureKind::Path => &mut lock.path_textures,
}; };
for texture in textures.iter_mut() { for texture in textures.iter_mut() {
texture.clear(); texture.clear();
@ -52,7 +57,9 @@ struct MetalAtlasState {
device: AssertSend<Device>, device: AssertSend<Device>,
monochrome_textures: AtlasTextureList<MetalAtlasTexture>, monochrome_textures: AtlasTextureList<MetalAtlasTexture>,
polychrome_textures: AtlasTextureList<MetalAtlasTexture>, polychrome_textures: AtlasTextureList<MetalAtlasTexture>,
path_textures: AtlasTextureList<MetalAtlasTexture>,
tiles_by_key: FxHashMap<AtlasKey, AtlasTile>, tiles_by_key: FxHashMap<AtlasKey, AtlasTile>,
path_sample_count: u32,
} }
impl PlatformAtlas for MetalAtlas { impl PlatformAtlas for MetalAtlas {
@ -87,6 +94,7 @@ impl PlatformAtlas for MetalAtlas {
let textures = match id.kind { let textures = match id.kind {
AtlasTextureKind::Monochrome => &mut lock.monochrome_textures, AtlasTextureKind::Monochrome => &mut lock.monochrome_textures,
AtlasTextureKind::Polychrome => &mut lock.polychrome_textures, AtlasTextureKind::Polychrome => &mut lock.polychrome_textures,
AtlasTextureKind::Path => &mut lock.polychrome_textures,
}; };
let Some(texture_slot) = textures let Some(texture_slot) = textures
@ -120,6 +128,7 @@ impl MetalAtlasState {
let textures = match texture_kind { let textures = match texture_kind {
AtlasTextureKind::Monochrome => &mut self.monochrome_textures, AtlasTextureKind::Monochrome => &mut self.monochrome_textures,
AtlasTextureKind::Polychrome => &mut self.polychrome_textures, AtlasTextureKind::Polychrome => &mut self.polychrome_textures,
AtlasTextureKind::Path => &mut self.path_textures,
}; };
if let Some(tile) = textures if let Some(tile) = textures
@ -164,14 +173,31 @@ impl MetalAtlasState {
pixel_format = metal::MTLPixelFormat::BGRA8Unorm; pixel_format = metal::MTLPixelFormat::BGRA8Unorm;
usage = metal::MTLTextureUsage::ShaderRead; usage = metal::MTLTextureUsage::ShaderRead;
} }
AtlasTextureKind::Path => {
pixel_format = metal::MTLPixelFormat::R16Float;
usage = metal::MTLTextureUsage::RenderTarget | metal::MTLTextureUsage::ShaderRead;
}
} }
texture_descriptor.set_pixel_format(pixel_format); texture_descriptor.set_pixel_format(pixel_format);
texture_descriptor.set_usage(usage); texture_descriptor.set_usage(usage);
let metal_texture = self.device.new_texture(&texture_descriptor); let metal_texture = self.device.new_texture(&texture_descriptor);
// We currently only enable MSAA for path textures.
let msaa_texture = if self.path_sample_count > 1 && kind == AtlasTextureKind::Path {
let mut descriptor = texture_descriptor.clone();
descriptor.set_texture_type(metal::MTLTextureType::D2Multisample);
descriptor.set_storage_mode(metal::MTLStorageMode::Private);
descriptor.set_sample_count(self.path_sample_count as _);
let msaa_texture = self.device.new_texture(&descriptor);
Some(msaa_texture)
} else {
None
};
let texture_list = match kind { let texture_list = match kind {
AtlasTextureKind::Monochrome => &mut self.monochrome_textures, AtlasTextureKind::Monochrome => &mut self.monochrome_textures,
AtlasTextureKind::Polychrome => &mut self.polychrome_textures, AtlasTextureKind::Polychrome => &mut self.polychrome_textures,
AtlasTextureKind::Path => &mut self.path_textures,
}; };
let index = texture_list.free_list.pop(); let index = texture_list.free_list.pop();
@ -183,6 +209,7 @@ impl MetalAtlasState {
}, },
allocator: etagere::BucketedAtlasAllocator::new(size.into()), allocator: etagere::BucketedAtlasAllocator::new(size.into()),
metal_texture: AssertSend(metal_texture), metal_texture: AssertSend(metal_texture),
msaa_texture: AssertSend(msaa_texture),
live_atlas_keys: 0, live_atlas_keys: 0,
}; };
@ -199,6 +226,7 @@ impl MetalAtlasState {
let textures = match id.kind { let textures = match id.kind {
crate::AtlasTextureKind::Monochrome => &self.monochrome_textures, crate::AtlasTextureKind::Monochrome => &self.monochrome_textures,
crate::AtlasTextureKind::Polychrome => &self.polychrome_textures, crate::AtlasTextureKind::Polychrome => &self.polychrome_textures,
crate::AtlasTextureKind::Path => &self.path_textures,
}; };
textures[id.index as usize].as_ref().unwrap() textures[id.index as usize].as_ref().unwrap()
} }
@ -208,6 +236,7 @@ struct MetalAtlasTexture {
id: AtlasTextureId, id: AtlasTextureId,
allocator: BucketedAtlasAllocator, allocator: BucketedAtlasAllocator,
metal_texture: AssertSend<metal::Texture>, metal_texture: AssertSend<metal::Texture>,
msaa_texture: AssertSend<Option<metal::Texture>>,
live_atlas_keys: u32, live_atlas_keys: u32,
} }


@ -1,28 +1,27 @@
use super::metal_atlas::MetalAtlas; use super::metal_atlas::MetalAtlas;
use crate::{ use crate::{
AtlasTextureId, Background, Bounds, ContentMask, DevicePixels, MonochromeSprite, PaintSurface, AtlasTextureId, AtlasTextureKind, AtlasTile, Background, Bounds, ContentMask, DevicePixels,
Path, PathVertex, PolychromeSprite, PrimitiveBatch, Quad, ScaledPixels, Scene, Shadow, Size, MonochromeSprite, PaintSurface, Path, PathId, PathVertex, PolychromeSprite, PrimitiveBatch,
Surface, Underline, point, size, Quad, ScaledPixels, Scene, Shadow, Size, Surface, Underline, point, size,
}; };
use anyhow::Result; use anyhow::{Context as _, Result};
use block::ConcreteBlock; use block::ConcreteBlock;
use cocoa::{ use cocoa::{
base::{NO, YES}, base::{NO, YES},
foundation::{NSSize, NSUInteger}, foundation::{NSSize, NSUInteger},
quartzcore::AutoresizingMask, quartzcore::AutoresizingMask,
}; };
use collections::HashMap;
use core_foundation::base::TCFType; use core_foundation::base::TCFType;
use core_video::{ use core_video::{
metal_texture::CVMetalTextureGetTexture, metal_texture_cache::CVMetalTextureCache, metal_texture::CVMetalTextureGetTexture, metal_texture_cache::CVMetalTextureCache,
pixel_buffer::kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, pixel_buffer::kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
}; };
use foreign_types::{ForeignType, ForeignTypeRef}; use foreign_types::{ForeignType, ForeignTypeRef};
use metal::{ use metal::{CAMetalLayer, CommandQueue, MTLPixelFormat, MTLResourceOptions, NSRange};
CAMetalLayer, CommandQueue, MTLDrawPrimitivesIndirectArguments, MTLPixelFormat,
MTLResourceOptions, NSRange,
};
use objc::{self, msg_send, sel, sel_impl}; use objc::{self, msg_send, sel, sel_impl};
use parking_lot::Mutex; use parking_lot::Mutex;
use smallvec::SmallVec;
use std::{cell::Cell, ffi::c_void, mem, ptr, sync::Arc}; use std::{cell::Cell, ffi::c_void, mem, ptr, sync::Arc};
// Exported to metal // Exported to metal
@ -32,6 +31,9 @@ pub(crate) type PointF = crate::Point<f32>;
const SHADERS_METALLIB: &[u8] = include_bytes!(concat!(env!("OUT_DIR"), "/shaders.metallib")); const SHADERS_METALLIB: &[u8] = include_bytes!(concat!(env!("OUT_DIR"), "/shaders.metallib"));
#[cfg(feature = "runtime_shaders")] #[cfg(feature = "runtime_shaders")]
const SHADERS_SOURCE_FILE: &str = include_str!(concat!(env!("OUT_DIR"), "/stitched_shaders.metal")); const SHADERS_SOURCE_FILE: &str = include_str!(concat!(env!("OUT_DIR"), "/stitched_shaders.metal"));
// Use 4x MSAA, since all devices support it.
// https://developer.apple.com/documentation/metal/mtldevice/1433355-supportstexturesamplecount
const PATH_SAMPLE_COUNT: u32 = 4;
pub type Context = Arc<Mutex<InstanceBufferPool>>; pub type Context = Arc<Mutex<InstanceBufferPool>>;
pub type Renderer = MetalRenderer; pub type Renderer = MetalRenderer;
@ -96,7 +98,8 @@ pub(crate) struct MetalRenderer {
layer: metal::MetalLayer, layer: metal::MetalLayer,
presents_with_transaction: bool, presents_with_transaction: bool,
command_queue: CommandQueue, command_queue: CommandQueue,
path_pipeline_state: metal::RenderPipelineState, paths_rasterization_pipeline_state: metal::RenderPipelineState,
path_sprites_pipeline_state: metal::RenderPipelineState,
shadows_pipeline_state: metal::RenderPipelineState, shadows_pipeline_state: metal::RenderPipelineState,
quads_pipeline_state: metal::RenderPipelineState, quads_pipeline_state: metal::RenderPipelineState,
underlines_pipeline_state: metal::RenderPipelineState, underlines_pipeline_state: metal::RenderPipelineState,
@ -108,8 +111,6 @@ pub(crate) struct MetalRenderer {
instance_buffer_pool: Arc<Mutex<InstanceBufferPool>>, instance_buffer_pool: Arc<Mutex<InstanceBufferPool>>,
sprite_atlas: Arc<MetalAtlas>, sprite_atlas: Arc<MetalAtlas>,
core_video_texture_cache: core_video::metal_texture_cache::CVMetalTextureCache, core_video_texture_cache: core_video::metal_texture_cache::CVMetalTextureCache,
sample_count: u64,
msaa_texture: Option<metal::Texture>,
} }
impl MetalRenderer { impl MetalRenderer {
@ -168,19 +169,22 @@ impl MetalRenderer {
MTLResourceOptions::StorageModeManaged, MTLResourceOptions::StorageModeManaged,
); );
let sample_count = [4, 2, 1] let paths_rasterization_pipeline_state = build_path_rasterization_pipeline_state(
.into_iter()
.find(|count| device.supports_texture_sample_count(*count))
.unwrap_or(1);
let path_pipeline_state = build_pipeline_state(
&device, &device,
&library, &library,
"paths", "paths_rasterization",
"path_vertex", "path_rasterization_vertex",
"path_fragment", "path_rasterization_fragment",
MTLPixelFormat::R16Float,
PATH_SAMPLE_COUNT,
);
let path_sprites_pipeline_state = build_pipeline_state(
&device,
&library,
"path_sprites",
"path_sprite_vertex",
"path_sprite_fragment",
MTLPixelFormat::BGRA8Unorm, MTLPixelFormat::BGRA8Unorm,
sample_count,
); );
let shadows_pipeline_state = build_pipeline_state( let shadows_pipeline_state = build_pipeline_state(
&device, &device,
@ -189,7 +193,6 @@ impl MetalRenderer {
"shadow_vertex", "shadow_vertex",
"shadow_fragment", "shadow_fragment",
MTLPixelFormat::BGRA8Unorm, MTLPixelFormat::BGRA8Unorm,
sample_count,
); );
let quads_pipeline_state = build_pipeline_state( let quads_pipeline_state = build_pipeline_state(
&device, &device,
@ -198,7 +201,6 @@ impl MetalRenderer {
"quad_vertex", "quad_vertex",
"quad_fragment", "quad_fragment",
MTLPixelFormat::BGRA8Unorm, MTLPixelFormat::BGRA8Unorm,
sample_count,
); );
let underlines_pipeline_state = build_pipeline_state( let underlines_pipeline_state = build_pipeline_state(
&device, &device,
@ -207,7 +209,6 @@ impl MetalRenderer {
"underline_vertex", "underline_vertex",
"underline_fragment", "underline_fragment",
MTLPixelFormat::BGRA8Unorm, MTLPixelFormat::BGRA8Unorm,
sample_count,
); );
let monochrome_sprites_pipeline_state = build_pipeline_state( let monochrome_sprites_pipeline_state = build_pipeline_state(
&device, &device,
@ -216,7 +217,6 @@ impl MetalRenderer {
"monochrome_sprite_vertex", "monochrome_sprite_vertex",
"monochrome_sprite_fragment", "monochrome_sprite_fragment",
MTLPixelFormat::BGRA8Unorm, MTLPixelFormat::BGRA8Unorm,
sample_count,
); );
let polychrome_sprites_pipeline_state = build_pipeline_state( let polychrome_sprites_pipeline_state = build_pipeline_state(
&device, &device,
@ -225,7 +225,6 @@ impl MetalRenderer {
"polychrome_sprite_vertex", "polychrome_sprite_vertex",
"polychrome_sprite_fragment", "polychrome_sprite_fragment",
MTLPixelFormat::BGRA8Unorm, MTLPixelFormat::BGRA8Unorm,
sample_count,
); );
let surfaces_pipeline_state = build_pipeline_state( let surfaces_pipeline_state = build_pipeline_state(
&device, &device,
@ -234,21 +233,20 @@ impl MetalRenderer {
"surface_vertex", "surface_vertex",
"surface_fragment", "surface_fragment",
MTLPixelFormat::BGRA8Unorm, MTLPixelFormat::BGRA8Unorm,
sample_count,
); );
let command_queue = device.new_command_queue(); let command_queue = device.new_command_queue();
let sprite_atlas = Arc::new(MetalAtlas::new(device.clone())); let sprite_atlas = Arc::new(MetalAtlas::new(device.clone(), PATH_SAMPLE_COUNT));
let core_video_texture_cache = let core_video_texture_cache =
CVMetalTextureCache::new(None, device.clone(), None).unwrap(); CVMetalTextureCache::new(None, device.clone(), None).unwrap();
let msaa_texture = create_msaa_texture(&device, &layer, sample_count);
Self { Self {
device, device,
layer, layer,
presents_with_transaction: false, presents_with_transaction: false,
command_queue, command_queue,
path_pipeline_state, paths_rasterization_pipeline_state,
path_sprites_pipeline_state,
shadows_pipeline_state, shadows_pipeline_state,
quads_pipeline_state, quads_pipeline_state,
underlines_pipeline_state, underlines_pipeline_state,
@ -259,8 +257,6 @@ impl MetalRenderer {
instance_buffer_pool, instance_buffer_pool,
sprite_atlas, sprite_atlas,
core_video_texture_cache, core_video_texture_cache,
sample_count,
msaa_texture,
} }
} }
@ -293,8 +289,6 @@ impl MetalRenderer {
setDrawableSize: size setDrawableSize: size
]; ];
} }
self.msaa_texture = create_msaa_texture(&self.device, &self.layer, self.sample_count);
} }
pub fn update_transparency(&self, _transparent: bool) { pub fn update_transparency(&self, _transparent: bool) {
@ -381,23 +375,25 @@ impl MetalRenderer {
let command_queue = self.command_queue.clone(); let command_queue = self.command_queue.clone();
let command_buffer = command_queue.new_command_buffer(); let command_buffer = command_queue.new_command_buffer();
let mut instance_offset = 0; let mut instance_offset = 0;
let path_tiles = self
.rasterize_paths(
scene.paths(),
instance_buffer,
&mut instance_offset,
command_buffer,
)
.with_context(|| format!("rasterizing {} paths", scene.paths().len()))?;
let render_pass_descriptor = metal::RenderPassDescriptor::new(); let render_pass_descriptor = metal::RenderPassDescriptor::new();
let color_attachment = render_pass_descriptor let color_attachment = render_pass_descriptor
.color_attachments() .color_attachments()
.object_at(0) .object_at(0)
.unwrap(); .unwrap();
if let Some(msaa_texture_ref) = self.msaa_texture.as_deref() { color_attachment.set_texture(Some(drawable.texture()));
color_attachment.set_texture(Some(msaa_texture_ref)); color_attachment.set_load_action(metal::MTLLoadAction::Clear);
color_attachment.set_load_action(metal::MTLLoadAction::Clear); color_attachment.set_store_action(metal::MTLStoreAction::Store);
color_attachment.set_store_action(metal::MTLStoreAction::MultisampleResolve);
color_attachment.set_resolve_texture(Some(drawable.texture()));
} else {
color_attachment.set_load_action(metal::MTLLoadAction::Clear);
color_attachment.set_texture(Some(drawable.texture()));
color_attachment.set_store_action(metal::MTLStoreAction::Store);
}
let alpha = if self.layer.is_opaque() { 1. } else { 0. }; let alpha = if self.layer.is_opaque() { 1. } else { 0. };
color_attachment.set_clear_color(metal::MTLClearColor::new(0., 0., 0., alpha)); color_attachment.set_clear_color(metal::MTLClearColor::new(0., 0., 0., alpha));
let command_encoder = command_buffer.new_render_command_encoder(render_pass_descriptor); let command_encoder = command_buffer.new_render_command_encoder(render_pass_descriptor);
@ -429,6 +425,7 @@ impl MetalRenderer {
), ),
PrimitiveBatch::Paths(paths) => self.draw_paths( PrimitiveBatch::Paths(paths) => self.draw_paths(
paths, paths,
&path_tiles,
instance_buffer, instance_buffer,
&mut instance_offset, &mut instance_offset,
viewport_size, viewport_size,
@ -496,6 +493,106 @@ impl MetalRenderer {
Ok(command_buffer.to_owned()) Ok(command_buffer.to_owned())
} }
fn rasterize_paths(
&self,
paths: &[Path<ScaledPixels>],
instance_buffer: &mut InstanceBuffer,
instance_offset: &mut usize,
command_buffer: &metal::CommandBufferRef,
) -> Option<HashMap<PathId, AtlasTile>> {
self.sprite_atlas.clear_textures(AtlasTextureKind::Path);
let mut tiles = HashMap::default();
let mut vertices_by_texture_id = HashMap::default();
for path in paths {
let clipped_bounds = path.bounds.intersect(&path.content_mask.bounds);
let tile = self
.sprite_atlas
.allocate(clipped_bounds.size.map(Into::into), AtlasTextureKind::Path)?;
vertices_by_texture_id
.entry(tile.texture_id)
.or_insert(Vec::new())
.extend(path.vertices.iter().map(|vertex| PathVertex {
xy_position: vertex.xy_position - clipped_bounds.origin
+ tile.bounds.origin.map(Into::into),
st_position: vertex.st_position,
content_mask: ContentMask {
bounds: tile.bounds.map(Into::into),
},
}));
tiles.insert(path.id, tile);
}
for (texture_id, vertices) in vertices_by_texture_id {
align_offset(instance_offset);
let vertices_bytes_len = mem::size_of_val(vertices.as_slice());
let next_offset = *instance_offset + vertices_bytes_len;
if next_offset > instance_buffer.size {
return None;
}
let render_pass_descriptor = metal::RenderPassDescriptor::new();
let color_attachment = render_pass_descriptor
.color_attachments()
.object_at(0)
.unwrap();
let texture = self.sprite_atlas.metal_texture(texture_id);
let msaa_texture = self.sprite_atlas.msaa_texture(texture_id);
if let Some(msaa_texture) = msaa_texture {
color_attachment.set_texture(Some(&msaa_texture));
color_attachment.set_resolve_texture(Some(&texture));
color_attachment.set_load_action(metal::MTLLoadAction::Clear);
color_attachment.set_store_action(metal::MTLStoreAction::MultisampleResolve);
} else {
color_attachment.set_texture(Some(&texture));
color_attachment.set_load_action(metal::MTLLoadAction::Clear);
color_attachment.set_store_action(metal::MTLStoreAction::Store);
}
color_attachment.set_clear_color(metal::MTLClearColor::new(0., 0., 0., 1.));
let command_encoder = command_buffer.new_render_command_encoder(render_pass_descriptor);
command_encoder.set_render_pipeline_state(&self.paths_rasterization_pipeline_state);
command_encoder.set_vertex_buffer(
PathRasterizationInputIndex::Vertices as u64,
Some(&instance_buffer.metal_buffer),
*instance_offset as u64,
);
let texture_size = Size {
width: DevicePixels::from(texture.width()),
height: DevicePixels::from(texture.height()),
};
command_encoder.set_vertex_bytes(
PathRasterizationInputIndex::AtlasTextureSize as u64,
mem::size_of_val(&texture_size) as u64,
&texture_size as *const Size<DevicePixels> as *const _,
);
let buffer_contents = unsafe {
(instance_buffer.metal_buffer.contents() as *mut u8).add(*instance_offset)
};
unsafe {
ptr::copy_nonoverlapping(
vertices.as_ptr() as *const u8,
buffer_contents,
vertices_bytes_len,
);
}
command_encoder.draw_primitives(
metal::MTLPrimitiveType::Triangle,
0,
vertices.len() as u64,
);
command_encoder.end_encoding();
*instance_offset = next_offset;
}
Some(tiles)
}
fn draw_shadows( fn draw_shadows(
&self, &self,
shadows: &[Shadow], shadows: &[Shadow],
@ -621,6 +718,7 @@ impl MetalRenderer {
fn draw_paths( fn draw_paths(
&self, &self,
paths: &[Path<ScaledPixels>], paths: &[Path<ScaledPixels>],
tiles_by_path_id: &HashMap<PathId, AtlasTile>,
instance_buffer: &mut InstanceBuffer, instance_buffer: &mut InstanceBuffer,
instance_offset: &mut usize, instance_offset: &mut usize,
viewport_size: Size<DevicePixels>, viewport_size: Size<DevicePixels>,
@ -630,108 +728,100 @@ impl MetalRenderer {
return true; return true;
} }
command_encoder.set_render_pipeline_state(&self.path_pipeline_state); command_encoder.set_render_pipeline_state(&self.path_sprites_pipeline_state);
command_encoder.set_vertex_buffer(
SpriteInputIndex::Vertices as u64,
Some(&self.unit_vertices),
0,
);
command_encoder.set_vertex_bytes(
SpriteInputIndex::ViewportSize as u64,
mem::size_of_val(&viewport_size) as u64,
&viewport_size as *const Size<DevicePixels> as *const _,
);
unsafe { let mut prev_texture_id = None;
let base_addr = instance_buffer.metal_buffer.contents(); let mut sprites = SmallVec::<[_; 1]>::new();
let mut p = (base_addr as *mut u8).add(*instance_offset); let mut paths_and_tiles = paths
let mut draw_indirect_commands = Vec::with_capacity(paths.len()); .iter()
.map(|path| (path, tiles_by_path_id.get(&path.id).unwrap()))
.peekable();
// copy vertices loop {
let vertices_offset = (p as usize) - (base_addr as usize); if let Some((path, tile)) = paths_and_tiles.peek() {
let mut first_vertex = 0; if prev_texture_id.map_or(true, |texture_id| texture_id == tile.texture_id) {
for (i, path) in paths.iter().enumerate() { prev_texture_id = Some(tile.texture_id);
if (p as usize) - (base_addr as usize) let origin = path.bounds.intersect(&path.content_mask.bounds).origin;
+ (mem::size_of::<PathVertex<ScaledPixels>>() * path.vertices.len()) sprites.push(PathSprite {
> instance_buffer.size bounds: Bounds {
{ origin: origin.map(|p| p.floor()),
size: tile.bounds.size.map(Into::into),
},
color: path.color,
tile: (*tile).clone(),
});
paths_and_tiles.next();
continue;
}
}
if sprites.is_empty() {
break;
} else {
align_offset(instance_offset);
let texture_id = prev_texture_id.take().unwrap();
let texture: metal::Texture = self.sprite_atlas.metal_texture(texture_id);
let texture_size = size(
DevicePixels(texture.width() as i32),
DevicePixels(texture.height() as i32),
);
command_encoder.set_vertex_buffer(
SpriteInputIndex::Sprites as u64,
Some(&instance_buffer.metal_buffer),
*instance_offset as u64,
);
command_encoder.set_vertex_bytes(
SpriteInputIndex::AtlasTextureSize as u64,
mem::size_of_val(&texture_size) as u64,
&texture_size as *const Size<DevicePixels> as *const _,
);
command_encoder.set_fragment_buffer(
SpriteInputIndex::Sprites as u64,
Some(&instance_buffer.metal_buffer),
*instance_offset as u64,
);
command_encoder
.set_fragment_texture(SpriteInputIndex::AtlasTexture as u64, Some(&texture));
let sprite_bytes_len = mem::size_of_val(sprites.as_slice());
let next_offset = *instance_offset + sprite_bytes_len;
if next_offset > instance_buffer.size {
return false; return false;
} }
for v in &path.vertices { let buffer_contents = unsafe {
*(p as *mut PathVertex<ScaledPixels>) = PathVertex { (instance_buffer.metal_buffer.contents() as *mut u8).add(*instance_offset)
xy_position: v.xy_position, };
content_mask: ContentMask {
bounds: path.content_mask.bounds, unsafe {
}, ptr::copy_nonoverlapping(
}; sprites.as_ptr() as *const u8,
p = p.add(mem::size_of::<PathVertex<ScaledPixels>>()); buffer_contents,
sprite_bytes_len,
);
} }
draw_indirect_commands.push(MTLDrawPrimitivesIndirectArguments { command_encoder.draw_primitives_instanced(
vertexCount: path.vertices.len() as u32,
instanceCount: 1,
vertexStart: first_vertex,
baseInstance: i as u32,
});
first_vertex += path.vertices.len() as u32;
}
// copy sprites
let sprites_offset = (p as u64) - (base_addr as u64);
if (p as usize) - (base_addr as usize) + (mem::size_of::<PathSprite>() * paths.len())
> instance_buffer.size
{
return false;
}
for path in paths {
*(p as *mut PathSprite) = PathSprite {
bounds: path.bounds,
color: path.color,
};
p = p.add(mem::size_of::<PathSprite>());
}
// copy indirect commands
let icb_bytes_len = mem::size_of_val(draw_indirect_commands.as_slice());
let icb_offset = (p as u64) - (base_addr as u64);
if (p as usize) - (base_addr as usize) + icb_bytes_len > instance_buffer.size {
return false;
}
ptr::copy_nonoverlapping(
draw_indirect_commands.as_ptr() as *const u8,
p,
icb_bytes_len,
);
p = p.add(icb_bytes_len);
// draw path
command_encoder.set_vertex_buffer(
PathInputIndex::Vertices as u64,
Some(&instance_buffer.metal_buffer),
vertices_offset as u64,
);
command_encoder.set_vertex_bytes(
PathInputIndex::ViewportSize as u64,
mem::size_of_val(&viewport_size) as u64,
&viewport_size as *const Size<DevicePixels> as *const _,
);
command_encoder.set_vertex_buffer(
PathInputIndex::Sprites as u64,
Some(&instance_buffer.metal_buffer),
sprites_offset,
);
command_encoder.set_fragment_buffer(
PathInputIndex::Sprites as u64,
Some(&instance_buffer.metal_buffer),
sprites_offset,
);
for i in 0..paths.len() {
command_encoder.draw_primitives_indirect(
metal::MTLPrimitiveType::Triangle, metal::MTLPrimitiveType::Triangle,
&instance_buffer.metal_buffer, 0,
icb_offset 6,
+ (i * std::mem::size_of::<MTLDrawPrimitivesIndirectArguments>()) as u64, sprites.len() as u64,
); );
*instance_offset = next_offset;
sprites.clear();
} }
*instance_offset = (p as usize) - (base_addr as usize);
} }
true true
} }
@ -1053,7 +1143,6 @@ fn build_pipeline_state(
vertex_fn_name: &str, vertex_fn_name: &str,
fragment_fn_name: &str, fragment_fn_name: &str,
pixel_format: metal::MTLPixelFormat, pixel_format: metal::MTLPixelFormat,
sample_count: u64,
) -> metal::RenderPipelineState { ) -> metal::RenderPipelineState {
let vertex_fn = library let vertex_fn = library
.get_function(vertex_fn_name, None) .get_function(vertex_fn_name, None)
@ -1066,7 +1155,6 @@ fn build_pipeline_state(
descriptor.set_label(label); descriptor.set_label(label);
descriptor.set_vertex_function(Some(vertex_fn.as_ref())); descriptor.set_vertex_function(Some(vertex_fn.as_ref()));
descriptor.set_fragment_function(Some(fragment_fn.as_ref())); descriptor.set_fragment_function(Some(fragment_fn.as_ref()));
descriptor.set_sample_count(sample_count);
let color_attachment = descriptor.color_attachments().object_at(0).unwrap(); let color_attachment = descriptor.color_attachments().object_at(0).unwrap();
color_attachment.set_pixel_format(pixel_format); color_attachment.set_pixel_format(pixel_format);
color_attachment.set_blending_enabled(true); color_attachment.set_blending_enabled(true);
@ -1082,45 +1170,50 @@ fn build_pipeline_state(
.expect("could not create render pipeline state") .expect("could not create render pipeline state")
} }
fn build_path_rasterization_pipeline_state(
device: &metal::DeviceRef,
library: &metal::LibraryRef,
label: &str,
vertex_fn_name: &str,
fragment_fn_name: &str,
pixel_format: metal::MTLPixelFormat,
path_sample_count: u32,
) -> metal::RenderPipelineState {
let vertex_fn = library
.get_function(vertex_fn_name, None)
.expect("error locating vertex function");
let fragment_fn = library
.get_function(fragment_fn_name, None)
.expect("error locating fragment function");
let descriptor = metal::RenderPipelineDescriptor::new();
descriptor.set_label(label);
descriptor.set_vertex_function(Some(vertex_fn.as_ref()));
descriptor.set_fragment_function(Some(fragment_fn.as_ref()));
if path_sample_count > 1 {
descriptor.set_raster_sample_count(path_sample_count as _);
descriptor.set_alpha_to_coverage_enabled(true);
}
let color_attachment = descriptor.color_attachments().object_at(0).unwrap();
color_attachment.set_pixel_format(pixel_format);
color_attachment.set_blending_enabled(true);
color_attachment.set_rgb_blend_operation(metal::MTLBlendOperation::Add);
color_attachment.set_alpha_blend_operation(metal::MTLBlendOperation::Add);
color_attachment.set_source_rgb_blend_factor(metal::MTLBlendFactor::One);
color_attachment.set_source_alpha_blend_factor(metal::MTLBlendFactor::One);
color_attachment.set_destination_rgb_blend_factor(metal::MTLBlendFactor::One);
color_attachment.set_destination_alpha_blend_factor(metal::MTLBlendFactor::One);
device
.new_render_pipeline_state(&descriptor)
.expect("could not create render pipeline state")
}
// Align to multiples of 256 to make Metal happy. // Align to multiples of 256 to make Metal happy.
fn align_offset(offset: &mut usize) { fn align_offset(offset: &mut usize) {
*offset = (*offset).div_ceil(256) * 256; *offset = (*offset).div_ceil(256) * 256;
} }
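A quick usage example of the alignment rule above, showing how offsets get padded to the next 256-byte boundary:

```rust
// Sketch: Metal instance buffer offsets are padded up to the next 256-byte
// boundary, matching `align_offset` above.
fn align_offset(offset: &mut usize) {
    *offset = offset.div_ceil(256) * 256;
}

fn main() {
    let mut o = 0;
    align_offset(&mut o);
    assert_eq!(o, 0); // already aligned
    let mut o = 1;
    align_offset(&mut o);
    assert_eq!(o, 256); // rounded up to the next boundary
    let mut o = 257;
    align_offset(&mut o);
    assert_eq!(o, 512);
}
```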
fn create_msaa_texture(
device: &metal::Device,
layer: &metal::MetalLayer,
sample_count: u64,
) -> Option<metal::Texture> {
let viewport_size = layer.drawable_size();
let width = viewport_size.width.ceil() as u64;
let height = viewport_size.height.ceil() as u64;
if width == 0 || height == 0 {
return None;
}
if sample_count <= 1 {
return None;
}
let texture_descriptor = metal::TextureDescriptor::new();
texture_descriptor.set_texture_type(metal::MTLTextureType::D2Multisample);
// MTLStorageMode defaults to `shared` only on Apple silicon GPUs. Use `private` for both Apple and Intel GPUs.
// Reference: https://developer.apple.com/documentation/metal/choosing-a-resource-storage-mode-for-apple-gpus
texture_descriptor.set_storage_mode(metal::MTLStorageMode::Private);
texture_descriptor.set_width(width);
texture_descriptor.set_height(height);
texture_descriptor.set_pixel_format(layer.pixel_format());
texture_descriptor.set_usage(metal::MTLTextureUsage::RenderTarget);
texture_descriptor.set_sample_count(sample_count);
let metal_texture = device.new_texture(&texture_descriptor);
Some(metal_texture)
}
#[repr(C)] #[repr(C)]
enum ShadowInputIndex { enum ShadowInputIndex {
Vertices = 0, Vertices = 0,
@ -1162,10 +1255,9 @@ enum SurfaceInputIndex {
} }
#[repr(C)] #[repr(C)]
enum PathInputIndex { enum PathRasterizationInputIndex {
Vertices = 0, Vertices = 0,
ViewportSize = 1, AtlasTextureSize = 1,
Sprites = 2,
} }
#[derive(Clone, Debug, Eq, PartialEq)] #[derive(Clone, Debug, Eq, PartialEq)]
@ -1173,6 +1265,7 @@ enum PathInputIndex {
pub struct PathSprite { pub struct PathSprite {
pub bounds: Bounds<ScaledPixels>, pub bounds: Bounds<ScaledPixels>,
pub color: Background, pub color: Background,
pub tile: AtlasTile,
} }
#[derive(Clone, Debug, Eq, PartialEq)] #[derive(Clone, Debug, Eq, PartialEq)]


@ -698,27 +698,76 @@ fragment float4 polychrome_sprite_fragment(
return color; return color;
} }
struct PathVertexOutput { struct PathRasterizationVertexOutput {
float4 position [[position]]; float4 position [[position]];
float2 st_position;
float clip_rect_distance [[clip_distance]][4];
};
struct PathRasterizationFragmentInput {
float4 position [[position]];
float2 st_position;
};
vertex PathRasterizationVertexOutput path_rasterization_vertex(
uint vertex_id [[vertex_id]],
constant PathVertex_ScaledPixels *vertices
[[buffer(PathRasterizationInputIndex_Vertices)]],
constant Size_DevicePixels *atlas_size
[[buffer(PathRasterizationInputIndex_AtlasTextureSize)]]) {
PathVertex_ScaledPixels v = vertices[vertex_id];
float2 vertex_position = float2(v.xy_position.x, v.xy_position.y);
float2 viewport_size = float2(atlas_size->width, atlas_size->height);
return PathRasterizationVertexOutput{
float4(vertex_position / viewport_size * float2(2., -2.) +
float2(-1., 1.),
0., 1.),
float2(v.st_position.x, v.st_position.y),
{v.xy_position.x - v.content_mask.bounds.origin.x,
v.content_mask.bounds.origin.x + v.content_mask.bounds.size.width -
v.xy_position.x,
v.xy_position.y - v.content_mask.bounds.origin.y,
v.content_mask.bounds.origin.y + v.content_mask.bounds.size.height -
v.xy_position.y}};
}
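The vertex stage above maps atlas-texel positions to Metal clip space by hand, since path rasterization targets an atlas texture rather than the drawable: divide by the atlas size, scale by (2, -2), offset by (-1, 1). A quick sketch of that transform:

```rust
// Sketch of the clip-space transform used by `path_rasterization_vertex`:
// atlas-texel coordinates are mapped to Metal NDC, with y flipped so that
// texel (0, 0) lands at the top-left corner (-1, 1).
fn texel_to_ndc(position: [f32; 2], atlas_size: [f32; 2]) -> [f32; 2] {
    [
        position[0] / atlas_size[0] * 2.0 - 1.0,
        position[1] / atlas_size[1] * -2.0 + 1.0,
    ]
}

fn main() {
    let size = [1024.0, 512.0];
    assert_eq!(texel_to_ndc([0.0, 0.0], size), [-1.0, 1.0]); // top-left
    assert_eq!(texel_to_ndc([1024.0, 512.0], size), [1.0, -1.0]); // bottom-right
    assert_eq!(texel_to_ndc([512.0, 256.0], size), [0.0, 0.0]); // center
}
```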
fragment float4 path_rasterization_fragment(PathRasterizationFragmentInput input
[[stage_in]]) {
float2 dx = dfdx(input.st_position);
float2 dy = dfdy(input.st_position);
float2 gradient = float2((2. * input.st_position.x) * dx.x - dx.y,
(2. * input.st_position.x) * dy.x - dy.y);
float f = (input.st_position.x * input.st_position.x) - input.st_position.y;
float distance = f / length(gradient);
float alpha = saturate(0.5 - distance);
return float4(alpha, 0., 0., 1.);
}
struct PathSpriteVertexOutput {
float4 position [[position]];
float2 tile_position;
uint sprite_id [[flat]]; uint sprite_id [[flat]];
float4 solid_color [[flat]]; float4 solid_color [[flat]];
float4 color0 [[flat]]; float4 color0 [[flat]];
float4 color1 [[flat]]; float4 color1 [[flat]];
float4 clip_distance;
}; };
vertex PathVertexOutput path_vertex( vertex PathSpriteVertexOutput path_sprite_vertex(
uint vertex_id [[vertex_id]], uint unit_vertex_id [[vertex_id]], uint sprite_id [[instance_id]],
constant PathVertex_ScaledPixels *vertices [[buffer(PathInputIndex_Vertices)]], constant float2 *unit_vertices [[buffer(SpriteInputIndex_Vertices)]],
uint sprite_id [[instance_id]], constant PathSprite *sprites [[buffer(SpriteInputIndex_Sprites)]],
constant PathSprite *sprites [[buffer(PathInputIndex_Sprites)]], constant Size_DevicePixels *viewport_size
constant Size_DevicePixels *input_viewport_size [[buffer(PathInputIndex_ViewportSize)]]) { [[buffer(SpriteInputIndex_ViewportSize)]],
PathVertex_ScaledPixels v = vertices[vertex_id]; constant Size_DevicePixels *atlas_size
float2 vertex_position = float2(v.xy_position.x, v.xy_position.y); [[buffer(SpriteInputIndex_AtlasTextureSize)]]) {
float2 viewport_size = float2((float)input_viewport_size->width,
(float)input_viewport_size->height); float2 unit_vertex = unit_vertices[unit_vertex_id];
PathSprite sprite = sprites[sprite_id]; PathSprite sprite = sprites[sprite_id];
float4 device_position = float4(vertex_position / viewport_size * float2(2., -2.) + float2(-1., 1.), 0., 1.); // Don't apply content mask because it was already accounted for when
// rasterizing the path.
float4 device_position =
to_device_position(unit_vertex, sprite.bounds, viewport_size);
float2 tile_position = to_tile_position(unit_vertex, sprite.tile, atlas_size);
GradientColor gradient = prepare_fill_color( GradientColor gradient = prepare_fill_color(
sprite.color.tag, sprite.color.tag,
@ -728,32 +777,30 @@ vertex PathVertexOutput path_vertex(
sprite.color.colors[1].color sprite.color.colors[1].color
); );
return PathVertexOutput{ return PathSpriteVertexOutput{
device_position, device_position,
tile_position,
sprite_id, sprite_id,
gradient.solid, gradient.solid,
gradient.color0, gradient.color0,
gradient.color1, gradient.color1
{v.xy_position.x - v.content_mask.bounds.origin.x,
v.content_mask.bounds.origin.x + v.content_mask.bounds.size.width -
v.xy_position.x,
v.xy_position.y - v.content_mask.bounds.origin.y,
v.content_mask.bounds.origin.y + v.content_mask.bounds.size.height -
v.xy_position.y}
}; };
} }
fragment float4 path_fragment( fragment float4 path_sprite_fragment(
PathVertexOutput input [[stage_in]], PathSpriteVertexOutput input [[stage_in]],
constant PathSprite *sprites [[buffer(PathInputIndex_Sprites)]]) { constant PathSprite *sprites [[buffer(SpriteInputIndex_Sprites)]],
if (any(input.clip_distance < float4(0.0))) { texture2d<float> atlas_texture [[texture(SpriteInputIndex_AtlasTexture)]]) {
return float4(0.0); constexpr sampler atlas_texture_sampler(mag_filter::linear,
} min_filter::linear);
float4 sample =
atlas_texture.sample(atlas_texture_sampler, input.tile_position);
float mask = 1. - abs(1. - fmod(sample.r, 2.));
PathSprite sprite = sprites[input.sprite_id]; PathSprite sprite = sprites[input.sprite_id];
Background background = sprite.color; Background background = sprite.color;
float4 color = fill_color(background, input.position.xy, sprite.bounds, float4 color = fill_color(background, input.position.xy, sprite.bounds,
input.solid_color, input.color0, input.color1); input.solid_color, input.color0, input.color1);
color.a *= mask;
return color; return color;
} }
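The two fragment shaders above split path rendering into two passes: path_rasterization_fragment evaluates the quadratic-curve implicit f(s, t) = s^2 - t (the standard Loop-Blinn formulation) and turns its signed distance into coverage, while path_sprite_fragment folds the accumulated red channel of the atlas into an even-odd winding mask. The Rust sketch below mirrors that arithmetic on the CPU for illustration only; d_dx, d_dy, and accumulated_r are made-up stand-ins for the GPU's dfdx/dfdy derivatives and the sampled texel.

// CPU-side sketch (not repository code) of the shader math above.
// `curve_coverage` mirrors path_rasterization_fragment: signed distance to
// f(s, t) = s^2 - t, mapped to coverage via saturate(0.5 - distance).
fn curve_coverage(st: (f32, f32), d_dx: (f32, f32), d_dy: (f32, f32)) -> f32 {
    let (s, t) = st;
    let gradient = (2.0 * s * d_dx.0 - d_dx.1, 2.0 * s * d_dy.0 - d_dy.1);
    let f = s * s - t;
    let distance = f / (gradient.0 * gradient.0 + gradient.1 * gradient.1).sqrt();
    (0.5 - distance).clamp(0.0, 1.0)
}

// Mirrors path_sprite_fragment's mask: coverage accumulated an even number of
// times cancels out (even-odd winding), odd counts stay filled.
fn winding_mask(accumulated_r: f32) -> f32 {
    1.0 - (1.0 - accumulated_r.rem_euclid(2.0)).abs()
}

fn main() {
    assert_eq!(winding_mask(1.0), 1.0); // covered once: filled
    assert_eq!(winding_mask(2.0), 0.0); // covered twice: hole
    assert!(curve_coverage((0.5, 0.25), (0.01, 0.0), (0.0, 0.01)) > 0.4);
}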

View file

@@ -341,7 +341,7 @@ impl PlatformAtlas for TestAtlas {
         crate::AtlasTile {
             texture_id: AtlasTextureId {
                 index: texture_id,
-                kind: crate::AtlasTextureKind::Polychrome,
+                kind: crate::AtlasTextureKind::Path,
             },
             tile_id: TileId(tile_id),
             padding: 0,

View file

@@ -6,7 +6,7 @@ use serde::{Deserialize, Serialize};
 use crate::{
     AtlasTextureId, AtlasTile, Background, Bounds, ContentMask, Corners, Edges, Hsla, Pixels,
-    Point, Radians, ScaledPixels, Size, bounds_tree::BoundsTree,
+    Point, Radians, ScaledPixels, Size, bounds_tree::BoundsTree, point,
 };
 use std::{fmt::Debug, iter::Peekable, ops::Range, slice};
@@ -43,7 +43,13 @@ impl Scene {
         self.surfaces.clear();
     }

-    #[allow(dead_code)]
+    #[cfg_attr(
+        all(
+            any(target_os = "linux", target_os = "freebsd"),
+            not(any(feature = "x11", feature = "wayland"))
+        ),
+        allow(dead_code)
+    )]
     pub fn paths(&self) -> &[Path<ScaledPixels>] {
         &self.paths
     }
@@ -683,7 +689,6 @@ pub struct Path<P: Clone + Debug + Default + PartialEq> {
     start: Point<P>,
     current: Point<P>,
     contour_count: usize,
-    base_scale: f32,
 }

 impl Path<Pixels> {
@@ -702,35 +707,25 @@ impl Path<Pixels> {
             content_mask: Default::default(),
             color: Default::default(),
             contour_count: 0,
-            base_scale: 1.0,
         }
     }

-    /// Set the base scale of the path.
-    pub fn scale(mut self, factor: f32) -> Self {
-        self.base_scale = factor;
-        self
-    }
-
-    /// Apply a scale to the path.
-    pub(crate) fn apply_scale(&self, factor: f32) -> Path<ScaledPixels> {
+    /// Scale this path by the given factor.
+    pub fn scale(&self, factor: f32) -> Path<ScaledPixels> {
         Path {
             id: self.id,
             order: self.order,
-            bounds: self.bounds.scale(self.base_scale * factor),
-            content_mask: self.content_mask.scale(self.base_scale * factor),
+            bounds: self.bounds.scale(factor),
+            content_mask: self.content_mask.scale(factor),
             vertices: self
                 .vertices
                 .iter()
-                .map(|vertex| vertex.scale(self.base_scale * factor))
+                .map(|vertex| vertex.scale(factor))
                 .collect(),
-            start: self
-                .start
-                .map(|start| start.scale(self.base_scale * factor)),
-            current: self.current.scale(self.base_scale * factor),
+            start: self.start.map(|start| start.scale(factor)),
+            current: self.current.scale(factor),
             contour_count: self.contour_count,
             color: self.color,
-            base_scale: 1.0,
         }
     }
@@ -745,7 +740,10 @@ impl Path<Pixels> {
     pub fn line_to(&mut self, to: Point<Pixels>) {
         self.contour_count += 1;
         if self.contour_count > 1 {
-            self.push_triangle((self.start, self.current, to));
+            self.push_triangle(
+                (self.start, self.current, to),
+                (point(0., 1.), point(0., 1.), point(0., 1.)),
+            );
         }
         self.current = to;
     }
@@ -754,15 +752,25 @@ impl Path<Pixels> {
     pub fn curve_to(&mut self, to: Point<Pixels>, ctrl: Point<Pixels>) {
         self.contour_count += 1;
         if self.contour_count > 1 {
-            self.push_triangle((self.start, self.current, to));
+            self.push_triangle(
+                (self.start, self.current, to),
+                (point(0., 1.), point(0., 1.), point(0., 1.)),
+            );
         }

-        self.push_triangle((self.current, ctrl, to));
+        self.push_triangle(
+            (self.current, ctrl, to),
+            (point(0., 0.), point(0.5, 0.), point(1., 1.)),
+        );
         self.current = to;
     }

     /// Push a triangle to the Path.
-    pub fn push_triangle(&mut self, xy: (Point<Pixels>, Point<Pixels>, Point<Pixels>)) {
+    pub fn push_triangle(
+        &mut self,
+        xy: (Point<Pixels>, Point<Pixels>, Point<Pixels>),
+        st: (Point<f32>, Point<f32>, Point<f32>),
+    ) {
         self.bounds = self
             .bounds
             .union(&Bounds {
@@ -780,14 +788,17 @@ impl Path<Pixels> {
         self.vertices.push(PathVertex {
             xy_position: xy.0,
+            st_position: st.0,
             content_mask: Default::default(),
         });
         self.vertices.push(PathVertex {
             xy_position: xy.1,
+            st_position: st.1,
             content_mask: Default::default(),
         });
         self.vertices.push(PathVertex {
             xy_position: xy.2,
+            st_position: st.2,
             content_mask: Default::default(),
         });
     }
@@ -803,6 +814,7 @@ impl From<Path<ScaledPixels>> for Primitive {
 #[repr(C)]
 pub(crate) struct PathVertex<P: Clone + Debug + Default + PartialEq> {
     pub(crate) xy_position: Point<P>,
+    pub(crate) st_position: Point<f32>,
     pub(crate) content_mask: ContentMask<P>,
 }
@@ -810,6 +822,7 @@ impl PathVertex<Pixels> {
     pub fn scale(&self, factor: f32) -> PathVertex<ScaledPixels> {
         PathVertex {
             xy_position: self.xy_position.scale(factor),
+            st_position: self.st_position,
             content_mask: self.content_mask.scale(factor),
         }
     }
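The st coordinates threaded through push_triangle above feed the rasterization shader's implicit: interior triangles from line_to carry st = (0, 1) at every vertex, so s^2 - t is negative over the whole triangle and it fills solidly, while curve triangles from curve_to carry (0, 0), (0.5, 0), (1, 1), for which s^2 - t vanishes exactly on the quadratic Bézier defined by those points. A tiny illustrative check (not repository code):

// Illustrative only: the implicit evaluated for the st values assigned in
// line_to/curve_to above.
fn implicit(s: f32, t: f32) -> f32 {
    s * s - t
}

fn main() {
    // Interior triangle (st = (0, 1) everywhere): always "inside".
    assert!(implicit(0.0, 1.0) < 0.0);
    // Curve triangle (st corners (0,0), (0.5,0), (1,1)): the Bézier midpoint
    // interpolates to st = (0.5, 0.25), which lies exactly on the curve.
    assert_eq!(implicit(0.5, 0.25), 0.0);
    // Points past the curve within the triangle evaluate positive ("outside").
    assert!(implicit(0.9, 0.5) > 0.0);
}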

View file

@@ -2424,6 +2424,53 @@ impl Window {
         result
     }
/// Use a piece of state that exists as long this element is being rendered in consecutive frames.
pub fn use_keyed_state<S: 'static>(
&mut self,
key: impl Into<ElementId>,
cx: &mut App,
init: impl FnOnce(&mut Self, &mut App) -> S,
) -> Entity<S> {
let current_view = self.current_view();
self.with_global_id(key.into(), |global_id, window| {
window.with_element_state(global_id, |state: Option<Entity<S>>, window| {
if let Some(state) = state {
(state.clone(), state)
} else {
let new_state = cx.new(|cx| init(window, cx));
cx.observe(&new_state, move |_, cx| {
cx.notify(current_view);
})
.detach();
(new_state.clone(), new_state)
}
})
})
}
/// Immediately push an element ID onto the stack. Useful for simplifying IDs in lists
pub fn with_id<R>(&mut self, id: impl Into<ElementId>, f: impl FnOnce(&mut Self) -> R) -> R {
self.with_global_id(id.into(), |_, window| f(window))
}
/// Use a piece of state that exists as long this element is being rendered in consecutive frames, without needing to specify a key
///
/// NOTE: This method uses the location of the caller to generate an ID for this state.
/// If this is not sufficient to identify your state (e.g. you're rendering a list item),
/// you can provide a custom ElementID using the `use_keyed_state` method.
#[track_caller]
pub fn use_state<S: 'static>(
&mut self,
cx: &mut App,
init: impl FnOnce(&mut Self, &mut App) -> S,
) -> Entity<S> {
self.use_keyed_state(
ElementId::CodeLocation(*core::panic::Location::caller()),
cx,
init,
)
}
     /// Updates or initializes state for an element with the given id that lives across multiple
     /// frames. If an element with this ID existed in the rendered frame, its state will be passed
     /// to the given closure. The state returned by the closure will be stored so it can be referenced
@@ -2658,7 +2705,7 @@ impl Window {
         path.color = color.opacity(opacity);
         self.next_frame
             .scene
-            .insert_primitive(path.apply_scale(scale_factor));
+            .insert_primitive(path.scale(scale_factor));
     }

     /// Paint an underline into the scene for the next frame at the current z-index.
@@ -4577,6 +4624,8 @@ pub enum ElementId {
     NamedInteger(SharedString, u64),
     /// A path.
     Path(Arc<std::path::Path>),
+    /// A code location.
+    CodeLocation(core::panic::Location<'static>),
 }

 impl ElementId {
@@ -4596,6 +4645,7 @@ impl Display for ElementId {
             ElementId::NamedInteger(s, i) => write!(f, "{}-{}", s, i)?,
             ElementId::Uuid(uuid) => write!(f, "{}", uuid)?,
             ElementId::Path(path) => write!(f, "{}", path.display())?,
+            ElementId::CodeLocation(location) => write!(f, "{}", location)?,
         }
         Ok(())
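A side note on the ElementId::CodeLocation variant that use_state relies on above: #[track_caller] makes core::panic::Location::caller() report the caller's source position, so every distinct call site of use_state gets its own stable key without an explicit ElementId. The snippet below is std-only, not gpui code, and merely demonstrates that property.

use std::panic::Location;

// Each call site receives a distinct Location, which is what lets
// location-keyed state stay separate per call site across frames.
#[track_caller]
fn state_key() -> &'static Location<'static> {
    Location::caller()
}

fn main() {
    let first = state_key();
    let second = state_key();
    assert_eq!(first.file(), second.file());
    assert_ne!((first.line(), first.column()), (second.line(), second.column()));
    println!("first call site: {first}, second call site: {second}");
}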

View file

@@ -53,6 +53,16 @@ pub fn derive_app_context(input: TokenStream) -> TokenStream {
                 self.#app_variable.update_entity(handle, update)
             }
fn as_mut<'y, 'z, T>(
&'y mut self,
handle: &'z gpui::Entity<T>,
) -> Self::Result<gpui::GpuiBorrow<'y, T>>
where
T: 'static,
{
self.#app_variable.as_mut(handle)
}
             fn read_entity<T, R>(
                 &self,
                 handle: &gpui::Entity<T>,

View file

@@ -4,6 +4,7 @@ pub mod github;
 pub use anyhow::{Result, anyhow};
 pub use async_body::{AsyncBody, Inner};
 use derive_more::Deref;
+use http::HeaderValue;
 pub use http::{self, Method, Request, Response, StatusCode, Uri};

 use futures::future::BoxFuture;
@@ -39,6 +40,8 @@ impl HttpRequestExt for http::request::Builder {
 pub trait HttpClient: 'static + Send + Sync {
     fn type_name(&self) -> &'static str;

+    fn user_agent(&self) -> Option<&HeaderValue>;
+
     fn send(
         &self,
         req: http::Request<AsyncBody>,
@ -118,6 +121,10 @@ impl HttpClient for HttpClientWithProxy {
self.client.send(req) self.client.send(req)
} }
fn user_agent(&self) -> Option<&HeaderValue> {
self.client.user_agent()
}
fn proxy(&self) -> Option<&Url> { fn proxy(&self) -> Option<&Url> {
self.proxy.as_ref() self.proxy.as_ref()
} }
@ -135,6 +142,10 @@ impl HttpClient for Arc<HttpClientWithProxy> {
self.client.send(req) self.client.send(req)
} }
fn user_agent(&self) -> Option<&HeaderValue> {
self.client.user_agent()
}
fn proxy(&self) -> Option<&Url> { fn proxy(&self) -> Option<&Url> {
self.proxy.as_ref() self.proxy.as_ref()
} }
@ -250,6 +261,10 @@ impl HttpClient for Arc<HttpClientWithUrl> {
self.client.send(req) self.client.send(req)
} }
fn user_agent(&self) -> Option<&HeaderValue> {
self.client.user_agent()
}
fn proxy(&self) -> Option<&Url> { fn proxy(&self) -> Option<&Url> {
self.client.proxy.as_ref() self.client.proxy.as_ref()
} }
@ -267,6 +282,10 @@ impl HttpClient for HttpClientWithUrl {
self.client.send(req) self.client.send(req)
} }
fn user_agent(&self) -> Option<&HeaderValue> {
self.client.user_agent()
}
fn proxy(&self) -> Option<&Url> { fn proxy(&self) -> Option<&Url> {
self.client.proxy.as_ref() self.client.proxy.as_ref()
} }
@ -314,6 +333,10 @@ impl HttpClient for BlockedHttpClient {
}) })
} }
fn user_agent(&self) -> Option<&HeaderValue> {
None
}
fn proxy(&self) -> Option<&Url> { fn proxy(&self) -> Option<&Url> {
None None
} }
@ -334,6 +357,7 @@ type FakeHttpHandler = Box<
#[cfg(feature = "test-support")] #[cfg(feature = "test-support")]
pub struct FakeHttpClient { pub struct FakeHttpClient {
handler: FakeHttpHandler, handler: FakeHttpHandler,
user_agent: HeaderValue,
} }
#[cfg(feature = "test-support")] #[cfg(feature = "test-support")]
@ -348,6 +372,7 @@ impl FakeHttpClient {
client: HttpClientWithProxy { client: HttpClientWithProxy {
client: Arc::new(Self { client: Arc::new(Self {
handler: Box::new(move |req| Box::pin(handler(req))), handler: Box::new(move |req| Box::pin(handler(req))),
user_agent: HeaderValue::from_static(type_name::<Self>()),
}), }),
proxy: None, proxy: None,
}, },
@ -390,6 +415,10 @@ impl HttpClient for FakeHttpClient {
future future
} }
fn user_agent(&self) -> Option<&HeaderValue> {
Some(&self.user_agent)
}
fn proxy(&self) -> Option<&Url> { fn proxy(&self) -> Option<&Url> {
None None
} }
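With the new HttpClient::user_agent accessor, code that assembles its own requests (for example a WebSocket upgrade request) can reuse the User-Agent already configured on the parent client. A hypothetical helper, not part of this changeset, might look like:

use http_client::{HttpClient, http};

// Hypothetical helper (not in the repository): copy the client's User-Agent,
// when it has one, onto an outgoing request builder.
fn with_user_agent(
    client: &dyn HttpClient,
    builder: http::request::Builder,
) -> http::request::Builder {
    match client.user_agent() {
        Some(agent) => builder.header(http::header::USER_AGENT, agent.clone()),
        None => builder,
    }
}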

View file

@@ -2072,6 +2072,21 @@ impl Buffer {
         self.text.push_transaction(transaction, now);
     }
/// Differs from `push_transaction` in that it does not clear the redo
/// stack. Intended to be used to create a parent transaction to merge
/// potential child transactions into.
///
/// The caller is responsible for removing it from the undo history using
/// `forget_transaction` if no edits are merged into it. Otherwise, if edits
/// are merged into this transaction, the caller is responsible for ensuring
/// the redo stack is cleared. The easiest way to ensure the redo stack is
/// cleared is to create transactions with the usual `start_transaction` and
/// `end_transaction` methods and merging the resulting transactions into
/// the transaction created by this method
pub fn push_empty_transaction(&mut self, now: Instant) -> TransactionId {
self.text.push_empty_transaction(now)
}
     /// Prevent the last transaction from being grouped with any subsequent transactions,
     /// even if they occur with the buffer's undo grouping duration.
     pub fn finalize_last_transaction(&mut self) -> Option<&Transaction> {

View file

@@ -116,6 +116,12 @@ pub enum LanguageModelCompletionError {
         provider: LanguageModelProviderName,
         message: String,
     },
#[error("{message}")]
UpstreamProviderError {
message: String,
status: StatusCode,
retry_after: Option<Duration>,
},
     #[error("HTTP response error from {provider}'s API: status {status_code} - {message:?}")]
     HttpResponseError {
         provider: LanguageModelProviderName,

View file

@@ -644,8 +644,62 @@ struct ApiError {
     headers: HeaderMap<HeaderValue>,
 }
/// Represents error responses from Zed's cloud API.
///
/// Example JSON for an upstream HTTP error:
/// ```json
/// {
/// "code": "upstream_http_error",
/// "message": "Received an error from the Anthropic API: upstream connect error or disconnect/reset before headers, reset reason: connection timeout",
/// "upstream_status": 503
/// }
/// ```
#[derive(Debug, serde::Deserialize)]
struct CloudApiError {
code: String,
message: String,
#[serde(default)]
#[serde(deserialize_with = "deserialize_optional_status_code")]
upstream_status: Option<StatusCode>,
#[serde(default)]
retry_after: Option<f64>,
}
fn deserialize_optional_status_code<'de, D>(deserializer: D) -> Result<Option<StatusCode>, D::Error>
where
D: serde::Deserializer<'de>,
{
let opt: Option<u16> = Option::deserialize(deserializer)?;
Ok(opt.and_then(|code| StatusCode::from_u16(code).ok()))
}
 impl From<ApiError> for LanguageModelCompletionError {
     fn from(error: ApiError) -> Self {
if let Ok(cloud_error) = serde_json::from_str::<CloudApiError>(&error.body) {
if cloud_error.code.starts_with("upstream_http_") {
let status = if let Some(status) = cloud_error.upstream_status {
status
} else if cloud_error.code.ends_with("_error") {
error.status
} else {
// If there's a status code in the code string (e.g. "upstream_http_429")
// then use that; otherwise, see if the JSON contains a status code.
cloud_error
.code
.strip_prefix("upstream_http_")
.and_then(|code_str| code_str.parse::<u16>().ok())
.and_then(|code| StatusCode::from_u16(code).ok())
.unwrap_or(error.status)
};
return LanguageModelCompletionError::UpstreamProviderError {
message: cloud_error.message,
status,
retry_after: cloud_error.retry_after.map(Duration::from_secs_f64),
};
}
}
         let retry_after = None;
         LanguageModelCompletionError::from_http_status(
             PROVIDER_NAME,
@@ -1279,3 +1333,155 @@ impl Component for ZedAiConfiguration {
         )
     }
 }
#[cfg(test)]
mod tests {
use super::*;
use http_client::http::{HeaderMap, StatusCode};
use language_model::LanguageModelCompletionError;
#[test]
fn test_api_error_conversion_with_upstream_http_error() {
// upstream_http_error with 503 status should become ServerOverloaded
let error_body = r#"{"code":"upstream_http_error","message":"Received an error from the Anthropic API: upstream connect error or disconnect/reset before headers, reset reason: connection timeout","upstream_status":503}"#;
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::UpstreamProviderError { message, .. } => {
assert_eq!(
message,
"Received an error from the Anthropic API: upstream connect error or disconnect/reset before headers, reset reason: connection timeout"
);
}
_ => panic!(
"Expected UpstreamProviderError for upstream 503, got: {:?}",
completion_error
),
}
// upstream_http_error with 500 status should become ApiInternalServerError
let error_body = r#"{"code":"upstream_http_error","message":"Received an error from the OpenAI API: internal server error","upstream_status":500}"#;
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::UpstreamProviderError { message, .. } => {
assert_eq!(
message,
"Received an error from the OpenAI API: internal server error"
);
}
_ => panic!(
"Expected UpstreamProviderError for upstream 500, got: {:?}",
completion_error
),
}
// upstream_http_error with 429 status should become RateLimitExceeded
let error_body = r#"{"code":"upstream_http_error","message":"Received an error from the Google API: rate limit exceeded","upstream_status":429}"#;
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::UpstreamProviderError { message, .. } => {
assert_eq!(
message,
"Received an error from the Google API: rate limit exceeded"
);
}
_ => panic!(
"Expected UpstreamProviderError for upstream 429, got: {:?}",
completion_error
),
}
// Regular 500 error without upstream_http_error should remain ApiInternalServerError for Zed
let error_body = "Regular internal server error";
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::ApiInternalServerError { provider, message } => {
assert_eq!(provider, PROVIDER_NAME);
assert_eq!(message, "Regular internal server error");
}
_ => panic!(
"Expected ApiInternalServerError for regular 500, got: {:?}",
completion_error
),
}
// upstream_http_429 format should be converted to UpstreamProviderError
let error_body = r#"{"code":"upstream_http_429","message":"Upstream Anthropic rate limit exceeded.","retry_after":30.5}"#;
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::UpstreamProviderError {
message,
status,
retry_after,
} => {
assert_eq!(message, "Upstream Anthropic rate limit exceeded.");
assert_eq!(status, StatusCode::TOO_MANY_REQUESTS);
assert_eq!(retry_after, Some(Duration::from_secs_f64(30.5)));
}
_ => panic!(
"Expected UpstreamProviderError for upstream_http_429, got: {:?}",
completion_error
),
}
// Invalid JSON in error body should fall back to regular error handling
let error_body = "Not JSON at all";
let api_error = ApiError {
status: StatusCode::INTERNAL_SERVER_ERROR,
body: error_body.to_string(),
headers: HeaderMap::new(),
};
let completion_error: LanguageModelCompletionError = api_error.into();
match completion_error {
LanguageModelCompletionError::ApiInternalServerError { provider, .. } => {
assert_eq!(provider, PROVIDER_NAME);
}
_ => panic!(
"Expected ApiInternalServerError for invalid JSON, got: {:?}",
completion_error
),
}
}
}

View file

@@ -410,8 +410,20 @@ pub fn into_mistral(
                         .push_part(mistral::MessagePart::Text { text: text.clone() });
                 }
                 MessageContent::RedactedThinking(_) => {}
-                MessageContent::ToolUse(_) | MessageContent::ToolResult(_) => {
-                    // Tool content is not supported in User messages for Mistral
-                }
+                MessageContent::ToolUse(_) => {
+                    // Tool use is not supported in User messages for Mistral
+                }
MessageContent::ToolResult(tool_result) => {
let tool_content = match &tool_result.content {
LanguageModelToolResultContent::Text(text) => text.to_string(),
LanguageModelToolResultContent::Image(_) => {
"[Tool responded with an image, but Zed doesn't support these in Mistral models yet]".to_string()
}
};
messages.push(mistral::RequestMessage::Tool {
content: tool_content,
tool_call_id: tool_result.tool_use_id.to_string(),
});
                 }
             }
         }
@@ -482,24 +494,6 @@ pub fn into_mistral(
         }
     }

-    for message in &request.messages {
-        for content in &message.content {
-            if let MessageContent::ToolResult(tool_result) = content {
-                let content = match &tool_result.content {
-                    LanguageModelToolResultContent::Text(text) => text.to_string(),
-                    LanguageModelToolResultContent::Image(_) => {
-                        "[Tool responded with an image, but Zed doesn't support these in Mistral models yet]".to_string()
-                    }
-                };
-                messages.push(mistral::RequestMessage::Tool {
-                    content,
-                    tool_call_id: tool_result.tool_use_id.to_string(),
-                });
-            }
-        }
-    }
-
     // The Mistral API requires that tool messages be followed by assistant messages,
     // not user messages. When we have a tool->user sequence in the conversation,
     // we need to insert a placeholder assistant message to maintain proper conversation

View file

@@ -231,6 +231,13 @@ impl JsonLspAdapter {
         ))
     }
schemas
.as_array_mut()
.unwrap()
.extend(cx.all_action_names().into_iter().map(|&name| {
project::lsp_store::json_language_server_ext::url_schema_for_action(name)
}));
         // This can be viewed via `dev: open language server logs` -> `json-language-server` ->
         // `Server Info`
         serde_json::json!({

View file

@@ -273,6 +273,7 @@ pub fn init(languages: Arc<LanguageRegistry>, node: NodeRuntime, cx: &mut App) {
         "Astro",
         "CSS",
         "ERB",
+        "HTML/ERB",
         "HEEX",
         "HTML",
         "JavaScript",

View file

@@ -179,6 +179,7 @@ impl LspAdapter for TailwindLspAdapter {
             ("Elixir".to_string(), "phoenix-heex".to_string()),
             ("HEEX".to_string(), "phoenix-heex".to_string()),
             ("ERB".to_string(), "erb".to_string()),
+            ("HTML/ERB".to_string(), "erb".to_string()),
             ("PHP".to_string(), "php".to_string()),
             ("Vue.js".to_string(), "vue".to_string()),
         ])

View file

@@ -1,4 +1,5 @@
 pub mod clangd_ext;
+pub mod json_language_server_ext;
 pub mod lsp_ext_command;
 pub mod rust_analyzer_ext;
@@ -1034,6 +1035,7 @@ impl LocalLspStore {
             })
             .detach();

+        json_language_server_ext::register_requests(this.clone(), language_server);
         rust_analyzer_ext::register_notifications(this.clone(), language_server);
         clangd_ext::register_notifications(this, language_server, adapter);
     }
@@ -1272,15 +1274,11 @@ impl LocalLspStore {
                     // grouped with the previous transaction in the history
                     // based on the transaction group interval
                     buffer.finalize_last_transaction();
-                    let transaction_id = buffer
+                    buffer
                         .start_transaction()
                         .context("transaction already open")?;
-                    let transaction = buffer
-                        .get_transaction(transaction_id)
-                        .expect("transaction started")
-                        .clone();
                     buffer.end_transaction(cx);
-                    buffer.push_transaction(transaction, cx.background_executor().now());
+                    let transaction_id = buffer.push_empty_transaction(cx.background_executor().now());
                     buffer.finalize_last_transaction();
                     anyhow::Ok(transaction_id)
                 })??;
@@ -3553,7 +3551,8 @@ pub struct LspStore {
     _maintain_buffer_languages: Task<()>,
     diagnostic_summaries:
         HashMap<WorktreeId, HashMap<Arc<Path>, HashMap<LanguageServerId, DiagnosticSummary>>>,
-    lsp_data: HashMap<BufferId, DocumentColorData>,
+    lsp_document_colors: HashMap<BufferId, DocumentColorData>,
+    lsp_code_lens: HashMap<BufferId, CodeLensData>,
 }

 #[derive(Debug, Default, Clone)]
@@ -3563,6 +3562,7 @@ pub struct DocumentColors {
 }

 type DocumentColorTask = Shared<Task<std::result::Result<DocumentColors, Arc<anyhow::Error>>>>;
+type CodeLensTask = Shared<Task<std::result::Result<Vec<CodeAction>, Arc<anyhow::Error>>>>;

 #[derive(Debug, Default)]
 struct DocumentColorData {
@@ -3572,8 +3572,15 @@ struct DocumentColorData {
     colors_update: Option<(Global, DocumentColorTask)>,
 }

+#[derive(Debug, Default)]
+struct CodeLensData {
+    lens_for_version: Global,
+    lens: HashMap<LanguageServerId, Vec<CodeAction>>,
+    update: Option<(Global, CodeLensTask)>,
+}
+
 #[derive(Debug, PartialEq, Eq, Clone, Copy)]
-pub enum ColorFetchStrategy {
+pub enum LspFetchStrategy {
     IgnoreCache,
     UseCache { known_cache_version: Option<usize> },
 }
@@ -3806,7 +3813,8 @@ impl LspStore {
             language_server_statuses: Default::default(),
             nonce: StdRng::from_entropy().r#gen(),
             diagnostic_summaries: HashMap::default(),
-            lsp_data: HashMap::default(),
+            lsp_document_colors: HashMap::default(),
+            lsp_code_lens: HashMap::default(),
             active_entry: None,
             _maintain_workspace_config,
             _maintain_buffer_languages: Self::maintain_buffer_languages(languages, cx),
@@ -3863,7 +3871,8 @@ impl LspStore {
             language_server_statuses: Default::default(),
             nonce: StdRng::from_entropy().r#gen(),
             diagnostic_summaries: HashMap::default(),
-            lsp_data: HashMap::default(),
+            lsp_document_colors: HashMap::default(),
+            lsp_code_lens: HashMap::default(),
             active_entry: None,
             toolchain_store,
             _maintain_workspace_config,
@@ -4164,7 +4173,8 @@ impl LspStore {
                 *refcount
             };
             if refcount == 0 {
-                lsp_store.lsp_data.remove(&buffer_id);
+                lsp_store.lsp_document_colors.remove(&buffer_id);
+                lsp_store.lsp_code_lens.remove(&buffer_id);
                 let local = lsp_store.as_local_mut().unwrap();
                 local.registered_buffers.remove(&buffer_id);
                 local.buffers_opened_in_servers.remove(&buffer_id);
@@ -5704,69 +5714,168 @@ impl LspStore {
         }
     }

-    pub fn code_lens(
+    pub fn code_lens_actions(
         &mut self,
-        buffer_handle: &Entity<Buffer>,
+        buffer: &Entity<Buffer>,
         cx: &mut Context<Self>,
-    ) -> Task<Result<Vec<CodeAction>>> {
+    ) -> CodeLensTask {
let version_queried_for = buffer.read(cx).version();
let buffer_id = buffer.read(cx).remote_id();
if let Some(cached_data) = self.lsp_code_lens.get(&buffer_id) {
if !version_queried_for.changed_since(&cached_data.lens_for_version) {
let has_different_servers = self.as_local().is_some_and(|local| {
local
.buffers_opened_in_servers
.get(&buffer_id)
.cloned()
.unwrap_or_default()
!= cached_data.lens.keys().copied().collect()
});
if !has_different_servers {
return Task::ready(Ok(cached_data.lens.values().flatten().cloned().collect()))
.shared();
}
}
}
let lsp_data = self.lsp_code_lens.entry(buffer_id).or_default();
if let Some((updating_for, running_update)) = &lsp_data.update {
if !version_queried_for.changed_since(&updating_for) {
return running_update.clone();
}
}
let buffer = buffer.clone();
let query_version_queried_for = version_queried_for.clone();
let new_task = cx
.spawn(async move |lsp_store, cx| {
cx.background_executor()
.timer(Duration::from_millis(30))
.await;
let fetched_lens = lsp_store
.update(cx, |lsp_store, cx| lsp_store.fetch_code_lens(&buffer, cx))
.map_err(Arc::new)?
.await
.context("fetching code lens")
.map_err(Arc::new);
let fetched_lens = match fetched_lens {
Ok(fetched_lens) => fetched_lens,
Err(e) => {
lsp_store
.update(cx, |lsp_store, _| {
lsp_store.lsp_code_lens.entry(buffer_id).or_default().update = None;
})
.ok();
return Err(e);
}
};
lsp_store
.update(cx, |lsp_store, _| {
let lsp_data = lsp_store.lsp_code_lens.entry(buffer_id).or_default();
if lsp_data.lens_for_version == query_version_queried_for {
lsp_data.lens.extend(fetched_lens.clone());
} else if !lsp_data
.lens_for_version
.changed_since(&query_version_queried_for)
{
lsp_data.lens_for_version = query_version_queried_for;
lsp_data.lens = fetched_lens.clone();
}
lsp_data.update = None;
lsp_data.lens.values().flatten().cloned().collect()
})
.map_err(Arc::new)
})
.shared();
lsp_data.update = Some((version_queried_for, new_task.clone()));
new_task
}
fn fetch_code_lens(
&mut self,
buffer: &Entity<Buffer>,
cx: &mut Context<Self>,
) -> Task<Result<HashMap<LanguageServerId, Vec<CodeAction>>>> {
         if let Some((upstream_client, project_id)) = self.upstream_client() {
             let request_task = upstream_client.request(proto::MultiLspQuery {
-                buffer_id: buffer_handle.read(cx).remote_id().into(),
-                version: serialize_version(&buffer_handle.read(cx).version()),
+                buffer_id: buffer.read(cx).remote_id().into(),
+                version: serialize_version(&buffer.read(cx).version()),
                 project_id,
                 strategy: Some(proto::multi_lsp_query::Strategy::All(
                     proto::AllLanguageServers {},
                 )),
                 request: Some(proto::multi_lsp_query::Request::GetCodeLens(
-                    GetCodeLens.to_proto(project_id, buffer_handle.read(cx)),
+                    GetCodeLens.to_proto(project_id, buffer.read(cx)),
                 )),
             });
-            let buffer = buffer_handle.clone();
-            cx.spawn(async move |weak_project, cx| {
-                let Some(project) = weak_project.upgrade() else {
-                    return Ok(Vec::new());
+            let buffer = buffer.clone();
+            cx.spawn(async move |weak_lsp_store, cx| {
+                let Some(lsp_store) = weak_lsp_store.upgrade() else {
+                    return Ok(HashMap::default());
                 };
                 let responses = request_task.await?.responses;
-                let code_lens = join_all(
+                let code_lens_actions = join_all(
                     responses
                         .into_iter()
-                        .filter_map(|lsp_response| match lsp_response.response? {
-                            proto::lsp_response::Response::GetCodeLensResponse(response) => {
-                                Some(response)
-                            }
-                            unexpected => {
-                                debug_panic!("Unexpected response: {unexpected:?}");
-                                None
-                            }
+                        .filter_map(|lsp_response| {
+                            let response = match lsp_response.response? {
+                                proto::lsp_response::Response::GetCodeLensResponse(response) => {
+                                    Some(response)
+                                }
+                                unexpected => {
+                                    debug_panic!("Unexpected response: {unexpected:?}");
+                                    None
+                                }
+                            }?;
+                            let server_id = LanguageServerId::from_proto(lsp_response.server_id);
+                            Some((server_id, response))
                         })
-                        .map(|code_lens_response| {
-                            GetCodeLens.response_from_proto(
-                                code_lens_response,
-                                project.clone(),
-                                buffer.clone(),
-                                cx.clone(),
-                            )
+                        .map(|(server_id, code_lens_response)| {
+                            let lsp_store = lsp_store.clone();
+                            let buffer = buffer.clone();
+                            let cx = cx.clone();
+                            async move {
+                                (
+                                    server_id,
+                                    GetCodeLens
+                                        .response_from_proto(
+                                            code_lens_response,
+                                            lsp_store,
+                                            buffer,
+                                            cx,
+                                        )
+                                        .await,
+                                )
+                            }
                         }),
                 )
                 .await;

-                Ok(code_lens
+                let mut has_errors = false;
+                let code_lens_actions = code_lens_actions
                     .into_iter()
-                    .collect::<Result<Vec<Vec<_>>>>()?
-                    .into_iter()
-                    .flatten()
-                    .collect())
+                    .filter_map(|(server_id, code_lens)| match code_lens {
+                        Ok(code_lens) => Some((server_id, code_lens)),
+                        Err(e) => {
+                            has_errors = true;
+                            log::error!("{e:#}");
+                            None
+                        }
+                    })
+                    .collect::<HashMap<_, _>>();
+                anyhow::ensure!(
+                    !has_errors || !code_lens_actions.is_empty(),
+                    "Failed to fetch code lens"
+                );
+                Ok(code_lens_actions)
             })
         } else {
-            let code_lens_task =
-                self.request_multiple_lsp_locally(buffer_handle, None::<usize>, GetCodeLens, cx);
-            cx.spawn(async move |_, _| {
-                Ok(code_lens_task
-                    .await
-                    .into_iter()
-                    .flat_map(|(_, code_lens)| code_lens)
-                    .collect())
-            })
+            let code_lens_actions_task =
+                self.request_multiple_lsp_locally(buffer, None::<usize>, GetCodeLens, cx);
+            cx.background_spawn(
+                async move { Ok(code_lens_actions_task.await.into_iter().collect()) },
+            )
         }
     }
@ -6599,7 +6708,7 @@ impl LspStore {
pub fn document_colors( pub fn document_colors(
&mut self, &mut self,
fetch_strategy: ColorFetchStrategy, fetch_strategy: LspFetchStrategy,
buffer: Entity<Buffer>, buffer: Entity<Buffer>,
cx: &mut Context<Self>, cx: &mut Context<Self>,
) -> Option<DocumentColorTask> { ) -> Option<DocumentColorTask> {
@ -6607,11 +6716,11 @@ impl LspStore {
let buffer_id = buffer.read(cx).remote_id(); let buffer_id = buffer.read(cx).remote_id();
match fetch_strategy { match fetch_strategy {
ColorFetchStrategy::IgnoreCache => {} LspFetchStrategy::IgnoreCache => {}
ColorFetchStrategy::UseCache { LspFetchStrategy::UseCache {
known_cache_version, known_cache_version,
} => { } => {
if let Some(cached_data) = self.lsp_data.get(&buffer_id) { if let Some(cached_data) = self.lsp_document_colors.get(&buffer_id) {
if !version_queried_for.changed_since(&cached_data.colors_for_version) { if !version_queried_for.changed_since(&cached_data.colors_for_version) {
let has_different_servers = self.as_local().is_some_and(|local| { let has_different_servers = self.as_local().is_some_and(|local| {
local local
@ -6644,7 +6753,7 @@ impl LspStore {
} }
} }
let lsp_data = self.lsp_data.entry(buffer_id).or_default(); let lsp_data = self.lsp_document_colors.entry(buffer_id).or_default();
if let Some((updating_for, running_update)) = &lsp_data.colors_update { if let Some((updating_for, running_update)) = &lsp_data.colors_update {
if !version_queried_for.changed_since(&updating_for) { if !version_queried_for.changed_since(&updating_for) {
return Some(running_update.clone()); return Some(running_update.clone());
@ -6658,14 +6767,14 @@ impl LspStore {
.await; .await;
let fetched_colors = lsp_store let fetched_colors = lsp_store
.update(cx, |lsp_store, cx| { .update(cx, |lsp_store, cx| {
lsp_store.fetch_document_colors_for_buffer(buffer.clone(), cx) lsp_store.fetch_document_colors_for_buffer(&buffer, cx)
})? })?
.await .await
.context("fetching document colors") .context("fetching document colors")
.map_err(Arc::new); .map_err(Arc::new);
let fetched_colors = match fetched_colors { let fetched_colors = match fetched_colors {
Ok(fetched_colors) => { Ok(fetched_colors) => {
if fetch_strategy != ColorFetchStrategy::IgnoreCache if fetch_strategy != LspFetchStrategy::IgnoreCache
&& Some(true) && Some(true)
== buffer == buffer
.update(cx, |buffer, _| { .update(cx, |buffer, _| {
@ -6681,7 +6790,7 @@ impl LspStore {
lsp_store lsp_store
.update(cx, |lsp_store, _| { .update(cx, |lsp_store, _| {
lsp_store lsp_store
.lsp_data .lsp_document_colors
.entry(buffer_id) .entry(buffer_id)
.or_default() .or_default()
.colors_update = None; .colors_update = None;
@ -6693,7 +6802,7 @@ impl LspStore {
lsp_store lsp_store
.update(cx, |lsp_store, _| { .update(cx, |lsp_store, _| {
let lsp_data = lsp_store.lsp_data.entry(buffer_id).or_default(); let lsp_data = lsp_store.lsp_document_colors.entry(buffer_id).or_default();
if lsp_data.colors_for_version == query_version_queried_for { if lsp_data.colors_for_version == query_version_queried_for {
lsp_data.colors.extend(fetched_colors.clone()); lsp_data.colors.extend(fetched_colors.clone());
@ -6727,7 +6836,7 @@ impl LspStore {
fn fetch_document_colors_for_buffer( fn fetch_document_colors_for_buffer(
&mut self, &mut self,
buffer: Entity<Buffer>, buffer: &Entity<Buffer>,
cx: &mut Context<Self>, cx: &mut Context<Self>,
) -> Task<anyhow::Result<HashMap<LanguageServerId, HashSet<DocumentColor>>>> { ) -> Task<anyhow::Result<HashMap<LanguageServerId, HashSet<DocumentColor>>>> {
if let Some((client, project_id)) = self.upstream_client() { if let Some((client, project_id)) = self.upstream_client() {
@ -6742,6 +6851,7 @@ impl LspStore {
GetDocumentColor {}.to_proto(project_id, buffer.read(cx)), GetDocumentColor {}.to_proto(project_id, buffer.read(cx)),
)), )),
}); });
let buffer = buffer.clone();
cx.spawn(async move |project, cx| { cx.spawn(async move |project, cx| {
let Some(project) = project.upgrade() else { let Some(project) = project.upgrade() else {
return Ok(HashMap::default()); return Ok(HashMap::default());
@ -6787,7 +6897,7 @@ impl LspStore {
}) })
} else { } else {
let document_colors_task = let document_colors_task =
self.request_multiple_lsp_locally(&buffer, None::<usize>, GetDocumentColor, cx); self.request_multiple_lsp_locally(buffer, None::<usize>, GetDocumentColor, cx);
cx.spawn(async move |_, _| { cx.spawn(async move |_, _| {
Ok(document_colors_task Ok(document_colors_task
.await .await
@ -7327,21 +7437,23 @@ impl LspStore {
} }
pub(crate) async fn refresh_workspace_configurations( pub(crate) async fn refresh_workspace_configurations(
this: &WeakEntity<Self>, lsp_store: &WeakEntity<Self>,
fs: Arc<dyn Fs>, fs: Arc<dyn Fs>,
cx: &mut AsyncApp, cx: &mut AsyncApp,
) { ) {
maybe!(async move { maybe!(async move {
let servers = this let mut refreshed_servers = HashSet::default();
.update(cx, |this, cx| { let servers = lsp_store
let Some(local) = this.as_local() else { .update(cx, |lsp_store, cx| {
let toolchain_store = lsp_store.toolchain_store(cx);
let Some(local) = lsp_store.as_local() else {
return Vec::default(); return Vec::default();
}; };
local local
.language_server_ids .language_server_ids
.iter() .iter()
.flat_map(|((worktree_id, _), server_ids)| { .flat_map(|((worktree_id, _), server_ids)| {
let worktree = this let worktree = lsp_store
.worktree_store .worktree_store
.read(cx) .read(cx)
.worktree_for_id(*worktree_id, cx); .worktree_for_id(*worktree_id, cx);
@ -7357,43 +7469,54 @@ impl LspStore {
) )
}); });
server_ids.iter().filter_map(move |server_id| { let fs = fs.clone();
let toolchain_store = toolchain_store.clone();
server_ids.iter().filter_map(|server_id| {
let delegate = delegate.clone()? as Arc<dyn LspAdapterDelegate>;
let states = local.language_servers.get(server_id)?; let states = local.language_servers.get(server_id)?;
match states { match states {
LanguageServerState::Starting { .. } => None, LanguageServerState::Starting { .. } => None,
LanguageServerState::Running { LanguageServerState::Running {
adapter, server, .. adapter, server, ..
} => Some(( } => {
adapter.adapter.clone(), let fs = fs.clone();
server.clone(), let toolchain_store = toolchain_store.clone();
delegate.clone()? as Arc<dyn LspAdapterDelegate>, let adapter = adapter.clone();
)), let server = server.clone();
refreshed_servers.insert(server.name());
Some(cx.spawn(async move |_, cx| {
let settings =
LocalLspStore::workspace_configuration_for_adapter(
adapter.adapter.clone(),
fs.as_ref(),
&delegate,
toolchain_store,
cx,
)
.await
.ok()?;
server
.notify::<lsp::notification::DidChangeConfiguration>(
&lsp::DidChangeConfigurationParams { settings },
)
.ok()?;
Some(())
}))
}
} }
}) }).collect::<Vec<_>>()
}) })
.collect::<Vec<_>>() .collect::<Vec<_>>()
}) })
.ok()?; .ok()?;
let toolchain_store = this.update(cx, |this, cx| this.toolchain_store(cx)).ok()?; log::info!("Refreshing workspace configurations for servers {refreshed_servers:?}");
for (adapter, server, delegate) in servers { // TODO this asynchronous job runs concurrently with extension (de)registration and may take enough time for a certain extension
let settings = LocalLspStore::workspace_configuration_for_adapter( // to stop and unregister its language server wrapper.
adapter, // This is racy : an extension might have already removed all `local.language_servers` state, but here we `.clone()` and hold onto it anyway.
fs.as_ref(), // This now causes errors in the logs, we should find a way to remove such servers from the processing everywhere.
&delegate, let _: Vec<Option<()>> = join_all(servers).await;
toolchain_store.clone(),
cx,
)
.await
.ok()?;
server
.notify::<lsp::notification::DidChangeConfiguration>(
&lsp::DidChangeConfigurationParams { settings },
)
.ok();
}
Some(()) Some(())
}) })
.await; .await;
@ -11280,9 +11403,12 @@ impl LspStore {
} }
fn cleanup_lsp_data(&mut self, for_server: LanguageServerId) { fn cleanup_lsp_data(&mut self, for_server: LanguageServerId) {
for buffer_lsp_data in self.lsp_data.values_mut() { for buffer_colors in self.lsp_document_colors.values_mut() {
buffer_lsp_data.colors.remove(&for_server); buffer_colors.colors.remove(&for_server);
buffer_lsp_data.cache_version += 1; buffer_colors.cache_version += 1;
}
for buffer_lens in self.lsp_code_lens.values_mut() {
buffer_lens.lens.remove(&for_server);
} }
if let Some(local) = self.as_local_mut() { if let Some(local) = self.as_local_mut() {
local.buffer_pull_diagnostics_result_ids.remove(&for_server); local.buffer_pull_diagnostics_result_ids.remove(&for_server);

View file

@ -0,0 +1,101 @@
use anyhow::Context as _;
use collections::HashMap;
use gpui::WeakEntity;
use lsp::LanguageServer;
use crate::LspStore;
/// https://github.com/Microsoft/vscode/blob/main/extensions/json-language-features/server/README.md#schema-content-request
///
/// Represents a "JSON language server-specific, non-standardized, extension to the LSP" with which the vscode-json-language-server
/// can request the contents of a schema that is associated with a uri scheme it does not support.
/// In our case, we provide the uris for actions on server startup under the `zed://schemas/action/{normalize_action_name}` scheme.
/// We can then respond to this request with the schema content on demand, thereby greatly reducing the total size of the JSON we send to the server on startup
struct SchemaContentRequest {}
impl lsp::request::Request for SchemaContentRequest {
type Params = Vec<String>;
type Result = String;
const METHOD: &'static str = "vscode/content";
}
pub fn register_requests(_lsp_store: WeakEntity<LspStore>, language_server: &LanguageServer) {
language_server
.on_request::<SchemaContentRequest, _, _>(|params, cx| {
// PERF: Use a cache (`OnceLock`?) to avoid recomputing the action schemas
let mut generator = settings::KeymapFile::action_schema_generator();
let all_schemas = cx.update(|cx| HashMap::from_iter(cx.action_schemas(&mut generator)));
async move {
let all_schemas = all_schemas?;
let Some(uri) = params.get(0) else {
anyhow::bail!("No URI");
};
let normalized_action_name = uri
.strip_prefix("zed://schemas/action/")
.context("Invalid URI")?;
let action_name = denormalize_action_name(normalized_action_name);
let schema = root_schema_from_action_schema(
all_schemas
.get(action_name.as_str())
.and_then(Option::as_ref),
&mut generator,
)
.to_value();
serde_json::to_string(&schema).context("Failed to serialize schema")
}
})
.detach();
}
pub fn normalize_action_name(action_name: &str) -> String {
action_name.replace("::", "__")
}
pub fn denormalize_action_name(action_name: &str) -> String {
action_name.replace("__", "::")
}
pub fn normalized_action_file_name(action_name: &str) -> String {
normalized_action_name_to_file_name(normalize_action_name(action_name))
}
pub fn normalized_action_name_to_file_name(mut normalized_action_name: String) -> String {
normalized_action_name.push_str(".json");
normalized_action_name
}
pub fn url_schema_for_action(action_name: &str) -> serde_json::Value {
let normalized_name = normalize_action_name(action_name);
let file_name = normalized_action_name_to_file_name(normalized_name.clone());
serde_json::json!({
"fileMatch": [file_name],
"url": format!("zed://schemas/action/{}", normalized_name)
})
}
fn root_schema_from_action_schema(
action_schema: Option<&schemars::Schema>,
generator: &mut schemars::SchemaGenerator,
) -> schemars::Schema {
let Some(action_schema) = action_schema else {
return schemars::json_schema!(false);
};
let meta_schema = generator
.settings()
.meta_schema
.as_ref()
.expect("meta_schema should be present in schemars settings")
.to_string();
let defs = generator.definitions();
let mut schema = schemars::json_schema!({
"$schema": meta_schema,
"allowTrailingCommas": true,
"$defs": defs,
});
schema
.ensure_object()
.extend(std::mem::take(action_schema.clone().ensure_object()));
schema
}
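To make the URI scheme above concrete: for an action such as editor::ConvertToUpperCase, normalize_action_name yields editor__ConvertToUpperCase, and url_schema_for_action points the JSON language server at zed://schemas/action/editor__ConvertToUpperCase. A small illustration that would only compile alongside the module above:

// Illustrative usage of the helpers above; the expected values follow directly
// from normalize_action_name and url_schema_for_action.
fn main() {
    let action = "editor::ConvertToUpperCase";
    assert_eq!(normalize_action_name(action), "editor__ConvertToUpperCase");
    assert_eq!(
        url_schema_for_action(action),
        serde_json::json!({
            "fileMatch": ["editor__ConvertToUpperCase.json"],
            "url": "zed://schemas/action/editor__ConvertToUpperCase"
        })
    );
}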

View file

@@ -113,7 +113,7 @@ use std::{
 use task_store::TaskStore;
 use terminals::Terminals;
-use text::{Anchor, BufferId, Point};
+use text::{Anchor, BufferId, OffsetRangeExt, Point};
 use toolchain_store::EmptyToolchainStore;
 use util::{
     ResultExt as _,
@@ -590,7 +590,7 @@ pub(crate) struct CoreCompletion {
 }

 /// A code action provided by a language server.
-#[derive(Clone, Debug)]
+#[derive(Clone, Debug, PartialEq)]
 pub struct CodeAction {
     /// The id of the language server that produced this code action.
     pub server_id: LanguageServerId,
@@ -604,7 +604,7 @@ pub struct CodeAction {
 }

 /// An action sent back by a language server.
-#[derive(Clone, Debug)]
+#[derive(Clone, Debug, PartialEq)]
 pub enum LspAction {
     /// An action with the full data, may have a command or may not.
     /// May require resolving.
@@ -3607,20 +3607,29 @@ impl Project {
         })
     }

-    pub fn code_lens<T: Clone + ToOffset>(
+    pub fn code_lens_actions<T: Clone + ToOffset>(
         &mut self,
-        buffer_handle: &Entity<Buffer>,
+        buffer: &Entity<Buffer>,
         range: Range<T>,
         cx: &mut Context<Self>,
     ) -> Task<Result<Vec<CodeAction>>> {
-        let snapshot = buffer_handle.read(cx).snapshot();
-        let range = snapshot.anchor_before(range.start)..snapshot.anchor_after(range.end);
+        let snapshot = buffer.read(cx).snapshot();
+        let range = range.clone().to_owned().to_point(&snapshot);
+        let range_start = snapshot.anchor_before(range.start);
+        let range_end = if range.start == range.end {
+            range_start
+        } else {
+            snapshot.anchor_after(range.end)
+        };
+        let range = range_start..range_end;
         let code_lens_actions = self
             .lsp_store
-            .update(cx, |lsp_store, cx| lsp_store.code_lens(buffer_handle, cx));
+            .update(cx, |lsp_store, cx| lsp_store.code_lens_actions(buffer, cx));
         cx.background_spawn(async move {
-            let mut code_lens_actions = code_lens_actions.await?;
+            let mut code_lens_actions = code_lens_actions
+                .await
+                .map_err(|e| anyhow!("code lens fetch failed: {e:#}"))?;
             code_lens_actions.retain(|code_lens_action| {
                 range
                     .start

View file

@@ -384,12 +384,20 @@ struct ItemColors {
     focused: Hsla,
 }

-fn get_item_color(cx: &App) -> ItemColors {
+fn get_item_color(is_sticky: bool, cx: &App) -> ItemColors {
     let colors = cx.theme().colors();

     ItemColors {
-        default: colors.panel_background,
-        hover: colors.element_hover,
+        default: if is_sticky {
+            colors.panel_overlay_background
+        } else {
+            colors.panel_background
+        },
+        hover: if is_sticky {
+            colors.panel_overlay_hover
+        } else {
+            colors.element_hover
+        },
         marked: colors.element_selected,
         focused: colors.panel_focused_border,
         drag_over: colors.drop_target_background,
@@ -3903,7 +3911,7 @@ impl ProjectPanel {
         let filename_text_color = details.filename_text_color;
         let diagnostic_severity = details.diagnostic_severity;
-        let item_colors = get_item_color(cx);
+        let item_colors = get_item_color(is_sticky, cx);

         let canonical_path = details
             .canonical_path

View file

@@ -20,6 +20,7 @@ static REDACT_REGEX: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"key=[^&]+")
 pub struct ReqwestClient {
     client: reqwest::Client,
     proxy: Option<Url>,
+    user_agent: Option<HeaderValue>,
     handle: tokio::runtime::Handle,
 }
@@ -44,9 +45,11 @@ impl ReqwestClient {
         Ok(client.into())
     }

-    pub fn proxy_and_user_agent(proxy: Option<Url>, agent: &str) -> anyhow::Result<Self> {
+    pub fn proxy_and_user_agent(proxy: Option<Url>, user_agent: &str) -> anyhow::Result<Self> {
+        let user_agent = HeaderValue::from_str(user_agent)?;
         let mut map = HeaderMap::new();
-        map.insert(http::header::USER_AGENT, HeaderValue::from_str(agent)?);
+        map.insert(http::header::USER_AGENT, user_agent.clone());
         let mut client = Self::builder().default_headers(map);
         let client_has_proxy;
@@ -73,6 +76,7 @@ impl ReqwestClient {
             .build()?;
         let mut client: ReqwestClient = client.into();
         client.proxy = client_has_proxy.then_some(proxy).flatten();
+        client.user_agent = Some(user_agent);
         Ok(client)
     }
 }
@@ -96,6 +100,7 @@ impl From<reqwest::Client> for ReqwestClient {
             client,
             handle,
             proxy: None,
+            user_agent: None,
         }
     }
 }
@@ -216,6 +221,10 @@ impl http_client::HttpClient for ReqwestClient {
         type_name::<Self>()
     }

+    fn user_agent(&self) -> Option<&HeaderValue> {
+        self.user_agent.as_ref()
+    }
+
     fn send(
         &self,
         req: http::Request<http_client::AsyncBody>,

View file

@@ -847,6 +847,7 @@ impl KeymapFile {
     }
 }

+#[derive(Clone)]
 pub enum KeybindUpdateOperation<'a> {
     Replace {
         /// Describes the keybind to create
@@ -865,6 +866,47 @@ pub enum KeybindUpdateOperation<'a> {
     },
 }
impl KeybindUpdateOperation<'_> {
pub fn generate_telemetry(
&self,
) -> (
// The keybind that is created
String,
// The keybinding that was removed
String,
// The source of the keybinding
String,
) {
let (new_binding, removed_binding, source) = match &self {
KeybindUpdateOperation::Replace {
source,
target,
target_keybind_source,
} => (Some(source), Some(target), Some(*target_keybind_source)),
KeybindUpdateOperation::Add { source, .. } => (Some(source), None, None),
KeybindUpdateOperation::Remove {
target,
target_keybind_source,
} => (None, Some(target), Some(*target_keybind_source)),
};
let new_binding = new_binding
.map(KeybindUpdateTarget::telemetry_string)
.unwrap_or("null".to_owned());
let removed_binding = removed_binding
.map(KeybindUpdateTarget::telemetry_string)
.unwrap_or("null".to_owned());
let source = source
.as_ref()
.map(KeybindSource::name)
.map(ToOwned::to_owned)
.unwrap_or("null".to_owned());
(new_binding, removed_binding, source)
}
}
 impl<'a> KeybindUpdateOperation<'a> {
     pub fn add(source: KeybindUpdateTarget<'a>) -> Self {
         Self::Add { source, from: None }
@@ -905,21 +947,33 @@ impl<'a> KeybindUpdateTarget<'a> {
         keystrokes.pop();
         keystrokes
     }
fn telemetry_string(&self) -> String {
format!(
"action_name: {}, context: {}, action_arguments: {}, keystrokes: {}",
self.action_name,
self.context.unwrap_or("global"),
self.action_arguments.unwrap_or("none"),
self.keystrokes_unparsed()
)
}
} }
#[derive(Clone, Copy, PartialEq, Eq)] #[derive(Clone, Copy, Default, PartialEq, Eq, PartialOrd, Ord)]
pub enum KeybindSource { pub enum KeybindSource {
User, User,
Default,
Base,
Vim, Vim,
Base,
#[default]
Default,
Unknown,
} }
impl KeybindSource { impl KeybindSource {
const BASE: KeyBindingMetaIndex = KeyBindingMetaIndex(0); const BASE: KeyBindingMetaIndex = KeyBindingMetaIndex(KeybindSource::Base as u32);
const DEFAULT: KeyBindingMetaIndex = KeyBindingMetaIndex(1); const DEFAULT: KeyBindingMetaIndex = KeyBindingMetaIndex(KeybindSource::Default as u32);
const VIM: KeyBindingMetaIndex = KeyBindingMetaIndex(2); const VIM: KeyBindingMetaIndex = KeyBindingMetaIndex(KeybindSource::Vim as u32);
const USER: KeyBindingMetaIndex = KeyBindingMetaIndex(3); const USER: KeyBindingMetaIndex = KeyBindingMetaIndex(KeybindSource::User as u32);
pub fn name(&self) -> &'static str { pub fn name(&self) -> &'static str {
match self { match self {
@ -927,6 +981,7 @@ impl KeybindSource {
KeybindSource::Default => "Default", KeybindSource::Default => "Default",
KeybindSource::Base => "Base", KeybindSource::Base => "Base",
KeybindSource::Vim => "Vim", KeybindSource::Vim => "Vim",
KeybindSource::Unknown => "Unknown",
} }
} }
@ -936,6 +991,7 @@ impl KeybindSource {
KeybindSource::Default => Self::DEFAULT, KeybindSource::Default => Self::DEFAULT,
KeybindSource::Base => Self::BASE, KeybindSource::Base => Self::BASE,
KeybindSource::Vim => Self::VIM, KeybindSource::Vim => Self::VIM,
KeybindSource::Unknown => KeyBindingMetaIndex(*self as u32),
} }
} }
@ -945,7 +1001,7 @@ impl KeybindSource {
Self::BASE => KeybindSource::Base, Self::BASE => KeybindSource::Base,
Self::DEFAULT => KeybindSource::Default, Self::DEFAULT => KeybindSource::Default,
Self::VIM => KeybindSource::Vim, Self::VIM => KeybindSource::Vim,
_ => unreachable!(), _ => KeybindSource::Unknown,
} }
} }
} }
@ -958,7 +1014,7 @@ impl From<KeyBindingMetaIndex> for KeybindSource {
impl From<KeybindSource> for KeyBindingMetaIndex { impl From<KeybindSource> for KeyBindingMetaIndex {
fn from(source: KeybindSource) -> Self { fn from(source: KeybindSource) -> Self {
return source.meta(); source.meta()
} }
} }
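// Illustrative sketch, not part of this change: the meta-index round trip, and the
// new Unknown fallback that replaces the old unreachable!() arm.
#[test]
fn keybind_source_meta_round_trip() {
    let idx: KeyBindingMetaIndex = KeybindSource::Vim.into();
    assert!(KeybindSource::from(idx) == KeybindSource::Vim);
    // An index that no variant claims no longer panics:
    assert!(KeybindSource::from(KeyBindingMetaIndex(42)) == KeybindSource::Unknown);
}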
@ -1567,4 +1623,44 @@ mod tests {
.unindent(), .unindent(),
); );
} }
#[test]
fn test_keymap_remove() {
zlog::init_test();
check_keymap_update(
r#"
[
{
"context": "Editor",
"bindings": {
"cmd-k cmd-u": "editor::ConvertToUpperCase",
"cmd-k cmd-l": "editor::ConvertToLowerCase",
"cmd-[": "pane::GoBack",
}
},
]
"#,
KeybindUpdateOperation::Remove {
target: KeybindUpdateTarget {
context: Some("Editor"),
keystrokes: &parse_keystrokes("cmd-k cmd-l"),
action_name: "editor::ConvertToLowerCase",
action_arguments: None,
},
target_keybind_source: KeybindSource::User,
},
r#"
[
{
"context": "Editor",
"bindings": {
"cmd-k cmd-u": "editor::ConvertToUpperCase",
"cmd-[": "pane::GoBack",
}
},
]
"#,
);
}
} }

View file

@ -190,6 +190,7 @@ fn replace_value_in_json_text(
} }
} }
let mut removed_comma = false;
// Look backward for a preceding comma first // Look backward for a preceding comma first
let preceding_text = text.get(0..removal_start).unwrap_or(""); let preceding_text = text.get(0..removal_start).unwrap_or("");
if let Some(comma_pos) = preceding_text.rfind(',') { if let Some(comma_pos) = preceding_text.rfind(',') {
@ -197,10 +198,12 @@ fn replace_value_in_json_text(
let between_comma_and_key = text.get(comma_pos + 1..removal_start).unwrap_or(""); let between_comma_and_key = text.get(comma_pos + 1..removal_start).unwrap_or("");
if between_comma_and_key.trim().is_empty() { if between_comma_and_key.trim().is_empty() {
removal_start = comma_pos; removal_start = comma_pos;
removed_comma = true;
} }
} }
if let Some(remaining_text) = text.get(existing_value_range.end..)
if let Some(remaining_text) = text.get(existing_value_range.end..) { && !removed_comma
{
let mut chars = remaining_text.char_indices(); let mut chars = remaining_text.char_indices();
while let Some((offset, ch)) = chars.next() { while let Some((offset, ch)) = chars.next() {
if ch == ',' { if ch == ',' {

View file

@ -23,6 +23,7 @@ feature_flags.workspace = true
fs.workspace = true fs.workspace = true
fuzzy.workspace = true fuzzy.workspace = true
gpui.workspace = true gpui.workspace = true
itertools.workspace = true
language.workspace = true language.workspace = true
log.workspace = true log.workspace = true
menu.workspace = true menu.workspace = true
@ -34,6 +35,8 @@ search.workspace = true
serde.workspace = true serde.workspace = true
serde_json.workspace = true serde_json.workspace = true
settings.workspace = true settings.workspace = true
telemetry.workspace = true
tempfile.workspace = true
theme.workspace = true theme.workspace = true
tree-sitter-json.workspace = true tree-sitter-json.workspace = true
tree-sitter-rust.workspace = true tree-sitter-rust.workspace = true

File diff suppressed because it is too large.


View file

@ -2,19 +2,24 @@ use std::{ops::Range, rc::Rc, time::Duration};
use editor::{EditorSettings, ShowScrollbar, scroll::ScrollbarAutoHide}; use editor::{EditorSettings, ShowScrollbar, scroll::ScrollbarAutoHide};
use gpui::{ use gpui::{
AppContext, Axis, Context, Entity, FocusHandle, Length, ListHorizontalSizingBehavior, AbsoluteLength, AppContext, Axis, Context, DefiniteLength, DragMoveEvent, Entity, FocusHandle,
ListSizingBehavior, MouseButton, Point, Task, UniformListScrollHandle, WeakEntity, Length, ListHorizontalSizingBehavior, ListSizingBehavior, MouseButton, Point, Stateful, Task,
transparent_black, uniform_list, UniformListScrollHandle, WeakEntity, transparent_black, uniform_list,
}; };
use itertools::intersperse_with;
use settings::Settings as _; use settings::Settings as _;
use ui::{ use ui::{
ActiveTheme as _, AnyElement, App, Button, ButtonCommon as _, ButtonStyle, Color, Component, ActiveTheme as _, AnyElement, App, Button, ButtonCommon as _, ButtonStyle, Color, Component,
ComponentScope, Div, ElementId, FixedWidth as _, FluentBuilder as _, Indicator, ComponentScope, Div, ElementId, FixedWidth as _, FluentBuilder as _, Indicator,
InteractiveElement as _, IntoElement, ParentElement, Pixels, RegisterComponent, RenderOnce, InteractiveElement, IntoElement, ParentElement, Pixels, RegisterComponent, RenderOnce,
Scrollbar, ScrollbarState, StatefulInteractiveElement as _, Styled, StyledExt as _, Scrollbar, ScrollbarState, StatefulInteractiveElement, Styled, StyledExt as _,
StyledTypography, Window, div, example_group_with_title, h_flex, px, single_example, v_flex, StyledTypography, Window, div, example_group_with_title, h_flex, px, single_example, v_flex,
}; };
#[derive(Debug)]
struct DraggedColumn(usize);
struct UniformListData<const COLS: usize> { struct UniformListData<const COLS: usize> {
render_item_fn: Box<dyn Fn(Range<usize>, &mut Window, &mut App) -> Vec<[AnyElement; COLS]>>, render_item_fn: Box<dyn Fn(Range<usize>, &mut Window, &mut App) -> Vec<[AnyElement; COLS]>>,
element_id: ElementId, element_id: ElementId,
@ -40,6 +45,10 @@ impl<const COLS: usize> TableContents<COLS> {
TableContents::UniformList(data) => data.row_count, TableContents::UniformList(data) => data.row_count,
} }
} }
fn is_empty(&self) -> bool {
self.len() == 0
}
} }
pub struct TableInteractionState { pub struct TableInteractionState {
@ -187,6 +196,87 @@ impl TableInteractionState {
} }
} }
fn render_resize_handles<const COLS: usize>(
&self,
column_widths: &[Length; COLS],
resizable_columns: &[ResizeBehavior; COLS],
initial_sizes: [DefiniteLength; COLS],
columns: Option<Entity<ColumnWidths<COLS>>>,
window: &mut Window,
cx: &mut App,
) -> AnyElement {
let spacers = column_widths
.iter()
.map(|width| base_cell_style(Some(*width)).into_any_element());
let mut column_ix = 0;
let resizable_columns_slice = *resizable_columns;
let mut resizable_columns = resizable_columns.into_iter();
let dividers = intersperse_with(spacers, || {
window.with_id(column_ix, |window| {
let mut resize_divider = div()
// This is required because this is evaluated at a different time than the use_state call above
.id(column_ix)
.relative()
.top_0()
.w_0p5()
.h_full()
.bg(cx.theme().colors().border.opacity(0.5));
let mut resize_handle = div()
.id("column-resize-handle")
.absolute()
.left_neg_0p5()
.w(px(5.0))
.h_full();
if resizable_columns
.next()
.is_some_and(ResizeBehavior::is_resizable)
{
let hovered = window.use_state(cx, |_window, _cx| false);
resize_divider = resize_divider.when(*hovered.read(cx), |div| {
div.bg(cx.theme().colors().border_focused)
});
resize_handle = resize_handle
.on_hover(move |&was_hovered, _, cx| hovered.write(cx, was_hovered))
.cursor_col_resize()
.when_some(columns.clone(), |this, columns| {
this.on_click(move |event, window, cx| {
if event.down.click_count >= 2 {
columns.update(cx, |columns, _| {
columns.on_double_click(
column_ix,
&initial_sizes,
&resizable_columns_slice,
window,
);
})
}
cx.stop_propagation();
})
})
.on_drag(DraggedColumn(column_ix), |_, _offset, _window, cx| {
cx.new(|_cx| gpui::Empty)
})
}
column_ix += 1;
resize_divider.child(resize_handle).into_any_element()
})
});
div()
.id("resize-handles")
.h_flex()
.absolute()
.w_full()
.inset_0()
.children(dividers)
.into_any_element()
}
fn render_vertical_scrollbar_track( fn render_vertical_scrollbar_track(
this: &Entity<Self>, this: &Entity<Self>,
parent: Div, parent: Div,
@ -365,6 +455,242 @@ impl TableInteractionState {
} }
} }
#[derive(Debug, Copy, Clone, PartialEq)]
pub enum ResizeBehavior {
None,
Resizable,
MinSize(f32),
}
impl ResizeBehavior {
pub fn is_resizable(&self) -> bool {
*self != ResizeBehavior::None
}
pub fn min_size(&self) -> Option<f32> {
match self {
ResizeBehavior::None => None,
ResizeBehavior::Resizable => Some(0.05),
ResizeBehavior::MinSize(min_size) => Some(*min_size),
}
}
}
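// Illustrative sketch, not part of this change: the values handed back by min_size()
// are fractions of the table width, used as clamps while a drag is propagated.
#[test]
fn resize_behavior_min_sizes() {
    assert_eq!(ResizeBehavior::None.min_size(), None);
    assert_eq!(ResizeBehavior::Resizable.min_size(), Some(0.05));
    assert_eq!(ResizeBehavior::MinSize(0.2).min_size(), Some(0.2));
    assert!(ResizeBehavior::MinSize(0.2).is_resizable());
}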
pub struct ColumnWidths<const COLS: usize> {
widths: [DefiniteLength; COLS],
cached_bounds_width: Pixels,
initialized: bool,
}
impl<const COLS: usize> ColumnWidths<COLS> {
pub fn new(_: &mut App) -> Self {
Self {
widths: [DefiniteLength::default(); COLS],
cached_bounds_width: Default::default(),
initialized: false,
}
}
fn get_fraction(length: &DefiniteLength, bounds_width: Pixels, rem_size: Pixels) -> f32 {
match length {
DefiniteLength::Absolute(AbsoluteLength::Pixels(pixels)) => *pixels / bounds_width,
DefiniteLength::Absolute(AbsoluteLength::Rems(rems_width)) => {
rems_width.to_pixels(rem_size) / bounds_width
}
DefiniteLength::Fraction(fraction) => *fraction,
}
}
fn on_double_click(
&mut self,
double_click_position: usize,
initial_sizes: &[DefiniteLength; COLS],
resize_behavior: &[ResizeBehavior; COLS],
window: &mut Window,
) {
let bounds_width = self.cached_bounds_width;
let rem_size = window.rem_size();
let initial_sizes =
initial_sizes.map(|length| Self::get_fraction(&length, bounds_width, rem_size));
let mut widths = self
.widths
.map(|length| Self::get_fraction(&length, bounds_width, rem_size));
let diff = initial_sizes[double_click_position] - widths[double_click_position];
if diff > 0.0 {
let diff_remaining = self.propagate_resize_diff_right(
diff,
double_click_position,
&mut widths,
resize_behavior,
);
if diff_remaining > 0.0 && double_click_position > 0 {
self.propagate_resize_diff_left(
-diff_remaining,
double_click_position - 1,
&mut widths,
resize_behavior,
);
}
} else if double_click_position > 0 {
let diff_remaining = self.propagate_resize_diff_left(
diff,
double_click_position,
&mut widths,
resize_behavior,
);
if diff_remaining < 0.0 {
self.propagate_resize_diff_right(
-diff_remaining,
double_click_position,
&mut widths,
resize_behavior,
);
}
}
self.widths = widths.map(DefiniteLength::Fraction);
}
fn on_drag_move(
&mut self,
drag_event: &DragMoveEvent<DraggedColumn>,
resize_behavior: &[ResizeBehavior; COLS],
window: &mut Window,
cx: &mut Context<Self>,
) {
let drag_position = drag_event.event.position;
let bounds = drag_event.bounds;
let mut col_position = 0.0;
let rem_size = window.rem_size();
let bounds_width = bounds.right() - bounds.left();
let col_idx = drag_event.drag(cx).0;
let mut widths = self
.widths
.map(|length| Self::get_fraction(&length, bounds_width, rem_size));
for length in widths[0..=col_idx].iter() {
col_position += length;
}
let mut total_length_ratio = col_position;
for length in widths[col_idx + 1..].iter() {
total_length_ratio += length;
}
let drag_fraction = (drag_position.x - bounds.left()) / bounds_width;
let drag_fraction = drag_fraction * total_length_ratio;
let diff = drag_fraction - col_position;
let is_dragging_right = diff > 0.0;
if is_dragging_right {
self.propagate_resize_diff_right(diff, col_idx, &mut widths, resize_behavior);
} else {
// Resize behavior should be improved in the future by also seeking to the right column when there's not enough space
self.propagate_resize_diff_left(diff, col_idx, &mut widths, resize_behavior);
}
self.widths = widths.map(DefiniteLength::Fraction);
}
fn propagate_resize_diff_right(
&self,
diff: f32,
col_idx: usize,
widths: &mut [f32; COLS],
resize_behavior: &[ResizeBehavior; COLS],
) -> f32 {
let mut diff_remaining = diff;
let mut curr_column = col_idx + 1;
while diff_remaining > 0.0 && curr_column < COLS {
let Some(min_size) = resize_behavior[curr_column - 1].min_size() else {
curr_column += 1;
continue;
};
let mut curr_width = widths[curr_column] - diff_remaining;
diff_remaining = 0.0;
if min_size > curr_width {
diff_remaining += min_size - curr_width;
curr_width = min_size;
}
widths[curr_column] = curr_width;
curr_column += 1;
}
widths[col_idx] = widths[col_idx] + (diff - diff_remaining);
return diff_remaining;
}
fn propagate_resize_diff_left(
&mut self,
diff: f32,
mut curr_column: usize,
widths: &mut [f32; COLS],
resize_behavior: &[ResizeBehavior; COLS],
) -> f32 {
let mut diff_remaining = diff;
let col_idx = curr_column;
while diff_remaining < 0.0 {
let Some(min_size) = resize_behavior[curr_column].min_size() else {
if curr_column == 0 {
break;
}
curr_column -= 1;
continue;
};
let mut curr_width = widths[curr_column] + diff_remaining;
diff_remaining = 0.0;
if curr_width < min_size {
diff_remaining = curr_width - min_size;
curr_width = min_size
}
widths[curr_column] = curr_width;
if curr_column == 0 {
break;
}
curr_column -= 1;
}
widths[col_idx + 1] = widths[col_idx + 1] - (diff - diff_remaining);
return diff_remaining;
}
}
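// Illustrative sketch, not part of this change: a simplified restatement of the
// rightward propagation, to show the arithmetic only. The real method also grows the
// dragged column by whatever the neighbours absorbed and reads each column's minimum
// from ResizeBehavior instead of a single `min` value.
fn propagate_right_simplified(diff: f32, col: usize, widths: &mut [f32], min: f32) -> f32 {
    let mut remaining = diff;
    for width in widths.iter_mut().skip(col + 1) {
        if remaining <= 0.0 {
            break;
        }
        let shrunk = (*width - remaining).max(min);
        remaining -= *width - shrunk; // carry over whatever this column could not give up
        *width = shrunk;
    }
    remaining
}

#[test]
fn drag_spills_into_the_next_column() {
    // Dragging the first divider right by 30% with a 5% floor: the middle column can
    // only give up 25%, so the remaining 5% is taken from the last column.
    let mut widths = [0.4_f32, 0.3, 0.3];
    let leftover = propagate_right_simplified(0.30, 0, &mut widths, 0.05);
    assert!(leftover.abs() < 1e-6);
    assert!((widths[1] - 0.05).abs() < 1e-6);
    assert!((widths[2] - 0.25).abs() < 1e-6);
}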
pub struct TableWidths<const COLS: usize> {
initial: [DefiniteLength; COLS],
current: Option<Entity<ColumnWidths<COLS>>>,
resizable: [ResizeBehavior; COLS],
}
impl<const COLS: usize> TableWidths<COLS> {
pub fn new(widths: [impl Into<DefiniteLength>; COLS]) -> Self {
let widths = widths.map(Into::into);
TableWidths {
initial: widths,
current: None,
resizable: [ResizeBehavior::None; COLS],
}
}
fn lengths(&self, cx: &App) -> [Length; COLS] {
self.current
.as_ref()
.map(|entity| entity.read(cx).widths.map(Length::Definite))
.unwrap_or(self.initial.map(Length::Definite))
}
}
/// A table component /// A table component
#[derive(RegisterComponent, IntoElement)] #[derive(RegisterComponent, IntoElement)]
pub struct Table<const COLS: usize = 3> { pub struct Table<const COLS: usize = 3> {
@ -373,21 +699,23 @@ pub struct Table<const COLS: usize = 3> {
headers: Option<[AnyElement; COLS]>, headers: Option<[AnyElement; COLS]>,
rows: TableContents<COLS>, rows: TableContents<COLS>,
interaction_state: Option<WeakEntity<TableInteractionState>>, interaction_state: Option<WeakEntity<TableInteractionState>>,
column_widths: Option<[Length; COLS]>, col_widths: Option<TableWidths<COLS>>,
map_row: Option<Rc<dyn Fn((usize, Div), &mut Window, &mut App) -> AnyElement>>, map_row: Option<Rc<dyn Fn((usize, Stateful<Div>), &mut Window, &mut App) -> AnyElement>>,
empty_table_callback: Option<Rc<dyn Fn(&mut Window, &mut App) -> AnyElement>>,
} }
impl<const COLS: usize> Table<COLS> { impl<const COLS: usize> Table<COLS> {
/// number of headers provided. /// number of headers provided.
pub fn new() -> Self { pub fn new() -> Self {
Table { Self {
striped: false, striped: false,
width: None, width: None,
headers: None, headers: None,
rows: TableContents::Vec(Vec::new()), rows: TableContents::Vec(Vec::new()),
interaction_state: None, interaction_state: None,
column_widths: None,
map_row: None, map_row: None,
empty_table_callback: None,
col_widths: None,
} }
} }
@ -448,32 +776,68 @@ impl<const COLS: usize> Table<COLS> {
self self
} }
pub fn column_widths(mut self, widths: [impl Into<Length>; COLS]) -> Self { pub fn column_widths(mut self, widths: [impl Into<DefiniteLength>; COLS]) -> Self {
self.column_widths = Some(widths.map(Into::into)); if self.col_widths.is_none() {
self.col_widths = Some(TableWidths::new(widths));
}
self
}
pub fn resizable_columns(
mut self,
resizable: [ResizeBehavior; COLS],
column_widths: &Entity<ColumnWidths<COLS>>,
cx: &mut App,
) -> Self {
if let Some(table_widths) = self.col_widths.as_mut() {
table_widths.resizable = resizable;
let column_widths = table_widths
.current
.get_or_insert_with(|| column_widths.clone());
column_widths.update(cx, |widths, _| {
if !widths.initialized {
widths.initialized = true;
widths.widths = table_widths.initial;
}
})
}
self self
} }
pub fn map_row( pub fn map_row(
mut self, mut self,
callback: impl Fn((usize, Div), &mut Window, &mut App) -> AnyElement + 'static, callback: impl Fn((usize, Stateful<Div>), &mut Window, &mut App) -> AnyElement + 'static,
) -> Self { ) -> Self {
self.map_row = Some(Rc::new(callback)); self.map_row = Some(Rc::new(callback));
self self
} }
/// Provide a callback that is invoked when the table is rendered without any rows
pub fn empty_table_callback(
mut self,
callback: impl Fn(&mut Window, &mut App) -> AnyElement + 'static,
) -> Self {
self.empty_table_callback = Some(Rc::new(callback));
self
}
} }
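// Illustrative sketch, not part of this change: wiring the new builder methods
// together. `widths_entity` is an assumed Entity<ColumnWidths<3>> created once with
// ColumnWidths::new and stored on the view; Label and the strings are placeholders.
fn example_table(widths_entity: &Entity<ColumnWidths<3>>, cx: &mut App) -> Table<3> {
    Table::<3>::new()
        .column_widths([relative(0.5), relative(0.25), relative(0.25)])
        .resizable_columns(
            [
                ResizeBehavior::MinSize(0.2),
                ResizeBehavior::Resizable,
                ResizeBehavior::None,
            ],
            widths_entity,
            cx,
        )
        .empty_table_callback(|_window, _cx| {
            Label::new("No rows to display").into_any_element()
        })
}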
fn base_cell_style(width: Option<Length>, cx: &App) -> Div { fn base_cell_style(width: Option<Length>) -> Div {
div() div()
.px_1p5() .px_1p5()
.when_some(width, |this, width| this.w(width)) .when_some(width, |this, width| this.w(width))
.when(width.is_none(), |this| this.flex_1()) .when(width.is_none(), |this| this.flex_1())
.justify_start() .justify_start()
.text_ui(cx)
.whitespace_nowrap() .whitespace_nowrap()
.text_ellipsis() .text_ellipsis()
.overflow_hidden() .overflow_hidden()
} }
fn base_cell_style_text(width: Option<Length>, cx: &App) -> Div {
base_cell_style(width).text_ui(cx)
}
pub fn render_row<const COLS: usize>( pub fn render_row<const COLS: usize>(
row_index: usize, row_index: usize,
items: [impl IntoElement; COLS], items: [impl IntoElement; COLS],
@ -492,33 +856,33 @@ pub fn render_row<const COLS: usize>(
.column_widths .column_widths
.map_or([None; COLS], |widths| widths.map(Some)); .map_or([None; COLS], |widths| widths.map(Some));
let row = div().w_full().child( let mut row = h_flex()
h_flex() .h_full()
.id("table_row") .id(("table_row", row_index))
.w_full() .w_full()
.justify_between() .justify_between()
.px_1p5() .when_some(bg, |row, bg| row.bg(bg))
.py_1() .when(!is_striped, |row| {
.when_some(bg, |row, bg| row.bg(bg)) row.border_b_1()
.when(!is_striped, |row| { .border_color(transparent_black())
row.border_b_1() .when(!is_last, |row| row.border_color(cx.theme().colors().border))
.border_color(transparent_black()) });
.when(!is_last, |row| row.border_color(cx.theme().colors().border))
}) row = row.children(
.children( items
items .map(IntoElement::into_any_element)
.map(IntoElement::into_any_element) .into_iter()
.into_iter() .zip(column_widths)
.zip(column_widths) .map(|(cell, width)| base_cell_style_text(width, cx).px_1p5().py_1().child(cell)),
.map(|(cell, width)| base_cell_style(width, cx).child(cell)),
),
); );
if let Some(map_row) = table_context.map_row { let row = if let Some(map_row) = table_context.map_row {
map_row((row_index, row), window, cx) map_row((row_index, row), window, cx)
} else { } else {
row.into_any_element() row.into_any_element()
} };
div().h_full().w_full().child(row).into_any_element()
} }
pub fn render_header<const COLS: usize>( pub fn render_header<const COLS: usize>(
@ -542,7 +906,7 @@ pub fn render_header<const COLS: usize>(
headers headers
.into_iter() .into_iter()
.zip(column_widths) .zip(column_widths)
.map(|(h, width)| base_cell_style(width, cx).child(h)), .map(|(h, width)| base_cell_style_text(width, cx).child(h)),
) )
} }
@ -551,15 +915,15 @@ pub struct TableRenderContext<const COLS: usize> {
pub striped: bool, pub striped: bool,
pub total_row_count: usize, pub total_row_count: usize,
pub column_widths: Option<[Length; COLS]>, pub column_widths: Option<[Length; COLS]>,
pub map_row: Option<Rc<dyn Fn((usize, Div), &mut Window, &mut App) -> AnyElement>>, pub map_row: Option<Rc<dyn Fn((usize, Stateful<Div>), &mut Window, &mut App) -> AnyElement>>,
} }
impl<const COLS: usize> TableRenderContext<COLS> { impl<const COLS: usize> TableRenderContext<COLS> {
fn new(table: &Table<COLS>) -> Self { fn new(table: &Table<COLS>, cx: &App) -> Self {
Self { Self {
striped: table.striped, striped: table.striped,
total_row_count: table.rows.len(), total_row_count: table.rows.len(),
column_widths: table.column_widths, column_widths: table.col_widths.as_ref().map(|widths| widths.lengths(cx)),
map_row: table.map_row.clone(), map_row: table.map_row.clone(),
} }
} }
@ -567,8 +931,13 @@ impl<const COLS: usize> TableRenderContext<COLS> {
impl<const COLS: usize> RenderOnce for Table<COLS> { impl<const COLS: usize> RenderOnce for Table<COLS> {
fn render(mut self, window: &mut Window, cx: &mut App) -> impl IntoElement { fn render(mut self, window: &mut Window, cx: &mut App) -> impl IntoElement {
let table_context = TableRenderContext::new(&self); let table_context = TableRenderContext::new(&self, cx);
let interaction_state = self.interaction_state.and_then(|state| state.upgrade()); let interaction_state = self.interaction_state.and_then(|state| state.upgrade());
let current_widths = self
.col_widths
.as_ref()
.and_then(|widths| Some((widths.current.as_ref()?, widths.resizable)))
.map(|(curr, resize_behavior)| (curr.downgrade(), resize_behavior));
let scroll_track_size = px(16.); let scroll_track_size = px(16.);
let h_scroll_offset = if interaction_state let h_scroll_offset = if interaction_state
@ -582,6 +951,7 @@ impl<const COLS: usize> RenderOnce for Table<COLS> {
}; };
let width = self.width; let width = self.width;
let no_rows_rendered = self.rows.is_empty();
let table = div() let table = div()
.when_some(width, |this, width| this.w(width)) .when_some(width, |this, width| this.w(width))
@ -590,6 +960,31 @@ impl<const COLS: usize> RenderOnce for Table<COLS> {
.when_some(self.headers.take(), |this, headers| { .when_some(self.headers.take(), |this, headers| {
this.child(render_header(headers, table_context.clone(), cx)) this.child(render_header(headers, table_context.clone(), cx))
}) })
.when_some(current_widths, {
|this, (widths, resize_behavior)| {
this.on_drag_move::<DraggedColumn>({
let widths = widths.clone();
move |e, window, cx| {
widths
.update(cx, |widths, cx| {
widths.on_drag_move(e, &resize_behavior, window, cx);
})
.ok();
}
})
.on_children_prepainted(move |bounds, _, cx| {
widths
.update(cx, |widths, _| {
// This works because all children x axis bounds are the same
widths.cached_bounds_width = bounds[0].right() - bounds[0].left();
})
.ok();
})
}
})
.on_drop::<DraggedColumn>(|_, _, _| {
// Finish the resize operation
})
.child( .child(
div() div()
.flex_grow() .flex_grow()
@ -644,6 +1039,25 @@ impl<const COLS: usize> RenderOnce for Table<COLS> {
), ),
), ),
}) })
.when_some(
self.col_widths.as_ref().zip(interaction_state.as_ref()),
|parent, (table_widths, state)| {
parent.child(state.update(cx, |state, cx| {
let resizable_columns = table_widths.resizable;
let column_widths = table_widths.lengths(cx);
let columns = table_widths.current.clone();
let initial_sizes = table_widths.initial;
state.render_resize_handles(
&column_widths,
&resizable_columns,
initial_sizes,
columns,
window,
cx,
)
}))
},
)
.when_some(interaction_state.as_ref(), |this, interaction_state| { .when_some(interaction_state.as_ref(), |this, interaction_state| {
this.map(|this| { this.map(|this| {
TableInteractionState::render_vertical_scrollbar_track( TableInteractionState::render_vertical_scrollbar_track(
@ -662,6 +1076,21 @@ impl<const COLS: usize> RenderOnce for Table<COLS> {
}) })
}), }),
) )
.when_some(
no_rows_rendered
.then_some(self.empty_table_callback)
.flatten(),
|this, callback| {
this.child(
h_flex()
.size_full()
.p_3()
.items_start()
.justify_center()
.child(callback(window, cx)),
)
},
)
.when_some( .when_some(
width.and(interaction_state.as_ref()), width.and(interaction_state.as_ref()),
|this, interaction_state| { |this, interaction_state| {

View file

@ -320,7 +320,39 @@ impl History {
last_edit_at: now, last_edit_at: now,
suppress_grouping: false, suppress_grouping: false,
}); });
self.redo_stack.clear(); }
/// Differs from `push_transaction` in that it does not clear the redo
/// stack. Intended to be used to create a parent transaction to merge
/// potential child transactions into.
///
/// The caller is responsible for removing it from the undo history using
/// `forget_transaction` if no edits are merged into it. Otherwise, if edits
/// are merged into this transaction, the caller is responsible for ensuring
/// the redo stack is cleared. The easiest way to ensure the redo stack is
/// cleared is to create transactions with the usual `start_transaction` and
/// `end_transaction` methods, then merge the resulting transactions into
/// the transaction created by this method.
fn push_empty_transaction(
&mut self,
start: clock::Global,
now: Instant,
clock: &mut clock::Lamport,
) -> TransactionId {
assert_eq!(self.transaction_depth, 0);
let id = clock.tick();
let transaction = Transaction {
id,
start,
edit_ids: Vec::new(),
};
self.undo_stack.push(HistoryEntry {
transaction,
first_edit_at: now,
last_edit_at: now,
suppress_grouping: false,
});
id
} }
fn push_undo(&mut self, op_id: clock::Lamport) { fn push_undo(&mut self, op_id: clock::Lamport) {
@ -1495,6 +1527,24 @@ impl Buffer {
self.history.push_transaction(transaction, now); self.history.push_transaction(transaction, now);
} }
/// Differs from `push_transaction` in that it does not clear the redo stack.
/// The caller responsible for
/// Differs from `push_transaction` in that it does not clear the redo
/// stack. Intended to be used to create a parent transaction to merge
/// potential child transactions into.
///
/// The caller is responsible for removing it from the undo history using
/// `forget_transaction` if no edits are merged into it. Otherwise, if edits
/// are merged into this transaction, the caller is responsible for ensuring
/// the redo stack is cleared. The easiest way to ensure the redo stack is
/// cleared is to create transactions with the usual `start_transaction` and
/// `end_transaction` methods, then merge the resulting transactions into
/// the transaction created by this method.
pub fn push_empty_transaction(&mut self, now: Instant) -> TransactionId {
self.history
.push_empty_transaction(self.version.clone(), now, &mut self.lamport_clock)
}
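// Illustrative sketch, not part of this change: the caller contract described in the
// comment above. `no_edits_were_merged` stands in for whatever bookkeeping the caller
// does, and the exact signature of `forget_transaction` (referenced by the comment) is
// assumed here; the merge step itself goes through the usual transaction methods.
fn group_child_edits(buffer: &mut Buffer, no_edits_were_merged: bool) {
    let parent = buffer.push_empty_transaction(std::time::Instant::now());
    // ...child edits happen inside ordinary start_transaction / end_transaction pairs
    // and are merged into `parent`; those paths clear the redo stack themselves...
    if no_edits_were_merged {
        // Nothing was grouped under the parent, so drop the empty entry from undo history.
        buffer.forget_transaction(parent);
    }
}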
pub fn edited_ranges_for_transaction_id<D>( pub fn edited_ranges_for_transaction_id<D>(
&self, &self,
transaction_id: TransactionId, transaction_id: TransactionId,

View file

@ -83,6 +83,8 @@ impl ThemeColors {
panel_indent_guide: neutral().light_alpha().step_5(), panel_indent_guide: neutral().light_alpha().step_5(),
panel_indent_guide_hover: neutral().light_alpha().step_6(), panel_indent_guide_hover: neutral().light_alpha().step_6(),
panel_indent_guide_active: neutral().light_alpha().step_6(), panel_indent_guide_active: neutral().light_alpha().step_6(),
panel_overlay_background: neutral().light().step_2(),
panel_overlay_hover: neutral().light_alpha().step_4(),
pane_focused_border: blue().light().step_5(), pane_focused_border: blue().light().step_5(),
pane_group_border: neutral().light().step_6(), pane_group_border: neutral().light().step_6(),
scrollbar_thumb_background: neutral().light_alpha().step_3(), scrollbar_thumb_background: neutral().light_alpha().step_3(),
@ -206,6 +208,8 @@ impl ThemeColors {
panel_indent_guide: neutral().dark_alpha().step_4(), panel_indent_guide: neutral().dark_alpha().step_4(),
panel_indent_guide_hover: neutral().dark_alpha().step_6(), panel_indent_guide_hover: neutral().dark_alpha().step_6(),
panel_indent_guide_active: neutral().dark_alpha().step_6(), panel_indent_guide_active: neutral().dark_alpha().step_6(),
panel_overlay_background: neutral().dark().step_2(),
panel_overlay_hover: neutral().dark_alpha().step_4(),
pane_focused_border: blue().dark().step_5(), pane_focused_border: blue().dark().step_5(),
pane_group_border: neutral().dark().step_6(), pane_group_border: neutral().dark().step_6(),
scrollbar_thumb_background: neutral().dark_alpha().step_3(), scrollbar_thumb_background: neutral().dark_alpha().step_3(),

View file

@ -59,6 +59,7 @@ pub(crate) fn zed_default_dark() -> Theme {
let bg = hsla(215. / 360., 12. / 100., 15. / 100., 1.); let bg = hsla(215. / 360., 12. / 100., 15. / 100., 1.);
let editor = hsla(220. / 360., 12. / 100., 18. / 100., 1.); let editor = hsla(220. / 360., 12. / 100., 18. / 100., 1.);
let elevated_surface = hsla(225. / 360., 12. / 100., 17. / 100., 1.); let elevated_surface = hsla(225. / 360., 12. / 100., 17. / 100., 1.);
let hover = hsla(225.0 / 360., 11.8 / 100., 26.7 / 100., 1.0);
let blue = hsla(207.8 / 360., 81. / 100., 66. / 100., 1.0); let blue = hsla(207.8 / 360., 81. / 100., 66. / 100., 1.0);
let gray = hsla(218.8 / 360., 10. / 100., 40. / 100., 1.0); let gray = hsla(218.8 / 360., 10. / 100., 40. / 100., 1.0);
@ -108,14 +109,14 @@ pub(crate) fn zed_default_dark() -> Theme {
surface_background: bg, surface_background: bg,
background: bg, background: bg,
element_background: hsla(223.0 / 360., 13. / 100., 21. / 100., 1.0), element_background: hsla(223.0 / 360., 13. / 100., 21. / 100., 1.0),
element_hover: hsla(225.0 / 360., 11.8 / 100., 26.7 / 100., 1.0), element_hover: hover,
element_active: hsla(220.0 / 360., 11.8 / 100., 20.0 / 100., 1.0), element_active: hsla(220.0 / 360., 11.8 / 100., 20.0 / 100., 1.0),
element_selected: hsla(224.0 / 360., 11.3 / 100., 26.1 / 100., 1.0), element_selected: hsla(224.0 / 360., 11.3 / 100., 26.1 / 100., 1.0),
element_disabled: SystemColors::default().transparent, element_disabled: SystemColors::default().transparent,
element_selection_background: player.local().selection.alpha(0.25), element_selection_background: player.local().selection.alpha(0.25),
drop_target_background: hsla(220.0 / 360., 8.3 / 100., 21.4 / 100., 1.0), drop_target_background: hsla(220.0 / 360., 8.3 / 100., 21.4 / 100., 1.0),
ghost_element_background: SystemColors::default().transparent, ghost_element_background: SystemColors::default().transparent,
ghost_element_hover: hsla(225.0 / 360., 11.8 / 100., 26.7 / 100., 1.0), ghost_element_hover: hover,
ghost_element_active: hsla(220.0 / 360., 11.8 / 100., 20.0 / 100., 1.0), ghost_element_active: hsla(220.0 / 360., 11.8 / 100., 20.0 / 100., 1.0),
ghost_element_selected: hsla(224.0 / 360., 11.3 / 100., 26.1 / 100., 1.0), ghost_element_selected: hsla(224.0 / 360., 11.3 / 100., 26.1 / 100., 1.0),
ghost_element_disabled: SystemColors::default().transparent, ghost_element_disabled: SystemColors::default().transparent,
@ -202,10 +203,12 @@ pub(crate) fn zed_default_dark() -> Theme {
panel_indent_guide: hsla(228. / 360., 8. / 100., 25. / 100., 1.), panel_indent_guide: hsla(228. / 360., 8. / 100., 25. / 100., 1.),
panel_indent_guide_hover: hsla(225. / 360., 13. / 100., 12. / 100., 1.), panel_indent_guide_hover: hsla(225. / 360., 13. / 100., 12. / 100., 1.),
panel_indent_guide_active: hsla(225. / 360., 13. / 100., 12. / 100., 1.), panel_indent_guide_active: hsla(225. / 360., 13. / 100., 12. / 100., 1.),
panel_overlay_background: bg,
panel_overlay_hover: hover,
pane_focused_border: blue, pane_focused_border: blue,
pane_group_border: hsla(225. / 360., 13. / 100., 12. / 100., 1.), pane_group_border: hsla(225. / 360., 13. / 100., 12. / 100., 1.),
scrollbar_thumb_background: gpui::transparent_black(), scrollbar_thumb_background: gpui::transparent_black(),
scrollbar_thumb_hover_background: hsla(225.0 / 360., 11.8 / 100., 26.7 / 100., 1.0), scrollbar_thumb_hover_background: hover,
scrollbar_thumb_active_background: hsla( scrollbar_thumb_active_background: hsla(
225.0 / 360., 225.0 / 360.,
11.8 / 100., 11.8 / 100.,

View file

@ -352,6 +352,12 @@ pub struct ThemeColorsContent {
#[serde(rename = "panel.indent_guide_active")] #[serde(rename = "panel.indent_guide_active")]
pub panel_indent_guide_active: Option<String>, pub panel_indent_guide_active: Option<String>,
#[serde(rename = "panel.overlay_background")]
pub panel_overlay_background: Option<String>,
#[serde(rename = "panel.overlay_hover")]
pub panel_overlay_hover: Option<String>,
#[serde(rename = "pane.focused_border")] #[serde(rename = "pane.focused_border")]
pub pane_focused_border: Option<String>, pub pane_focused_border: Option<String>,
@ -675,6 +681,14 @@ impl ThemeColorsContent {
.scrollbar_thumb_border .scrollbar_thumb_border
.as_ref() .as_ref()
.and_then(|color| try_parse_color(color).ok()); .and_then(|color| try_parse_color(color).ok());
let element_hover = self
.element_hover
.as_ref()
.and_then(|color| try_parse_color(color).ok());
let panel_background = self
.panel_background
.as_ref()
.and_then(|color| try_parse_color(color).ok());
ThemeColorsRefinement { ThemeColorsRefinement {
border, border,
border_variant: self border_variant: self
@ -713,10 +727,7 @@ impl ThemeColorsContent {
.element_background .element_background
.as_ref() .as_ref()
.and_then(|color| try_parse_color(color).ok()), .and_then(|color| try_parse_color(color).ok()),
element_hover: self element_hover,
.element_hover
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
element_active: self element_active: self
.element_active .element_active
.as_ref() .as_ref()
@ -833,10 +844,7 @@ impl ThemeColorsContent {
.search_match_background .search_match_background
.as_ref() .as_ref()
.and_then(|color| try_parse_color(color).ok()), .and_then(|color| try_parse_color(color).ok()),
panel_background: self panel_background,
.panel_background
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
panel_focused_border: self panel_focused_border: self
.panel_focused_border .panel_focused_border
.as_ref() .as_ref()
@ -853,6 +861,16 @@ impl ThemeColorsContent {
.panel_indent_guide_active .panel_indent_guide_active
.as_ref() .as_ref()
.and_then(|color| try_parse_color(color).ok()), .and_then(|color| try_parse_color(color).ok()),
panel_overlay_background: self
.panel_overlay_background
.as_ref()
.and_then(|color| try_parse_color(color).ok())
.or(panel_background),
panel_overlay_hover: self
.panel_overlay_hover
.as_ref()
.and_then(|color| try_parse_color(color).ok())
.or(element_hover),
pane_focused_border: self pane_focused_border: self
.pane_focused_border .pane_focused_border
.as_ref() .as_ref()

View file

@ -131,6 +131,12 @@ pub struct ThemeColors {
pub panel_indent_guide: Hsla, pub panel_indent_guide: Hsla,
pub panel_indent_guide_hover: Hsla, pub panel_indent_guide_hover: Hsla,
pub panel_indent_guide_active: Hsla, pub panel_indent_guide_active: Hsla,
/// The color of the overlay surface on top of panel.
pub panel_overlay_background: Hsla,
/// The color of the overlay surface on top of panel when hovered over.
pub panel_overlay_hover: Hsla,
pub pane_focused_border: Hsla, pub pane_focused_border: Hsla,
pub pane_group_border: Hsla, pub pane_group_border: Hsla,
/// The color of the scrollbar thumb. /// The color of the scrollbar thumb.
@ -326,6 +332,8 @@ pub enum ThemeColorField {
PanelIndentGuide, PanelIndentGuide,
PanelIndentGuideHover, PanelIndentGuideHover,
PanelIndentGuideActive, PanelIndentGuideActive,
PanelOverlayBackground,
PanelOverlayHover,
PaneFocusedBorder, PaneFocusedBorder,
PaneGroupBorder, PaneGroupBorder,
ScrollbarThumbBackground, ScrollbarThumbBackground,
@ -438,6 +446,8 @@ impl ThemeColors {
ThemeColorField::PanelIndentGuide => self.panel_indent_guide, ThemeColorField::PanelIndentGuide => self.panel_indent_guide,
ThemeColorField::PanelIndentGuideHover => self.panel_indent_guide_hover, ThemeColorField::PanelIndentGuideHover => self.panel_indent_guide_hover,
ThemeColorField::PanelIndentGuideActive => self.panel_indent_guide_active, ThemeColorField::PanelIndentGuideActive => self.panel_indent_guide_active,
ThemeColorField::PanelOverlayBackground => self.panel_overlay_background,
ThemeColorField::PanelOverlayHover => self.panel_overlay_hover,
ThemeColorField::PaneFocusedBorder => self.pane_focused_border, ThemeColorField::PaneFocusedBorder => self.pane_focused_border,
ThemeColorField::PaneGroupBorder => self.pane_group_border, ThemeColorField::PaneGroupBorder => self.pane_group_border,
ThemeColorField::ScrollbarThumbBackground => self.scrollbar_thumb_background, ThemeColorField::ScrollbarThumbBackground => self.scrollbar_thumb_background,

View file

@ -40,6 +40,7 @@ rpc.workspace = true
schemars.workspace = true schemars.workspace = true
serde.workspace = true serde.workspace = true
settings.workspace = true settings.workspace = true
settings_ui.workspace = true
smallvec.workspace = true smallvec.workspace = true
story = { workspace = true, optional = true } story = { workspace = true, optional = true }
telemetry.workspace = true telemetry.workspace = true

View file

@ -30,6 +30,7 @@ use onboarding_banner::OnboardingBanner;
use project::Project; use project::Project;
use rpc::proto; use rpc::proto;
use settings::Settings as _; use settings::Settings as _;
use settings_ui::keybindings;
use std::sync::Arc; use std::sync::Arc;
use theme::ActiveTheme; use theme::ActiveTheme;
use title_bar_settings::TitleBarSettings; use title_bar_settings::TitleBarSettings;
@ -683,7 +684,7 @@ impl TitleBar {
) )
.separator() .separator()
.action("Settings", zed_actions::OpenSettings.boxed_clone()) .action("Settings", zed_actions::OpenSettings.boxed_clone())
.action("Key Bindings", Box::new(zed_actions::OpenKeymap)) .action("Key Bindings", Box::new(keybindings::OpenKeymapEditor))
.action( .action(
"Themes…", "Themes…",
zed_actions::theme_selector::Toggle::default().boxed_clone(), zed_actions::theme_selector::Toggle::default().boxed_clone(),
@ -727,7 +728,7 @@ impl TitleBar {
.menu(|window, cx| { .menu(|window, cx| {
ContextMenu::build(window, cx, |menu, _, _| { ContextMenu::build(window, cx, |menu, _, _| {
menu.action("Settings", zed_actions::OpenSettings.boxed_clone()) menu.action("Settings", zed_actions::OpenSettings.boxed_clone())
.action("Key Bindings", Box::new(zed_actions::OpenKeymap)) .action("Key Bindings", Box::new(keybindings::OpenKeymapEditor))
.action( .action(
"Themes…", "Themes…",
zed_actions::theme_selector::Toggle::default().boxed_clone(), zed_actions::theme_selector::Toggle::default().boxed_clone(),

View file

@ -972,12 +972,10 @@ impl ContextMenu {
.children(action.as_ref().and_then(|action| { .children(action.as_ref().and_then(|action| {
self.action_context self.action_context
.as_ref() .as_ref()
.map(|focus| { .and_then(|focus| {
KeyBinding::for_action_in(&**action, focus, window, cx) KeyBinding::for_action_in(&**action, focus, window, cx)
}) })
.unwrap_or_else(|| { .or_else(|| KeyBinding::for_action(&**action, window, cx))
KeyBinding::for_action(&**action, window, cx)
})
.map(|binding| { .map(|binding| {
div().ml_4().child(binding.disabled(*disabled)).when( div().ml_4().child(binding.disabled(*disabled)).when(
*disabled && documentation_aside.is_some(), *disabled && documentation_aside.is_some(),

View file

@ -943,6 +943,8 @@ mod element {
pub struct PaneAxisElement { pub struct PaneAxisElement {
axis: Axis, axis: Axis,
basis: usize, basis: usize,
/// Equivalent to ColumnWidths (but in terms of flexes instead of percentages)
/// For example, flexes "1.33, 1, 1", instead of "40%, 30%, 30%"
flexes: Arc<Mutex<Vec<f32>>>, flexes: Arc<Mutex<Vec<f32>>>,
bounding_boxes: Arc<Mutex<Vec<Option<Bounds<Pixels>>>>>, bounding_boxes: Arc<Mutex<Vec<Option<Bounds<Pixels>>>>>,
children: SmallVec<[AnyElement; 2]>, children: SmallVec<[AnyElement; 2]>,
@ -998,6 +1000,7 @@ mod element {
let mut flexes = flexes.lock(); let mut flexes = flexes.lock();
debug_assert!(flex_values_in_bounds(flexes.as_slice())); debug_assert!(flex_values_in_bounds(flexes.as_slice()));
// Math to convert a flex value to a pixel value
let size = move |ix, flexes: &[f32]| { let size = move |ix, flexes: &[f32]| {
container_size.along(axis) * (flexes[ix] / flexes.len() as f32) container_size.along(axis) * (flexes[ix] / flexes.len() as f32)
}; };
@ -1007,9 +1010,13 @@ mod element {
return; return;
} }
// This is basically a "bucket" of pixel changes that need to be applied in response to this
// mouse event. Probably a small, fractional number like 0.5 or 1.5 pixels
let mut proposed_current_pixel_change = let mut proposed_current_pixel_change =
(e.position - child_start).along(axis) - size(ix, flexes.as_slice()); (e.position - child_start).along(axis) - size(ix, flexes.as_slice());
// This takes a pixel change, and computes the flex changes that correspond to this pixel change
// as well as the next one, for some reason
let flex_changes = |pixel_dx, target_ix, next: isize, flexes: &[f32]| { let flex_changes = |pixel_dx, target_ix, next: isize, flexes: &[f32]| {
let flex_change = pixel_dx / container_size.along(axis); let flex_change = pixel_dx / container_size.along(axis);
let current_target_flex = flexes[target_ix] + flex_change; let current_target_flex = flexes[target_ix] + flex_change;
@ -1017,6 +1024,9 @@ mod element {
(current_target_flex, next_target_flex) (current_target_flex, next_target_flex)
}; };
// Generate the list of flex successors, from the current index.
// If you're dragging column 3 forward, out of 6 columns, then this code will produce [4, 5, 6]
// If you're dragging column 3 backward, out of 6 columns, then this code will produce [2, 1, 0]
let mut successors = iter::from_fn({ let mut successors = iter::from_fn({
let forward = proposed_current_pixel_change > px(0.); let forward = proposed_current_pixel_change > px(0.);
let mut ix_offset = 0; let mut ix_offset = 0;
@ -1034,6 +1044,7 @@ mod element {
} }
}); });
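// Illustrative sketch, not part of this change: the flex-to-pixel math from the
// size closure above, with made-up numbers (three panes in a 900px-wide axis).
let container = 900.0_f32;
let flexes = [1.33_f32, 1.0, 1.0];
let size = |ix: usize| container * (flexes[ix] / flexes.len() as f32);
assert!((size(0) - 399.0).abs() < 0.5); // the "1.33 flex" pane gets ~399px
assert!((size(1) - 300.0).abs() < 0.5); // the remaining panes get 300px each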
// Now actually loop over these, and empty our bucket of pixel changes
while proposed_current_pixel_change.abs() > px(0.) { while proposed_current_pixel_change.abs() > px(0.) {
let Some(current_ix) = successors.next() else { let Some(current_ix) = successors.next() else {
break; break;

View file

@ -73,7 +73,7 @@ impl Workspace {
if let Some(terminal_provider) = self.terminal_provider.as_ref() { if let Some(terminal_provider) = self.terminal_provider.as_ref() {
let task_status = terminal_provider.spawn(spawn_in_terminal, window, cx); let task_status = terminal_provider.spawn(spawn_in_terminal, window, cx);
cx.background_spawn(async move { let task = cx.background_spawn(async move {
match task_status.await { match task_status.await {
Some(Ok(status)) => { Some(Ok(status)) => {
if status.success() { if status.success() {
@ -82,11 +82,11 @@ impl Workspace {
log::debug!("Task spawn failed, code: {:?}", status.code()); log::debug!("Task spawn failed, code: {:?}", status.code());
} }
} }
Some(Err(e)) => log::error!("Task spawn failed: {e}"), Some(Err(e)) => log::error!("Task spawn failed: {e:#}"),
None => log::debug!("Task spawn got cancelled"), None => log::debug!("Task spawn got cancelled"),
} }
}) });
.detach(); self.scheduled_tasks.push(task);
} }
} }

View file

@ -1088,6 +1088,7 @@ pub struct Workspace {
serialized_ssh_project: Option<SerializedSshProject>, serialized_ssh_project: Option<SerializedSshProject>,
_items_serializer: Task<Result<()>>, _items_serializer: Task<Result<()>>,
session_id: Option<String>, session_id: Option<String>,
scheduled_tasks: Vec<Task<()>>,
} }
impl EventEmitter<Event> for Workspace {} impl EventEmitter<Event> for Workspace {}
@ -1420,6 +1421,7 @@ impl Workspace {
_items_serializer, _items_serializer,
session_id: Some(session_id), session_id: Some(session_id),
serialized_ssh_project: None, serialized_ssh_project: None,
scheduled_tasks: Vec::new(),
} }
} }

View file

@ -2,7 +2,7 @@
description = "The fast, collaborative code editor." description = "The fast, collaborative code editor."
edition.workspace = true edition.workspace = true
name = "zed" name = "zed"
version = "0.196.0" version = "0.196.7"
publish.workspace = true publish.workspace = true
license = "GPL-3.0-or-later" license = "GPL-3.0-or-later"
authors = ["Zed Team <hi@zed.dev>"] authors = ["Zed Team <hi@zed.dev>"]

View file

@ -1 +1 @@
dev
stable

View file

@ -1,5 +1,6 @@
use collab_ui::collab_panel; use collab_ui::collab_panel;
use gpui::{Menu, MenuItem, OsAction}; use gpui::{Menu, MenuItem, OsAction};
use settings_ui::keybindings;
use terminal_view::terminal_panel; use terminal_view::terminal_panel;
pub fn app_menus() -> Vec<Menu> { pub fn app_menus() -> Vec<Menu> {
@ -16,7 +17,7 @@ pub fn app_menus() -> Vec<Menu> {
name: "Settings".into(), name: "Settings".into(),
items: vec![ items: vec![
MenuItem::action("Open Settings", super::OpenSettings), MenuItem::action("Open Settings", super::OpenSettings),
MenuItem::action("Open Key Bindings", zed_actions::OpenKeymap), MenuItem::action("Open Key Bindings", keybindings::OpenKeymapEditor),
MenuItem::action("Open Default Settings", super::OpenDefaultSettings), MenuItem::action("Open Default Settings", super::OpenDefaultSettings),
MenuItem::action( MenuItem::action(
"Open Default Key Bindings", "Open Default Key Bindings",

View file

@ -148,7 +148,7 @@ On some systems the file `/etc/prime-discrete` can be used to enforce the use of
On others, you may be able to set the environment variable `DRI_PRIME=1` when running Zed to force the use of the discrete GPU.
If you're using an AMD GPU and Zed crashes when selecting long lines, try setting the `ZED_SAMPLE_COUNT=0` environment variable. (See [#26143](https://github.com/zed-industries/zed/issues/26143))
If you're using an AMD GPU and Zed crashes when selecting long lines, try setting the `ZED_PATH_SAMPLE_COUNT=0` environment variable. (See [#26143](https://github.com/zed-industries/zed/issues/26143))
If you're using an AMD GPU, you might get a 'Broken Pipe' error. Try using the RADV or Mesa drivers. (See [#13880](https://github.com/zed-industries/zed/issues/13880))