Debugger implementation (#13433)

### DISCLAIMER

> As of March 6th, 2025, the debugger is still in development. We plan to
merge it behind a feature flag for staff use only, followed by a
non-public release and, finally, a public one (akin to how the Git panel
release was handled). This is done to ensure the best experience when it
gets released.

### END OF DISCLAIMER

**The current state of the debugger implementation:**


https://github.com/user-attachments/assets/c4deff07-80dd-4dc6-ad2e-0c252a478fe9


https://github.com/user-attachments/assets/e1ed2345-b750-4bb6-9c97-50961b76904f

----

All the TODOs are tracked in the following channel, so it's easier to work on
this together:
https://zed.dev/channel/zed-debugger-11370

If you are on Linux, you can use the following command to join the
channel:
```sh
zed https://zed.dev/channel/zed-debugger-11370
```

## Current Features

- Collab
  - Breakpoints
    - Sync when you (re)join a project
    - Sync when you add/remove a breakpoint
  - Sync active debug line
  - Stack frames
    - Click on stack frame
      - View variables that belong to the stack frame
      - Visit the source file
    - Restart stack frame (if adapter supports this)
  - Variables
  - Loaded sources
  - Modules
  - Controls
    - Continue
    - Step back
      - Stepping granularity (configurable)
    - Step into
      - Stepping granularity (configurable)
    - Step over
      - Stepping granularity (configurable)
    - Step out
      - Stepping granularity (configurable)
  - Debug console
- Breakpoints
  - Log breakpoints
  - Line breakpoints
  - Persist between Zed sessions (configurable)
  - Multi-buffer support
  - Enable/disable all breakpoints
- Stack frames
  - Click on stack frame
    - View variables that belong to the stack frame
    - Visit the source file
    - Show collapsed stack frames
  - Restart stack frame (if adapter supports this)
- Loaded sources
  - View all loaded sources (if adapter supports this)
- Modules
  - View all used modules (if adapter supports this)
- Variables
  - Copy value
  - Copy name
  - Copy memory reference
  - Set value (if adapter supports this)
  - Keyboard navigation
- Debug Console
  - See logs
  - View output that was sent from debug adapter
    - Output grouping
  - Evaluate code
    - Updates the variable list
    - Auto-completion
      - If not supported by the adapter, we show auto-completion for existing variables
- Debug Terminal
  - Run custom commands and change env values right inside your Zed terminal
- Attach to process (if adapter supports this)
  - Process picker
- Controls
  - Continue
  - Step back
    - Stepping granularity (configurable)
  - Step into
    - Stepping granularity (configurable)
  - Step over
    - Stepping granularity (configurable)
  - Step out
    - Stepping granularity (configurable)
  - Disconnect
  - Restart
  - Stop
- Warning when a debug session exits without hitting any breakpoint
- Debug view to see Adapter/RPC log messages
- Testing
  - Fake debug adapter
    - Fake requests & events

---

Release Notes:

- N/A

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
Co-authored-by: Anthony Eid <hello@anthonyeid.me>
Co-authored-by: Anthony <anthony@zed.dev>
Co-authored-by: Piotr Osiewicz <peterosiewicz@gmail.com>
Co-authored-by: Piotr <piotr@zed.dev>
Commit 41a60ffecf (parent ed4e654fdf) by Remco Smits, 2025-03-18 17:55:25 +01:00, committed by GitHub. GPG key ID: B5690EEEBB952194 (no known key found for this signature in database).
156 changed files with 25,840 additions and 451 deletions.

@ -0,0 +1,350 @@
# Debugger
Zed uses the Debug Adapter Protocol (DAP) to provide debugging functionality across multiple programming languages.
DAP is a standardized protocol that defines how debuggers, editors, and IDEs communicate with each other.
It allows Zed to support various debuggers without needing to implement language-specific debugging logic.
This protocol enables features like setting breakpoints, stepping through code, inspecting variables,
and more, in a consistent manner across different programming languages and runtime environments.
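As a sketch of what travels over the wire, a DAP `setBreakpoints` request looks roughly like this (field shapes follow the DAP specification; the exact payloads Zed sends may differ, and the file path and line numbers here are placeholders):

```json
{
  "seq": 12,
  "type": "request",
  "command": "setBreakpoints",
  "arguments": {
    "source": { "path": "/path/to/main.js" },
    "breakpoints": [
      { "line": 10 },
      { "line": 24, "logMessage": "value is {x}" }
    ]
  }
}
```

The adapter replies with the breakpoints it actually bound, which is how the editor learns whether a breakpoint was verified.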
## Supported Debug Adapters
Zed supports a variety of debug adapters for different programming languages:
- JavaScript (node): Enables debugging of Node.js applications, including setting breakpoints, stepping through code, and inspecting variables in JavaScript.
- Python (debugpy): Provides debugging capabilities for Python applications, supporting features like remote debugging, multi-threaded debugging, and Django/Flask application debugging.
- LLDB: A powerful debugger for C, C++, Objective-C, and Swift, offering low-level debugging features and support for Apple platforms.
- GDB: The GNU Debugger, which supports debugging for multiple programming languages including C, C++, Go, and Rust, across various platforms.
- Go (dlv): Delve, a debugger for the Go programming language, offering both local and remote debugging capabilities with full support for Go's runtime and standard library.
- PHP (xdebug): Provides debugging and profiling capabilities for PHP applications, including remote debugging and code coverage analysis.
- Custom: Allows you to configure any debug adapter that supports the Debug Adapter Protocol, enabling debugging for additional languages or specialized environments not natively supported by Zed.
These adapters enable Zed to provide a consistent debugging experience across multiple languages while leveraging the specific features and capabilities of each debugger.
## How To Get Started
To start a debug session, Zed ships with a few default debug configurations for each supported language that supports generic configuration options. To see all the available debug configurations, use the `debugger: start` action from the command palette; it lists all available debug configurations.
### Configuration
To create a custom debug configuration, add a `.zed/debug.json` file in your project root directory. This file should contain an array of debug configurations, each with a unique label and an adapter; the other options are optional or required depending on the adapter.
```json
[
  {
    // The label for the debug configuration; also used to identify the debug session inside the debug panel
    "label": "Example Start debugger config",
    // The debug adapter that Zed should use to debug the program
    "adapter": "custom",
    // Request: defaults to launch
    // - launch: Zed will launch the program if specified, or show a debug terminal with the right configuration
    // - attach: Zed will attach to a running program to debug it; when process_id is not specified, a process picker is shown (only supported for node currently)
    "request": "launch",
    // cwd: defaults to the current working directory of your project ($ZED_WORKTREE_ROOT)
    // this field also supports task variables, e.g. $ZED_WORKTREE_ROOT
    "cwd": "$ZED_WORKTREE_ROOT",
    // program: the program that you want to debug
    // this field also supports task variables, e.g. $ZED_FILE
    // Note: this field should only contain the path to the program you want to debug
    "program": "path_to_program",
    // initialize_args: all the adapter-specific initialization arguments, sent directly to the debug adapter
    "initialize_args": {
      // "stopOnEntry": true // e.g. to stop on the first line of the program (these args are DAP-specific)
    }
  }
]
```
### Using Attach [WIP]
Only the JavaScript and LLDB adapters support starting a debug session using attach.
When using the attach request with a process ID, the syntax is as follows:
```json
{
  "label": "Attach to Process",
  "adapter": "javascript",
  "request": {
    "attach": {
      "process_id": "12345"
    }
  }
}
```
Without a process ID, the syntax is as follows:
```json
{
  "label": "Attach to Process",
  "adapter": "javascript",
  "request": {
    "attach": {}
  }
}
```
#### JavaScript Configuration
##### Debug Active File
This configuration allows you to debug a JavaScript file in your project.
```json
{
  "label": "JavaScript: Debug Active File",
  "adapter": "javascript",
  "program": "$ZED_FILE",
  "request": "launch",
  "cwd": "$ZED_WORKTREE_ROOT"
}
```
##### Debug Terminal
This configuration spawns a debug terminal where you can start your program by typing `node test.js`; the debug adapter will automatically attach to the process.
```json
{
  "label": "JavaScript: Debug Terminal",
  "adapter": "javascript",
  "request": "launch",
  "cwd": "$ZED_WORKTREE_ROOT",
  // "program": "$ZED_FILE", // optional; if you pass this in, you will see the output inside the terminal itself
  "initialize_args": {
    "console": "integratedTerminal"
  }
}
```
#### PHP Configuration
##### Debug Active File
This configuration allows you to debug a PHP file in your project.
```json
{
  "label": "PHP: Debug Active File",
  "adapter": "php",
  "program": "$ZED_FILE",
  "request": "launch",
  "cwd": "$ZED_WORKTREE_ROOT"
}
```
#### Python Configuration
##### Debug Active File
This configuration allows you to debug a Python file in your project.
```json
{
  "label": "Python: Debug Active File",
  "adapter": "python",
  "program": "$ZED_FILE",
  "request": "launch",
  "cwd": "$ZED_WORKTREE_ROOT"
}
```
#### GDB Configuration
**NOTE:** This configuration works only on Linux systems and Intel Macs.
##### Debug Program
This configuration allows you to debug a program using GDB, e.g. Zed itself.
```json
{
  "label": "GDB: Debug program",
  "adapter": "gdb",
  "program": "$ZED_WORKTREE_ROOT/target/debug/zed",
  "request": "launch",
  "cwd": "$ZED_WORKTREE_ROOT"
}
```
#### LLDB Configuration
##### Debug Program
This configuration allows you to debug a program using LLDB, e.g. Zed itself.
```json
{
  "label": "LLDB: Debug program",
  "adapter": "lldb",
  "program": "$ZED_WORKTREE_ROOT/target/debug/zed",
  "request": "launch",
  "cwd": "$ZED_WORKTREE_ROOT"
}
```
## Breakpoints
Zed currently supports these types of breakpoints:

- Log breakpoints: output a log message instead of stopping when the breakpoint is hit
- Standard breakpoints: stop when the breakpoint is hit

Standard breakpoints can be toggled by left-clicking in the editor gutter or by using the Toggle Breakpoint action. Right-clicking a breakpoint, code action symbol, or code runner symbol brings up the breakpoint context menu, which has options for toggling breakpoints and editing log breakpoints.
Log breakpoints can also be added or edited through the Edit Log Breakpoint action.
## Settings
- `stepping_granularity`: Determines the stepping granularity.
- `save_breakpoints`: Whether the breakpoints should be reused across Zed sessions.
- `button`: Whether to show the debug button in the status bar.
- `timeout`: Time in milliseconds until a timeout error occurs when connecting to a TCP debug adapter.
- `log_dap_communications`: Whether to log messages between active debug adapters and Zed.
- `format_dap_log_messages`: Whether to format DAP messages when adding them to the debug adapter logs.
### Stepping granularity
- Description: The step granularity that the debugger will use
- Default: line
- Setting: debugger.stepping_granularity

**Options**

1. Statement - The step should allow the program to run until the current statement has finished executing.
   The meaning of a statement is determined by the adapter, and it may be considered equivalent to a line.
   For example, `for (int i = 0; i < 10; i++)` could be considered to have 3 statements: `int i = 0`, `i < 10`, and `i++`.

```json
{
  "debugger": {
    "stepping_granularity": "statement"
  }
}
```

2. Line - The step should allow the program to run until the current source line has executed.

```json
{
  "debugger": {
    "stepping_granularity": "line"
  }
}
```

3. Instruction - The step should allow one instruction to execute (e.g. one x86 instruction).

```json
{
  "debugger": {
    "stepping_granularity": "instruction"
  }
}
```
### Save Breakpoints
- Description: Whether the breakpoints should be saved across Zed sessions.
- Default: true
- Setting: debugger.save_breakpoints

**Options**

`boolean` values

```json
{
  "debugger": {
    "save_breakpoints": true
  }
}
```
### Button
- Description: Whether the debug button should be displayed in the debugger toolbar.
- Default: true
- Setting: debugger.show_button

**Options**

`boolean` values

```json
{
  "debugger": {
    "show_button": true
  }
}
```
### Timeout
- Description: Time in milliseconds until a timeout error occurs when connecting to a TCP debug adapter.
- Default: 2000ms
- Setting: debugger.timeout

**Options**

`integer` values

```json
{
  "debugger": {
    "timeout": 3000
  }
}
```
### Log DAP Communications
- Description: Whether to log messages between active debug adapters and Zed. (Used for DAP development.)
- Default: false
- Setting: debugger.log_dap_communications

**Options**

`boolean` values

```json
{
  "debugger": {
    "log_dap_communications": true
  }
}
```
### Format DAP Log Messages
- Description: Whether to format DAP messages when adding them to the debug adapter logs. (Used for DAP development.)
- Default: false
- Setting: debugger.format_dap_log_messages

**Options**

`boolean` values

```json
{
  "debugger": {
    "format_dap_log_messages": true
  }
}
```
## Theme
The debugger supports the following theme options:

**debugger.accent**: Color used to accent breakpoints and breakpoint-related symbols (currently the only use of the accent color)
**editor.debugger_active_line.background**: Background color of the active debug line
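These keys can presumably be overridden from your settings as well; a hedged sketch, assuming Zed's `experimental.theme_overrides` setting accepts them (key names taken from above, color values are placeholders):

```json
{
  "experimental.theme_overrides": {
    "debugger.accent": "#ff0000",
    "editor.debugger_active_line.background": "#ffff0033"
  }
}
```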


@ -0,0 +1,612 @@
//! Module for managing breakpoints in a project.
//!
//! Breakpoints are separate from a session because they're not associated with any particular debug session. They can also be set up without a session running.
use anyhow::{anyhow, Result};
use breakpoints_in_file::BreakpointsInFile;
use collections::BTreeMap;
use dap::client::SessionId;
use gpui::{App, AppContext, AsyncApp, Context, Entity, EventEmitter, Task};
use language::{proto::serialize_anchor as serialize_text_anchor, Buffer, BufferSnapshot};
use rpc::{
proto::{self},
AnyProtoClient, TypedEnvelope,
};
use std::{
hash::{Hash, Hasher},
ops::Range,
path::Path,
sync::Arc,
};
use text::PointUtf16;
use crate::{buffer_store::BufferStore, worktree_store::WorktreeStore, Project, ProjectPath};
mod breakpoints_in_file {
use language::BufferEvent;
use super::*;
#[derive(Clone)]
pub(super) struct BreakpointsInFile {
pub(super) buffer: Entity<Buffer>,
// TODO: This is.. less than ideal, as it's O(n) and does not return entries in order. We'll have to change TreeMap to support passing in the context for comparisons
pub(super) breakpoints: Vec<(text::Anchor, Breakpoint)>,
_subscription: Arc<gpui::Subscription>,
}
impl BreakpointsInFile {
pub(super) fn new(buffer: Entity<Buffer>, cx: &mut Context<BreakpointStore>) -> Self {
let subscription =
Arc::from(cx.subscribe(&buffer, |_, buffer, event, cx| match event {
BufferEvent::Saved => {
if let Some(abs_path) = BreakpointStore::abs_path_from_buffer(&buffer, cx) {
cx.emit(BreakpointStoreEvent::BreakpointsUpdated(
abs_path,
BreakpointUpdatedReason::FileSaved,
));
}
}
_ => {}
}));
BreakpointsInFile {
buffer,
breakpoints: Vec::new(),
_subscription: subscription,
}
}
}
}
#[derive(Clone)]
struct RemoteBreakpointStore {
upstream_client: AnyProtoClient,
_upstream_project_id: u64,
}
#[derive(Clone)]
struct LocalBreakpointStore {
worktree_store: Entity<WorktreeStore>,
buffer_store: Entity<BufferStore>,
}
#[derive(Clone)]
enum BreakpointStoreMode {
Local(LocalBreakpointStore),
Remote(RemoteBreakpointStore),
}
pub struct BreakpointStore {
breakpoints: BTreeMap<Arc<Path>, BreakpointsInFile>,
downstream_client: Option<(AnyProtoClient, u64)>,
active_stack_frame: Option<(SessionId, Arc<Path>, text::Anchor)>,
// E.g. SSH
mode: BreakpointStoreMode,
}
impl BreakpointStore {
pub fn init(client: &AnyProtoClient) {
client.add_entity_request_handler(Self::handle_toggle_breakpoint);
client.add_entity_message_handler(Self::handle_breakpoints_for_file);
}
pub fn local(worktree_store: Entity<WorktreeStore>, buffer_store: Entity<BufferStore>) -> Self {
BreakpointStore {
breakpoints: BTreeMap::new(),
mode: BreakpointStoreMode::Local(LocalBreakpointStore {
worktree_store,
buffer_store,
}),
downstream_client: None,
active_stack_frame: Default::default(),
}
}
pub(crate) fn remote(upstream_project_id: u64, upstream_client: AnyProtoClient) -> Self {
BreakpointStore {
breakpoints: BTreeMap::new(),
mode: BreakpointStoreMode::Remote(RemoteBreakpointStore {
upstream_client,
_upstream_project_id: upstream_project_id,
}),
downstream_client: None,
active_stack_frame: Default::default(),
}
}
pub(crate) fn shared(&mut self, project_id: u64, downstream_client: AnyProtoClient) {
self.downstream_client = Some((downstream_client.clone(), project_id));
}
pub(crate) fn unshared(&mut self, cx: &mut Context<Self>) {
self.downstream_client.take();
cx.notify();
}
async fn handle_breakpoints_for_file(
this: Entity<Project>,
message: TypedEnvelope<proto::BreakpointsForFile>,
mut cx: AsyncApp,
) -> Result<()> {
let breakpoints = cx.update(|cx| this.read(cx).breakpoint_store())?;
if message.payload.breakpoints.is_empty() {
return Ok(());
}
let buffer = this
.update(&mut cx, |this, cx| {
let path =
this.project_path_for_absolute_path(message.payload.path.as_ref(), cx)?;
Some(this.open_buffer(path, cx))
})
.ok()
.flatten()
.ok_or_else(|| anyhow!("Invalid project path"))?
.await?;
breakpoints.update(&mut cx, move |this, cx| {
let bps = this
.breakpoints
.entry(Arc::<Path>::from(message.payload.path.as_ref()))
.or_insert_with(|| BreakpointsInFile::new(buffer, cx));
bps.breakpoints = message
.payload
.breakpoints
.into_iter()
.filter_map(|breakpoint| {
let anchor = language::proto::deserialize_anchor(breakpoint.position.clone()?)?;
let breakpoint = Breakpoint::from_proto(breakpoint)?;
Some((anchor, breakpoint))
})
.collect();
cx.notify();
})?;
Ok(())
}
async fn handle_toggle_breakpoint(
this: Entity<Project>,
message: TypedEnvelope<proto::ToggleBreakpoint>,
mut cx: AsyncApp,
) -> Result<proto::Ack> {
let breakpoints = this.update(&mut cx, |this, _| this.breakpoint_store())?;
let path = this
.update(&mut cx, |this, cx| {
this.project_path_for_absolute_path(message.payload.path.as_ref(), cx)
})?
.ok_or_else(|| anyhow!("Could not resolve provided abs path"))?;
let buffer = this
.update(&mut cx, |this, cx| {
this.buffer_store().read(cx).get_by_path(&path, cx)
})?
.ok_or_else(|| anyhow!("Could not find buffer for a given path"))?;
let breakpoint = message
.payload
.breakpoint
.ok_or_else(|| anyhow!("Breakpoint not present in RPC payload"))?;
let anchor = language::proto::deserialize_anchor(
breakpoint
.position
.clone()
.ok_or_else(|| anyhow!("Anchor not present in RPC payload"))?,
)
.ok_or_else(|| anyhow!("Anchor deserialization failed"))?;
let breakpoint = Breakpoint::from_proto(breakpoint)
.ok_or_else(|| anyhow!("Could not deserialize breakpoint"))?;
breakpoints.update(&mut cx, |this, cx| {
this.toggle_breakpoint(
buffer,
(anchor, breakpoint),
BreakpointEditAction::Toggle,
cx,
);
})?;
Ok(proto::Ack {})
}
pub(crate) fn broadcast(&self) {
if let Some((client, project_id)) = &self.downstream_client {
for (path, breakpoint_set) in &self.breakpoints {
let _ = client.send(proto::BreakpointsForFile {
project_id: *project_id,
path: path.to_str().map(ToOwned::to_owned).unwrap(),
breakpoints: breakpoint_set
.breakpoints
.iter()
.filter_map(|(anchor, bp)| bp.to_proto(&path, anchor))
.collect(),
});
}
}
}
fn abs_path_from_buffer(buffer: &Entity<Buffer>, cx: &App) -> Option<Arc<Path>> {
worktree::File::from_dyn(buffer.read(cx).file())
.and_then(|file| file.worktree.read(cx).absolutize(&file.path).ok())
.map(Arc::<Path>::from)
}
pub fn toggle_breakpoint(
&mut self,
buffer: Entity<Buffer>,
mut breakpoint: (text::Anchor, Breakpoint),
edit_action: BreakpointEditAction,
cx: &mut Context<Self>,
) {
let Some(abs_path) = Self::abs_path_from_buffer(&buffer, cx) else {
return;
};
let breakpoint_set = self
.breakpoints
.entry(abs_path.clone())
.or_insert_with(|| BreakpointsInFile::new(buffer, cx));
match edit_action {
BreakpointEditAction::Toggle => {
let len_before = breakpoint_set.breakpoints.len();
breakpoint_set
.breakpoints
.retain(|value| &breakpoint != value);
if len_before == breakpoint_set.breakpoints.len() {
// We did not remove any breakpoint, hence let's toggle one.
breakpoint_set.breakpoints.push(breakpoint.clone());
}
}
BreakpointEditAction::EditLogMessage(log_message) => {
if !log_message.is_empty() {
breakpoint.1.kind = BreakpointKind::Log(log_message.clone());
let found_bp =
breakpoint_set
.breakpoints
.iter_mut()
.find_map(|(other_pos, other_bp)| {
if breakpoint.0 == *other_pos {
Some(other_bp)
} else {
None
}
});
if let Some(found_bp) = found_bp {
found_bp.kind = BreakpointKind::Log(log_message.clone());
} else {
// No breakpoint exists at this position yet, so add a new log breakpoint.
breakpoint_set.breakpoints.push(breakpoint.clone());
}
} else if matches!(&breakpoint.1.kind, BreakpointKind::Log(_)) {
breakpoint_set
.breakpoints
.retain(|(other_pos, other_kind)| {
&breakpoint.0 != other_pos
&& matches!(other_kind.kind, BreakpointKind::Standard)
});
}
}
}
if breakpoint_set.breakpoints.is_empty() {
self.breakpoints.remove(&abs_path);
}
if let BreakpointStoreMode::Remote(remote) = &self.mode {
if let Some(breakpoint) = breakpoint.1.to_proto(&abs_path, &breakpoint.0) {
cx.background_spawn(remote.upstream_client.request(proto::ToggleBreakpoint {
project_id: remote._upstream_project_id,
path: abs_path.to_str().map(ToOwned::to_owned).unwrap(),
breakpoint: Some(breakpoint),
}))
.detach();
}
} else if let Some((client, project_id)) = &self.downstream_client {
let breakpoints = self
.breakpoints
.get(&abs_path)
.map(|breakpoint_set| {
breakpoint_set
.breakpoints
.iter()
.filter_map(|(anchor, bp)| bp.to_proto(&abs_path, anchor))
.collect()
})
.unwrap_or_default();
let _ = client.send(proto::BreakpointsForFile {
project_id: *project_id,
path: abs_path.to_str().map(ToOwned::to_owned).unwrap(),
breakpoints,
});
}
cx.emit(BreakpointStoreEvent::BreakpointsUpdated(
abs_path,
BreakpointUpdatedReason::Toggled,
));
cx.notify();
}
pub fn on_file_rename(
&mut self,
old_path: Arc<Path>,
new_path: Arc<Path>,
cx: &mut Context<Self>,
) {
if let Some(breakpoints) = self.breakpoints.remove(&old_path) {
self.breakpoints.insert(new_path.clone(), breakpoints);
cx.notify();
}
}
pub fn breakpoints<'a>(
&'a self,
buffer: &'a Entity<Buffer>,
range: Option<Range<text::Anchor>>,
buffer_snapshot: BufferSnapshot,
cx: &App,
) -> impl Iterator<Item = &'a (text::Anchor, Breakpoint)> + 'a {
let abs_path = Self::abs_path_from_buffer(buffer, cx);
abs_path
.and_then(|path| self.breakpoints.get(&path))
.into_iter()
.flat_map(move |file_breakpoints| {
file_breakpoints.breakpoints.iter().filter({
let range = range.clone();
let buffer_snapshot = buffer_snapshot.clone();
move |(position, _)| {
if let Some(range) = &range {
position.cmp(&range.start, &buffer_snapshot).is_ge()
&& position.cmp(&range.end, &buffer_snapshot).is_le()
} else {
true
}
}
})
})
}
pub fn active_position(&self) -> Option<&(SessionId, Arc<Path>, text::Anchor)> {
self.active_stack_frame.as_ref()
}
pub fn remove_active_position(
&mut self,
session_id: Option<SessionId>,
cx: &mut Context<Self>,
) {
if let Some(session_id) = session_id {
self.active_stack_frame
.take_if(|(id, _, _)| *id == session_id);
} else {
self.active_stack_frame.take();
}
cx.emit(BreakpointStoreEvent::ActiveDebugLineChanged);
cx.notify();
}
pub fn set_active_position(
&mut self,
position: (SessionId, Arc<Path>, text::Anchor),
cx: &mut Context<Self>,
) {
self.active_stack_frame = Some(position);
cx.emit(BreakpointStoreEvent::ActiveDebugLineChanged);
cx.notify();
}
pub fn breakpoints_from_path(&self, path: &Arc<Path>, cx: &App) -> Vec<SerializedBreakpoint> {
self.breakpoints
.get(path)
.map(|bp| {
let snapshot = bp.buffer.read(cx).snapshot();
bp.breakpoints
.iter()
.map(|(position, breakpoint)| {
let position = snapshot.summary_for_anchor::<PointUtf16>(position).row;
SerializedBreakpoint {
position,
path: path.clone(),
kind: breakpoint.kind.clone(),
}
})
.collect()
})
.unwrap_or_default()
}
pub fn all_breakpoints(&self, cx: &App) -> BTreeMap<Arc<Path>, Vec<SerializedBreakpoint>> {
self.breakpoints
.iter()
.map(|(path, bp)| {
let snapshot = bp.buffer.read(cx).snapshot();
(
path.clone(),
bp.breakpoints
.iter()
.map(|(position, breakpoint)| {
let position = snapshot.summary_for_anchor::<PointUtf16>(position).row;
SerializedBreakpoint {
position,
path: path.clone(),
kind: breakpoint.kind.clone(),
}
})
.collect(),
)
})
.collect()
}
pub fn with_serialized_breakpoints(
&self,
breakpoints: BTreeMap<Arc<Path>, Vec<SerializedBreakpoint>>,
cx: &mut Context<'_, BreakpointStore>,
) -> Task<Result<()>> {
if let BreakpointStoreMode::Local(mode) = &self.mode {
let mode = mode.clone();
cx.spawn(move |this, mut cx| async move {
let mut new_breakpoints = BTreeMap::default();
for (path, bps) in breakpoints {
if bps.is_empty() {
continue;
}
let (worktree, relative_path) = mode
.worktree_store
.update(&mut cx, |this, cx| {
this.find_or_create_worktree(&path, false, cx)
})?
.await?;
let buffer = mode
.buffer_store
.update(&mut cx, |this, cx| {
let path = ProjectPath {
worktree_id: worktree.read(cx).id(),
path: relative_path.into(),
};
this.open_buffer(path, cx)
})?
.await;
let Ok(buffer) = buffer else {
log::error!("Todo: Serialized breakpoints which do not have buffer (yet)");
continue;
};
let snapshot = buffer.update(&mut cx, |buffer, _| buffer.snapshot())?;
let mut breakpoints_for_file =
this.update(&mut cx, |_, cx| BreakpointsInFile::new(buffer, cx))?;
for bp in bps {
let position = snapshot.anchor_before(PointUtf16::new(bp.position, 0));
breakpoints_for_file
.breakpoints
.push((position, Breakpoint { kind: bp.kind }))
}
new_breakpoints.insert(path, breakpoints_for_file);
}
this.update(&mut cx, |this, cx| {
this.breakpoints = new_breakpoints;
cx.notify();
})?;
Ok(())
})
} else {
Task::ready(Ok(()))
}
}
}
#[derive(Clone, Copy)]
pub enum BreakpointUpdatedReason {
Toggled,
FileSaved,
}
pub enum BreakpointStoreEvent {
ActiveDebugLineChanged,
BreakpointsUpdated(Arc<Path>, BreakpointUpdatedReason),
}
impl EventEmitter<BreakpointStoreEvent> for BreakpointStore {}
type LogMessage = Arc<str>;
#[derive(Clone, Debug)]
pub enum BreakpointEditAction {
Toggle,
EditLogMessage(LogMessage),
}
#[derive(Clone, Debug)]
pub enum BreakpointKind {
Standard,
Log(LogMessage),
}
impl BreakpointKind {
pub fn to_int(&self) -> i32 {
match self {
BreakpointKind::Standard => 0,
BreakpointKind::Log(_) => 1,
}
}
pub fn log_message(&self) -> Option<LogMessage> {
match self {
BreakpointKind::Standard => None,
BreakpointKind::Log(message) => Some(message.clone()),
}
}
}
impl PartialEq for BreakpointKind {
fn eq(&self, other: &Self) -> bool {
std::mem::discriminant(self) == std::mem::discriminant(other)
}
}
impl Eq for BreakpointKind {}
impl Hash for BreakpointKind {
fn hash<H: Hasher>(&self, state: &mut H) {
std::mem::discriminant(self).hash(state);
}
}
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct Breakpoint {
pub kind: BreakpointKind,
}
impl Breakpoint {
fn to_proto(&self, _path: &Path, position: &text::Anchor) -> Option<client::proto::Breakpoint> {
Some(client::proto::Breakpoint {
position: Some(serialize_text_anchor(position)),
kind: match self.kind {
BreakpointKind::Standard => proto::BreakpointKind::Standard.into(),
BreakpointKind::Log(_) => proto::BreakpointKind::Log.into(),
},
message: if let BreakpointKind::Log(message) = &self.kind {
Some(message.to_string())
} else {
None
},
})
}
fn from_proto(breakpoint: client::proto::Breakpoint) -> Option<Self> {
Some(Self {
kind: match proto::BreakpointKind::from_i32(breakpoint.kind) {
Some(proto::BreakpointKind::Log) => {
BreakpointKind::Log(breakpoint.message.clone().unwrap_or_default().into())
}
None | Some(proto::BreakpointKind::Standard) => BreakpointKind::Standard,
},
})
}
}
#[derive(Clone, Debug, Hash, PartialEq, Eq)]
pub struct SerializedBreakpoint {
pub position: u32,
pub path: Arc<Path>,
pub kind: BreakpointKind,
}
impl From<SerializedBreakpoint> for dap::SourceBreakpoint {
fn from(bp: SerializedBreakpoint) -> Self {
Self {
line: bp.position as u64 + 1,
column: None,
condition: None,
hit_condition: None,
log_message: bp.kind.log_message().as_deref().map(Into::into),
mode: None,
}
}
}
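The `From` impl above shifts the store's 0-based row to DAP's 1-based line number and carries the log message across. A standalone sketch of that conversion, using simplified stand-in types rather than the real `dap` crate definitions:

```rust
// Simplified stand-ins for the types above; not the real `dap` crate definitions.
enum BreakpointKind {
    Standard,
    Log(String),
}

struct SerializedBreakpoint {
    position: u32, // 0-based row within the buffer
    kind: BreakpointKind,
}

struct SourceBreakpoint {
    line: u64, // DAP lines are 1-based
    log_message: Option<String>,
}

impl From<SerializedBreakpoint> for SourceBreakpoint {
    fn from(bp: SerializedBreakpoint) -> Self {
        Self {
            // Shift the 0-based editor row to the 1-based DAP line number.
            line: bp.position as u64 + 1,
            // Only log breakpoints carry a message; standard ones send none.
            log_message: match bp.kind {
                BreakpointKind::Log(message) => Some(message),
                BreakpointKind::Standard => None,
            },
        }
    }
}

fn main() {
    let bp = SourceBreakpoint::from(SerializedBreakpoint {
        position: 0,
        kind: BreakpointKind::Log("hit main".into()),
    });
    assert_eq!(bp.line, 1);
    assert_eq!(bp.log_message.as_deref(), Some("hit main"));
}
```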

(File diff suppressed because it is too large.)


@ -0,0 +1,882 @@
use super::{
breakpoint_store::BreakpointStore,
// Will need to uncomment this once we implement rpc message handler again
// dap_command::{
// ContinueCommand, DapCommand, DisconnectCommand, NextCommand, PauseCommand, RestartCommand,
// RestartStackFrameCommand, StepBackCommand, StepCommand, StepInCommand, StepOutCommand,
// TerminateCommand, TerminateThreadsCommand, VariablesCommand,
// },
session::{self, Session},
};
use crate::{debugger, worktree_store::WorktreeStore, ProjectEnvironment};
use anyhow::{anyhow, Result};
use async_trait::async_trait;
use collections::HashMap;
use dap::{
adapters::{DapStatus, DebugAdapterName},
client::SessionId,
messages::Message,
requests::{
Completions, Evaluate, Request as _, RunInTerminal, SetExpression, SetVariable,
StartDebugging,
},
Capabilities, CompletionItem, CompletionsArguments, ErrorResponse, EvaluateArguments,
EvaluateArgumentsContext, EvaluateResponse, RunInTerminalRequestArguments,
SetExpressionArguments, SetVariableArguments, Source, StartDebuggingRequestArguments,
StartDebuggingRequestArgumentsRequest,
};
use fs::Fs;
use futures::{
channel::{mpsc, oneshot},
future::Shared,
};
use gpui::{App, AppContext, AsyncApp, Context, Entity, EventEmitter, SharedString, Task};
use http_client::HttpClient;
use language::{BinaryStatus, LanguageRegistry, LanguageToolchainStore};
use lsp::LanguageServerName;
use node_runtime::NodeRuntime;
use rpc::{
proto::{self},
AnyProtoClient, TypedEnvelope,
};
use serde_json::Value;
use settings::WorktreeId;
use smol::{lock::Mutex, stream::StreamExt};
use std::{
borrow::Borrow,
collections::{BTreeMap, HashSet},
ffi::OsStr,
path::PathBuf,
sync::{atomic::Ordering::SeqCst, Arc},
};
use std::{collections::VecDeque, sync::atomic::AtomicU32};
use task::{AttachConfig, DebugAdapterConfig, DebugRequestType};
use util::ResultExt as _;
use worktree::Worktree;
pub enum DapStoreEvent {
DebugClientStarted(SessionId),
DebugClientShutdown(SessionId),
DebugClientEvent {
session_id: SessionId,
message: Message,
},
RunInTerminal {
session_id: SessionId,
title: Option<String>,
cwd: PathBuf,
command: Option<String>,
args: Vec<String>,
envs: HashMap<String, String>,
sender: mpsc::Sender<Result<u32>>,
},
Notification(String),
RemoteHasInitialized,
}
#[allow(clippy::large_enum_variant)]
pub enum DapStoreMode {
Local(LocalDapStore), // ssh host and collab host
Remote(RemoteDapStore), // collab guest
}
pub struct LocalDapStore {
fs: Arc<dyn Fs>,
node_runtime: NodeRuntime,
next_session_id: AtomicU32,
http_client: Arc<dyn HttpClient>,
worktree_store: Entity<WorktreeStore>,
environment: Entity<ProjectEnvironment>,
language_registry: Arc<LanguageRegistry>,
toolchain_store: Arc<dyn LanguageToolchainStore>,
start_debugging_tx: futures::channel::mpsc::UnboundedSender<(SessionId, Message)>,
_start_debugging_task: Task<()>,
}
impl LocalDapStore {
fn next_session_id(&self) -> SessionId {
SessionId(self.next_session_id.fetch_add(1, SeqCst))
}
}
pub struct RemoteDapStore {
upstream_client: AnyProtoClient,
upstream_project_id: u64,
event_queue: Option<VecDeque<DapStoreEvent>>,
}
pub struct DapStore {
mode: DapStoreMode,
downstream_client: Option<(AnyProtoClient, u64)>,
breakpoint_store: Entity<BreakpointStore>,
sessions: BTreeMap<SessionId, Entity<Session>>,
}
impl EventEmitter<DapStoreEvent> for DapStore {}
impl DapStore {
pub fn init(_client: &AnyProtoClient) {
// todo(debugger): Reenable these after we finish handle_dap_command refactor
// client.add_entity_request_handler(Self::handle_dap_command::<NextCommand>);
// client.add_entity_request_handler(Self::handle_dap_command::<StepInCommand>);
// client.add_entity_request_handler(Self::handle_dap_command::<StepOutCommand>);
// client.add_entity_request_handler(Self::handle_dap_command::<StepBackCommand>);
// client.add_entity_request_handler(Self::handle_dap_command::<ContinueCommand>);
// client.add_entity_request_handler(Self::handle_dap_command::<PauseCommand>);
// client.add_entity_request_handler(Self::handle_dap_command::<DisconnectCommand>);
// client.add_entity_request_handler(Self::handle_dap_command::<TerminateThreadsCommand>);
// client.add_entity_request_handler(Self::handle_dap_command::<TerminateCommand>);
// client.add_entity_request_handler(Self::handle_dap_command::<RestartCommand>);
// client.add_entity_request_handler(Self::handle_dap_command::<VariablesCommand>);
// client.add_entity_request_handler(Self::handle_dap_command::<RestartStackFrameCommand>);
}
#[expect(clippy::too_many_arguments)]
pub fn new_local(
http_client: Arc<dyn HttpClient>,
node_runtime: NodeRuntime,
fs: Arc<dyn Fs>,
language_registry: Arc<LanguageRegistry>,
environment: Entity<ProjectEnvironment>,
toolchain_store: Arc<dyn LanguageToolchainStore>,
breakpoint_store: Entity<BreakpointStore>,
worktree_store: Entity<WorktreeStore>,
cx: &mut Context<Self>,
) -> Self {
cx.on_app_quit(Self::shutdown_sessions).detach();
let (start_debugging_tx, mut message_rx) =
futures::channel::mpsc::unbounded::<(SessionId, Message)>();
let _start_debugging_task = cx.spawn(move |this, mut cx| async move {
while let Some((session_id, message)) = message_rx.next().await {
let Message::Request(request) = message else {
continue;
};
let _ = this
.update(&mut cx, |this, cx| {
if request.command == StartDebugging::COMMAND {
this.handle_start_debugging_request(session_id, request, cx)
.detach_and_log_err(cx);
} else if request.command == RunInTerminal::COMMAND {
this.handle_run_in_terminal_request(session_id, request, cx)
.detach_and_log_err(cx);
}
})
.log_err();
}
});
Self {
mode: DapStoreMode::Local(LocalDapStore {
fs,
environment,
http_client,
node_runtime,
worktree_store,
toolchain_store,
language_registry,
start_debugging_tx,
_start_debugging_task,
next_session_id: Default::default(),
}),
downstream_client: None,
breakpoint_store,
sessions: Default::default(),
}
}
pub fn new_remote(
project_id: u64,
upstream_client: AnyProtoClient,
breakpoint_store: Entity<BreakpointStore>,
) -> Self {
Self {
mode: DapStoreMode::Remote(RemoteDapStore {
upstream_client,
upstream_project_id: project_id,
event_queue: Some(VecDeque::default()),
}),
downstream_client: None,
breakpoint_store,
sessions: Default::default(),
}
}
pub fn as_remote(&self) -> Option<&RemoteDapStore> {
match &self.mode {
DapStoreMode::Remote(remote_dap_store) => Some(remote_dap_store),
_ => None,
}
}
pub fn remote_event_queue(&mut self) -> Option<VecDeque<DapStoreEvent>> {
if let DapStoreMode::Remote(remote) = &mut self.mode {
remote.event_queue.take()
} else {
None
}
}
pub fn as_local(&self) -> Option<&LocalDapStore> {
match &self.mode {
DapStoreMode::Local(local_dap_store) => Some(local_dap_store),
_ => None,
}
}
pub fn as_local_mut(&mut self) -> Option<&mut LocalDapStore> {
match &mut self.mode {
DapStoreMode::Local(local_dap_store) => Some(local_dap_store),
_ => None,
}
}
pub fn upstream_client(&self) -> Option<(AnyProtoClient, u64)> {
match &self.mode {
DapStoreMode::Remote(RemoteDapStore {
upstream_client,
upstream_project_id,
..
}) => Some((upstream_client.clone(), *upstream_project_id)),
DapStoreMode::Local(_) => None,
}
}
pub fn downstream_client(&self) -> Option<&(AnyProtoClient, u64)> {
self.downstream_client.as_ref()
}
pub fn add_remote_client(
&mut self,
session_id: SessionId,
ignore: Option<bool>,
cx: &mut Context<Self>,
) {
if let DapStoreMode::Remote(remote) = &self.mode {
self.sessions.insert(
session_id,
cx.new(|_| {
debugger::session::Session::remote(
session_id,
remote.upstream_client.clone(),
remote.upstream_project_id,
ignore.unwrap_or(false),
)
}),
);
} else {
debug_assert!(false, "add_remote_client called on a non-remote DapStore");
}
}
pub fn session_by_id(
&self,
session_id: impl Borrow<SessionId>,
) -> Option<Entity<session::Session>> {
self.sessions.get(session_id.borrow()).cloned()
}
pub fn sessions(&self) -> impl Iterator<Item = &Entity<Session>> {
self.sessions.values()
}
pub fn capabilities_by_id(
&self,
session_id: impl Borrow<SessionId>,
cx: &App,
) -> Option<Capabilities> {
let session_id = session_id.borrow();
self.sessions
.get(session_id)
.map(|client| client.read(cx).capabilities.clone())
}
pub fn breakpoint_store(&self) -> &Entity<BreakpointStore> {
&self.breakpoint_store
}
#[allow(dead_code)]
async fn handle_ignore_breakpoint_state(
this: Entity<Self>,
envelope: TypedEnvelope<proto::IgnoreBreakpointState>,
mut cx: AsyncApp,
) -> Result<()> {
let session_id = SessionId::from_proto(envelope.payload.session_id);
this.update(&mut cx, |this, cx| {
if let Some(session) = this.session_by_id(&session_id) {
session.update(cx, |session, cx| {
session.set_ignore_breakpoints(envelope.payload.ignore, cx)
})
} else {
Task::ready(())
}
})?
.await;
Ok(())
}
pub fn new_session(
&mut self,
config: DebugAdapterConfig,
worktree: &Entity<Worktree>,
parent_session: Option<Entity<Session>>,
cx: &mut Context<Self>,
) -> (SessionId, Task<Result<Entity<Session>>>) {
let Some(local_store) = self.as_local() else {
unimplemented!("Starting session on remote side");
};
let delegate = DapAdapterDelegate::new(
local_store.fs.clone(),
worktree.read(cx).id(),
local_store.node_runtime.clone(),
local_store.http_client.clone(),
local_store.language_registry.clone(),
local_store.toolchain_store.clone(),
local_store.environment.update(cx, |env, cx| {
let worktree = worktree.read(cx);
env.get_environment(Some(worktree.id()), Some(worktree.abs_path()), cx)
}),
);
let session_id = local_store.next_session_id();
let (initialized_tx, initialized_rx) = oneshot::channel();
let start_client_task = Session::local(
self.breakpoint_store.clone(),
session_id,
parent_session,
delegate,
config,
local_store.start_debugging_tx.clone(),
initialized_tx,
cx,
);
let task = cx.spawn(|this, mut cx| async move {
let session = match start_client_task.await {
Ok(session) => session,
Err(error) => {
this.update(&mut cx, |_, cx| {
cx.emit(DapStoreEvent::Notification(error.to_string()));
})
.log_err();
return Err(error);
}
};
// we have to insert the session early, so we can handle reverse requests
// that need the session to be available
this.update(&mut cx, |store, cx| {
store.sessions.insert(session_id, session.clone());
cx.emit(DapStoreEvent::DebugClientStarted(session_id));
cx.notify();
})?;
match session
.update(&mut cx, |session, cx| {
session.initialize_sequence(initialized_rx, cx)
})?
.await
{
Ok(_) => {}
Err(error) => {
this.update(&mut cx, |this, cx| {
cx.emit(DapStoreEvent::Notification(error.to_string()));
this.shutdown_session(session_id, cx)
})?
.await
.log_err();
return Err(error);
}
}
Ok(session)
});
(session_id, task)
}
fn handle_start_debugging_request(
&mut self,
session_id: SessionId,
request: dap::messages::Request,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let Some(local_store) = self.as_local() else {
unreachable!("Cannot respond for non-local session");
};
let Some(parent_session) = self.session_by_id(session_id) else {
return Task::ready(Err(anyhow!("Session not found")));
};
let args = serde_json::from_value::<StartDebuggingRequestArguments>(
request.arguments.unwrap_or_default(),
)
.expect("failed to parse StartDebuggingRequestArguments");
let worktree = local_store
.worktree_store
.update(cx, |this, _| this.worktrees().next())
.expect("project should have at least one worktree");
let Some(config) = parent_session.read(cx).configuration() else {
unreachable!("there must be a config for local sessions");
};
let (_, new_session_task) = self.new_session(
DebugAdapterConfig {
label: config.label,
kind: config.kind,
request: match &args.request {
StartDebuggingRequestArgumentsRequest::Launch => DebugRequestType::Launch,
StartDebuggingRequestArgumentsRequest::Attach => {
DebugRequestType::Attach(AttachConfig::default())
}
},
program: config.program,
cwd: config.cwd,
initialize_args: Some(args.configuration),
supports_attach: config.supports_attach,
},
&worktree,
Some(parent_session.clone()),
cx,
);
let request_seq = request.seq;
cx.spawn(|_, mut cx| async move {
let (success, body) = match new_session_task.await {
Ok(_) => (true, None),
Err(error) => (
false,
Some(serde_json::to_value(ErrorResponse {
error: Some(dap::Message {
id: request_seq,
format: error.to_string(),
variables: None,
send_telemetry: None,
show_user: None,
url: None,
url_label: None,
}),
})?),
),
};
parent_session
.update(&mut cx, |session, cx| {
session.respond_to_client(
request_seq,
success,
StartDebugging::COMMAND.to_string(),
body,
cx,
)
})?
.await
})
}
fn handle_run_in_terminal_request(
&mut self,
session_id: SessionId,
request: dap::messages::Request,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let Some(session) = self.session_by_id(session_id) else {
return Task::ready(Err(anyhow!("Session not found")));
};
let request_args = serde_json::from_value::<RunInTerminalRequestArguments>(
request.arguments.unwrap_or_default(),
)
.expect("failed to parse RunInTerminalRequestArguments");
let seq = request.seq;
let cwd = PathBuf::from(request_args.cwd);
match cwd.try_exists() {
Ok(true) => (),
Ok(false) | Err(_) => {
return session.update(cx, |session, cx| {
session.respond_to_client(
seq,
false,
RunInTerminal::COMMAND.to_string(),
serde_json::to_value(dap::ErrorResponse {
error: Some(dap::Message {
id: seq,
format: format!("Received invalid/unknown cwd: {cwd:?}"),
variables: None,
send_telemetry: None,
show_user: None,
url: None,
url_label: None,
}),
})
.ok(),
cx,
)
})
}
}
let mut args = request_args.args.clone();
// Handle special case for NodeJS debug adapter
// If only the Node binary path is provided, we set the command to None
// This prevents the NodeJS REPL from appearing, which is not the desired behavior
// The expected usage is for users to provide their own Node command, e.g., `node test.js`
// This allows the NodeJS debug client to attach correctly
let command = if args.len() > 1 {
Some(args.remove(0))
} else {
None
};
let mut envs: HashMap<String, String> = Default::default();
if let Some(Value::Object(env)) = request_args.env {
for (key, value) in env {
let value_str = match (key.as_str(), value) {
(_, Value::String(value)) => value,
_ => continue,
};
envs.insert(key, value_str);
}
}
let (tx, mut rx) = mpsc::channel::<Result<u32>>(1);
cx.emit(DapStoreEvent::RunInTerminal {
session_id,
title: request_args.title,
cwd,
command,
args,
envs,
sender: tx,
});
cx.notify();
let session = session.downgrade();
cx.spawn(|_, mut cx| async move {
let (success, body) = match rx.next().await {
Some(Ok(pid)) => (
true,
serde_json::to_value(dap::RunInTerminalResponse {
process_id: None,
shell_process_id: Some(pid as u64),
})
.ok(),
),
Some(Err(error)) => (
false,
serde_json::to_value(dap::ErrorResponse {
error: Some(dap::Message {
id: seq,
format: error.to_string(),
variables: None,
send_telemetry: None,
show_user: None,
url: None,
url_label: None,
}),
})
.ok(),
),
None => (
false,
serde_json::to_value(dap::ErrorResponse {
error: Some(dap::Message {
id: seq,
format: "failed to receive response from the spawned terminal".to_string(),
variables: None,
send_telemetry: None,
show_user: None,
url: None,
url_label: None,
}),
})
.ok(),
),
};
session
.update(&mut cx, |session, cx| {
session.respond_to_client(
seq,
success,
RunInTerminal::COMMAND.to_string(),
body,
cx,
)
})?
.await
})
}
pub fn evaluate(
&self,
session_id: &SessionId,
stack_frame_id: u64,
expression: String,
context: EvaluateArgumentsContext,
source: Option<Source>,
cx: &mut Context<Self>,
) -> Task<Result<EvaluateResponse>> {
let Some(client) = self
.session_by_id(session_id)
.and_then(|client| client.read(cx).adapter_client())
else {
return Task::ready(Err(anyhow!("Could not find client: {:?}", session_id)));
};
cx.background_executor().spawn(async move {
client
.request::<Evaluate>(EvaluateArguments {
expression: expression.clone(),
frame_id: Some(stack_frame_id),
context: Some(context),
format: None,
line: None,
column: None,
source,
})
.await
})
}
pub fn completions(
&self,
session_id: &SessionId,
stack_frame_id: u64,
text: String,
completion_column: u64,
cx: &mut Context<Self>,
) -> Task<Result<Vec<CompletionItem>>> {
let Some(client) = self
.session_by_id(session_id)
.and_then(|client| client.read(cx).adapter_client())
else {
return Task::ready(Err(anyhow!("Could not find client: {:?}", session_id)));
};
cx.background_executor().spawn(async move {
Ok(client
.request::<Completions>(CompletionsArguments {
frame_id: Some(stack_frame_id),
line: None,
text,
column: completion_column,
})
.await?
.targets)
})
}
#[allow(clippy::too_many_arguments)]
pub fn set_variable_value(
&self,
session_id: &SessionId,
stack_frame_id: u64,
variables_reference: u64,
name: String,
value: String,
evaluate_name: Option<String>,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let Some(client) = self
.session_by_id(session_id)
.and_then(|client| client.read(cx).adapter_client())
else {
return Task::ready(Err(anyhow!("Could not find client: {:?}", session_id)));
};
let supports_set_expression = self
.capabilities_by_id(session_id, cx)
.and_then(|caps| caps.supports_set_expression)
.unwrap_or_default();
cx.background_executor().spawn(async move {
if let Some(evaluate_name) = supports_set_expression.then_some(evaluate_name).flatten() {
client
.request::<SetExpression>(SetExpressionArguments {
expression: evaluate_name,
value,
frame_id: Some(stack_frame_id),
format: None,
})
.await?;
} else {
client
.request::<SetVariable>(SetVariableArguments {
variables_reference,
name,
value,
format: None,
})
.await?;
}
Ok(())
})
}
// .. get the client and what not
// let _ = client.modules(); // This can fire a request to a DAP adapter or be a cheap getter.
// client.wait_for_request(request::Modules); // This ensures the request we've fired off runs to completion.
// let returned_value = client.modules(); // This is a cheap getter.
pub fn shutdown_sessions(&mut self, cx: &mut Context<Self>) -> Task<()> {
let mut tasks = vec![];
for session_id in self.sessions.keys().cloned().collect::<Vec<_>>() {
tasks.push(self.shutdown_session(session_id, cx));
}
cx.background_executor().spawn(async move {
futures::future::join_all(tasks).await;
})
}
pub fn shutdown_session(
&mut self,
session_id: SessionId,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let Some(_) = self.as_local_mut() else {
return Task::ready(Err(anyhow!("Cannot shutdown session on remote side")));
};
let Some(session) = self.sessions.remove(&session_id) else {
return Task::ready(Err(anyhow!("Could not find session: {:?}", session_id)));
};
let shutdown_parent_task = session
.read(cx)
.parent_id()
.map(|parent_id| self.shutdown_session(parent_id, cx));
let shutdown_task = session.update(cx, |this, cx| this.shutdown(cx));
cx.background_spawn(async move {
shutdown_task.await;
if let Some(parent_task) = shutdown_parent_task {
parent_task.await?;
}
Ok(())
})
}
pub fn shared(
&mut self,
project_id: u64,
downstream_client: AnyProtoClient,
_: &mut Context<Self>,
) {
self.downstream_client = Some((downstream_client.clone(), project_id));
}
pub fn unshared(&mut self, cx: &mut Context<Self>) {
self.downstream_client.take();
cx.notify();
}
}
#[derive(Clone)]
pub struct DapAdapterDelegate {
fs: Arc<dyn Fs>,
worktree_id: WorktreeId,
node_runtime: NodeRuntime,
http_client: Arc<dyn HttpClient>,
language_registry: Arc<LanguageRegistry>,
toolchain_store: Arc<dyn LanguageToolchainStore>,
updated_adapters: Arc<Mutex<HashSet<DebugAdapterName>>>,
load_shell_env_task: Shared<Task<Option<HashMap<String, String>>>>,
}
impl DapAdapterDelegate {
pub fn new(
fs: Arc<dyn Fs>,
worktree_id: WorktreeId,
node_runtime: NodeRuntime,
http_client: Arc<dyn HttpClient>,
language_registry: Arc<LanguageRegistry>,
toolchain_store: Arc<dyn LanguageToolchainStore>,
load_shell_env_task: Shared<Task<Option<HashMap<String, String>>>>,
) -> Self {
Self {
fs,
worktree_id,
http_client,
node_runtime,
toolchain_store,
language_registry,
load_shell_env_task,
updated_adapters: Default::default(),
}
}
}
#[async_trait(?Send)]
impl dap::adapters::DapDelegate for DapAdapterDelegate {
fn worktree_id(&self) -> WorktreeId {
self.worktree_id
}
fn http_client(&self) -> Arc<dyn HttpClient> {
self.http_client.clone()
}
fn node_runtime(&self) -> NodeRuntime {
self.node_runtime.clone()
}
fn fs(&self) -> Arc<dyn Fs> {
self.fs.clone()
}
fn updated_adapters(&self) -> Arc<Mutex<HashSet<DebugAdapterName>>> {
self.updated_adapters.clone()
}
fn update_status(&self, dap_name: DebugAdapterName, status: dap::adapters::DapStatus) {
let name = SharedString::from(dap_name.to_string());
let status = match status {
DapStatus::None => BinaryStatus::None,
DapStatus::Downloading => BinaryStatus::Downloading,
DapStatus::Failed { error } => BinaryStatus::Failed { error },
DapStatus::CheckingForUpdate => BinaryStatus::CheckingForUpdate,
};
self.language_registry
.update_dap_status(LanguageServerName(name), status);
}
fn which(&self, command: &OsStr) -> Option<PathBuf> {
which::which(command).ok()
}
async fn shell_env(&self) -> HashMap<String, String> {
let task = self.load_shell_env_task.clone();
task.await.unwrap_or_default()
}
fn toolchain_store(&self) -> Arc<dyn LanguageToolchainStore> {
self.toolchain_store.clone()
}
}
