Tool Evals

A framework for evaluating and benchmarking the agent panel's generations.

Overview

Tool Evals provides a headless environment for running assistant evaluations on code repositories. It automates the process of:

  1. Setting up test code and repositories
  2. Sending prompts to language models
  3. Allowing the assistant to use tools to modify code
  4. Collecting metrics on performance and tool usage
  5. Evaluating results against known good solutions

How It Works

The system consists of several key components:

  • Eval: Loads exercises from the zed-ace-framework repository, creates temporary repos, and executes evaluations
  • HeadlessAssistant: Provides a headless environment for running the AI assistant
  • Judge: Evaluates AI-generated solutions against reference implementations and assigns scores
  • Templates: Defines evaluation frameworks for different tasks (Project Creation, Code Modification, Conversational Guidance)

Setup Requirements

Prerequisites

  • Rust and Cargo
  • Git
  • Python (for report generation)
  • Network access to clone repositories
  • Appropriate API keys for language models and git services (Anthropic, GitHub, etc.)

Environment Variables

Ensure you have the required API keys set, either from a dev run of Zed or via these environment variables:

  • ZED_ANTHROPIC_API_KEY for Claude models
  • ZED_GITHUB_API_KEY for GitHub API (or similar)
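
For example, you could export them in a POSIX shell before running the evaluations (the values below are placeholders, not real keys):

# Set the API keys for the current shell session (placeholder values)
export ZED_ANTHROPIC_API_KEY=your-anthropic-key
export ZED_GITHUB_API_KEY=your-github-token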

Usage

Running Evaluations

# Run all evaluations
cargo run -p assistant_eval -- --all

# Run only specific languages
cargo run -p assistant_eval -- --all --languages python,rust

# Limit concurrent evaluations
cargo run -p assistant_eval -- --all --concurrency 5

# Limit number of exercises per language
cargo run -p assistant_eval -- --all --max-exercises-per-language 3
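
These flags should compose, so a quick way to smoke-test your setup is to combine them into a single small run (this only reuses the flags shown above):

# Small smoke-test run: Python only, at most 3 exercises, 5 concurrent evaluations
cargo run -p assistant_eval -- --all --languages python --max-exercises-per-language 3 --concurrency 5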

Evaluation Template Types

The system supports three types of evaluation templates:

  1. ProjectCreation: Tests the model's ability to create new implementations from scratch
  2. CodeModification: Tests the model's ability to modify existing code to meet new requirements
  3. ConversationalGuidance: Tests the model's ability to provide guidance without writing code

Support Repo

The zed-industries/zed-ace-framework repository contains the analytics and reporting scripts.
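
If you want to run those scripts locally, cloning the repository is presumably the first step; see its own README for the actual report-generation entry points:

# Fetch the analytics and reporting scripts
git clone https://github.com/zed-industries/zed-ace-framework.git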