ZIm/crates/language_model/src
Agus Zubiaga 0286b8ab3e
agent: Fix conversation token usage and estimate unsent message (#28878)
The UI was mistakenly using the cumulative token usage for the token
counter. It now displays the token count from the last request, plus an
estimate of the tokens in the message editor and context entries that
haven't been sent yet.
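
For illustration, here is a minimal sketch of that combination. The names (`TokenUsage`, `ContextEntry`, `estimate_tokens`) and the ~4-characters-per-token heuristic are hypothetical stand-ins for this example, not the crate's actual API:

```rust
// Illustrative sketch only; types and the 4-chars-per-token heuristic are assumptions.

/// Token usage reported by the provider for the last completed request.
#[derive(Clone, Copy, Default)]
struct TokenUsage {
    input_tokens: usize,
    output_tokens: usize,
}

/// An attached context entry (file, selection, etc.) that has not been sent yet.
struct ContextEntry {
    text: String,
}

/// Very rough estimate for text the provider's tokenizer hasn't seen yet
/// (assumes roughly 4 characters per token).
fn estimate_tokens(text: &str) -> usize {
    text.chars().count().div_ceil(4)
}

/// Value shown in the token counter: the last request's reported usage plus
/// an estimate for the unsent message draft and context entries.
fn displayed_token_count(
    last_request_usage: TokenUsage,
    draft_message: &str,
    unsent_context: &[ContextEntry],
) -> usize {
    let sent = last_request_usage.input_tokens + last_request_usage.output_tokens;
    let unsent: usize = estimate_tokens(draft_message)
        + unsent_context
            .iter()
            .map(|entry| estimate_tokens(&entry.text))
            .sum::<usize>();
    sent + unsent
}

fn main() {
    let usage = TokenUsage { input_tokens: 1_200, output_tokens: 300 };
    let context = vec![ContextEntry { text: "fn main() {}".into() }];
    println!("{}", displayed_token_count(usage, "Refactor this function", &context));
}
```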


https://github.com/user-attachments/assets/0438c501-b850-4397-9135-57214ca3c07a

Additionally, when the user edits a message, we display the actual
token count up to that message and estimate the tokens in the edited text.
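
A similar sketch of the edit case, again with hypothetical types and a stand-in estimator rather than the real data model:

```rust
// Sketch of the edit case; per-message reported counts and the ~4 chars/token
// heuristic are assumptions for illustration, not the actual implementation.

/// Rough estimate for text the provider has not tokenized yet.
fn estimate_tokens(text: &str) -> usize {
    text.chars().count().div_ceil(4)
}

/// When the user edits message `edited_index`, show the provider-reported
/// counts for everything before it, plus an estimate for the new text.
fn token_count_while_editing(
    reported_counts: &[usize], // one provider-reported count per already-sent message
    edited_index: usize,
    edited_text: &str,
) -> usize {
    let counted: usize = reported_counts[..edited_index].iter().sum();
    counted + estimate_tokens(edited_text)
}

fn main() {
    let reported = [900, 450, 300];
    println!("{}", token_count_while_editing(&reported, 2, "A shorter follow-up"));
}
```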

Note: We don't currently estimate the delta when switching profiles. In
the future, we want to use the count tokens API to measure every part of
the request and display a breakdown.
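
As a rough illustration of what such a breakdown could look like, the sketch below measures each request part with a stand-in tokenizer; the `RequestPart` type and the `count_tokens` closure are assumptions, not the provider API:

```rust
// Hypothetical breakdown sketch: measure each request part and report it per label.

struct RequestPart {
    label: &'static str,
    text: String,
}

/// Count tokens for every part of the request using the supplied tokenizer.
fn token_breakdown(
    parts: &[RequestPart],
    count_tokens: impl Fn(&str) -> usize,
) -> Vec<(&'static str, usize)> {
    parts
        .iter()
        .map(|part| (part.label, count_tokens(&part.text)))
        .collect()
}

fn main() {
    let parts = vec![
        RequestPart { label: "system prompt", text: "You are an agent...".into() },
        RequestPart { label: "tools (profile)", text: "{\"tools\": []}".into() },
        RequestPart { label: "message", text: "Rename this function".into() },
    ];
    // Stand-in tokenizer: roughly 4 characters per token.
    for (label, tokens) in token_breakdown(&parts, |s| s.chars().count().div_ceil(4)) {
        println!("{label}: ~{tokens} tokens");
    }
}
```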

Release Notes:

- agent: Made the token count more accurate and brought back the estimate of
used tokens as you type and add context.

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-04-16 16:27:36 -03:00
| File | Last commit | Date |
| --- | --- | --- |
| model | proto: Add ZedProTrial to Plan (#28885) | 2025-04-16 18:13:00 +00:00 |
| fake_provider.rs | language_model: Remove use_any_tool method from LanguageModel (#27930) | 2025-04-02 15:49:21 +00:00 |
| language_model.rs | agent: Fix conversation token usage and estimate unsent message (#28878) | 2025-04-16 16:27:36 -03:00 |
| rate_limiter.rs | chore: Prepare for Rust edition bump to 2024 (without autofix) (#27791) | 2025-03-31 20:10:36 +02:00 |
| registry.rs | ai: Separate model settings for each feature (#28088) | 2025-04-04 11:40:55 -03:00 |
| request.rs | chore: Bump Rust edition to 2024 (#27800) | 2025-03-31 20:55:27 +02:00 |
| role.rs | language_model: Remove dependencies on individual model provider crates (#25503) | 2025-02-24 16:41:35 -05:00 |
| telemetry.rs | telemetry_events: Rename AssistantEvent to AssistantEventData (#28133) | 2025-04-04 19:28:32 -04:00 |