Standardize on u64 for token counts (#32869)
Previously we were using a mix of `u32` and `usize`, e.g. `max_tokens: usize, max_output_tokens: Option<u32>` in the same `struct`. Although [tiktoken](https://github.com/openai/tiktoken) uses `usize`, token counts should be consistent across targets (the same model shouldn't get a smaller context window just because you're compiling for wasm32), and these counts may end up serialized over a binary protocol, so the platform-dependent `usize` is the wrong choice. I standardized on `u64` rather than `u32` because we don't store many of these values (the extra size is insignificant) and future models may exceed `u32::MAX` tokens.

Release Notes:

- N/A
This commit is contained in:
parent
a391d67366
commit
5405c2c2d3
32 changed files with 191 additions and 192 deletions
```diff
@@ -154,7 +154,7 @@ pub struct RulesLibrary {
 struct RuleEditor {
     title_editor: Entity<Editor>,
     body_editor: Entity<Editor>,
-    token_count: Option<usize>,
+    token_count: Option<u64>,
     pending_token_count: Task<Option<()>>,
     next_title_and_body_to_save: Option<(String, Rope)>,
     pending_save: Option<Task<Option<()>>>,
```