assistant: Use GPT 4 tokenizer for o3-mini (#24068)

Sorry to dump an unsolicited PR for a hot feature! I'm sure someone else was taking a look at this. I noticed that token counting was disabled and I was getting error logs of the form `[2025-01-31T22:59:01-05:00 ERROR assistant_context_editor] No tokenizer found for model o3-mini` when using the new model. To fix the issue, this PR registers the `gpt-4` tokenizer for this model.

Release Notes:

- openai: Fixed Assistant token counts for `o3-mini` models
parent f6824e3eaa
commit af461f8165

1 changed file with 4 additions and 1 deletion
@@ -361,7 +361,10 @@ pub fn count_open_ai_tokens(
         .collect::<Vec<_>>();

     match model {
-        open_ai::Model::Custom { .. } | open_ai::Model::O1Mini | open_ai::Model::O1 => {
+        open_ai::Model::Custom { .. }
+        | open_ai::Model::O1Mini
+        | open_ai::Model::O1
+        | open_ai::Model::O3Mini => {
             tiktoken_rs::num_tokens_from_messages("gpt-4", &messages)
         }
         _ => tiktoken_rs::num_tokens_from_messages(model.id(), &messages),
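
For context, here is a minimal sketch of the fallback idea; it is not part of the commit. It uses tiktoken_rs's encoding-level API (`get_bpe_from_model` plus `encode_with_special_tokens`) rather than the message-aware `num_tokens_from_messages` call that Zed actually makes, and the sample string is hypothetical. At the time of this change, asking tiktoken_rs for `o3-mini` failed (which surfaced as the "No tokenizer found" error in the logs), while `gpt-4` resolved to a usable encoding, so counting against `gpt-4` gives an approximate but workable estimate for budgeting in the assistant panel.

use tiktoken_rs::get_bpe_from_model;

fn main() {
    // At the time of this commit, tiktoken_rs could not resolve "o3-mini" to
    // an encoding; "gpt-4" resolves fine, so the new match arm counts o3-mini
    // prompts with the gpt-4 tokenizer instead.
    let bpe = match get_bpe_from_model("o3-mini") {
        Ok(bpe) => bpe,
        Err(_) => get_bpe_from_model("gpt-4").expect("gpt-4 encoding is built in"),
    };

    // Hypothetical prompt text; Zed counts whole request messages instead.
    let token_count = bpe
        .encode_with_special_tokens("Count my tokens, please.")
        .len();
    println!("estimated tokens for this string: {token_count}");
}

The counts are only an approximation, since o3-mini's real tokenizer may differ from gpt-4's, but they are close enough for the assistant's token budgeting.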