ZIm/crates/language_models/src/provider
Latest commit af461f8165 by Roshan Padaki:
assistant: Use GPT 4 tokenizer for o3-mini (#24068)
Sorry to dump an unsolicited PR for a hot feature! I'm sure someone else
was taking a look at this.

I noticed that token counting was disabled and I was getting error logs
of the form `[2025-01-31T22:59:01-05:00 ERROR assistant_context_editor]
No tokenizer found for model o3-mini` when using the new model. To fix
the issue, this PR registers the `gpt-4` tokenizer for this model.

Release Notes:

- openai: Fixed Assistant token counts for `o3-mini` models
2025-02-01 12:08:44 -05:00
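The change described above is small: when counting tokens for a model id that the tokenizer library does not recognize (here `o3-mini`), fall back to the `gpt-4` tokenizer instead of failing. Below is a minimal sketch of that fallback using the `tiktoken-rs` and `anyhow` crates; the `count_tokens` helper and the exact set of o-series ids are illustrative assumptions, not the actual Zed code, whose real change lives in `open_ai.rs` (see the file listing below).

```rust
use tiktoken_rs::get_bpe_from_model;

// Illustrative helper (not the actual Zed implementation): model ids without
// a dedicated tokenizer entry, such as `o3-mini`, are counted with the
// `gpt-4` tokenizer instead of erroring with
// "No tokenizer found for model o3-mini".
fn count_tokens(model_id: &str, text: &str) -> anyhow::Result<usize> {
    let tokenizer_id = match model_id {
        // Assumed list of o-series ids that need the fallback.
        "o1" | "o1-mini" | "o1-preview" | "o3-mini" => "gpt-4",
        other => other,
    };
    let bpe = get_bpe_from_model(tokenizer_id)?;
    Ok(bpe.encode_with_special_tokens(text).len())
}

fn main() -> anyhow::Result<()> {
    let n = count_tokens("o3-mini", "Count my tokens, please.")?;
    println!("o3-mini prompt is {n} tokens (via the gpt-4 tokenizer)");
    Ok(())
}
```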
| File | Last commit | Date |
| --- | --- | --- |
| anthropic.rs | gpui: Add line_clamp to truncate text after a specified number of lines (#23058) | 2025-01-29 22:14:24 +02:00 |
| cloud.rs | Remove more references to 'model' in GPUI APIs (#23693) | 2025-01-27 04:00:27 +00:00 |
| copilot_chat.rs | Fix missed renames in #22632 (#23688) | 2025-01-26 23:37:34 +00:00 |
| deepseek.rs | gpui: Add line_clamp to truncate text after a specified number of lines (#23058) | 2025-01-29 22:14:24 +02:00 |
| google.rs | gpui: Add line_clamp to truncate text after a specified number of lines (#23058) | 2025-01-29 22:14:24 +02:00 |
| lmstudio.rs | assistant2: Tweak the settings UI (#23845) | 2025-01-29 16:20:09 -03:00 |
| ollama.rs | assistant2: Tweak the settings UI (#23845) | 2025-01-29 16:20:09 -03:00 |
| open_ai.rs | assistant: Use GPT 4 tokenizer for o3-mini (#24068) | 2025-02-01 12:08:44 -05:00 |