[This PR has been sitting around for a
bit](https://github.com/zed-industries/zed/pull/2845). I received mixed
opinions from the team on how this setting should work, whether it
should use the full model names or some simpler form of them, etc. I
went ahead and decided to do the following (sketched briefly after the
list):
- Use the full model names in settings - ex: `gpt-4-0613`
- Default to `gpt-4-0613` when no setting is present
- Save the full model names in the conversation history files (as
before) - ex: `gpt-4-0613`
- Display the shortened model names in the assistant - ex: `gpt-4`
- Not worry about adding an option for custom models (can add in a
follow-up PR)
- Not query which models are available to the user via their API key
(can add in a follow-up PR)
Release Notes:
- Added a `default_open_ai_model` setting for the assistant (defaults to
`gpt-4-0613`).
---------
Co-authored-by: Mikayla <mikayla@zed.dev>
Still need to implement loading / listing.
I'd really rather be writing operations to a database. Maybe we
should be auto-saving? Integrating with panes? I just did
the simple thing for now.
Each message is represented as a multibuffer excerpt to allow for
fluid editing of the conversation transcript.
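As a rough illustration of the idea, using plain hypothetical types
rather than Zed's actual multibuffer API: each message just owns a
tracked range into one shared, editable transcript.

```rust
use std::ops::Range;

/// Hypothetical stand-ins for the multibuffer machinery: one shared,
/// editable transcript, with each message tracked as a range ("excerpt")
/// into it rather than stored as a separate buffer.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct MessageId(usize);

#[derive(Debug)]
struct Message {
    id: MessageId,
    /// Byte range of this message's text within the transcript.
    range: Range<usize>,
}

#[derive(Debug, Default)]
struct Conversation {
    transcript: String,
    messages: Vec<Message>,
}

impl Conversation {
    /// Append a message's text to the transcript and remember its range.
    fn push_message(&mut self, text: &str) -> MessageId {
        let id = MessageId(self.messages.len());
        let start = self.transcript.len();
        self.transcript.push_str(text);
        self.transcript.push('\n');
        self.messages.push(Message {
            id,
            range: start..self.transcript.len(),
        });
        id
    }

    /// Look up a message's current text by id.
    fn message_text(&self, id: MessageId) -> Option<&str> {
        self.messages
            .iter()
            .find(|message| message.id == id)
            .map(|message| &self.transcript[message.range.clone()])
    }
}
```

In the real implementation the ranges would presumably be anchors that
track edits; the point is just that the whole conversation lives in one
buffer the user can edit fluidly.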
Co-Authored-By: Antonio Scandurra <antonio@zed.dev>
Drop the dependency on tokio introduced by async-openai and talk to the
OpenAI API ourselves.
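A minimal, executor-agnostic sketch of the kind of streaming this
implies, built only on `futures` and `serde_json` so no tokio runtime is
needed; the chunk types are trimmed down, the function name is
hypothetical, and the HTTP request itself is omitted:

```rust
use futures::io::{AsyncBufRead, AsyncBufReadExt};
use futures::stream::{Stream, StreamExt};
use serde::Deserialize;

/// Shape of one streamed chat-completion chunk (fields trimmed down).
#[derive(Debug, Deserialize)]
struct ResponseChunk {
    choices: Vec<ChoiceDelta>,
}

#[derive(Debug, Deserialize)]
struct ChoiceDelta {
    delta: Delta,
}

#[derive(Debug, Deserialize)]
struct Delta {
    #[serde(default)]
    content: Option<String>,
}

/// Parse an SSE response body (any `AsyncBufRead`, no tokio required) into
/// a stream of chunks, skipping blank lines and the `[DONE]` sentinel.
fn response_chunks(
    body: impl AsyncBufRead + Unpin,
) -> impl Stream<Item = serde_json::Result<ResponseChunk>> {
    body.lines().filter_map(|line| async move {
        let line = line.ok()?;
        let data = line.strip_prefix("data: ")?.trim();
        if data == "[DONE]" {
            None
        } else {
            Some(serde_json::from_str::<ResponseChunk>(data))
        }
    })
}
```

Because the stream only needs an `AsyncBufRead` body, it can be driven
by whatever executor the app already has instead of pulling in tokio
just for the HTTP client.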
The approach I'm taking of replacing instead of appending is causing issues. Need to just append.