assistant2: Handle LLM providers that do not emit StartMessage events (#23485)

This PR updates Assistant2's response streaming to work with LLM
providers that do not emit `StartMessage` events.

Now, if we receive a `Text` event without having received a `StartMessage`
event, we still insert an Assistant message so we can stream in the
model's response.
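The fallback behavior can be sketched roughly as follows. This is a minimal, self-contained illustration with simplified stand-in types (`Role`, `Message`, and `Thread` here are reduced placeholders, not the actual Assistant2 structs), showing the idea: append a `Text` chunk to the last Assistant message if one exists, otherwise treat the chunk as the start of a new Assistant response.

```rust
// Simplified stand-ins for illustration; not the real Assistant2 types.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Role {
    User,
    Assistant,
}

struct Message {
    role: Role,
    text: String,
}

struct Thread {
    messages: Vec<Message>,
}

impl Thread {
    fn insert_message(&mut self, role: Role, chunk: &str) {
        self.messages.push(Message {
            role,
            text: chunk.to_string(),
        });
    }

    /// Handle a streamed `Text` chunk. If the last message is already an
    /// Assistant message, append the chunk to it; otherwise assume the
    /// chunk marks the beginning of a new Assistant response and insert
    /// a fresh Assistant message.
    fn handle_text_chunk(&mut self, chunk: &str) {
        match self.messages.last_mut() {
            Some(last) if last.role == Role::Assistant => last.text.push_str(chunk),
            _ => self.insert_message(Role::Assistant, chunk),
        }
    }
}

fn main() {
    let mut thread = Thread {
        messages: vec![Message {
            role: Role::User,
            text: "Hi".into(),
        }],
    };
    // Provider emits `Text` chunks without a preceding `StartMessage`:
    thread.handle_text_chunk("Hello");
    thread.handle_text_chunk(", world");
    assert_eq!(thread.messages.len(), 2);
    assert_eq!(thread.messages[1].text, "Hello, world");
}
```

The first chunk creates the Assistant message; subsequent chunks stream into it, so no `StartMessage` event is required.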

Release Notes:

- N/A
Marshall Bowers 2025-01-22 15:15:16 -05:00 committed by GitHub
parent 6aab82c180
commit 2c2a3ef13d


@@ -308,6 +308,13 @@ impl Thread {
last_message.id,
chunk,
));
} else {
// If we don't have an Assistant message yet, assume this chunk marks the beginning
// of a new Assistant response.
//
// Importantly: We do *not* want to emit a `StreamedAssistantText` event here, as it
// will result in duplicating the text of the chunk in the rendered Markdown.
thread.insert_message(Role::Assistant, chunk, cx);
}
}
}