assistant2: Handle LLM providers that do not emit StartMessage events (#23485)
This PR updates Assistant2's response streaming to work with LLM providers that do not emit `StartMessage` events. If we receive a `Text` event without having first received a `StartMessage` event, we now still insert an Assistant message so that the model's response can be streamed in.

Release Notes:

- N/A
This commit is contained in:

parent 6aab82c180
commit 2c2a3ef13d

1 changed file with 7 additions and 0 deletions
```diff
@@ -308,6 +308,13 @@ impl Thread {
                         last_message.id,
                         chunk,
                     ));
+                } else {
+                    // If we don't have an Assistant message yet, assume this chunk marks the beginning
+                    // of a new Assistant response.
+                    //
+                    // Importantly: We do *not* want to emit a `StreamedAssistantText` event here, as it
+                    // will result in duplicating the text of the chunk in the rendered Markdown.
+                    thread.insert_message(Role::Assistant, chunk, cx);
                 }
             }
         }
```