fix(chat): empty assistant bubble for plain LLM responses #290
Merged
`@cacheplane/partial-markdown@0.3` does not flush trailing text on finish() unless the buffer ends with a newline. Plain LLM responses typically omit the trailing newline, so the parser produced a document with zero children: the assistant bubble shell rendered but the message text was missing entirely.

Reproduce on main: send 'Say hello in one sentence.' to gpt-5-mini. Backend logs show success (1.8s); agent.messages() carries 'Hello — nice to meet you!'; the DOM shows the bubble structure (checkpoint marker, action buttons) but no text.

Fix: push a sentinel newline before calling finish() in both branches of the root() computed (content-changed-then-flush and streaming-flipped-without-new-content). The newline only ever exists inside the parser buffer; the original content() signal is unchanged. Two new regression tests cover plain text without a trailing newline in both branches. Found while walking the smoke checklist post-#288.
blove added a commit that referenced this pull request on May 13, 2026
* docs: add Phase 1 CI testing coverage spec
  Defines the input-variance table-test approach for four streaming-render units (chat-streaming-md, content-classifier, partial-args-bridge, a2ui parser). Motivated by PR #290: the empty-assistant-bubble bug shipped because every chat-streaming-md test used input ending in '\n', and the 'no trailing newline' LLM-response shape was uncovered. Phase 0 (test-infrastructure audit) and Phase 3 (AIMock E2E + CI wiring) remain deferred.
* docs: add Phase 1 CI testing coverage implementation plan
  Bite-sized TDD-style tasks for the four target units. All test files new or append-only; no production code changes.
* test(chat): add chat-streaming-md input-variance table
* test(chat): add content-classifier input-variance table
* test(chat): add partial-args-bridge input-variance table
* test(a2ui): add message-parser input-variance table
* docs: amend Phase 1 spec/plan with row corrections learned in flight
  - Drop `selectorAbsent: 'p'` from the chat-streaming-md 'whitespace only' row (markdown-it emits a placeholder <p> for whitespace input; the trimmed-text invariant still holds and is the only assertion that matters).
  - Drop the char-by-char progressive-prefix row from partial-args-bridge variance. Partial-json materializes partially-parsed strings as their incomplete text, so the bridge's mount-once gate fires with a partial id ('r') and never re-targets when the full id ('root') resolves. LLM streams are token-chunked, not char-chunked, so this edge case has never bitten production. A spec footnote logs it as a latent concern for a future phase.
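The input-variance idea in these commits can be sketched as a table test. The `Row` shape and `renderBlocks` below are hypothetical stand-ins for illustration, not the project's actual harness or render path:

```typescript
// Hypothetical input-variance table: each row pairs an input shape with an
// invariant on the rendered block count. renderBlocks is a toy renderer that
// applies the sentinel-newline flush; it is NOT the real chat-streaming-md unit.
interface Row {
  name: string;
  input: string;
  expectedBlocks: number;
}

function renderBlocks(input: string): string[] {
  return (input + "\n") // sentinel newline so trailing text flushes
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
}

const rows: Row[] = [
  { name: "plain text, no trailing newline", input: "Hello world.", expectedBlocks: 1 },
  { name: "plain text, trailing newline", input: "Hello world.\n", expectedBlocks: 1 },
  { name: "heading then body", input: "# Hello\n\nworld", expectedBlocks: 2 },
  { name: "whitespace only", input: "   ", expectedBlocks: 0 },
];

for (const row of rows) {
  const got = renderBlocks(row.input).length;
  if (got !== row.expectedBlocks) {
    throw new Error(`${row.name}: expected ${row.expectedBlocks}, got ${got}`);
  }
}
console.log(`${rows.length} variance rows passed`);
```

The point of the table is that adding a row (e.g. the no-trailing-newline shape that caused #290) is a one-line change, so coverage gaps surface as missing rows rather than missing test files.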
Summary
Regression discovered during smoke-checklist walk. Sending any prompt with a plain-text response (no markdown structure) produces an empty assistant bubble. The bubble shell renders (checkpoint marker, action buttons) but the message text is missing entirely.
Reproduction
On main, send 'Say hello in one sentence.' to gpt-5-mini. Backend logs show success (1.8s); agent.messages() carries 'Hello — nice to meet you!'; the DOM shows the bubble structure (checkpoint marker, action buttons) but no text.
Root cause
`@cacheplane/partial-markdown@0.3.0` does NOT flush trailing text on `finish()` unless the buffer ends with `\n`. Verified with a direct test:
```
"Hello world." → 0 children ❌
"Hello world.\n" → 1 paragraph ✅
"Hello world.\n\n" → 1 paragraph ✅
"# Hello\n\nworld" → 1 heading ✅
```
OpenAI responses for short turns typically omit the trailing newline, so `finish()` returns without committing the in-progress paragraph and the document is empty.
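The flush rule can be modeled with a toy parser. This is a stand-in for illustration only, not the real `@cacheplane/partial-markdown` internals:

```typescript
// Toy model of the 0.3.0 flush rule: text is committed as a paragraph only
// when a newline terminates it, and finish() does not flush the trailing
// partial line. This reproduces the bug shape, not the real library.
class ToyParser {
  private buffer = "";
  readonly children: string[] = [];

  feed(chunk: string): void {
    this.buffer += chunk;
    let nl: number;
    while ((nl = this.buffer.indexOf("\n")) !== -1) {
      const line = this.buffer.slice(0, nl).trim();
      if (line.length > 0) this.children.push(line); // commit completed paragraph
      this.buffer = this.buffer.slice(nl + 1);
    }
  }

  // Bug being modeled: whatever remains in the buffer is silently dropped.
  finish(): string[] {
    return this.children;
  }
}

const p1 = new ToyParser();
p1.feed("Hello world.");
console.log(p1.finish().length); // 0 — no trailing newline, nothing committed

const p2 = new ToyParser();
p2.feed("Hello world.\n");
console.log(p2.finish().length); // 1 — the newline triggers the commit
```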
Fix
In `streaming-markdown.component.ts`, push a sentinel `\n` before `finish()` in both branches of the `root()` computed (content-changed-then-flush and streaming-flipped-without-new-content).
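A minimal sketch of the pattern, with a hypothetical parser stub standing in for the real library (names here are illustrative, not the actual component code):

```typescript
// Sentinel-newline fix sketch: the extra "\n" is fed only into the parser's
// internal buffer so finish() commits the trailing paragraph; the source
// content string is never mutated.
type PartialParser = { feed(chunk: string): void; finish(): string[] };

function flushWithSentinel(parser: PartialParser, content: string): string[] {
  parser.feed(content);
  parser.feed("\n"); // sentinel: exists only inside the parser buffer
  return parser.finish();
}

// Minimal stub with the "only flush on newline" behavior for demonstration.
function makeStubParser(): PartialParser {
  let buffer = "";
  const children: string[] = [];
  return {
    feed(chunk: string): void {
      buffer += chunk;
      let nl: number;
      while ((nl = buffer.indexOf("\n")) !== -1) {
        const line = buffer.slice(0, nl).trim();
        if (line.length > 0) children.push(line);
        buffer = buffer.slice(nl + 1);
      }
    },
    finish: () => children,
  };
}

const doc = flushWithSentinel(makeStubParser(), "Hello world.");
console.log(doc.length); // 1 — the paragraph survives without a trailing newline
```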
The newline only exists inside the parser buffer; the `content()` signal stays unchanged.
Tests
Two new regression tests cover plain-text content without a trailing newline, one for each `root()` branch (content-changed-then-flush and streaming-flipped-without-new-content).
Why this regression slipped through
Every existing chat-streaming-md test used input ending in `\n`, so the no-trailing-newline response shape was never exercised.
Test plan