fix(agent): consolidate chat graphs into streaming LangGraph deployment #113
Merged
Conversation
normalizeMessages() had two code paths: event['messages'] (returned
unfiltered) and event['data'] (filtered by isMessageLike). In
production, FetchStreamTransport's normalizeSdkEvent wraps the raw SDK
data array—which includes metadata objects like { langgraph_node,
langgraph_triggers }—into event.messages. These metadata objects lack
content/type/id fields, causing messageContent() to return undefined
and crashing the content classifier's detectType() on
undefined.length.
The fix applies the existing isMessageLike filter to the
event['messages'] path. Tests now simulate post-normalization event
shapes matching what FetchStreamTransport produces.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
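The fix described above can be sketched as follows. The event shapes and the exact criteria inside isMessageLike are assumptions inferred from the commit message, not the project's actual code:

```typescript
// Hypothetical sketch: both event paths now run through the same guard.
interface MessageLike {
  id: string;
  type: string;
  content: string;
}

// A value only counts as a message if the fields the classifier reads
// later (content/type/id) are actually present; metadata objects like
// { langgraph_node, langgraph_triggers } fail this check.
function isMessageLike(value: unknown): value is MessageLike {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v["content"] === "string" &&
    typeof v["type"] === "string" &&
    typeof v["id"] === "string"
  );
}

function normalizeMessages(event: Record<string, unknown>): MessageLike[] {
  // Previously event['messages'] was returned unfiltered, so metadata
  // objects leaked through and crashed the content classifier downstream.
  const raw = (event["messages"] ?? event["data"] ?? []) as unknown[];
  return raw.filter(isMessageLike);
}

const out = normalizeMessages({
  messages: [
    { id: "1", type: "ai", content: "hello" },
    { langgraph_node: "agent", langgraph_triggers: ["start"] }, // dropped
  ],
});
```

With this shape, the classifier only ever sees objects that have a string `content`, so `detectType()` can no longer hit `undefined.length`.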
…G0600

classifyMessage() is called during Angular template rendering (in the AI message template via @let). The classifier's update() method writes to signals (typeSignal.set, markdownSignal.set, etc.), which Angular 21's stricter signal-write guards flag as NG0600: writing signals during change detection is forbidden.

Wrapping update() in untracked() opts the call out of the reactive graph for this imperative, push-based API. The template reads the classifier's signals after the update call returns, so reactivity is preserved.

Verified with a multi-turn streaming conversation against the production LangGraph backend: markdown renders correctly, zero console errors.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
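A framework-free sketch of why untracked() sidesteps the guard. The tiny signal implementation below is a stand-in model for illustration only; the real code uses signal() and untracked() from @angular/core, and the guard flag here merely mimics the NG0600 check:

```typescript
let inReactiveContext = false; // stands in for "change detection is running"
let guardDisabled = false;     // untracked() flips this while its callback runs

function createSignal<T>(initial: T) {
  let value = initial;
  return {
    get: () => value,
    set: (next: T) => {
      if (inReactiveContext && !guardDisabled) {
        // models Angular's NG0600 signal-write guard
        throw new Error("NG0600: signal written during change detection");
      }
      value = next;
    },
  };
}

function untracked<T>(fn: () => T): T {
  const prev = guardDisabled;
  guardDisabled = true;
  try {
    return fn();
  } finally {
    guardDisabled = prev;
  }
}

// classifier.update() writes signals; the template calls it during render
const markdownSignal = createSignal("");

function update(chunk: string): void {
  untracked(() => markdownSignal.set(chunk));
}

inReactiveContext = true; // simulate template rendering via @let
update("**hello**");      // no NG0600 thrown; the write still lands
```

A direct `markdownSignal.set(...)` in the same simulated render phase would throw, which is the behavior the wrapper avoids.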
…er and frame-synced pipeline

Architecture changes to fix streaming chat jank at the root:

**Streaming markdown renderer** (new)
- Append-only DOM renderer that never uses innerHTML during streaming
- Processes text deltas incrementally via a line-based state machine
- Handles paragraphs, bold/italic, headers, lists, code blocks, links, blockquotes, and tables, all rendered by appending DOM nodes
- On stream completion, does a single high-quality marked.parse() render
- 38 new tests covering all markdown features plus streaming simulation

**Frame-synced signal pipeline**
- Default throttle changed from 0 (every token) to 16ms (~60fps)
- Batches SSE token updates so at most one signal update fires per frame
- Eliminates change-detection storms during high-throughput streaming

**Typing indicator fix**
- Now shows only before the first AI token arrives (time-to-first-token)
- Previously showed for the entire duration of streaming, overlapping content

**Optimistic human message**
- Human message bubble appears immediately on submit
- Previously waited for the server to echo it back via messages/partial

**Auto-scroll fix**
- Removed isLoading tracking from the scroll effect
- Now triggers on message content changes, not loading-state transitions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
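The frame-synced batching idea can be sketched as a small coalescing buffer; the function and parameter names here are illustrative, not the repository's actual pipeline code:

```typescript
// Coalesce SSE token deltas so at most one downstream update (e.g. a
// signal write) fires per ~16ms window, instead of one per token.
function createFrameSyncedBuffer(
  flush: (text: string) => void,
  intervalMs = 16, // ~60fps
) {
  let pending = "";
  let timer: ReturnType<typeof setTimeout> | null = null;

  return {
    push(delta: string): void {
      pending += delta;
      if (timer === null) {
        timer = setTimeout(() => {
          timer = null;
          const batch = pending;
          pending = "";
          flush(batch); // one update per window, however many tokens arrived
        }, intervalMs);
      }
    },
  };
}
```

With intervalMs = 0 this degrades to "flush on the next tick", which is convenient for testing; at 16ms it caps signal updates at roughly the display refresh rate.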
All chat/* cockpit examples (generative-ui, messages, input, debug, interrupts, theming, threads, timeline, tool-calls, subagents, a2ui) used separate assistant IDs but shared the same /api proxy, which only routed to the streaming LangGraph Cloud deployment. This caused HTTP 422 "Invalid assistant" errors for every non-streaming example.

Fix: register all 12 graph names in the streaming deployment's langgraph.json. A factory module (chat_graphs.py) creates prompt-based graph instances for each chat example. The a2ui graph (custom logic) is copied as a standalone module.

No Angular code changes: the assistant IDs in the production environment files already match the newly registered graph names.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
blove added a commit that referenced this pull request on Apr 12, 2026

All chat/* standalone LangGraph Cloud deployments are unreachable (TCP timeout). The streaming deployment is healthy and already has all 12 graphs consolidated (PR #113).

Changes:
- examples-middleware.ts: route all chat/* paths to the 'streaming' key
- a2ui environment: use 'a2ui_form' (the streaming deployment's name)
- a2ui_graph.py in the streaming deployment: update to hardcoded v0.9 JSONL matching PR #120's fix

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
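The middleware change described above might look like the following sketch; the deployment map, key names other than 'streaming', and path shapes are assumptions, not the actual examples-middleware.ts:

```typescript
// Hypothetical deployment registry; only the 'streaming' key is named in
// the commit message, the others are placeholders for illustration.
const deployments: Record<string, string> = {
  streaming: "https://streaming.example.com",
  langgraph: "https://langgraph.example.com",
};

function resolveDeploymentKey(path: string): string {
  // Previously each chat/* example mapped to its own (now unreachable)
  // standalone deployment; after this commit they all share 'streaming'.
  if (path.startsWith("/chat/")) return "streaming";
  if (path.startsWith("/langgraph/")) return "langgraph";
  return "streaming";
}

const a2uiBase = deployments[resolveDeploymentKey("/chat/a2ui")];
```

The point of the change is that routing becomes a pure function of the path prefix, so no per-example deployment needs to stay reachable.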
Summary
All chat/* cockpit examples (generative-ui, messages, input, debug, interrupts, theming, threads, timeline, tool-calls, subagents, a2ui) use separate assistant IDs but share the same /api proxy, which only routes to the streaming LangGraph Cloud deployment. This causes HTTP 422 "Invalid assistant" errors for every non-streaming chat example. deploy-langgraph.yml only deploys langgraph/* and deep-agents/* graphs; chat/* graphs were never deployed, and all Angular apps share one API proxy endpoint.

Fix: register all 12 graph names in the streaming deployment's langgraph.json. A factory module (chat_graphs.py) creates prompt-based graph instances for each chat example. The a2ui graph (custom logic with hardcoded JSONL) is copied as a standalone module.

Files
- langgraph.json: expanded from 1 graph to 12
- src/chat_graphs.py: factory that builds prompt-based graphs (new)
- src/a2ui_graph.py: a2ui contact form graph with custom logic (new)
- prompts/*.md: 10 chat prompt files copied from chat/* examples (new)

Test plan
- langgraph.json: valid JSON with 12 registered graphs
- nx test agent: 35/35 pass
- nx test chat: 214/214 pass

🤖 Generated with Claude Code
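The consolidated registration might look like the following langgraph.json fragment. This is a minimal sketch of the file's general shape: aside from a2ui_form and the chat_graphs.py factory module named in this PR, the graph names, variable names, and module paths are placeholders, not the repository's actual 12 entries.

```json
{
  "dependencies": ["."],
  "graphs": {
    "streaming": "./src/streaming_graph.py:graph",
    "a2ui_form": "./src/a2ui_graph.py:graph",
    "chat_messages": "./src/chat_graphs.py:messages_graph",
    "chat_tool_calls": "./src/chat_graphs.py:tool_calls_graph"
  },
  "env": ".env"
}
```

Each key under "graphs" is an assistant ID the /api proxy can target, mapped to a "module_path:variable" pair, which is why registering all names in one deployment removes the 422 "Invalid assistant" errors.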