
fix(agent): consolidate chat graphs into streaming LangGraph deployment (#113)

Merged

blove merged 4 commits into main from claude/condescending-sanderson on Apr 11, 2026
Conversation

blove (Contributor) commented Apr 11, 2026

Summary

All chat/* cockpit examples (generative-ui, messages, input, debug, interrupts, theming, threads, timeline, tool-calls, subagents, a2ui) use separate assistant IDs but share the same /api proxy — which only routes to the streaming LangGraph Cloud deployment. This causes HTTP 422 "Invalid assistant" errors for every non-streaming chat example.

  • Root cause: The deployment matrix in deploy-langgraph.yml only deploys langgraph/* and deep-agents/* graphs. The chat/* graphs were never deployed, and all Angular apps share one API proxy endpoint.
  • Fix: Register all 12 graph names in the streaming deployment's langgraph.json. A factory module (chat_graphs.py) creates prompt-based graph instances for each chat example. The a2ui graph (custom logic with hardcoded JSONL) is copied as a standalone module.
  • No Angular changes: The assistant IDs in production environment files already match the newly registered graph names.

Files

  • langgraph.json — expanded from 1 graph to 12
  • src/chat_graphs.py — factory that builds prompt-based graphs (new)
  • src/a2ui_graph.py — a2ui contact form graph with custom logic (new)
  • prompts/*.md — 10 chat prompt files copied from chat/* examples (new)
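
Neither file's contents are shown in this PR, but a langgraph.json that registers multiple graphs plausibly looks like the following sketch. The graph names and entry-point symbols here are assumptions for illustration; only the file paths (src/chat_graphs.py, src/a2ui_graph.py) and the a2ui_form name (from the follow-up commit) appear in this PR.

```json
{
  "dependencies": ["."],
  "graphs": {
    "streaming": "./src/graph.py:graph",
    "generative_ui": "./src/chat_graphs.py:generative_ui",
    "messages": "./src/chat_graphs.py:messages",
    "a2ui_form": "./src/a2ui_graph.py:graph"
  },
  "env": ".env"
}
```

Each key under "graphs" becomes an assistant ID the deployment accepts, which is why no Angular changes were needed once the registered names matched the production environment files.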

Test plan

  • Python syntax validated for all new modules
  • langgraph.json valid JSON with 12 registered graphs
  • nx test agent — 35/35 pass
  • nx test chat — 214/214 pass
  • After merge + deploy: verify generative-ui example at cockpit.cacheplane.ai

🤖 Generated with Claude Code

blove and others added 4 commits April 10, 2026 12:09
normalizeMessages() had two code paths: event['messages'] (returned
unfiltered) and event['data'] (filtered by isMessageLike). In
production, FetchStreamTransport's normalizeSdkEvent wraps the raw SDK
data array—which includes metadata objects like { langgraph_node,
langgraph_triggers }—into event.messages. These metadata objects lack
content/type/id fields, causing messageContent() to return undefined
and crashing the content classifier's detectType() on
undefined.length.

The fix applies the existing isMessageLike filter to the
event['messages'] path. Tests now simulate post-normalization event
shapes matching what FetchStreamTransport produces.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
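The filter fix described above can be sketched in framework-free TypeScript. The shapes and names (StreamEvent, isMessageLike, normalizeMessages) follow the commit message; the exact field checks are assumptions about what "message-like" means here.

```typescript
// Assumed event shape: the transport may wrap raw SDK data into
// event.messages, mixing real messages with metadata objects like
// { langgraph_node, langgraph_triggers }.
type StreamEvent = { messages?: unknown[]; data?: unknown[] };

interface MessageLike {
  id: string;
  type: string;
  content: unknown;
}

// Heuristic guard: metadata objects lack content/type/id fields,
// which is what crashed the content classifier downstream.
function isMessageLike(value: unknown): value is MessageLike {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return "id" in v && "type" in v && "content" in v;
}

// The fix: apply the same filter to BOTH paths, so the previously
// unfiltered event.messages path no longer leaks metadata objects.
function normalizeMessages(event: StreamEvent): MessageLike[] {
  const raw = event.messages ?? event.data ?? [];
  return raw.filter(isMessageLike);
}
```

With this shape, a metadata-only entry is dropped before it ever reaches messageContent() or detectType().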
…G0600

classifyMessage() is called during Angular template rendering (in the
AI message template via @let). The classifier's update() method writes
to signals (typeSignal.set, markdownSignal.set, etc.), which Angular
21's stricter signal write guards flag as NG0600 — writing signals
during change detection is forbidden.

Wrapping update() in untracked() opts out of the reactive graph for
this imperative push-based API. The template reads the classifier's
signals after the update call returns, so reactivity is preserved.

Verified with multi-turn streaming conversation against production
LangGraph backend — markdown renders correctly, zero console errors.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
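The NG0600 pattern can be illustrated with a toy, framework-free signal guard. This is an assumption-laden stand-in, not Angular's implementation: a global "rendering" flag mimics change detection, set() throws during it, and untracked() is the escape hatch, mirroring how the real fix wraps the classifier's imperative update() call.

```typescript
// Toy guard state simulating Angular's "no signal writes during
// change detection" rule (NG0600).
let rendering = false;
let untrackedDepth = 0;

function signal<T>(initial: T) {
  let value = initial;
  return {
    get: () => value,
    set: (next: T) => {
      if (rendering && untrackedDepth === 0) {
        throw new Error("NG0600: signal write during change detection");
      }
      value = next;
    },
  };
}

// Like Angular's untracked(): opts the callback out of the guard for
// an imperative, push-based update.
function untracked<T>(fn: () => T): T {
  untrackedDepth++;
  try {
    return fn();
  } finally {
    untrackedDepth--;
  }
}

// classifyMessage() runs during "rendering" (as in the @let template
// binding) and writes a signal; untracked() makes that legal. The
// signal is read AFTER the update returns, so reactivity is preserved.
const typeSignal = signal<string>("unknown");
function classifyMessage(raw: string): string {
  untracked(() => typeSignal.set(raw.startsWith("#") ? "markdown" : "text"));
  return typeSignal.get();
}

rendering = true;
const kind = classifyMessage("# heading");
rendering = false;
```

The same call without the untracked() wrapper would trip the guard, which is exactly the NG0600 the commit fixes.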
…er and frame-synced pipeline

Architecture changes to fix streaming chat jank at the root:

**Streaming markdown renderer** (new)
- Append-only DOM renderer that never uses innerHTML during streaming
- Processes text deltas incrementally via a line-based state machine
- Handles paragraphs, bold/italic, headers, lists, code blocks, links,
  blockquotes, and tables — all rendered by appending DOM nodes
- On stream completion, does a single high-quality marked.parse() render
- 38 new tests covering all markdown features + streaming simulation

**Frame-synced signal pipeline**
- Default throttle changed from 0 (every token) to 16ms (~60fps)
- Batches SSE token updates so at most one signal update fires per frame
- Eliminates change detection storms during high-throughput streaming

**Typing indicator fix**
- Now only shows before the first AI token arrives (time-to-first-token)
- Previously showed the entire duration of streaming, overlapping content

**Optimistic human message**
- Human message bubble appears immediately on submit
- Previously waited for server to echo it back via messages/partial

**Auto-scroll fix**
- Removed isLoading tracking from scroll effect
- Now triggers on message content changes, not loading state transitions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
All chat/* cockpit examples (generative-ui, messages, input, debug,
interrupts, theming, threads, timeline, tool-calls, subagents, a2ui)
used separate assistant IDs but shared the same /api proxy — which
only routed to the streaming LangGraph Cloud deployment. This caused
HTTP 422 "Invalid assistant" errors for every non-streaming example.

Fix: register all 12 graph names in the streaming deployment's
langgraph.json. A factory module (chat_graphs.py) creates prompt-based
graph instances for each chat example. The a2ui graph (custom logic)
is copied as a standalone module.

No Angular code changes — the assistant IDs in the production
environment files already match the newly registered graph names.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
vercel bot commented Apr 11, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| cacheplane | Ready | Preview, Comment | Apr 11, 2026 4:29pm |


blove merged commit 6dce78c into main on Apr 11, 2026
14 checks passed
blove added a commit that referenced this pull request Apr 12, 2026
All chat/* standalone LangGraph Cloud deployments are unreachable
(TCP timeout). The streaming deployment is healthy and already has
all 12 graphs consolidated (PR #113).

Changes:
- examples-middleware.ts: route all chat/* paths to 'streaming' key
- a2ui environment: use 'a2ui_form' (streaming deployment's name)
- a2ui_graph.py in streaming deployment: update to hardcoded v0.9
  JSONL matching PR #120's fix

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
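The examples-middleware.ts change can be sketched as a path-to-deployment resolver. The function name and the non-chat fallback are assumptions for illustration; only the chat/* → 'streaming' mapping comes from the commit.

```typescript
// Illustrative sketch (assumed names; the real examples-middleware.ts
// may differ): resolve which LangGraph Cloud deployment key serves a
// given example path.
function deploymentKeyFor(examplePath: string): string {
  // All chat/* examples route to the consolidated 'streaming'
  // deployment, which registers all 12 graphs (PR #113), because
  // their standalone deployments are unreachable.
  if (examplePath.startsWith("chat/")) {
    return "streaming";
  }
  // Hypothetical default: other example families keep a key derived
  // from their path prefix.
  return examplePath.split("/")[0] ?? "streaming";
}
```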