13 changes: 12 additions & 1 deletion cockpit/langgraph/streaming/python/langgraph.json
@@ -1,6 +1,17 @@
{
"graphs": {
"streaming": "./src/graph.py:graph"
"streaming": "./src/graph.py:graph",
"generative_ui": "./src/chat_graphs.py:generative_ui",
"c-messages": "./src/chat_graphs.py:c_messages",
"c-input": "./src/chat_graphs.py:c_input",
"c-debug": "./src/chat_graphs.py:c_debug",
"c-interrupts": "./src/chat_graphs.py:c_interrupts",
"c-theming": "./src/chat_graphs.py:c_theming",
"c-threads": "./src/chat_graphs.py:c_threads",
"c-timeline": "./src/chat_graphs.py:c_timeline",
"c-tool-calls": "./src/chat_graphs.py:c_tool_calls",
"c-subagents": "./src/chat_graphs.py:c_subagents",
"a2ui_form": "./src/a2ui_graph.py:graph"
},
"dependencies": [
"."
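Each key under `graphs` maps an assistant ID to a `module:variable` export, so all of the chat examples share one deployment. A minimal sketch of reaching one of them through the `langgraph_sdk` client, assuming a local `langgraph dev` server on its default port (URL, graph key, and message content are illustrative):

```python
import asyncio

from langgraph_sdk import get_client


async def main() -> None:
    client = get_client(url="http://localhost:2024")
    thread = await client.threads.create()
    # Any key from the "graphs" object above works as an assistant ID.
    async for part in client.runs.stream(
        thread["thread_id"],
        "c-messages",
        input={"messages": [{"role": "user", "content": "Hello"}]},
        stream_mode="messages",
    ):
        print(part.event, part.data)


asyncio.run(main())
```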
13 changes: 13 additions & 0 deletions cockpit/langgraph/streaming/python/prompts/debug.md
@@ -0,0 +1,13 @@
# Chat Debug Assistant

You are an assistant that demonstrates the ChatDebugComponent for
development inspection.

Your responses pass through a multi-step pipeline (generate -> process ->
summarize), creating multiple state transitions that are visible in the
debug panel. Each step produces different state data that developers can
inspect using the timeline, state inspector, and diff viewer.

Respond helpfully while noting that your response will be processed
through multiple graph nodes, each creating a checkpoint visible in
the debug panel.
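The PR registers `c-debug` against the same single-node prompt graph as the other examples, so the generate -> process -> summarize pipeline above is descriptive rather than literal. A sketch of what a literal three-node pipeline could look like (node bodies are invented for illustration):

```python
from langchain_core.messages import AIMessage
from langgraph.graph import END, MessagesState, StateGraph


async def generate(state: MessagesState) -> dict:
    return {"messages": [AIMessage(content="draft response")]}


async def process(state: MessagesState) -> dict:
    draft = state["messages"][-1].content
    return {"messages": [AIMessage(content=draft.upper())]}


async def summarize(state: MessagesState) -> dict:
    return {"messages": [AIMessage(content="one-line summary of the draft")]}


g = StateGraph(MessagesState)
g.add_node("generate", generate)
g.add_node("process", process)
g.add_node("summarize", summarize)
g.set_entry_point("generate")
g.add_edge("generate", "process")
g.add_edge("process", "summarize")
g.add_edge("summarize", END)
pipeline = g.compile()  # one checkpoint per node, visible in the debug panel
```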
46 changes: 46 additions & 0 deletions cockpit/langgraph/streaming/python/prompts/generative-ui.md
@@ -0,0 +1,46 @@
# Generative UI Assistant

You are a generative-UI assistant. You MUST respond with **raw JSON only** — no markdown, no code fences, no explanation text. Your entire response must be a single valid JSON object following the Spec format below.

## Spec Schema

A **Spec** is a JSON object with two required top-level keys:

```
{
"elements": { [key: string]: Element },
"rootKey": string
}
```

An **Element** has:

```
{
"type": string, // component type name
"props": { ... }, // component-specific properties
"children?": string[] // ordered list of element keys (references into `elements`)
}
```

## Available Component Types

| Type | Props | Children |
|-----------------|--------------------------------------------------------------|----------|
| `container` | *(none)* | Yes |
| `weather_card` | `city` (string), `temperature` (number), `condition` (string)| No |
| `stat_card` | `label` (string), `value` (string) | No |

## Rules

1. Respond ONLY with valid JSON. No markdown. No code fences. No surrounding text.
2. Every element referenced in a `children` array must exist as a key in `elements`.
3. `rootKey` must reference a key that exists in `elements`.
4. Use `container` to group multiple cards together.
5. Choose component types that best match the user's request.

## Example Response

If the user asks "What's the weather in Chicago and New York?", respond exactly as follows:

{"elements":{"root":{"type":"container","props":{},"children":["chicago","nyc"]},"chicago":{"type":"weather_card","props":{"city":"Chicago","temperature":45,"condition":"Partly Cloudy"}},"nyc":{"type":"weather_card","props":{"city":"New York","temperature":52,"condition":"Sunny"}}},"rootKey":"root"}
11 changes: 11 additions & 0 deletions cockpit/langgraph/streaming/python/prompts/input.md
@@ -0,0 +1,11 @@
# Chat Input Assistant

You are an assistant that demonstrates the ChatInputComponent from @cacheplane/chat.

Echo back what the user says, and explain the input features being demonstrated:
- Custom placeholder text
- Keyboard handling (Enter to send, Shift+Enter for newline)
- Disabled state while the agent is responding
- Loading indicator integration

Keep responses concise to showcase the streaming and input state transitions.
12 changes: 12 additions & 0 deletions cockpit/langgraph/streaming/python/prompts/interrupts.md
@@ -0,0 +1,12 @@
# Chat Interrupts Assistant

You are an assistant that demonstrates human-in-the-loop approval gates
using LangGraph interrupts.

Every response you generate will be paused at an approval gate before
being finalized. This demonstrates the interrupt() primitive that enables
human oversight of AI actions.

Explain to the user that after you draft a response, they will see an
approval panel where they can approve or reject the response before it
is committed to the conversation.
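Note that `c-interrupts` is wired to the same prompt-only factory as the other examples; a real approval gate built on LangGraph's `interrupt()` could look roughly like this sketch (the node shape and resume payload are assumptions):

```python
from langchain_core.messages import AIMessage
from langgraph.graph import MessagesState
from langgraph.types import interrupt


async def approval_gate(state: MessagesState) -> dict:
    draft = state["messages"][-1]
    # Pauses the run until the client resumes with a decision payload.
    # Requires the graph to be compiled with a checkpointer.
    decision = interrupt({"action": "approve_response", "draft": draft.content})
    if decision.get("approved"):
        return {"messages": []}  # keep the draft unchanged
    return {"messages": [AIMessage(content="Draft rejected by the reviewer.")]}

# Client side, after the run pauses:
#   graph.invoke(Command(resume={"approved": True}), config)
#   (Command is imported from langgraph.types)
```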
12 changes: 12 additions & 0 deletions cockpit/langgraph/streaming/python/prompts/messages.md
@@ -0,0 +1,12 @@
# Chat Messages Assistant

You are an assistant that demonstrates the chat message primitives from @cacheplane/chat.

Your role is to showcase different message types and rendering styles.
Use varied response formats including short answers, longer explanations,
bulleted lists, and code snippets to demonstrate how ChatMessagesComponent
renders different content.

When greeting the user, explain that this demo showcases ChatMessagesComponent,
ChatInputComponent, and ChatTypingIndicatorComponent working together as
individual primitives rather than the composed ChatComponent.
12 changes: 12 additions & 0 deletions cockpit/langgraph/streaming/python/prompts/subagents.md
@@ -0,0 +1,12 @@
# Chat Subagents Orchestrator

You are the orchestrator in a multi-agent system. You coordinate specialized
subagents to handle user requests:

- **Research Agent**: Gathers background information and context
- **Analysis Agent**: Analyzes findings and identifies patterns
- **Summary Agent**: Produces a concise summary of results

When the user asks a question, acknowledge their request and explain that
you are delegating work to your specialized subagents. Each subagent will
process the task in sequence and their progress will be visible in the UI.
13 changes: 13 additions & 0 deletions cockpit/langgraph/streaming/python/prompts/theming.md
@@ -0,0 +1,13 @@
# Chat Theming Assistant

You are an assistant that demonstrates chat theming and CSS custom
property customization in @cacheplane/chat.

The chat UI supports extensive theming via CSS custom properties like
`--chat-bg`, `--chat-text`, `--chat-accent`, `--chat-surface`, and more.
These can be swapped at runtime using CHAT_THEME_STYLES or by setting
CSS variables on a parent element.

Explain the theming system when asked, and demonstrate how different
themes change the appearance of the chat interface. The sidebar contains
theme picker buttons that swap themes in real time.
11 changes: 11 additions & 0 deletions cockpit/langgraph/streaming/python/prompts/threads.md
@@ -0,0 +1,11 @@
# Chat Threads Assistant

You are an assistant that demonstrates multi-thread conversation management.

Each conversation thread maintains its own isolated message history.
Users can create new threads, switch between existing threads, and
each thread preserves its full conversation context independently.

When the user starts a conversation, acknowledge the current thread
and explain that they can create new threads or switch between them
using the thread list in the sidebar.
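The isolation described here comes from LangGraph's checkpointer, which keys saved state by `thread_id`. A self-contained sketch (the graph and messages are illustrative):

```python
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, MessagesState, StateGraph


def echo(state: MessagesState) -> dict:
    return {"messages": [AIMessage(content=f"history has {len(state['messages'])} messages")]}


g = StateGraph(MessagesState)
g.add_node("echo", echo)
g.set_entry_point("echo")
g.add_edge("echo", END)
app = g.compile(checkpointer=MemorySaver())

# Same graph, two thread_ids: the histories never mix.
app.invoke({"messages": [HumanMessage(content="hi")]},
           {"configurable": {"thread_id": "thread-1"}})
app.invoke({"messages": [HumanMessage(content="hello")]},
           {"configurable": {"thread_id": "thread-2"}})
```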
11 changes: 11 additions & 0 deletions cockpit/langgraph/streaming/python/prompts/timeline.md
@@ -0,0 +1,11 @@
# Chat Timeline Assistant

You are an assistant that demonstrates conversation timeline and
checkpoint navigation using the Angular agent() ref.

Each message exchange creates a checkpoint in the conversation timeline.
Users can navigate backward and forward through these checkpoints using
the timeline slider, and even branch from a previous checkpoint to
explore alternative conversation paths.

Respond helpfully to demonstrate how checkpoints accumulate over time.
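On the graph side, these checkpoints are exposed through `get_state_history`, and invoking against an earlier checkpoint's config forks the thread. A sketch that reuses `app` and the imports from the threads example above:

```python
config = {"configurable": {"thread_id": "thread-1"}}

# Snapshots are yielded newest-first; each one carries the config
# (including its checkpoint_id) needed to resume or fork from it.
history = list(app.get_state_history(config))
earliest = history[-1].config

# Running against an earlier checkpoint branches the conversation.
app.invoke({"messages": [HumanMessage(content="try a different path")]}, earliest)
```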
13 changes: 13 additions & 0 deletions cockpit/langgraph/streaming/python/prompts/tool-calls.md
@@ -0,0 +1,13 @@
# Chat Tool Calls Assistant

You are an assistant with access to search, calculator, and weather tools.
Use these tools proactively to answer user questions.

Available tools:
- **search**: Search the web for information on any topic
- **calculator**: Evaluate mathematical expressions
- **weather**: Get current weather for any city

When the user asks a question, use the appropriate tool(s) to gather
information before responding. Combine results from multiple tools
when needed. Always explain which tools you used and why.
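As registered, `c-tool-calls` is the plain prompt graph from `chat_graphs.py`, so the tools above are described to the model rather than bound. Actually binding tools would look roughly like this sketch (tool bodies are stubs; the model name is copied from `chat_graphs.py`):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression."""
    return str(eval(expression))  # demo only: never eval untrusted input


@tool
def weather(city: str) -> str:
    """Get current weather for a city."""
    return f"18C and sunny in {city}"  # stubbed result


llm = ChatOpenAI(model="gpt-5-mini").bind_tools([calculator, weather])
msg = llm.invoke("What is 6 * 7?")
print(msg.tool_calls)
# e.g. [{'name': 'calculator', 'args': {'expression': '6 * 7'}, ...}]
```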
99 changes: 99 additions & 0 deletions cockpit/langgraph/streaming/python/src/a2ui_graph.py
@@ -0,0 +1,99 @@
"""
A2UI Contact Form Graph

Demonstrates the A2UI (Agent-to-UI) protocol by streaming JSONL that
builds an interactive contact form on the Angular frontend.
"""

import json
from langgraph.graph import StateGraph, MessagesState, END
from langchain_core.messages import AIMessage

A2UI_PREFIX = "---a2ui_JSON---"

CONTACT_FORM_JSONL = A2UI_PREFIX + "\n" + "\n".join([
json.dumps({"type": "createSurface", "surfaceId": "contact", "catalogId": "basic"}),
json.dumps({"type": "updateDataModel", "surfaceId": "contact", "value": {
"name": "", "email": "", "department": "Engineering", "consent": False,
}}),
json.dumps({"type": "updateComponents", "surfaceId": "contact", "components": [
{"id": "root", "component": "Column", "children": ["card"]},
{"id": "card", "component": "Card", "title": "Contact Us", "children": [
"name_field", "email_field", "dept_picker", "consent_check", "divider", "submit_btn",
]},
{"id": "name_field", "component": "TextField",
"label": "Name", "value": {"path": "/name"}, "placeholder": "Your full name",
"_bindings": {"value": "/name"},
"checks": [
{"condition": {"call": "required", "args": {"value": {"path": "/name"}}},
"message": "Name is required"},
]},
{"id": "email_field", "component": "TextField",
"label": "Email", "value": {"path": "/email"}, "placeholder": "you@company.com",
"_bindings": {"value": "/email"},
"checks": [
{"condition": {"call": "required", "args": {"value": {"path": "/email"}}},
"message": "Email is required"},
{"condition": {"call": "email", "args": {"value": {"path": "/email"}}},
"message": "Must be a valid email address"},
]},
{"id": "dept_picker", "component": "ChoicePicker",
"label": "Department",
"options": ["Engineering", "Sales", "Support", "Marketing"],
"selected": {"path": "/department"},
"_bindings": {"selected": "/department"}},
{"id": "consent_check", "component": "CheckBox",
"label": "I agree to be contacted", "checked": {"path": "/consent"},
"_bindings": {"checked": "/consent"}},
{"id": "divider", "component": "Divider"},
{"id": "submit_btn", "component": "Button",
"label": "Submit",
"checks": [
{"condition": {"call": "and", "args": {"values": [
{"call": "required", "args": {"value": {"path": "/name"}}},
{"call": "email", "args": {"value": {"path": "/email"}}},
{"path": "/consent"},
]}},
"message": "Complete all required fields and agree to be contacted"},
],
"action": {"event": {"name": "formSubmit", "context": {"formId": "contact"}}}},
]}),
])


def build_a2ui_graph():
"""
    Single-node graph: create_form either emits the A2UI contact form
    surface or, when the incoming message is an a2ui_event payload,
    delegates to handle_event to acknowledge the submission.
"""

async def create_form(state: MessagesState) -> dict:
last = state["messages"][-1]

# If this is an a2ui_event, route to event handling
try:
payload = json.loads(last.content)
if isinstance(payload, dict) and payload.get("type") == "a2ui_event":
return await handle_event(state, payload)
except (json.JSONDecodeError, AttributeError):
pass

# First message — emit the contact form
return {"messages": [AIMessage(content=CONTACT_FORM_JSONL)]}

async def handle_event(state: MessagesState, payload: dict) -> dict:
name = payload.get("context", {}).get("formId", "unknown")
return {"messages": [AIMessage(
content=f"Thanks for submitting the **{name}** form! We'll be in touch soon.",
)]}

graph = StateGraph(MessagesState)
graph.add_node("create_form", create_form)
graph.set_entry_point("create_form")
graph.add_edge("create_form", END)

return graph.compile()


graph = build_a2ui_graph()
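The Angular frontend is what actually consumes this payload, but the framing is simple: everything after the prefix line is one JSON message per line. A sketch of a parser, reusing `CONTACT_FORM_JSONL` from the module above:

```python
import json


def parse_a2ui(content: str) -> list[dict] | None:
    """Return the A2UI messages, or None for an ordinary chat message."""
    prefix = "---a2ui_JSON---"
    if not content.startswith(prefix):
        return None
    body = content[len(prefix):].strip()
    return [json.loads(line) for line in body.splitlines() if line.strip()]


messages = parse_a2ui(CONTACT_FORM_JSONL)
# [{'type': 'createSurface', ...}, {'type': 'updateDataModel', ...},
#  {'type': 'updateComponents', ...}]
```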
47 changes: 47 additions & 0 deletions cockpit/langgraph/streaming/python/src/chat_graphs.py
@@ -0,0 +1,47 @@
"""
Chat example graphs — consolidated into the streaming deployment.

Each chat cockpit example (messages, input, debug, generative-ui, etc.) uses
the same graph architecture: a single-node StateGraph that prepends a system
prompt and calls the LLM. They differ only in the prompt file.

Registering them all here avoids separate LangGraph Cloud deployments while
keeping each example addressable by its own assistant ID.
"""

from pathlib import Path
from langgraph.graph import StateGraph, MessagesState, END
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage

PROMPTS_DIR = Path(__file__).parent.parent / "prompts"


def _build_prompt_graph(prompt_file: str):
"""Factory: creates a compiled graph that uses the given prompt file."""
llm = ChatOpenAI(model="gpt-5-mini", streaming=True)

async def generate(state: MessagesState) -> dict:
system_prompt = (PROMPTS_DIR / prompt_file).read_text()
messages = [SystemMessage(content=system_prompt)] + state["messages"]
response = await llm.ainvoke(messages)
return {"messages": [response]}

graph = StateGraph(MessagesState)
graph.add_node("generate", generate)
graph.set_entry_point("generate")
graph.add_edge("generate", END)
return graph.compile()


# Each graph instance is referenced by langgraph.json
c_messages = _build_prompt_graph("messages.md")
c_input = _build_prompt_graph("input.md")
c_debug = _build_prompt_graph("debug.md")
c_interrupts = _build_prompt_graph("interrupts.md")
c_theming = _build_prompt_graph("theming.md")
c_threads = _build_prompt_graph("threads.md")
c_timeline = _build_prompt_graph("timeline.md")
c_tool_calls = _build_prompt_graph("tool-calls.md")
c_subagents = _build_prompt_graph("subagents.md")
generative_ui = _build_prompt_graph("generative-ui.md")
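Because the factory returns ordinary compiled graphs, each example can also be exercised directly, without the LangGraph server. A sketch (import path assumed; requires OPENAI_API_KEY):

```python
import asyncio

from langchain_core.messages import HumanMessage
from src.chat_graphs import c_messages  # import path assumed


async def main() -> None:
    result = await c_messages.ainvoke({"messages": [HumanMessage(content="Hi!")]})
    print(result["messages"][-1].content)


asyncio.run(main())
```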
2 changes: 1 addition & 1 deletion libs/agent/src/lib/agent.fn.spec.ts
@@ -147,7 +147,7 @@ describe('agent', () => {
expect(ref.messages()).toHaveLength(1);

threadId.set('thread-2');
-    await new Promise(r => setTimeout(r, 0));
+    await new Promise(r => setTimeout(r, 30));

expect(ref.hasValue()).toBe(false);
expect(ref.status()).toBe(ResourceStatus.Idle);
5 changes: 3 additions & 2 deletions libs/agent/src/lib/agent.fn.ts
@@ -123,8 +123,9 @@ export function agent<
destroy$: destroy$.asObservable(),
});

-  // Throttle helper
-  const ms = typeof options.throttle === 'number' ? options.throttle : 0;
+  // Throttle helper — default 16ms (~60fps) to batch SSE token updates into
+  // at most one signal update per frame, preventing change detection storms.
+  const ms = typeof options.throttle === 'number' ? options.throttle : 16;
const maybeThrottle = <V>(obs: BehaviorSubject<V>) =>
ms > 0
? obs.pipe(throttleTime(ms, asyncScheduler, { leading: true, trailing: true }))