feat: Support Gemini realtime text interruptions #998
Conversation
📝 Walkthrough

Adds interrupt-driven realtime input and preemption handling to RealtimeSession: introduces private flags to defer or early-complete generations, and forwards accumulated user text as `realtime_input`.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User
    participant Session as RealtimeSession
    participant Gen as Generation
    participant Server as Server
    User->>Session: send interrupt (with text)
    activate Session
    Session->>Session: set pendingInterruptText or earlyCompletionPending
    alt active generation exists and generationHasOutput(gen)
        Session->>Gen: finalize current generation (early completion / preempt)
    end
    Session->>Server: emit realtime_input (accumulated user text) if pending
    deactivate Session
    Server->>Session: incoming realtime_input (text/media)
    activate Session
    Session->>Session: treat text as realtime input / forward to model
    deactivate Session
    Server->>Session: modelTurn / outputTranscription
    activate Session
    alt earlyCompletionPending == true
        Session->>Session: discard server output
    else
        Session->>Session: process and append server output
    end
    deactivate Session
    Server->>Session: turnComplete / generationComplete
    Session->>Session: reset earlyCompletionPending / discard state
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@plugins/google/src/beta/realtime/realtime_api.ts`:
- Around lines 573-592: The code clears `this.pendingInterruptText` unconditionally, even when no user text is found. In the block that iterates over the turns (`types.Content`), only clear `this.pendingInterruptText` after at least one `realtime_input` has actually been sent via `this.sendClientEvent`. Concretely, add a local boolean (e.g., `sentRealtime`) before the loop, set it to true whenever you call `this.sendClientEvent({ type: 'realtime_input', ... })`, and after the loop set `this.pendingInterruptText = false` only if `sentRealtime` is true (leave it true otherwise), so future user text can still trigger `realtime_input`.
- Around lines 934-943: The `realtime_input` switch case declares `const { mediaChunks, activityStart, activityEnd, text }` without a block, which triggers Biome's `noSwitchDeclarations`. Wrap the case body in a block (add `{ ... }` around the existing statements, from the `const { mediaChunks, activityStart, activityEnd, text } = msg.value;` declaration through the `activityStart` await) so the declaration lives inside its own lexical scope, keeping the existing `session.sendRealtimeInput` awaits for `mediaChunks`, `text`, and `activityStart` unchanged.
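The lint fix can be illustrated in miniature. The message shape below is a simplified stand-in for the real server message type; only the block-wrapping pattern is the point.

```typescript
// Minimal illustration of the Biome noSwitchDeclarations fix: wrapping a
// case body in braces gives its `const` declarations a lexical block, so
// other switch clauses cannot erroneously access them.
type Msg =
  | { type: 'realtime_input'; value: { text?: string; mediaChunks?: string[] } }
  | { type: 'other' };

function handle(msg: Msg): string {
  switch (msg.type) {
    case 'realtime_input': {
      // Block scope keeps this declaration out of the other clauses.
      const { text, mediaChunks } = msg.value;
      return text ?? `chunks:${mediaChunks?.length ?? 0}`;
    }
    default:
      return 'ignored';
  }
}
```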
- Around lines 1576-1578: The `discardServerOutput` flag can drop new-generation content if protocol ordering allows content for the next generation to arrive before `generationComplete`/`turnComplete`. Make the logic robust by one of: (1) adding a comment near `discardServerOutput` and the `isNewGeneration()` call documenting the protocol ordering guarantee being relied on; (2) proactively clearing `discardServerOutput` when `serverContent.interrupted` is observed, in the same handler that checks `serverContent`; or (3) modifying `isNewGeneration()` to treat a differing `generationId` as a new generation even while `discardServerOutput` is true, so new-generation messages are accepted.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
plugins/google/src/beta/realtime/realtime_api.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/agent-core.mdc)
Add SPDX-FileCopyrightText and SPDX-License-Identifier headers to all newly added files with '// SPDX-FileCopyrightText: 2025 LiveKit, Inc.' and '// SPDX-License-Identifier: Apache-2.0'
Files:
plugins/google/src/beta/realtime/realtime_api.ts
**/*.{ts,tsx}?(test|example|spec)
📄 CodeRabbit inference engine (.cursor/rules/agent-core.mdc)
When testing inference LLM, always use full model names from agents/src/inference/models.ts (e.g., 'openai/gpt-4o-mini' instead of 'gpt-4o-mini')
Files:
plugins/google/src/beta/realtime/realtime_api.ts
**/*.{ts,tsx}?(test|example)
📄 CodeRabbit inference engine (.cursor/rules/agent-core.mdc)
Initialize logger before using any LLM functionality with initializeLogger({ pretty: true }) from '@livekit/agents'
Files:
plugins/google/src/beta/realtime/realtime_api.ts
🧬 Code graph analysis (1)
plugins/google/src/beta/realtime/realtime_api.ts (2)
plugins/openai/src/realtime/api_proto.ts (1)
  Content (181-186)
plugins/openai/src/realtime/realtime_model.ts (1)
  session (305-307)
🪛 Biome (2.1.2)
plugins/google/src/beta/realtime/realtime_api.ts
[error] 934-934: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🔇 Additional comments (3)
plugins/google/src/beta/realtime/realtime_api.ts (3)
413-414: Clean state flags for interrupt handling.
The added flags make the interrupt/discard state explicit and easy to reason about.
748-752: Interrupt flow sequencing looks good.
Marking the current generation done before starting activity keeps preemption consistent.
1329-1420: Output-discard gating is solid.
Suppressing modelTurn/outputTranscription while discarding avoids stale playout and text leakage.
found some issues with the state machine. will re-open once fixed.
- chore: keep pendingInterruptText true until a realtime text is actually emitted
- chore: add comment to explain protocol ordering assumption
- fix(google): guard realtime interrupt completion
Compare: 3cf6ffe to 0d6259c
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@plugins/google/src/beta/realtime/realtime_api.ts`:
- Around lines 1418-1429: The `earlyCompletionPending` flag is only cleared in the generation-handling path (around `markCurrentGenerationDone`) and can remain true if the session closes or restarts, causing new server content to be misrouted. Update the session lifecycle handlers (e.g., `closeActiveSession`, `onclose`, and `markRestartNeeded`) to explicitly reset it: locate the class field `earlyCompletionPending` and set `earlyCompletionPending = false` in those lifecycle methods, so the flag is cleared whenever a session is closed or restarted and new generations are processed normally. Any cleanup paths that abort a session should also reset `earlyCompletionPending` to avoid a stuck state.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
plugins/google/src/beta/realtime/realtime_api.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/agent-core.mdc)
Add SPDX-FileCopyrightText and SPDX-License-Identifier headers to all newly added files with '// SPDX-FileCopyrightText: 2025 LiveKit, Inc.' and '// SPDX-License-Identifier: Apache-2.0'
Files:
plugins/google/src/beta/realtime/realtime_api.ts
**/*.{ts,tsx}?(test|example|spec)
📄 CodeRabbit inference engine (.cursor/rules/agent-core.mdc)
When testing inference LLM, always use full model names from agents/src/inference/models.ts (e.g., 'openai/gpt-4o-mini' instead of 'gpt-4o-mini')
Files:
plugins/google/src/beta/realtime/realtime_api.ts
**/*.{ts,tsx}?(test|example)
📄 CodeRabbit inference engine (.cursor/rules/agent-core.mdc)
Initialize logger before using any LLM functionality with initializeLogger({ pretty: true }) from '@livekit/agents'
Files:
plugins/google/src/beta/realtime/realtime_api.ts
🪛 Biome (2.1.2)
plugins/google/src/beta/realtime/realtime_api.ts
[error] 942-942: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🔇 Additional comments (2)
plugins/google/src/beta/realtime/realtime_api.ts (2)
572-599: Interrupt-driven realtime text emission looks good. The gating on pendingInterruptText while always sending full turns preserves context and keeps interruption behavior responsive.
742-760: Interrupt flow + output detection are consistent. The helper and interrupt branching read cleanly and align with the preemption intent.
Hey @tillkolter, let me know when the PR is ready to review!
Description
This PR refines Gemini Realtime interrupt handling by sending `realtime_input` text when an interrupt is active, while always keeping full user turns in `content`. The goal is to preserve context fidelity while still triggering low-latency interruption behavior.
In contrast to OpenAI's well-defined message/event schema supporting interruption, truncation, and request cancellation, Gemini falls short of providing an explicit interface to manage request state. This solution simulates the internal behavior of the OpenAI plugin to achieve similar behavior for text interruptions.
Changes Made
- Gate `realtime_input` text on an explicit interrupt flag instead of generation state, so interrupts remain responsive even after `_done` is set.
- Keep full user turns in `content` (no stripping), so the model's context stays consistent.
Pre-Review Checklist
Additional Notes
This keeps Gemini's interruption responsiveness while avoiding the context drift caused by stripping user turns from `content`.