Summary
Allow tools to return two distinct outputs from a single invocation:
- A serialised result that is added to the LLM conversation context (what the model reasons over).
- An out-of-band result that is retained by the framework but never serialised into the conversation. This data is made available to the caller once the agent run completes.
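To make the proposal concrete, here is a minimal sketch of what a dual-output tool might look like. The container name `ToolReturn` and its fields `content` and `metadata` are illustrative assumptions, not an existing API:

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical container for a dual-output tool result. `content` is
# serialised into the LLM conversation context; `metadata` is retained
# by the framework and surfaced to the caller when the run completes.
@dataclass
class ToolReturn:
    content: str
    metadata: Any = None

def lookup_customer(customer_id: str) -> ToolReturn:
    # Imagine `record` comes from an external dependency (DB, API, ...).
    record = {
        "id": customer_id,
        "name": "Ada Lovelace",
        "tier": "gold",
        "billing_history": ["..."],  # large, sensitive, irrelevant to the model
    }
    # Only a small projection reaches the model; the full record rides
    # along out-of-band for rendering, auditing, downstream processing.
    return ToolReturn(
        content=f"Customer {record['name']} is on the {record['tier']} tier.",
        metadata=record,
    )
```

The key property is that both values are produced in one invocation, so there is no re-fetch and no shared state for the caller to coordinate.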
Motivation
It is common for a tool to retrieve rich domain data from an external dependency, but for the LLM to only need a projected subset of that data to perform its reasoning. The remaining attributes are still valuable to the caller of the agent — for rendering, downstream processing, auditing, etc.
Today there is no first-class way to express this in the tool response. Workarounds include:
- Serialising everything into the tool result: wastes context window, increases cost, risks the LLM hallucinating over fields it doesn't need, and may expose data that should not be sent to a model provider.
- Re-fetching after the agent completes: introduces redundant calls, latency, and potential consistency issues.
- Storing data in external shared mutable state (e.g., a captured variable outside the tool definition): works, but requires locking to behave correctly with parallel tool calls.
A mechanism that allows this data to be returned directly from the tool call, collected across any parallel calls, and then handled within the graph would be more efficient.