
feat(prompt): introduce PromptExecutorHooks and align ContextualPromp…#1764

Draft
Amaneusz wants to merge 1 commit into develop from amanowicz/develop/prompt-executor-hooks

Conversation

@Amaneusz
Collaborator

No description provided.

@Amaneusz Amaneusz force-pushed the amanowicz/develop/prompt-executor-hooks branch 7 times, most recently from 79da560 to 2f519c7 Compare March 30, 2026 15:44
@Amaneusz Amaneusz requested review from EugeneTheDev and sdubov March 30, 2026 15:46
@Amaneusz Amaneusz marked this pull request as ready for review March 30, 2026 15:47
@Amaneusz Amaneusz force-pushed the amanowicz/develop/prompt-executor-hooks branch from 2f519c7 to 45f6a42 Compare March 30, 2026 20:05
…inversing the control ownership of contextual hooks

# Conflicts:
#	agents/agents-core/src/commonMain/kotlin/ai/koog/agents/core/agent/session/AIAgentLLMReadSessionImpl.kt
#	prompt/prompt-executor/prompt-executor-model/src/jvmMain/kotlin/ai/koog/prompt/executor/model/PromptExecutor.kt
@Amaneusz Amaneusz force-pushed the amanowicz/develop/prompt-executor-hooks branch from 45f6a42 to e8a2283 Compare March 30, 2026 20:27
Collaborator

@sdubov sdubov left a comment


I like the clean design, @Amaneusz, thank you a bunch. I raised a question about the API to discuss. Please let me know what you think.

model: LLModel,
tools: List<ToolDescriptor> = emptyList()
tools: List<ToolDescriptor> = emptyList(),
hooks: PromptExecutorHooks? = null
Collaborator


@Amaneusz, I like the new design for the hooks! I have one question/concern about this approach that I want to discuss. Could you please clarify why you use hooks as a parameter for every method instead of including hooks in the PromptExecutor API itself? I mean something like this:

public interface PromptExecutorAPI : AutoCloseable {

    public suspend fun onModelChoiceFailed(intent: InitialExecutionIntent, error: Throwable)

    public suspend fun beforeExecution(intent: InitialExecutionIntent, effectiveModel: LLModel): ExecutionArgOverrides

    public suspend fun onFailure(intent: ResolvedExecutionIntent, effectiveModel: LLModel, error: Throwable)

    public suspend fun execute(...): List<Message.Response>

    public fun executeStreaming(...): Flow<StreamFrame>
}

The updated API would force any derived class to implement the hooks. It also would not need to handle the outer + inner hooks through method parameters; you could just wrap the original hooks into the pipeline logic:

public class ContextualPromptExecutor(private val executor: PromptExecutor, ... ) {
    ...
    override suspend fun onCompleted(...) {
        context.pipeline.onLLMCallCompleted(...)
        executor.onCompleted() // executor instance is a base prompt executor passed to the contextual one as a parameter
    }
}

I looked through the current implementations and found that it might be quite hard to realize that you need to wrap your logic into hooks when you write your own PromptExecutor implementation. I might be missing some important design points here; very happy to discuss.

Collaborator Author


Hey @sdubov, really good question. I had a similar idea at the beginning, but it doesn't work out for us. Let me explain the reasoning behind the design.

tl;dr - there are two main rules to follow:

  1. The caller must define the logic of the hook (it's the Contextual executor that knows about the pipeline)
  2. The executor must define the moment the hook triggers (it's the Multi/Routing executors that know when the prompt is actually executed - the execute* methods are not only about executing, there's more arbitrary logic in them that we can't control)

Longer take:

The core problem this solves: the caller needs to observe execution facts that only the executor knows — specifically, which model actually ran. The caller requests a model, but executors can silently substitute it (fallback, routing). By the time execute() returns, that information is gone.

Solving this requires two things simultaneously:

  • the caller defines what to do with those facts (pipeline callbacks, tracing — context the executor has no access to)
  • the executor fires at the moment the facts exist (after model resolution — a moment invisible from outside)

This is why hooks can't live on the interface as methods. With your proposed design (where hooks are owned by executor):

contextual.execute(prompt, model, tools)
    nested.execute(prompt, model, tools)
        // `nested` logic before model choice
        val effectiveModel = chooseModel()
        // `nested` logic after model choice
        nested.onCompleted(effectiveModel, result)
    contextual.onCompleted(...) // no way to get `effectiveModel` here

The inner executor fires its own methods, not the wrapper's. To fix that, you'd need the inner executor to hold a reference back to the outer wrapper, and that would require some runtime injection mechanism, just like the one in this PR.

Hooks as a call parameter satisfy both constraints cleanly:

contextual.execute(prompt, model, tools, outerHooks)
    nested.execute(..., hooks = ContextualHooks(pipeline, outerHooks))
        // `nested` logic before model choice
        val effectiveModel = chooseModel()
        // `nested` logic after model choice
        hooks.onCompleted(effectiveModel, result) // hooks allow executors to communicate
            pipeline.onLLMCallCompleted(effectiveModel, ...)
            outerHooks?.onCompleted(...)

The caller provides the logic, the executor decides when to invoke it.
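The two constraints can be shown with a minimal, runnable sketch. All names here (Hooks, FallbackExecutor, ContextualExecutor, the model strings) are simplified stand-ins for illustration, not the actual Koog API:

```kotlin
// Caller-defined hook logic; the executor decides when to fire it.
fun interface Hooks {
    fun onCompleted(effectiveModel: String, result: String)
}

// Inner executor: owns model resolution, so it decides *when* the hook fires.
class FallbackExecutor(private val fallbackModel: String) {
    fun execute(prompt: String, model: String, hooks: Hooks?): String {
        // Silently substitute the model - the fact the caller cannot see.
        val effectiveModel = if (model == "unavailable") fallbackModel else model
        val result = "[$effectiveModel] $prompt"
        hooks?.onCompleted(effectiveModel, result) // fired after resolution
        return result
    }
}

// Outer executor: owns the pipeline context, so it defines *what* the hook
// does, and chains any hooks its own caller supplied.
class ContextualExecutor(
    private val nested: FallbackExecutor,
    private val pipeline: MutableList<String>
) {
    fun execute(prompt: String, model: String, outerHooks: Hooks? = null): String =
        nested.execute(prompt, model) { effectiveModel, result ->
            pipeline += "completed with $effectiveModel"
            outerHooks?.onCompleted(effectiveModel, result)
        }
}

fun main() {
    val pipeline = mutableListOf<String>()
    val executor = ContextualExecutor(FallbackExecutor("fallback-model"), pipeline)
    executor.execute("hello", "unavailable")
    println(pipeline) // the wrapper observed the substituted model
}
```

Even though the wrapper never calls chooseModel() itself, its hook still receives the effective model, because the inner executor invokes the caller-supplied hook at the point where that fact exists.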

Collaborator


@Amaneusz, thank you for the detailed explanation of the approach and the new API changes. As we discussed, it might be worth extracting a Hookable interface separately, so we can use it for PromptExecutorAPI and add the logic to set/get the hooks that are used internally. In that case we would not need to change the current PromptExecutor API or append a hooks parameter to every method. I'm open for discussion here. Please let me know wdyt?
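For reference, a minimal sketch of what such a Hookable extraction could look like. All names here are hypothetical, not the actual Koog API:

```kotlin
// Hook logic attached to the executor instance instead of threaded through
// every method call.
interface Hooks {
    fun onCompleted(effectiveModel: String)
}

// The suggested separate interface: set/get hooks, used internally.
interface Hookable {
    var hooks: Hooks?
}

class SimplePromptExecutor : Hookable {
    override var hooks: Hooks? = null

    fun execute(prompt: String, model: String): String {
        val result = "[$model] $prompt"
        hooks?.onCompleted(model) // executor fires its own attached hooks
        return result
    }
}

fun main() {
    val executor = SimplePromptExecutor()
    executor.hooks = object : Hooks {
        override fun onCompleted(effectiveModel: String) =
            println("completed: $effectiveModel")
    }
    println(executor.execute("hi", "some-model"))
}
```

Note this sketch still has the limitation discussed above: a wrapper's hooks would not automatically observe the model resolved inside a nested executor unless the hooks are explicitly forwarded.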

* keeping each `execute` / `executeStreaming` override focused on client selection and
* the LLM call itself.
*/
public object ExecutorHooksHelper {
Collaborator


Up to you, but maybe it's worth extracting this into a separate file so that helper methods aren't mixed with the core hooks logic?

* (ie [ai.koog.prompt.executor.llms.MultiLLMPromptExecutor.fallback]).
* If given [PromptExecutor] implementation does not support dynamic model selection, it should use initially provided model - [InitialExecutionIntent.model]
*/
public suspend fun <T> executeWithHook(
Collaborator


It seems to me that this method should be internal or even private. As far as I understand, hooks are internal logic that should not be exposed outside. Please let me know if I'm missing anything.


/** Hooks for [PromptExecutorAPI.execute]. */
public val execute: SimpleExecutorHook<List<Message.Response>>
get() = object : SimpleExecutorHook<List<Message.Response>> {}
Collaborator


Here and below. It would be better to keep these objects in static variables. Currently, they are created on every get() invocation.
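A small sketch of the allocation difference being pointed out here: a property with a custom get() that builds an object returns a fresh instance on every access, while an initialized val is allocated once (names are illustrative, not the actual API):

```kotlin
// Marker interface standing in for SimpleExecutorHook<T>.
interface SimpleHook<T>

class HookProvider {
    // Allocates a new anonymous object on every access.
    val perAccess: SimpleHook<String>
        get() = object : SimpleHook<String> {}

    // Allocated once, at construction time.
    val cached: SimpleHook<String> = object : SimpleHook<String> {}
}

fun main() {
    val p = HookProvider()
    println(p.perAccess === p.perAccess) // false: two distinct instances
    println(p.cached === p.cached)       // true: same instance
}
```

Since the hooks here are stateless, hoisting them into initialized vals (or top-level/companion vals) avoids the per-call allocation without changing behavior.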


return response
return executeWithHook(InitialExecutionIntent(prompt, tools, model), hook = hooks?.execute) { finalIntent ->
llmClient.execute(finalIntent.prompt, model, finalIntent.tools)
Collaborator


Here and below: shouldn't it be finalIntent.model in the llmClient.execute(...) call?

@Amaneusz Amaneusz marked this pull request as draft April 1, 2026 10:56