
feat: OpenAI-compatible LLM gateway provider #4

@dpup

Description

Summary

Add an OpenAI decomposer to the LLM gateway alongside the existing Anthropic one. The gateway architecture already supports multiple providers — internal/gateway/anthropic/ is self-contained with a clean decomposer pattern. This is implementation work, not design work.

Scope

  • New package: internal/gateway/openai/
  • Decompose OpenAI chat completion requests into Keep calls:
    • llm.request — model, token estimate, system message summary
    • llm.tool_result — tool role messages (function call results)
    • llm.tool_use — assistant tool_calls in responses
    • llm.response — completion summary (finish_reason, tool_call count)
  • Support both regular and streaming (stream: true with SSE) responses
  • Mutation patching for redact actions (same pattern as Anthropic)
  • Gateway config: provider: openai
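The decomposition above can be sketched roughly as follows. This is a minimal illustration, not the actual interface: the `ChatRequest`, `Message`, `ToolCall`, and `Event` shapes here are hypothetical stand-ins, and the real implementation would mirror whatever decomposer contract `internal/gateway/anthropic/` already defines.

```go
package main

import "fmt"

// Hypothetical wire shapes for illustration only; the real decomposer in
// internal/gateway/openai/ would follow the existing Anthropic pattern.

type ToolCall struct {
	ID   string
	Name string
	Args string
}

type Message struct {
	Role      string     // "system", "user", "assistant", or "tool"
	Content   string
	ToolCalls []ToolCall // populated on assistant messages only
}

type ChatRequest struct {
	Model    string
	Messages []Message
}

type Event struct {
	Kind   string // e.g. "llm.request", "llm.tool_result", "llm.tool_use"
	Detail string
}

// decompose walks a chat completion request and emits one event per item of
// interest: the request itself, tool-role results, and assistant tool calls.
func decompose(req ChatRequest) []Event {
	events := []Event{{Kind: "llm.request", Detail: req.Model}}
	for _, m := range req.Messages {
		switch {
		case m.Role == "tool":
			// Tool results arrive as role "tool" messages, not content blocks.
			events = append(events, Event{Kind: "llm.tool_result", Detail: m.Content})
		case m.Role == "assistant" && len(m.ToolCalls) > 0:
			for _, tc := range m.ToolCalls {
				events = append(events, Event{Kind: "llm.tool_use", Detail: tc.Name})
			}
		}
	}
	return events
}

func main() {
	evts := decompose(ChatRequest{
		Model: "gpt-4o",
		Messages: []Message{
			{Role: "system", Content: "You are helpful."},
			{Role: "assistant", ToolCalls: []ToolCall{{ID: "call_1", Name: "get_weather"}}},
			{Role: "tool", Content: `{"temp": 21}`},
		},
	})
	for _, e := range evts {
		fmt.Println(e.Kind, e.Detail)
	}
}
```

The `llm.response` event is omitted here because it is derived from the completion response, not the request.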

OpenAI-specific considerations

  • Chat completions use messages[] with role-based content (system, user, assistant, tool)
  • Tool calls are in tool_calls[] on assistant messages, not content blocks
  • Streaming uses data: [DONE] sentinel vs Anthropic's event types
  • Function calling has both the legacy function_call field and the modern tool_calls array — support tool_calls only
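The `data: [DONE]` sentinel point is worth making concrete, since it is the main structural difference from Anthropic's typed `event:` lines. A minimal sketch of reading an OpenAI-style SSE body (the `collectChunks` helper and the inline payloads are illustrative, not from the codebase):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// collectChunks reads an OpenAI-style SSE body line by line, collecting each
// `data:` payload until the literal `[DONE]` sentinel. OpenAI marks
// end-of-stream in the data payload itself rather than with a typed event.
func collectChunks(body string) []string {
	var chunks []string
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, "data: ") {
			continue // skip blank separator lines and comments
		}
		payload := strings.TrimPrefix(line, "data: ")
		if payload == "[DONE]" {
			break
		}
		chunks = append(chunks, payload)
	}
	return chunks
}

func main() {
	body := "data: {\"choices\":[{\"delta\":{\"content\":\"Hi\"}}]}\n\ndata: [DONE]\n"
	fmt.Println(collectChunks(body))
}
```

In the real proxy this would run against the upstream response body stream; the decomposer would additionally parse each JSON chunk to accumulate `delta` content and `tool_calls` fragments before emitting events.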

Out of scope

  • OpenAI Responses API (different shape entirely — future issue if needed)
  • Assistants API / threads

References

  • Existing Anthropic decomposer: internal/gateway/anthropic/
  • Gateway proxy: internal/gateway/proxy.go
  • PRD R-3: "OpenAI-compatible as a fast follow"
  • gateway/config/config.go:54 — placeholder comment
