feat(inference): add OpenWork inference proxy #1774

Draft
src-opn wants to merge 3 commits into dev from inference-proxy

Conversation

src-opn (Collaborator) commented May 13, 2026

Summary

  • Add an OpenWork-managed inference proxy app backed by OpenRouter, including proxy routes, webhook settlement, key validation, usage buckets, and model aliasing (a rough route sketch follows this list).
  • Add Den API ownership for inference enablement, per-member inference keys, org upstream OpenRouter keys, OpenWork LLM provider rows, and post-member-change sync hooks.
  • Add Den Web inference controls and update Den Web/Desktop provider parsing so source: openwork providers are visible/importable.
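
For context on the pieces in the first bullet, here is a rough sketch of the proxy route, assuming a Hono-style handler. validateInferenceKey, recordUsage, resolveModelAlias, and the orgOpenRouterKey field are illustrative names for this sketch, not the actual implementation.

```ts
import { Hono } from "hono";

// Illustrative helpers: the real key validation, usage-bucket, and alias
// logic live in the inference app; these names are assumptions for the sketch.
import { validateInferenceKey, recordUsage, resolveModelAlias } from "./lib";

const app = new Hono();

app.post("/api/v1/chat/completions", async (c) => {
  // Validate the caller's per-member OpenWork inference key.
  const key = c.req.header("Authorization")?.replace(/^Bearer\s+/i, "") ?? "";
  const member = await validateInferenceKey(key);
  if (!member) return c.json({ error: "invalid inference key" }, 401);

  // Accept both bare and openwork/-prefixed model ids, then forward the
  // request to OpenRouter using the org's upstream key.
  const body = await c.req.json();
  body.model = resolveModelAlias(body.model);

  const upstream = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${member.orgOpenRouterKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });

  // Usage is bucketed per member; settlement happens later via the webhook.
  await recordUsage(member.id, body.model);

  // Pass the upstream response back to the client unchanged.
  return new Response(upstream.body, {
    status: upstream.status,
    headers: upstream.headers,
  });
});

export default app;
```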

Evidence

Build verification

  • pnpm --filter @openwork-ee/den-db build: passed
  • pnpm --filter @openwork/types build: passed
  • pnpm --filter @openwork-ee/inference build: passed
  • pnpm --filter @openwork-ee/den-api build: passed
  • pnpm --filter @openwork-ee/den-web build: passed
  • pnpm --filter @openwork/app build: passed

API verification

  • Local inference proxy received POST /api/v1/chat/completions and returned 200 during desktop app testing.
  • Manual curl confirmed auth reaches the proxy; model alias handling was updated so both model1 and openwork/model1 are accepted (see the normalization sketch after this list).
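
The alias fix boils down to normalizing the incoming model id. A minimal sketch of that behavior follows; the helper name and the alias table entries are assumptions, not the real mapping.

```ts
// Map public OpenWork model aliases to upstream OpenRouter model ids.
// The entries here are placeholders for illustration only.
const MODEL_ALIASES: Record<string, string> = {
  model1: "upstream/model-1",
  model2: "upstream/model-2",
};

// Accept both "model1" and "openwork/model1" for the same model.
export function resolveModelAlias(model: string): string {
  const bare = model.startsWith("openwork/")
    ? model.slice("openwork/".length)
    : model;
  const upstream = MODEL_ALIASES[bare];
  if (!upstream) throw new Error(`unknown model alias: ${model}`);
  return upstream;
}
```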

UI verification

  • Den Web build passes for the new Inference page and OpenWork provider source support.
  • Full Chrome MCP screenshot/video evidence not captured yet.
  • Known follow-up: the desktop active-session UI can stay on "Thinking" until the user switches sessions, even though the proxy returns 200; this draft PR captures the current implementation state for review.

Test instructions

  1. Set OPENROUTER_MANAGEMENT_API_KEY in ee/apps/den-api/.env.local.
  2. Run pnpm run dev:web-local.
  3. Open Den Web and enable OpenWork Models from the Inference tab.
  4. Confirm /api/den/v1/llm-providers returns an openwork provider with models openwork/model1 and openwork/model2.
  5. In the desktop app, import/use the OpenWork Models provider and send a message.
  6. Confirm the inference service logs POST /api/v1/chat/completions and returns 200 (a small verification script follows).
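
Steps 4 and 6 can also be checked from a small script. The ports, env vars, and payload below are assumptions about the local dev setup, not a documented interface; adjust them to match your environment.

```ts
// Assumed local endpoints and key; adjust to match your dev setup.
const DEN_API = process.env.DEN_API_URL ?? "http://localhost:3000";
const INFERENCE_URL = process.env.INFERENCE_URL ?? "http://localhost:8787";
const KEY = process.env.OPENWORK_INFERENCE_KEY ?? "";

// Step 4: the provider list should include an openwork-source provider
// exposing openwork/model1 and openwork/model2.
const providers = await fetch(`${DEN_API}/api/den/v1/llm-providers`, {
  headers: { Authorization: `Bearer ${KEY}` },
}).then((r) => r.json());
console.log(JSON.stringify(providers, null, 2));

// Step 6: a chat completion through the proxy should come back with 200.
const res = await fetch(`${INFERENCE_URL}/api/v1/chat/completions`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openwork/model1",
    messages: [{ role: "user", content: "ping" }],
  }),
});
console.log(res.status); // expect 200
```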

vercel (Bot) commented May 13, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Deployment | Actions | Updated (UTC)
openwork-app | Ready | Preview, Comment | May 13, 2026 1:44am
openwork-den | Ready | Preview, Comment | May 13, 2026 1:44am
openwork-den-worker-proxy | Ready | Preview, Comment | May 13, 2026 1:44am
openwork-landing | Ready | Preview, Comment, Open in v0 | May 13, 2026 1:44am
openwork-share | Ready | Preview, Comment | May 13, 2026 1:44am
