Central knowledge base for policy-governed AI execution across strategy, operations, and finance.
A structured source-of-truth system that controls how AI operates inside an enterprise context. Not a chatbot. Not a template collection. A governed knowledge architecture with curated modules, hard policy constraints, routing rules, and end-to-end playbooks for real operational use cases.
PUPO is the orchestration layer between raw AI capability and enterprise-safe execution. It defines which knowledge modules get used, in which order, under which constraints — so AI output is consistent, auditable, and policy-compliant across different operational domains.
Built for product, execution, and finance workflows in banking and enterprise environments.
AI in enterprise settings fails in predictable ways: hallucinated financial advice, inconsistent output formats, no audit trail, no escalation path when risk is high. Teams either over-restrict AI (it does nothing useful) or under-govern it (it does dangerous things).
PUPO solves this with a structured knowledge hierarchy:
- Curated modules are reviewed and approved for direct use
- Upstream references are available for adaptation, never direct copy
- Routing rules determine which module applies to which request
- Policy overlays enforce hard constraints (especially in finance)
- Playbooks provide end-to-end workflow sequences for common operations
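The hierarchy above could be expressed as a routing entry like the one below. This is a sketch only: the actual schema of `routing/source-preference.yaml` is defined in the repository, and every key shown here is hypothetical.

```yaml
# Hypothetical routing entry; keys and paths are illustrative only
wealth-analysis:
  use_first: curated/finance/wealth-analysis.md  # approved module, direct use
  fallback: upstream/                            # reference only; adapt, never copy
  overlay: policy/                               # hard constraints always applied
  on_no_match: escalate
```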
```
pupo-ai/
├── curated/                    # USE FIRST — approved modules ready for execution
│   ├── strategy/
│   │   ├── product-manager.md
│   │   ├── compliance-auditor.md
│   │   ├── trend-researcher.md
│   │   └── executive-summary.md
│   ├── execution/
│   │   ├── planner.md
│   │   ├── architect.md
│   │   ├── code-reviewer.md
│   │   └── security-reviewer.md
│   ├── finance/
│   │   ├── rm-copilot-pattern.md      # Relationship manager AI copilot
│   │   ├── wealth-analysis.md
│   │   ├── investment-guardrails.md
│   │   └── finance-layer-boundary.md
│   └── official-patterns/
│       └── anthropic-skill-pattern.md
├── upstream/                   # REFERENCE ONLY — adapt, never copy directly
├── routing/
│   └── source-preference.yaml  # Which module to use, when
├── policy/                     # Hard rules governing all execution
├── playbooks/                  # End-to-end workflow sequences
│   ├── ai-feature-prd-flow.md
│   ├── cpo-update-flow.md
│   └── premier-upgrade-flow.md
└── claude/
    └── CLAUDE.md               # Claude Code execution rules
```
- Curated before upstream — always use approved modules first
- Never generate customer-facing financial execution without a policy overlay
- Escalate if risk is high or no module resolves the request
- Document which module was used and why — auditability is non-negotiable
- Reuse over reinvent — extend existing modules, don't duplicate
| Module | Purpose | Constraint |
|---|---|---|
| `rm-copilot-pattern.md` | AI behaviour pattern for RM copilots | Never generate direct investment advice |
| `wealth-analysis.md` | Structured wealth portfolio analysis framework | Requires human review before client use |
| `investment-guardrails.md` | Hard constraints on investment-related AI output | Non-negotiable policy layer |
| `finance-layer-boundary.md` | Boundary between AI analysis and human decision | Escalation triggers |
- `ai-feature-prd-flow.md` — AI feature idea to product requirements document
- `cpo-update-flow.md` — Structured CPO/leadership update generation
- `premier-upgrade-flow.md` — Client tier upgrade evaluation and documentation
- Enterprise AI teams building governed systems in regulated industries
- Product operators needing consistent AI output across different contexts
- Banking and financial services technologists implementing AI copilots with compliance constraints
- AI architects designing knowledge governance systems
- `obsidian-forge` — knowledge vault where product discovery and build documentation lives
- `aureus-rm` (private) — the RM copilot that uses `rm-copilot-pattern.md` as its behavioral foundation
MIT — structure and patterns are reusable. Policy content is context-specific and should be adapted for your organization.