Knowledge Base — Project Inspection Standards
⚠️ This repository contains ONLY knowledge (standards, schemas, reference docs). It has no code. Code implementations live in auto-evolve.
The Nineteen-Perspective Project Inspection Standard Library — the evaluation engine behind auto-evolve.
Every project, nineteen lenses: User → Product → Project → Tech → Market Influence → Business Sustainability → Security → Performance → Testing → Integration → Observability → Documentation → i18n → Accessibility → Reliability → Cost Efficiency → Compatibility → Industry Vertical → Business/Compliance. One consistent standard.
Version: v1.1.0 | Total checks: 116 | License: MIT
Chinese: README.zh-CN.md
"Is this project any good?"
↓
Developer A: "Code looks fine"
Developer B: "Features are implemented"
Developer C: "I think it's usable"
No common language → Everyone talks past each other.
project-standard gives project evaluation a shared language — nineteen perspectives + consistent standards = comparable, trackable inspection results.
Not nineteen teams' work — nineteen lenses for one person to examine any project:
| Perspective | Ask Yourself | Finds |
|---|---|---|
| 👤 User | Is it pleasant to use? | CLI design, error messages, learning curve |
| 📦 Product | Does it deliver what it promises? | README vs reality, unresolved pain points |
| 🏗 Project | Is it managed well? | Learnings loop, inspection rhythm |
| ⚙️ Tech | Is the code healthy? | Duplicates, complexity, dependencies |
| 📊 Market Influence | Is it visible and growing? | Stars, ecosystem, competitive edge |
| 💼 Business Sustainability | Can it survive long-term? | Funding, governance, maintenance plan |
| 🔒 Security | Is it vulnerable to attacks? | Injection, XSS, auth, secrets, threat modeling |
| ⚡ Performance | Is it efficient under load? | Latency budgets, memory, DB efficiency |
| 🧪 Testing | Is it protected from regressions? | Coverage, CI, integration, strategy |
| 🔗 Integration | Does it integrate cleanly? | Dependency health, API contracts |
| 📡 Observability | Can you diagnose it when it breaks? | Logs, metrics, traces, alerting |
| 📚 Documentation | Can users and contributors succeed? | Onboarding, reference, architecture |
| 🌍 i18n | Can global users use it? | Locale, RTL, content localization |
| ♿ Accessibility | Can users with disabilities use it? | WCAG, keyboard nav, screen reader |
| 🔄 Reliability | Does it fail gracefully? | SLA, failover, backup, DR |
| 💰 Cost Efficiency | Is it cost-effective? | Compute, storage, AI/ML, CI/CD costs |
| 🔀 Compatibility | Can it evolve without breaking users? | API versioning, backward/forward compat |
| 🏢 Industry Vertical | Does it meet industry requirements? | HIPAA/PCI-DSS/FERPA/FedRAMP/IEC 62443 |
| ⚖️ Business/Compliance | Is it legally compliant? | License compliance, data privacy, IP |
| Tier | Perspectives | Behavior |
|---|---|---|
| Required | User, Product, Project, Tech, Security, Testing | Always active, cannot be disabled |
| Type-Required | Performance, Integration, Observability, Documentation, i18n, Accessibility, Reliability, Cost Efficiency, Compatibility, Business/Compliance | Active based on project type |
| Optional | Market Influence, Business Sustainability, Industry Vertical | Off by default, enable explicitly |
| Mode | Scans | Use When |
|---|---|---|
| Quick | Required + Type-Required perspectives | Daily CI, fast feedback |
| Full | All 19 perspectives | Release inspection, comprehensive review |
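The tier and mode tables above can be sketched as a perspective-resolution function. This is an illustrative sketch only: the set names and the function signature are assumptions, not part of the published scanner contract.

```python
# Hypothetical resolver for which perspectives a scan runs.
# Tier membership comes from the tables above; names are illustrative.

REQUIRED = {"user", "product", "project", "tech", "security", "testing"}
TYPE_REQUIRED = {
    "performance", "integration", "observability", "documentation",
    "i18n", "accessibility", "reliability", "cost-efficiency",
    "compatibility", "business-compliance",
}
OPTIONAL = {"market-influence", "business-sustainability", "industry-vertical"}

def active_perspectives(mode: str, project_type_required: set,
                        enabled_optional: set = frozenset()) -> set:
    """Quick mode: Required + this project type's Type-Required subset.
    Full mode: all 19 perspectives, Optional included."""
    active = REQUIRED | (project_type_required & TYPE_REQUIRED)
    if mode == "full":
        active |= TYPE_REQUIRED | OPTIONAL
    else:
        # Optional perspectives never activate implicitly.
        active |= enabled_optional & OPTIONAL
    return active
```

Note that a Quick scan still always includes the six Required perspectives, regardless of project type.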
Level 1: Business Form (determines which perspectives are active)
├── Frontend → Web / Mobile / Desktop / Plugin / Mini-app
├── Backend → REST API / Microservice / CLI / DevOps / Middleware
├── AI/Agent → Skill / Agent / ML Pipeline / AI Service
├── Infrastructure → IoT / Blockchain / Data Pipeline
├── Content → SSG Docs / API Docs / Static Blog
└── Generic
Level 2: Tech Stack (determines specific check items)
Python / JavaScript-TS / Go / Rust / Java / PHP / Solidity / ...
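The two-level taxonomy above is essentially a lookup table: Level 1 (business form) selects active perspectives, Level 2 (tech stack) selects concrete check items. A minimal sketch, with key names as assumptions (the authoritative structure lives in references/project-types.md):

```python
# Illustrative data form of the two-level taxonomy.
BUSINESS_FORMS = {
    "frontend":       ["web", "mobile", "desktop", "plugin", "mini-app"],
    "backend":        ["rest-api", "microservice", "cli", "devops", "middleware"],
    "ai-agent":       ["skill", "agent", "ml-pipeline", "ai-service"],
    "infrastructure": ["iot", "blockchain", "data-pipeline"],
    "content":        ["ssg-docs", "api-docs", "static-blog"],
    "generic":        [],
}

def classify(business_form: str, tech_stack: str) -> tuple:
    """Resolve a project to (Level 1, Level 2); unknown forms fall back to Generic."""
    if business_form not in BUSINESS_FORMS:
        business_form = "generic"
    return business_form, tech_stack
```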
references/
├── user/ ← User experience checklist (9 checks)
├── product-requirements.md ← Product promises vs reality
├── project-inspection.md ← Project health checklist
├── project-types.md ← Taxonomy + weights + scan modes
├── tech/ ← Code quality checklist
├── market-influence/ ← Community traction + competitive visibility
├── business-sustainability/ ← Funding + governance + longevity
├── security/ ← Security audit + threat modeling (27 checks)
├── performance/ ← Performance budgets + efficiency (18 checks)
├── testing/ ← Test coverage + CI matrix (17 checks)
├── integration/ ← Dependency health + API contracts
├── observability/ ← Logs + metrics + tracing + alerting
├── documentation/ ← Onboarding + reference + architecture
├── i18n/ ← Locale support + RTL + content localization
├── accessibility/ ← WCAG + keyboard nav + screen reader
├── reliability/ ← SLA + failover + backup + DR
├── cost-efficiency/ ← Compute + storage + AI/ML + CI/CD costs
├── compatibility/ ← API versioning + backward/forward compat
├── industry-vertical/ ← Healthcare/Finance/Education/Gov/IoT (45 checks)
├── business-compliance/ ← License compliance + data privacy + IP + regulatory
│
├── scoring-algorithm.md ← Unified scoring methodology
├── arbitration-rules.md ← Perspective boundaries + conflict resolution
├── perspective-dependencies.md ← Scanner execution order + data flow
├── version-migration.md ← Version evolution + breaking change policy
├── user-interaction-protocol.md ← Tier 1-3 confirmation flow
├── perspective-config-schema.md ← perspective-config.yaml complete schema
├── perspective-config.example.yaml ← Annotated config example
├── project-types-examples/ ← Per-type config templates
│ ├── ai-agent.example.yaml
│ ├── backend.example.yaml
│ └── frontend.example.yaml
├── check-registry/ ← All 116 check IDs (source of truth)
├── fix-action-registry/ ← Standardized fix action vocabulary
├── scanner-contract/ ← Scanner interface specification
├── report-generator/ ← Report format specs
├── scan-history/ ← Storage format + trend definition
├── auto-evolve-integration.md ← How auto-evolve loads and executes
│
├── code-standards.md ← Code quality + naming conventions
├── architecture.md ← Architecture standards + review template
├── security.md ← Security guidelines
├── git-workflow.md ← Git workflow + PR template
├── quality.md ← Test tiers + 100% coverage requirement
├── risk-management.md ← Risk management + Pre-mortem method
├── product-design.md ← Product design + Jobs anti-entropy principles
├── metrics.md ← Product metrics system
└── inspection-template.md ← Inspection report template
Framework docs (at root):
├── README.md ← This file
├── README.zh-CN.md ← Chinese version
├── CHANGELOG.md ← Version history
├── VERSION ← Current version (1.1.0)
├── QUICK-REFERENCE.md ← One-page summary card
└── CONTRIBUTING.md ← Contributor quick start guide
clawhub install project-standard
clawhub install auto-evolve
# Done — auto-evolve auto-loads the standard library
python3 scripts/auto-evolve.py scan --dry-run

# Read a perspective checklist

cat references/user/user-perspective.md
# See all check statistics
python3 scripts/check_stats.py
# Validate everything
python3 scripts/validate_check_ids.py
python3 scripts/validate_links.py
python3 scripts/validate_registry.py

Total: 116 checks across 19 perspectives
By perspective: SEC 27 | PERF 18 | TEST 17 | USR 9 | IND 45
By severity: Critical 11 | High 34 | Medium 23 | Low 3
Auto-actionable: 27 (38%) | Human judgment: 44 (62%)
See references/project-types.md for the complete weight matrix.
| Business Form | Required | Type-Required Weight Range |
|---|---|---|
| Frontend | 5 always on | a11y 15%, docs 10%, perf 10% |
| Backend | 5 always on | reliability 20%, compat 20%, sec 20% |
| AI/Agent | 5 always on | cost 25%, perf 10%, testing 15% |
| Infrastructure | 5 always on | sec 30%, reliability 25%, cost 20% |
| Content/Docs | 3 of 5 | docs 40%, i18n 10%, a11y 5% |
| Generic | 5 always on | balanced across all |
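The weight matrix above feeds the unified scoring methodology in references/scoring-algorithm.md. A minimal weighted-average sketch, using illustrative numbers loosely based on the Backend row (the real normalization rules are defined in the scoring doc, not here):

```python
def weighted_score(scores: dict, weights: dict) -> float:
    """Combine per-perspective scores (0-100) by type-specific weights,
    normalized so partial weight sets still yield a 0-100 result."""
    total_weight = sum(weights.values())
    return sum(scores[p] * w for p, w in weights.items()) / total_weight

# Illustrative Backend-style weights and scores:
backend_weights = {"reliability": 0.20, "compatibility": 0.20, "security": 0.20}
scores = {"reliability": 90.0, "compatibility": 80.0, "security": 70.0}
print(weighted_score(scores, backend_weights))
```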
"Assume the project has already failed. Why?" This prospective-hindsight framing has been reported to surface roughly 30% more problems than traditional forward-looking risk assessment.
┌─────────────────────────────────────┐
│ 👥 PEOPLE │ ⚙️ PROCESS │ 💻 TECH │ 🌍 EXTERNAL │
│ Key person left │ Aggressive timeline │ Scaling failed │ Market shift │
└─────────────────────────────────────┘
"Don't ask users what they want — they don't know until you show them."
- Focus is saying no (cut 350 ideas, ship 10)
- Redefine the category (not first to market, but best execution)
- Perfect the invisible (good wood on the cabinet back)
| Tier | Ratio | Covers |
|---|---|---|
| Unit | 70% | Core logic, utility functions |
| Integration | 20% | Cross-module interactions |
| E2E | 10% | Core user paths |
Core logic (Logic/Service/Dao) requires 100% coverage.
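The 70/20/10 split above is a target ratio for distributing a test budget, not a hard rule. A back-of-the-envelope helper (the function name is illustrative):

```python
# Target distribution of the test pyramid from the table above.
TIER_RATIOS = {"unit": 0.70, "integration": 0.20, "e2e": 0.10}

def tier_targets(total_tests: int) -> dict:
    """Distribute a total test budget across tiers by the pyramid ratio."""
    return {tier: round(total_tests * ratio) for tier, ratio in TIER_RATIOS.items()}

print(tier_targets(200))  # {'unit': 140, 'integration': 40, 'e2e': 20}
```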
project-standard ← Defines standards (referee, read-only knowledge base)
↑
│ 19 perspectives + 116 checks + arbitration rules
│
auto-evolve ← Executes inspection (athlete, executable code)
↓
findings → user confirmation → closed-loop improvement
Clear boundary: project-standard has no code, no scanners, no LLM calls. auto-evolve has all the execution logic. They never cross.
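In practice this boundary means auto-evolve treats project-standard as a plain read-only file tree. A hedged sketch of what a loader might look like; the function name and directory layout assumptions mirror the references/ tree above, but this is not auto-evolve's actual API (see references/auto-evolve-integration.md for that):

```python
from pathlib import Path

def load_checklist(standard_root: Path, perspective: str) -> str:
    """Read a perspective checklist from the knowledge base, never writing back.
    Handles both directory perspectives (e.g. references/user/) and
    single-file ones (e.g. references/security.md)."""
    doc = standard_root / "references" / perspective
    if doc.is_dir():
        files = sorted(doc.glob("*.md"))
    else:
        files = [doc.with_suffix(".md")]
    return "\n\n".join(f.read_text(encoding="utf-8") for f in files if f.exists())
```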
python3 scripts/validate_check_ids.py # Detect duplicate IDs, format validation
python3 scripts/validate_links.py # Check markdown internal links
python3 scripts/validate_registry.py # Validate registry table completeness
python3 scripts/check_stats.py # Check statistics + distribution
python3 scripts/check_alignment.py   # Registry ↔ perspective doc alignment

- auto-evolve — Nineteen-perspective automated inspection engine
- SoulForce — AI Agent memory evolution system
- hawk-bridge — OpenClaw context memory integration
MIT