cm-committee — A Seven-Voice Methodology Council for Claude Code

A multi-perspective Claude Code skill that turns a methodology question into a seven-voice scholarly debate. Each voice is a separately deployable persona skill distilled from primary sources via a validated 6-phase pipeline.



What this is

cm-committee is a Claude Code skill that orchestrates a structured debate among seven distilled scholarly perspectives when you face a methodology decision. It's also, equivalently, seven standalone persona skills you can invoke individually — each one is a thinking lens grounded in primary sources, with explicit refusals where the lens doesn't apply.

The seven voices:

| # | Scholar | What they uniquely answer |
|---|---------|---------------------------|
| 1 | John Sterman (MIT Sloan) | System-dynamics modelling, climate-policy simulation, the Beer Game, Business Dynamics textbook register |
| 2 | Jay W. Forrester (MIT, founder of System Dynamics) | Founder-status questions, stock-flow notation genesis, Industrial Dynamics / World Dynamics, the counterintuitive axiom |
| 3 | Michael Batty (Bartlett, UCL) | Urban systems / scaling laws / cities-as-systems / agent-based modelling for cities |
| 4 | Håvard Rue (KAUST, R-INLA) | Latent Gaussian models, deterministic-Laplace inference, "MCMC is simply not appropriate" register |
| 5 | Finn Lindgren (Edinburgh, SPDE/inlabru) | Continuous Gaussian fields on manifolds, SPDE/Whittle-Matérn, mesh-and-FEM construction |
| 6 | Andrew Gelman (Columbia, Stan) | Bayesian workflow, the folk theorem, MRP, posterior-predictive checks, applied-statistics pragmatism |
| 7 | Donella H. ("Dana") Meadows (Dartmouth, deceased 2001) | The twelve leverage points, paradigm-as-source diagnosis, Limits to Growth / overshoot register, "dance with the system" pedagogy |

The skills know who owns a question and who should defer to whom. Ask Gelman a continuous-Gaussian-field-on-a-manifold question and he'll route you to Lindgren; ask Lindgren about deterministic-Laplace inference and he'll route you to Rue; ask Meadows about stock-flow simulation software and she'll route you to Sterman. The boundaries are explicit and primary-source-grounded.


Why it exists

When an LLM is asked "what would Andrew Gelman think about X?", the default behaviour is to generate plausible-sounding-Gelman-shaped text from training data — confidently, without primary-source grounding, and often with fabricated quotes attributed to the scholar.

These skills do something different. Each one is the output of a six-phase distillation pipeline that:

  1. Collects the scholar's primary writings, talks, and interviews into structured research dossiers.
  2. Verifies verbatim quotes against the source corpus.
  3. Names the scholar's distinguishing certainty register, signature vocabulary, characteristic openers and closers.
  4. Encodes explicit refusals — what the scholar would NOT say, what territory belongs to other scholars, what topics fall outside the information cutoff.
  5. Validates the lens against five edge-case prompts that test the discipline.
  6. Refines through dual independent reviewer agents, one auditing persona authenticity and the other auditing cross-scholar register exclusivity.

The result is a thinking lens. It's not the scholar — it's a tool for thinking like the scholar with explicit acknowledgement of what it can and cannot do.


Install

Option A — Claude Code (recommended)

# Clone into your skills directory
cd ~/.claude/skills/
git clone https://github.com/danlinyu/cm-committee.git cm-committee-repo
# Move skill folders to the conventional location
mv cm-committee-repo/skills/* .
# (Optional) Keep methodology/ + README accessible
mv cm-committee-repo/methodology .
mv cm-committee-repo/README.md ./cm-committee-README.md

Each <scholar>-perspective/ folder is a self-contained skill. Claude Code auto-discovers all skills in ~/.claude/skills/ at session start. You should see them listed in /help skills or via the autocomplete on slash commands.

Option B — Other agent runtimes

Each SKILL.md is a self-contained system-prompt-injectable lens. To use in any other agent runtime:

  1. Read the SKILL.md of the desired perspective.
  2. Inject its content as a system prompt or persistent context message when you want the agent to reason in that perspective.
  3. The frontmatter description: field declares triggers and refusals; honour them or your agent will produce out-of-character output.
  4. The body declares answer-workflow steps, mental models, decision heuristics, expression DNA, internal tensions, and honest boundaries — load these as the runtime sees fit.

A minimal manual invocation looks like:

SYSTEM: <paste the SKILL.md body verbatim>
USER: <your methodology question>
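If your runtime needs the frontmatter separated from the body before injection, a minimal split might look like this. It is an illustrative helper, not part of the repo, and it assumes the conventional `---`-delimited YAML frontmatter block:

```python
def split_frontmatter(skill_md):
    """Split a SKILL.md string into (frontmatter, body).

    Assumes the usual ----delimited YAML block at the top of the file;
    returns ("", text) unchanged if no frontmatter is present.
    """
    if skill_md.startswith("---"):
        parts = skill_md.split("---", 2)
        if len(parts) == 3:
            return parts[1].strip(), parts[2].lstrip()
    return "", skill_md
```

The frontmatter string can then be fed to any YAML parser to recover the `description:` triggers and refusals, while the body goes into the system prompt.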

For multi-perspective debate without cm-committee orchestration, you can load 2–3 perspective skills simultaneously and explicitly route between them by question type.
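That manual composition can be sketched as follows. The header format and routing preamble are assumptions for illustration, not something the repo ships:

```python
from pathlib import Path

def build_council_prompt(skill_paths, routing_note):
    """Concatenate a few SKILL.md lenses into one system prompt,
    separated by explicit headers so the model can keep the
    per-scholar registers apart."""
    sections = []
    for path in skill_paths:
        body = Path(path).read_text(encoding="utf-8")
        # Use the skill folder name (e.g. andrew-gelman-perspective)
        # as the section header for each lens.
        sections.append(f"=== PERSPECTIVE: {Path(path).parent.name} ===\n{body}")
    return routing_note + "\n\n" + "\n\n".join(sections)
```

With two or three lenses loaded this way, the routing note at the top can state explicitly which question types belong to which voice.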


Usage

As an orchestrator (/cm-committee)

Invoke cm-committee for any methodology question that benefits from structured pluralism. The orchestrator will:

  1. Classify the question by type (modelling-deliverable / paradigm-diagnosis / inference-engine-choice / spatial-geometry-construction / Bayesian-workflow / overshoot-and-limits / etc.).
  2. Identify which 2–4 scholars uniquely speak to the question.
  3. Sequentially invoke each relevant perspective skill at depth (the full distilled lens, not a compact description).
  4. Surface points of agreement, disagreement, and division-of-labour.
  5. Close with a synthesised recommendation that names which lens to defer to and why.
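The classification-and-routing step can be illustrated as a keyword table. This is purely a sketch: the real orchestrator classifies with the model itself, and the keywords below are invented for illustration:

```python
# Hypothetical routing table: phrase fragments -> scholars who own them.
ROUTES = {
    "mcmc": ["gelman", "rue"],
    "gaussian field": ["lindgren", "rue"],
    "leverage point": ["meadows"],
    "stock": ["sterman", "forrester"],
    "urban": ["batty"],
}

def route(question):
    """Return the scholars whose territory the question touches,
    preserving first-match order and capping at four voices."""
    q = question.lower()
    scholars = []
    for key, names in ROUTES.items():
        if key in q:
            scholars += [n for n in names if n not in scholars]
    return scholars[:4]  # the orchestrator convenes 2-4 voices
```

Even this toy version shows the division-of-labour idea: one question can touch several territories, and the cap keeps the debate focused.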

Example:

> /cm-committee My team is fitting a hierarchical model with a Gaussian
  field over UK administrative regions. We're getting slow MCMC and
  divergent transitions. Who should I listen to?

The orchestrator will likely route to Gelman (folk-theorem / workflow diagnosis), Rue (R-INLA as alternative inference engine), and Lindgren (SPDE-on-mesh as the geometric construction). Each will respond in their own voice with explicit boundaries, then a synthesised recommendation lands.

As individual lenses

Each perspective skill is independently invocable:

> /donella-meadows-perspective Why does our climate-policy committee
  keep producing the same proposals year after year despite the climate
  worsening?

The lens will respond as Donella, in past tense, with paradigm-level diagnosis (Model 2), the canonical 12-leverage-points humility frame (Model 1, with the "work in progress" + "no cheap tickets to mastery" opener-and-closer), and likely a concrete vignette (the New Hampshire selectman; Foundation Farm; Nathan Gray in Guatemala).

Composing your own debate

You can also load any subset of perspective skills directly without the orchestrator and route between them yourself. This is useful when you already know which 2–3 voices you want to hear from.


What's distinctive about each lens

Each lens has a programme-level signature that distinguishes it from its peers. This is the load-bearing finding from the distillation methodology:

  • Sterman: empirical-hedge-stack with flat physical-law (FIRST PATTERN); "may, can, suggests" two-track register; "live as if there's just exactly enough time" closing aphorism (borrowed from Meadows, with attribution).
  • Forrester: founder-flat-axiom-or-flat-I-don't-know (SECOND PATTERN); "counterintuitive" as own-axiom; "PUSH IT IN THE WRONG DIRECTION" caustic; pre-Twitter NULL (SIXTH PROGRAMME PATTERN for short-form publishing).
  • Batty: philosophical-hedge-about-predictability (THIRD PATTERN); "It is still early days yet" register; British-academic essayistic mode; X-sparse-personal short-form.
  • Rue: terse-empirical-claim-with-"almost"-envelope (FOURTH PATTERN); KAUST motto "Do one thing, and do it well"; software-as-deliverable (R-INLA-the-package); never-had-Twitter-by-choice.
  • Lindgren: constructive-pluralist "Yes please!" (SIXTH PATTERN); ecosystem-of-six-packages; geometric-construction-before-computation; post-deactivation Mastodon-canonical short-form.
  • Gelman: iterative-skeptical-pluralist hedge with five mode-classes (SEVENTH PATTERN); blog-as-canonical-broadcast-surface; Stan-as-collegial-deliverable; Mayo as FIRST-NAMED-CRITIC-ANCHOR; bot-relay-only Twitter (SEVENTH PROGRAMME PATTERN).
  • Meadows: pedagogical-poetic-imperative with five mode-classes (EIGHTH PATTERN); imperative-mood as primary register; system-as-other-with-wisdom-of-its-own; control-refusal as explicit pedagogical commitment; FIRST PROGRAMME DEPLOYMENT of memorial register (past-tense, posthumous); posthumous-newspaper-column-archive (EIGHTH PROGRAMME PATTERN — Global Citizen column 1986–2001 as canonical short-form surface).

The methodology behind these skills

All seven lenses were built using the same six-phase pipeline. The methodology is summarised in methodology/extraction-framework.md and the validation tooling lives in each perspective's scripts/ folder.

The four canonical scripts (identical across all seven perspectives):

| Script | Purpose |
|--------|---------|
| quality_check.py | Validates SKILL.md against a six-criterion gate: mental-models count, limitations stated, expression-DNA density, honest boundaries, internal tensions, primary-source ratio |
| verify_primary_quotes.py | Verifies every blockquote in SKILL.md is findable in the merged research corpus (catches fabrication, paraphrase, secondary-only attribution) |
| verify_triggers.py | Scans SKILL.md trigger phrases against the rest of the skill library for collisions |
| merge_research.py | Concatenates per-dimension research dossiers into a single corpus for the verifier |

Run them on any SKILL.md to validate it:

python skills/donella-meadows-perspective/scripts/quality_check.py \
       skills/donella-meadows-perspective/SKILL.md

A passing lens hits 6/6 on quality_check, 0 unfound blockquotes on verify_primary_quotes (when the research corpus is present locally), and 0 trigger collisions library-wide on verify_triggers --all.
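The core idea of the blockquote check can be sketched as follows. This is not the shipped verify_primary_quotes.py, just an illustration of whitespace-normalised verbatim matching against the merged corpus:

```python
import re

def unverified_blockquotes(skill_md, corpus):
    """Return the blockquote lines from a SKILL.md string that cannot
    be found verbatim (modulo whitespace and case) in the research
    corpus -- candidates for fabrication or paraphrase."""
    def normalize(s):
        return re.sub(r"\s+", " ", s).strip().lower()

    corpus_norm = normalize(corpus)
    # Markdown blockquotes: lines beginning with "> ".
    quotes = [m.group(1) for m in re.finditer(r"^>\s?(.+)$", skill_md, re.M)]
    return [q for q in quotes if normalize(q) not in corpus_norm]
```

An empty return value is the pass condition; anything returned is a quote the verifier could not ground in the primary-source corpus.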


Adding a new scholar

The pipeline is reusable. To distill an eighth scholar:

  1. Phase 0.5 — Bootstrap a pilot folder with PLAN.md, copy the four canonical scripts, set up references/research/0[1-6]-*.md stubs.
  2. Phase 1 — Dispatch parallel research subagents, one per primary-source dimension (writings / conversations / expression-DNA / external-views / decisions / timeline). Each returns a structured dossier with verbatim quotes, URL anchors, and primary/secondary tags.
  3. Phase 2 — Synthesise the six dossiers into 7 mental models, 10 decision heuristics, expression DNA sub-sections, values + anti-patterns, internal tensions, intellectual lineage, honest boundaries.
  4. Phase 3 — Assemble SKILL.md from the synthesis (1,000–1,700 lines typical). Run the three quality gates.
  5. Phase 4 — Validate against five highest-leverage edge-case prompts that test the lens's exclusivity discipline and refusals.
  6. Phase 5 — Dispatch two independent reviewer agents in parallel: one audits persona authenticity (10-axis rubric); one audits cross-scholar register-exclusivity (8-axis rubric). Apply surgical edits.

Mark deployed when both reviewers PASS and all three quality gates re-pass clean.
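The library-wide trigger-collision gate can be sketched as follows (an illustration of the idea, not the shipped verify_triggers.py):

```python
def trigger_collisions(skill_triggers):
    """Given {skill_name: [trigger phrases]}, flag any phrase claimed
    by more than one skill (case-insensitive), since a shared trigger
    makes skill selection ambiguous."""
    seen = {}
    collisions = []
    for skill, phrases in skill_triggers.items():
        for phrase in phrases:
            key = phrase.lower().strip()
            if key in seen and seen[key] != skill:
                collisions.append((key, seen[key], skill))
            else:
                seen.setdefault(key, skill)
    return collisions
```

Running this across all seven perspectives plus a new eighth lens shows immediately whether the newcomer's triggers invade existing territory.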

The methodology is described in more detail in methodology/extraction-framework.md.


What's NOT in this repo

By design, the following large, copyright-sensitive artefacts are not shipped publicly:

  • references/research/0[1-6]-*.md — per-pilot research dossiers (~50–95 KB each; ~2,500–2,900 lines per pilot total) containing extensive verbatim quotation from copyrighted books and papers.
  • references/synthesis.md — Phase 2 worksheet (~150–260 KB per pilot) similarly verbatim-heavy.
  • references/sources/ — primary-source PDFs (the Sterman pilot's ~27 MB of PDFs, etc.).
  • ~/sandbox/<scholar>-pilot/PHASE-LOG.md — pilot audit trail.
  • ~/sandbox/<scholar>-pilot/transcripts/ — auto-caption transcripts of YouTube talks. Some SKILL.md files cite these as provenance pointers; the transcripts themselves stay local.

The SKILL.md files retain short verbatim signature axioms (≤50 words per quote) attributed under each scholar's primary citation. These are defensible scholarly fair-use extracts for criticism and commentary. Anyone wanting to reproduce the full pipeline locally can follow the extraction framework against their own copies of the primary sources.


Repository layout

cm-committee/
├── README.md                                    ← you are here
├── LICENSE                                      ← MIT
├── CONTRIBUTING.md
├── CITATION.cff
├── methodology/
│   └── extraction-framework.md                  ← shared 6-phase methodology
└── skills/
    ├── cm-committee/                            ← orchestrator
    │   └── SKILL.md
    ├── john-sterman-perspective/                ← perspective lens 1
    │   ├── SKILL.md
    │   └── scripts/
    │       ├── merge_research.py
    │       ├── quality_check.py
    │       ├── verify_primary_quotes.py
    │       └── verify_triggers.py
    ├── jay-forrester-perspective/               ← perspective lens 2
    │   ├── SKILL.md
    │   └── scripts/
    │       └── (same four scripts)
    ├── michael-batty-perspective/               ← perspective lens 3
    ├── havard-rue-perspective/                  ← perspective lens 4
    ├── finn-lindgren-perspective/               ← perspective lens 5
    ├── andrew-gelman-perspective/               ← perspective lens 6
    └── donella-meadows-perspective/             ← perspective lens 7

Each perspective is independently movable. To install only the Meadows lens, copy skills/donella-meadows-perspective/ into your ~/.claude/skills/ directory.


Compatibility

  • Claude Code — primary target. Skills auto-discover at session start.
  • Claude API directly — load SKILL.md body as system prompt; honour the trigger and refusal frontmatter.
  • Other LLM-agent runtimes — adaptable per the manual-invocation pattern above. The SKILL.md format is plain markdown with YAML frontmatter; no Claude-Code-specific machinery beyond the auto-discovery wiring.

The lenses themselves are model-agnostic in content. The ideal deployment uses a frontier model with strong instruction-following (Claude Sonnet 4.6+, Claude Opus 4.5+, GPT-5+ family). Smaller models will have more difficulty maintaining the persona register under load.


Citation

If you use these skills in academic work or in a paper that benefits from one of the perspectives, please cite both the underlying scholar (via their primary works listed in each SKILL.md Sources section) and this distillation:

@software{linyu_cm_committee_2026,
  author       = {Linyu, Dan},
  title        = {cm-committee: A Seven-Voice Methodology Council
                  for Claude Code},
  year         = 2026,
  url          = {https://github.com/danlinyu/cm-committee},
  version      = {1.0.0},
  license      = {MIT},
  note         = {Distilled persona skills for Sterman, Forrester,
                  Batty, Rue, Lindgren, Gelman, Meadows}
}

A CITATION.cff is included for GitHub's "Cite this repository" widget.


Attribution and copyright

This repository contains short verbatim quotations from copyrighted works by John Sterman, Jay W. Forrester, Michael Batty, Håvard Rue, Finn Lindgren, Andrew Gelman, Donella H. Meadows, and their collaborators. Each quotation appears with full primary-source attribution in the relevant SKILL.md Sources section. These quotations are reproduced under fair-use principles for the purpose of criticism, commentary, scholarship, and methodology distillation.

The original copyrighted works belong to their respective authors and publishers. This distillation is the author's expressive arrangement and methodology output, released under MIT.

If you are an author or rights-holder cited herein and have concerns about specific quotations, please open an issue on this repository and the maintainer will respond promptly.


License

MIT — Copyright © 2026 Dan Linyu.

The MIT licence applies to the distillation methodology, scripts, and the original expressive arrangement of these SKILL.md files. The underlying copyrighted works quoted herein remain the property of their respective rights-holders.


Acknowledgements

These skills were built using Claude Code with Anthropic Claude Opus as the distillation engine across multiple sessions on 2026-04-25 → 2026-04-30. The 6-phase methodology was adapted from alchaincyf/nuwa-skill.

Each scholar's contribution to systems thinking, applied statistics, spatial Bayesian methods, and urban science is the substance these skills attempt to distill — not the form. The form (verbatim register, exclusivity discipline, honest boundaries) belongs to the distillation; the substance belongs to the scholars.
