# Performance Deity ⚡

License: MIT

A performance engineering plugin for Claude Code — and a CLAUDE.md template for any AI coding assistant.

Nine benchmark-driven skills that enforce a strict, phased performance methodology. Every skill requires proof before presenting a change: a before/after table with real numbers. No guessing.


## Install via Claude Code Plugin

```shell
/plugin marketplace add https://github.com/v0idOS/performance-deity
/plugin install performance-deity@performance-deity
```

This gives you nine slash commands in Claude Code:

| Command | What it does |
| --- | --- |
| `/optimize` | Benchmarks, analyzes Big-O, rewrites, proves improvement |
| `/memory` | Instruments memory, proves the leak grows, fixes the reference |
| `/concurrency` | Fires 1k–10k concurrent requests, proves failure, adds atomic ops |
| `/database` | Runs `EXPLAIN ANALYZE`, kills N+1 loops, provides `CREATE INDEX` statements |
| `/stress-test` | Bombards code with extreme inputs, documents crashes, hardens the boundary |
| `/network` | Profiles payload bytes, enforces Brotli/ETag, migrates to GraphQL/protobuf |
| `/build` | Fixes Docker layer order, Webpack caching, GitHub Actions pipelines |
| `/telemetry` | Wraps code in OpenTelemetry spans, proposes concrete alert rules |
| `/ui` | Audits React re-renders, GPU-accelerates animations, virtualizes long lists |

## Or: Drop One File (No Plugin Required)

If you're not using Claude Code's plugin system, copy `CLAUDE.md` directly into your project root. Claude Code reads it automatically on every session — no flags, no config.

```shell
git clone https://github.com/v0idOS/performance-deity
cp performance-deity/CLAUDE.md /your/project/CLAUDE.md
```

For Cursor or Windsurf:

```shell
cp performance-deity/CLAUDE.md /your/project/.cursorrules
```

## Global Rules (Enforced by Both Methods)

- Never suggest a code change for performance without benchmarking the existing code first.
- Every optimization must include a before/after comparison table with real numbers.
- Prefer algorithmic improvements over micro-optimizations.
- If a proposed change does not measurably beat the baseline, discard it and try a different approach.
- Every final report must explain why the change is faster, grounded in CPU architecture, memory layout, or I/O behavior.

## Ephemeral Benchmarking (True Zero-Config)

Unlike other plugins that require you to copy a `tools/` folder into your project, Performance Deity generates its own benchmarking tools on the fly.

Whenever Claude needs to prove a latency reduction, it writes an ephemeral micro-benchmark script in the target language (Node.js, Python, Rust, etc.), runs it to collect P95 and average execution times (including a warm-up phase to prime JIT caches), and then deletes the script. No clutter, no dependencies.
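
The generated scripts are ephemeral and language-specific, so none ship in the repo, but the pattern is easy to picture. A minimal Python sketch (the `bench` helper and its parameters are illustrative, not part of the plugin):

```python
import statistics
import time

def bench(fn, warmup=50, runs=500):
    """Time fn: warm up first to prime caches/JIT, then collect samples."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    samples.sort()
    p95 = samples[int(len(samples) * 0.95) - 1]  # 95th-percentile latency
    return {"avg_ms": statistics.fmean(samples), "p95_ms": p95}

# Compare a baseline against a candidate rewrite of the same work.
baseline = bench(lambda: sum(i * i for i in range(1000)))
candidate = bench(lambda: sum(map(lambda i: i * i, range(1000))))
print(f"baseline  avg={baseline['avg_ms']:.4f}ms  p95={baseline['p95_ms']:.4f}ms")
print(f"candidate avg={candidate['avg_ms']:.4f}ms  p95={candidate['p95_ms']:.4f}ms")
```

The numbers from both runs feed the required before/after table; if the candidate's P95 doesn't beat the baseline's, the change is discarded.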


## Pre-Commit Hook (Optional)

Blocks commits containing unresolved `TODO: optimize` markers:

```shell
cp hooks/pre-commit .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
```
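
The core check such a hook performs can be sketched in a few lines of Python (an illustrative equivalent, not the contents of the repo's actual `hooks/pre-commit`):

```python
import re

# Pattern the hook rejects; matches 'TODO: optimize' case-insensitively.
MARKER = re.compile(r"TODO:\s*optimize", re.IGNORECASE)

def find_markers(path):
    """Return 1-based line numbers with an unresolved optimize marker."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        return [n for n, line in enumerate(f, 1) if MARKER.search(line)]

def check_files(paths):
    """Print offending locations; return True when the commit may proceed.

    A real hook would take `paths` from `git diff --cached --name-only`
    and exit non-zero when this returns False, aborting the commit.
    """
    ok = True
    for path in paths:
        for n in find_markers(path):
            print(f"{path}:{n}: unresolved 'TODO: optimize' marker")
            ok = False
    return ok
```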

## Repo Structure

```
.
├── CLAUDE.md                   ← Drop-in template for any AI assistant
├── README.md
├── LICENSE                     ← MIT
├── .claude-plugin/
│   ├── plugin.json             ← Plugin manifest (Claude Code plugin system)
│   └── marketplace.json        ← Marketplace catalog (buildwithclaude.com)
├── skills/                     ← Slash command skill files (Claude Code plugin)
│   ├── optimize/SKILL.md
│   ├── memory/SKILL.md
│   ├── concurrency/SKILL.md
│   ├── database/SKILL.md
│   ├── stress-test/SKILL.md
│   ├── network/SKILL.md
│   ├── build/SKILL.md
│   ├── telemetry/SKILL.md
│   └── ui/SKILL.md
├── hooks/
│   └── pre-commit              ← Optional git hook
└── archive/                    ← Original SKILL.md files (pre-refactor)
```

## Roadmap

- Cross-platform benchmark runners (Python, Node.js, PowerShell, Bash)
- All nine skills in a single CLAUDE.md
- Claude Code plugin with proper marketplace.json and slash commands
- MIT license
- GitHub Actions workflow: run benchmarks on PRs, fail on P95 regression
- .cursorrules variant maintained in sync with CLAUDE.md

## License

MIT


Built for developers who demand blazing-fast software.
