Labels: enhancement (New feature or request)
Summary
Add a validate_change MCP tool that lets AI agents submit a proposed change and receive a risk score before modifying code. This closes a missing feedback loop: research indicates that static-analysis feedback reduces AI-generated bugs by 70-80% (arXiv 2508.14419).
Motivation
- AI-generated code has 1.68x more issues per PR than human-written code (CodeRabbit, Dec 2025)
- Amazon lost 6.3M orders from an AI-assisted code change (March 2026)
- No competitor offers pre-flight architectural risk scoring via MCP
- Anthropic's Code Review (March 9, 2026) analyzes "logic relationships between functions, modules, and dependencies"; we should be the infrastructure layer that provides this data
Proposed API
```ts
tool: validate_change

input: {
  filePath: string,
  symbol?: string,
  changeType: "modify" | "delete" | "add"
}

output: {
  riskScore: number,          // 0-1
  blastRadius: number,
  affectedFiles: string[],
  affectedTests: string[],
  metrics: { pageRank: number, betweenness: number, coupling: number, tension: number },
  recommendation: "low-risk" | "review-recommended" | "high-risk-review-required",
  hints: string[]
}
```
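To make the scoring concrete, here is a minimal sketch of how the graph metrics could be combined into `riskScore` and mapped to a recommendation band. The weights (0.35/0.25/0.25/0.15) and thresholds (0.3, 0.7) are illustrative assumptions, not a calibrated model; the real implementation would tune them against incident data.

```typescript
// Illustrative scoring sketch. Assumes each metric is pre-normalized to [0, 1].
// Weights and thresholds below are placeholders, not the final spec.
interface GraphMetrics {
  pageRank: number;    // structural importance of the symbol/file
  betweenness: number; // how often it sits on dependency paths
  coupling: number;    // afferent + efferent coupling, normalized
  tension: number;     // architectural tension metric, normalized
}

type Recommendation =
  | "low-risk"
  | "review-recommended"
  | "high-risk-review-required";

function scoreChange(m: GraphMetrics): {
  riskScore: number;
  recommendation: Recommendation;
} {
  // Weighted sum; weights are an assumption pending calibration.
  const riskScore =
    0.35 * m.pageRank +
    0.25 * m.betweenness +
    0.25 * m.coupling +
    0.15 * m.tension;

  // Threshold bands are also placeholders.
  const recommendation: Recommendation =
    riskScore < 0.3
      ? "low-risk"
      : riskScore < 0.7
        ? "review-recommended"
        : "high-risk-review-required";

  return { riskScore, recommendation };
}
```

A linear combination keeps the score explainable (each metric's contribution can be surfaced in `hints`), which matters more here than predictive sophistication.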
Acceptance Criteria
- Agent can query risk before modifying a file
- Returns blast radius, affected tests, and risk level
- Integrates with existing graph metrics (PageRank, betweenness, coupling)
- Works with detect_changes for diff-aware risk scoring
- Tests covering all risk levels
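For illustration, a hypothetical request/response exchange (the file path, symbol, and all numbers are invented):

```json
{
  "tool": "validate_change",
  "input": {
    "filePath": "src/auth/session.ts",
    "symbol": "refreshToken",
    "changeType": "modify"
  }
}
```

```json
{
  "riskScore": 0.82,
  "blastRadius": 14,
  "affectedFiles": ["src/api/login.ts", "src/middleware/auth.ts"],
  "affectedTests": ["tests/auth/session.test.ts"],
  "metrics": { "pageRank": 0.91, "betweenness": 0.77, "coupling": 0.8, "tension": 0.6 },
  "recommendation": "high-risk-review-required",
  "hints": ["refreshToken is on 14 dependency paths; run affected tests before committing"]
}
```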
Priority
Immediate. This is the #1 opportunity from the competitive analysis; no other MCP tool provides this.