feat: add validate_change MCP tool — pre-flight risk scoring for AI agents #17

@bntvllnt


Summary

Add a validate_change MCP tool that lets AI agents submit a proposed change and receive a risk score before modifying code. This closes the missing feedback loop: research indicates that static-analysis feedback reduces AI-generated bugs by 70-80% (arXiv 2508.14419).

Motivation

  • AI-generated code has 1.68x more issues per PR than human code (CodeRabbit Dec 2025)
  • Amazon lost 6.3M orders from an AI-assisted code change (March 2026)
  • No competitor offers pre-flight architectural risk scoring via MCP
  • Anthropic's Code Review (March 9, 2026) analyzes "logic relationships between functions, modules, and dependencies" — we should be the infrastructure layer providing this data

Proposed API

```ts
tool: validate_change

input: {
  filePath: string,
  symbol?: string,
  changeType: "modify" | "delete" | "add"
}

output: {
  riskScore: number,        // normalized to [0, 1]
  blastRadius: number,
  affectedFiles: string[],
  affectedTests: string[],
  metrics: { pageRank, betweenness, coupling, tension },
  recommendation: "low-risk" | "review-recommended" | "high-risk-review-required",
  hints: string[],
}
```
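As a sketch of the response shape, the schema above could map to TypeScript types like the following. The `scoreToRecommendation` helper and its 0.3 / 0.7 cutoffs are illustrative assumptions for discussion, not part of this proposal:

```typescript
// Hypothetical types mirroring the proposed validate_change schema.
type ChangeType = "modify" | "delete" | "add";

interface ValidateChangeInput {
  filePath: string;
  symbol?: string;
  changeType: ChangeType;
}

interface GraphMetrics {
  pageRank: number;
  betweenness: number;
  coupling: number;
  tension: number;
}

type Recommendation =
  | "low-risk"
  | "review-recommended"
  | "high-risk-review-required";

interface ValidateChangeOutput {
  riskScore: number; // normalized to [0, 1]
  blastRadius: number;
  affectedFiles: string[];
  affectedTests: string[];
  metrics: GraphMetrics;
  recommendation: Recommendation;
  hints: string[];
}

// Illustrative mapping from a normalized score to a recommendation;
// the 0.3 / 0.7 thresholds are placeholders, not agreed cutoffs.
function scoreToRecommendation(riskScore: number): Recommendation {
  if (riskScore < 0.3) return "low-risk";
  if (riskScore < 0.7) return "review-recommended";
  return "high-risk-review-required";
}
```

Exposing the thresholds (or making them configurable per repo) would let consumers tune how aggressively the tool blocks changes.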

Acceptance Criteria

  • Agent can query risk before modifying a file
  • Returns blast radius, affected tests, and risk level
  • Integrates with existing graph metrics (PageRank, betweenness, coupling)
  • Works with detect_changes for diff-aware risk scoring
  • Tests covering all risk levels
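Since the criteria above call for integrating the existing graph metrics, here is one possible way to fold them into a single riskScore. The weights are placeholders for discussion, and the function assumes each metric is pre-normalized to [0, 1]:

```typescript
// Hypothetical weighted combination of the existing graph metrics
// into a single riskScore in [0, 1]. Weights are placeholders.
interface Metrics {
  pageRank: number;    // assumed pre-normalized to [0, 1]
  betweenness: number; // assumed pre-normalized to [0, 1]
  coupling: number;    // assumed pre-normalized to [0, 1]
  tension: number;     // assumed pre-normalized to [0, 1]
}

function computeRiskScore(m: Metrics): number {
  const weighted =
    0.35 * m.pageRank +
    0.25 * m.betweenness +
    0.25 * m.coupling +
    0.15 * m.tension;
  // Clamp defensively in case inputs are not normalized.
  return Math.min(1, Math.max(0, weighted));
}
```

A linear blend keeps the score explainable (each hint can cite the dominant term), which matters if agents are expected to act on the recommendation.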

Priority

Immediate — This is the #1 opportunity from competitive analysis. No other MCP tool provides this.
