Adaptive Reasoning and Collaboration Network
A predictive operations intelligence platform that integrates siloed data sources, applies machine learning for forecasting, and provides AI-powered decision support through a multi-agent architecture.
ARCnet introduces a semantic Mixture-of-Experts (MoE) architecture for multi-agent orchestration in which expert selection is governed by embedding-based similarity, policy constraints, and diversity optimization rather than fixed routing or expensive LLM-based selection.
Each agent's fitness for a mission is computed as a weighted composite (the full specification lives in Research/Agent-Selection-Formulas.md; the form below follows the factor definitions):

$$S(i) = w_{\text{sim}} \cos(\mathbf{v}_m, \mathbf{v}_i) + w_{\text{chain}}\,\text{Chain}(i) + w_{\text{cov}}\,\text{Coverage}(i) + w_{\text{avail}}\,\text{Avail}(i) - w_{\text{load}}\,\text{Load}(i)$$

Where:

- $\mathbf{v}_m$ = mission/request embedding vector
- $\mathbf{v}_i$ = agent's doctrine centroid (embedded from MOS-scoped T&R tasks, METL, and guidance)
- $\cos(\cdot, \cdot)$ = cosine similarity measuring semantic relevance
- $\text{Chain}(i)$, $\text{Coverage}(i)$, $\text{Avail}(i)$, $\text{Load}(i)$ = governance and operational factors, with workload entering negatively to discourage overloaded agents
Agents must pass similarity and availability thresholds to enter the candidate pool:

$$\mathcal{C} = \{\, i \;:\; \cos(\mathbf{v}_m, \mathbf{v}_i) \geq \tau_{\text{sim}} \;\wedge\; \text{Avail}(i) \geq \tau_{\text{avail}} \,\}$$
The system selects agents by maximizing total score while penalizing redundant selections:

$$A^* = \arg\max_{A \subseteq \mathcal{C}} \left[ \sum_{i \in A} S(i) \;-\; \lambda \sum_{\substack{i, j \in A \\ i < j}} \cos(\mathbf{v}_i, \mathbf{v}_j) \right]$$
Subject to capacity and coverage constraints:

$$|A| \leq K \quad \text{and} \quad \text{MustInclude} \subseteq A$$
This yields a small, relevant, diverse team by explicitly discouraging agents that are too similar to each other.
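The selection above can be sketched as a greedy approximation of the penalized objective. This is a minimal illustration, not the production selector: the function name, the agent-record layout, and the default thresholds are all assumptions for the example.

```python
import math

def cos(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_team(mission_vec, agents, tau_sim=0.3, tau_avail=0.5, lam=0.5, k=3):
    """Greedy sketch of diversity-penalized team selection.

    `agents` maps agent id -> (embedding, availability, base_score);
    this record shape is illustrative, not the ARCnet schema.
    """
    # Threshold gating: build the candidate pool C
    pool = {
        i: s for i, (v, avail, s) in agents.items()
        if cos(mission_vec, v) >= tau_sim and avail >= tau_avail
    }
    team = []
    while pool and len(team) < k:
        # Marginal gain = score minus lambda-weighted similarity to the team so far
        def gain(i):
            penalty = sum(cos(agents[i][0], agents[j][0]) for j in team)
            return pool[i] - lam * penalty
        best = max(pool, key=gain)
        if gain(best) <= 0:
            break  # no remaining candidate adds net value
        team.append(best)
        del pool[best]
    return team
```

With two near-duplicate operations agents and one logistics agent, the penalty steers the second pick toward the distinct specialist even though its base score is lower, which is exactly the compact-team behavior described above.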
Combines deterministic organizational coverage with mesh-based specialist augmentation:

$$A_{\text{hybrid}} = A_{\text{org}} \cup \text{Mesh}(\mathcal{C} \setminus A_{\text{org}})$$

Where $A_{\text{org}}$ is the deterministically mapped organizational staff and the mesh term fills the remaining capacity with similarity-gated specialists.
```mermaid
flowchart LR
    M["Mission Embedding"] --> S1["Compute Similarity"]
    S1 --> S2["Composite Score S(i)"]
    S2 --> F["Filter by τ_sim, τ_avail"]
    F --> SEL{Mode}
    SEL -->|Mesh| AM["Mesh Selection:<br/>MustInclude ∪ argmax with λ penalty"]
    SEL -->|Hybrid| AH["Hybrid Selection:<br/>A_org ∪ Mesh remainder"]
    subgraph Constraints
        K1["Capacity caps"]
        K2["Coverage rules"]
        K3["Diversity penalty λ"]
    end
    Constraints -. enforce .-> AM
    Constraints -. enforce .-> AH
```
See Research/Agent-Selection-Formulas.md for the complete mathematical specification with worked examples.
| Innovation | Description | Research Relevance |
|---|---|---|
| Semantic Agent Routing | Embedding-based expert activation using cosine similarity between mission vector and agent doctrine centroids | Novel application of MoE principles to organizational decision systems; extends sparse gating to human-interpretable domains |
| Diversity-Penalized Selection | Explicit λ-weighted penalty discouraging redundant agent activation | Addresses coverage vs. efficiency tradeoff absent in standard MoE architectures |
| Policy-Constrained Optimization | Governance factors (chain-of-command, coverage requirements, workload) integrated into scoring function | Bridges AI systems research with real-world deployment constraints in human-AI teaming |
| Three Computational Pathways | Org (deterministic), Mesh (similarity-gated), Hybrid (combined) modes | Flexible architecture enabling controlled experiments on routing strategies |
| Checkpoint-Based Auditability | Evidence-linked reasoning with HITL gates aligned to Military Decision-Making Process phases | Explainable AI for high-stakes domains; supports trust calibration research |
| MOS-Scoped Doctrine Seeding | Agents receive domain knowledge via embedding similarity to T&R tasks and METL | Novel approach to expert knowledge injection without fine-tuning |
| Multi-Provider LLM Routing | Task-based model selection across providers with dual-judge COA evaluation | Cost optimization + consensus-based scoring for robust decisions |
ARCnet supports plug-and-play multi-provider LLM configuration with task-based routing and dual-judge evaluation.
Users configure their own API keys and endpoints for multiple providers:
```swift
let registry = LLMProviderRegistry.dualProvider(
    openAIKey: "sk-...",
    anthropicKey: "sk-ant-..."
)
// Or custom endpoints for local/enterprise models
```

Different pipeline stages route to appropriate model tiers based on task complexity:
| Pipeline Stage | Model Tier | Rationale |
|---|---|---|
| Scribe, Coordinator, Parsing | Fast (GPT-4o-mini, Haiku) | High-volume, simple tasks |
| Specialists, Integrator, Tasking | Standard (GPT-4o, Sonnet) | Balanced quality/cost |
| COA Generator, Evaluator, Judge | Powerful (GPT-4.5, Opus) | Complex reasoning, critical decisions |
```swift
// Router automatically selects tier based on stage
let response = try await router.complete(
    stage: .coaGenerator, // → powerful tier
    messages: messages
)
```

Critical COA scoring uses two independent LLM judges from different providers:
```
        ┌─────────────────┐
        │  COA Candidate  │
        └────────┬────────┘
                 │
 ┌───────────────┼───────────────┐
 ▼                               ▼
┌────────────┐            ┌────────────┐
│  Judge 1   │            │  Judge 2   │
│ (Opus 4.5) │            │ (GPT-4.5)  │
└─────┬──────┘            └─────┬──────┘
      │                         │
      ▼                         ▼
┌───────────────────────────────┐
│   Consensus Score (weighted)  │
│   Disagreement Detection      │
│   → Flag for review if >0.2   │
└───────────────────────────────┘
```
Benefits:
- Reduced single-model bias: Two perspectives catch blind spots
- Consensus confidence: Agreement between judges increases score reliability
- Automatic review flagging: High disagreement triggers human review
- Provider flexibility: Mix providers based on cost/capability tradeoffs
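The consensus step can be sketched in a few lines. This is an illustrative stand-in for the production scorer; the function name, the 0.5 default weighting, and the 0.2 review threshold (which matches the diagram above) are assumptions.

```python
def consensus(score_a, score_b, weight_a=0.5, disagreement_threshold=0.2):
    """Combine two judge scores in [0, 1] into a weighted consensus.

    Returns (consensus_score, needs_review): high disagreement between
    the two judges flags the COA for human review.
    """
    weight_b = 1.0 - weight_a
    combined = weight_a * score_a + weight_b * score_b
    disagreement = abs(score_a - score_b)
    return combined, disagreement > disagreement_threshold
```

For example, judges scoring 0.9 and 0.5 produce a 0.7 consensus but trip the disagreement flag, routing the COA to a human reviewer.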
| Aspect | ARCnet | Standard MoE (Mixtral, GPT-4) | LangChain/LangGraph | AutoGen/CrewAI |
|---|---|---|---|---|
| Routing Mechanism | Semantic similarity + policy constraints | Learned gating network (neural) | Hardcoded or LLM-selected | LLM-based delegation |
| Diversity Control | Explicit redundancy penalty (λ) | Implicit via training loss | None | None |
| Expert Granularity | Organizational roles (human-interpretable) | Neural network sublayers | Tool/agent definitions | Agent personas |
| Governance Integration | First-class (chain-of-command, coverage, workload) | N/A | Manual implementation | Manual implementation |
| Human Oversight | Configurable HITL/HOTL/AUTO gates per stage | N/A | Optional callbacks | Optional human-in-loop |
| Auditability | Checkpoint ledger with evidence citations | Attention weights (limited) | Trace logging | Conversation history |
| Domain Adaptation | Configuration-driven (schema mapping) | Requires retraining/fine-tuning | Prompt engineering | Prompt engineering |
| Multi-Provider LLM | Task-based routing + dual-judge evaluation | Single provider | Single provider | Single provider |
- Mathematical Rigor: Unlike prompt-based agent frameworks, ARCnet's routing is governed by explicit optimization with provable properties (diversity penalty, capacity constraints).
- Organizational Grounding: Agents map to real organizational structures (G-shops, billets, MOS codes), enabling direct comparison between AI-assisted and traditional staff workflows.
- Governance by Design: Policy constraints aren't afterthoughts; they're integrated into the selection objective function, ensuring compliance without post-hoc filtering.
Traditional multi-agent AI systems face a fundamental routing problem:
| Approach | Problem |
|---|---|
| Fixed routing (always consult the same experts) | Inefficient; irrelevant experts consume tokens and add noise |
| LLM-based routing (ask an LLM who to consult) | Expensive; non-deterministic; no formal guarantees |
| Rule-based routing (if X then agent Y) | Brittle; requires manual maintenance; poor generalization |
ARCnet's semantic MoE routing provides a mathematically principled middle ground:
- Efficiency: Activate only agents whose doctrine centroids are semantically close to the mission (cosine similarity gating)
- Coverage: Mandatory shops always included via policy constraints, regardless of similarity scores
- Diversity: Redundant expertise penalized explicitly, yielding compact teams
- Determinism: Given the same inputs, routing is reproducible (no LLM stochasticity in selection)
- Interpretability: Selection rationale is human-readable (similarity scores, constraint satisfaction)
This approach reduces LLM calls by 40-60% compared to fixed full-staff activation while improving contextual relevance of agent outputs.
| Section | Description |
|---|---|
| Research Contribution | Mathematical formulation of semantic MoE routing |
| Key Innovations | Summary of novel contributions |
| Comparison to Related Work | Positioning against existing frameworks |
| System Overview | What ARCnet does and how |
| Architecture | Technical architecture diagrams |
| Research Documentation | Papers, formulas, and proposals |
| Implementation Details | Technical stack, setup, repository structure |
Modern organizations struggle with fragmented data across financial systems, maintenance records, operational schedules, and asset inventories. Decision-makers receive delayed, incomplete pictures of organizational readiness—often discovering problems at execution time rather than during planning.
ARCnet addresses this by creating an Operational Digital Twin: a unified system that ingests disparate data sources, predicts future states using machine learning, surfaces risks proactively, and provides AI agents that reason transparently about complex decisions.
| Capability | Description |
|---|---|
| Semantic Agent Orchestration | Multi-agent system with MoE-style routing based on mission-doctrine similarity |
| Predictive ML Models | XGBoost/SARIMAX for maintenance forecasting, budget burn-rate, risk scoring |
| Natural Language Interface | Text-to-SQL for querying operational data |
| Computer Vision | Document OCR, visual inspection, dashboard parsing |
| RL Optimization | Policy gradient methods for resource allocation and scheduling |
| Human Oversight | Configurable HITL/HOTL/AUTO gates with checkpoint auditability |
┌─────────────────────────────────────┐
│ Presentation Layer │
│ ┌─────────┐ ┌─────────┐ ┌────────┐ │
│ │ iPad App│ │Dashboard│ │ API │ │
│ └────┬────┘ └────┬────┘ └───┬────┘ │
└───────┼──────────┼───────────┼──────┘
│ │ │
┌───────────────────────┴──────────┴───────────┴───────────────────────┐
│ Agent Orchestration Layer │
│ ┌──────────────────────────────────────────────────────────────────┐ │
│ │ Scribe → Coordinator → Specialists → Integrator → Evaluator │ │
│ │ │ │ │
│ │ ┌─────────────────┼─────────────────┐ │ │
│ │ ▼ ▼ ▼ │ │
│ │ [Operations] [Logistics] [Finance] │ │
│ │ │ │
│ │ Human-in-the-Loop ◄──► Checkpoints ◄──► Audit Trail │ │
│ └──────────────────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────────┘
│
┌───────────────────────────────────┴───────────────────────────────────┐
│ Intelligence Layer │
│ ┌────────────────┐ ┌────────────────┐ ┌────────────────┐ │
│ │ Predictive │ │ Budget/Burn │ │ Readiness │ │
│ │ Maintenance │ │ Rate Forecast │ │ Risk Scoring │ │
│ └───────┬────────┘ └───────┬────────┘ └───────┬────────┘ │
│ │ │ │ │
│ ┌───────┴───────────────────┴───────────────────┴───────┐ │
│ │ ML Inference Engine │ │
│ │ XGBoost │ Time-Series │ Risk Ensemble │ RL Optimizer │ │
│ └───────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────────┘
│
┌───────────────────────────────────┴───────────────────────────────────┐
│ Data Layer │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Analytical Warehouse │ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ dim_org_unit │ │ dim_asset │ │ dim_event │ │ │
│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ fact_budget │ │ fact_maint │ │ fact_orders │ │ │
│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────────┘
▲
┌───────────────────────────────────┴───────────────────────────────────┐
│ ETL Pipeline │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ File Detect │→ │ Column Map │→ │ Staging │→ │ Transform │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
│ ▲ │
│ │ │
│ ┌──────┴──────────────────────────────────────────────────────────┐ │
│ │ Excel │ CSV │ Scanned Documents (OCR) │ API Exports │ Photos │ │
│ └─────────────────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────────┘
```mermaid
flowchart LR
    subgraph "Gate A: Problem Framing"
        A1[Mission Input] --> A2[Scribe]
        A2 --> A3[Coordinator]
    end
    subgraph "Specialist Analysis"
        A3 --> B1[Agent Selection]
        B1 --> B2[Specialists in Parallel]
    end
    subgraph "Gate B: Integration"
        B2 --> C1[Integrator]
        C1 --> C2[Evaluator]
    end
    subgraph "Gate C: COA Tournament"
        C2 -->|Complex| D1[COA Generator]
        D1 --> D2[Rank & Score]
    end
    subgraph "Gate D: Tasking"
        C2 -->|Simple| E1[Proposed Plan]
        D2 --> E1
        E1 --> E2[Approval]
        E2 -->|Approve| E3[Tasking/Orders]
        E2 -->|Reject| A3
    end
    style E2 fill:#fff3cd,stroke:#c69500
```
Additional diagrams: See Research/Diagrams/ for Mermaid sources and rendered PNGs.
Problem: Organizations maintain critical data in disconnected spreadsheets, legacy systems, and manual reports with inconsistent formats.
Solution: Automated ETL pipeline that:
- Detects file types via filename patterns and header inspection
- Maps heterogeneous column names to canonical schema
- Loads raw data into staging tables (all TEXT to prevent import errors)
- Transforms and validates into dimensional model (facts + dimensions)
- Maintains full lineage and audit trail
```
 Data Sources          Staging Layer            Analytical Warehouse
┌──────────────┐      ┌──────────────┐      ┌──────────────────────┐
│ Budget.xlsx  │──┐   │ stg_budget   │      │ dim_org_unit         │
├──────────────┤  │   ├──────────────┤      │ dim_asset            │
│ Maint_Log.csv│──┼──[Detect]──│ stg_maint │──[Transform]─│ dim_event │
├──────────────┤  │   ├──────────────┤      │ fact_budget_execution│
│ Schedule.xlsx│──┘   │ stg_events   │      │ fact_maintenance     │
└──────────────┘      └──────────────┘      │ fact_work_orders     │
                                            └──────────────────────┘
```
Technical Implementation:
- Python-based ingestion with pandas for file parsing
- Configuration-driven column mapping (new file formats require config only, not code)
- PostgreSQL warehouse with star schema design
- Incremental loading with change detection
- Data quality scoring and anomaly flagging
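The configuration-driven mapping idea can be sketched with the standard library alone. The mapping entries, function name, and staging behavior shown here are illustrative assumptions, not the production ETL code (which uses pandas).

```python
import csv
import io

# Hypothetical mapping config: source header variants -> canonical column.
# New file formats only extend this mapping; no ingestion code changes.
COLUMN_MAP = {
    "Equip ID": "asset_id",
    "Equipment Number": "asset_id",
    "WO Date": "order_date",
    "Work Order Date": "order_date",
    "Desc": "description",
}

def stage_rows(raw_csv: str):
    """Read a heterogeneous CSV and emit rows keyed by canonical names.

    Values stay strings at this stage (staging tables are all TEXT);
    typing and validation happen later in the transform step.
    """
    reader = csv.DictReader(io.StringIO(raw_csv))
    for row in reader:
        yield {COLUMN_MAP.get(col, col): val for col, val in row.items()}
```

A file with "Equip ID" and one with "Equipment Number" both land in the same `asset_id` staging column, which is the point of keeping the mapping in configuration.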
Problem: Equipment failures discovered at execution time cause costly delays and cancellations.
Solution: ML models trained on historical maintenance patterns to forecast:
- Probability of asset failure within specified time windows
- Expected downtime duration
- Parts demand forecasting
- Optimal maintenance scheduling
Approach:
- Gradient boosted trees (XGBoost/LightGBM) for failure classification
- Survival analysis for time-to-failure estimation
- Feature engineering from work order history, usage metrics, age, and environmental factors
- Calibrated probability outputs for risk-based decision making
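The feature-engineering step can be sketched without the model itself: the gradient-boosted classifiers described above would consume vectors like the one below. The feature names and record layout are illustrative assumptions, not the production feature set.

```python
from datetime import date

def failure_features(work_orders, usage_hours, asset_age_years, as_of):
    """Build a hypothetical feature vector for a failure-risk model.

    `work_orders` is a list of (date, was_failure) tuples for one asset;
    the resulting dict would feed an XGBoost/LightGBM classifier.
    """
    failures = [d for d, was_failure in work_orders if was_failure]
    last_failure = max(failures) if failures else None
    return {
        # Sentinel -1 when the asset has never failed
        "days_since_failure": (as_of - last_failure).days if last_failure else -1,
        "failures_last_year": sum(1 for d in failures if (as_of - d).days <= 365),
        "work_orders_total": len(work_orders),
        "usage_hours": usage_hours,
        "age_years": asset_age_years,
    }
```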
Problem: Budget execution deviations discovered too late to correct, leading to shortfalls or underspend.
Solution: Time-series models that predict:
- Burn rate trajectories by organizational unit
- End-of-period fund availability
- Anomaly detection for unexpected spending patterns
- Sensitivity analysis for scenario planning
Approach:
- SARIMAX for seasonality-aware forecasting
- Prophet-style decomposition for trend/holiday effects
- Ensemble with gradient boosted regressors for non-linear patterns
- Confidence intervals for risk quantification
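To make the forecasting task concrete, here is a seasonal-naive-plus-drift baseline in plain Python. It is a stand-in sketch, not the SARIMAX ensemble described above: the function name and the drift heuristic are assumptions, and it omits the confidence intervals the real models produce.

```python
def forecast_burn(monthly_spend, horizon=3, season=12):
    """Seasonal-naive burn-rate baseline: same month last year plus linear drift.

    Requires at least one full season of history; a simplified stand-in
    for the seasonality-aware SARIMAX forecaster.
    """
    n = len(monthly_spend)
    # Average month-over-month drift across the observed series
    drift = (monthly_spend[-1] - monthly_spend[0]) / (n - 1)
    forecasts = []
    for h in range(1, horizon + 1):
        seasonal = monthly_spend[n - season + ((h - 1) % season)]
        forecasts.append(seasonal + drift * h)
    return forecasts
```

Even this crude baseline illustrates the output the decision layer consumes: a burn trajectory per unit that can be compared against remaining funds to project end-of-period shortfalls.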
Problem: Multiple interdependent factors (equipment, budget, personnel) combine to create operational risk that's difficult to assess holistically.
Solution: Composite risk model that:
- Integrates predictions from maintenance and budget models
- Weights factors by operational impact
- Produces interpretable risk scores with causal attribution
- Ranks events/operations by risk for prioritization
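A weighted composite with per-factor attribution can be sketched directly; the function name and the normalization scheme are illustrative assumptions, not the production risk model.

```python
def composite_risk(factor_scores, weights):
    """Weighted composite risk with per-factor attribution.

    `factor_scores` and `weights` map factor name -> value; scores are
    assumed to lie in [0, 1]. Returns (risk, attribution), where
    attribution gives each factor's share of the total risk, supporting
    the interpretability requirement above.
    """
    total_w = sum(weights.values())
    contributions = {f: weights[f] * factor_scores[f] / total_w for f in weights}
    risk = sum(contributions.values())
    attribution = {f: c / risk for f, c in contributions.items()} if risk else {}
    return risk, attribution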
Problem: Complex decisions require synthesizing information across multiple domains (operations, logistics, finance), but expertise is siloed.
Solution: Coordinated AI agents that mirror organizational structure:
Mission Input
│
▼
┌─────────┐ Refines mission statement, extracts constraints
│ Scribe │ and acceptance criteria
└────┬────┘
│
▼
┌─────────────┐ Routes to appropriate specialist agents
│ Coordinator │ based on domain requirements
└──────┬──────┘
│
├──────────────┬──────────────┐
▼ ▼ ▼
┌────────────┐ ┌────────────┐ ┌────────────┐
│ Operations │ │ Logistics │ │ Finance │ Domain experts gather
│ Specialist │ │ Specialist │ │ Specialist │ relevant data and analysis
└─────┬──────┘ └─────┬──────┘ └─────┬──────┘
│ │ │
└──────────────┼──────────────┘
▼
┌────────────┐ Synthesizes findings into
│ Integrator │ coherent assessment
└─────┬──────┘
│
▼
┌────────────┐ Evaluates complexity, determines
│ Evaluator │ if multiple options needed
└─────┬──────┘
│
▼
┌────────────┐ Generates and scores
│ COA │ alternative approaches
│ Generator │
└─────┬──────┘
│
▼
┌────────────┐
│ Approval │◄── Human Decision Point
│ Gate │
└─────┬──────┘
│
▼
┌────────────┐ Produces actionable
│ Tasking │ implementation plan
└────────────┘
Key Features:
- Configurable Autonomy: Each agent can operate in Human-in-the-Loop (approval required), Human-on-the-Loop (auto-advance with intervention capability), or Autonomous mode
- Explainable Checkpoints: Every stage produces auditable output with rationale, evidence citations, and confidence scores
- Evidence Linking: All conclusions cite source documents with section-level granularity
- Audit Trail: Complete JSONL logging for post-hoc analysis and compliance
Problem: Complex queries against operational data require SQL expertise and understanding of data model.
Solution: Text-to-SQL capability that:
- Translates natural language questions into warehouse queries
- Returns results with narrative explanation
- Supports follow-up questions with context retention
Example Queries:
- "Which scheduled events are at risk due to equipment availability?"
- "What is the projected budget shortfall for Q4 by department?"
- "Which assets should we prioritize for preventive maintenance?"
Problem: Critical data exists in scanned documents, photos, and legacy printouts that can't be directly ingested.
Solution: Vision capabilities for:
- Document OCR: Extract structured data from scanned reports and forms
- Visual Inspection: Classify damage severity from equipment photos
- Dashboard Parsing: Extract metrics from screenshots of legacy systems
Approach:
- Transformer-based OCR (TrOCR) for text extraction
- Table detection and structure recognition for tabular data
- CNN-based damage classification trained on domain-specific imagery
- Integration with ETL pipeline for seamless data flow
Problem: Resource allocation and scheduling involve complex trade-offs that are difficult to optimize manually.
Solution: RL agents that learn optimal policies for:
- Resource Allocation: Distribute limited resources across organizational units to maximize overall readiness
- Schedule Optimization: Sequence events to minimize conflicts and risk
- COA Ranking: Learn decision preferences from historical outcomes
Approach:
- Policy gradient methods (PPO) for continuous action spaces
- Multi-objective reward shaping for competing goals
- Simulation environment for safe policy training
- Human feedback integration for preference alignment
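Multi-objective reward shaping reduces to scalarizing competing goals into one training signal. The weights and objective names below are illustrative; in practice they would be tuned, or learned from the human preference feedback described above.

```python
def shaped_reward(readiness_gain, conflict_count, budget_overrun,
                  weights=(1.0, 0.3, 0.5)):
    """Scalarize competing objectives into one RL reward.

    Readiness improvement is rewarded; schedule conflicts and budget
    overruns are penalized. Weight values here are assumptions.
    """
    w_ready, w_conflict, w_budget = weights
    return (w_ready * readiness_gain
            - w_conflict * conflict_count
            - w_budget * budget_overrun)
```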
ARCnet's architecture is domain-agnostic. The core pattern—integrate data, predict future state, surface risks, recommend actions—applies across industries.
- Command decision support with transparent AI reasoning
- Readiness forecasting across equipment, personnel, and funding
- Training and exercise planning optimization
- Automated staff product generation
- Predictive maintenance for production equipment
- Production schedule optimization
- Supply chain risk assessment
- Quality control with visual inspection
- Medical equipment availability forecasting
- Operating room scheduling optimization
- Department budget tracking and forecasting
- Clinical resource allocation
- Heavy equipment maintenance prediction
- Project schedule risk assessment
- Subcontractor and resource optimization
- Site progress monitoring via imagery
- Vehicle breakdown prediction
- Route and load optimization
- Maintenance scheduling for minimal disruption
- Damage assessment from driver-submitted photos
- Grid equipment failure prediction
- Maintenance crew scheduling
- Capital project tracking
- Infrastructure inspection via drone imagery
See APPLICATIONS.md for detailed industry configurations.
The /Research folder contains academic documentation:
| Document | Description |
|---|---|
| RESEARCH-OVERVIEW.md | Research questions, hypotheses, methodology |
| Agent-Selection-Formulas.md | Mathematical specification with worked examples |
| LITERATURE.md | Literature review (MoE, MAS, HAT, XAI) |
| PROPOSAL.md | Program proposal (problem, use case, metrics, risks) |
| Technical-Approach.md | System design methodology |
| papers/Info-paper.pdf | Formal information paper with enclosures |
| Layer | Technology |
|---|---|
| Frontend | Swift 6, SwiftUI, Combine, SceneKit, Swift Charts |
| Backend | Python 3.11+, FastAPI |
| ML/AI | PyTorch, XGBoost, scikit-learn, Hugging Face Transformers |
| Database | PostgreSQL 15+ with TimescaleDB extension |
| ETL | pandas, SQLAlchemy, Apache Airflow (optional) |
| LLM | Multi-provider routing (OpenAI, Anthropic, custom endpoints) with task-based tier selection |
| Vision | OpenCV, TrOCR, YOLO |
| RL | Stable Baselines3, Gymnasium |
ARCnet/
├── ios-app/ # Swift iPad application
│ ├── App/
│ │ ├── Domain/ # Core models and types
│ │ ├── Engine/ # Orchestration and autonomy
│ │ ├── Agents/ # Agent implementations
│ │ ├── LLM/ # LLM client abstraction
│ │ ├── Data/ # Data gateway and feeds
│ │ ├── UI/ # SwiftUI views
│ │ └── Security/ # Keychain services
│ └── Tests/
│
├── backend/ # Python services
│ ├── etl/ # Data ingestion pipeline
│ │ ├── ingestion/ # File detection and loading
│ │ ├── transforms/ # Staging to warehouse
│ │ └── configs/ # File type mappings
│ │
│ ├── ml/ # Machine learning models
│ │ ├── predictive_maint/ # Equipment failure prediction
│ │ ├── budget_forecast/ # Burn-rate modeling
│ │ ├── readiness_risk/ # Composite risk scoring
│ │ └── training/ # Model training pipelines
│ │
│ ├── nlp/ # Natural language processing
│ │ ├── text_to_sql/ # Query translation
│ │ └── summarization/ # Report generation
│ │
│ ├── vision/ # Computer vision
│ │ ├── ocr/ # Document extraction
│ │ └── inspection/ # Visual classification
│ │
│ ├── rl/ # Reinforcement learning
│ │ ├── allocator/ # Resource optimization
│ │ └── scheduler/ # Event sequencing
│ │
│ └── api/ # REST API endpoints
│
├── database/ # PostgreSQL schema
│ ├── migrations/
│ └── seeds/
│
├── notebooks/ # Research and analysis
│ ├── eda/ # Exploratory data analysis
│ ├── model_dev/ # Model development
│ └── math/ # Mathematical foundations
│
└── docs/ # Documentation
├── architecture/
├── research/
└── deployment/
Every AI output includes rationale, evidence citations, and confidence scores. No black-box decisions.
Configurable autonomy levels ensure humans remain in control. Full audit trails for accountability.
Business logic is configuration, not code. Adapting to new industries requires schema mapping, not rewrites.
System operates with partial data. ML models provide uncertainty quantification. Missing inputs are flagged, not fatal.
Secrets in secure storage (Keychain). Encryption at rest. Role-based access. Audit logging.
- Xcode 16+ (Swift 6) for iOS app
- Python 3.11+ for backend services
- PostgreSQL 15+ for data warehouse
- OpenAI API key (or compatible LLM endpoint)
The system can run entirely with synthetic data for evaluation:
```shell
# Clone repository
git clone https://github.com/[username]/ARCnet.git
cd ARCnet

# iOS App (synthetic mode)
open ios-app/ARCnet.xcodeproj
# Select "Local (Synthetic)" scheme
# Build and run on iPad simulator

# Backend (optional, for full ML pipeline)
cd backend
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python -m api.main
```

See docs/deployment/ for production setup.
This project explores several research areas:
- Graduated autonomy models (HITL → HOTL → Autonomous)
- Intervention mechanisms that preserve human agency
- Trust calibration through transparent reasoning
- Structured workflows for organizational decision processes
- Specialist agent collaboration patterns
- Consensus and conflict resolution mechanisms
- Transfer learning for maintenance prediction across asset types
- Multi-horizon budget forecasting with uncertainty quantification
- Composite risk scoring with causal attribution
- Evidence-linked reasoning chains
- Checkpoint-based decision auditing
- Natural language rationale generation
This project builds on research in:
- Human-AI teaming and adjustable autonomy
- Multi-agent systems for organizational decision support
- Predictive maintenance and remaining useful life estimation
- Explainable AI (XAI) in high-stakes domains
- Digital twin architectures
This project is provided for research and educational purposes.
Timothy Moore (GitHub: @Tmmoore286)
| Document | Description |
|---|---|
| Research/RESEARCH-OVERVIEW.md | Research questions, hypotheses, methodology |
| Research/Agent-Selection-Formulas.md | Mathematical specification with worked examples |
| Research/LITERATURE.md | Literature review (MoE, MAS, HAT, XAI) |
| Research/PROPOSAL.md | Program proposal with metrics |
| Document | Description |
|---|---|
| AGENTS.md | Agent development patterns |
| CODEX.md | Technical implementation guide |
| Canonical-OVERVIEW.md | Product specification |
| APPLICATIONS.md | Industry-specific configurations |