AllenGrahamHart/Certified_Neural_Operators

Certified Neural Operators: Computable Error Bounds for Physics-Informed Operator Learning

This repository contains the implementation and experimental code for the paper:

Certified Neural Operators: Computable Error Bounds for Physics-Informed Operator Learning

Neural operators have emerged as powerful surrogates for parametric PDEs, yet they provide no reliability guarantees at inference time. We present a practical certification framework that provides computable, per-instance error bounds through randomized dual-norm estimation.

Key Results

  • 96% certificate validity in-distribution with tight bounds (2.27x effectivity)
  • 100% validity under 20x distribution shift (OOD)
  • First practical per-instance certification framework for neural operators
  • Theoretical guarantee: O(ε⁻²log(1/δ)) test functions suffice
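The core idea behind the O(ε⁻²log(1/δ)) guarantee is that the dual norm of the weak residual can be estimated by maximizing over randomly sampled unit-norm test functions. A minimal NumPy sketch of that idea (illustrative only, not the repository's implementation; `estimate_dual_norm` and its inputs are hypothetical stand-ins):

```python
import numpy as np

def estimate_dual_norm(residual, sample_test_function, n_test=100, seed=None):
    """Monte Carlo estimate of the dual norm of a weak residual.

    residual:              callable v -> r(v), the residual paired with
                           a test function v.
    sample_test_function:  callable rng -> random test function with
                           unit norm in the chosen test space.
    The max of |r(v)| over n_test unit-norm samples lower-bounds ||r||_*;
    calibration then inflates it into a certifiable upper bound.
    """
    rng = np.random.default_rng(seed)
    return max(abs(residual(sample_test_function(rng))) for _ in range(n_test))

# Toy check in R^d, where r(v) = <g, v> has dual norm exactly ||g||_2.
d = 16
g = np.random.default_rng(0).normal(size=d)

def residual(v):
    return float(g @ v)

def sample_unit(rng):
    w = rng.normal(size=d)
    return w / np.linalg.norm(w)  # unit-norm random direction

est = estimate_dual_norm(residual, sample_unit, n_test=500, seed=1)
# est approaches ||g||_2 from below as n_test grows
```

By Cauchy–Schwarz the estimate never exceeds the true dual norm, which is why a calibration step on held-out data is needed to turn it into a valid upper bound.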

Installation

# Clone the repository
git clone <repository-url>
cd Numerical_Solution_to_PDEs

# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # Linux/Mac
# or: .venv\Scripts\activate  # Windows

# Install dependencies
pip install -r requirements.txt

Requirements

  • Python 3.8+
  • PyTorch 1.9+
  • NumPy, SciPy, Matplotlib
  • h5py (for data storage)
  • einops (for tensor operations)
  • PyYAML (for configuration)

Project Structure

Numerical_Solution_to_PDEs/
├── src/
│   ├── solvers/           # FEM solver for Darcy equation
│   │   └── darcy_fem.py   # Finite element discretization
│   ├── models/            # Neural operator architectures
│   │   ├── fno.py         # Fourier Neural Operator
│   │   └── pino.py        # Physics-Informed Neural Operator
│   ├── estimators/        # Error estimation components
│   │   ├── residual.py    # Weak residual computation
│   │   ├── test_functions.py  # Test function sampling
│   │   ├── dual_norm.py   # Dual-norm estimation
│   │   ├── coercivity.py  # Coercivity bounds
│   │   └── error_estimator.py  # Complete certified estimator
│   ├── active_learning/   # Active learning components
│   │   ├── acquisition.py # Acquisition functions
│   │   └── loop.py        # Active learning loop
│   ├── data/              # Data utilities
│   │   └── dataset.py     # Dataset classes
│   └── utils/             # Utilities
│       ├── metrics.py     # Evaluation metrics
│       └── visualization.py  # Plotting functions
├── scripts/
│   ├── train.py           # Main training script
│   ├── run_active_learning.py  # Active learning experiments
│   └── create_paper_figures.py # Generate publication figures
├── configs/
│   ├── default.yaml       # Default configuration
│   └── pino_tuned.yaml    # Tuned PINO configuration
├── paper/
│   ├── main.tex           # LaTeX source
│   ├── main.pdf           # Compiled paper
│   ├── references.bib     # Bibliography
│   └── figures/           # Publication figures
├── data/                  # Generated datasets
├── results/               # Experimental results
└── checkpoints/           # Model checkpoints

Usage

1. Generate Dataset

from src.solvers.darcy_fem import generate_darcy_dataset

# Generate training data
generate_darcy_dataset(
    n_samples=200,
    resolution=64,
    coef_type='random_field',
    contrast=10.0,
    save_path='data/darcy_train.h5'
)

2. Train PINO Model

python scripts/train.py --config configs/pino_tuned.yaml

Or programmatically:

from src.models.pino import PINO, PINOTrainer
from src.data.dataset import DarcyDataset

# Load data
dataset = DarcyDataset('data/darcy_train.h5')

# Create model
model = PINO(
    modes=12,
    width=32,
    n_layers=4
)

# Train with physics loss
trainer = PINOTrainer(
    model=model,
    physics_weight=0.05  # Tuned for OOD robustness
)
trainer.fit(dataset, epochs=200)

3. Certified Inference

from src.estimators.error_estimator import CertifiedErrorEstimator

# Load trained model
model = PINO.load('checkpoints/pino_best.pt')

# Create estimator
estimator = CertifiedErrorEstimator(
    n_test_functions=100,
    test_function_mix=[0.4, 0.3, 0.3],  # Fourier, Local, Random
    calibration_slack=0.05
)

# Calibrate on validation set
estimator.calibrate(model, val_dataset)

# Certified inference
for coef, forcing in test_data:
    prediction = model(coef, forcing)
    error_bound = estimator.certify(prediction, coef, forcing)
    print(f"Prediction with certified bound: {error_bound:.4f}")
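The `calibration_slack` parameter above can be read as a conformal-style quantile: calibration finds the smallest inflation factor that makes the bound hold on all but a `slack` fraction of the validation set. A minimal sketch of that idea (illustrative assumption about how calibration works; `calibrate_scale` is hypothetical, not the repository's API):

```python
import numpy as np

def calibrate_scale(raw_estimates, true_errors, slack=0.05):
    """Smallest multiplicative scale c such that c * estimate >= true error
    on at least a (1 - slack) fraction of the calibration set."""
    ratios = np.asarray(true_errors, dtype=float) / np.asarray(raw_estimates, dtype=float)
    return float(np.quantile(ratios, 1.0 - slack))

# Four calibration pairs (raw dual-norm estimate, measured true error):
c = calibrate_scale(raw_estimates=[0.1, 0.2, 0.3, 0.4],
                    true_errors=[0.15, 0.22, 0.36, 0.41],
                    slack=0.05)
# c is the 95th-percentile ratio true_error / estimate (here c ≈ 1.455)
```

At inference, multiplying a raw estimate by `c` then yields a bound expected to hold on roughly a (1 − slack) fraction of similar instances.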

4. Run Experiments

# Full experimental suite (Sessions 5-7)
python scripts/run_session5_experiments.py  # Sample efficiency
python scripts/run_session6_experiments.py  # Baseline comparison
python scripts/run_session7_experiments.py  # Ablations + OOD

# Generate paper figures
python scripts/create_paper_figures.py

Reproducing Paper Results

Table 1: FNO vs PINO Baselines

python scripts/run_session6_experiments.py

Key results:

  • FNO: 8.0% L2 error at 200 samples (in-distribution)
  • PINO: 45.3% L2 error but 10x better OOD robustness

Table 2: OOD Certification

Contrast shift from 10:1 (training) to 200:1:

  • In-distribution: 96% validity, 2.27x effectivity
  • OOD (200:1): 100% validity, 48.6x effectivity
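The validity and effectivity numbers in these tables can be computed from per-instance true errors and certified bounds. A small sketch (array names are illustrative; the repository's metric code lives in `src/utils/metrics.py` and may differ):

```python
import numpy as np

def certification_metrics(true_errors, bounds):
    """Validity: fraction of instances where the certified bound holds
    (bound >= true error). Effectivity: mean ratio bound / true error
    over valid instances; values near 1 mean tight bounds, large values
    mean valid but loose bounds."""
    true_errors = np.asarray(true_errors, dtype=float)
    bounds = np.asarray(bounds, dtype=float)
    valid = bounds >= true_errors
    validity = float(valid.mean())
    effectivity = float(np.mean(bounds[valid] / true_errors[valid]))
    return validity, effectivity

validity, effectivity = certification_metrics(
    true_errors=[0.10, 0.20, 0.05, 0.40],
    bounds=[0.25, 0.44, 0.12, 0.30],
)
# validity = 0.75: three of the four bounds hold
```

Under this reading, the OOD row above reports bounds that always hold (100% validity) but are roughly 48x larger than the true error, i.e. conservative rather than tight.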

Table 3: Test Function Ablation

  • Fourier-only: 97.5% validity (best)
  • Local-only: 90.0% validity (falls short of the target)
  • Mixed: 97.5% validity
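For reference, Fourier-type test functions of the kind compared in this ablation can be sampled along the following lines (a sketch on a uniform grid of the unit square; the actual sampler in `src/estimators/test_functions.py` may differ):

```python
import numpy as np

def sample_fourier_test_function(resolution, max_mode, rng):
    """Random low-frequency sine test function on [0,1]^2 that vanishes
    on the boundary (H^1_0-conforming), normalized to unit discrete L2."""
    kx, ky = rng.integers(1, max_mode + 1, size=2)  # random mode pair
    x = np.linspace(0.0, 1.0, resolution)
    X, Y = np.meshgrid(x, x, indexing="ij")
    v = np.sin(kx * np.pi * X) * np.sin(ky * np.pi * Y)
    return v / np.sqrt(np.mean(v**2))  # unit root-mean-square

rng = np.random.default_rng(0)
v = sample_fourier_test_function(resolution=64, max_mode=12, rng=rng)
```

Smooth global modes like these probe the residual everywhere at once, which is consistent with the Fourier-only variant matching the mixed family in the table, while purely local bumps can miss smooth residual components.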

Table 4: Coefficient Type Shift

  • Random field (ID): 100% validity
  • Inclusions: 96% validity
  • Piecewise: 78% validity (limitation)

Citation

@inproceedings{certified_neural_operators,
  title={Certified Neural Operators: Computable Error Bounds for Physics-Informed Operator Learning},
  author={Anonymous},
  booktitle={Advances in Neural Information Processing Systems},
  year={2026}
}

License

MIT License

Acknowledgments

This work builds on:

  • Fourier Neural Operator (Li et al., 2020)
  • Physics-Informed Neural Operators (Li et al., 2021)
  • Classical a posteriori error estimation (Verfürth, 1996)
