Summary
Implement CNO, which uses continuous convolutional layers with upsampling/downsampling to preserve the continuous nature of operators.
Reference
- Raonic et al., "Convolutional Neural Operators for Robust and Accurate Learning of PDEs," NeurIPS 2023.
Description
CNO preserves the continuous nature of operators even in discretized form. Each layer consists of upsampling (V), convolution (K), and activation (σ), applied in a way that converges to the continuous operator as resolution increases. It also includes Fourier feature processing for inputs. The paper reports significantly better performance than FNO and DeepONet on multi-scale PDE benchmarks.
Key distinction from standard CNNs: CNO is designed so that increasing discretization resolution converges to the continuous operator, rather than just being a finite-dimensional approximation.
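To make the layer structure concrete, here is a minimal 1D NumPy sketch of the anti-aliasing idea behind CNO's activation step: the pointwise nonlinearity is applied on an upsampled grid and the result is projected back down, so the discrete computation tracks the underlying bandlimited function. Function names, the FFT-based resampling, and the choice of ReLU are illustrative assumptions for this sketch, not the paper's actual (2D, learned-filter) implementation.

```python
import numpy as np

def fft_resample(u, new_n):
    """Resample a periodic, bandlimited 1D signal to new_n points via
    spectral zero-padding/truncation (sinc interpolation). Exact for
    signals with no energy at or above the coarser Nyquist frequency."""
    n = len(u)
    U = np.fft.rfft(u)
    V = np.zeros(new_n // 2 + 1, dtype=complex)
    k = min(len(U), len(V))
    V[:k] = U[:k]                       # keep the shared low frequencies
    return np.fft.irfft(V, n=new_n) * (new_n / n)

def cno_activation(u, factor=2):
    """Anti-aliased activation (sketch): a pointwise nonlinearity creates
    high frequencies, so apply it on a finer grid and low-pass back down
    instead of applying it directly on the coarse grid."""
    n = len(u)
    fine = fft_resample(u, factor * n)  # upsampling step
    fine = np.maximum(fine, 0.0)        # pointwise activation (ReLU here)
    return fft_resample(fine, n)        # downsampling / low-pass projection
```

In a full CNO block this activation would sit between (learned) convolutions; the point of the sketch is only that upsample-activate-downsample commutes with grid refinement for bandlimited inputs, which is what lets the discretized layers converge to a continuous operator.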