SP2RINT: Spatially-Decoupled Physics-Inspired Progressive Inverse Optimization for Scalable, PDE-Constrained Meta-Optical Neural Network Training
- URL: http://arxiv.org/abs/2505.18377v2
- Date: Wed, 28 May 2025 22:12:11 GMT
- Title: SP2RINT: Spatially-Decoupled Physics-Inspired Progressive Inverse Optimization for Scalable, PDE-Constrained Meta-Optical Neural Network Training
- Authors: Pingchuan Ma, Ziang Yin, Qi Jing, Zhengqi Gao, Nicholas Gangi, Boyang Zhang, Tsung-Wei Huang, Zhaoran Huang, Duane S. Boning, Yu Yao, Jiaqi Gu
- Abstract summary: SP2RINT is a spatially decoupled, progressive training framework for meta-optical neural systems. It achieves digital-comparable accuracy while being 1825 times faster than simulation-in-the-loop approaches.
- Score: 23.920752887898658
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffractive optical neural networks (DONNs) leverage light propagation for efficient analog AI and signal processing. Advances in nanophotonic fabrication and metasurface-based wavefront engineering have opened new pathways to realize high-capacity DONNs across various spectral regimes. Training such DONN systems to determine the metasurface structures remains challenging. Heuristic methods are fast but oversimplify metasurface modulation, often resulting in physically unrealizable designs and significant performance degradation. Simulation-in-the-loop optimizes implementable metasurfaces via adjoint methods, but is computationally prohibitive and unscalable. To address these limitations, we propose SP2RINT, a spatially decoupled, progressive training framework that formulates DONN training as a PDE-constrained learning problem. Metasurface responses are first relaxed into freely trainable transfer matrices with a banded structure. We then progressively enforce physical constraints by alternating between transfer matrix training and adjoint-based inverse design, avoiding per-iteration PDE solves while ensuring final physical realizability. To further reduce runtime, we introduce a physics-inspired, spatially decoupled inverse design strategy based on the natural locality of field interactions. This approach partitions the metasurface into independently solvable patches, enabling scalable and parallel inverse design with system-level calibration. Evaluated across diverse DONN training tasks, SP2RINT achieves digital-comparable accuracy while being 1825 times faster than simulation-in-the-loop approaches. By bridging the gap between abstract DONN models and implementable photonic hardware, SP2RINT enables scalable, high-performance training of physically realizable meta-optical neural systems. Our code is available at https://github.com/ScopeX-ASU/SP2RINT
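As a rough illustration of the training scheme described in the abstract, the sketch below alternates free training of a banded transfer matrix with a placeholder projection standing in for SP2RINT's patch-wise, adjoint-based inverse design; the shapes, bandwidth, and schedule are assumptions, not the paper's implementation.

```python
import torch

# Sketch of SP2RINT-style training (shapes, bandwidth, and the projection
# are assumptions): the metasurface response is relaxed into a banded
# transfer matrix that is trained freely, then periodically projected
# toward a physically realizable design.
N, BAND = 64, 5                      # field samples and assumed bandwidth

idx = torch.arange(N)
mask = (idx[None, :] - idx[:, None]).abs() <= BAND   # banded structure

T = torch.nn.Parameter(torch.randn(N, N, dtype=torch.cfloat))

def apply_layer(x):
    """Propagate a complex field x through the (masked) transfer matrix."""
    return (T * mask) @ x

def project_to_physical(T):
    # Placeholder: SP2RINT runs patch-wise adjoint-based inverse design in
    # parallel here, followed by system-level calibration.
    return T

for epoch in range(10):
    # ... task-loss training steps on T (e.g., classification loss) ...
    if epoch % 5 == 4:               # progressively enforce physics
        with torch.no_grad():
            T.copy_(project_to_physical(T))
```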
Related papers
- Efficient Training of Physics-enhanced Neural ODEs via Direct Collocation and Nonlinear Programming [0.0]
We propose a novel approach for training Physics-enhanced Neural ODEs (PeNODEs) by expressing the training process as a dynamic optimization problem. The full model, including neural components, is discretized using a high-order implicit Runge-Kutta method with flipped Legendre-Gauss-Radau points. This formulation enables simultaneous optimization of network parameters and state trajectories, addressing key limitations of ODE solver-based training in terms of stability, runtime, and accuracy.
arXiv Detail & Related papers (2025-05-06T14:04:46Z)
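A minimal sketch of the direct-collocation idea in the entry above, with a toy two-parameter model and trapezoidal collocation in penalty form standing in for the paper's high-order implicit Runge-Kutta scheme and constrained NLP solver (all assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch of direct collocation: fit a neural ODE dx/dt = f_theta(x)
# by optimizing parameters and state trajectory simultaneously.
t = np.linspace(0.0, 2.0, 21)
x_data = np.exp(-t)                    # synthetic trajectory to fit
h = t[1] - t[0]

def f_theta(x, theta):
    w1, w2 = theta                     # a two-parameter "network" stand-in
    return w2 * np.tanh(w1 * x)

def objective(z):
    theta, x = z[:2], z[2:]
    fit = np.sum((x - x_data) ** 2)    # data-fitting term
    # collocation defects: the trapezoidal rule must hold between knots
    defect = x[1:] - x[:-1] - 0.5 * h * (f_theta(x[1:], theta)
                                         + f_theta(x[:-1], theta))
    return fit + 1e3 * np.sum(defect ** 2)   # penalty form of constraints

z0 = np.concatenate([[1.0, -1.0], x_data])   # warm-start states at the data
res = minimize(objective, z0, method="L-BFGS-B")
print("fitted parameters:", res.x[:2])
```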
- Physics-Informed Latent Neural Operator for Real-time Predictions of Complex Physical Systems [0.0]
We propose PI-Latent-NO, a physics-informed latent neural operator framework that integrates governing physics directly into the learning process. Our architecture features two coupled DeepONets trained end-to-end: a Latent-DeepONet that learns a low-dimensional representation of the solution, and a Reconstruction-DeepONet that maps this latent representation back to the physical space.
arXiv Detail & Related papers (2025-01-14T20:38:30Z)
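A minimal sketch of the coupled-DeepONet architecture described above; the layer sizes, latent dimension, and latent query coordinates are assumptions:

```python
import torch
import torch.nn as nn

# Sketch of the PI-Latent-NO architecture (sizes and details assumed):
# a Latent-DeepONet compresses the input function to a low-dimensional
# code, and a Reconstruction-DeepONet maps that code back to the field.
def mlp(d_in, d_out, width=64):
    return nn.Sequential(nn.Linear(d_in, width), nn.Tanh(),
                         nn.Linear(width, d_out))

class DeepONet(nn.Module):
    def __init__(self, d_branch, d_trunk, p=32):
        super().__init__()
        self.branch, self.trunk = mlp(d_branch, p), mlp(d_trunk, p)
    def forward(self, u, y):                # u: (B, d_branch), y: (M, d_trunk)
        return (self.branch(u)[:, None, :]
                * self.trunk(y)[None, :, :]).sum(-1)   # (B, M)

latent_dim = 8
latent_net = DeepONet(d_branch=100, d_trunk=1)        # input fn -> latent code
recon_net = DeepONet(d_branch=latent_dim, d_trunk=2)  # latent code -> field

u = torch.randn(4, 100)                          # sampled input functions
z_coords = torch.linspace(0, 1, latent_dim)[:, None]
z = latent_net(u, z_coords)                      # (4, latent_dim) latent codes
y = torch.rand(50, 2)                            # space-time query points
u_hat = recon_net(z, y)                          # (4, 50) reconstructed field
# Training is end-to-end; the physics-informed loss penalizes the governing
# PDE's residual at the query points (omitted here).
```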
- Metamizer: a versatile neural optimizer for fast and accurate physics simulations [4.717325308876749]
We introduce Metamizer, a novel neural network that iteratively solves a wide range of physical systems with high accuracy. We demonstrate that Metamizer achieves unprecedented accuracy among deep-learning-based approaches. Our results suggest that Metamizer could have a profound impact on future numerical solvers.
arXiv Detail & Related papers (2024-10-10T11:54:31Z)
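A structural sketch of an iterative neural optimizer in the spirit of the entry above; the scale-invariant input, step size, and toy residual are assumptions, and the update network is left untrained here, so only the interface is illustrated:

```python
import torch
import torch.nn as nn

# Structural sketch of a learned optimizer (details assumed): a network
# maps the normalized gradient of a physics residual to an update step.
# In Metamizer the network is meta-trained across systems; that training
# loop is omitted here.
class NeuralOptimizer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, dim))
    def forward(self, grad):
        return self.net(grad / (grad.norm() + 1e-12))  # scale-invariant input

def residual(x):                       # stand-in physics residual ||2x - 1||^2
    return ((2.0 * x - 1.0) ** 2).sum()

x = torch.zeros(16, requires_grad=True)
step_net = NeuralOptimizer(16)
for _ in range(100):                   # iterative refinement of the solution
    (g,) = torch.autograd.grad(residual(x), x)
    with torch.no_grad():
        x += 0.1 * step_net(g)         # step proposed by the neural optimizer
```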
- Scalable Mechanistic Neural Networks for Differential Equations and Machine Learning [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences. We reduce the time and space complexities from cubic and quadratic in the sequence length, respectively, to linear. Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- Scaling physics-informed hard constraints with mixture-of-experts [0.0]
We develop a scalable approach to enforce hard physical constraints using Mixture-of-Experts (MoE).
MoE imposes the constraint over smaller domains, each of which is solved by an "expert" through differentiable optimization.
Compared to standard differentiable optimization, our scalable approach achieves greater accuracy in the neural PDE solver setting.
arXiv Detail & Related papers (2024-02-20T22:45:00Z)
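A minimal sketch of enforcing a hard constraint per subdomain with differentiable optimization, as the entry above describes; the linear equality constraints and closed-form projection are illustrative assumptions:

```python
import torch

# Sketch of hard-constraint enforcement per subdomain (assumed linear
# equality constraints A_i x_i = b_i): each "expert" projects its raw
# output onto its own small constraint set, so the differentiable
# optimization cost scales with the subdomain, not the full domain.
def project(x, A, b):
    # Euclidean projection onto {x : A x = b}; differentiable end-to-end.
    return x - A.T @ torch.linalg.solve(A @ A.T, A @ x - b)

n_experts, d, m = 4, 32, 4
raw = torch.randn(n_experts, d, requires_grad=True)  # per-expert raw outputs
As = torch.randn(n_experts, m, d)
bs = torch.randn(n_experts, m)

constrained = torch.stack([project(raw[i], As[i], bs[i])
                           for i in range(n_experts)])
violation = torch.einsum('imd,id->im', As, constrained) - bs
print(violation.abs().max())           # ~0: constraints hold exactly
```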
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
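A minimal sketch of the shared-backbone, multi-head design with ensembled predictions; the layer sizes and the averaging rule are assumptions:

```python
import torch
import torch.nn as nn

# Sketch of a MEMTL-style model (details assumed): one shared backbone
# feeds multiple prediction heads whose outputs are ensembled.
class MEMTL(nn.Module):
    def __init__(self, d_in, d_out, n_heads=3, width=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                                      nn.Linear(width, width), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(width, d_out)
                                   for _ in range(n_heads))
    def forward(self, x):
        h = self.backbone(x)
        preds = torch.stack([head(h) for head in self.heads])  # (H, B, d_out)
        return preds.mean(dim=0)                               # ensemble

model = MEMTL(d_in=10, d_out=2)
print(model(torch.randn(5, 10)).shape)   # torch.Size([5, 2])
```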
- A Sequential Meta-Transfer (SMT) Learning to Combat Complexities of Physics-Informed Neural Networks: Application to Composites Autoclave Processing [1.6317061277457001]
PINNs have gained popularity in solving nonlinear partial differential equations. However, because PINNs are designed to approximate a specific realization of a given PDE system, they lack the generalizability to efficiently adapt to new system configurations.
arXiv Detail & Related papers (2023-08-12T02:46:54Z)
- Learning Generic Solutions for Multiphase Transport in Porous Media via the Flux Functions Operator [0.0]
DeepONet has emerged as a powerful tool for accelerating the solution of PDEs. We use Physics-Informed DeepONets (PI-DeepONets) to achieve this mapping without any paired input-output observations.
arXiv Detail & Related papers (2023-07-03T21:10:30Z)
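A minimal sketch of the physics-informed training signal that removes the need for paired data, using a toy antiderivative operator u' = a; the operator, sizes, and architecture are assumptions:

```python
import torch
import torch.nn as nn

# Sketch of PI-DeepONet training on a toy problem u'(y) = a(y) (assumed
# example): no paired (a, u) data is used; the loss is the PDE residual
# of the network output, computed with autograd at random points.
branch = nn.Sequential(nn.Linear(100, 64), nn.Tanh(), nn.Linear(64, 32))
trunk = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 32))

def G(a_sens, y):                       # DeepONet: branch-trunk inner product
    return (branch(a_sens)[:, None, :] * trunk(y)[None, :, :]).sum(-1)

sensors = torch.linspace(0, 1, 100)
a_fn = torch.sin                        # one sampled input function
y = torch.rand(64, requires_grad=True)  # random collocation points
u = G(a_fn(sensors).unsqueeze(0), y.unsqueeze(-1))   # u(y), shape (1, 64)
du = torch.autograd.grad(u.sum(), y, create_graph=True)[0]
loss = ((du - a_fn(y)) ** 2).mean()     # physics residual, no labels needed
loss.backward()                         # trains branch and trunk end-to-end
```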
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to reduce long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been effectively demonstrated in solving forward and inverse differential equation problems. However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features. In this paper, we propose to employ an implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
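A minimal sketch of an implicit SGD step in the fixed-point form theta_next = theta - lr * grad L(theta_next), solved by inner iteration; the toy objective and iteration counts are assumptions:

```python
import torch

# Sketch of an implicit (proximal) SGD step: the gradient is evaluated
# at the *next* iterate, which is found by fixed-point iteration.
def implicit_step(loss_fn, theta, lr=0.1, inner_iters=20):
    theta_next = theta.clone()
    for _ in range(inner_iters):                   # fixed-point iteration
        theta_next = theta_next.detach().requires_grad_(True)
        (g,) = torch.autograd.grad(loss_fn(theta_next), theta_next)
        theta_next = theta - lr * g
    return theta_next.detach()

loss = lambda th: ((th - 3.0) ** 2).sum()          # toy objective
theta = torch.zeros(2)
for _ in range(50):
    theta = implicit_step(loss, theta)
print(theta)                                       # approaches [3., 3.]
```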
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger. It decomposes the original learning tasks into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
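A minimal sketch of the spatial staggering that yields coarser-resolution subtasks, assuming a 2x2 staggering factor:

```python
import numpy as np

# Sketch of NeuralStagger-style spatial decomposition (2x2 factor assumed):
# a fine-resolution field is split into staggered coarse sub-fields, each of
# which can be handled by an independent, cheaper neural solver, then
# re-interleaved to recover the fine solution.
def stagger(field, s=2):
    """Split an (H, W) field into s*s coarse fields of shape (H//s, W//s)."""
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def unstagger(subfields, s=2):
    H, W = subfields[0].shape[0] * s, subfields[0].shape[1] * s
    out = np.empty((H, W), dtype=subfields[0].dtype)
    for k, sub in enumerate(subfields):
        out[k // s::s, k % s::s] = sub
    return out

u = np.random.rand(8, 8)
assert np.allclose(unstagger(stagger(u)), u)   # lossless decomposition
```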
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
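A minimal sketch of a neural spectral block that operates in a learned latent spectral space; the basis size, mixing layer, and shapes are assumptions:

```python
import torch
import torch.nn as nn

# Sketch of a neural spectral block in the spirit of LSM (details assumed):
# project latent features onto a small set of learned basis functions, apply
# a learned operator to the spectral coefficients, and transform back.
class NeuralSpectralBlock(nn.Module):
    def __init__(self, n_points, n_modes, d_latent):
        super().__init__()
        self.basis = nn.Parameter(torch.randn(n_points, n_modes) / n_modes**0.5)
        self.mix = nn.Linear(n_modes * d_latent, n_modes * d_latent)
    def forward(self, x):                     # x: (B, n_points, d_latent)
        coeff = torch.einsum('bnd,nm->bmd', x, self.basis)  # to spectral space
        B, M, D = coeff.shape
        coeff = self.mix(coeff.reshape(B, -1)).reshape(B, M, D)  # solve there
        return torch.einsum('bmd,nm->bnd', coeff, self.basis)    # back to grid

block = NeuralSpectralBlock(n_points=256, n_modes=16, d_latent=8)
print(block(torch.randn(4, 256, 8)).shape)   # torch.Size([4, 256, 8])
```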
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piecewise-linear value function. $\nu$-SDDP can significantly reduce problem solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
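A minimal sketch of mapping problem-instance features to a piecewise-linear convex value function V(x) = max_i (a_i . x + b_i); the feature and state dimensions, and the number of pieces, are assumptions:

```python
import torch
import torch.nn as nn

# Sketch (assumptions throughout): a network predicts the slopes and
# offsets of the linear pieces ("cuts") of a convex value function from
# instance features, which can warm-start an SDDP-style solver.
class ValueFunctionNet(nn.Module):
    def __init__(self, d_feat, d_state, n_pieces=8):
        super().__init__()
        self.n_pieces, self.d_state = n_pieces, d_state
        self.net = nn.Linear(d_feat, n_pieces * (d_state + 1))
    def forward(self, feat, x):
        out = self.net(feat).view(-1, self.n_pieces, self.d_state + 1)
        a, b = out[..., :-1], out[..., -1]        # cut slopes and offsets
        return (torch.einsum('bpd,bd->bp', a, x) + b).max(dim=1).values

vf = ValueFunctionNet(d_feat=16, d_state=4)
print(vf(torch.randn(2, 16), torch.randn(2, 4)).shape)  # torch.Size([2])
```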
- A nonlocal physics-informed deep learning framework using the peridynamic differential operator [0.0]
We develop a nonlocal PINN approach using the Peridynamic Differential Operator (PDDO), a numerical method which incorporates long-range interactions and removes spatial derivatives in the governing equations.
Because the PDDO functions can be readily incorporated in the neural network architecture, the nonlocality does not degrade the performance of modern deep-learning algorithms.
We document the superior behavior of nonlocal PINN with respect to local PINN in both solution accuracy and parameter inference.
arXiv Detail & Related papers (2020-05-31T06:26:21Z)
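A minimal sketch of a nonlocal derivative computed over a finite horizon of neighbors rather than by local differencing, loosely in the spirit of the PDDO; the Gaussian, antisymmetric weights are an illustrative assumption, not the PDDO's actual functions:

```python
import numpy as np

# Sketch of a nonlocal first derivative: df/dx at each point is a weighted
# sum over all neighbors within a finite horizon, normalized so the
# operator is exact for linear functions.
def nonlocal_derivative(f, x, horizon=0.3):
    df = np.zeros_like(f)
    for i, xi in enumerate(x):
        mask = (np.abs(x - xi) <= horizon) & (x != xi)
        rel = x[mask] - xi                                  # relative positions
        w = np.exp(-(rel / horizon) ** 2) * np.sign(rel)    # antisymmetric kernel
        w /= np.sum(w * rel)                 # exactness for linear functions
        df[i] = np.sum(w * (f[mask] - f[i]))
    return df

x = np.linspace(0, 2 * np.pi, 200)
err = nonlocal_derivative(np.sin(x), x) - np.cos(x)
print(np.abs(err).max())   # max error of the nonlocal derivative vs. cos(x)
```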
This list is automatically generated from the titles and abstracts of the papers on this site.