DimINO: Dimension-Informed Neural Operator Learning
- URL: http://arxiv.org/abs/2410.05894v5
- Date: Thu, 25 Sep 2025 02:58:25 GMT
- Title: DimINO: Dimension-Informed Neural Operator Learning
- Authors: Yichen Song, Yalun Wu, Yunbo Wang, Xiaokang Yang
- Abstract summary: DimINO is a framework inspired by dimensional analysis. It can be seamlessly integrated into existing neural operator architectures. It achieves up to a 76.3% performance gain on PDE datasets.
- Score: 41.37905663176428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In computational physics, a longstanding challenge lies in finding numerical solutions to partial differential equations (PDEs). Recently, research attention has increasingly focused on Neural Operator methods, which are notable for their ability to approximate operators, i.e., mappings between functions. Although neural operators benefit from a universal approximation theorem, achieving reliable error bounds often necessitates large model architectures, such as deep stacks of Fourier layers. This raises a natural question: Can we design lightweight models without sacrificing generalization? To address this, we introduce DimINO (Dimension-Informed Neural Operators), a framework inspired by dimensional analysis. DimINO incorporates two key components, DimNorm and a redimensionalization operation, which can be seamlessly integrated into existing neural operator architectures. These components enhance the model's ability to generalize across datasets with varying physical parameters. Theoretically, we establish a universal approximation theorem for DimINO and prove that it satisfies a critical property we term Similar Transformation Invariance (STI). Empirically, DimINO achieves up to a 76.3% performance gain on PDE datasets while exhibiting clear evidence of the STI property.
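The abstract does not include code, but the dimensional-analysis idea behind STI can be illustrated with a minimal, hypothetical sketch: under a similarity rescaling of space and time, the dimensionless group governing a PDE is unchanged, which is the kind of invariance DimINO is built to respect. The functions below are illustrative stand-ins, not the paper's implementation; the 1D heat equation is used only as a familiar example.

```python
import numpy as np

def dimensionless_group(alpha, L, T):
    """Pi group for the 1D heat equation u_t = alpha * u_xx:
    the Fourier number Fo = alpha * T / L**2."""
    return alpha * T / L**2

def similar_transform(alpha, L, T, lam):
    """Similarity rescaling x -> lam * x, t -> lam**2 * t.
    The diffusivity alpha is unchanged; the characteristic
    length and time scales stretch accordingly."""
    return alpha, lam * L, lam**2 * T

# The dimensionless group is invariant under the rescaling --
# a similar-transformation invariance in the spirit of STI.
alpha, L, T = 0.01, 2.0, 5.0
g0 = dimensionless_group(alpha, L, T)
g1 = dimensionless_group(*similar_transform(alpha, L, T, lam=3.0))
assert np.isclose(g0, g1)
```

A dimension-informed model would be expected to produce consistent predictions for all inputs related by such a rescaling, rather than treating each parameter setting as unrelated.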
Related papers
- Geometric Laplace Neural Operator [12.869633759181417]
We propose a generalized operator learning framework based on a pole-residue decomposition enriched with exponential basis functions. We introduce the Geometric Laplace Neural Operator (GLNO), which embeds the Laplace spectral representation into the eigen-basis of the Laplace-Beltrami operator. We further design a grid-invariant network architecture (GLNONet) that realizes GLNO in practice.
arXiv Detail & Related papers (2025-12-18T11:07:41Z) - Fourier Neural Operators Explained: A Practical Perspective [75.12291469255794]
The Fourier Neural Operator (FNO) has become the most influential and widely adopted neural operator, largely due to its elegant spectral formulation. This guide aims to establish a clear and reliable framework for applying FNOs effectively across diverse scientific and engineering fields.
arXiv Detail & Related papers (2025-12-01T08:56:21Z) - Neural Expectation Operators [2.1756081703276]
This paper introduces Measure Learning, a paradigm for modeling ambiguity via non-linear expectations. We define Neural Expectation Operators as solutions to Backward Stochastic Differential Equations (BSDEs) whose drivers are parameterized by neural networks. We provide constructive methods for enforcing key axiomatic properties, such as convexity, by architectural design.
arXiv Detail & Related papers (2025-07-13T06:19:28Z) - Neural Interpretable PDEs: Harmonizing Fourier Insights with Attention for Scalable and Interpretable Physics Discovery [15.29112632863168]
We introduce Neural Interpretable PDEs (NIPS), a novel neural operator architecture that builds upon and enhances Nonlocal Attention Operators (NAO). NIPS employs a linear attention mechanism to enable scalable learning and integrates a learnable kernel network that acts as a channel-independent convolution in Fourier space. Empirical evaluations demonstrate that NIPS consistently surpasses NAO and other baselines across diverse benchmarks.
arXiv Detail & Related papers (2025-05-29T05:18:30Z) - TensorGRaD: Tensor Gradient Robust Decomposition for Memory-Efficient Neural Operator Training [91.8932638236073]
We introduce TensorGRaD, a novel method that directly addresses the memory challenges associated with large structured weights. We show that sparseGRaD reduces total memory usage by over 50% while maintaining, and sometimes even improving, accuracy.
arXiv Detail & Related papers (2025-01-04T20:51:51Z) - Physics-informed Discretization-independent Deep Compositional Operator Network [1.2430809884830318]
We introduce a novel physics-informed model architecture which can generalize to various discrete representations of PDE parameters and irregular domain shapes.
Inspired by deep operator neural networks, our model learns a discretization-independent embedding of the PDE parameters that is applied repeatedly.
Numerical results demonstrate the accuracy and efficiency of the proposed method.
arXiv Detail & Related papers (2024-04-21T12:41:30Z) - Operator Learning: Algorithms and Analysis [8.305111048568737]
Operator learning refers to the application of ideas from machine learning to approximate operators mapping between Banach spaces of functions.
This review focuses on neural operators, built on the success of deep neural networks in the approximation of functions defined on finite dimensional Euclidean spaces.
arXiv Detail & Related papers (2024-02-24T04:40:27Z) - Generalization Error Guaranteed Auto-Encoder-Based Nonlinear Model
Reduction for Operator Learning [12.124206935054389]
In this paper, we utilize low-dimensional nonlinear structures in model reduction by investigating an Auto-Encoder-based Neural Network (AENet).
Our numerical experiments validate the ability of AENet to accurately learn the solution operator of nonlinear partial differential equations.
Our theoretical framework shows that the sample complexity of training AENet is intricately tied to the intrinsic dimension of the modeled process.
arXiv Detail & Related papers (2024-01-19T05:01:43Z) - GIT-Net: Generalized Integral Transform for Operator Learning [58.13313857603536]
This article introduces GIT-Net, a deep neural network architecture for approximating Partial Differential Equation (PDE) operators.
GIT-Net harnesses the fact that differential operators commonly used for defining PDEs can often be represented parsimoniously when expressed in specialized functional bases.
Numerical experiments demonstrate that GIT-Net is a competitive neural network operator, exhibiting small test errors and low evaluation costs across a range of PDE problems.
arXiv Detail & Related papers (2023-12-05T03:03:54Z) - Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data efficient and highly parallelizable operator learning approach with reduced memory requirement and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z) - Learning Generic Solutions for Multiphase Transport in Porous Media via
the Flux Functions Operator [0.0]
DeepONet has emerged as a powerful tool for accelerating the solution of PDEs.
We use Physics-Informed DeepONets (PI-DeepONets) to achieve this mapping without any paired input-output observations.
arXiv Detail & Related papers (2023-07-03T21:10:30Z) - The Parametric Complexity of Operator Learning [5.756283466216181]
This paper proves that for general classes of operators characterized only by their $C^r$- or Lipschitz-regularity, operator learning suffers from a "curse of parametric complexity". The second contribution of the paper is to prove that this general curse can be overcome for solution operators defined by the Hamilton-Jacobi equation. A novel neural operator architecture is introduced, termed HJ-Net, which explicitly takes into account characteristic information of the underlying Hamiltonian system.
arXiv Detail & Related papers (2023-06-28T05:02:03Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art results and yields an average relative gain of 11.5% over seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Generalized Neural Closure Models with Interpretability [28.269731698116257]
We develop a novel and versatile methodology of unified neural partial delay differential equations.
We augment existing/low-fidelity dynamical models directly in their partial differential equation (PDE) forms with both Markovian and non-Markovian neural network (NN) closure parameterizations.
We demonstrate the new generalized neural closure models (gnCMs) framework using four sets of experiments based on advecting nonlinear waves, shocks, and ocean acidification models.
arXiv Detail & Related papers (2023-01-15T21:57:43Z) - Multi-scale Physical Representations for Approximating PDE Solutions
with Graph Neural Operators [14.466945570499183]
We study three multi-resolution schemes with integral kernel operators approximated with Message Passing Graph Neural Networks (MPGNNs).
To validate our study, we conduct extensive MPGNN experiments with well-chosen metrics, considering steady and unsteady PDEs.
arXiv Detail & Related papers (2022-06-29T14:42:03Z) - Neural Operator with Regularity Structure for Modeling Dynamics Driven
by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS) which incorporates the feature vectors for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic $\Phi^4_1$ model and the 2d Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z) - On the eigenvector bias of Fourier feature networks: From regression to
solving multi-scale PDEs with physics-informed neural networks [0.0]
We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ multi-scale random Fourier features and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
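As a rough illustration of the coordinate-embedding idea discussed in this entry, the sketch below builds a standard random Fourier feature map at two frequency scales. It is a generic, hypothetical example (the scales, sizes, and function names are assumptions), not the architecture from the paper itself.

```python
import numpy as np

def fourier_features(x, B):
    """Random Fourier feature embedding of coordinates x (n, d):
    gamma(x) = [cos(2*pi*x@B.T), sin(2*pi*x@B.T)], shape (n, 2m)."""
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
# Frequencies drawn at two scales: a larger standard deviation
# biases the downstream network toward higher-frequency content.
B_low = rng.normal(0.0, 1.0, size=(16, 1))
B_high = rng.normal(0.0, 10.0, size=(16, 1))

x = np.linspace(0.0, 1.0, 64)[:, None]        # (64, 1) coordinates
emb = np.concatenate([fourier_features(x, B_low),
                      fourier_features(x, B_high)], axis=-1)
assert emb.shape == (64, 64)   # 2 scales * (2 * 16) features each
```

The embedded coordinates `emb`, rather than the raw `x`, would then be fed to the network, letting it represent multi-scale content more easily.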
arXiv Detail & Related papers (2020-12-18T04:19:30Z) - Fourier Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space.
We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation.
It is up to three orders of magnitude faster compared to traditional PDE solvers.
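The core FNO mechanism described here (parameterizing the kernel directly in Fourier space) can be sketched in a few lines. This is a simplified single-channel, 1D stand-in using NumPy with random weights in place of trained ones, not the paper's implementation.

```python
import numpy as np

def spectral_conv_1d(u, weights, modes):
    """One FNO-style spectral convolution: FFT the input, multiply
    the lowest `modes` Fourier coefficients by learned complex
    weights, zero the rest, and transform back."""
    n = u.shape[-1]
    u_hat = np.fft.rfft(u)                      # (n//2 + 1,) complex coeffs
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights   # truncated learned kernel
    return np.fft.irfft(out_hat, n=n)

rng = np.random.default_rng(0)
n, modes = 128, 16
w = rng.normal(size=modes) + 1j * rng.normal(size=modes)  # stand-in for trained weights
u = np.sin(2 * np.pi * np.arange(n) / n)
v = spectral_conv_1d(u, w, modes)
assert v.shape == u.shape
```

Because the kernel acts by pointwise multiplication on a fixed number of Fourier modes, the layer is resolution-independent: the same weights can be applied to inputs sampled on finer or coarser grids.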
arXiv Detail & Related papers (2020-10-18T00:34:21Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.