NUNO: A General Framework for Learning Parametric PDEs with Non-Uniform
Data
- URL: http://arxiv.org/abs/2305.18694v2
- Date: Wed, 31 May 2023 06:39:10 GMT
- Title: NUNO: A General Framework for Learning Parametric PDEs with Non-Uniform
Data
- Authors: Songming Liu, Zhongkai Hao, Chengyang Ying, Hang Su, Ze Cheng, Jun Zhu
- Abstract summary: We introduce the Non-Uniform Neural Operator (NUNO), a framework for efficient operator learning with non-uniform data.
Using a K-D tree-based domain decomposition, we transform non-uniform data into uniform grids while effectively controlling interpolation error, attaining both speed and accuracy on non-uniform data.
Our framework reduces error rates by up to 60% and speeds up training by 2x to 30x.
- Score: 25.52271761404213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The neural operator has emerged as a powerful tool in learning mappings
between function spaces in PDEs. However, when faced with real-world physical
data, which are often highly non-uniformly distributed, it is challenging to
use mesh-based techniques such as the FFT. To address this, we introduce the
Non-Uniform Neural Operator (NUNO), a comprehensive framework designed for
efficient operator learning with non-uniform data. Leveraging a K-D tree-based
domain decomposition, we transform non-uniform data into uniform grids while
effectively controlling interpolation error, thereby attaining both speed and
accuracy when learning from non-uniform data. We conduct extensive experiments on
2D elasticity, (2+1)D channel flow, and a 3D multi-physics heatsink, which, to
our knowledge, marks a novel exploration into 3D PDE problems with complex
geometries. Our framework has reduced error rates by up to 60% and enhanced
training speeds by 2x to 30x. The code is now available at
https://github.com/thu-ml/NUNO.
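As a rough illustration of the core idea (details assumed here, not taken from the paper's released code), the K-D tree decomposition followed by per-subdomain interpolation onto uniform grids might look like the following sketch, where the split threshold, grid resolution, and interpolation method are all illustrative choices:

```python
# Hedged sketch of the NUNO preprocessing idea: recursively split a
# non-uniform 2D point cloud with a K-D-tree-style median decomposition,
# then interpolate each subdomain onto its own small uniform grid so that
# grid-based operators (e.g., FFT-based layers) can be applied per subdomain.
import numpy as np
from scipy.interpolate import griddata

def kd_decompose(points, values, max_points=256):
    """Recursively split points along the widest axis at the median."""
    if len(points) <= max_points:
        return [(points, values)]
    axis = np.argmax(points.max(axis=0) - points.min(axis=0))
    order = np.argsort(points[:, axis])
    mid = len(points) // 2
    left = kd_decompose(points[order[:mid]], values[order[:mid]], max_points)
    right = kd_decompose(points[order[mid:]], values[order[mid:]], max_points)
    return left + right

def to_uniform_grid(points, values, res=16):
    """Interpolate scattered values in a subdomain onto a res x res grid."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    gx, gy = np.meshgrid(np.linspace(lo[0], hi[0], res),
                         np.linspace(lo[1], hi[1], res), indexing="ij")
    grid_vals = griddata(points, values, (gx, gy), method="linear")
    # Fill any holes outside the convex hull with nearest-neighbor values.
    nearest = griddata(points, values, (gx, gy), method="nearest")
    return np.where(np.isnan(grid_vals), nearest, grid_vals)

rng = np.random.default_rng(0)
pts = rng.random((2000, 2))              # non-uniform sample locations
vals = np.sin(4 * pts[:, 0]) * pts[:, 1]  # toy scalar field at those points
subdomains = kd_decompose(pts, vals)
grids = [to_uniform_grid(p, v) for p, v in subdomains]
print(len(subdomains), grids[0].shape)
```

Each subdomain's uniform grid can then be processed by a standard grid-based neural operator, with the decomposition keeping per-subdomain point densities roughly balanced so interpolation error stays controlled.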
Related papers
- Tail-Aware Post-Training Quantization for 3D Geometry Models [58.79500829118265]
Post-Training Quantization (PTQ) enables efficient inference without retraining. PTQ fails to transfer effectively to 3D models due to intricate feature distributions and prohibitive calibration overhead. We propose TAPTQ, a Tail-Aware Post-Training Quantization pipeline for 3D geometric learning.
arXiv Detail & Related papers (2026-02-02T07:21:15Z)
- Data-Efficient Time-Dependent PDE Surrogates: Graph Neural Simulators vs. Neural Operators [0.0]
We propose Graph Neural Simulators (GNS) as a principled surrogate modeling paradigm for time-dependent partial differential equations (PDEs). GNS combines message passing with numerical time-stepping schemes to learn PDE dynamics by modeling instantaneous time derivatives. Results demonstrate that GNS is markedly more data-efficient, achieving less than 1% relative L2 error using only 3% of available trajectories.
arXiv Detail & Related papers (2025-09-07T17:54:23Z)
- Geometric Operator Learning with Optimal Transport [77.16909146519227]
We propose integrating optimal transport (OT) into operator learning for partial differential equations (PDEs) on complex geometries. For 3D simulations focused on surfaces, our OT-based neural operator embeds the surface geometry into a 2D parameterized latent space. Experiments with the Reynolds-averaged Navier-Stokes (RANS) equations on the ShapeNet-Car and DrivAerNet-Car datasets show that our method achieves better accuracy while reducing computational expense.
arXiv Detail & Related papers (2025-07-26T21:28:25Z)
- Spectral-inspired Operator Learning with Limited Data and Unknown Physics [10.143396024546368]
The Spectral-Inspired Neural Operator (SINO) can model complex systems from just 2-5 trajectories without requiring explicit PDE terms. To model nonlinear effects, SINO employs a Pi-block that performs multiplicative operations on spectral features, complemented by a low-pass filter to suppress aliasing. Experiments on both 2D and 3D PDE benchmarks demonstrate that SINO achieves state-of-the-art performance, with accuracy improvements of 1-2 orders of magnitude.
arXiv Detail & Related papers (2025-05-27T07:25:13Z)
- Enabling Automatic Differentiation with Mollified Graph Neural Operators [75.3183193262225]
We propose the mollified graph neural operator (mGNO), the first method to leverage automatic differentiation and compute exact gradients on arbitrary geometries.
For a PDE example on regular grids, mGNO paired with autograd reduced the L2 relative data error by 20x compared to finite differences.
It can also solve PDEs on unstructured point clouds seamlessly, using physics losses only, at resolutions vastly lower than those needed for finite differences to be accurate enough.
arXiv Detail & Related papers (2025-04-11T06:16:30Z)
- Separable Operator Networks [4.688862638563124]
Operator learning has become a powerful tool in machine learning for modeling complex physical systems governed by partial differential equations (PDEs).
We introduce Separable Operator Networks (SepONet), a novel framework that significantly enhances the efficiency of physics-informed operator learning.
SepONet uses independent trunk networks to learn basis functions separately for different coordinate axes, enabling faster and more memory-efficient training.
arXiv Detail & Related papers (2024-07-15T21:43:41Z)
- Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z)
- Separable Physics-Informed Neural Networks [6.439575695132489]
We propose SPINN, a network architecture and training algorithm for physics-informed neural networks (PINNs).
SPINN operates on a per-axis basis to significantly reduce the number of network propagations in multi-dimensional PDEs.
We show that SPINN can solve a chaotic (2+1)-d Navier-Stokes equation significantly faster than the best-performing prior method.
arXiv Detail & Related papers (2023-06-28T07:11:39Z)
- Fast-SNARF: A Fast Deformer for Articulated Neural Fields [92.68788512596254]
We propose a new articulation module for neural fields, Fast-SNARF, which finds accurate correspondences between canonical space and posed space.
Fast-SNARF is a drop-in replacement for our previous work, SNARF, while significantly improving its computational efficiency.
Because learning of deformation maps is a crucial component in many 3D human avatar methods, we believe that this work represents a significant step towards the practical creation of 3D virtual humans.
arXiv Detail & Related papers (2022-11-28T17:55:34Z)
- SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation [65.4396959244269]
The paper tackles the challenge by designing a general framework to construct 3D learning architectures.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a good trade-off among efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z)
- Fourier Neural Operator with Learned Deformations for PDEs on General Geometries [75.91055304134258]
We propose a new framework, viz., geo-FNO, to solve PDEs on arbitrary geometries.
Geo-FNO learns to deform the input (physical) domain, which may be irregular, into a latent space with a uniform grid.
We consider a variety of PDEs, such as the elasticity, plasticity, Euler, and Navier-Stokes equations, and study both forward modeling and inverse design problems.
arXiv Detail & Related papers (2022-07-11T21:55:47Z)
- Towards Large-Scale Learned Solvers for Parametric PDEs with Model-Parallel Fourier Neural Operators [3.0384874162715856]
Fourier neural operators (FNOs) are a recently introduced neural network architecture for learning solution operators of partial differential equations.
We propose a model-parallel version of FNOs based on domain-decomposition of both the input data and network weights.
We demonstrate that our model-parallel FNO is able to predict time-varying PDE solutions with over 3.2 billion variables.
arXiv Detail & Related papers (2022-04-04T02:12:03Z)
- A Discontinuity Capturing Shallow Neural Network for Elliptic Interface Problems [0.0]
A Discontinuity Capturing Shallow Neural Network (DCSNN) is developed for approximating $d$-dimensional piecewise continuous functions and for solving elliptic interface problems.
The DCSNN model is comparably efficient because only a moderate number of parameters needs to be trained.
arXiv Detail & Related papers (2021-06-10T08:40:30Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
The Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical Polyadic (CP) decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes [86.2129580231191]
Adjoint Rigid Transform (ART) Network is a neural module which can be integrated with a variety of 3D networks.
ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many tasks.
We will release our code and pre-trained models for further research.
arXiv Detail & Related papers (2021-02-01T20:58:45Z)
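As a hedged aside, the per-axis factorization shared by SepONet and SPINN above can be sketched as an outer product of one-dimensional basis functions; the architecture details below (network sizes, rank, untrained random weights) are illustrative assumptions, not taken from either paper:

```python
# Sketch of a separable per-axis representation: u(x, y) is approximated as
# sum_r f_r(x) g_r(y), so each coordinate axis is processed by its own small
# network and the full 2D grid is never fed through a joint network.
import numpy as np

rng = np.random.default_rng(0)
rank = 8  # number of basis-function pairs in the separable expansion

def tiny_mlp(inputs, w1, b1, w2):
    """One-hidden-layer MLP mapping a 1D coordinate array to `rank` basis values."""
    h = np.tanh(inputs[:, None] * w1 + b1)  # (n, hidden)
    return h @ w2                            # (n, rank)

x = np.linspace(0.0, 1.0, 64)
y = np.linspace(0.0, 1.0, 48)
# Random (untrained) per-axis networks, one per coordinate axis.
wx1, bx1, wx2 = rng.normal(size=16), rng.normal(size=16), rng.normal(size=(16, rank))
wy1, by1, wy2 = rng.normal(size=16), rng.normal(size=16), rng.normal(size=(16, rank))
bx = tiny_mlp(x, wx1, bx1, wx2)  # (64, rank) basis values along x
by = tiny_mlp(y, wy1, by1, wy2)  # (48, rank) basis values along y
# Separable reconstruction on the full 64 x 48 grid via an outer-product sum.
u = np.einsum("xr,yr->xy", bx, by)
print(u.shape)  # full field from only 64 + 48 network evaluations
```

The memory and speed advantage comes from evaluating 64 + 48 one-dimensional inputs instead of 64 x 48 joint coordinate pairs, which compounds rapidly in higher dimensions.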
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.