Reduced-Order Neural Operators: Learning Lagrangian Dynamics on Highly Sparse Graphs
- URL: http://arxiv.org/abs/2407.03925v3
- Date: Fri, 16 May 2025 22:22:21 GMT
- Title: Reduced-Order Neural Operators: Learning Lagrangian Dynamics on Highly Sparse Graphs
- Authors: Hrishikesh Viswanath, Yue Chang, Aleksey Panas, Julius Berner, Peter Yichen Chen, Aniket Bera
- Abstract summary: We present GIOROM, a data-driven discretization-invariant framework for accelerating Lagrangian simulations through reduced-order modeling (ROM). We leverage a data-driven graph-based neural approximation of the PDE solution operator. GIOROM achieves a 6.6$\times$-32$\times$ reduction in input dimensionality while maintaining high-fidelity reconstructions across diverse Lagrangian regimes.
- Score: 19.1312659245072
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulating complex physical systems governed by Lagrangian dynamics often requires solving partial differential equations (PDEs) over high-resolution spatial domains, resulting in substantial computational costs. We present GIOROM (\textit{G}raph \textit{I}nf\textit{O}rmed \textit{R}educed \textit{O}rder \textit{M}odeling), a data-driven discretization-invariant framework for accelerating Lagrangian simulations through reduced-order modeling (ROM). Previous discretization-invariant ROM approaches rely on PDE time-steppers to spatiotemporally evolve low-dimensional reduced-order latent states. Instead, we leverage a data-driven graph-based neural approximation of the PDE solution operator. This operator estimates point-wise function values from a sparse set of input observations, reducing reliance on known governing equations or numerical solvers. Order reduction is achieved by embedding these point-wise estimates within the reduced-order latent space using a learned kernel parameterization. This latent representation enables the reconstruction of the solution at arbitrary spatial query points by evolving latent variables over local neighborhoods on the solution manifold, using the kernel. Empirically, GIOROM achieves a 6.6$\times$-32$\times$ reduction in input dimensionality while maintaining high-fidelity reconstructions across diverse Lagrangian regimes including fluid flows, granular media, and elastoplastic dynamics. The resulting framework enables learnable, data-driven and discretization-invariant order-reduction with reduced reliance on analytical PDE formulations. Our code is at \href{https://github.com/HrishikeshVish/GIOROM}{https://github.com/HrishikeshVish/GIOROM}
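The abstract outlines a pipeline — sparse observations, a graph-based neural operator producing point-wise estimates, a learned kernel embedding into a reduced-order latent state, and reconstruction at arbitrary query points — that can be illustrated with a minimal sketch. The module names, layer widths, and pooling choices below are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class GraphOperatorLayer(nn.Module):
    """One message-passing step of a graph-based neural operator:
    neighbour features are mixed with relative positions (illustrative only)."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim + 3, dim), nn.GELU())
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, h, pos, edges):
        src, dst = edges                                   # (2, n_edges) neighbour index pairs
        m = self.msg(torch.cat([h[src], h[dst], pos[src] - pos[dst]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)    # sum incoming messages per node
        return self.upd(torch.cat([h, agg], dim=-1))

class ReducedOrderSketch(nn.Module):
    """Encode sparse observations -> low-dimensional latent state -> decode at query points."""
    def __init__(self, in_dim, dim=64, latent_dim=16):
        super().__init__()
        self.lift = nn.Linear(in_dim + 3, dim)
        self.operator = GraphOperatorLayer(dim)
        self.to_latent = nn.Linear(dim, latent_dim)        # stand-in for the learned kernel embedding
        self.decode = nn.Sequential(nn.Linear(latent_dim + 3, dim), nn.GELU(),
                                    nn.Linear(dim, in_dim))

    def forward(self, obs, pos, edges, query_pos):
        h = self.operator(self.lift(torch.cat([obs, pos], dim=-1)), pos, edges)
        z = self.to_latent(h).mean(dim=0)                  # pooled reduced-order state
        zq = z.unsqueeze(0).expand(query_pos.shape[0], -1)
        return self.decode(torch.cat([zq, query_pos], dim=-1))   # values at the query points
```

This sketch covers only the spatial encode/decode path under the stated assumptions; the evolution of latent variables over local neighborhoods on the solution manifold, which the abstract describes, is not modeled here.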
Related papers
- Self-Supervised Coarsening of Unstructured Grid with Automatic Differentiation [55.88862563823878]
In this work, we present an original algorithm to coarsen an unstructured grid based on the concepts of differentiable physics. We demonstrate the performance of the algorithm on two PDEs: a linear equation which governs slightly compressible fluid flow in porous media, and the wave equation. Our results show that in the considered scenarios, we reduced the number of grid points up to 10 times while preserving the modeled variable dynamics in the points of interest.
arXiv Detail & Related papers (2025-07-24T11:02:13Z) - Learning Effective Dynamics across Spatio-Temporal Scales of Complex Flows [4.798951413107239]
We propose a novel framework, Graph-based Learning of Effective Dynamics (Graph-LED), that leverages graph neural networks (GNNs) and an attention-based autoregressive model.
We evaluate the proposed approach on a suite of fluid dynamics problems, including flow past a cylinder and flow over a backward-facing step over a range of Reynolds numbers.
arXiv Detail & Related papers (2025-02-11T22:14:30Z) - Proper Latent Decomposition [4.266376725904727]
We compute a reduced set of intrinsic coordinates (latent space) to accurately describe a flow with fewer degrees of freedom than the numerical discretization. Within this numerical framework, we propose an algorithm to perform PLD on the manifold. This work opens opportunities for analyzing autoencoders and latent spaces, nonlinear reduced-order modeling, and scientific insights into the structure of high-dimensional data.
arXiv Detail & Related papers (2024-12-01T12:19:08Z) - Point Cloud Denoising With Fine-Granularity Dynamic Graph Convolutional Networks [58.050130177241186]
Noise perturbations often corrupt 3-D point clouds, hindering downstream tasks such as surface reconstruction, rendering, and further processing.
This paper introduces fine-granularity dynamic graph convolutional networks, called GDGCN, a novel approach to denoising 3-D point clouds.
arXiv Detail & Related papers (2024-11-21T14:19:32Z) - Equation-informed data-driven identification of flow budgets and dynamics [0.0]
We propose a novel hybrid approach to flow clustering.
It consists of characterising each sample point of the system with equation-based features.
The algorithm is implemented in both the Eulerian and Lagrangian frameworks.
arXiv Detail & Related papers (2024-11-14T15:59:41Z) - From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems [20.006163951844357]
We propose a simulation-free framework for training neural ordinary differential equations (NODEs).
We employ the Fourier analysis to estimate temporal and potential high-order spatial gradients from noisy observational data.
Our approach outperforms state-of-the-art methods in terms of training time, dynamics prediction, and robustness.
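The spectral gradient estimation mentioned above can be illustrated with a short, self-contained sketch (not the paper's code): for a uniformly sampled periodic signal, differentiation in time reduces to multiplication by i*omega in the Fourier domain.

```python
import numpy as np

def fourier_time_derivative(x, dt):
    """Estimate dx/dt of a uniformly sampled 1-D signal with the FFT:
    time differentiation becomes multiplication by i*omega in frequency space."""
    omega = 2.0 * np.pi * np.fft.fftfreq(x.shape[0], d=dt)   # angular frequencies
    return np.real(np.fft.ifft(1j * omega * np.fft.fft(x)))

# Example: the spectral derivative of sin(t) over one full period matches cos(t).
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
dx = fourier_time_derivative(np.sin(t), t[1] - t[0])
print(np.max(np.abs(dx - np.cos(t))))   # close to machine precision
```

Noisy or non-periodic data would need filtering or windowing on top of this toy sketch, which is where the paper's treatment of noisy observations goes beyond plain spectral differentiation.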
arXiv Detail & Related papers (2024-05-19T13:15:23Z) - Neural variational Data Assimilation with Uncertainty Quantification using SPDE priors [28.804041716140194]
Recent advances in the deep learning community make it possible to address this problem through a neural architecture embedded in a variational data assimilation framework. In this work, we use the theory of Stochastic Partial Differential Equations (SPDE) and Gaussian Processes (GP) to estimate both the space and time covariance of the state.
arXiv Detail & Related papers (2024-02-02T19:18:12Z) - Equivariant Graph Neural Operator for Modeling 3D Dynamics [148.98826858078556]
We propose the Equivariant Graph Neural Operator (EGNO) to directly model dynamics as trajectories instead of just next-step prediction.
EGNO explicitly learns the temporal evolution of 3D dynamics where we formulate the dynamics as a function over time and learn neural operators to approximate it.
Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods.
arXiv Detail & Related papers (2024-01-19T21:50:32Z) - Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data-efficient and highly parallelizable operator learning approach with reduced memory requirements and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z) - Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z) - A graph convolutional autoencoder approach to model order reduction for parametrized PDEs [0.8192907805418583]
The present work proposes a framework for nonlinear model order reduction based on a Graph Convolutional Autoencoder (GCA-ROM).
We develop a non-intrusive and data-driven nonlinear reduction approach, exploiting GNNs to encode the reduced manifold and enable fast evaluations of parametrized PDEs.
arXiv Detail & Related papers (2023-05-15T12:01:22Z) - Tensor network reduced order models for wall-bounded flows [0.0]
We introduce a widely applicable tensor network-based framework for developing reduced order models.
We consider the incompressible Navier-Stokes equations and the lid-driven cavity in two spatial dimensions.
arXiv Detail & Related papers (2023-03-06T10:33:00Z) - Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art results and yields a relative gain of 11.5% averaged over seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Rigorous dynamical mean field theory for stochastic gradient descent methods [17.90683687731009]
We prove closed-form equations for the exact high-dimensional asymptotics of a family of first-order gradient-based methods.
This includes widely used algorithms such as stochastic gradient descent (SGD) or Nesterov acceleration.
arXiv Detail & Related papers (2022-10-12T21:10:55Z) - Losing momentum in continuous-time stochastic optimisation [42.617042045455506]
Momentum-based optimisation algorithms have become particularly widespread.
In this work, we analyse a continuous-time model for gradient descent with momentum.
We also train a convolutional neural network in an image classification problem.
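For context, a standard continuous-time model for momentum methods (stated here as general background, not necessarily the exact formulation analysed in the paper) is the heavy-ball ODE, obtained as the small-step limit of gradient descent with momentum:

```latex
% Heavy-ball ODE: continuous-time limit of momentum gradient descent;
% gamma > 0 is a friction (damping) coefficient and f the objective.
\ddot{x}(t) + \gamma\,\dot{x}(t) + \nabla f\big(x(t)\big) = 0
```

Large $\gamma$ recovers overdamped gradient-flow behaviour, while small $\gamma$ retains oscillatory momentum effects; this trade-off is what such continuous-time analyses study.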
arXiv Detail & Related papers (2022-09-08T10:46:05Z) - CROM: Continuous Reduced-Order Modeling of PDEs Using Implicit Neural Representations [5.551136447769071]
Excessive runtime of high-fidelity partial differential equation solvers makes them unsuitable for time-critical applications.
We propose to accelerate PDE solvers using reduced-order modeling (ROM).
Our approach builds a smooth, low-dimensional manifold of the continuous vector fields themselves, not their discretization.
arXiv Detail & Related papers (2022-06-06T13:27:21Z) - Hyperbolic Variational Graph Neural Network for Modeling Dynamic Graphs [77.33781731432163]
We learn dynamic graph representations in hyperbolic space, for the first time, aiming to infer node representations.
We present a novel Hyperbolic Variational Graph Network, referred to as HVGNN.
In particular, to model the dynamics, we introduce a Temporal GNN (TGNN) based on a theoretically grounded time encoding approach.
arXiv Detail & Related papers (2021-04-06T01:44:15Z) - Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior, and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from Fokker-Planck equation.
Compared with existing schemes, the Wasserstein gradient flow yields a smoother and near-optimal numerical scheme for approximating real data densities.
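As background for this summary (the first-order picture only; the paper's second-order scheme is not reproduced here), the Fokker-Planck equation can be read as the Wasserstein-2 gradient flow of the relative entropy to the target density $\pi \propto e^{-V}$:

```latex
% Fokker-Planck equation as the Wasserstein-2 gradient flow of KL(rho || pi)
\partial_t \rho_t
  = \nabla \cdot \big( \rho_t \nabla V \big) + \Delta \rho_t
  = \nabla \cdot \Big( \rho_t \, \nabla \tfrac{\delta}{\delta \rho}\, \mathrm{KL}(\rho_t \,\|\, \pi) \Big),
\qquad \pi \propto e^{-V}.
```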
arXiv Detail & Related papers (2019-10-31T02:26:20Z)