Equivariant geometric convolutions for emulation of dynamical systems
- URL: http://arxiv.org/abs/2305.12585v2
- Date: Fri, 01 Nov 2024 18:24:00 GMT
- Title: Equivariant geometric convolutions for emulation of dynamical systems
- Authors: Wilson G. Gregory, David W. Hogg, Ben Blum-Smith, Maria Teresa Arias, Kaze W. K. Wong, Soledad Villar
- Abstract summary: We use geometric convolutions to enforce coordinate freedom in surrogate machine learning models.
In numerical experiments emulating 2D compressible Navier-Stokes, we see better accuracy and improved stability.
The ease of enforcing coordinate freedom without making major changes to the model architecture provides an exciting recipe for any CNN-based method applied to an appropriate class of problems.
- Score: 6.3003220645859175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning methods are increasingly being employed as surrogate models in place of computationally expensive and slow numerical integrators for a bevy of applications in the natural sciences. However, while the laws of physics are relationships between scalars, vectors, and tensors that hold regardless of the frame of reference or chosen coordinate system, surrogate machine learning models are not coordinate-free by default. We enforce coordinate freedom by using geometric convolutions in three model architectures: a ResNet, a Dilated ResNet, and a UNet. In numerical experiments emulating 2D compressible Navier-Stokes, we see better accuracy and improved stability compared to baseline surrogate models in almost all cases. The ease of enforcing coordinate freedom without making major changes to the model architecture provides an exciting recipe for any CNN-based method applied to an appropriate class of problems.
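The core idea can be illustrated with a much simpler construction than the paper's full geometric-convolution machinery: for scalar images, a convolution becomes equivariant to 90° rotations if its kernel is symmetrized over the rotation group C4. The sketch below (ours, not the authors' code) verifies the equivariance property directly.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 2D cross-correlation with 'valid' padding."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def c4_symmetrize(kernel):
    """Average the kernel over all four 90-degree rotations,
    making it invariant under the rotation group C4."""
    return sum(np.rot90(kernel, k) for k in range(4)) / 4.0

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))
kernel = c4_symmetrize(rng.normal(size=(3, 3)))

# Equivariance check: rotating the input and then convolving
# gives the same result as convolving and then rotating the output.
lhs = conv2d_valid(np.rot90(image), kernel)
rhs = np.rot90(conv2d_valid(image, kernel))
assert np.allclose(lhs, rhs)
```

The paper's geometric convolutions extend this idea from scalar images to images whose pixels are vectors and tensors, where the group action also mixes the components of each pixel.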
Related papers
- Geometry-Informed Neural Operator Transformer [0.8906214436849201]
This work introduces the Geometry-Informed Neural Operator Transformer (GINOT), which integrates the transformer architecture with the neural operator framework to enable forward predictions for arbitrary geometries.
The performance of GINOT is validated on multiple challenging datasets, showcasing its high accuracy and strong generalization capabilities for complex and arbitrary 2D and 3D geometries.
arXiv Detail & Related papers (2025-04-28T03:39:27Z)
- Physics Encoded Blocks in Residual Neural Network Architectures for Digital Twin Models [2.8720819157502344]
This paper presents a generic approach based on a novel physics-encoded residual neural network architecture.
Our method combines physics blocks as mathematical operators from physics-based models with learning blocks comprising feed-forward layers.
Compared to conventional neural network-based methods, our method improves generalizability with substantially low data requirements.
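A minimal sketch of the pattern described, a residual update that composes a known physics operator with a learned feed-forward correction. The choice of a diffusion stencil as the physics block and all names here are illustrative, not taken from the paper.

```python
import numpy as np

def physics_block(u, nu=0.1):
    """Known physics operator: an explicit diffusion step on a 1D field
    with periodic boundaries (a stand-in for a physics-based model)."""
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    return nu * lap

def learning_block(u, W, b):
    """Learned correction: a single feed-forward layer."""
    return np.tanh(W @ u + b)

def physics_encoded_residual_step(u, W, b):
    """Residual update combining the physics and learning blocks."""
    return u + physics_block(u) + learning_block(u, W, b)

rng = np.random.default_rng(1)
u = rng.normal(size=16)
W = 0.01 * rng.normal(size=(16, 16))
b = np.zeros(16)
u_next = physics_encoded_residual_step(u, W, b)
```

Because the physics block carries the known dynamics, the learned block only has to model the residual, which is what reduces the data requirements.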
arXiv Detail & Related papers (2024-11-18T11:58:20Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way to practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
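For context, the object such surrogates learn to replace is the classic BGK collision operator, which relaxes the lattice distributions toward local equilibrium. The toy D1Q3 example below is ours (with a zero-velocity equilibrium for simplicity), not code from the paper.

```python
import numpy as np

def bgk_collision(f, f_eq, tau=0.8):
    """Classic BGK collision: relax distributions toward equilibrium.
    A learned surrogate (e.g. a lattice-equivariant network) would
    replace this operator while respecting the lattice symmetries."""
    return f - (f - f_eq) / tau

# Toy D1Q3 setup: three discrete velocities per lattice site.
f = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.7, 0.2]])
w = np.array([1/6, 2/3, 1/6])          # lattice weights (sum to 1)
rho = f.sum(axis=1, keepdims=True)     # local density
f_eq = rho * w                         # zero-velocity equilibrium
f_post = bgk_collision(f, f_eq)

# Collisions conserve mass: the local density is unchanged.
assert np.allclose(f_post.sum(axis=1), rho.ravel())
```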
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first fully deep-learning-based surrogate model of this kind.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Learning solution of nonlinear constitutive material models using physics-informed neural networks: COMM-PINN [0.0]
We apply physics-informed neural networks to solve the relations for nonlinear, path-dependent material behavior.
One advantage of this work is that it bypasses the repetitive Newton iterations needed to solve nonlinear equations in complex material models.
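To make the claimed advantage concrete, below is the kind of Newton iteration that conventional material models solve at every integration point, and that a trained network replaces with a single forward pass. The specific nonlinear relation here is illustrative, not from the paper.

```python
import numpy as np

def newton_solve(residual, d_residual, x0, tol=1e-10, max_iter=50):
    """Generic scalar Newton iteration: repeatedly linearize and update."""
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            break
        x = x - r / d_residual(x)
    return x

# Illustrative nonlinear constitutive-style relation:
# find stress s such that s + 0.5 * s**3 equals a given strain measure.
strain_measure = 2.0
s = newton_solve(lambda s: s + 0.5 * s**3 - strain_measure,
                 lambda s: 1.0 + 1.5 * s**2,
                 x0=1.0)
assert abs(s + 0.5 * s**3 - strain_measure) < 1e-8
```

In a full simulation this solve is repeated at every integration point and every time step, which is why amortizing it into a network evaluation pays off.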
arXiv Detail & Related papers (2023-04-10T19:58:49Z)
- Learning Integrable Dynamics with Action-Angle Networks [1.2999518604217852]
Action-Angle Networks learn a nonlinear transformation from input coordinates to the action-angle space, where evolution of the system is linear.
Unlike traditional learned simulators, Action-Angle Networks do not employ any higher-order numerical integration methods.
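The structure being learned can be seen in the one system where the action-angle map is known in closed form: the harmonic oscillator. This analytic example is ours, for illustration of why evolution in action-angle space is linear.

```python
import numpy as np

omega = 2.0
q0, p0 = 1.0, 0.5

# Map (q, p) to action-angle variables for H = p**2/2 + omega**2 * q**2/2.
I = (p0**2 + omega**2 * q0**2) / (2 * omega)   # action (conserved)
theta0 = np.arctan2(omega * q0, p0)            # angle

# In action-angle space the dynamics are linear: theta(t) = theta0 + omega*t.
t = 0.7
theta = theta0 + omega * t

# Map back to phase space.
q = np.sqrt(2 * I / omega) * np.sin(theta)
p = np.sqrt(2 * I * omega) * np.cos(theta)

# Agrees with the textbook solution of the oscillator.
q_exact = q0 * np.cos(omega * t) + (p0 / omega) * np.sin(omega * t)
p_exact = p0 * np.cos(omega * t) - omega * q0 * np.sin(omega * t)
assert np.allclose([q, p], [q_exact, p_exact])
```

An Action-Angle Network learns the forward and inverse maps for systems where they are not available analytically; no numerical integrator is needed because the angle advances exactly linearly.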
arXiv Detail & Related papers (2022-11-24T17:37:20Z)
- Equivariant Graph Mechanics Networks with Constraints [83.38709956935095]
We propose Graph Mechanics Network (GMN) which is efficient, equivariant and constraint-aware.
GMN uses generalized coordinates to represent the forward kinematics information (positions and velocities) of a structural object.
Extensive experiments support the advantages of GMN compared to the state-of-the-art GNNs in terms of prediction accuracy, constraint satisfaction and data efficiency.
arXiv Detail & Related papers (2022-03-12T14:22:14Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Non-linear Independent Dual System (NIDS) for Discretization-independent Surrogate Modeling over Complex Geometries [0.0]
Non-linear independent dual system (NIDS) is a deep learning surrogate model for discretization-independent, continuous representation of PDE solutions.
NIDS can be used for prediction over domains with complex, variable geometries and mesh topologies.
Test cases include a vehicle problem with complex geometry and data scarcity, enabled by a training method.
arXiv Detail & Related papers (2021-09-14T23:38:41Z)
- Stabilizing Equilibrium Models by Jacobian Regularization [151.78151873928027]
Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single nonlinear layer.
We propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models.
We show that this regularization adds only minimal computational cost, significantly stabilizes the fixed-point convergence in both forward and backward passes, and scales well to high-dimensional, realistic domains.
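A toy version of the objects involved: a contractive implicit layer, its fixed point, and the Jacobian whose norm the regularizer controls. The tanh layer, sizes, and weight scaling are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10
W = 0.25 * rng.normal(size=(n, n)) / np.sqrt(n)  # small weights -> contraction
U = rng.normal(size=(n, n))
x = rng.normal(size=n)

def f(z):
    """Single implicit layer; the DEQ output is its fixed point z* = f(z*)."""
    return np.tanh(W @ z + U @ x)

# Solve for the fixed point by naive forward iteration.
z = np.zeros(n)
for _ in range(200):
    z = f(z)
assert np.linalg.norm(z - f(z)) < 1e-8

# Jacobian of f at the fixed point: diag(1 - tanh^2) @ W.
J = np.diag(1.0 - np.tanh(W @ z + U @ x) ** 2) @ W
spectral_norm = np.linalg.norm(J, 2)
# Stable convergence corresponds to a small Jacobian norm; the paper's
# regularizer penalizes an estimate of this quantity during training.
assert spectral_norm < 1.0
```

When the Jacobian's spectral norm creeps above one during training, the fixed-point iteration stops converging, which is the instability the regularization is designed to prevent.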
arXiv Detail & Related papers (2021-06-28T00:14:11Z)
- Frame-independent vector-cloud neural network for nonlocal constitutive modelling on arbitrary grids [4.168157981135698]
Constitutive models are widely used for modelling complex systems in science and engineering.
We propose a frame-independent, nonlocal model based on a vector-cloud neural network that can be trained with data.
arXiv Detail & Related papers (2021-03-11T14:16:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.