CLPNets: Coupled Lie-Poisson Neural Networks for Multi-Part Hamiltonian Systems with Symmetries
- URL: http://arxiv.org/abs/2408.16160v1
- Date: Wed, 28 Aug 2024 22:45:15 GMT
- Title: CLPNets: Coupled Lie-Poisson Neural Networks for Multi-Part Hamiltonian Systems with Symmetries
- Authors: Christopher Eldred, François Gay-Balmaz, Vakhtang Putkaradze
- Abstract summary: We develop a novel method of data-based computation and complete phase space learning of Hamiltonian systems.
We derive a novel system of mappings that are built into neural networks for coupled systems.
Our method shows good resistance to the curse of dimensionality, requiring only a few thousand data points for all cases studied.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To accurately compute data-based prediction of Hamiltonian systems, especially the long-term evolution of such systems, it is essential to utilize methods that preserve the structure of the equations over time. We consider a case that is particularly challenging for data-based methods: systems with interacting parts that do not reduce to pure momentum evolution. Such systems are essential in scientific computations. For example, any discretization of a continuum elastic rod can be viewed as interacting elements that can move and rotate in space, with each discrete element moving on the group of rotations and translations $SE(3)$. We develop a novel method of data-based computation and complete phase space learning of such systems. We follow the original framework of \emph{SympNets} (Jin et al., 2020) building the neural network from canonical phase space mappings, and transformations that preserve the Lie-Poisson structure (\emph{LPNets}) as in (Eldred et al., 2024). We derive a novel system of mappings that are built into neural networks for coupled systems. We call such networks Coupled Lie-Poisson Neural Networks, or \emph{CLPNets}. We consider increasingly complex examples for the applications of CLPNets: rotation of two rigid bodies about a common axis, the free rotation of two rigid bodies, and finally the evolution of two connected and interacting $SE(3)$ components. Our method preserves all Casimir invariants of each system to machine precision, irrespective of the quality of the training data, and preserves energy to high accuracy. Our method also shows good resistance to the curse of dimensionality, requiring only a few thousand data points for all cases studied, with the effective dimension varying from three to eighteen. Additionally, the method is highly economical in memory requirements, requiring only about 200 parameters for the most complex case considered.
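The structure-preservation idea is easiest to see for the simplest ingredient the paper builds on, the rigid body on $so(3)^*$: any update that rotates the body angular momentum $\Pi$ preserves the Casimir $\|\Pi\|^2$ exactly, no matter how the rotation parameters are produced. Below is a minimal sketch of such a building block; it is illustrative only, with hypothetical names, and is not the authors' implementation.
```python
import numpy as np

def hat(xi):
    """so(3) hat map: a vector in R^3 -> the corresponding 3x3 skew-symmetric matrix."""
    return np.array([[0.0, -xi[2], xi[1]],
                     [xi[2], 0.0, -xi[0]],
                     [-xi[1], xi[0], 0.0]])

def rot(xi):
    """Rodrigues formula for exp(hat(xi)), an element of SO(3)."""
    theta = np.linalg.norm(xi)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(xi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def lie_poisson_step(Pi, xi):
    """One structure-preserving update of the body angular momentum Pi.

    Because the update is a rotation, the Casimir |Pi|^2 is conserved to
    machine precision regardless of how xi is produced (e.g. by a small
    trained network); only the energy behavior depends on training quality.
    """
    return rot(xi) @ Pi

# Sanity check: the Casimir is preserved exactly.
Pi = np.array([1.0, 2.0, 3.0])
xi = np.array([0.1, -0.2, 0.05])   # would come from a learned module
Pi_next = lie_poisson_step(Pi, xi)
assert np.isclose(Pi @ Pi, Pi_next @ Pi_next)
```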
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without huge computational overhead.
We evaluate our approach on various image- and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Lie-Poisson Neural Networks (LPNets): Data-Based Computing of Hamiltonian Systems with Symmetries [0.0]
An accurate data-based prediction of the long-term evolution of Hamiltonian systems requires a network that preserves the appropriate structure under each time step.
We present two flavors of such networks: one where the parameters of the transformations are computed from data using a dense neural network (LPNets), and another where the composition of transformations is used as building blocks (G-LPNets); a schematic sketch follows this entry.
The resulting methods are important for the construction of accurate data-based methods for simulating the long-term dynamics of physical systems.
arXiv Detail & Related papers (2023-08-29T14:45:23Z)
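Schematically, the two flavors can be read as follows (a hedged, self-contained sketch with illustrative names, not the authors' code): LPNets let a dense network predict the parameters of a single Lie-Poisson-preserving transformation from the current state, while G-LPNets directly train the parameters of a fixed composition of such transformations.
```python
import numpy as np
from scipy.linalg import expm

def hat(xi):
    """so(3) hat map: vector in R^3 -> 3x3 skew-symmetric matrix."""
    return np.array([[0.0, -xi[2], xi[1]],
                     [xi[2], 0.0, -xi[0]],
                     [-xi[1], xi[0], 0.0]])

def lp_map(Pi, xi):
    """A Lie-Poisson-preserving update: rotate the momentum Pi by exp(hat(xi)).
    Each such factor preserves the Casimir |Pi|^2 exactly."""
    return expm(hat(xi)) @ Pi

def lpnet_step(Pi, dense_net):
    """LPNets flavor: a dense network maps the state to the parameters
    of one structure-preserving transformation."""
    return lp_map(Pi, dense_net(Pi))

def g_lpnet_step(Pi, xis):
    """G-LPNets flavor: a fixed composition of parameterized transformations;
    the list xis holds the directly trained parameters."""
    for xi in xis:
        Pi = lp_map(Pi, xi)
    return Pi

# Toy usage: a lambda stands in for a trained dense network.
Pi0 = np.array([1.0, 0.5, -0.2])
print(lpnet_step(Pi0, lambda Pi: 0.1 * Pi))
print(g_lpnet_step(Pi0, [np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.2, 0.0])]))
```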
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation [65.4396959244269]
The paper tackles this challenge by designing a general framework to construct 3D learning architectures that combine SO(3) equivariance with binarization.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a good trade-off between efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z)
- An example of use of Variational Methods in Quantum Machine Learning [0.0]
This paper introduces a quantum neural network for the binary classification of points of a specific geometric pattern on a plane.
The intention was to produce a quantum deep neural network with the minimum number of trainable parameters capable of correctly recognising and classifying points.
arXiv Detail & Related papers (2022-08-07T03:52:42Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Learning Poisson systems and trajectories of autonomous systems via Poisson neural networks [3.225972620435058]
We propose the Poisson neural networks (PNNs) to learn Poisson systems and trajectories of autonomous systems from data.
Based on the Darboux-Lie theorem, the phase flow of a Poisson system can be written as the composition of (1) a coordinate transformation, (2) an extended symplectic map, and (3) the inverse of the transformation (restated as a formula after this entry).
arXiv Detail & Related papers (2020-12-05T22:18:29Z)
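Restated as a formula (with notation chosen here for illustration, not taken from the paper): if $T$ is the coordinate transformation to Darboux coordinates and $\psi_t$ is the extended symplectic map, the phase flow of the Poisson system factorizes as $\varphi_t = T^{-1} \circ \psi_t \circ T$, so learning an invertible $T$ together with a symplectic $\psi_t$ suffices to learn the full Poisson flow.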
- DyCo3D: Robust Instance Segmentation of 3D Point Clouds through Dynamic Convolution [136.7261709896713]
We propose a data-driven approach that generates the appropriate convolution kernels to apply in response to the nature of the instances.
The proposed method achieves promising results on both ScanNetV2 and S3DIS.
It also improves inference speed by more than 25% over the current state-of-the-art.
arXiv Detail & Related papers (2020-11-26T14:56:57Z)
- Deep learning of thermodynamics-aware reduced-order models from data [0.08699280339422537]
We present an algorithm to learn the relevant latent variables of a large-scale discretized physical system.
We then predict its time evolution using thermodynamically consistent deep neural networks.
arXiv Detail & Related papers (2020-07-03T08:49:01Z)