Constants of motion network revisited
- URL: http://arxiv.org/abs/2504.09434v1
- Date: Sun, 13 Apr 2025 04:57:34 GMT
- Title: Constants of motion network revisited
- Authors: Wenqi Fang, Chao Chen, Yongkui Yang, Zheng Wang
- Abstract summary: We propose a novel neural network architecture, built using the singular-value-decomposition (SVD) technique, and a two-phase training algorithm to improve the performance of COMET. Our approach not only retains the advantages of COMET, such as applying to non-Hamiltonian systems and indicating the number of constants of motion, but also can be more lightweight and noise-robust than COMET.
- Score: 8.938225969562643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discovering constants of motion is valuable for understanding dynamical systems, but traditionally requires proficient mathematical skill and keen analytical insight. With the prevalence of deep learning, neural-network methods such as the Constant Of Motion nETwork (COMET) are promising for this scientific problem. Although COMET produces better predictions of the dynamics by exploiting the discovered constants of motion, there is still considerable room to sharpen it. In this paper, we propose a novel neural network architecture, built using the singular-value-decomposition (SVD) technique, and a two-phase training algorithm to improve the performance of COMET. Extensive experiments show that our approach not only retains the advantages of COMET, such as applicability to non-Hamiltonian systems and indication of the number of constants of motion, but is also more lightweight and noise-robust than COMET.
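The abstract does not spell out how the SVD enters the architecture. A minimal sketch of one plausible reading (the `SVDLinear` name and all details below are hypothetical, not the paper's construction): store each linear layer's weight in factored form W = U diag(s) Vᵀ, so the singular values are explicit parameters and the smallest can be truncated for a lighter, more noise-robust model.

```python
import torch
import torch.nn as nn

class SVDLinear(nn.Module):
    """Linear layer stored in factored form W = U @ diag(s) @ V.T.

    Hypothetical sketch: keeping the singular values `s` explicit lets
    one truncate small ones for a lighter model; orthogonality of U and
    V would be maintained during training (e.g. by a penalty, not shown).
    """

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.U = nn.Parameter(torch.empty(out_features, rank))
        self.V = nn.Parameter(torch.empty(in_features, rank))
        nn.init.orthogonal_(self.U)
        nn.init.orthogonal_(self.V)
        self.s = nn.Parameter(torch.ones(rank))  # explicit singular values

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # ((x @ V) * s) @ U.T  ==  x @ W.T with W = U diag(s) V.T
        return (x @ self.V) * self.s @ self.U.T
```

The two-phase training algorithm mentioned in the abstract is not reproduced here.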
Related papers
- Semi-Implicit Neural Ordinary Differential Equations [5.196303789025002]
We present a semi-implicit neural ODE approach that exploits the partitionable structure of the underlying dynamics.
Our technique leads to an implicit neural network with significant computational advantages over existing approaches.
arXiv Detail & Related papers (2024-12-15T20:21:02Z)
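For readers unfamiliar with the semi-implicit idea in the entry above, a toy IMEX (implicit-explicit) Euler step, assuming the stiff part of the dynamics is linear; this is only an illustration, not the paper's scheme for partitioned neural ODEs.

```python
import numpy as np

def imex_euler_step(y, h, A, f_explicit):
    """One implicit-explicit (IMEX) Euler step for dy/dt = A @ y + f(y).

    The stiff linear part A @ y is advanced implicitly, the nonstiff
    remainder f explicitly:
        y_next = y + h * (A @ y_next + f(y)),
    which reduces to a single linear solve per step.
    """
    rhs = y + h * f_explicit(y)
    return np.linalg.solve(np.eye(len(y)) - h * A, rhs)
```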
- Learning low-dimensional dynamics from whole-brain data improves task capture [2.82277518679026]
We introduce a novel approach to learning low-dimensional approximations of neural dynamics using a sequential variational autoencoder (SVAE).
Our method finds smooth dynamics that predict cognitive processes with higher accuracy than classical methods.
We evaluate our approach on various task-fMRI datasets, including motor, working memory, and relational processing tasks.
arXiv Detail & Related papers (2023-05-18T18:43:13Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
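The summary above leaves the contrastive construction implicit. As a hedged sketch of the flavor only (not ConCerNet's actual objective): a learned quantity g(x) can be trained so that states sampled from the same trajectory agree, while states from different trajectories are pushed apart.

```python
import torch

def conserved_quantity_contrastive_loss(g_vals, traj_ids):
    """Toy contrastive loss for a candidate conserved quantity g(x).

    g_vals: (N, d) values of g at sampled states; traj_ids: (N,) ints.
    States on the same trajectory are pulled toward equal values of g,
    states on different trajectories are pushed apart. This captures
    only the flavor of ConCerNet's objective, not its exact form.
    """
    same = (traj_ids[:, None] == traj_ids[None, :]).float()
    dist2 = (g_vals[:, None, :] - g_vals[None, :, :]).pow(2).sum(-1)
    pull = (same * dist2).mean()                      # invariance within a trajectory
    push = ((1.0 - same) * torch.exp(-dist2)).mean()  # separation across trajectories
    return pull + push
```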
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
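Interval bound propagation itself is simple to state. A minimal sketch of the float-valued core that QA-IBP adapts to quantized arithmetic:

```python
import torch

def ibp_affine(lower, upper, W, b):
    """Propagate the box [lower, upper] through x -> W @ x + b.

    Standard interval bound propagation: write the box as center and
    radius; |W| applied to the radius gives the worst-case deviation
    of each output. QA-IBP additionally accounts for the quantized
    arithmetic of a QNN (not shown here).
    """
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    c_out = W @ center + b
    r_out = W.abs() @ radius
    return c_out - r_out, c_out + r_out
```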
- Neural DAEs: Constrained neural networks [4.212663349859165]
We implement related constraint-enforcement methods in residual neural networks, despite some fundamental differences between the two settings.
We show when to use which method based on experiments involving simulations of multi-body pendulums and molecular dynamics scenarios.
Several of our methods are easy to implement in existing code and have limited impact on training performance while giving significant boosts at inference.
arXiv Detail & Related papers (2022-11-25T18:58:28Z)
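As a hedged illustration of one such constraint mechanism (the paper evaluates several variants, and this is not its exact construction): a layer's output can be corrected by the minimum-norm shift that restores a known linear constraint C x = d.

```python
import torch

def project_onto_constraint(y, C, d):
    """Minimum-norm projection of y onto the affine set {x : C @ x = d}.

    One simple way a "DAE-like" constrained network can enforce known
    invariants after each block: solve for the smallest correction that
    cancels the constraint residual.
    """
    residual = C @ y - d
    correction = C.T @ torch.linalg.solve(C @ C.T, residual)
    return y - correction
```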
- Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
arXiv Detail & Related papers (2022-10-12T09:12:59Z)
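The hard constraint above has a compact toy analogue: antisymmetrizing a learned pairwise interaction makes all internal contributions cancel in the sum, so total linear momentum is conserved by construction. `pair_net` below is a hypothetical network mapping displacements to force-like terms; the paper realizes this with antisymmetrical continuous convolutions instead.

```python
import torch

def antisymmetric_pairwise_update(x, pair_net):
    """Per-particle update built from antisymmetric pairwise terms.

    Enforcing f_ij = -f_ji means the updates sum to zero over all
    particles, i.e. total linear momentum is conserved exactly.
    x: (N, D) positions; pair_net maps (..., D) displacements to forces.
    """
    disp = x[:, None, :] - x[None, :, :]     # (N, N, D) displacements
    f = pair_net(disp) - pair_net(-disp)     # antisymmetric by construction
    return f.sum(dim=1)                      # (N, D) momentum-free updates
```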
- Constants of motion network [0.0]
We present a neural network that can simultaneously learn the dynamics of the system and the constants of motion from data.
By exploiting the discovered constants of motion, it can produce better predictions on dynamics.
arXiv Detail & Related papers (2022-08-22T15:07:48Z)
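The defining property behind COMET, and behind the revisited version above, can be written as a differentiable penalty: each learned quantity c_k must have zero time derivative along the learned dynamics, i.e. ∇c_k(x) · ẋ(x) = 0. A hedged sketch of that requirement, assuming a hypothetical `model(x)` that returns the predicted ẋ and the constants; COMET's actual training couples this with the dynamics fit in a more structured way.

```python
import torch

def constants_of_motion_penalty(model, x):
    """Penalize drift of learned constants along the learned dynamics.

    For each learned quantity c_k, enforce grad c_k(x) . xdot(x) = 0.
    Assumes model(x) returns (xdot, c) with c of shape (batch, K).
    """
    x = x.requires_grad_(True)
    xdot, c = model(x)
    penalty = x.new_zeros(())
    for k in range(c.shape[1]):
        grad_ck = torch.autograd.grad(c[:, k].sum(), x, create_graph=True)[0]
        penalty = penalty + (grad_ck * xdot).sum(dim=1).pow(2).mean()
    return penalty
```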
- On Fast Simulation of Dynamical System with Neural Vector Enhanced Numerical Solver [59.13397937903832]
We introduce a deep learning-based corrector called Neural Vector (NeurVec).
NeurVec can compensate for integration errors and enable larger time step sizes in simulations.
Our experiments on a variety of complex dynamical system benchmarks demonstrate that NeurVec exhibits remarkable generalization capability.
arXiv Detail & Related papers (2022-08-07T09:02:18Z)
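The corrector idea is compact enough to sketch: a cheap integrator takes a large step and a small learned network compensates the truncation error. The corrector's exact inputs and architecture in the paper may differ.

```python
def neurvec_step(x, h, f, corrector):
    """Coarse forward-Euler step plus a learned error compensation.

    With a large step size h, plain Euler accumulates truncation error;
    `corrector`, trained against fine-step reference solutions, absorbs
    that error so the large step stays accurate.
    """
    return x + h * f(x) + corrector(x)
```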
- Learning effective dynamics from data-driven stochastic systems [2.4578723416255754]
This work is devoted to investigating the effective dynamics of slow-fast stochastic dynamical systems.
We propose a novel algorithm, including a neural network called Auto-SDE, to learn an invariant slow manifold.
arXiv Detail & Related papers (2022-05-09T09:56:58Z)
- Learning Trajectories of Hamiltonian Systems with Neural Networks [81.38804205212425]
We propose to enhance Hamiltonian neural networks with an estimation of a continuous-time trajectory of the modeled system.
We demonstrate that the proposed integration scheme works well for HNNs, especially with low sampling rates and noisy, irregular observations.
arXiv Detail & Related papers (2022-04-11T13:25:45Z)
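For context, the standard Hamiltonian-neural-network construction that the paper above builds on: learn a scalar H and derive the vector field from it, so H is conserved by design. The paper's contribution, the continuous-time trajectory model, is not shown.

```python
import torch

def hnn_vector_field(H, qp):
    """Hamiltonian vector field from a learned scalar function H.

    With state qp = (q, p) concatenated on the last axis, Hamilton's
    equations give dq/dt = dH/dp and dp/dt = -dH/dq, so trajectories
    of this field conserve H by construction.
    """
    qp = qp.requires_grad_(True)
    grad = torch.autograd.grad(H(qp).sum(), qp, create_graph=True)[0]
    dHdq, dHdp = grad.chunk(2, dim=-1)
    return torch.cat([dHdp, -dHdq], dim=-1)
```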
- Learning to Modulate Random Weights: Neuromodulation-inspired Neural Networks For Efficient Continual Learning [1.9580473532948401]
We introduce a novel neural network architecture inspired by neuromodulation in biological nervous systems.
We show that this approach has strong learning performance per task despite the very small number of learnable parameters.
arXiv Detail & Related papers (2022-04-08T21:12:13Z)
- Continual Learning Beyond a Single Model [28.130513524601145]
We show that employing ensemble models can be a simple yet effective method to improve continual-learning performance.
We propose a computationally cheap algorithm with similar runtime to a single model yet enjoying the performance benefits of ensembles.
arXiv Detail & Related papers (2022-02-20T14:30:39Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in latency and computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)