Neural DAEs: Constrained neural networks
- URL: http://arxiv.org/abs/2211.14302v4
- Date: Tue, 12 Mar 2024 14:02:49 GMT
- Title: Neural DAEs: Constrained neural networks
- Authors: Tue Boesen, Eldad Haber, Uri Michael Ascher
- Abstract summary: We draw on methods from differential-algebraic equations and differential equations on manifolds and implement them in residual neural networks, despite some fundamental scenario differences.
We show when to use which method based on experiments involving simulations of multi-body pendulums and molecular dynamics scenarios.
Several of our methods are easy to implement in existing code and have limited impact on training performance while giving significant boosts at inference time.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article investigates the effect of explicitly adding auxiliary algebraic
trajectory information to neural networks for dynamical systems. We draw
inspiration from the field of differential-algebraic equations and differential
equations on manifolds and implement related methods in residual neural
networks, despite some fundamental scenario differences. Constraint or
auxiliary information effects are incorporated through stabilization as well as
projection methods, and we show when to use which method based on experiments
involving simulations of multi-body pendulums and molecular dynamics scenarios.
Several of our methods are easy to implement in existing code and have limited
impact on training performance while giving significant boosts at inference
time.
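To make the two families of techniques concrete, here is a minimal numpy sketch of a residual-style update combined with (i) a stabilization term that damps drift away from an algebraic constraint g(x) = 0 and (ii) an explicit post-step projection back onto the constraint set. The unit-circle constraint, step sizes, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def g(x):
    """Illustrative algebraic constraint: unit-norm state, g(x) = |x|^2 - 1."""
    return np.array([x @ x - 1.0])

def g_jac(x):
    """Jacobian of g at x (here a 1 x n row vector)."""
    return 2.0 * x[None, :]

def stabilized_step(x, f, h=0.05, gamma=1.0):
    """Residual update x + h*f(x) plus a correction -gamma*h*J^T g(x)
    that pulls the state back toward the manifold g(x) = 0."""
    J = g_jac(x)
    return x + h * f(x) - gamma * h * (J.T @ g(x))

def projected_step(x, f, h=0.05, newton_iters=3):
    """Residual update followed by approximate projection onto g(x) = 0
    via a few Gauss-Newton corrections."""
    y = x + h * f(x)
    for _ in range(newton_iters):
        J = g_jac(y)
        lam = np.linalg.solve(J @ J.T, g(y))  # small (1 x 1) system here
        y = y - J.T @ lam
    return y

# Toy "learned" dynamics: a rotation plus a spurious radial drift.
f = lambda x: np.array([-x[1], x[0]]) + 0.05 * x

x_s = x_p = np.array([1.0, 0.0])
for _ in range(200):
    x_s = stabilized_step(x_s, f)
    x_p = projected_step(x_p, f)
print(g(x_s), g(x_p))  # both remain near 0; the raw update drifts away
```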
Related papers
- Projected Neural Differential Equations for Learning Constrained Dynamics [3.570367665112327]
We introduce a new method for constraining neural differential equations, based on projecting the learned vector field onto the tangent space of the constraint manifold.
PNDEs outperform existing methods while requiring fewer hyperparameters.
The proposed approach demonstrates significant potential for enhancing the modeling of constrained dynamical systems.
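For a constraint g(x) = 0 with Jacobian J, the tangent-space projection described here has the closed form f ↦ (I - Jᵀ(JJᵀ)⁻¹J) f, i.e. the learned field minus its component normal to the manifold. A minimal numpy sketch under that assumption (the pendulum constraint and all names are illustrative, not taken from the paper):

```python
import numpy as np

def project_to_tangent(f_x, J):
    """P f = f - J^T (J J^T)^{-1} J f: remove the component of the
    learned field normal to the constraint manifold."""
    lam = np.linalg.solve(J @ J.T, J @ f_x)
    return f_x - J.T @ lam

# Example: planar pendulum position q with fixed rod length L,
# constraint g(q) = |q|^2 - L^2 and Jacobian J = 2 q^T.
L = 1.0
q = np.array([0.6, -0.8])       # a point on the circle of radius L
f_q = np.array([1.0, 1.0])      # some "learned" velocity at q
J = 2.0 * q[None, :]

v = project_to_tangent(f_q, J)
print(J @ v)  # ~0: the projected field is tangent to the constraint
```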
arXiv Detail & Related papers (2024-10-31T06:32:43Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
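The stated reduction can be illustrated generically: after discretizing a linear ODE, each step's residual is linear in the unknown trajectory values, so minimizing the summed absolute residuals (with slack variables bounding each |residual|) is a linear program. The sketch below shows this generic construction with scipy; it is not the paper's NeuRLP solver, and the ODE, sizes, and names are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Discretize x'(t) = a*x(t), x(0) = 1 on [0, T] with forward differences.
a, T, N = -1.0, 1.0, 50
h = T / N

# Unknowns z = [x_0..x_N, s_1..s_N]; each slack s_k bounds |r_k| with
# r_k = x_{k+1} - (1 + h*a) * x_k the residual of step k.
n_x, n_s = N + 1, N
c = np.concatenate([np.zeros(n_x), np.ones(n_s)])  # minimize sum of slacks

A_ub, b_ub = [], []
for k in range(N):
    row = np.zeros(n_x + n_s)
    row[k], row[k + 1] = -(1.0 + h * a), 1.0
    up = row.copy(); up[n_x + k] = -1.0      #  r_k - s_k <= 0
    dn = -row;       dn[n_x + k] = -1.0      # -r_k - s_k <= 0
    A_ub += [up, dn]; b_ub += [0.0, 0.0]

A_eq = np.zeros((1, n_x + n_s)); A_eq[0, 0] = 1.0   # x_0 = 1
bounds = [(None, None)] * n_x + [(0, None)] * n_s   # x free, slacks >= 0

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=[1.0], bounds=bounds, method="highs")
x = res.x[:n_x]
print(x[-1], np.exp(a * T))  # LP trajectory endpoint vs the exact e^{aT}
```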
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Deep Learning-based Analysis of Basins of Attraction [49.812879456944984]
This research addresses the challenge of characterizing the complexity and unpredictability of basins within various dynamical systems.
The main focus is on demonstrating the efficiency of convolutional neural networks (CNNs) in this field.
arXiv Detail & Related papers (2023-09-27T15:41:12Z)
- Neural Galerkin Schemes with Active Learning for High-Dimensional Evolution Equations [44.89798007370551]
This work proposes Neural Galerkin schemes based on deep learning that generate training data with active learning for numerically solving high-dimensional partial differential equations.
Neural Galerkin schemes build on the Dirac-Frenkel variational principle to train networks by minimizing the residual sequentially over time.
Our finding is that the proposed Neural Galerkin schemes' active gathering of training data is key to numerically realizing the expressive power of networks in high dimensions.
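A rough sketch of the Dirac-Frenkel idea behind these schemes: at each time, the parameter velocity θ̇ is the least-squares solution of (∂u/∂θ) θ̇ ≈ F(u_θ) at sampled points, and resampling the points every step loosely stands in for the active gathering of training data. The closed-form Gaussian ansatz below replaces a network so its Jacobian is analytic; the advection equation and all names are illustrative assumptions.

```python
import numpy as np

def u(theta, x):
    """Gaussian ansatz u(x; theta) = A * exp(-B * (x - m)^2)."""
    A, B, m = theta
    return A * np.exp(-B * (x - m) ** 2)

def du_dtheta(theta, x):
    """Analytic Jacobian of the ansatz w.r.t. theta = (A, B, m)."""
    A, B, m = theta
    val = u(theta, x)
    return np.stack([val / A,                     # d/dA
                     -(x - m) ** 2 * val,         # d/dB
                     2.0 * B * (x - m) * val],    # d/dm
                    axis=1)

def rhs(theta, x, c=1.0):
    """F(u) = -c * u_x for the advection equation u_t = -c * u_x."""
    A, B, m = theta
    return 2.0 * c * B * (x - m) * u(theta, x)

theta, h = np.array([1.0, 4.0, 0.0]), 1e-3
for _ in range(1000):
    x = np.random.uniform(-2.0, 4.0, size=200)  # fresh samples each step
    J = du_dtheta(theta, x)
    theta_dot, *_ = np.linalg.lstsq(J, rhs(theta, x), rcond=None)
    theta = theta + h * theta_dot               # explicit Euler in time

print(theta)  # the center m should have moved to roughly c * t = 1.0
```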
arXiv Detail & Related papers (2022-03-02T19:09:52Z)
- DAE-PINN: A Physics-Informed Neural Network Model for Simulating Differential-Algebraic Equations with Application to Power Networks [8.66798555194688]
We develop DAE-PINN, the first effective deep-learning framework for learning and simulating the solution trajectories of nonlinear differential-algebraic equations.
Our framework enforces the DAEs as (approximate) hard constraints on the neural network using a penalty-based method.
We showcase the effectiveness and accuracy of DAE-PINN by learning and simulating the solution trajectories of a three-bus power network.
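A minimal sketch of penalty-based enforcement in the spirit described above: one network predicts both the differential and the algebraic variables, and the training loss adds a weighted algebraic residual to the usual differential residual. The toy DAE, penalty weight, and architecture are assumptions, not DAE-PINN's actual setup.

```python
import torch

# Toy semi-explicit DAE:  x'(t) = -y(t),  0 = y(t) - x(t),  x(0) = 1,
# whose exact solution is x(t) = exp(-t) with y = x.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 2))   # outputs (x, y)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
mu = 10.0  # penalty weight on the algebraic residual

for step in range(5000):
    t = torch.rand(128, 1, requires_grad=True)
    out = net(t)
    x, y = out[:, :1], out[:, 1:]
    dxdt = torch.autograd.grad(x, t, torch.ones_like(x),
                               create_graph=True)[0]
    loss_ode = ((dxdt + y) ** 2).mean()       # differential residual
    loss_alg = ((y - x) ** 2).mean()          # algebraic residual
    x0 = net(torch.zeros(1, 1))[:, :1]
    loss_ic = ((x0 - 1.0) ** 2).mean()        # initial condition
    loss = loss_ode + mu * loss_alg + loss_ic
    opt.zero_grad(); loss.backward(); opt.step()
```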
arXiv Detail & Related papers (2021-09-09T14:30:28Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
- Finite Difference Neural Networks: Fast Prediction of Partial Differential Equations [5.575293536755126]
We propose a novel neural network framework, finite difference neural networks (FDNet), to learn partial differential equations from data.
Specifically, the finite-difference-inspired network learns the underlying governing partial differential equations from trajectory data.
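A hedged sketch of the finite-difference-features idea: spatial derivatives are formed with stencils on a periodic grid and fed to a small network, which is fit to the finite-difference time derivative of the observed trajectory. The heat-equation data, grid sizes, and architecture are illustrative assumptions, not FDNet's implementation.

```python
import torch

dx, dt, nx, nt = 0.1, 0.01, 128, 200
xs = torch.arange(nx) * dx

# Synthetic training trajectory from the heat equation u_t = 0.5 * u_xx.
u = torch.exp(-((xs - nx * dx / 2) ** 2))
snaps = [u]
for _ in range(nt):
    u_xx = (torch.roll(u, -1) - 2 * u + torch.roll(u, 1)) / dx ** 2
    u = u + dt * 0.5 * u_xx
    snaps.append(u)
snaps = torch.stack(snaps)               # (nt + 1, nx)

def features(u):
    """Finite-difference features (u, u_x, u_xx) on a periodic grid."""
    u_x = (torch.roll(u, -1, -1) - torch.roll(u, 1, -1)) / (2 * dx)
    u_xx = (torch.roll(u, -1, -1) - 2 * u + torch.roll(u, 1, -1)) / dx ** 2
    return torch.stack([u, u_x, u_xx], dim=-1)

net = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

u_now, u_next = snaps[:-1], snaps[1:]
target = (u_next - u_now) / dt           # finite-difference u_t
for step in range(2000):
    pred = net(features(u_now)).squeeze(-1)
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```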
arXiv Detail & Related papers (2020-06-02T19:17:58Z)
- Physics-based polynomial neural networks for one-shot learning of dynamical systems from one or a few samples [0.0]
The paper describes practical results on both a simple pendulum and one of the largest X-ray sources worldwide.
It is demonstrated in practice that the proposed approach allows recovering complex physics from noisy, limited, and partial observations.
arXiv Detail & Related papers (2020-05-24T09:27:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.