Predicting the Stability of Hierarchical Triple Systems with
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2206.12402v1
- Date: Fri, 24 Jun 2022 17:58:13 GMT
- Title: Predicting the Stability of Hierarchical Triple Systems with
Convolutional Neural Networks
- Authors: Florian Lalande and Alessandro Alberto Trani
- Abstract summary: We propose a convolutional neural network model to predict the stability of hierarchical triples.
All trained models are made publicly available, allowing one to predict the stability of hierarchical triple systems $200$ times faster than with pure $N$-body methods.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the long-term evolution of hierarchical triple systems is
challenging due to their inherently chaotic nature, and it requires computationally
expensive simulations. Here we propose a convolutional neural network model to
predict the stability of hierarchical triples by looking at their evolution
during the first $5 \times 10^5$ inner binary orbits. We employ the regularized
few-body code \textsc{tsunami} to simulate $5\times 10^6$ hierarchical triples,
from which we generate a large training and test dataset. We develop twelve
different network configurations that use different combinations of the
triples' orbital elements and compare their performance. Our best model uses six
time series: the semimajor axis ratio, the inner and outer eccentricities, the
mutual inclination, and the arguments of pericenter. This model achieves an area
under the curve of over $95\%$ and identifies the parameters most relevant to
the study of triple-system stability. All trained models are made publicly
available, allowing one to predict the stability of hierarchical triple systems
$200$ times faster than with pure $N$-body methods.
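As a concrete illustration of the setup described above, here is a minimal sketch of a 1D convolutional classifier that ingests the six orbital-element time series and outputs a stability probability. The layer sizes, kernel widths, and depth are assumptions made for the example; the paper's actual architectures ship with the published models.

```python
import torch
import torch.nn as nn

class TripleStabilityCNN(nn.Module):
    """Hypothetical 1D CNN over six orbital-element time series.

    Input shape: (batch, 6, n_steps). The six channels correspond to the
    semimajor axis ratio, inner and outer eccentricities, mutual
    inclination, and arguments of pericenter, sampled during the first
    5e5 inner binary orbits. All layer sizes here are assumptions.
    """

    def __init__(self, n_channels: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(64, 1)  # single logit for "stable"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).squeeze(-1)    # (batch, 64)
        return torch.sigmoid(self.head(z))  # P(stable), shape (batch, 1)

# Example: score a batch of 8 triples sampled at 1024 time steps each.
model = TripleStabilityCNN()
p_stable = model(torch.randn(8, 6, 1024))
```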
Related papers
- CLPNets: Coupled Lie-Poisson Neural Networks for Multi-Part Hamiltonian Systems with Symmetries [0.0]
We develop a novel method of data-based computation and complete phase space learning of Hamiltonian systems.
We derive a novel system of mappings that are built into neural networks for coupled systems.
Our method shows good resistance to the curse of dimensionality, requiring only a few thousand data points for all cases studied.
arXiv Detail & Related papers (2024-08-28T22:45:15Z)
- Beyond Closure Models: Learning Chaotic-Systems via Physics-Informed Neural Operators [78.64101336150419]
Predicting the long-term behavior of chaotic systems is crucial for various applications such as climate modeling.
An alternative to such a fully resolved simulation is to use a coarse grid and then correct its errors through a temporal closure model.
We propose an alternative end-to-end learning approach using a physics-informed neural operator (PINO) that overcomes this limitation.
arXiv Detail & Related papers (2024-08-09T17:05:45Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards the practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- SE(3)-Stochastic Flow Matching for Protein Backbone Generation [54.951832422425454]
We introduce FoldFlow, a series of novel generative models of increasing modeling power based on the flow-matching paradigm over $3\mathrm{D}$ rigid motions.
Our family of FoldFlow generative models offers several advantages over previous approaches to the generative modeling of proteins.
arXiv Detail & Related papers (2023-10-03T19:24:24Z)
- Simulating first-order phase transition with hierarchical autoregressive networks [0.04588028371034406]
We apply the Hierarchical Autoregressive Neural (HAN) network sampling algorithm to the two-dimensional $Q$-state Potts model.
We quantify the performance of the approach in the vicinity of the first-order phase transition and compare it with that of the Wolff cluster algorithm.
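For context, the zero-field $Q$-state Potts model referenced above is given by its standard Hamiltonian (general background, not spelled out in the summary):

```latex
% Zero-field Q-state Potts Hamiltonian; spins take Q discrete values
% and the sum runs over nearest-neighbour pairs of the 2D lattice.
H = -J \sum_{\langle i,\, j \rangle} \delta_{s_i,\, s_j},
\qquad s_i \in \{1, \dots, Q\}
```

In two dimensions the transition is first order for $Q > 4$, which makes this model a natural testbed for the comparison with the Wolff cluster algorithm.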
arXiv Detail & Related papers (2022-12-09T16:04:56Z)
- Offline Reinforcement Learning at Multiple Frequencies [62.08749079914275]
We study how well offline reinforcement learning algorithms can accommodate data with a mixture of frequencies during training.
We present a simple yet effective solution that enforces consistency in the rate of $Q$-value updates to stabilize learning.
arXiv Detail & Related papers (2022-07-26T17:54:49Z)
- Algebraic and machine learning approach to hierarchical triple-star stability [0.0]
We present two approaches to determine the stability of a hierarchical triple-star system.
The first is an improvement on the semi-analytical stability criterion of Mardling & Aarseth (2001), where we introduce a dependence on inner orbital eccentricity.
The second involves a machine learning approach, where we use a multilayer perceptron (MLP) to classify triple-star systems as 'stable' and 'unstable'.
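For reference, the Mardling & Aarseth (2001) criterion that the first approach refines is commonly quoted in roughly the following form (reproduced from the literature, not from this abstract); note that the inner eccentricity does not appear, which is the dependence the paper introduces:

```latex
% Mardling & Aarseth (2001) stability criterion, as commonly quoted.
% q_out = m_3 / (m_1 + m_2); i_mut is the mutual inclination in degrees.
\frac{a_{\rm out}}{a_{\rm in}} \gtrsim
\frac{2.8}{1 - e_{\rm out}}
\left[ \left(1 + q_{\rm out}\right)
\frac{1 + e_{\rm out}}{\sqrt{1 - e_{\rm out}}} \right]^{2/5}
\left( 1 - \frac{0.3\, i_{\rm mut}}{180^{\circ}} \right)
```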
arXiv Detail & Related papers (2022-07-07T08:29:17Z)
- Zero Stability Well Predicts Performance of Convolutional Neural Networks [6.965550605588623]
We find that if a discrete solver of an ordinary differential equation is zero stable, the CNN corresponding to that solver performs well.
Based on this preliminary observation, we provide a higher-order discretization to construct CNNs and then propose a zero-stable network (ZeroSNet).
To guarantee zero stability of the ZeroSNet, we first deduce a structure that meets the consistency conditions and then give a zero-stable region for a training-free parameter.
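As background (a standard numerical-analysis definition, not taken from the paper): a $k$-step linear multistep method $\sum_{j=0}^{k} \alpha_j y_{n+j} = h \sum_{j=0}^{k} \beta_j f_{n+j}$ is zero stable when the roots of its first characteristic polynomial lie in the closed unit disk, with any root on the unit circle being simple:

```latex
% Zero stability (root condition) for a k-step linear multistep method.
\rho(\zeta) = \sum_{j=0}^{k} \alpha_j \zeta^{j} = 0
\;\Longrightarrow\;
|\zeta| \le 1, \quad \text{and } |\zeta| = 1 \text{ only for simple roots}
```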
arXiv Detail & Related papers (2022-06-27T08:07:08Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- A Bayesian neural network predicts the dissolution of compact planetary systems [2.261581864118072]
We introduce a deep learning architecture to push this problem forward for compact systems.
Our model is more than two orders of magnitude more accurate at predicting instability times than analytical estimators.
Our inference model is publicly available in the SPOCK package, with training code open-sourced.
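Since the abstract points to the SPOCK package, a short usage sketch may be useful. The call below follows SPOCK's documented DeepRegressor interface as best recalled here; treat the exact signature as an assumption and check it against the package README.

```python
import rebound
from spock import DeepRegressor  # Bayesian NN model from this paper

# Build a compact three-planet system (all values illustrative only).
sim = rebound.Simulation()
sim.add(m=1.0)                    # solar-mass star
sim.add(m=3e-6, P=1.0)            # ~Earth-mass planets on tight orbits
sim.add(m=3e-6, P=1.32)
sim.add(m=3e-6, P=1.75)
sim.move_to_com()

# Median predicted instability time plus a credible interval
# (signature assumed from the package docs; verify before use).
model = DeepRegressor()
median, lower, upper = model.predict_instability_time(sim)
print(f"t_inst ~ {median:.3g} orbits ({lower:.3g}-{upper:.3g})")
```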
arXiv Detail & Related papers (2021-01-11T19:00:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.