A Bayesian neural network predicts the dissolution of compact planetary
systems
- URL: http://arxiv.org/abs/2101.04117v1
- Date: Mon, 11 Jan 2021 19:00:00 GMT
- Title: A Bayesian neural network predicts the dissolution of compact planetary
systems
- Authors: Miles Cranmer, Daniel Tamayo, Hanno Rein, Peter Battaglia, Samuel
Hadden, Philip J. Armitage, Shirley Ho, David N. Spergel
- Abstract summary: We introduce a deep learning architecture to push forward this problem for compact systems.
Our model is more than two orders of magnitude more accurate at predicting instability times than analytical estimators.
Our inference model is publicly available in the SPOCK package, with training code open-sourced.
- Score: 2.261581864118072
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite over three hundred years of effort, no solutions exist for predicting
when a general planetary configuration will become unstable. We introduce a
deep learning architecture to push forward this problem for compact systems.
While current machine learning algorithms in this area rely on
scientist-derived instability metrics, our new technique learns its own metrics
from scratch, enabled by a novel internal structure inspired by dynamics
theory. Our Bayesian neural network model can accurately predict not only if,
but also when a compact planetary system with three or more planets will go
unstable. Our model, trained directly from short N-body time series of raw
orbital elements, is more than two orders of magnitude more accurate at
predicting instability times than analytical estimators, while also reducing
the bias of existing machine learning algorithms by nearly a factor of three.
Despite being trained on compact resonant and near-resonant three-planet
configurations, the model demonstrates robust generalization to both
non-resonant and higher multiplicity configurations, in the latter case
outperforming models fit to that specific set of integrations. The model
computes instability estimates up to five orders of magnitude faster than a
numerical integrator, and unlike previous efforts provides confidence intervals
on its predictions. Our inference model is publicly available in the SPOCK
package, with training code open-sourced.
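For reference, the inference model is distributed in the SPOCK package (https://github.com/dtamayo/spock). Below is a minimal usage sketch, assuming the DeepRegressor interface shown in the package README (a REBOUND simulation in, instability-time estimates with a confidence interval out); the orbital elements are illustrative values, and the exact signature and units should be checked against the package documentation:

```python
import rebound
from spock import DeepRegressor  # pip install spock

# Build a compact three-planet system with REBOUND (illustrative values).
sim = rebound.Simulation()
sim.add(m=1.0)                          # central star, solar masses
sim.add(m=1e-5, P=1.0, e=0.03, l=0.3)   # inner planet
sim.add(m=1e-5, P=1.2, e=0.03, l=2.8)   # middle planet
sim.add(m=1e-5, P=1.5, e=0.03, l=-0.5)  # outer planet
sim.move_to_com()

model = DeepRegressor()
# Median instability time plus a confidence interval, drawn from the
# Bayesian model's posterior samples.
median, lower, upper = model.predict_instability_time(sim, samples=10000)
print(f"instability time ~ {median:.3g} [{lower:.3g}, {upper:.3g}]")
```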
Related papers
- OPUS: Occupancy Prediction Using a Sparse Set [64.60854562502523]
We present a framework to simultaneously predict occupied locations and classes using a set of learnable queries.
OPUS incorporates a suite of non-trivial strategies to enhance model performance.
Our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at nearly 2x FPS, while our heaviest model surpasses previous best results by 6.1 RayIoU.
arXiv Detail & Related papers (2024-09-14T07:44:22Z)
- On the importance of learning non-local dynamics for stable data-driven climate modeling: A 1D gravity wave-QBO testbed [0.0]
Machine learning (ML) techniques have shown promise in learning subgrid-scale parameterizations for climate models.
However, a major problem with data-driven parameterizations is model instability.
Here, we combine ML theory and climate physics to address a source of instability in NN-based parameterizations.
arXiv Detail & Related papers (2024-07-07T01:15:52Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way to the practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Cheap and Deterministic Inference for Deep State-Space Models of Interacting Dynamical Systems [38.23826389188657]
We present a deep state-space model which employs graph neural networks in order to model the underlying interacting dynamical system.
The predictive distribution is multimodal and has the form of a Gaussian mixture model, where the moments of the Gaussian components can be computed via deterministic moment matching rules.
Our moment matching scheme can be exploited for sample-free inference, leading to more efficient and stable training than Monte Carlo alternatives (see the sketch after this entry).
arXiv Detail & Related papers (2023-05-02T20:30:23Z)
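The closed-form moments that make such sample-free inference possible are easy to illustrate: for a one-dimensional Gaussian mixture, the mean and variance follow directly from the component weights, means, and variances. A minimal sketch of generic moment matching only, not the paper's graph-based model, with made-up numbers:

```python
import numpy as np

# Moments of a 1-D Gaussian mixture sum_k w_k * N(mu_k, var_k),
# computed in closed form instead of by Monte Carlo sampling.
w = np.array([0.3, 0.5, 0.2])     # mixture weights, sum to 1 (illustrative)
mu = np.array([-1.0, 0.0, 2.0])   # component means
var = np.array([0.5, 1.0, 0.25])  # component variances

mean = np.sum(w * mu)               # E[x]   = sum_k w_k mu_k
second = np.sum(w * (var + mu**2))  # E[x^2] = sum_k w_k (var_k + mu_k^2)
variance = second - mean**2         # Var[x] = E[x^2] - E[x]^2
print(f"mixture mean = {mean:.3f}, variance = {variance:.3f}")
```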
- Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and both model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
arXiv Detail & Related papers (2022-07-21T12:11:18Z)
- Predicting the Stability of Hierarchical Triple Systems with Convolutional Neural Networks [68.8204255655161]
We propose a convolutional neural network model to predict the stability of hierarchical triples.
All trained models are made publicly available, allowing the stability of hierarchical triple systems to be predicted $200$ times faster than with pure $N$-body methods.
arXiv Detail & Related papers (2022-06-24T17:58:13Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
arXiv Detail & Related papers (2021-10-08T17:03:34Z)
- Model-free prediction of emergence of extreme events in a parametrically driven nonlinear dynamical system by Deep Learning [0.0]
We predict the emergence of extreme events in a parametrically driven nonlinear dynamical system.
We use three Deep Learning models, namely Multi-Layer Perceptron, Convolutional Neural Network and Long Short-Term Memory.
We find that the Long Short-Term Memory model is the most effective at forecasting the chaotic time series.
arXiv Detail & Related papers (2021-07-14T14:48:57Z)
- Symplectic Neural Networks in Taylor Series Form for Hamiltonian Systems [15.523425139375226]
We propose an effective and lightweight learning algorithm, Symplectic Taylor Neural Networks (Taylor-nets).
We conduct continuous, long-term predictions of a complex Hamiltonian dynamic system based on sparse, short-term observations.
We demonstrate the efficacy of our Taylor-net in predicting a broad spectrum of Hamiltonian dynamic systems, including the pendulum, the Lotka-Volterra, the Kepler, and the Hénon-Heiles systems (a generic sketch of symplectic integration follows after this entry).
arXiv Detail & Related papers (2020-05-11T10:32:29Z)
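To make the symplectic idea above concrete, the sketch below shows a plain leapfrog (Störmer-Verlet) update for a separable Hamiltonian H(q, p) = p^2/2 + V(q) with a neural potential V. This is a generic, assumed illustration of symplectic integration with a learned component (here an untrained PyTorch network), not the Taylor-net architecture itself:

```python
import torch
import torch.nn as nn

# Toy learned potential V(q); stands in for a trained network.
V = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def grad_V(q):
    # dV/dq via autograd, evaluated at q.
    q = q.detach().requires_grad_(True)
    (dV,) = torch.autograd.grad(V(q).sum(), q)
    return dV

def leapfrog(q, p, dt, steps):
    # Symplectic (Stoermer-Verlet) integration of H(q, p) = p^2/2 + V(q).
    for _ in range(steps):
        p = p - 0.5 * dt * grad_V(q)  # half kick
        q = q + dt * p                # drift (unit mass)
        p = p - 0.5 * dt * grad_V(q)  # half kick
    return q, p

q0 = torch.tensor([[1.0]])  # initial position
p0 = torch.tensor([[0.0]])  # initial momentum
q1, p1 = leapfrog(q0, p0, dt=0.01, steps=100)
print(f"q(T) = {q1.item():.3f}, p(T) = {p1.item():.3f}")
```

Because each kick/drift substep is a shear map, the composition preserves phase-space volume exactly, which is what keeps long-horizon predictions of Hamiltonian systems stable.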