Algebraic and machine learning approach to hierarchical triple-star
stability
- URL: http://arxiv.org/abs/2207.03151v1
- Date: Thu, 7 Jul 2022 08:29:17 GMT
- Title: Algebraic and machine learning approach to hierarchical triple-star
stability
- Authors: Pavan Vynatheya, Adrian S. Hamers, Rosemary A. Mardling and Earl P.
Bellinger
- Abstract summary: We present two approaches to determine the stability of a hierarchical triple-star system.
The first is an improvement on the semi-analytical stability criterion of Mardling & Aarseth (2001), where we introduce a dependence on inner orbital eccentricity.
The second involves a machine learning approach, where we use a multilayer perceptron (MLP) to classify triple-star systems as `stable' and `unstable'.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present two approaches to determine the dynamical stability of a
hierarchical triple-star system. The first is an improvement on the
semi-analytical stability criterion of Mardling & Aarseth (2001), where we
introduce a dependence on inner orbital eccentricity and improve the dependence
on mutual orbital inclination. The second involves a machine learning approach,
where we use a multilayer perceptron (MLP) to classify triple-star systems as
`stable' and `unstable'. To achieve this, we generate a large training data set
of 10^6 hierarchical triples using the N-body code MSTAR. Both our approaches
perform better than the original Mardling & Aarseth (2001) stability criterion,
with the MLP model performing the best. The improved stability formula and the
machine learning model have overall classification accuracies of 93 % and 95 %
respectively. Our MLP model, which accurately predicts the stability of any
hierarchical triple-star system within the parameter ranges studied with almost
no computation required, is publicly available on Github in the form of an
easy-to-use Python script.
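For context, the original Mardling & Aarseth (2001) criterion that both approaches build on is, in its commonly quoted form, a threshold on the outer-to-inner semimajor-axis ratio that depends on the outer mass ratio, outer eccentricity, and (via an empirical correction) the mutual inclination. A minimal sketch of that commonly quoted form follows; the function names are illustrative, and this is the original criterion, not the paper's improved formula or MLP model:

```python
import math

def ma01_critical_ratio(q_out, e_out, i_deg=0.0):
    """Commonly quoted Mardling & Aarseth (2001) stability criterion.

    Returns the critical ratio a_out / a_in above which a hierarchical
    triple is expected to be dynamically stable.

    q_out : outer mass ratio m3 / (m1 + m2)
    e_out : outer orbital eccentricity (0 <= e_out < 1)
    i_deg : mutual inclination in degrees (empirical correction term)
    """
    # Critical outer periastron distance in units of a_in.
    base = 2.8 * ((1.0 + q_out) * (1.0 + e_out)
                  / math.sqrt(1.0 - e_out)) ** 0.4
    # Convert periastron to semimajor axis via a_out = r_p / (1 - e_out),
    # then apply the empirical inclination factor.
    return base / (1.0 - e_out) * (1.0 - 0.3 * i_deg / 180.0)

def is_stable(a_out, a_in, q_out, e_out, i_deg=0.0):
    """Classify a triple as stable if a_out/a_in exceeds the critical ratio."""
    return a_out / a_in > ma01_critical_ratio(q_out, e_out, i_deg)
```

For a circular, coplanar outer orbit with equal outer mass ratio (q_out = 1), this gives a critical ratio of about 3.7; systems with wider outer orbits relative to the inner orbit are classified as stable.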
Related papers
- Distributionally Robust Model-based Reinforcement Learning with Large
State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics.
arXiv Detail & Related papers (2023-09-05T13:42:11Z)
- Matbench Discovery -- A framework to evaluate machine learning crystal stability predictions [2.234359119457391]
Matbench Discovery simulates the deployment of machine learning (ML) energy models in a search for stable inorganic crystals.
We address the disconnect between (i) thermodynamic stability and formation energy and (ii) in-domain vs out-of-distribution performance.
arXiv Detail & Related papers (2023-08-28T22:29:57Z) - Numerically Stable Sparse Gaussian Processes via Minimum Separation
using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
arXiv Detail & Related papers (2022-10-14T15:20:17Z) - Predicting the Stability of Hierarchical Triple Systems with
Convolutional Neural Networks [68.8204255655161]
We propose a convolutional neural network model to predict the stability of hierarchical triples.
All trained models are made publicly available, allowing prediction of the stability of hierarchical triple systems $200$ times faster than pure $N$-body methods.
arXiv Detail & Related papers (2022-06-24T17:58:13Z) - KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed
Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
arXiv Detail & Related papers (2022-06-03T17:27:04Z) - Efficient Model-based Multi-agent Reinforcement Learning via Optimistic
Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
- Learning Stable Koopman Embeddings [9.239657838690228]
We present a new data-driven method for learning stable models of nonlinear systems.
We prove that every discrete-time nonlinear contracting model can be learnt in our framework.
arXiv Detail & Related papers (2021-10-13T05:44:13Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Actor-Critic Reinforcement Learning for Control with Stability Guarantee [9.400585561458712]
Reinforcement Learning (RL) and its integration with deep learning have achieved impressive performance in various robotic control tasks.
However, stability is not guaranteed in model-free RL by solely using data.
We propose an actor-critic RL framework for control that can guarantee closed-loop stability by employing the classic Lyapunov method from control theory.
arXiv Detail & Related papers (2020-04-29T16:14:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.