Active Learning for Computationally Efficient Distribution of Binary
Evolution Simulations
- URL: http://arxiv.org/abs/2203.16683v1
- Date: Wed, 30 Mar 2022 21:36:32 GMT
- Title: Active Learning for Computationally Efficient Distribution of Binary
Evolution Simulations
- Authors: Kyle Akira Rocha, Jeff J. Andrews, Christopher P. L. Berry, Zoheyr
Doctor, Pablo Marchant, Vicky Kalogera, Scott Coughlin, Simone S. Bavera,
Aaron Dotter, Tassos Fragos, Konstantinos Kovlakas, Devina Misra, Zepei Xing,
Emmanouil Zapartas
- Abstract summary: We present a new active learning algorithm, psy-cris, which uses machine learning in the data-gathering process to adaptively and iteratively select targeted simulations to run.
We test psy-cris on a toy problem and find the resulting training sets require fewer simulations for accurate classification and regression than either regular or randomly sampled grids.
- Score: 0.19359975080269876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Binary stars undergo a variety of interactions and evolutionary
phases that are critical for predicting and explaining their observed
properties. Binary population synthesis with full stellar-structure and
evolution simulations is computationally expensive, requiring a large number
of mass-transfer sequences.
The recently developed binary population synthesis code POSYDON incorporates
grids of MESA binary star simulations, which are then interpolated to model
large-scale populations of massive binaries. The traditional method of
computing a high-density rectilinear grid of simulations does not scale to
higher-dimensional grids that account for a range of metallicities, rotations,
and eccentricities. We present a new active learning algorithm, psy-cris,
which uses
machine learning in the data-gathering process to adaptively and iteratively
select targeted simulations to run, resulting in a custom, high-performance
training set. We test psy-cris on a toy problem and find the resulting training
sets require fewer simulations for accurate classification and regression than
either regular or randomly sampled grids. We further apply psy-cris to the
target problem of building a dynamic grid of MESA simulations, and we
demonstrate that, even without fine-tuning, a simulation set of only $\sim 1/4$
the size of a rectilinear grid is sufficient to achieve the same classification
accuracy. We anticipate further gains when algorithmic parameters are optimized
for the targeted application. We find that optimizing for classification alone
may degrade regression performance, and vice versa. Lowering the
computational cost of producing grids will enable future versions of POSYDON to
cover more input parameters while preserving interpolation accuracies.
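To make the data-gathering loop concrete, the following is a minimal Python sketch of the iterate-train-propose-simulate cycle the abstract describes. The acquisition score (classification entropy plus regression-tree disagreement), the random-forest models, and the `run_simulation` toy stub are illustrative assumptions for this sketch, not the actual psy-cris implementation.

```python
# Minimal active-learning loop in the spirit of the abstract: iteratively run
# only the simulations whose outcomes the current models are least sure of.
# The acquisition rule, model choices, and `run_simulation` stub are
# illustrative assumptions, NOT the psy-cris implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

def run_simulation(x):
    """Stand-in for an expensive binary-evolution run: returns a final-state
    class and a scalar outcome (hypothetical toy problem)."""
    label = int(x[0] + x[1] > 1.0)       # e.g. stable vs. unstable mass transfer
    value = np.sin(3 * x[0]) * x[1]      # e.g. some final orbital property
    return label, value

# Seed the training set with a small random batch.
X = rng.uniform(0, 1, size=(16, 2))
y_cls, y_reg = map(np.array, zip(*[run_simulation(x) for x in X]))

for iteration in range(10):
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y_cls)
    reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y_reg)

    # Score a pool of candidate inputs: classification entropy (proximity to a
    # class boundary) plus spread across regression trees (model disagreement).
    pool = rng.uniform(0, 1, size=(2048, 2))
    proba = clf.predict_proba(pool)
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    tree_preds = np.stack([t.predict(pool) for t in reg.estimators_])
    disagreement = tree_preds.std(axis=0)
    score = (entropy / (entropy.max() + 1e-12)
             + disagreement / (disagreement.max() + 1e-12))

    # Run only the top-k most informative simulations this iteration.
    best = pool[np.argsort(score)[-8:]]
    new_cls, new_reg = map(np.array, zip(*[run_simulation(x) for x in best]))
    X = np.vstack([X, best])
    y_cls = np.concatenate([y_cls, new_cls])
    y_reg = np.concatenate([y_reg, new_reg])

print(f"final training set: {len(X)} simulations")
```

In a real setting, `run_simulation` would dispatch a MESA binary run, and one would stop the loop once held-out classification and regression accuracy plateau.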
Related papers
- OPUS: Occupancy Prediction Using a Sparse Set [64.60854562502523]
We present a framework to simultaneously predict occupied locations and classes using a set of learnable queries.
OPUS incorporates a suite of non-trivial strategies to enhance model performance.
Our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at nearly 2x the FPS, while our heaviest model surpasses the previous best results by 6.1 RayIoU.
arXiv Detail & Related papers (2024-09-14T07:44:22Z)
- Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
The Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the amount of additional data required.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z)
- Transfer learning for atomistic simulations using GNNs and kernel mean embeddings [24.560340485988128]
We propose a transfer learning algorithm that leverages the ability of graph neural networks (GNNs) to represent chemical environments together with kernel mean embeddings.
We test our approach on a series of realistic datasets of increasing complexity, showing excellent generalization and transferability performance.
arXiv Detail & Related papers (2023-06-02T14:58:16Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to reduce long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Surrogate Neural Networks for Efficient Simulation-based Trajectory Planning Optimization [28.292234483886947]
This paper presents a novel methodology that uses surrogate models in the form of neural networks to reduce the computation time of simulation-based optimization of a reference trajectory.
We find a reference trajectory that performs 74% better than the nominal one, and the numerical results clearly show a substantial reduction in computation time for designing future trajectories.
arXiv Detail & Related papers (2023-03-30T15:44:30Z)
- Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS reduces inference time by up to 18x compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Simulating Liquids with Graph Networks [25.013244956897832]
We investigate graph neural networks (GNNs) for learning fluid dynamics.
Our results indicate that learning models, such as GNNs, fail to learn the exact underlying dynamics unless the training set is devoid of any other problem-specific correlations.
arXiv Detail & Related papers (2022-03-14T15:39:27Z)
- A Graph Neural Network Framework for Grid-Based Simulation [0.9137554315375922]
We propose a graph neural network (GNN) framework to build a surrogate feed-forward model which replaces simulation runs to accelerate the optimization process.
Our GNN framework shows great potential for well-related subsurface optimization, including oil and gas production as well as carbon capture and sequestration (CCS).
arXiv Detail & Related papers (2022-02-05T22:48:16Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep active learning framework for simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP-based models (see the sketch after this list).
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves the state of the art for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
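As a closing illustration for the last entry above, here is a schematic of a latent-space acquisition score in the spirit of Latent Information Gain: a candidate is valued by how far conditioning on it would move the model's latent distribution. The toy `encode` function, the diagonal-Gaussian latent, and the crude predictive sampler are hypothetical stand-ins for a trained neural-process model, not the published formulation.

```python
# Schematic latent-information-gain acquisition: score a candidate x by how
# much the latent distribution shifts when the context is augmented with it.
# `encode` and the predictive sampler are toy stand-ins for a trained
# neural-process encoder, NOT the published INP/LIG model.
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def encode(X, y):
    """Toy amortized encoder: maps a context set to a Gaussian over a 2-d
    latent. A real neural process would use a learned network here."""
    feats = np.concatenate([X.mean(axis=0), [y.mean()]])
    mu = np.tanh(feats[:2] + feats[2])
    var = np.full(2, 1.0 / (1.0 + len(X)))   # more context -> tighter latent
    return mu, var

def latent_information_gain(x_cand, X_ctx, y_ctx, n_samples=32):
    """Average latent shift over imputed outcomes for the candidate."""
    mu0, var0 = encode(X_ctx, y_ctx)
    gains = []
    for _ in range(n_samples):
        # Impute a plausible outcome from a crude predictive distribution.
        y_sim = y_ctx.mean() + y_ctx.std() * rng.standard_normal()
        mu1, var1 = encode(np.vstack([X_ctx, x_cand]), np.append(y_ctx, y_sim))
        gains.append(gaussian_kl(mu1, var1, mu0, var0))
    return float(np.mean(gains))

# Rank a candidate pool against a small context set on a toy function.
X_ctx = rng.uniform(0, 1, size=(20, 2))
y_ctx = np.sin(X_ctx @ np.array([3.0, 1.0]))
pool = rng.uniform(0, 1, size=(256, 2))
scores = [latent_information_gain(x, X_ctx, y_ctx) for x in pool]
print("best candidate:", pool[int(np.argmax(scores))])
```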