Fast, high-fidelity Lyman $\alpha$ forests with convolutional neural
networks
- URL: http://arxiv.org/abs/2106.12662v1
- Date: Wed, 23 Jun 2021 21:41:47 GMT
- Title: Fast, high-fidelity Lyman $\alpha$ forests with convolutional neural
networks
- Authors: Peter Harrington, Mustafa Mustafa, Max Dornfest, Benjamin Horowitz,
Zarija Lukić
- Abstract summary: We train a convolutional neural network to reconstruct the baryon hydrodynamic variables on scales relevant to the Lyman-$\alpha$ (Ly$\alpha$) forest.
Our method enables rapid estimation of these fields at a resolution of $\sim$20kpc, and captures the statistics of the Ly$\alpha$ forest with much greater accuracy than existing approximations.
- Score: 1.0499611180329804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Full-physics cosmological simulations are powerful tools for studying the
formation and evolution of structure in the universe but require extreme
computational resources. Here, we train a convolutional neural network to use a
cheaper N-body-only simulation to reconstruct the baryon hydrodynamic variables
(density, temperature, and velocity) on scales relevant to the Lyman-$\alpha$
(Ly$\alpha$) forest, using data from Nyx simulations. We show that our method
enables rapid estimation of these fields at a resolution of $\sim$20kpc, and
captures the statistics of the Ly$\alpha$ forest with much greater accuracy
than existing approximations. Because our model is fully-convolutional, we can
train on smaller simulation boxes and deploy on much larger ones, enabling
substantial computational savings. Furthermore, as our method produces an
approximation for the hydrodynamic fields instead of Ly$\alpha$ flux directly,
it is not limited to a particular choice of ionizing background or mean
transmitted flux.
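To make the fully-convolutional point concrete, here is a minimal sketch in PyTorch of a 3D fully-convolutional mapping from N-body inputs to baryon hydrodynamic fields. This is not the authors' Nyx-trained architecture; the channel counts, widths, and layer choices are illustrative assumptions.

```python
# Minimal sketch (PyTorch) of a fully-convolutional 3D network mapping
# N-body inputs (assumed: dark matter overdensity + 3 velocity components)
# to baryon hydrodynamic fields (assumed: density, temperature, 3 velocity
# components). Sizes and layers are illustrative, not the paper's model.
import torch
import torch.nn as nn

class HydroFCN(nn.Module):
    def __init__(self, in_ch=4, out_ch=5, width=32):
        super().__init__()
        def conv(ci, co):
            # circular padding reflects the periodic boundaries of simulation boxes
            return nn.Conv3d(ci, co, kernel_size=3, padding=1, padding_mode="circular")
        self.net = nn.Sequential(
            conv(in_ch, width), nn.ReLU(),
            conv(width, width), nn.ReLU(),
            conv(width, out_ch),
        )

    def forward(self, x):
        # x: (batch, in_ch, Nx, Ny, Nz); output keeps the same spatial shape
        return self.net(x)

model = HydroFCN()
# Train on small sub-volumes (cheap to simulate and to fit in memory)...
x_train = torch.randn(8, 4, 64, 64, 64)
y_train = model(x_train)                  # (8, 5, 64, 64, 64)
# ...then deploy the same weights on a much larger box.
x_big = torch.randn(1, 4, 128, 128, 128)
with torch.no_grad():
    y_big = model(x_big)                  # (1, 5, 128, 128, 128)
```

Because the network has no fully-connected layers, the same weights accept inputs of any spatial size, which is what allows training on small boxes and deployment on much larger ones. And since the output is the hydrodynamic state rather than Ly$\alpha$ flux, the flux can be computed afterwards for any assumed ionizing background, e.g. by rescaling optical depths to a chosen mean transmitted flux, a common post-processing step.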
Related papers
- Beyond Closure Models: Learning Chaotic-Systems via Physics-Informed Neural Operators [78.64101336150419]
Predicting the long-term behavior of chaotic systems is crucial for various applications such as climate modeling.
An alternative approach to such a fully-resolved simulation is to use a coarse grid and then correct its errors with a learned closure model.
We propose an alternative end-to-end learning approach using a physics-informed neural operator (PINO) that overcomes this limitation.
arXiv Detail & Related papers (2024-08-09T17:05:45Z) - Neural-g: A Deep Learning Framework for Mixing Density Estimation [16.464806944964003]
Mixing (or prior) density estimation is an important problem in machine learning and statistics.
We propose neural-$g$, a new neural network-based estimator for $g$-modeling.
arXiv Detail & Related papers (2024-06-10T03:00:28Z) - SE(3)-Stochastic Flow Matching for Protein Backbone Generation [54.951832422425454]
We introduce FoldFlow, a series of novel generative models of increasing modeling power based on the flow-matching paradigm over $3\mathrm{D}$ rigid motions.
Our family of FoldFlow generative models offers several advantages over previous approaches to the generative modeling of proteins.
arXiv Detail & Related papers (2023-10-03T19:24:24Z) - Simulation-free Schr\"odinger bridges via score and flow matching [89.4231207928885]
We present simulation-free score and flow matching ([SF]$^2$M).
Our method generalizes both the score-matching loss used in the training of diffusion models and the recently proposed flow matching loss used in the training of continuous flows.
Notably, [SF]$^2$M is the first method to accurately model cell dynamics in high dimensions and can recover known gene regulatory networks from simulated data.
arXiv Detail & Related papers (2023-07-07T15:42:35Z) - Fast emulation of cosmological density fields based on dimensionality
reduction and supervised machine-learning [0.0]
We show that it is possible to perform fast dark matter density field emulations with competitive accuracy using simple machine-learning approaches.
New density cubes for different cosmological parameters can be estimated without relying directly on new N-body simulations.
arXiv Detail & Related papers (2023-04-12T18:29:26Z) - Predicting the Initial Conditions of the Universe using a Deterministic
Neural Network [10.158552381785078]
Finding the initial conditions that led to the current state of the universe is challenging because it involves searching over an intractable input space of initial conditions.
Deep learning has emerged as a surrogate for N-body simulations by directly learning the mapping between the linear input of an N-body simulation and the final nonlinear output from the simulation.
In this work, we pioneer the use of a deterministic convolutional neural network for learning the reverse mapping and show that it accurately recovers the initial linear displacement field over a wide range of scales.
arXiv Detail & Related papers (2023-03-23T06:04:36Z) - Learning Large-scale Subsurface Simulations with a Hybrid Graph Network
Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS is able to reduce the inference time up to 18 times compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Fast and realistic large-scale structure from machine-learning-augmented
random field simulations [0.0]
We train a machine learning model to transform projected lognormal dark matter density fields to more realistic dark matter maps.
We demonstrate the performance of our model comparing various statistical tests with different field resolutions, redshifts and cosmological parameters.
arXiv Detail & Related papers (2022-05-16T18:00:01Z) - Minimax Optimal Quantization of Linear Models: Information-Theoretic
Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z) - Predicting the near-wall region of turbulence through convolutional
neural networks [0.0]
A neural-network-based approach to predict the near-wall behaviour in a turbulent open channel flow is investigated.
The fully-convolutional network (FCN) is trained to predict the two-dimensional velocity-fluctuation fields at $y^+_{\rm target}$.
The FCN can take advantage of the self-similarity in the logarithmic region of the flow and predict the velocity-fluctuation fields at $y^+ = 50$.
arXiv Detail & Related papers (2021-07-15T13:58:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.