Learning effective physical laws for generating cosmological
hydrodynamics with Lagrangian Deep Learning
- URL: http://arxiv.org/abs/2010.02926v1
- Date: Tue, 6 Oct 2020 18:00:00 GMT
- Title: Learning effective physical laws for generating cosmological
hydrodynamics with Lagrangian Deep Learning
- Authors: Biwei Dai and Uros Seljak
- Abstract summary: We propose Lagrangian Deep Learning to learn outputs of cosmological hydrodynamical simulations.
The model uses layers of Lagrangian displacements of particles describing the observables to learn the effective physical laws.
The total number of learned parameters is only of order 10, and they can be viewed as effective theory parameters.
- Score: 7.6146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of generative models is to learn the intricate relations between the
data to create new simulated data, but current approaches fail in very high
dimensions. When the true data generating process is based on physical
processes these impose symmetries and constraints, and the generative model can
be created by learning an effective description of the underlying physics,
which enables scaling of the generative model to very high dimensions. In this
work we propose Lagrangian Deep Learning (LDL) for this purpose, applying it to
learn outputs of cosmological hydrodynamical simulations. The model uses layers
of Lagrangian displacements of particles describing the observables to learn
the effective physical laws. The displacements are modeled as the gradient of
an effective potential, which explicitly satisfies the translational and
rotational invariance. The total number of learned parameters is only of order
10, and they can be viewed as effective theory parameters. We combine N-body
solver FastPM with LDL and apply them to a wide range of cosmological outputs,
from the dark matter to the stellar maps, gas density and temperature. The
computational cost of LDL is nearly four orders of magnitude lower than that of
the full hydrodynamical simulations, yet it outperforms them at the same resolution.
We achieve this with only of order 10 layers from the initial conditions to the
final output, in contrast to typical cosmological simulations with thousands of
time steps. This opens up the possibility of analyzing cosmological
observations entirely within this framework, without the need for large
dark-matter simulations.
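The displacement layer described above can be illustrated with a short sketch. This is not the authors' code: the mesh size, the band-pass filter form, and the parameter names (`alpha`, `gamma`, `kl`, `ks`, `n`) are assumptions loosely following the paper's description of a density-sourced effective potential whose gradient displaces the particles.

```python
import numpy as np

def paint_density(positions, ngrid):
    """Nearest-grid-point mass assignment onto a periodic 3D mesh.

    positions: (N, 3) particle coordinates in the unit box [0, 1)^3.
    Returns rho / mean(rho), i.e. one plus the overdensity.
    """
    idx = np.floor(positions * ngrid).astype(int) % ngrid
    rho = np.zeros((ngrid,) * 3)
    np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return rho / rho.mean()

def ldl_layer(positions, ngrid=32, alpha=0.01, gamma=1.0, kl=0.5, ks=8.0, n=1.0):
    """One LDL-style layer (illustrative, hypothetical parameterization):
    displace particles along the gradient of an effective potential sourced
    by a learnable power of the density and filtered in Fourier space.
    Because the filter depends on |k| only, the construction respects
    translational and rotational invariance."""
    rho = paint_density(positions, ngrid)
    source_k = np.fft.rfftn(rho ** gamma)
    # integer wavenumbers on the rfft grid (box size normalized to 1)
    kvec = np.meshgrid(np.fft.fftfreq(ngrid) * ngrid,
                       np.fft.fftfreq(ngrid) * ngrid,
                       np.fft.rfftfreq(ngrid) * ngrid, indexing="ij")
    kk = np.sqrt(sum(k ** 2 for k in kvec))
    kk_safe = np.maximum(kk, 1e-12)
    # band-pass Green's function with learnable shape (an assumption)
    filt = np.exp(-(kl / kk_safe) ** 2) * np.exp(-(kk / ks) ** 2) * kk_safe ** n
    potential = np.fft.irfftn(source_k * filt, s=(ngrid,) * 3)
    # finite-difference gradient of the potential on the mesh
    grad = np.stack(np.gradient(potential, 1.0 / ngrid), axis=-1)
    idx = np.floor(positions * ngrid).astype(int) % ngrid
    displacement = grad[idx[:, 0], idx[:, 1], idx[:, 2]]
    return (positions - alpha * displacement) % 1.0  # periodic wrap
```

Stacking a handful of such layers, each with its own `(alpha, gamma, kl, ks, n)`, yields of order 10 trainable parameters in total, consistent with the parameter count quoted in the abstract.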
Related papers
- Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models [66.1595537904019]
Large language models (LLMs) can act as gradient priors in a zero-shot setting.
We introduce LM-GC, a novel method that integrates LLMs with arithmetic coding.
arXiv Detail & Related papers (2024-09-26T13:38:33Z)
- CHARM: Creating Halos with Auto-Regressive Multi-stage networks [1.6987257996124416]
CHARM is a novel method for creating mock halo catalogs.
We show that the mock halo catalogs and painted galaxy catalogs have the same statistical properties as obtained from $N$-body simulations in both real space and redshift space.
arXiv Detail & Related papers (2024-09-13T18:00:06Z)
- Towards Complex Dynamic Physics System Simulation with Graph Neural ODEs [75.7104463046767]
This paper proposes GNSTODE, a novel learning-based simulation model that characterizes the varying spatial and temporal dependencies in particle systems.
We empirically evaluate GNSTODE's simulation performance on two real-world particle systems, Gravity and Coulomb.
arXiv Detail & Related papers (2023-05-21T03:51:03Z)
- Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS is able to reduce the inference time up to 18 times compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z)
- Fast and realistic large-scale structure from machine-learning-augmented random field simulations [0.0]
We train a machine learning model to transform projected lognormal dark matter density fields to more realistic dark matter maps.
We demonstrate the performance of our model comparing various statistical tests with different field resolutions, redshifts and cosmological parameters.
arXiv Detail & Related papers (2022-05-16T18:00:01Z)
- BIGDML: Towards Exact Machine Learning Force Fields for Materials [55.944221055171276]
Machine-learning force fields (MLFF) should be accurate, computationally and data efficient, and applicable to molecules, materials, and interfaces thereof.
Here, we introduce the Bravais-Inspired Gradient-Domain Machine Learning approach and demonstrate its ability to construct reliable force fields using a training set with just 10-200 atoms.
arXiv Detail & Related papers (2021-06-08T10:14:57Z)
- Learning 3D Granular Flow Simulations [6.308272531414633]
We present a Graph Neural Networks approach towards accurate modeling of complex 3D granular flow simulation processes created by the discrete element method LIGGGHTS.
We discuss how to implement Graph Neural Networks that deal with 3D objects, boundary conditions, particle-particle, and particle-boundary interactions.
arXiv Detail & Related papers (2021-05-04T17:27:59Z)
- Machine learning for rapid discovery of laminar flow channel wall modifications that enhance heat transfer [56.34005280792013]
We present a combination of accurate numerical simulations of arbitrary, flat, and non-flat channels and machine learning models predicting drag coefficient and Stanton number.
We show that convolutional neural networks (CNN) can accurately predict the target properties at a fraction of the time of numerical simulations.
arXiv Detail & Related papers (2021-01-19T16:14:02Z)
- Fast and Accurate Non-Linear Predictions of Universes with Deep Learning [21.218297581239664]
We build a V-Net based model that transforms fast linear predictions into fully nonlinear predictions from numerical simulations.
Our NN model learns to emulate the simulations down to small scales and is both faster and more accurate than the current state-of-the-art approximate methods.
arXiv Detail & Related papers (2020-12-01T03:30:37Z)
- AI-assisted super-resolution cosmological simulations [9.59904742274332]
We develop a neural network to learn from high-resolution (HR) image data, and then make accurate super-resolution (SR) versions of different low-resolution (LR) images.
We are able to enhance the simulation resolution by generating 512 times more particles and predicting their displacement from the initial positions.
Our model learns from only 16 pairs of small-volume LR-HR simulations, and is then able to generate SR simulations that successfully reproduce the HR matter power spectrum to percent level up to $16\,h^{-1}\mathrm{Mpc}$, and the HR halo mass function to within $10\%$.
arXiv Detail & Related papers (2020-10-13T18:04:24Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.