Predicting the Initial Conditions of the Universe using a Deterministic Neural Network
- URL: http://arxiv.org/abs/2303.13056v2
- Date: Thu, 14 Dec 2023 02:48:36 GMT
- Title: Predicting the Initial Conditions of the Universe using a Deterministic Neural Network
- Authors: Vaibhav Jindal, Albert Liang, Aarti Singh, Shirley Ho, Drew Jamieson
- Abstract summary: Finding the initial conditions that led to the current state of the universe is challenging because it involves searching over an intractable input space of initial conditions.
Deep learning has emerged as a surrogate for N-body simulations by directly learning the mapping between the linear input of an N-body simulation and the final nonlinear output from the simulation.
In this work, we pioneer the use of a deterministic convolutional neural network for learning the reverse mapping and show that it accurately recovers the initial linear displacement field over a wide range of scales.
- Score: 10.158552381785078
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finding the initial conditions that led to the current state of the universe
is challenging because it involves searching over an intractable input space of
initial conditions, along with modeling their evolution via tools such as
N-body simulations which are computationally expensive. Recently, deep learning
has emerged as a surrogate for N-body simulations by directly learning the
mapping between the linear input of an N-body simulation and the final
nonlinear output from the simulation, significantly accelerating the forward
modeling. However, this still does not reduce the search space for initial
conditions. In this work, we pioneer the use of a deterministic convolutional
neural network for learning the reverse mapping and show that it accurately
recovers the initial linear displacement field over a wide range of scales
($<1$-$2\%$ error up to nearly $k\simeq0.8$-$0.9 \text{ Mpc}^{-1}h$), despite
the one-to-many mapping of the inverse problem (due to the divergent backward
trajectories at smaller scales). Specifically, we train a V-Net architecture,
which outputs the linear displacement of an N-body simulation, given the
nonlinear displacement at redshift $z=0$ and the cosmological parameters. The
results of our method suggest that a simple deterministic neural network is
sufficient for accurately approximating the initial linear states, potentially
obviating the need for the more complex and computationally demanding backward
modeling methods that were recently proposed.
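To make the setup concrete, the following is a minimal PyTorch sketch of the reverse mapping: a toy 3D encoder-decoder stands in for the paper's V-Net, taking the nonlinear displacement field at $z=0$ together with a broadcast cosmological parameter and returning the linear displacement field. The layer sizes, the single-parameter conditioning, and all names are illustrative assumptions, not the authors' configuration.

```python
# Toy stand-in for the paper's V-Net: nonlinear displacement at z=0
# (3 channels on a 3D grid) + one broadcast cosmological parameter
# -> predicted linear displacement (3 channels).
import torch
import torch.nn as nn

class ReverseMapper(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(4, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv3d(hidden, hidden, 3, stride=2, padding=1), nn.ReLU(),           # downsample
            nn.ConvTranspose3d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),  # upsample
            nn.Conv3d(hidden, 3, 3, padding=1),  # linear displacement field
        )

    def forward(self, nonlinear_disp, omega_m):
        # Broadcast the scalar cosmological parameter as a fourth channel.
        b, _, nx, ny, nz = nonlinear_disp.shape
        cosmo = omega_m.view(b, 1, 1, 1, 1).expand(b, 1, nx, ny, nz)
        return self.net(torch.cat([nonlinear_disp, cosmo], dim=1))

model = ReverseMapper()
x = torch.randn(2, 3, 16, 16, 16)       # nonlinear displacement at z=0
om = torch.tensor([0.30, 0.32])         # e.g. Omega_m for each sample
target = torch.randn(2, 3, 16, 16, 16)  # linear displacement (training label)
loss = nn.functional.mse_loss(model(x, om), target)
loss.backward()  # standard supervised training step
```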
Related papers
- Beyond Closure Models: Learning Chaotic-Systems via Physics-Informed Neural Operators [78.64101336150419]
Predicting the long-term behavior of chaotic systems is crucial for various applications such as climate modeling.
An alternative to such a fully-resolved simulation is to use a coarse grid and then correct its errors through a learned temporal model.
We propose an alternative end-to-end learning approach using a physics-informed neural operator (PINO) that overcomes the limitations of such correction models; a toy sketch of the coarse-grid-plus-correction baseline follows.
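As a toy illustration of that baseline (not of PINO itself), the sketch below rolls out a cheap coarse-grid "solver" and adds a learned correction at each step; the solver, network, and shapes are all stand-ins.

```python
# Coarse-grid rollout with a learned per-step correction: the baseline the
# PINO paper aims to go beyond. The "solver" and corrector are toy stand-ins.
import torch
import torch.nn as nn

def coarse_step(u):
    # crude advection-like update on a coarse periodic 1D grid
    return 0.9 * u + 0.1 * torch.roll(u, shifts=1, dims=-1)

corrector = nn.Sequential(
    nn.Conv1d(1, 16, 3, padding=1), nn.Tanh(),
    nn.Conv1d(16, 1, 3, padding=1),
)

u = torch.randn(4, 1, 64)   # batch of coarse 1D states
for _ in range(10):         # rollout: cheap step + learned error correction
    u = coarse_step(u) + corrector(u)
```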
arXiv Detail & Related papers (2024-08-09T17:05:45Z) - Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms [48.869199703062606]
A fundamental problem in quantum many-body physics is that of finding ground states of local Hamiltonians.
We introduce two approaches that achieve a constant sample complexity, independent of system size $n$, for learning ground state properties.
arXiv Detail & Related papers (2024-05-28T18:00:32Z) - Initialization Approach for Nonlinear State-Space Identification via the
Subspace Encoder Approach [0.0]
SUBNET has been developed to identify nonlinear state-space models from input-output data.
A state encoder function is introduced to reconstruct the current state from past input-output data.
This paper focuses on an initialisation of the subspace encoder approach using the Best Linear Approximation (BLA); a generic sketch of the encoder setup follows.
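As a rough illustration, here is a minimal subspace-encoder-style setup in PyTorch: an encoder maps a window of past inputs and outputs to a state estimate, which a neural state-space model then propagates. All dimensions, layer sizes, and names are illustrative assumptions; in particular, the paper initialises such networks from the BLA, whereas this sketch uses random weights.

```python
# Subspace-encoder-style setup (illustrative): an encoder reconstructs the
# state from a window of past inputs/outputs; a neural state-space model
# then propagates it. The paper initialises from the BLA; weights here
# are random.
import torch
import torch.nn as nn

n_past, nx = 5, 4  # encoder window length and state dimension (assumed)

encoder = nn.Sequential(nn.Linear(2 * n_past, 32), nn.Tanh(), nn.Linear(32, nx))
f = nn.Sequential(nn.Linear(nx + 1, 32), nn.Tanh(), nn.Linear(32, nx))  # state transition
h = nn.Linear(nx, 1)  # output map

u_past = torch.randn(8, n_past)   # past inputs (batch of 8 windows)
y_past = torch.randn(8, n_past)   # past outputs
x0 = encoder(torch.cat([u_past, y_past], dim=1))  # reconstructed state

u_now = torch.randn(8, 1)
x1 = f(torch.cat([x0, u_now], dim=1))  # one-step state update
y_hat = h(x1)                          # predicted output
```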
arXiv Detail & Related papers (2023-04-04T20:57:34Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks, as sketched below.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
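A minimal sketch of the spatial decomposition idea, assuming a factor-2 interleaving per axis; the per-subgrid "solvers" are stand-ins, and none of this is NeuralStagger's actual implementation:

```python
# Spatial decomposition sketch: split a fine 2D field into interleaved
# coarse subgrids (factor 2 per axis), process each subtask independently,
# then re-interleave. Tensor plumbing only; not NeuralStagger's code.
import torch

def decompose(u, s=2):
    # (H, W) -> s*s coarse fields of shape (H//s, W//s)
    return [u[i::s, j::s] for i in range(s) for j in range(s)]

def reassemble(parts, s=2):
    h, w = parts[0].shape
    u = torch.empty(h * s, w * s)
    for idx, p in enumerate(parts):
        u[idx // s::s, idx % s::s] = p
    return u

u = torch.randn(64, 64)
parts = decompose(u)                      # four coarser-resolution subtasks
parts = [p * 1.0 for p in parts]          # stand-in for per-subgrid solvers
assert torch.equal(reassemble(parts), u)  # re-interleaving is lossless
```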
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Field Level Neural Network Emulator for Cosmological N-body Simulations [7.051595217991437]
We build a field level emulator for cosmic structure formation that is accurate in the nonlinear regime.
We use two convolutional neural networks trained to output the nonlinear displacements and velocities of N-body simulation particles, as sketched below.
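A minimal sketch of the two-network idea, with layer counts and shapes as illustrative assumptions rather than the paper's architecture:

```python
# Two-network emulator sketch: one CNN predicts nonlinear displacements,
# another predicts velocities, both from the linear input field.
# Layers and shapes are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

def make_cnn():
    return nn.Sequential(nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(16, 3, 3, padding=1))

disp_net, vel_net = make_cnn(), make_cnn()
linear_disp = torch.randn(1, 3, 16, 16, 16)  # linear input field
disp = disp_net(linear_disp)                 # predicted nonlinear displacements
vel = vel_net(linear_disp)                   # predicted velocities
```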
arXiv Detail & Related papers (2022-06-09T16:21:57Z) - Simple lessons from complex learning: what a neural network model learns
about cosmic structure formation [7.270598539996841]
We train a neural network model to predict the full phase space evolution of cosmological N-body simulations.
Our model achieves percent-level accuracy at nonlinear scales of $k \sim 1\,\mathrm{Mpc}^{-1}\,h$, a significant improvement over COLA; the sketch below shows one common way such scale-dependent errors are quantified.
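A hedged sketch of measuring scale-dependent accuracy: bin the 3D power spectra of the true and predicted fields in $|k|$ and compare them per bin. The grid size, binning, and fields here are illustrative stand-ins.

```python
# Measuring scale-dependent accuracy (illustrative): bin the 3D power
# spectra of truth and prediction in |k| and compare per bin.
import numpy as np

def power_spectrum(field, n_bins=16):
    n = field.shape[0]
    pk3d = np.abs(np.fft.fftn(field)) ** 2
    freqs = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    bins = np.linspace(0, kmag.max(), n_bins + 1)
    which = np.digitize(kmag, bins)
    pk = np.array([pk3d.ravel()[which == i].mean() for i in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), pk

truth = np.random.randn(32, 32, 32)                # stand-in for the true field
pred = truth + 0.05 * np.random.randn(32, 32, 32)  # stand-in for a prediction
k, p_true = power_spectrum(truth)
_, p_pred = power_spectrum(pred)
print(np.abs(p_pred / p_true - 1))  # fractional power-spectrum error per k bin
```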
arXiv Detail & Related papers (2022-06-09T15:41:09Z) - The Neural Network shifted-Proper Orthogonal Decomposition: a Machine
Learning Approach for Non-linear Reduction of Hyperbolic Equations [0.0]
In this work we approach the problem of automatically detecting the correct pre-processing transformation in a statistical learning framework.
The purely data-driven method allowed us to generalise the existing approaches of linear subspace manipulation to non-linear hyperbolic problems with unknown advection fields.
The proposed algorithm has been validated against simple test cases to benchmark its performance and later successfully applied to a multiphase simulation; the sketch below illustrates why shift-alignment helps.
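A small NumPy illustration of the shift idea behind shifted-POD: a traveling pulse needs many linear (POD) modes, but aligning snapshots by the advection shift makes them essentially rank one. The shift is assumed known here, whereas the paper learns the transformation from data.

```python
# Why shift-alignment helps (illustrative): a traveling pulse needs many
# POD/SVD modes, but de-shifted snapshots are essentially rank one.
# The shift is assumed known here; the paper learns the transformation.
import numpy as np

x = np.linspace(0, 1, 256, endpoint=False)
times = np.linspace(0, 1, 50)
snapshots = np.stack([np.exp(-200 * (x - 0.2 - 0.5 * t) ** 2)
                      for t in times])  # pulse advected to the right

aligned = np.stack([np.roll(s, -int(round(0.5 * t * 256)))
                    for s, t in zip(snapshots, times)])

s_raw = np.linalg.svd(snapshots, compute_uv=False)
s_aligned = np.linalg.svd(aligned, compute_uv=False)
print(s_raw[:5] / s_raw[0])          # slow decay: many modes needed
print(s_aligned[:5] / s_aligned[0])  # sharp decay after de-shifting
```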
arXiv Detail & Related papers (2021-08-14T15:13:35Z) - Fixed Depth Hamiltonian Simulation via Cartan Decomposition [59.20417091220753]
We present a constructive algorithm for generating quantum circuits with time-independent depth.
We highlight our algorithm for special classes of models, including Anderson localization in the one-dimensional transverse-field XY model.
In addition to providing exact circuits for a broad set of spin and fermionic models, our algorithm provides broad analytic and numerical insight into optimal Hamiltonian simulations.
arXiv Detail & Related papers (2021-04-01T19:06:00Z) - Improved Initialization of State-Space Artificial Neural Networks [0.0]
The identification of black-box nonlinear state-space models requires a flexible representation of the state and output equation.
This paper introduces an improved approach for nonlinear state-space models represented as a recurrent artificial neural network.
arXiv Detail & Related papers (2021-03-26T15:16:08Z) - Measuring Model Complexity of Neural Networks with Curve Activation
Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation functions.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L^1$ and $L^2$ regularizations suppress the increase of model complexity; a toy sketch of piecewise-linear approximation of a curve activation follows.
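A toy probe of the intuition: approximate a curve activation by linear pieces and count how many a given tolerance requires, so sharper curves register higher complexity. The greedy scheme, tolerance, and grid are illustrative assumptions, not LANN's actual measure.

```python
# Toy complexity probe: count how many linear pieces are needed to track a
# curve activation within a tolerance. Greedy scheme, tolerance, and grid
# are illustrative assumptions, not LANN's actual measure.
import numpy as np

def pieces_needed(f, lo=-4.0, hi=4.0, tol=0.01):
    xs = np.linspace(lo, hi, 2001)
    ys = f(xs)
    pieces, start = 1, 0
    for end in range(2, len(xs)):
        # chord from xs[start] to xs[end], compared with f on the segment
        chord = ys[start] + (ys[end] - ys[start]) * (
            (xs[start:end + 1] - xs[start]) / (xs[end] - xs[start]))
        if np.max(np.abs(chord - ys[start:end + 1])) > tol:
            pieces += 1         # error exceeded: close segment, start a new one
            start = end - 1
    return pieces

print(pieces_needed(np.tanh))                   # smoother curve, fewer pieces
print(pieces_needed(lambda x: np.tanh(3 * x)))  # sharper curve, more pieces
```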
arXiv Detail & Related papers (2020-06-16T07:38:06Z) - Quantum Algorithms for Simulating the Lattice Schwinger Model [63.18141027763459]
We give scalable, explicit digital quantum algorithms to simulate the lattice Schwinger model in both NISQ and fault-tolerant settings.
In lattice units, we find a Schwinger model on $N/2$ physical sites with coupling constant $x^{-1/2}$ and electric field cutoff $x^{-1/2}\Lambda$.
We estimate observables which we cost in both the NISQ and fault-tolerant settings by assuming a simple target observable---the mean pair density.
arXiv Detail & Related papers (2020-02-25T19:18:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.