Baryon acoustic oscillations reconstruction using convolutional neural
networks
- URL: http://arxiv.org/abs/2002.10218v3
- Date: Thu, 3 Dec 2020 08:41:49 GMT
- Title: Baryon acoustic oscillations reconstruction using convolutional neural
networks
- Authors: Tian-Xiang Mao, Jie Wang, Baojiu Li, Yan-Chuan Cai, Bridget Falck,
Mark Neyrinck and Alex Szalay
- Abstract summary: We propose a new scheme to reconstruct the baryon acoustic oscillations (BAO) signal, which contains key cosmological information, based on deep convolutional neural networks (CNN).
We find that the network trained in one cosmology is able to reconstruct BAO peaks in the others, i.e. recovering information lost to non-linearity independent of cosmology.
- Score: 1.9262162668141078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new scheme to reconstruct the baryon acoustic oscillations (BAO)
signal, which contains key cosmological information, based on deep
convolutional neural networks (CNN). Trained with almost no fine-tuning, the
network can recover large-scale modes accurately in the test set: the
correlation coefficient between the true and reconstructed initial conditions
reaches $90\%$ at $k\leq 0.2 h\mathrm{Mpc}^{-1}$, which can lead to significant
improvements of the BAO signal-to-noise ratio down to
$k\simeq0.4h\mathrm{Mpc}^{-1}$. Since this new scheme is based on the
configuration-space density field in sub-boxes, it is local and less affected
by survey boundaries than the standard reconstruction method, as our tests
confirm. We find that the network trained in one cosmology is able to
reconstruct BAO peaks in the others, i.e. recovering information lost to
non-linearity independent of cosmology. The error in the recovered BAO peak
positions is far smaller than the shift caused by the difference between the
cosmological models used for training and testing, suggesting that different
models can be distinguished efficiently in our scheme. Our scheme thus provides
a promising new way to extract cosmological information from ongoing and future
large galaxy surveys.
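To illustrate the quoted metric, here is a minimal NumPy sketch (not the authors' pipeline; the grid size, binning, and box size are assumptions) of the scale-dependent cross-correlation coefficient $r(k) = P_{ab}(k)/\sqrt{P_{aa}(k)\,P_{bb}(k)}$ between a true and a reconstructed density field, the quantity reported as reaching $90\%$ at $k\leq 0.2 h\mathrm{Mpc}^{-1}$:

```python
import numpy as np

def cross_correlation_coefficient(field_a, field_b, box_size, n_bins=16):
    """Scale-dependent correlation coefficient r(k) between two 3D fields
    on a periodic cubic grid of side box_size (e.g. in Mpc/h).

    r(k) = P_ab(k) / sqrt(P_aa(k) * P_bb(k)), averaged in |k| bins.
    Rough sketch: hermitian double-counting in the rfftn half-spectrum
    is ignored, which largely cancels in the ratio.
    """
    n = field_a.shape[0]
    fa = np.fft.rfftn(field_a)
    fb = np.fft.rfftn(field_b)

    # |k| grid: integer mode numbers times the fundamental mode 2*pi/L
    kf = 2.0 * np.pi / box_size
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf       # full axis
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf      # half axis (rfftn)
    kmag = np.sqrt(kx[:, None, None]**2
                   + kx[None, :, None]**2
                   + kz[None, None, :]**2)

    p_ab = (fa * np.conj(fb)).real   # cross power (per mode)
    p_aa = np.abs(fa)**2             # auto power of field_a
    p_bb = np.abs(fb)**2             # auto power of field_b

    # Spherical |k| bins from the fundamental mode to the corner mode
    bins = np.linspace(kf, kmag.max(), n_bins + 1)
    idx = np.digitize(kmag.ravel(), bins)
    r = np.full(n_bins, np.nan)
    for i in range(1, n_bins + 1):
        sel = idx == i
        if sel.any():
            r[i - 1] = p_ab.ravel()[sel].sum() / np.sqrt(
                p_aa.ravel()[sel].sum() * p_bb.ravel()[sel].sum())
    k_centers = 0.5 * (bins[:-1] + bins[1:])
    return k_centers, r
```

A field compared with itself gives r(k) = 1 on all scales, while two independent fields give r(k) near zero; a trained reconstruction network would sit between these limits, ideally near 1 at low k.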
Related papers
- Stabilizing RNN Gradients through Pre-training [3.335932527835653]
Learning theory proposes preventing the gradient from growing exponentially with depth or time, in order to stabilize and improve training.
We extend known stability theories to encompass a broader family of deep recurrent networks, requiring minimal assumptions on data and parameter distribution.
We propose a new approach to mitigate this issue, which consists of giving a weight of one half to the time and depth contributions to the gradient.
arXiv Detail & Related papers (2023-08-23T11:48:35Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- PUERT: Probabilistic Under-sampling and Explicable Reconstruction Network for CS-MRI [47.24613772568027]
Compressed Sensing MRI aims at reconstructing de-aliased images from sub-Nyquist sampling k-space data to accelerate MR Imaging.
We propose a novel end-to-end Probabilistic Under-sampling and Explicable Reconstruction neTwork, dubbed PUERT, to jointly optimize the sampling pattern and the reconstruction network.
Experiments on two widely used MRI datasets demonstrate that our proposed PUERT achieves state-of-the-art results in terms of both quantitative metrics and visual quality.
arXiv Detail & Related papers (2022-04-24T04:23:57Z)
- Direction of Arrival Estimation of Sound Sources Using Icosahedral CNNs [10.089520556398574]
We present a new model for Direction of Arrival (DOA) estimation of sound sources based on an Icosahedral Convolutional Neural Network (CNN).
This icosahedral CNN is equivariant to the 60 rotational symmetries of the icosahedron, which represent a good approximation of the continuous space of spherical rotations.
We prove that using models that fit the equivariances of the problem allows us to outperform other state-of-the-art models with a lower computational cost and more robustness.
arXiv Detail & Related papers (2022-03-31T10:52:19Z)
- Probabilistic Mass Mapping with Neural Score Estimation [4.079848600120986]
We introduce a novel methodology for efficient sampling of the high-dimensional Bayesian posterior of the weak lensing mass-mapping problem.
We demonstrate the accuracy of the method on simulations and then apply it to the mass reconstruction of the HST/ACS COSMOS field.
arXiv Detail & Related papers (2022-01-14T17:07:48Z)
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Regularization-Agnostic Compressed Sensing MRI Reconstruction with Hypernetworks [21.349071909858218]
We present a novel strategy of using a hypernetwork to generate the parameters of a separate reconstruction network as a function of the regularization weight(s).
At test time, for a given under-sampled image, our model can rapidly compute reconstructions with different amounts of regularization.
We analyze the variability of these reconstructions, especially in situations when the overall quality is similar.
arXiv Detail & Related papers (2021-01-06T18:55:37Z)
- Understanding Self-supervised Learning with Dual Deep Networks [74.92916579635336]
We propose a novel framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks.
We prove that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator.
To further study what role the covariance operator plays and which features are learned in such a process, we model data generation and augmentation processes through a hierarchical latent tree model (HLTM).
arXiv Detail & Related papers (2020-10-01T17:51:49Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
- FPCR-Net: Feature Pyramidal Correlation and Residual Reconstruction for Optical Flow Estimation [72.41370576242116]
We propose a semi-supervised Feature Pyramidal Correlation and Residual Reconstruction Network (FPCR-Net) for optical flow estimation from frame pairs.
It consists of two main modules: pyramid correlation mapping and residual reconstruction.
Experiment results show that the proposed scheme achieves the state-of-the-art performance, with improvement by 0.80, 1.15 and 0.10 in terms of average end-point error (AEE) against competing baseline methods.
arXiv Detail & Related papers (2020-01-17T07:13:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.