Mass Estimation of Galaxy Clusters with Deep Learning II: CMB Cluster
Lensing
- URL: http://arxiv.org/abs/2005.13985v2
- Date: Fri, 22 Oct 2021 07:44:21 GMT
- Title: Mass Estimation of Galaxy Clusters with Deep Learning II: CMB Cluster
Lensing
- Authors: N. Gupta and C. L. Reichardt
- Abstract summary: We present a new application of deep learning to reconstruct the cosmic microwave background (CMB) temperature maps from images of the microwave sky.
We use a feed-forward deep learning network, mResUNet, for both steps of the analysis.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new application of deep learning to reconstruct the cosmic
microwave background (CMB) temperature maps from images of the microwave sky,
and to use these reconstructed maps to estimate the masses of galaxy clusters.
We use a feed-forward deep learning network, mResUNet, for both steps of the
analysis. The first deep learning model, mResUNet-I, is trained to reconstruct
foreground and noise suppressed CMB maps from a set of simulated images of the
microwave sky that include signals from the cosmic microwave background,
astrophysical foregrounds like dusty and radio galaxies, instrumental noise as
well as the cluster's own thermal Sunyaev-Zel'dovich signal. The second deep
learning model, mResUNet-II, is trained to estimate cluster masses from the
gravitational lensing signature in the reconstructed foreground and noise
suppressed CMB maps. For SPTpol-like noise levels, the trained mResUNet-II
model recovers the mass for $10^4$ galaxy cluster samples with a 1-$\sigma$
uncertainty $\Delta M_{\rm 200c}^{\rm est}/M_{\rm 200c}^{\rm est} =$ 0.108 and
0.016 for input cluster mass $M_{\rm 200c}^{\rm true}=10^{14}~\rm M_{\odot}$
and $8\times 10^{14}~\rm M_{\odot}$, respectively. We also test for potential
bias on recovered masses, finding that for a set of $10^5$ clusters the
estimator recovers $M_{\rm 200c}^{\rm est} = 2.02 \times 10^{14}~\rm
M_{\odot}$, consistent with the input at the 1% level. The 2-$\sigma$ upper limit
on potential bias is at the 3.5% level.
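As a concrete illustration of the two-step pipeline described above, the following minimal Python/PyTorch sketch pairs a small residual convolutional network that maps multi-frequency sky cutouts to a cleaned CMB temperature map (the role played by mResUNet-I) with a second network that regresses a single cluster mass from the cleaned map (the role played by mResUNet-II). The layer counts, channel widths, three-frequency input and 64x64 cutout size are illustrative assumptions, not the architecture used in the paper.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 convolutions with an additive skip connection (residual unit)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))

class MapCleaner(nn.Module):
    """Stage-I stand-in for mResUNet-I: frequency maps -> cleaned CMB map."""
    def __init__(self, in_ch=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            ResBlock(width), ResBlock(width),
            nn.Conv2d(width, 1, 3, padding=1),   # single cleaned temperature map
        )
    def forward(self, x):
        return self.net(x)

class MassRegressor(nn.Module):
    """Stage-II stand-in for mResUNet-II: cleaned map -> scalar mass estimate."""
    def __init__(self, width=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
            ResBlock(width),
            nn.AdaptiveAvgPool2d(1),             # global average pooling
        )
        self.head = nn.Linear(width, 1)          # e.g. M_200c in units of 1e14 Msun
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Toy forward pass: a batch of 4 cutouts with 3 frequency channels, 64x64 pixels.
sky = torch.randn(4, 3, 64, 64)
cleaned = MapCleaner()(sky)       # shape (4, 1, 64, 64)
mass = MassRegressor()(cleaned)   # shape (4, 1)
print(cleaned.shape, mass.shape)

In practice the first network would be trained on pairs of simulated sky cutouts and foreground/noise-free CMB cutouts, and the second on cleaned cutouts labelled with the input $M_{\rm 200c}$, mirroring the training sets described in the abstract.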
Related papers
- A Unified Framework for Gradient-based Clustering of Distributed Data [51.904327888475606]
We develop a family of distributed clustering algorithms that work over networks of users.
DGC-$\mathcal{F}_\rho$ is specialized to popular clustering losses like $K$-means and Huber loss.
We show that consensus fixed points of DGC-$\mathcal{F}_\rho$ are equivalent to fixed points of gradient clustering over the full data.
arXiv Detail & Related papers (2024-02-02T10:44:42Z)
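The clustering entry above treats losses such as $K$-means as objectives that can be minimized by gradient steps. The short sketch below is a plain, centralized gradient descent on the full-data $K$-means loss, included only to illustrate that idea; it is not the paper's distributed DGC-$\mathcal{F}_\rho$ algorithm, and the toy data, step size and iteration count are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                    # toy data: 500 points in 2D
K, lr, n_steps = 3, 0.1, 100
C = X[rng.choice(len(X), K, replace=False)]      # initial centroids

for _ in range(n_steps):
    # Assign each point to its nearest centroid.
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # (N, K) squared distances
    labels = d2.argmin(1)
    # Gradient of 0.5 * sum_i ||x_i - c_{label(i)}||^2 with respect to each centroid.
    grad = np.zeros_like(C)
    for k in range(K):
        members = X[labels == k]
        if len(members):
            grad[k] = (C[k] - members).sum(0)
    C -= lr * grad / len(X)                      # gradient step on the full-data loss

print("final centroids:\n", C)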
- Physics-informed compressed sensing for PC-MRI: an inverse Navier-Stokes problem [78.20667552233989]
We formulate a physics-informed compressed sensing (PICS) method for the reconstruction of velocity fields from noisy and sparse magnetic resonance signals.
We find that the method is capable of reconstructing and segmenting the velocity fields from sparsely-sampled signals.
arXiv Detail & Related papers (2022-07-04T14:51:59Z)
- Augmenting astrophysical scaling relations with machine learning: application to reducing the SZ flux-mass scatter [2.0223261087090303]
We study the Sunyaev-Zel'dovich flux$-$cluster mass relation ($Y_\mathrm{SZ}-M$).
We find a new proxy for cluster mass which combines $Y_\mathrm{SZ}$ and the concentration of ionized gas ($c_\mathrm{gas}$).
We show that the dependence on $c_\mathrm{gas}$ is linked to cores of clusters exhibiting larger scatter than their outskirts.
arXiv Detail & Related papers (2022-01-04T19:00:01Z)
- Robust marginalization of baryonic effects for cosmological inference at the field level [12.768056235837427]
We train neural networks to perform likelihood-free inference from $(25\,h^{-1}\,\mathrm{Mpc})^2$ 2D maps containing the total mass surface density.
We show that the networks can extract information beyond one-point functions and power spectra from all resolved scales.
arXiv Detail & Related papers (2021-09-21T18:00:01Z)
- Boosting in the Presence of Massart Noise [49.72128048499074]
We study the problem of boosting the accuracy of a weak learner in the (distribution-independent) PAC model with Massart noise.
Our main result is the first computationally efficient boosting algorithm in the presence of Massart noise.
As a simple application of our positive result, we give the first efficient Massart learner for unions of high-dimensional rectangles.
arXiv Detail & Related papers (2021-06-14T22:21:25Z)
- Provable Robustness of Adversarial Training for Learning Halfspaces with Noise [95.84614821570283]
We analyze the properties of adversarial training for learning adversarially robust halfspaces in the presence of label noise.
To the best of our knowledge, this is the first work to show that adversarial training provably yields robust classifiers in the presence of noise.
arXiv Detail & Related papers (2021-04-19T16:35:38Z)
- Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK [58.5766737343951]
We consider the dynamics of gradient descent for learning a two-layer neural network.
We show that an over-parametrized two-layer neural network can provably learn a ground-truth network with gradient descent, beyond the Neural Tangent Kernel (NTK) regime.
arXiv Detail & Related papers (2020-07-09T07:09:28Z)
- DeepMerge: Classifying High-redshift Merging Galaxies with Deep Neural Networks [0.0]
We show the use of convolutional neural networks (CNNs) for the task of distinguishing between merging and non-merging galaxies in simulated images.
We extract images of merging and non-merging galaxies from the Illustris-1 cosmological simulation and apply observational and experimental noise.
The test-set classification accuracy of the CNN is $79\%$ for pristine images and $76\%$ for noisy images.
arXiv Detail & Related papers (2020-04-24T20:36:06Z)
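For the DeepMerge entry above, the following sketch shows the general shape of a small CNN binary classifier for merging versus non-merging galaxy cutouts. The input size, channel widths and single-band assumption are illustrative only and do not reproduce the published DeepMerge architecture.

import torch
import torch.nn as nn

class MergerCNN(nn.Module):
    """Tiny CNN emitting one logit per cutout: sigmoid(logit) ~ P(merger)."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, 1)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

# Toy training step with binary cross-entropy on random cutouts and labels.
model = MergerCNN()
images = torch.randn(8, 1, 64, 64)               # batch of 8 single-band cutouts
labels = torch.randint(0, 2, (8, 1)).float()     # 1 = merger, 0 = non-merger
loss = nn.BCEWithLogitsLoss()(model(images), labels)
loss.backward()
print(float(loss))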
- Mass Estimation of Galaxy Clusters with Deep Learning I: Sunyaev-Zel'dovich Effect [0.0]
We present a new application of deep learning to infer the masses of galaxy clusters directly from images of the microwave sky.
We train and test the deep learning model using simulated images of the microwave sky that include signals from the cosmic microwave background (CMB), dusty and radio galaxies, instrumental noise as well as the cluster's own SZ signal.
We verify that the model works for realistic SZ profiles even when trained on azimuthally symmetric SZ profiles by using the Magneticum hydrodynamical simulations.
arXiv Detail & Related papers (2020-03-13T07:16:20Z)
- Quantum Algorithms for Simulating the Lattice Schwinger Model [63.18141027763459]
We give scalable, explicit digital quantum algorithms to simulate the lattice Schwinger model in both NISQ and fault-tolerant settings.
In lattice units, we find a Schwinger model on $N/2$ physical sites with coupling constant $x^{-1/2}$ and electric field cutoff $x^{-1/2}\Lambda$.
We estimate the cost of measuring observables in both the NISQ and fault-tolerant settings, assuming a simple target observable: the mean pair density.
arXiv Detail & Related papers (2020-02-25T19:18:36Z)
- CosmoVAE: Variational Autoencoder for CMB Image Inpainting [4.69377041192659]
The noise of the CMB map has a significant impact on the estimation precision for cosmological parameters.
In this paper, we propose a deep learning-based variational autoencoder to restore the missing observations of the CMB map.
The proposed model achieves state-of-the-art performance for Planck \texttt{Commander} 2018 CMB map inpainting.
arXiv Detail & Related papers (2020-01-31T03:54:35Z)