Inferring the Hubble Constant Using Simulated Strongly Lensed Supernovae and Neural Network Ensembles
- URL: http://arxiv.org/abs/2504.10553v1
- Date: Mon, 14 Apr 2025 10:43:18 GMT
- Title: Inferring the Hubble Constant Using Simulated Strongly Lensed Supernovae and Neural Network Ensembles
- Authors: Gonçalo Gonçalves, Nikki Arendse, Doogesh Kodi Ramanah, Radosław Wojtak
- Abstract summary: Strongly lensed supernovae are a promising new probe to obtain independent measurements of the Hubble constant. In this work, we employ simulated gravitationally lensed Type Ia supernovae (glSNe Ia) to train our machine learning pipeline.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Strongly lensed supernovae are a promising new probe for obtaining independent measurements of the Hubble constant ($H_0$). In this work, we employ simulated gravitationally lensed Type Ia supernovae (glSNe Ia) to train our machine learning (ML) pipeline to constrain $H_0$. We simulate image time series of glSNe Ia, as observed with the upcoming Nancy Grace Roman Space Telescope, which we use to train an ensemble of five convolutional neural networks (CNNs). The outputs of this ensemble network are combined with a simulation-based inference (SBI) framework to quantify the uncertainties on the network predictions and infer full posteriors for the $H_0$ estimates. We show that combining multiple glSN systems enhances constraint precision, providing a $4.4\%$ estimate of $H_0$ based on 100 simulated systems, in agreement with the ground truth. This work highlights the potential of leveraging ML with glSNe systems to obtain a pipeline capable of fast and automated $H_0$ measurements.
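The precision gain from combining systems can be illustrated with a toy calculation. Below is a minimal sketch, assuming (purely for illustration) that each glSN system yields an approximately Gaussian $H_0$ posterior; independent Gaussian posteriors under a flat prior then combine by inverse-variance weighting. The paper's actual pipeline infers full, non-Gaussian posteriors with SBI, and the per-system widths here are invented numbers.

```python
# Toy illustration: combining independent per-system H0 posteriors.
# Assumes Gaussian per-system posteriors (hypothetical); the paper's
# pipeline uses CNN ensembles + SBI to obtain full posteriors.
import numpy as np

rng = np.random.default_rng(0)
H0_TRUE = 70.0      # km/s/Mpc, illustrative ground truth
N_SYSTEMS = 100     # number of simulated glSNe Ia systems

sigma_i = rng.uniform(15.0, 30.0, N_SYSTEMS)  # invented per-system widths
mu_i = rng.normal(H0_TRUE, sigma_i)           # per-system posterior means

# Independent Gaussians under a flat prior combine by inverse-variance
# weighting: the joint posterior is Gaussian with these moments.
w = 1.0 / sigma_i**2
H0_comb = np.sum(w * mu_i) / np.sum(w)
sigma_comb = np.sqrt(1.0 / np.sum(w))
print(f"H0 = {H0_comb:.1f} +/- {sigma_comb:.1f} "
      f"({100 * sigma_comb / H0_comb:.1f}%)")
```

The combined width shrinks roughly as $1/\sqrt{N}$, which is the scaling that takes individually weak glSN constraints down to the few-percent level quoted above.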
Related papers
- Global Convergence and Rich Feature Learning in $L$-Layer Infinite-Width Neural Networks under $μ$P Parametrization [66.03821840425539]
In this paper, we investigate the training dynamics of $L$-layer neural networks trained with SGD, using the tensor program (TP) framework.
We show that SGD enables these networks to learn linearly independent features that substantially deviate from their initial values.
This rich feature space captures relevant data information and ensures that any convergent point of the training process is a global minimum.
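The "rich" regime can be made concrete with a toy diagnostic (not the paper's $\mu$P setup): train a small MLP and measure how far its hidden features move from their initial values. In the lazy/kernel regime this movement vanishes as width grows, whereas feature learning keeps it order one. All sizes and hyperparameters below are arbitrary.

```python
# Toy diagnostic: how far do hidden features move during training?
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 16)   # random inputs (toy data)
y = torch.randn(256, 1)    # random targets

hidden = nn.Sequential(nn.Linear(16, 64), nn.ReLU())
readout = nn.Linear(64, 1)

phi0 = hidden(X).detach()  # hidden features at initialization

opt = torch.optim.SGD(
    list(hidden.parameters()) + list(readout.parameters()), lr=0.1)
for _ in range(500):
    opt.zero_grad()
    loss = ((readout(hidden(X)) - y) ** 2).mean()
    loss.backward()
    opt.step()

# Relative feature movement; O(1) values signal the rich regime.
rel = (hidden(X).detach() - phi0).norm() / phi0.norm()
print(f"relative feature movement: {rel:.3f}")
```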
arXiv Detail & Related papers (2025-03-12T17:33:13Z)
- Automatic Machine Learning Framework to Study Morphological Parameters of AGN Host Galaxies within $z < 1.4$ in the Hyper Suprime-Cam Wide Survey [4.6218496439194805]
We present a machine learning framework to estimate posterior distributions of bulge-to-total light ratio, half-light radius, and flux for AGN host galaxies. We use PSFGAN to decompose the AGN point-source light from its host galaxy, and invoke the Galaxy Morphology Posterior Estimation Network (GaMPEN) to estimate morphological parameters. Our framework runs at least three orders of magnitude faster than traditional light-profile fitting methods.
arXiv Detail & Related papers (2025-01-27T03:04:34Z)
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
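The building block of an FNO is a spectral convolution: transform to Fourier space, apply a learned linear map to a truncated set of low-frequency modes, and transform back. A minimal 1D sketch (the paper's networks would wrap such layers with lifting/projection maps and nonlinearities; all sizes here are arbitrary):

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Learned linear mixing of the lowest Fourier modes (core FNO block)."""
    def __init__(self, channels: int, n_modes: int):
        super().__init__()
        self.n_modes = n_modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, n_modes, dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)               # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        k = min(self.n_modes, x_ft.shape[-1])
        # mix channels mode-by-mode; modes above k are truncated
        out_ft[..., :k] = torch.einsum(
            "bik,iok->bok", x_ft[..., :k], self.weight[..., :k])
        return torch.fft.irfft(out_ft, n=x.shape[-1])  # back to real space

layer = SpectralConv1d(channels=4, n_modes=12)
out = layer(torch.randn(2, 4, 64))             # -> shape (2, 4, 64)
```

Because the learned weights act on a fixed number of modes, the same layer can be evaluated on grids of different resolution, which is what makes neural operators attractive for modeling dynamics.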
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- Streamlined Lensed Quasar Identification in Multiband Images via Ensemble Networks [34.82692226532414]
Strongly lensed quasars offer unique views of the cosmic expansion rate, dark matter, and quasar host galaxies.
We develop a novel approach that ensembles state-of-the-art convolutional neural networks (CNNs) trained on realistic galaxy-quasar lens simulations.
We retrieve approximately 60 million sources as parent samples and reduce this to 892,609 after employing a photometric preselection to discover quasars with Einstein radii of $\theta_\mathrm{E} < 5$ arcsec.
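A sketch of the ensembling step, under invented architectures and thresholds: each member scores a cutout independently and the scores are averaged, which suppresses false positives that only some members produce. This is a minimal stand-in, not the paper's networks.

```python
import torch
import torch.nn as nn

def make_cnn() -> nn.Module:
    # Tiny stand-in for one ensemble member; the real members are
    # deeper, independently trained classifiers.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 1), nn.Sigmoid())        # outputs P(lens | image)

ensemble = [make_cnn() for _ in range(5)]

@torch.no_grad()
def ensemble_score(images: torch.Tensor) -> torch.Tensor:
    # Average the member probabilities per image.
    return torch.stack([m(images) for m in ensemble]).mean(dim=0)

scores = ensemble_score(torch.randn(8, 3, 64, 64))  # 8 toy cutouts
candidates = scores.squeeze(1) > 0.9                # hypothetical cut
```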
arXiv Detail & Related papers (2023-07-03T15:09:10Z)
- Constraining cosmological parameters from N-body simulations with Variational Bayesian Neural Networks [0.0]
Multiplicative normalizing flows (MNFs) are a family of approximate posteriors for the parameters of Bayesian neural networks (BNNs).
We compare MNFs with standard BNNs and the flipout estimator.
MNFs provide a more realistic predictive distribution, closer to the true posterior, mitigating the bias introduced by the variational approximation.
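Whatever the approximate posterior (mean-field, flipout, or MNF), the predictive distribution is estimated the same way: average the likelihood over posterior weight samples. A minimal sketch with a mean-field Gaussian posterior over two weights (an MNF would additionally pass these samples through a normalizing flow):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative mean-field posterior q(w) over two weights.
post_mean = np.array([0.8, -0.3])
post_std = np.array([0.1, 0.2])

def predictive_samples(x, n_samples=1000):
    # Monte Carlo estimate of p(y|x) = E_q[ p(y|x, w) ]:
    # draw weights from q, predict once per draw.
    w = rng.normal(post_mean, post_std, size=(n_samples, 2))
    return w @ x

s = predictive_samples(np.array([1.0, 2.0]))
print(f"predictive mean {s.mean():.3f} +/- {s.std():.3f}")
```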
arXiv Detail & Related papers (2023-01-09T16:07:48Z)
- Hierarchical Inference of the Lensing Convergence from Photometric Catalogs with Bayesian Graph Neural Networks [0.0]
We introduce fluctuations on galaxy-galaxy lensing scales of $\sim 1''$ and extract random sightlines to train our Bayesian graph neural network (BGNN).
For each test set of 1,000 sightlines, the BGNN infers the individual $\kappa$ posteriors, which we combine in a hierarchical Bayesian model.
For a test field well sampled by the training set, the BGNN recovers the population mean of $\kappa$ precisely and without bias.
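The hierarchical combination step can be sketched as follows, assuming for illustration that each sightline's $\kappa$ posterior is summarized as a Gaussian and that the hyperprior on the population mean is flat; the paper's hierarchical model is richer (it also infers the population scatter), and all numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-sightline kappa posteriors, summarized as Gaussians.
n_lines = 1000
sigma_i = rng.uniform(0.02, 0.05, n_lines)   # invented posterior widths
mu_i = rng.normal(0.01, sigma_i)             # invented posterior means

# Grid over the population mean; the log-posterior is a sum of
# per-sightline Gaussian log-likelihood terms (flat hyperprior).
grid = np.linspace(-0.05, 0.07, 2001)
logpost = -0.5 * np.sum(
    (mu_i[:, None] - grid[None, :])**2 / sigma_i[:, None]**2, axis=0)
post = np.exp(logpost - logpost.max())
dx = grid[1] - grid[0]
post /= post.sum() * dx                      # normalize on the grid

mean_est = np.sum(grid * post) * dx
print(f"population mean kappa = {mean_est:.4f}")
```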
arXiv Detail & Related papers (2022-11-15T00:29:20Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs, since performance prediction requires training each candidate model.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Scalable Lipschitz Residual Networks with Convex Potential Flows [120.27516256281359]
We show that using convex potentials in a residual network gradient flow provides a built-in $1$-Lipschitz transformation.
A comprehensive set of experiments on CIFAR-10 demonstrates the scalability of our architecture and the benefit of our approach for $\ell_2$ provable defenses.
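The key construction is a residual layer of the form $z = x - \frac{2}{\|W\|_2^2} W^\top \sigma(Wx + b)$, which is $1$-Lipschitz by construction because it is a gradient step on a convex potential. A minimal dense sketch (the paper's networks use convolutional layers, with the spectral norm tracked by power iteration rather than an exact computation):

```python
import torch
import torch.nn as nn

class ConvexPotentialLayer(nn.Module):
    """x -> x - (2 / ||W||_2^2) W^T ReLU(W x + b): 1-Lipschitz residual map."""
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x):                     # x: (batch, dim)
        sigma = torch.linalg.matrix_norm(self.W, ord=2)  # spectral norm
        return x - (2.0 / sigma**2) * torch.relu(x @ self.W.T + self.b) @ self.W

layer = ConvexPotentialLayer(16)
x1, x2 = torch.randn(1, 16), torch.randn(1, 16)
ratio = (layer(x1) - layer(x2)).norm() / (x1 - x2).norm()
print(f"empirical Lipschitz ratio: {ratio:.3f}  (should be <= 1)")
```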
arXiv Detail & Related papers (2021-10-25T07:12:53Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Optimising simulations for diphoton production at hadron colliders using amplitude neural networks [0.0]
We investigate the use of neural networks to approximate matrix elements for high-multiplicity scattering processes.
We develop a realistic simulation method that can be applied to hadron collider observables.
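The core idea reduces to supervised regression: evaluate the expensive amplitude on sampled phase-space points, then fit a small network to reproduce it cheaply inside the event generator. A generic sketch with an invented surrogate target standing in for a real matrix element:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "phase-space" inputs and an invented smooth, peaked target standing
# in for log|M|^2; real data would come from an amplitude library.
X = torch.rand(4096, 8)
y = (X.prod(dim=1, keepdim=True) + 1e-3).log()

model = nn.Sequential(nn.Linear(8, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final MSE on training grid: {loss.item():.4f}")
```

Fitting the logarithm rather than the raw amplitude is a common choice when the target spans many orders of magnitude; here it is simply part of the toy setup.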
arXiv Detail & Related papers (2021-06-17T13:24:36Z)
- Large-Scale Gravitational Lens Modeling with Bayesian Neural Networks for Accurate and Precise Inference of the Hubble Constant [0.0]
We investigate the use of approximate Bayesian neural networks (BNNs) in modeling hundreds of time-delay gravitational lenses.
A simple combination of 200 test-set lenses results in a precision of $0.5\,\textrm{km}\,\textrm{s}^{-1}\,\textrm{Mpc}^{-1}$ ($0.7\%$).
Our pipeline is a promising tool for exploring ensemble-level systematics in lens modeling.
arXiv Detail & Related papers (2020-11-30T19:00:20Z)
- DeepShadows: Separating Low Surface Brightness Galaxies from Artifacts using Deep Learning [70.80563014913676]
We investigate the use of convolutional neural networks (CNNs) for the problem of separating low-surface-brightness galaxies from artifacts in survey images.
We show that CNNs offer a very promising path in the quest to study the low-surface-brightness universe.
arXiv Detail & Related papers (2020-11-24T22:51:08Z)