Hierarchical Inference of the Lensing Convergence from Photometric
Catalogs with Bayesian Graph Neural Networks
- URL: http://arxiv.org/abs/2211.07807v1
- Date: Tue, 15 Nov 2022 00:29:20 GMT
- Title: Hierarchical Inference of the Lensing Convergence from Photometric
Catalogs with Bayesian Graph Neural Networks
- Authors: Ji Won Park, Simon Birrer, Madison Ueland, Miles Cranmer, Adriano
Agnello, Sebastian Wagner-Carena, Philip J. Marshall, Aaron Roodman, and the
LSST Dark Energy Science Collaboration
- Abstract summary: We introduce fluctuations on galaxy-galaxy lensing scales of $\sim$1$''$ and extract random sightlines to train our BGNN.
For each test set of 1,000 sightlines, the BGNN infers the individual $\kappa$ posteriors, which we combine in a hierarchical Bayesian model.
For a test field well sampled by the training set, the BGNN recovers the population mean of $\kappa$ precisely and without bias.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a Bayesian graph neural network (BGNN) that can estimate the weak
lensing convergence ($\kappa$) from photometric measurements of galaxies along
a given line of sight. The method is of particular interest in strong
gravitational time delay cosmography (TDC), where characterizing the "external
convergence" ($\kappa_{\rm ext}$) from the lens environment and line of sight
is necessary for precise inference of the Hubble constant ($H_0$). Starting
from a large-scale simulation with a $\kappa$ resolution of $\sim$1$'$, we
introduce fluctuations on galaxy-galaxy lensing scales of $\sim$1$''$ and
extract random sightlines to train our BGNN. We then evaluate the model on test
sets with varying degrees of overlap with the training distribution. For each
test set of 1,000 sightlines, the BGNN infers the individual $\kappa$
posteriors, which we combine in a hierarchical Bayesian model to yield
constraints on the hyperparameters governing the population. For a test field
well sampled by the training set, the BGNN recovers the population mean of
$\kappa$ precisely and without bias, resulting in a contribution to the $H_0$
error budget well under 1\%. In the tails of the training set with sparse
samples, the BGNN, which can ingest all available information about each
sightline, extracts more $\kappa$ signal compared to a simplified version of
the traditional method based on matching galaxy number counts, which is limited
by sample variance. Our hierarchical inference pipeline using BGNNs promises to
improve the $\kappa_{\rm ext}$ characterization for precision TDC. The
implementation of our pipeline is available as a public Python package, Node to
Joy.
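To make the hierarchical step concrete: given per-sightline $\kappa$ posterior samples, the population mean and scatter can be recovered by reweighting those samples under candidate population models. The sketch below is illustrative only, not the Node to Joy API; it uses synthetic stand-ins for BGNN output, assumes Gaussian population and noise models, and assumes a flat interim prior so the hierarchical weights reduce to the population density evaluated at each posterior sample.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic stand-in for BGNN output: per-sightline kappa posterior samples,
# shape (n_sightlines, n_samples). True population: N(true_mu, true_sigma).
rng = np.random.default_rng(0)
n_sightlines, n_samples = 1000, 500
true_mu, true_sigma, noise = 0.01, 0.04, 0.02
kappa_true = rng.normal(true_mu, true_sigma, n_sightlines)
obs = kappa_true + rng.normal(0.0, noise, n_sightlines)
samples = rng.normal(obs[:, None], noise, (n_sightlines, n_samples))

def neg_log_like(theta):
    """Hierarchical likelihood via sample reweighting (flat interim prior):
    L(mu, sigma) = prod_i mean_s N(kappa_is; mu, sigma)."""
    mu, log_sigma = theta
    w = norm.pdf(samples, mu, np.exp(log_sigma))
    return -np.sum(np.log(w.mean(axis=1) + 1e-300))

res = minimize(neg_log_like, x0=[0.0, np.log(0.05)], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"population mean ~ {mu_hat:.4f}, scatter ~ {sigma_hat:.4f}")
```

Maximizing this marginalized likelihood recovers the input mean and scatter, which is the sense in which the abstract's unbiased recovery of the population mean of $\kappa$ is assessed.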
Related papers
- Self-Ensembling Gaussian Splatting for Few-shot Novel View Synthesis
3D Gaussian Splatting (3DGS) has demonstrated remarkable effectiveness for novel view synthesis (NVS).
However, the 3DGS model tends to overfit when trained with sparse posed views, limiting its generalization capacity for broader pose variations.
We introduce a self-ensembling Gaussian Splatting (SE-GS) approach to alleviate the overfitting problem.
arXiv Detail & Related papers (2024-10-31T18:43:48Z)
- Bayesian Inference with Deep Weakly Nonlinear Networks
We show at a physics level of rigor that Bayesian inference with a fully connected neural network is solvable.
We provide techniques to compute the model evidence and posterior to arbitrary order in $1/N$ and at arbitrary temperature.
arXiv Detail & Related papers (2024-05-26T17:08:04Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Role of Locality and Weight Sharing in Image-Based Tasks: A Sample Complexity Separation between CNNs, LCNs, and FCNs
Vision tasks are characterized by the properties of locality and translation invariance.
The superior performance of convolutional neural networks (CNNs) on these tasks is widely attributed to the inductive bias of locality and weight sharing baked into their architecture.
Existing attempts to quantify the statistical benefits of these biases in CNNs over locally connected neural networks (LCNs) and fully connected neural networks (FCNs) fall into one of a few limited categories.
arXiv Detail & Related papers (2024-03-23T03:57:28Z)
- Local primordial non-Gaussianity from the large-scale clustering of photometric DESI luminous red galaxies
We use angular clustering of luminous red galaxies from the Dark Energy Spectroscopic Instrument (DESI) imaging surveys to constrain the local primordial non-Gaussianity parameter $f_{\rm NL}$.
Our sample comprises over 12 million targets, covering 14,000 square degrees of the sky, with redshifts in the range $0.2 < z < 1.35$.
We identify Galactic extinction, survey depth, and astronomical seeing as the primary sources of systematic error, and employ linear regression and artificial neural networks to alleviate non-cosmological excess clustering on large scales.
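The linear-regression step for mitigating imaging systematics can be sketched as follows: regress observed galaxy counts per pixel against survey-property templates (extinction, depth, seeing) and derive corrective weights from the predicted trend. This is a toy version with synthetic maps and made-up coefficients, not the DESI analysis itself, which uses more careful weighting and validation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix = 50_000
# Hypothetical standardized survey-property templates per sky pixel.
extinction = rng.normal(size=n_pix)
depth = rng.normal(size=n_pix)
seeing = rng.normal(size=n_pix)
X = np.column_stack([np.ones(n_pix), extinction, depth, seeing])

# Synthetic galaxy counts: constant mean plus spurious linear trends.
counts = rng.poisson(100 * (1 + 0.05 * extinction - 0.03 * seeing))

# Ordinary least squares fit of the systematic trend, then weights that
# upweight pixels where the systematics suppressed the observed density.
beta, *_ = np.linalg.lstsq(X, counts, rcond=None)
predicted = X @ beta
weights = predicted.mean() / predicted
corrected = counts * weights

print("raw corr. with extinction:     ",
      np.corrcoef(counts, extinction)[0, 1].round(3))
print("weighted corr. with extinction:",
      np.corrcoef(corrected, extinction)[0, 1].round(3))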
arXiv Detail & Related papers (2023-07-04T14:49:23Z)
- Generalization and Stability of Interpolating Neural Networks with Minimal Width
We investigate the generalization and optimization of shallow neural networks trained by gradient descent in the interpolating regime.
With $m=\Omega(\log^4(n))$ neurons and $T\approx n$ iterations, we prove convergence of the training loss and establish an $\tilde{O}$-type bound on the test loss.
arXiv Detail & Related papers (2023-02-18T05:06:15Z)
- Neural Inference of Gaussian Processes for Time Series Data of Quasars
We introduce a new model that enables a complete description of quasar spectra.
We also introduce a new method of inference of Gaussian process parameters, which we call Neural Inference.
The combination of both the CDRW model and Neural Inference significantly outperforms the baseline DRW and MLE.
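For reference, the damped random walk (DRW) baseline that CDRW extends is an Ornstein-Uhlenbeck Gaussian process, whose log-likelihood on an irregularly sampled light curve can be written directly. A minimal numpy sketch with synthetic data follows; the paper's CDRW kernel and Neural Inference method are not reproduced here.

```python
import numpy as np

def drw_loglike(t, y, yerr, sigma, tau):
    """GP log-likelihood under a DRW (Ornstein-Uhlenbeck) kernel:
    K_ij = sigma^2 * exp(-|t_i - t_j| / tau), plus measurement noise."""
    K = sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
    K[np.diag_indices_from(K)] += yerr**2
    L = np.linalg.cholesky(K)
    r = y - y.mean()  # quick centering in place of a fitted mean
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    return (-0.5 * r @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(t) * np.log(2 * np.pi))

# Toy usage on a synthetic light curve (hypothetical values, not a DRW draw).
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 1000, 200))   # observation times in days
y = rng.normal(19.0, 0.2, 200)           # magnitudes
print(drw_loglike(t, y, yerr=0.02, sigma=0.2, tau=100.0))
```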
arXiv Detail & Related papers (2022-11-17T13:01:26Z)
- Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis
We show how to significantly reduce the number of neurons required for two-layer ReLU networks.
We also prove new lower bounds that improve upon prior work, and that under certain assumptions, are best possible.
arXiv Detail & Related papers (2022-06-26T06:51:31Z)
- High-Dimensional Inference over Networks: Linear Convergence and Statistical Guarantees
We study a sparse linear regression over a network of agents, modeled as an undirected graph with no server node.
We analyze the convergence rate and statistical guarantees of a distributed projected gradient tracking-based algorithm.
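A gradient-tracking iteration over an undirected graph can be sketched in a few lines: each agent mixes its iterate with its neighbors' and maintains a running estimate of the global gradient. This toy least-squares version omits the sparsity projection the paper analyzes; the ring topology and step size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_agents, d = 5, 10
# Each agent holds a private least-squares objective ||A_i x - b_i||^2 / n_i.
A = [rng.normal(size=(20, d)) for _ in range(n_agents)]
x_star = rng.normal(size=d)
b = [Ai @ x_star for Ai in A]
grad = lambda i, x: 2 * A[i].T @ (A[i] @ x - b[i]) / len(b[i])

# Doubly stochastic mixing matrix for a ring graph (lazy Metropolis weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

step = 0.05
x = np.zeros((n_agents, d))
y = np.array([grad(i, x[i]) for i in range(n_agents)])  # gradient trackers

for _ in range(500):
    x_new = W @ x - step * y           # consensus step minus tracked gradient
    g_new = np.array([grad(i, x_new[i]) for i in range(n_agents)])
    g_old = np.array([grad(i, x[i]) for i in range(n_agents)])
    y = W @ y + g_new - g_old          # gradient-tracking update
    x = x_new

print("consensus error:", np.linalg.norm(x - x.mean(0)).round(6))
print("distance to x*: ", np.linalg.norm(x.mean(0) - x_star).round(6))
```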
arXiv Detail & Related papers (2022-01-21T01:26:08Z)
- Large-Scale Gravitational Lens Modeling with Bayesian Neural Networks for Accurate and Precise Inference of the Hubble Constant
We investigate the use of approximate Bayesian neural networks (BNNs) in modeling hundreds of time-delay gravitational lenses.
A simple combination of 200 test-set lenses results in a precision of 0.5 $\textrm{km s}^{-1}\,\textrm{Mpc}^{-1}$ ($0.7\%$).
Our pipeline is a promising tool for exploring ensemble-level systematics in lens modeling.
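The ensemble-level combination can be illustrated with a precision-weighted average of approximately Gaussian per-lens $H_0$ posteriors: individual few-percent uncertainties average down to sub-percent over 200 lenses. The numbers below are synthetic stand-ins, not the paper's actual posteriors or combination scheme.

```python
import numpy as np

rng = np.random.default_rng(4)
H0_true, n_lenses = 70.0, 200
# Hypothetical per-lens posteriors summarized as (mean, std) in km/s/Mpc.
per_lens_std = rng.uniform(4.0, 10.0, n_lenses)
per_lens_mean = rng.normal(H0_true, per_lens_std)

# Inverse-variance (precision) weighting of approximately Gaussian posteriors.
w = 1.0 / per_lens_std**2
H0_hat = np.sum(w * per_lens_mean) / np.sum(w)
H0_err = np.sqrt(1.0 / np.sum(w))
print(f"H0 = {H0_hat:.2f} +/- {H0_err:.2f} km/s/Mpc "
      f"({100 * H0_err / H0_hat:.2f}%)")
```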
arXiv Detail & Related papers (2020-11-30T19:00:20Z)
- Deep learning for gravitational-wave data analysis: A resampling white-box approach
We apply Convolutional Neural Networks (CNNs) to detect gravitational wave (GW) signals of compact binary coalescences, using single-interferometer data from LIGO detectors.
CNNs were quite precise at detecting noise but not sensitive enough to recall GW signals, meaning that CNNs are better suited to noise reduction than to generating GW triggers.
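The stated trade-off, high precision on noise but low recall on signals, is just the two standard detection metrics; a minimal computation makes the distinction concrete (illustrative counts, not the paper's results).

```python
# Precision = TP / (TP + FP): how often a claimed GW trigger is real.
# Recall    = TP / (TP + FN): what fraction of real GW signals is found.
tp, fp, fn = 40, 2, 60   # illustrative counts only
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision={precision:.2f}, recall={recall:.2f}")  # high vs low
```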
arXiv Detail & Related papers (2020-09-09T03:28:57Z)