Accelerating lensed quasar discovery and modeling with physics-informed variational autoencoders
- URL: http://arxiv.org/abs/2412.12709v3
- Date: Mon, 27 Jan 2025 13:29:05 GMT
- Title: Accelerating lensed quasar discovery and modeling with physics-informed variational autoencoders
- Authors: Irham T. Andika, Stefan Schuldt, Sherry H. Suyu, Satadru Bag, Raoul Cañameras, Alejandra Melo, Claudio Grillo, James H. H. Chan,
- Abstract summary: Strongly lensed quasars provide valuable insights into the rate of cosmic expansion.
However, detecting them in astronomical images is difficult due to the prevalence of non-lensing objects.
We develop a generative deep learning model called VariLens, built upon a physics-informed variational autoencoder.
- Score: 34.82692226532414
- License:
- Abstract: Strongly lensed quasars provide valuable insights into the rate of cosmic expansion, the distribution of dark matter in foreground deflectors, and the characteristics of quasar hosts. However, detecting them in astronomical images is difficult due to the prevalence of non-lensing objects. To address this challenge, we developed a generative deep learning model called VariLens, built upon a physics-informed variational autoencoder. This model seamlessly integrates three essential modules: image reconstruction, object classification, and lens modeling, offering a fast and comprehensive approach to strong lens analysis. VariLens is capable of rapidly determining both (1) the probability that an object is a lens system and (2) key parameters of a singular isothermal ellipsoid (SIE) mass model -- including the Einstein radius ($\theta_\mathrm{E}$), lens center, and ellipticity -- in just milliseconds using a single CPU. A direct comparison of VariLens estimates with traditional lens modeling for 20 known lensed quasars within the Subaru Hyper Suprime-Cam (HSC) footprint shows good agreement, with both results consistent within $2\sigma$ for systems with $\theta_\mathrm{E}<3$ arcsecs. To identify new lensed quasar candidates, we begin with an initial sample of approximately 80 million sources, combining HSC data with multiwavelength information from various surveys. After applying a photometric preselection aimed at locating $z>1.5$ sources, the number of candidates was reduced to 710,966. Subsequently, VariLens highlights 13,831 sources, each showing a high likelihood of being a lens. A visual assessment of these objects results in 42 promising candidates that await spectroscopic confirmation. These results underscore the potential of automated deep learning pipelines to efficiently detect and model strong lenses in large datasets.
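The abstract describes a VAE backbone whose latent representation feeds three modules: image reconstruction, lens/non-lens classification, and regression of the SIE mass-model parameters (Einstein radius, lens center, ellipticity). As a minimal illustrative sketch of that multi-task idea only (not the authors' actual VariLens code; the layer sizes, loss weights, 5-band 64x64 cutouts, and the (e1, e2) ellipticity parameterization below are all assumptions), a PyTorch version could look like this:

```python
# Minimal sketch of a VariLens-style multi-task VAE (illustrative assumptions only;
# the real architecture, image size, and loss weights are not specified here).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskVAE(nn.Module):
    def __init__(self, in_channels=5, img_size=64, latent_dim=32, n_sie_params=5):
        super().__init__()
        # Convolutional encoder shared by all three tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),            # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),           # 16 -> 8
            nn.Flatten(),
        )
        feat_dim = 128 * (img_size // 8) ** 2
        self.fc_mu = nn.Linear(feat_dim, latent_dim)
        self.fc_logvar = nn.Linear(feat_dim, latent_dim)

        # Module 1: image reconstruction (decoder).
        self.decoder_fc = nn.Linear(latent_dim, feat_dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, in_channels, 4, stride=2, padding=1),     # 32 -> 64
        )
        # Module 2: lens / non-lens classification (single logit).
        self.classifier = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        # Module 3: SIE parameters, e.g. (theta_E, x0, y0, e1, e2).
        self.regressor = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_sie_params))
        self.img_size = img_size

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        d = self.decoder_fc(z).view(-1, 128, self.img_size // 8, self.img_size // 8)
        recon = self.decoder(d)
        lens_logit = self.classifier(z).squeeze(-1)
        sie_params = self.regressor(z)
        return recon, lens_logit, sie_params, mu, logvar


def multitask_loss(recon, x, lens_logit, label, sie_pred, sie_true, mu, logvar,
                   w_kl=1e-3, w_cls=1.0, w_sie=1.0):
    """Weighted sum of reconstruction, KL, classification, and SIE regression terms
    (weights are placeholders, not tuned values from the paper)."""
    rec = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    cls = F.binary_cross_entropy_with_logits(lens_logit, label.float())
    sie = F.mse_loss(sie_pred, sie_true)
    return rec + w_kl * kl + w_cls * cls + w_sie * sie


if __name__ == "__main__":
    model = MultiTaskVAE()
    imgs = torch.randn(4, 5, 64, 64)            # toy stand-in for multiband cutouts
    recon, logit, sie, mu, logvar = model(imgs)
    print(recon.shape, logit.shape, sie.shape)  # (4, 5, 64, 64) (4,) (4, 5)
```

Because all three heads share one latent vector, a single forward pass yields both the lens probability and the SIE parameter estimates, which is consistent with the millisecond-per-object, single-CPU inference the abstract reports.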
Related papers
- CSST Strong Lensing Preparation: a Framework for Detecting Strong Lenses in the Multi-color Imaging Survey by the China Survey Space Telescope (CSST) [25.468504540327498]
Strong gravitational lensing is a powerful tool for investigating dark matter and dark energy properties.
We have developed a framework based on a hierarchical visual Transformer with a sliding window technique to search for strong lensing systems within entire images.
Our framework achieves precision and recall rates of 0.98 and 0.90, respectively.
arXiv Detail & Related papers (2024-04-02T09:44:30Z) - Streamlined Lensed Quasar Identification in Multiband Images via Ensemble Networks [34.82692226532414]
Quasars experiencing strong lensing offer unique viewpoints on subjects related to cosmic expansion rate, dark matter, and quasar host galaxies.
We have developed a novel approach by ensembling cutting-edge convolutional networks (CNNs) trained on realistic galaxy-quasar lens simulations.
We retrieve approximately 60 million sources as parent samples and reduce this to 892,609 after employing a photometric preselection to discover quasars with Einstein radii of $\theta_\mathrm{E} < 5$ arcsec.
arXiv Detail & Related papers (2023-07-03T15:09:10Z) - PAC-NeRF: Physics Augmented Continuum Neural Radiance Fields for Geometry-Agnostic System Identification [64.61198351207752]
Existing approaches to system identification (estimating the physical parameters of an object) from videos assume known object geometries.
In this work, we aim to identify parameters characterizing a physical system from a set of multi-view videos without any assumption on object geometry or topology.
We propose "Physics Augmented Continuum Neural Radiance Fields" (PAC-NeRF), to estimate both the unknown geometry and physical parameters of highly dynamic objects from multi-view videos.
arXiv Detail & Related papers (2023-03-09T18:59:50Z) - When Spectral Modeling Meets Convolutional Networks: A Method for Discovering Reionization-era Lensed Quasars in Multi-band Imaging Data [0.0]
We introduce a new spatial geometry veto criterion, implemented via image-based deep learning.
We make the first application of this approach in a systematic search for reionization-era lensed quasars.
The training datasets are constructed by painting deflected point-source lights over actual galaxy images to generate realistic galaxy-quasar lens models.
arXiv Detail & Related papers (2022-11-26T11:27:13Z) - Strong Gravitational Lensing Parameter Estimation with Vision Transformer [2.0996675418033623]
With 31,200 simulated strongly lensed quasar images, we explore the usage of Vision Transformer (ViT) for simulated strong gravitational lensing for the first time.
We show that ViT could reach competitive results compared with CNNs, and is specifically good at some lensing parameters.
arXiv Detail & Related papers (2022-10-09T02:32:29Z) - DeepGraviLens: a Multi-Modal Architecture for Classifying Gravitational Lensing Data [3.4138918206057265]
DeepGraviLens is a novel network that classifies temporal data belonging to one non-lensed system type and three lensed system types.
It surpasses the current state-of-the-art accuracy results by $\approx 3\%$ to $\approx 11\%$, depending on the considered data set.
arXiv Detail & Related papers (2022-05-02T07:45:51Z) - RRNet: Relational Reasoning Network with Parallel Multi-scale Attention for Salient Object Detection in Optical Remote Sensing Images [82.1679766706423]
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs.
We propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs.
Our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-10-27T07:18:32Z) - DeepShadows: Separating Low Surface Brightness Galaxies from Artifacts using Deep Learning [70.80563014913676]
We investigate the use of convolutional neural networks (CNNs) for the problem of separating low-surface-brightness galaxies from artifacts in survey images.
We show that CNNs offer a very promising path in the quest to study the low-surface-brightness universe.
arXiv Detail & Related papers (2020-11-24T22:51:08Z) - Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present a first HS-D dataset by building a benchtop HS-D imager.
arXiv Detail & Related papers (2020-09-01T14:19:35Z) - Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be addressed with latent variable models.
High-dimensionality and non-linear issues are traditionally handled by kernel methods.
We propose merging both approaches into a single model.
arXiv Detail & Related papers (2020-06-01T14:25:38Z)