Strong Gravitational Lensing Parameter Estimation with Vision
Transformer
- URL: http://arxiv.org/abs/2210.04143v1
- Date: Sun, 9 Oct 2022 02:32:29 GMT
- Title: Strong Gravitational Lensing Parameter Estimation with Vision
Transformer
- Authors: Kuan-Wei Huang, Geoff Chih-Fan Chen, Po-Wen Chang, Sheng-Chieh Lin,
Chia-Jung Hsu, Vishal Thengane, Joshua Yao-Yu Lin
- Abstract summary: With 31,200 simulated strongly lensed quasar images, we explore the use of the Vision Transformer (ViT) for simulated strong gravitational lensing for the first time.
We show that ViT achieves results competitive with CNNs and performs particularly well on certain lensing parameters.
- Score: 2.0996675418033623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantifying the parameters and corresponding uncertainties of hundreds of
strongly lensed quasar systems holds the key to resolving one of the most
important scientific questions: the Hubble constant ($H_{0}$) tension. The
commonly used Markov chain Monte Carlo (MCMC) method has been too
time-consuming to achieve this goal, yet recent work has shown that convolutional
neural networks (CNNs) can be an alternative with seven orders of magnitude
improvement in speed. With 31,200 simulated strongly lensed quasar images, we
explore the use of the Vision Transformer (ViT) for simulated strong
gravitational lensing for the first time. We show that ViT can reach results
competitive with CNNs and performs particularly well on certain lensing
parameters, including the most important mass-related parameters such as the
lens center $\theta_{1}$ and $\theta_{2}$, the ellipticities $e_1$
and $e_2$, and the radial power-law slope $\gamma'$. With this promising
preliminary result, we believe the ViT (or attention-based) network
architecture can be an important tool for strong lensing science for the next
generation of surveys. Our code and data are open-sourced at
\url{https://github.com/kuanweih/strong_lensing_vit_resnet}.
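As an illustration of the approach, here is a minimal sketch (not the authors' code; their implementation is in the linked repository) of how a ViT backbone can be turned into a regressor for the five parameters named above, assuming PyTorch, torchvision's stock vit_b_16, and three-channel 224x224 inputs:

```python
# Minimal sketch: a ViT backbone with a 5-dimensional regression head for
# the lensing parameters (theta_1, theta_2, e_1, e_2, gamma').
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

NUM_PARAMS = 5  # theta_1, theta_2, e_1, e_2, gamma'

model = vit_b_16(weights=None)            # train from scratch
model.heads = nn.Linear(768, NUM_PARAMS)  # replace the classifier with a regressor

criterion = nn.MSELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def training_step(images, labels):
    """images: (B, 3, 224, 224); labels: (B, 5) ground-truth lens parameters."""
    preds = model(images)
    loss = criterion(preds, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test on random tensors standing in for simulated lensed-quasar images.
loss = training_step(torch.randn(2, 3, 224, 224), torch.randn(2, NUM_PARAMS))
```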
Related papers
- Accelerating lensed quasar discovery and modeling with physics-informed variational autoencoders [34.82692226532414]
Strongly lensed quasars provide valuable insights into the rate of cosmic expansion.
Detecting them in astronomical images is difficult, however, due to the prevalence of non-lensing objects.
We develop a generative deep learning model called VariLens, built upon a physics-informed variational autoencoder.
arXiv Detail & Related papers (2024-12-17T09:23:46Z)
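For orientation, a generic variational-autoencoder skeleton of the kind VariLens builds on is sketched below (PyTorch assumed); the physics-informed components that distinguish VariLens are not reproduced here.

```python
# Generic VAE skeleton: encoder to a Gaussian latent, reparameterized sample,
# decoder, and the standard ELBO loss. Sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=64 * 64, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):                      # x: (B, in_dim) flattened cutouts
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```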
- CSST Strong Lensing Preparation: a Framework for Detecting Strong Lenses in the Multi-color Imaging Survey by the China Survey Space Telescope (CSST) [25.468504540327498]
Strong gravitational lensing is a powerful tool for investigating dark matter and dark energy properties.
We have developed a framework based on a hierarchical visual Transformer with a sliding window technique to search for strong lensing systems within entire images.
Our framework achieves precision and recall rates of 0.98 and 0.90, respectively.
arXiv Detail & Related papers (2024-04-02T09:44:30Z)
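The sliding-window part of such a framework admits a short sketch: score fixed-size crops of a full survey image with any trained lens classifier. Here `model` is a placeholder returning a single logit per crop, not the paper's hierarchical Transformer.

```python
# Sliding-window search: scan a large image with overlapping crops and keep
# windows whose lens probability exceeds a threshold.
import torch

def sliding_window_search(image, model, window=224, stride=112, threshold=0.5):
    """image: (C, H, W) tensor; model maps a (1, C, window, window) crop to a
    single lens logit. Returns a list of (row, col, score) candidates."""
    detections = []
    _, H, W = image.shape
    for top in range(0, H - window + 1, stride):
        for left in range(0, W - window + 1, stride):
            crop = image[:, top:top + window, left:left + window].unsqueeze(0)
            score = torch.sigmoid(model(crop)).item()
            if score > threshold:
                detections.append((top, left, score))
    return detections
```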
- Streamlined Lensed Quasar Identification in Multiband Images via Ensemble Networks [34.82692226532414]
Quasars experiencing strong lensing offer unique viewpoints on subjects related to cosmic expansion rate, dark matter, and quasar host galaxies.
We have developed a novel approach by ensembling cutting-edge convolutional neural networks (CNNs) trained on realistic galaxy-quasar lens simulations.
We retrieve approximately 60 million sources as the parent sample and reduce this to 892,609 after employing a photometric preselection to discover quasars with Einstein radii of $\theta_{\mathrm{E}} < 5$ arcsec.
arXiv Detail & Related papers (2023-07-03T15:09:10Z)
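The ensembling step itself is compact: average the lens probabilities of several independently trained classifiers. The sketch below assumes equal weighting; the paper's member networks and any learned weighting are not reproduced here.

```python
# Ensemble prediction: average sigmoid probabilities over member classifiers.
import torch

@torch.no_grad()
def ensemble_predict(models, images):
    """models: list of trained classifiers returning logits of shape (B, 1).
    Returns averaged lens probabilities of shape (B,)."""
    probs = [torch.sigmoid(m(images)).squeeze(1) for m in models]
    return torch.stack(probs).mean(dim=0)
```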
- RangeViT: Towards Vision Transformers for 3D Semantic Segmentation in Autonomous Driving [80.14669385741202]
Vision transformers (ViTs) have achieved state-of-the-art results in many image-based benchmarks.
ViTs are notoriously hard to train and require a lot of training data to learn powerful representations.
We show that our method, called RangeViT, outperforms existing projection-based methods on nuScenes and Semantic KITTI.
arXiv Detail & Related papers (2023-01-24T18:50:48Z)
- When Spectral Modeling Meets Convolutional Networks: A Method for Discovering Reionization-era Lensed Quasars in Multi-band Imaging Data [0.0]
We introduce a new spatial geometry veto criterion, implemented via image-based deep learning.
We make the first application of this approach in a systematic search for reionization-era lensed quasars.
The training datasets are constructed by painting deflected point-source lights over actual galaxy images to generate realistic galaxy-quasar lens models.
arXiv Detail & Related papers (2022-11-26T11:27:13Z)
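A toy version of that data-generation recipe is sketched below: add PSF-convolved point sources, standing in for the deflected quasar images, onto a real galaxy cutout. In the paper the positions and fluxes come from ray-tracing a lens model; here they are placeholder inputs.

```python
# Paint PSF-blurred point sources onto a galaxy image to mock a lensed quasar.
import numpy as np

def gaussian_psf(size=11, fwhm=3.0):
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

def paint_point_sources(galaxy, positions, fluxes, psf):
    """galaxy: (H, W) image; positions: list of (row, col) pixel coordinates
    (assumed to lie away from the borders); fluxes: list of floats."""
    img = galaxy.copy()
    k = psf.shape[0] // 2
    for (r, c), f in zip(positions, fluxes):
        r0, c0 = int(r) - k, int(c) - k
        img[r0:r0 + psf.shape[0], c0:c0 + psf.shape[1]] += f * psf
    return img

mock = paint_point_sources(np.zeros((64, 64)), [(20, 20), (40, 44)],
                           [100.0, 80.0], gaussian_psf())
```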
- Cosmology from Galaxy Redshift Surveys with PointNet [65.89809800010927]
In cosmology, galaxy redshift surveys resemble a permutation-invariant collection of positions in space.
We employ a PointNet-like neural network to regress the values of the cosmological parameters directly from point cloud data.
Our implementation of PointNets can analyse inputs of $\mathcal{O}(10^{4})$ - $\mathcal{O}(10^{5})$ galaxies at a time, which improves upon earlier work for this application by roughly two orders of magnitude.
arXiv Detail & Related papers (2022-11-22T15:35:05Z)
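A minimal PointNet-style regressor illustrates why the architecture is permutation invariant: a per-point MLP shared across all galaxies, followed by a symmetric max-pool. Layer sizes below are illustrative, not the paper's.

```python
# PointNet-style regressor: shared per-point MLP + symmetric pooling + head.
import torch
import torch.nn as nn

class PointNetRegressor(nn.Module):
    def __init__(self, n_params=2):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, n_params))

    def forward(self, points):               # points: (B, N, 3) galaxy positions
        features = self.point_mlp(points)    # (B, N, 256), shared across points
        pooled = features.max(dim=1).values  # symmetric, so permutation invariant
        return self.head(pooled)             # (B, n_params) cosmological params

preds = PointNetRegressor()(torch.randn(4, 1024, 3))
```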
- Neural Inference of Gaussian Processes for Time Series Data of Quasars [72.79083473275742]
We introduce a new model that describes quasar spectra more completely.
We also introduce a new method of inference of Gaussian process parameters, which we call Neural Inference.
The combination of both the CDRW model and Neural Inference significantly outperforms the baseline DRW and MLE.
arXiv Detail & Related papers (2022-11-17T13:01:26Z)
- InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions [95.94629864981091]
This work presents a new large-scale CNN-based foundation model, termed InternImage, which, like ViTs, can gain from increasing parameters and training data.
The proposed InternImage reduces the strict inductive bias of traditional CNNs and, like ViTs, makes it possible to learn stronger and more robust patterns from massive data with large-scale parameters.
arXiv Detail & Related papers (2022-11-10T18:59:04Z)
- Parameterization of Cross-Token Relations with Relative Positional Encoding for Vision MLP [52.25478388220691]
Vision multi-layer perceptrons (MLPs) have shown promising performance in computer vision tasks.
They use token-mixing layers to capture cross-token interactions, as opposed to the multi-head self-attention mechanism used by Transformers.
We propose a new positional spatial gating unit (PoSGU) to efficiently encode cross-token relations for token mixing.
arXiv Detail & Related papers (2022-07-15T04:18:06Z)
- Large-Scale Gravitational Lens Modeling with Bayesian Neural Networks for Accurate and Precise Inference of the Hubble Constant [0.0]
We investigate the use of approximate Bayesian neural networks (BNNs) in modeling hundreds of time-delay gravitational lenses.
A simple combination of 200 test-set lenses results in a precision of 0.5 $\mathrm{km\,s^{-1}\,Mpc^{-1}}$ ($0.7\%$).
Our pipeline is a promising tool for exploring ensemble-level systematics in lens modeling.
arXiv Detail & Related papers (2020-11-30T19:00:20Z)
- DeepShadows: Separating Low Surface Brightness Galaxies from Artifacts using Deep Learning [70.80563014913676]
We investigate the use of convolutional neural networks (CNNs) for the problem of separating low-surface-brightness galaxies from artifacts in survey images.
We show that CNNs offer a very promising path in the quest to study the low-surface-brightness universe.
arXiv Detail & Related papers (2020-11-24T22:51:08Z)
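A small CNN binary classifier of this general kind is sketched below (layer sizes illustrative, not the DeepShadows architecture); it maps a survey cutout to a single logit separating galaxies from artifacts.

```python
# Small CNN binary classifier for 64x64 three-channel cutouts.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 1),  # single logit: galaxy vs. artifact
)

logits = classifier(torch.randn(8, 3, 64, 64))
loss = nn.BCEWithLogitsLoss()(logits, torch.ones(8, 1))
```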
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.