A machine learning based approach to gravitational lens identification
with the International LOFAR Telescope
- URL: http://arxiv.org/abs/2207.10698v1
- Date: Thu, 21 Jul 2022 18:18:55 GMT
- Title: A machine learning based approach to gravitational lens identification
with the International LOFAR Telescope
- Authors: S. Rezaei, J. P. McKean, M. Biehl, W. de Roo and A. Lafontaine
- Abstract summary: We present a novel machine learning based approach for detecting galaxy-scale gravitational lenses from interferometric data.
We develop and test several Convolutional Neural Networks to determine the probability and uncertainty of a given sample being classified as a lensed or non-lensed event.
We expect to discover the vast majority of galaxy-scale gravitational lens systems contained within the LOFAR Two Metre Sky Survey.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a novel machine learning based approach for detecting galaxy-scale
gravitational lenses from interferometric data, specifically those taken with
the International LOFAR Telescope (ILT), which is observing the northern radio
sky at a frequency of 150 MHz, an angular resolution of 350 mas, and a
sensitivity of 90 uJy beam^-1 (1 sigma). We develop and test several
Convolutional Neural Networks to determine the probability and uncertainty of a
given sample being classified as a lensed or non-lensed event. By training and
testing on a simulated interferometric imaging data set that includes realistic
lensed and non-lensed radio sources, we find that it is possible to recover
95.3 per cent of the lensed samples (true positive rate), with a contamination
of just 0.008 per cent from non-lensed samples (false positive rate). Taking
the expected lensing probability into account results in a predicted sample
purity for lensed events of 92.2 per cent. We find that the network structure
is most robust when the maximum image separation between the lensed images is
greater than 3 times the synthesized beam size, and the lensed images have a
total flux density that is equivalent to at least a 20 sigma (point-source)
detection. For the ILT, this corresponds to a lens sample with Einstein radii
greater than 0.5 arcsec and a radio source population with 150 MHz flux
densities more than 2 mJy. By applying these criteria and our lens detection
algorithm we expect to discover the vast majority of galaxy-scale gravitational
lens systems contained within the LOFAR Two Metre Sky Survey.
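The abstract's headline numbers can be cross-checked with some back-of-envelope arithmetic. The sketch below is illustrative only: the prior lensing probability `p_lens` is an assumption (the abstract does not state it; a 1-in-1000 value is chosen purely to show how a ~92 per cent purity arises from the quoted rates), and the relation that the maximum image separation is roughly twice the Einstein radius is a standard approximation, not a statement from the paper.

```python
# Back-of-envelope checks of the numbers quoted in the abstract.
# ASSUMPTIONS (not stated in the text): p_lens, and the approximation
# that maximum image separation ~ 2x the Einstein radius.

# 1) Predicted sample purity from the quoted TPR/FPR via Bayes' theorem.
tpr = 0.953      # true positive rate (95.3 per cent of lensed samples)
fpr = 0.00008    # false positive rate (0.008 per cent contamination)
p_lens = 1e-3    # assumed 1-in-1000 prior that a given source is lensed
purity = (tpr * p_lens) / (tpr * p_lens + fpr * (1.0 - p_lens))
print(f"predicted purity: {purity:.1%}")  # close to the quoted 92.2 per cent

# 2) Selection thresholds implied by the ILT beam and sensitivity.
beam_mas = 350.0    # synthesized beam size
sigma_ujy = 90.0    # 1-sigma point-source sensitivity
min_sep_arcsec = 3 * beam_mas / 1000.0   # >3 beams -> 1.05 arcsec separation
min_theta_e = min_sep_arcsec / 2.0       # separation ~ 2 Einstein radii
min_flux_mjy = 20 * sigma_ujy / 1000.0   # 20-sigma detection -> 1.8 mJy
print(f"Einstein radius > {min_theta_e:.2f} arcsec, flux > {min_flux_mjy:.1f} mJy")
```

Both results land close to the thresholds quoted in the abstract (Einstein radii greater than ~0.5 arcsec and 150 MHz flux densities of roughly 2 mJy), suggesting the selection criteria follow directly from the instrument's resolution and sensitivity.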
Related papers
- Whole-body Detection, Recognition and Identification at Altitude and
Range [57.445372305202405]
We propose an end-to-end system evaluated on diverse datasets.
Our approach involves pre-training the detector on common image datasets and fine-tuning it on BRIAR's complex videos and images.
We conduct thorough evaluations under various conditions, such as different ranges and angles in indoor, outdoor, and aerial scenarios.
arXiv Detail & Related papers (2023-11-09T20:20:23Z) - Streamlined Lensed Quasar Identification in Multiband Images via
Ensemble Networks [34.82692226532414]
Quasars experiencing strong lensing offer unique viewpoints on subjects related to cosmic expansion rate, dark matter, and quasar host galaxies.
We have developed a novel approach by ensembling cutting-edge convolutional networks (CNNs) trained on realistic galaxy-quasar lens simulations.
We retrieve approximately 60 million sources as parent samples and reduce this to 892,609 after employing a photometry preselection to discover quasars with Einstein radii of $\theta_\mathrm{E} \lesssim 5$ arcsec.
arXiv Detail & Related papers (2023-07-03T15:09:10Z) - Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods suffer from the existence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z) - Deep-learning based measurement of planetary radial velocities in the
presence of stellar variability [70.4007464488724]
We use neural networks to reduce stellar RV jitter in three years of HARPS-N sun-as-a-star spectra.
We find that the multi-line CNN is able to recover planets with a 0.2 m/s semi-amplitude and a 50-day period, with 8.8% error in the amplitude and 0.7% in the period.
arXiv Detail & Related papers (2023-04-10T18:33:36Z) - Pixelated Reconstruction of Foreground Density and Background Surface
Brightness in Gravitational Lensing Systems using Recurrent Inference
Machines [116.33694183176617]
We use a neural network based on the Recurrent Inference Machine to reconstruct an undistorted image of the background source and the lens mass density distribution as pixelated maps.
When compared to more traditional parametric models, the proposed method is significantly more expressive and can reconstruct complex mass distributions.
arXiv Detail & Related papers (2023-01-10T19:00:12Z) - Detection of Strongly Lensed Arcs in Galaxy Clusters with Transformers [11.051750815556748]
We propose a framework to detect cluster-scale strongly lensed arcs, which contains a transformer-based detection algorithm and an image simulation algorithm.
Results show that our approach achieves a 99.63% accuracy rate, 90.32% recall rate, 85.37% precision rate and 0.23% false positive rate in the detection of strongly lensed arcs from simulated images.
arXiv Detail & Related papers (2022-11-11T02:33:34Z) - On-chip quantum information processing with distinguishable photons [55.41644538483948]
Multi-photon interference is at the heart of photonic quantum technologies.
Here, we experimentally demonstrate that detection can be implemented with a temporal resolution sufficient to interfere photons detuned on the scales necessary for cavity-based integrated photon sources.
We show how time-resolved detection of non-ideal photons can be used to improve the fidelity of an entangling operation and to mitigate the reduction of computational complexity in boson sampling experiments.
arXiv Detail & Related papers (2022-10-14T18:16:49Z) - Strong Lensing Source Reconstruction Using Continuous Neural Fields [3.604982738232833]
We introduce a method that uses continuous neural fields to non-parametrically reconstruct the complex morphology of a source galaxy.
We demonstrate the efficacy of our method through experiments on simulated data targeting high-resolution lensing images.
arXiv Detail & Related papers (2022-06-29T18:00:01Z) - Large-Scale Gravitational Lens Modeling with Bayesian Neural Networks
for Accurate and Precise Inference of the Hubble Constant [0.0]
We investigate the use of approximate Bayesian neural networks (BNNs) in modeling hundreds of time-delay gravitational lenses.
A simple combination of 200 test-set lenses results in a precision of 0.5 $\mathrm{km\,s^{-1}\,Mpc^{-1}}$ ($0.7\%$).
Our pipeline is a promising tool for exploring ensemble-level systematics in lens modeling.
arXiv Detail & Related papers (2020-11-30T19:00:20Z) - Extracting the Subhalo Mass Function from Strong Lens Images with Image
Segmentation [0.0]
We develop a neural network to both locate subhalos in an image as well as determine their mass.
The network is trained on images with a single subhalo located near the Einstein ring.
Remarkably, it is then able to detect entire populations of substructure, even for locations further away from the Einstein ring.
arXiv Detail & Related papers (2020-09-14T18:00:01Z) - Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset
for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.