A comparative study of source-finding techniques in HI emission line
cubes using SoFiA, MTObjects, and supervised deep learning
- URL: http://arxiv.org/abs/2211.12809v1
- Date: Wed, 23 Nov 2022 09:45:07 GMT
- Title: A comparative study of source-finding techniques in HI emission line
cubes using SoFiA, MTObjects, and supervised deep learning
- Authors: J.A. Barkai, M.A.W. Verheijen, E.T. Martínez, M.H.F. Wilkinson
- Abstract summary: The 21 cm spectral line emission of atomic neutral hydrogen (HI) is one of the primary wavelengths observed in radio astronomy.
This study aimed to find the optimal pipeline for finding and masking the most sources with the best mask quality and the fewest artefacts in 3D neutral hydrogen cubes.
Two traditional source-finding methods were tested, SoFiA and MTObjects, as well as a new supervised deep learning approach in which a 3D convolutional neural network architecture, known as V-Net, was used.
The pipelines were tested on HI data cubes from the Westerbork Synthesis Radio Telescope with additional inserted mock galaxies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The 21 cm spectral line emission of atomic neutral hydrogen (HI) is one of
the primary wavelengths observed in radio astronomy. However, the signal is
intrinsically faint and the HI content of galaxies depends on the cosmic
environment, requiring large survey volumes and survey depth to investigate the
HI Universe. As the amount of data coming from these surveys continues to
increase with technological improvements, so does the need for automatic
techniques for identifying and characterising HI sources while considering the
tradeoff between completeness and purity. This study aimed to find the optimal
pipeline for finding and masking the most sources with the best mask quality
and the fewest artefacts in 3D neutral hydrogen cubes. Various existing methods
were explored in an attempt to create a pipeline to optimally identify and mask
the sources in 3D neutral hydrogen 21 cm spectral line data cubes. Two
traditional source-finding methods were tested, SoFiA and MTObjects, as well as
a new supervised deep learning approach, in which a 3D convolutional neural
network architecture, known as V-Net, was used. These three source-finding
methods were further improved by adding a classical machine learning classifier
as a post-processing step to remove false positive detections. The pipelines
were tested on HI data cubes from the Westerbork Synthesis Radio Telescope with
additional inserted mock galaxies. SoFiA combined with a random forest
classifier provided the best results, with the V-Net-random forest combination
a close second. We suspect this is because there are many more
mock sources in the training set than real sources. There is, therefore, room
to improve the quality of the V-Net network with better-labelled data such that
it can potentially outperform SoFiA.
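The best-performing pipeline pairs a conventional source finder with a random forest classifier that rejects false-positive detections as a post-processing step. A minimal sketch of that filtering stage using scikit-learn is shown below; the per-candidate features (peak SNR, voxel count, velocity width) and the synthetic numbers are hypothetical illustrations, not the paper's actual feature set or parameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical per-candidate features: [peak SNR, n_voxels, velocity width (km/s)].
# "Real" sources are drawn brighter and more extended than noise artefacts.
n = 200
real = np.column_stack([rng.normal(8.0, 1.5, n),   # peak SNR
                        rng.normal(120, 20, n),    # voxels in mask
                        rng.normal(90, 15, n)])    # velocity width
fake = np.column_stack([rng.normal(3.0, 1.0, n),
                        rng.normal(25, 10, n),
                        rng.normal(30, 10, n)])
X = np.vstack([real, fake])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = real source, 0 = artefact

# Train the post-processing classifier on labelled candidates.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Boolean mask of candidates retained as likely real sources.
keep = clf.predict(X) == 1
```

In a real pipeline the labelled training candidates would come from the inserted mock galaxies, and `keep` would be applied to the detections emitted by SoFiA, MTObjects, or V-Net.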
Related papers
- Frequency-Aware Deepfake Detection: Improving Generalizability through
Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- Identification of 4FGL uncertain sources at Higher Resolutions with
Inverse Discrete Wavelet Transform [0.562479170374811]
In the forthcoming era of big astronomical data, identifying target sources in observations from ground-based and space-based telescopes is a burdensome task.
In this work, we focused on the task of finding AGN candidates and identifying BL Lac/FSRQ candidates from the 4FGL DR3 uncertain sources.
arXiv Detail & Related papers (2024-01-05T01:02:34Z)
- UAVStereo: A Multiple Resolution Dataset for Stereo Matching in UAV
Scenarios [0.6524460254566905]
This paper constructs a multi-resolution UAV scenario dataset, called UAVStereo, with over 34k stereo image pairs covering 3 typical scenes.
In this paper, we evaluate traditional and state-of-the-art deep learning methods, highlighting their limitations in addressing challenges in UAV scenarios.
arXiv Detail & Related papers (2023-02-20T16:45:27Z)
- 3D Detection and Characterisation of ALMA Sources through Deep Learning [0.0]
We present a Deep-Learning (DL) pipeline developed for the detection and characterization of astronomical sources within simulated Atacama Large Millimeter/submillimeter Array (ALMA) data cubes.
The pipeline is composed of six DL models: a Convolutional Autoencoder for source detection within the spatial domain of the integrated data cubes, a Recurrent Neural Network (RNN) for denoising and peak detection within the frequency domain, and four Residual Neural Networks (ResNets) for source characterization.
arXiv Detail & Related papers (2022-11-21T13:50:35Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- Joint Learning of Salient Object Detection, Depth Estimation and Contour
Extraction [91.43066633305662]
We propose a novel multi-task and multi-modal filtered transformer (MMFT) network for RGB-D salient object detection (SOD).
Specifically, we unify three complementary tasks: depth estimation, salient object detection and contour estimation. The multi-task mechanism promotes the model to learn the task-aware features from the auxiliary tasks.
Experiments show that it not only significantly surpasses the depth-based RGB-D SOD methods on multiple datasets, but also precisely predicts a high-quality depth map and salient contour at the same time.
arXiv Detail & Related papers (2022-03-09T17:20:18Z)
- MonoDistill: Learning Spatial Features for Monocular 3D Object Detection [80.74622486604886]
We propose a simple and effective scheme to introduce the spatial information from LiDAR signals to the monocular 3D detectors.
We use the resulting data to train a 3D detector with the same architecture as the baseline model.
Experimental results show that the proposed method can significantly boost the performance of the baseline model.
arXiv Detail & Related papers (2022-01-26T09:21:41Z)
- MeerCRAB: MeerLICHT Classification of Real and Bogus Transients using
Deep Learning [0.0]
We present a deep learning pipeline based on the convolutional neural network architecture called MeerCRAB.
It is designed to filter out so-called 'bogus' detections from true astrophysical sources in the transient detection pipeline of the MeerLICHT telescope.
arXiv Detail & Related papers (2021-04-28T18:12:51Z)
- 3D-QCNet -- A Pipeline for Automated Artifact Detection in Diffusion MRI
images [0.5735035463793007]
Artifacts are a common occurrence in Diffusion MRI (dMRI) scans.
Several QC methods for artifact detection exist, but they suffer from problems like requiring manual intervention and the inability to generalize across different artifacts and datasets.
We propose an automated deep learning (DL) pipeline that utilizes a 3D-Densenet architecture to train a model on diffusion volumes for automatic artifact detection.
arXiv Detail & Related papers (2021-03-09T08:21:53Z)
- Transfer Learning for Motor Imagery Based Brain-Computer Interfaces: A
Complete Pipeline [54.73337667795997]
Transfer learning (TL) has been widely used in motor imagery (MI) based brain-computer interfaces (BCIs) to reduce the calibration effort for a new subject.
This paper proposes that TL could be considered in all three components (spatial filtering, feature engineering, and classification) of MI-based BCIs.
arXiv Detail & Related papers (2020-07-03T23:44:21Z)
- RoutedFusion: Learning Real-time Depth Map Fusion [73.0378509030908]
We present a novel real-time capable machine learning-based method for depth map fusion.
We propose a neural network that predicts non-linear updates to better account for typical fusion errors.
Our network is composed of a 2D depth routing network and a 3D depth fusion network which efficiently handle sensor-specific noise and outliers.
arXiv Detail & Related papers (2020-01-13T16:46:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.