MeerCRAB: MeerLICHT Classification of Real and Bogus Transients using
Deep Learning
- URL: http://arxiv.org/abs/2104.13950v1
- Date: Wed, 28 Apr 2021 18:12:51 GMT
- Title: MeerCRAB: MeerLICHT Classification of Real and Bogus Transients using
Deep Learning
- Authors: Zafiirah Hosenie, Steven Bloemen, Paul Groot, Robert Lyon, Bart
Scheers, Benjamin Stappers, Fiorenzo Stoppa, Paul Vreeswijk, Simon De Wet,
Marc Klein Wolt, Elmar Körding, Vanessa McBride, Rudolf Le Poole, Kerry
Paterson, Daniëlle L. A. Pieterse and Patrick Woudt
- Abstract summary: We present a deep learning pipeline based on the convolutional neural network architecture called $\texttt{MeerCRAB}$.
It is designed to filter out the so-called 'bogus' detections from true astrophysical sources in the transient detection pipeline of the MeerLICHT telescope.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Astronomers require efficient automated detection and classification
pipelines when conducting large-scale surveys of the (optical) sky for variable
and transient sources. Such pipelines are fundamentally important, as they
permit rapid follow-up and analysis of those detections most likely to be of
scientific value. We therefore present a deep learning pipeline based on the
convolutional neural network architecture called $\texttt{MeerCRAB}$. It is
designed to filter out the so-called 'bogus' detections from true astrophysical
sources in the transient detection pipeline of the MeerLICHT telescope. Optical
candidates are described using a variety of 2D images and numerical features
extracted from those images. The relationship between the input images and the
target classes is unclear, since the ground truth is poorly defined and often
the subject of debate. This makes it difficult to determine which source of
information should be used to train a classification algorithm. We therefore
used two methods for labelling our data (i) thresholding and (ii) latent class
model approaches. We deployed variants of $\texttt{MeerCRAB}$ that employed
different network architectures trained using different combinations of input
images and training set choices, based on classification labels provided by
volunteers. The deepest network worked best, with an accuracy of 99.5% and a
Matthews correlation coefficient (MCC) of 0.989. The best model was
integrated into the MeerLICHT transient vetting pipeline, enabling the accurate
and efficient classification of detected transients that allows researchers to
select the most promising candidates for their research goals.
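The abstract above outlines the technical recipe: a convolutional network that ingests 2D image cutouts of each optical candidate and returns a real/bogus decision, evaluated with accuracy and the Matthews correlation coefficient (MCC). The sketch below illustrates that recipe in Keras; it is not the authors' MeerCRAB implementation, and the 30x30 four-channel input, layer sizes and training setup are assumptions made for the example.

```python
# Minimal sketch of a real/bogus cutout classifier and the MCC metric quoted
# in the abstract. NOT the authors' MeerCRAB code: the 30x30 four-channel
# input (e.g. new/reference/difference/significance cutouts), layer sizes and
# training setup are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models


def build_real_bogus_cnn(input_shape=(30, 30, 4)):
    """Small CNN mapping stacked candidate cutouts to P(real)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # probability the source is real
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model


def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient from binary labels and predictions."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return float(tp * tn - fp * fn) / denom if denom else 0.0
```

A labelled set of cutouts (for instance, labels obtained by thresholding volunteer votes, option (i) in the abstract) could then be passed to model.fit, and matthews_corrcoef applied to held-out predictions to obtain metrics of the kind reported.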
Related papers
- Deep Homography Estimation for Visual Place Recognition [49.235432979736395]
We propose a transformer-based deep homography estimation (DHE) network.
It takes the dense feature map extracted by a backbone network as input and fits homography for fast and learnable geometric verification.
Experiments on benchmark datasets show that our method can outperform several state-of-the-art methods.
arXiv Detail & Related papers (2024-02-25T13:22:17Z)
- Creating Ensembles of Classifiers through UMDA for Aerial Scene Classification [0.8049701904919515]
In the remote sensing area, CNN architectures are also an established alternative for scene classification tasks.
This work proposes to employ six DML approaches for aerial scene classification tasks, analysing their behaviour with four different pre-trained CNNs.
The experiments performed show that DML approaches can achieve the best classification results when compared with traditional pre-trained CNNs.
arXiv Detail & Related papers (2023-03-20T18:49:39Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while ViTs and CNNs perform on par, with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- What's the Difference? The potential for Convolutional Neural Networks for transient detection without template subtraction [0.0]
We present a study of the potential for Convolutional Neural Networks (CNNs) to enable separation of astrophysical transients from image artifacts.
Using data from the Dark Energy Survey, we explore the use of CNNs to automate the "real-bogus" classification.
arXiv Detail & Related papers (2022-03-14T18:00:03Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- New SAR target recognition based on YOLO and very deep multi-canonical correlation analysis [0.1503974529275767]
This paper proposes a robust feature extraction method for SAR image target classification by adaptively fusing effective features from different CNN layers.
Experiments on the MSTAR dataset demonstrate that the proposed method outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-10-28T18:10:26Z)
- MD-CSDNetwork: Multi-Domain Cross Stitched Network for Deepfake Detection [80.83725644958633]
Current deepfake generation methods leave discriminative artifacts in the frequency spectrum of fake images and videos.
We present a novel approach, termed as MD-CSDNetwork, for combining the features in the spatial and frequency domains to mine a shared discriminative representation.
arXiv Detail & Related papers (2021-09-15T14:11:53Z)
- Lightweight Convolutional Neural Network with Gaussian-based Grasping Representation for Robotic Grasping Detection [4.683939045230724]
Current object detectors struggle to strike a balance between high accuracy and fast inference speed.
We present an efficient and robust fully convolutional neural network model to perform robotic grasping pose estimation.
The network is an order of magnitude smaller than other excellent algorithms.
arXiv Detail & Related papers (2021-01-25T16:36:53Z)
- Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scaled pedestrians.
arXiv Detail & Related papers (2020-08-19T13:13:01Z)
- Hyperspectral Images Classification Based on Multi-scale Residual Network [5.166817530813299]
Hyperspectral remote sensing images contain a great deal of redundant information, and their data structure is non-linear.
Deep convolutional neural networks achieve high accuracy, but when only a small amount of data is available for training, their classification accuracy is greatly reduced.
In order to solve the problem of low classification accuracy of existing algorithms on small samples of hyperspectral images, a multi-scale residual network is proposed.
arXiv Detail & Related papers (2020-04-26T13:46:52Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.