Systematic biases when using deep neural networks for annotating large
catalogs of astronomical images
- URL: http://arxiv.org/abs/2201.03131v1
- Date: Mon, 10 Jan 2022 01:51:14 GMT
- Title: Systematic biases when using deep neural networks for annotating large
catalogs of astronomical images
- Authors: Sanchari Dhar, Lior Shamir
- Abstract summary: We show that for basic classification of elliptical and spiral galaxies, the sky location of the galaxies used for training affects the behavior of the algorithm.
That bias exhibits itself in the form of cosmological-scale anisotropy in the distribution of basic galaxy morphology.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks (DCNNs) have become the most common
solution for automatic image annotation due to their non-parametric nature,
good performance, and their accessibility through libraries such as TensorFlow.
Among other fields, DCNNs are also a common approach to the annotation of large
astronomical image databases acquired by digital sky surveys. One of the main
downsides of DCNNs is the complex non-intuitive rules that make DCNNs act as a
"black box", providing annotations in a manner that is unclear to the user.
Therefore, the user is often not able to know what information is used by the
DCNNs for the classification. Here we demonstrate that the training of a DCNN
is sensitive to the context of the training data such as the location of the
objects in the sky. We show that for basic classification of elliptical and
spiral galaxies, the sky location of the galaxies used for training affects the
behavior of the algorithm, and leads to a small but consistent and
statistically significant bias. That bias exhibits itself in the form of
cosmological-scale anisotropy in the distribution of basic galaxy morphology.
Therefore, while DCNNs are powerful tools for annotating images of extended
sources, the construction of training sets for galaxy morphology should take
into consideration more aspects than the visual appearance of the object. In
any case, catalogs created with deep neural networks that exhibit signs of
cosmological anisotropy should be interpreted with the possibility of a
consistent bias in mind.
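To make the kind of check implied above concrete, the following is a minimal, hypothetical sketch rather than the authors' pipeline: it trains a small TensorFlow/Keras CNN to separate elliptical from spiral galaxies and then compares the fraction of galaxies annotated as spiral in two opposite sky regions with a two-proportion z-test. The catalog format, image size, right-ascension split, and network architecture are all assumptions made for illustration.

```python
# Hedged sketch: probe a galaxy-morphology CNN for sky-location bias.
# Assumptions (not from the paper): 64x64 single-channel cutouts,
# labels 0 = elliptical, 1 = spiral, and an RA value per galaxy.
import numpy as np
import tensorflow as tf

def build_classifier(input_shape=(64, 64, 1)):
    """A small binary CNN; the paper's actual architecture may differ."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def spiral_fraction(model, images):
    """Fraction of galaxies the model annotates as spiral (p > 0.5)."""
    probs = model.predict(images, verbose=0).ravel()
    return float(np.mean(probs > 0.5))

def two_proportion_z(k1, n1, k2, n2):
    """z-statistic for the difference between two spiral fractions
    (assumes the pooled fraction is strictly between 0 and 1)."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Toy usage with random arrays standing in for real sky-survey cutouts.
rng = np.random.default_rng(0)
x_train = rng.random((200, 64, 64, 1)).astype("float32")
y_train = rng.integers(0, 2, 200)
ra = rng.uniform(0.0, 360.0, 500)        # hypothetical RA values (degrees)
x_test = rng.random((500, 64, 64, 1)).astype("float32")

model = build_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=1, verbose=0)

east, west = ra < 180.0, ra >= 180.0     # two opposite sky hemispheres
f_e = spiral_fraction(model, x_test[east])
f_w = spiral_fraction(model, x_test[west])
z = two_proportion_z(f_e * east.sum(), east.sum(), f_w * west.sum(), west.sum())
print(f"spiral fraction east={f_e:.3f} west={f_w:.3f} z={z:.2f}")
```

With real survey cutouts, repeating this comparison for training sets drawn from different sky regions is one simple way to expose the location-dependent bias described in the abstract.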
Related papers
- Self-supervised visual learning for analyzing firearms trafficking activities on the Web [6.728794938150435]
Automated visual firearms classification from RGB images is an important real-world task with applications in public space security, intelligence gathering and law enforcement investigations.
It can serve as an important component of systems that attempt to identify criminal firearms trafficking networks, by analyzing Big Data from open-source intelligence.
Neither Visual Transformer (ViT) neural architectures nor Self-Supervised Learning (SSL) approaches have so far been evaluated on this critical task.
arXiv Detail & Related papers (2023-10-12T01:47:55Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Learning Ability of Interpolating Deep Convolutional Neural Networks [28.437011792990347]
We study the learning ability of an important family of deep neural networks, deep convolutional neural networks (DCNNs).
We show that by adding well-defined layers to a non-interpolating DCNN, we can obtain some interpolating DCNNs that maintain the good learning rates of the non-interpolating DCNN.
Our work provides theoretical verification of how overfitted DCNNs generalize well.
arXiv Detail & Related papers (2022-10-25T17:22:31Z)
- Invariant Content Synergistic Learning for Domain Generalization of Medical Image Segmentation [13.708239594165061]
Deep convolutional neural networks (DCNNs) often fail to maintain their robustness when confronted with test data from a novel distribution.
In this paper, we propose a method, named Invariant Content Synergistic Learning (ICSL), to improve the generalization ability of DCNNs.
arXiv Detail & Related papers (2022-05-05T08:13:17Z)
- Explainability Tools Enabling Deep Learning in Future In-Situ Real-Time Planetary Explorations [58.720142291102135]
Deep learning (DL) has proven to be an effective machine learning and computer vision technique.
Most Deep Neural Network (DNN) architectures are so complex that they are considered a 'black box'.
In this paper, we used integrated gradients to describe the attributions of each neuron to the output classes.
It provides a set of explainability tools (ET) that opens the black box of a DNN so that the individual contribution of neurons to category classification can be ranked and visualized (a minimal sketch of the integrated-gradients computation follows this entry).
arXiv Detail & Related papers (2022-01-15T07:10:00Z)
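The integrated-gradients attribution mentioned in the entry above is a standard technique; the sketch below is a minimal illustration of it, not the paper's own code. It assumes `model` is any differentiable tf.keras image classifier and `image` is a float32 tensor of shape (H, W, C).

```python
# Hedged sketch of integrated gradients (Sundararajan et al. 2017).
import tensorflow as tf

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Approximate IG attributions for one image and one output class."""
    if baseline is None:
        baseline = tf.zeros_like(image)      # black-image baseline
    # Interpolate between the baseline and the input in `steps` increments.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), (-1, 1, 1, 1))
    interpolated = baseline[None] + alphas * (image - baseline)[None]
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        preds = model(interpolated)          # shape (steps + 1, num_classes)
        target = preds[:, target_class]
    grads = tape.gradient(target, interpolated)
    # Trapezoid-rule average of the gradients along the path.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (image - baseline) * avg_grads    # per-pixel attribution map
```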
- SAR Image Classification Based on Spiking Neural Network through Spike-Time Dependent Plasticity and Gradient Descent [7.106664778883502]
The spiking neural network (SNN) is one of the core components of brain-like intelligence.
This article constructs a complete SAR image classification system based on unsupervised and supervised learning with SNNs.
arXiv Detail & Related papers (2021-06-15T09:36:04Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- A Practical Tutorial on Graph Neural Networks [49.919443059032226]
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI).
This tutorial exposes the power and novelty of GNNs to AI practitioners.
arXiv Detail & Related papers (2020-10-11T12:36:17Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Deep Convolutional Neural Networks with Spatial Regularization, Volume and Star-shape Priori for Image Segmentation [6.282154392910916]
The classification functions in existing CNN architectures are simple and lack the capability to handle important spatial information.
We propose a novel Soft Threshold Dynamics (STD) framework which can easily integrate many spatial priors into the DCNNs.
The proposed method is a general mathematical framework and it can be applied to any semantic segmentation DCNNs.
arXiv Detail & Related papers (2020-02-10T18:03:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.