DeepAdversaries: Examining the Robustness of Deep Learning Models for
Galaxy Morphology Classification
- URL: http://arxiv.org/abs/2112.14299v1
- Date: Tue, 28 Dec 2021 21:29:02 GMT
- Title: DeepAdversaries: Examining the Robustness of Deep Learning Models for
Galaxy Morphology Classification
- Authors: Aleksandra Ćiprijanović, Diana Kafkes, Gregory Snyder, F. Javier
Sánchez, Gabriel Nathan Perdue, Kevin Pedro, Brian Nord, Sandeep Madireddy,
Stefan M. Wild
- Abstract summary: In morphological classification of galaxies, we study the effects of perturbations in imaging data.
We show that training with domain adaptation improves model robustness and mitigates the effects of these perturbations.
- Score: 47.38422424155742
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data processing and analysis pipelines in cosmological survey experiments
introduce data perturbations that can significantly degrade the performance of
deep learning-based models. Given the increased adoption of supervised deep
learning methods for processing and analysis of cosmological survey data, the
assessment of data perturbation effects and the development of methods that
increase model robustness are increasingly important. In the context of
morphological classification of galaxies, we study the effects of perturbations
in imaging data. In particular, we examine the consequences of using neural
networks when training on baseline data and testing on perturbed data. We
consider perturbations associated with two primary sources: 1) increased
observational noise as represented by higher levels of Poisson noise and 2)
data processing noise incurred by steps such as image compression or telescope
errors as represented by one-pixel adversarial attacks. We also test the
efficacy of domain adaptation techniques in mitigating the perturbation-driven
errors. We use classification accuracy, latent space visualizations, and latent
space distance to assess model robustness. Without domain adaptation, we find
that pixel-level processing errors easily flip the classification into an
incorrect class and that higher observational noise renders a model trained on
low-noise data unable to classify galaxy morphologies. On the other hand, we
show that training with domain adaptation improves model robustness and
mitigates the effects of these perturbations, improving the classification
accuracy by 23% on data with higher observational noise. Domain adaptation also
increases the latent-space distance between the baseline image and its
incorrectly classified one-pixel-perturbed counterpart by a factor of ~2.3,
making the model more robust to inadvertent perturbations.
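The abstract describes two perturbation types (increased observational noise modeled as Poisson noise, and one-pixel attacks standing in for processing errors) and assesses robustness partly through latent-space distances. Below is a minimal NumPy sketch of those ingredients, not the authors' released code; the exposure-scaling noise model, the function names, and the synthetic image are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def add_poisson_noise(image, exposure_scale=0.1):
    """Simulate higher observational noise by resampling pixel counts from a
    Poisson distribution at a shorter effective exposure (exposure_scale < 1),
    then rescaling back to the baseline flux level."""
    counts = np.clip(image, 0, None) * exposure_scale
    return rng.poisson(counts).astype(float) / exposure_scale


def one_pixel_perturbation(image, row, col, value):
    """Change a single pixel, a stand-in for pixel-level processing errors
    such as compression artifacts or detector defects (the one-pixel attack
    setting in the abstract)."""
    perturbed = image.copy()
    perturbed[row, col] = value
    return perturbed


def latent_distance(z_a, z_b):
    """Euclidean distance between two latent representations, a simple proxy
    for the latent-space distances used to assess robustness."""
    return float(np.linalg.norm(np.asarray(z_a) - np.asarray(z_b)))


# Toy usage on a synthetic 100x100 "galaxy" image (not survey data).
baseline = rng.gamma(shape=2.0, scale=50.0, size=(100, 100))
noisy = add_poisson_noise(baseline, exposure_scale=0.1)
attacked = one_pixel_perturbation(baseline, row=50, col=50, value=baseline.max() * 5)
```

The abstract does not spell out which domain-adaptation loss is used; one common unsupervised choice for aligning baseline and perturbed latent features is Maximum Mean Discrepancy (MMD). The sketch below is a standard RBF-kernel MMD^2 estimator that could be added to a classification loss, offered as an assumption rather than the paper's exact method (it reuses the NumPy import above).

```python
def rbf_mmd2(source_z, target_z, bandwidth=1.0):
    """Squared Maximum Mean Discrepancy between source (baseline) and target
    (perturbed) latent features, with a Gaussian RBF kernel. Minimizing this
    term alongside the classification loss encourages domain-invariant
    representations."""
    def kernel(a, b):
        sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
        return np.exp(-sq / (2.0 * bandwidth**2))

    k_ss = kernel(source_z, source_z).mean()
    k_tt = kernel(target_z, target_z).mean()
    k_st = kernel(source_z, target_z).mean()
    return k_ss + k_tt - 2.0 * k_st
```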
Related papers
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the harmful effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Detection of Pavement Cracks by Deep Learning Models of Transformer and UNet [9.483452333312373]
In recent years, the emergence and development of deep learning techniques have shown great potential to facilitate surface crack detection.
In this study, we investigated nine promising models to evaluate their performance in pavement surface crack detection by model accuracy, computational complexity, and model stability.
We find that transformer-based models generally converge more easily during training and achieve higher accuracy, but usually consume more memory and have lower processing efficiency.
arXiv Detail & Related papers (2023-04-25T06:07:49Z)
- The role of noise in denoising models for anomaly detection in medical images [62.0532151156057]
Pathological brain lesions exhibit diverse appearance in brain images.
Unsupervised anomaly detection approaches have been proposed using only normal data for training.
We show that optimization of the spatial resolution and magnitude of the noise improves the performance of different model training regimes.
arXiv Detail & Related papers (2023-01-19T21:39:38Z)
- Treatment Learning Causal Transformer for Noisy Image Classification [62.639851972495094]
In this work, we incorporate the binary information of "existence of noise" as a treatment in image classification tasks to improve prediction accuracy.
Motivated by causal variational inference, we propose a transformer-based architecture that uses a latent generative model to estimate robust feature representations for noisy image classification.
We also create new noisy image datasets incorporating a wide range of noise factors for performance benchmarking.
arXiv Detail & Related papers (2022-03-29T13:07:53Z)
- Robustness of deep learning algorithms in astronomy -- galaxy morphology studies [0.0]
We study the effect of observational noise arising from the exposure time on the performance of a ResNet18 trained to distinguish between galaxies of different morphologies in LSST mock data.
We also explore how domain adaptation techniques can help improve model robustness in the case of this type of naturally occurring attack.
arXiv Detail & Related papers (2021-11-01T14:12:15Z)
- Extensive Studies of the Neutron Star Equation of State from the Deep Learning Inference with the Observational Data Augmentation [0.0]
We discuss deep learning inference for the neutron star equation of state (EoS) using real observational data for the mass and radius.
To allow our deep learning method to incorporate observational uncertainties, we augment the training data with noise fluctuations corresponding to those uncertainties.
We conclude that data augmentation can be a useful technique for avoiding overfitting without tuning the neural network architecture.
arXiv Detail & Related papers (2021-01-20T14:27:12Z)
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
- Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
arXiv Detail & Related papers (2020-07-16T16:39:54Z)