Robustness of deep learning algorithms in astronomy -- galaxy morphology studies
- URL: http://arxiv.org/abs/2111.00961v2
- Date: Tue, 2 Nov 2021 14:35:00 GMT
- Title: Robustness of deep learning algorithms in astronomy -- galaxy morphology studies
- Authors: A. Ćiprijanović, D. Kafkes, G. N. Perdue, K. Pedro, G. Snyder, F. J. Sánchez, S. Madireddy, S. M. Wild, B. Nord
- Abstract summary: We study the effect of observational noise from the exposure time on the performance of a ResNet18 trained to distinguish between galaxies of different morphologies in LSST mock data.
We also explore how domain adaptation techniques can help improve model robustness in the case of this type of naturally occurring attack.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models are being increasingly adopted in a wide array of
scientific domains, especially to handle the high dimensionality and volume of
scientific data. However, these models tend to be brittle due to their
complexity and overparametrization, and are especially sensitive to inadvertent
adversarial perturbations that can arise from common image processing
operations, such as compression or blurring, that are often applied to real
scientific data. It is crucial to understand this brittleness and develop
models that are robust to these adversarial perturbations. To this end, we study
the effect of observational noise from the exposure time, as well as the
worst-case scenario of a one-pixel attack as a proxy for compression or
telescope errors, on the performance of a ResNet18 trained to distinguish
between galaxies of different morphologies in LSST mock data. We also explore
how domain adaptation techniques can help improve model robustness against
these naturally occurring attacks and help scientists build more trustworthy
and stable models.
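The abstract names the concrete pieces of the experiment: a ResNet18 classifier, exposure-time noise, and a worst-case one-pixel attack. Below is a minimal PyTorch sketch of such a robustness probe; the image size, number of morphology classes, noise level, and attacked pixel are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: probe a ResNet18 morphology classifier with the two
# perturbations described in the abstract. All concrete numbers (image size,
# class count, noise level, attacked pixel) are assumptions for illustration.
import torch
from torchvision.models import resnet18

N_CLASSES = 3                                # e.g. spiral / elliptical / merger (assumed)
model = resnet18(num_classes=N_CLASSES)
model.eval()

def add_exposure_noise(img, sigma=0.05):
    """Mimic a shorter exposure time with additive Gaussian pixel noise."""
    return img + sigma * torch.randn_like(img)

def one_pixel_attack(img, row, col, channel, value):
    """Worst-case single-pixel corruption (proxy for compression or telescope errors)."""
    perturbed = img.clone()
    perturbed[channel, row, col] = value
    return perturbed

img = torch.rand(3, 100, 100)                # stand-in for an LSST mock galaxy cutout
clean_pred = model(img.unsqueeze(0)).argmax(dim=1)
noisy_pred = model(add_exposure_noise(img).unsqueeze(0)).argmax(dim=1)
pixel_pred = model(one_pixel_attack(img, 50, 50, 0, img.max()).unsqueeze(0)).argmax(dim=1)

print("label flipped by exposure noise:  ", (clean_pred != noisy_pred).item())
print("label flipped by one-pixel attack:", (clean_pred != pixel_pred).item())
```

The abstract also points to domain adaptation as the mitigation. One common way to realize that idea (an illustrative choice, not necessarily the authors' exact method) is a maximum mean discrepancy (MMD) penalty that pulls the network's features for clean and perturbed images toward each other during training:

```python
def mmd_linear(source_feats, target_feats):
    """Linear-kernel MMD: squared distance between the mean feature vectors of
    the clean (source) and perturbed (target) batches."""
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return (delta * delta).sum()

# Training objective (sketch): cross-entropy on clean images plus a weighted
# MMD term computed on penultimate-layer features of clean vs. perturbed data.
# loss = cross_entropy(logits_clean, labels) + lam * mmd_linear(feat_clean, feat_noisy)
```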
Related papers
- Discovering interpretable models of scientific image data with deep learning [0.0]
We implement representation learning, sparse deep neural network training and symbolic regression.
We demonstrate their relevance to the field of bioimaging using a well-studied test problem of classifying cell states in microscopy data.
We explore the utility of such interpretable models in producing scientific explanations of the underlying biological phenomenon.
arXiv Detail & Related papers (2024-02-05T15:45:55Z) - Quantifying the robustness of deep multispectral segmentation models
against natural perturbations and data poisoning [0.0]
We characterize the performance and robustness of a multispectral (RGB and near infrared) image segmentation model subjected to adversarial attacks and natural perturbations.
We find both RGB and multispectral models are vulnerable to data poisoning attacks regardless of input or fusion architectures.
arXiv Detail & Related papers (2023-05-18T23:43:33Z) - Generating artificial digital image correlation data using
physics-guided adversarial networks [2.07180164747172]
Digital image correlation (DIC) has become a valuable tool for monitoring and evaluating mechanical experiments on cracked specimens.
We present a method to directly generate large amounts of artificial displacement data for cracked specimens that resembles real interpolated DIC displacements.
arXiv Detail & Related papers (2023-03-28T12:52:40Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - DeepAdversaries: Examining the Robustness of Deep Learning Models for
Galaxy Morphology Classification [47.38422424155742]
In morphological classification of galaxies, we study the effects of perturbations in imaging data.
We show that training with domain adaptation improves model robustness and mitigates the effects of these perturbations.
arXiv Detail & Related papers (2021-12-28T21:29:02Z) - Causal Navigation by Continuous-time Neural Networks [108.84958284162857]
We propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks.
We evaluate our method in the context of visual-control learning of drones over a series of complex tasks.
arXiv Detail & Related papers (2021-06-15T17:45:32Z) - Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z) - Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
arXiv Detail & Related papers (2020-07-16T16:39:54Z) - Model-Based Robust Deep Learning: Generalizing to Natural,
Out-of-Distribution Data [104.69689574851724]
We propose a paradigm shift from perturbation-based adversarial robustness toward model-based robust deep learning.
Our objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data.
arXiv Detail & Related papers (2020-05-20T13:46:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.