Quantifying the robustness of deep multispectral segmentation models
against natural perturbations and data poisoning
- URL: http://arxiv.org/abs/2305.11347v1
- Date: Thu, 18 May 2023 23:43:33 GMT
- Title: Quantifying the robustness of deep multispectral segmentation models
against natural perturbations and data poisoning
- Authors: Elise Bishoff, Charles Godfrey, Myles McKay, Eleanor Byler
- Abstract summary: We characterize the performance and robustness of a multispectral (RGB and near infrared) image segmentation model subjected to adversarial attacks and natural perturbations.
We find both RGB and multispectral models are vulnerable to data poisoning attacks regardless of input or fusion architectures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In overhead image segmentation tasks, including additional spectral bands
beyond the traditional RGB channels can improve model performance. However, it
is still unclear how incorporating this additional data impacts model
robustness to adversarial attacks and natural perturbations. For adversarial
robustness, the additional information could improve the model's ability to
distinguish malicious inputs, or simply provide new attack avenues and
vulnerabilities. For natural perturbations, the additional information could
better inform model decisions and weaken perturbation effects or have no
significant influence at all. In this work, we seek to characterize the
performance and robustness of a multispectral (RGB and near infrared) image
segmentation model subjected to adversarial attacks and natural perturbations.
While existing adversarial and natural robustness research has focused
primarily on digital perturbations, we prioritize creating realistic
perturbations designed with physical-world conditions in mind. For adversarial
robustness, we focus on data poisoning attacks, whereas for natural robustness,
we focus on extending the ImageNet-C common corruptions for fog and snow so that
they perturb the input data coherently and self-consistently. Overall, we find both
RGB and multispectral models are vulnerable to data poisoning attacks
regardless of input or fusion architectures and that while physically
realizable natural perturbations still degrade model performance, the impact
differs based on fusion architecture and input data.
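The abstract does not spell out how the extended fog corruption is applied across bands. As a point of reference only, here is a minimal NumPy sketch of one way to perturb a 4-channel RGB+NIR tile with a single shared fog layer so that all bands are corrupted coherently; the low-frequency noise stand-in and the per-band haze intensities are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def coherent_fog(img_rgbn: np.ndarray, severity: float = 0.5,
                 seed: int = 0) -> np.ndarray:
    """Apply a fog-like haze to an (H, W, 4) RGB+NIR image in [0, 1].

    One low-frequency "transmission" map is shared by all four bands, so
    the perturbation is spatially self-consistent across RGB and NIR.
    """
    h, w, c = img_rgbn.shape
    assert c == 4, "expected RGB + NIR stacked on the last axis"

    rng = np.random.default_rng(seed)
    # Coarse random field as a crude stand-in for a plasma-fractal fog layer.
    coarse = rng.random((h // 8 + 1, w // 8 + 1))
    fog = np.kron(coarse, np.ones((8, 8)))[:h, :w]
    fog = (fog - fog.min()) / (fog.max() - fog.min() + 1e-8)

    # Fraction of the original scene that survives at each pixel.
    transmission = 1.0 - severity * fog          # shape (H, W)

    # Assumed per-band "airlight": fog scatters visible light strongly and
    # NIR somewhat less. These values are placeholders, not from the paper.
    airlight = np.array([0.9, 0.9, 0.9, 0.7])

    out = (img_rgbn * transmission[..., None]
           + airlight * (1.0 - transmission[..., None]))
    return np.clip(out, 0.0, 1.0)
```

A segmentation model under test would then be evaluated on `coherent_fog(x, severity)` over a sweep of severities, analogous to the five ImageNet-C severity levels.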
Related papers
- Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency [3.3490724063380215]
Adversarial training has been presented as a mitigation strategy that can result in more robust models.
We explore the effects of two different model compression methods -- structured weight pruning and quantization -- on adversarial robustness.
We show that adversarial fine-tuning of compressed models can achieve robustness performance comparable to adversarially trained models.
arXiv Detail & Related papers (2024-03-14T14:34:25Z) - The Risk of Federated Learning to Skew Fine-Tuning Features and
Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z) - Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent
Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z) - Understanding Robust Overfitting from the Feature Generalization Perspective [61.770805867606796]
Adversarial training (AT) constructs robust neural networks by incorporating adversarial perturbations into natural data.
It is plagued by the issue of robust overfitting (RO), which severely damages the model's robustness.
In this paper, we investigate RO from a novel feature generalization perspective.
arXiv Detail & Related papers (2023-10-01T07:57:03Z) - Measuring the Effect of Causal Disentanglement on the Adversarial
Robustness of Neural Network Models [1.3927943269211591]
Causal Neural Network models have shown high levels of robustness to adversarial attacks.
No quantitative study has yet measured the level of disentanglement achieved by these types of causal models.
arXiv Detail & Related papers (2023-08-21T13:22:12Z) - Interpretable Computer Vision Models through Adversarial Training:
Unveiling the Robustness-Interpretability Connection [0.0]
Interpretability is as essential as robustness when we deploy the models to the real world.
Standard models, compared to robust ones, are more susceptible to adversarial attacks, and their learned representations are less meaningful to humans.
arXiv Detail & Related papers (2023-07-04T13:51:55Z) - Robustness of deep learning algorithms in astronomy -- galaxy morphology
studies [0.0]
We study the effect of observational noise from the exposure time on the performance of a ResNet18 trained to distinguish between galaxies of different morphologies in LSST mock data.
We also explore how domain adaptation techniques can help improve model robustness against this type of naturally occurring attack.
arXiv Detail & Related papers (2021-11-01T14:12:15Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Contextual Fusion For Adversarial Robustness [0.0]
Deep neural networks are usually designed to process one particular information stream and are susceptible to various types of adversarial perturbations.
We developed a fusion model using a combination of background and foreground features extracted in parallel from Places-CNN and Imagenet-CNN (a minimal fusion sketch follows this list).
For gradient based attacks, our results show that fusion allows for significant improvements in classification without decreasing performance on unperturbed data.
arXiv Detail & Related papers (2020-11-18T20:13:23Z) - Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
arXiv Detail & Related papers (2020-07-16T16:39:54Z) - Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
arXiv Detail & Related papers (2020-04-30T19:12:50Z)
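Several of the entries above, like the main paper, hinge on how parallel input streams are fused. None of these abstracts specify their architectures, so the following PyTorch sketch only contrasts the two generic options for RGB+NIR segmentation: early fusion (stack the bands into one 4-channel input) versus late fusion (separate encoders whose features are concatenated before the segmentation head). Layer widths and the two-class head are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

class EarlyFusionSeg(nn.Module):
    """Early fusion: RGB and NIR are stacked into a single 4-channel input."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, rgb, nir):
        x = torch.cat([rgb, nir], dim=1)        # (B, 4, H, W)
        return self.head(self.backbone(x))      # per-pixel class logits

class LateFusionSeg(nn.Module):
    """Late fusion: per-band encoders, features concatenated at the head."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            )
        self.rgb_enc, self.nir_enc = encoder(3), encoder(1)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, rgb, nir):
        feats = torch.cat([self.rgb_enc(rgb), self.nir_enc(nir)], dim=1)
        return self.head(feats)

# Toy usage on a batch of two 256x256 tiles.
rgb, nir = torch.rand(2, 3, 256, 256), torch.rand(2, 1, 256, 256)
print(EarlyFusionSeg()(rgb, nir).shape)   # torch.Size([2, 2, 256, 256])
print(LateFusionSeg()(rgb, nir).shape)    # torch.Size([2, 2, 256, 256])
```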