Robustness and invariance properties of image classifiers
- URL: http://arxiv.org/abs/2209.02408v1
- Date: Tue, 30 Aug 2022 11:00:59 GMT
- Title: Robustness and invariance properties of image classifiers
- Authors: Apostolos Modas
- Abstract summary: Deep neural networks have achieved impressive results in many image classification tasks.
Deep networks are not robust to a large variety of semantic-preserving image modifications.
The poor robustness of image classifiers to small data distribution shifts raises serious concerns regarding their trustworthiness.
- Score: 8.970032486260695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have achieved impressive results in many image
classification tasks. However, since their performance is usually measured in
controlled settings, it is important to ensure that their decisions remain
correct when deployed in noisy environments. In fact, deep networks are not
robust to a large variety of semantic-preserving image modifications, even to
imperceptible image changes known as adversarial perturbations. The poor
robustness of image classifiers to small data distribution shifts raises
serious concerns regarding their trustworthiness. To build reliable machine
learning models, we must design principled methods to analyze and understand
the mechanisms that shape robustness and invariance. This is exactly the focus
of this thesis.
First, we study the problem of computing sparse adversarial perturbations. We
exploit the geometry of the decision boundaries of image classifiers to
compute sparse perturbations very quickly, and we reveal a qualitative
connection between adversarial examples and the data features that image
classifiers learn. Then, to better understand this connection, we propose a
geometric framework that connects the distance of data samples to the decision
boundary with the features present in the data. We show that deep classifiers
have a strong inductive bias towards invariance to non-discriminative features, and
that adversarial training exploits this property to confer robustness. Finally,
we focus on the challenging problem of generalization to unforeseen corruptions
of the data, and we propose a novel data augmentation scheme for achieving
state-of-the-art robustness to common image corruptions.
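To make the geometric idea behind the first contribution concrete, the sketch below takes one linearized step toward the decision boundary while changing a single input coordinate: the minimal l1 change that crosses a linearized boundary concentrates all its mass on the coordinate with the largest gradient magnitude. This is only an illustration of the principle, not the thesis's full algorithm (methods in this line of work, such as SparseFool, iterate a more careful l1-constrained projection with box constraints); all names below are illustrative.

```python
import torch

def sparse_boundary_step(model, x, label, overshoot=0.02):
    """One linearized step toward the decision boundary that changes a
    single input coordinate. Minimal sketch assuming `model` returns
    logits for a single image x of shape (1, C, H, W)."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    # Nearest competing class: the runner-up logit.
    top2 = logits.topk(2, dim=1).indices[0]
    target = top2[1] if top2[0].item() == label else top2[0]
    # f(x) < 0 while correctly classified; the boundary is f(x) = 0.
    f = logits[0, target] - logits[0, label]
    f.backward()
    w = x.grad.flatten()
    # Linearized boundary: f + w . delta = 0. The minimal-l1 solution
    # puts all the change on the coordinate with the largest |w_i|.
    i = w.abs().argmax()
    delta = torch.zeros_like(w)
    delta[i] = -(1 + overshoot) * f.item() / w[i].item()
    return x.detach() + delta.view_as(x)
```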
Overall, our results contribute to the understanding of the fundamental
mechanisms of deep image classifiers, and pave the way for building more
reliable machine learning systems that can be deployed in real-world
environments.
Related papers
- DiG-IN: Diffusion Guidance for Investigating Networks -- Uncovering Classifier Differences, Neuron Visualisations and Visual Counterfactual Explanations [35.458709912618176]
Deep learning has led to huge progress in complex image classification tasks like ImageNet, but it has also revealed unexpected failure modes, e.g. via spurious features.
For safety-critical tasks, the black-box nature of these decisions is problematic, and explanations, or at least methods that make decisions plausible, are urgently needed.
We address these problems by generating images that optimize a classifier-derived objective using a framework for guided image generation.
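A hedged sketch of the core loop such a framework implies: optimize a latent code against a classifier-derived objective and decode it into an image. Here `decode` is a placeholder for the generative backbone (DiG-IN actually guides a diffusion model's denoising trajectory, which is more involved); all names are assumptions.

```python
import torch

def guided_generation(decode, classifier, z_init, target, steps=200, lr=0.05):
    """Optimize a latent code so the decoded image maximizes a
    classifier-derived objective. `decode` (latent -> image) and
    `classifier` (image -> logits) are placeholder callables."""
    z = z_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = -classifier(decode(z))[0, target]  # maximize target logit
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decode(z).detach()
```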
arXiv Detail & Related papers (2023-11-29T17:35:29Z) - Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
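One plausible reading of distance-based weighting, sketched below: penalize attention scores by the spatial distance between token positions, so nearby image components interact more strongly. This is an illustration of the idea only, not the paper's exact DWT block; the additive bias form and `alpha` are assumptions.

```python
import torch

def distance_weighted_attention(q, k, v, coords, alpha=1.0):
    """Attention with a distance-based bias. q, k, v: (N, D) token
    features; coords: (N, 2) spatial positions. Illustrative only."""
    scores = (q @ k.transpose(-2, -1)) / q.size(-1) ** 0.5
    dist = torch.cdist(coords, coords)      # (N, N) pairwise distances
    weights = (scores - alpha * dist).softmax(dim=-1)
    return weights @ v
```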
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - Counterfactual Image Generation for adversarially robust and
interpretable Classifiers [1.3859669037499769]
We propose a unified framework leveraging image-to-image translation Generative Adversarial Networks (GANs) to produce counterfactual samples.
This is achieved by combining the classifier and discriminator into a single model that attributes real images to their respective classes and flags generated images as "fake".
We show that the model exhibits improved robustness to adversarial attacks, and that the discriminator's "fakeness" value serves as an uncertainty measure for its predictions.
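The combined classifier/discriminator can be sketched as a single head with K+1 outputs: real images are attributed to their K classes and generated ones to an extra "fake" class, whose softmax probability doubles as the "fakeness" score. Shapes and names below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits_real, labels, logits_fake, num_classes):
    """Shared classifier/discriminator objective over K+1 outputs:
    real images -> their class in [0, K), generated images -> class K.
    logits_*: (N, K+1) tensors."""
    fake_label = torch.full((logits_fake.size(0),), num_classes,
                            dtype=torch.long, device=logits_fake.device)
    return (F.cross_entropy(logits_real, labels) +
            F.cross_entropy(logits_fake, fake_label))

def fakeness(logits, num_classes):
    """Probability of the 'fake' class, usable as an uncertainty measure."""
    return logits.softmax(dim=-1)[:, num_classes]
```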
arXiv Detail & Related papers (2023-10-01T18:50:29Z) - Self-supervised Semantic Segmentation: Consistency over Transformation [3.485615723221064]
We propose a novel self-supervised algorithm, S3-Net, which integrates a robust framework based on the proposed Inception Large Kernel Attention (I-LKA) modules.
We leverage deformable convolution as an integral component to effectively capture and delineate lesion deformations for superior object boundary definition.
Our experimental results on skin lesion and lung organ segmentation tasks show the superior performance of our method compared to the SOTA approaches.
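As a rough sketch of the large-kernel-attention building block referenced above (the I-LKA module adds an Inception-style arrangement on top of this idea): a large receptive field is decomposed into depthwise, depthwise-dilated, and pointwise convolutions whose output gates the input. The kernel sizes follow the common LKA recipe and are assumptions here.

```python
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Decomposed large-kernel attention: a 5x5 depthwise conv, a 7x7
    depthwise conv with dilation 3 (19x19 effective receptive field),
    and a 1x1 conv produce an attention map that gates the input."""
    def __init__(self, channels):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    dilation=3, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        attn = self.pointwise(self.dw_dilated(self.dw(x)))
        return x * attn   # attention map gates the input features
```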
arXiv Detail & Related papers (2023-08-31T21:28:46Z) - RobustCLEVR: A Benchmark and Framework for Evaluating Robustness in
Object-centric Learning [9.308581290987783]
We present the RobustCLEVR benchmark dataset and evaluation framework.
Our framework takes a novel approach to evaluating robustness by enabling the specification of causal dependencies.
Overall, we find that object-centric methods are not inherently robust to image corruptions.
arXiv Detail & Related papers (2023-08-28T20:52:18Z) - PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant
Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with gains of 15.3% mIoU compared with advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z) - Robustness in Deep Learning for Computer Vision: Mind the gap? [13.576376492050185]
We identify, analyze, and summarize current definitions and progress towards non-adversarial robustness in deep learning for computer vision.
We find that this area of research has received disproportionately little attention relative to adversarial machine learning.
arXiv Detail & Related papers (2021-12-01T16:42:38Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary distortion-classification problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
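A generic contrastive pairwise objective of the kind described, in a minimal NT-Xent-style form (CONTRIQUE's auxiliary task additionally groups views by distortion class, which this sketch omits):

```python
import torch
import torch.nn.functional as F

def contrastive_pairwise_loss(z1, z2, temperature=0.1):
    """NT-Xent-style objective. z1, z2: (N, D) embeddings of two views
    of the same N images; each embedding's positive is its counterpart
    view, every other embedding in the batch is a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, D)
    sim = z @ z.t() / temperature                  # cosine similarities
    sim.fill_diagonal_(float('-inf'))              # exclude self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets.to(z.device))
```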
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
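A minimal FGSM-style sketch of such an additive attack on a stereo network: perturb both views within an l-infinity budget so as to increase the disparity error. The loss choice, budget, and single-step form are illustrative simplifications of the paper's attack.

```python
import torch
import torch.nn.functional as F

def stereo_fgsm(stereo_net, left, right, disparity_gt, eps=2 / 255):
    """One-step l-infinity attack on a stereo pair. `stereo_net` maps
    (left, right) -> predicted disparity; all names are placeholders."""
    left = left.clone().detach().requires_grad_(True)
    right = right.clone().detach().requires_grad_(True)
    loss = F.l1_loss(stereo_net(left, right), disparity_gt)
    loss.backward()
    adv_left = (left + eps * left.grad.sign()).clamp(0, 1).detach()
    adv_right = (right + eps * right.grad.sign()).clamp(0, 1).detach()
    return adv_left, adv_right
```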
arXiv Detail & Related papers (2020-09-21T19:20:09Z) - Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
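The core operation can be sketched as re-normalizing a feature map with adversarially shifted channel statistics; in AdvBN the offsets are found by gradient ascent on the task loss within a bounded region, which the sketch leaves out. Shapes and the exact parameterization below are assumptions.

```python
import torch

def perturb_feature_statistics(feat, delta_mean, delta_std):
    """Shift per-channel feature statistics instead of pixels.
    feat: (N, C, H, W); delta_mean, delta_std: (1, C, 1, 1) worst-case
    offsets. Illustrative sketch of the idea behind AdvBN."""
    mean = feat.mean(dim=(0, 2, 3), keepdim=True)
    std = feat.std(dim=(0, 2, 3), keepdim=True) + 1e-5
    normalized = (feat - mean) / std
    # Re-apply adversarially perturbed statistics.
    return normalized * std * (1 + delta_std) + mean + delta_mean
```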
arXiv Detail & Related papers (2020-09-18T17:52:34Z) - Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
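Using such a learned set reduces to sampling latent codes from the constrained region and decoding them conditioned on the input, as sketched below; `generator` and its `latent_dim` attribute are placeholders (the paper trains the generator as a conditional VAE).

```python
import torch

def sample_perturbation_set(generator, x, epsilon, n_samples=8):
    """Draw perturbed versions of a single image x from a learned
    perturbation set: latents are projected into an l2 ball of radius
    epsilon (the constrained region), then decoded conditioned on x."""
    z = torch.randn(n_samples, generator.latent_dim, device=x.device)
    scale = (epsilon / z.norm(dim=1, keepdim=True)).clamp(max=1.0)
    z = z * scale                                 # project into the ball
    x_batch = x.unsqueeze(0).expand(n_samples, *x.shape)
    return generator(z, x_batch)
```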
arXiv Detail & Related papers (2020-07-16T16:39:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.