Real World Robustness from Systematic Noise
- URL: http://arxiv.org/abs/2109.00864v1
- Date: Thu, 2 Sep 2021 12:25:16 GMT
- Title: Real World Robustness from Systematic Noise
- Authors: Yan Wang, Yuhang Li, Ruihao Gong
- Abstract summary: In this paper, we exhibit some long-neglected but frequently occurring adversarial examples caused by systematic error.
To benchmark these real-world adversarial examples, we propose the ImageNet-S dataset.
For example, we find a normal ResNet-50 trained on ImageNet can show a 1%-5% accuracy difference due to systematic error.
- Score: 13.034436864136103
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Systematic error, which is not determined by chance, often refers to the
inaccuracy (involving either the observation or measurement process) inherent
to a system. In this paper, we exhibit some long-neglected but frequently
occurring adversarial examples caused by systematic error. More specifically,
we find that a trained neural network classifier can be fooled by inconsistent
implementations of image decoding and resizing. The tiny differences between
these implementations often cause an accuracy drop from training to
deployment. To benchmark these real-world adversarial examples, we propose the
ImageNet-S dataset, which enables researchers to measure a classifier's
robustness to systematic error. For example, we find that a normal ResNet-50
trained on ImageNet can show a 1%-5% accuracy difference due to systematic
error. Together, our evaluation and dataset may aid future work toward
real-world robustness and practical generalization.
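To make the failure mode concrete, the following is a minimal sketch (not the authors' released code) that decodes and resizes the same JPEG with two common pipelines, Pillow and OpenCV, and reports the pixel-level discrepancy; the file path and the 224x224 target size are illustrative assumptions. Feeding both versions to the same trained classifier is how such implementation differences translate into the accuracy gap that ImageNet-S is designed to measure.

```python
# Minimal sketch: two "correct" image pipelines that nevertheless disagree.
# Assumptions: example.jpg exists locally; 224x224 is the model's input size.
import numpy as np
import cv2
from PIL import Image

path = "example.jpg"  # e.g., any JPEG from the ImageNet validation set

# Pipeline A: Pillow decode + bilinear resize (typical Python training setup).
pil_img = Image.open(path).convert("RGB")
pil_resized = np.asarray(pil_img.resize((224, 224), Image.BILINEAR), dtype=np.float32)

# Pipeline B: OpenCV decode + bilinear resize (common in deployment stacks).
cv_img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
cv_resized = cv2.resize(cv_img, (224, 224), interpolation=cv2.INTER_LINEAR).astype(np.float32)

# The decoded/resized arrays differ slightly even though both pipelines are
# valid; near the decision boundary these deviations can flip predictions.
diff = np.abs(pil_resized - cv_resized)
print("max abs pixel difference: ", diff.max())
print("mean abs pixel difference:", diff.mean())
```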
Related papers
- Typicalness-Aware Learning for Failure Detection [26.23185979968123]
Deep neural networks (DNNs) often suffer from the overconfidence issue, where incorrect predictions are made with high confidence scores.
We propose a novel approach called Typicalness-Aware Learning (TAL) to address this issue and improve failure detection performance.
arXiv Detail & Related papers (2024-11-04T11:09:47Z)
- An accurate detection is not all you need to combat label noise in web-noisy datasets [23.020126612431746]
We show that direct estimation of the separating hyperplane can indeed offer an accurate detection of OOD samples.
We propose a hybrid solution that alternates between noise detection using linear separation and a state-of-the-art (SOTA) small-loss approach.
arXiv Detail & Related papers (2024-07-08T00:21:42Z)
- Mitigating the Impact of Labeling Errors on Training via Rockafellian Relaxation [0.8741284539870512]
We propose and study the implementation of Rockafellian Relaxation (RR) for neural network training.
RR can enhance standard neural network methods to achieve robust performance across classification tasks.
We find that RR can mitigate the effects of dataset corruption due to both (heavy) labeling error and/or adversarial perturbation.
arXiv Detail & Related papers (2024-05-30T23:13:01Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Distribution Mismatch Correction for Improved Robustness in Deep Neural Networks [86.42889611784855]
Normalization methods can increase a model's vulnerability to noise and input corruptions.
We propose an unsupervised non-parametric distribution correction method that adapts the activation distribution of each layer.
In our experiments, we empirically show that the proposed method effectively reduces the impact of intense image corruptions.
arXiv Detail & Related papers (2021-10-05T11:36:25Z)
- An Introduction to Robust Graph Convolutional Networks [71.68610791161355]
We propose a novel Robust Graph Convolutional Neural Network for potentially erroneous single-view or multi-view data.
By incorporating extra layers based on autoencoders into traditional graph convolutional networks, we characterize and handle typical error models explicitly.
arXiv Detail & Related papers (2021-03-27T04:47:59Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embeddings from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model [20.948038514886377]
This paper presents a new framework that produces a quantitative metric for detecting misclassification errors.
The framework, RED, builds an error detector on top of the base classifier and estimates uncertainty of the detection scores using Gaussian Processes.
arXiv Detail & Related papers (2020-10-05T15:01:30Z)
- System Identification Through Lipschitz Regularized Deep Neural Networks [0.4297070083645048]
We use neural networks to learn governing equations from data.
We reconstruct the right-hand side of a system of ODEs, $\dot{x}(t) = f(t, x(t))$, directly from observed uniformly time-sampled data.
arXiv Detail & Related papers (2020-09-07T17:52:51Z)
- Salvage Reusable Samples from Noisy Data for Robust Learning [70.48919625304]
We propose a reusable sample selection and correction approach, termed CRSSC, for coping with label noise in training deep fine-grained (FG) models with web images.
Our key idea is to additionally identify and correct reusable samples, and then leverage them together with clean examples to update the networks.
arXiv Detail & Related papers (2020-08-06T02:07:21Z)
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
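For context on the entry above, focal loss replaces cross-entropy with a loss that down-weights examples the model already classifies confidently, which is what encourages better-calibrated confidence scores. The snippet below is a minimal sketch of the standard multi-class focal loss, FL(p_t) = -(1 - p_t)^gamma * log(p_t); it is not the paper's exact implementation, and the gamma value and the random batch are illustrative assumptions.

```python
# Minimal sketch of multi-class focal loss (standard formulation, not the
# paper's exact code); gamma and the dummy batch below are illustrative.
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """FL(p_t) = -(1 - p_t)^gamma * log(p_t), averaged over the batch."""
    log_probs = F.log_softmax(logits, dim=-1)                      # log-probabilities per class
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of the true class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt                         # down-weight easy examples
    return loss.mean()

# Usage: drop-in replacement for F.cross_entropy during training.
logits = torch.randn(8, 1000)            # e.g., classifier outputs for a batch
targets = torch.randint(0, 1000, (8,))   # ground-truth class indices
print(focal_loss(logits, targets).item())
```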
This list is automatically generated from the titles and abstracts of the papers on this site.