Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation
- URL: http://arxiv.org/abs/2103.03102v5
- Date: Mon, 26 Jun 2023 17:50:00 GMT
- Title: Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation
- Authors: Wei Dai, Daniel Berleant
- Abstract summary: This paper adds to the fundamental body of work on benchmarking the robustness of deep learning (DL) classifiers.
Also, we introduce a new four-quadrant statistical visualization tool, including minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation.
All source code, related image sets, and preliminary data are shared on a GitHub website to support future academic research and industry projects.
- Score: 4.016928101928335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper adds to the fundamental body of work on benchmarking the
robustness of deep learning (DL) classifiers. We introduce a new benchmarking
methodology for evaluating the robustness of DL classifiers, along with a new
four-quadrant statistical visualization tool that reports minimum accuracy,
maximum accuracy, mean accuracy, and coefficient of variation. To measure the
robustness of DL classifiers, we created a comprehensive collection of 69
benchmarking image sets, including a clean set, sets with single-factor
perturbations, and sets with two-factor perturbation conditions. After
collecting experimental results, we first report that using two-factor
perturbed images improves both the robustness and the accuracy of DL
classifiers. The two-factor perturbations include (1) two digital perturbations
(salt & pepper noise and Gaussian noise) applied in both orders, and (2) one
digital perturbation (salt & pepper noise) and a geometric perturbation
(rotation) applied in both orders. All source code, related image sets,
preliminary data, and figures are shared on a GitHub website to support future
academic research and industry projects. The web resources are located at
https://github.com/caperock/robustai
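The two-factor protocol and the four-quadrant statistics described above can be sketched in a few lines. This is a minimal illustration, not code from the authors' repository; the function names, noise levels, and use of NumPy are assumptions made for the example.

```python
import numpy as np

def salt_pepper(img, amount=0.05, rng=None):
    """Flip a random fraction of pixels to 0 (pepper) or 255 (salt)."""
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    mask = rng.random(img.shape) < amount
    out[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return out

def gaussian(img, sigma=10.0, rng=None):
    """Add zero-mean Gaussian noise, clipped back to the valid range."""
    rng = rng or np.random.default_rng(1)
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(img.dtype)

def two_factor(img):
    """Apply the perturbation pair in both orders, as the protocol requires."""
    return salt_pepper(gaussian(img)), gaussian(salt_pepper(img))

def four_quadrant_stats(accuracies):
    """Min, max, and mean accuracy plus coefficient of variation (std/mean)."""
    a = np.asarray(accuracies, dtype=float)
    return {"min": a.min(), "max": a.max(),
            "mean": a.mean(), "cv": a.std() / a.mean()}
```

A lower coefficient of variation across the perturbed sets indicates a classifier whose accuracy is more stable under perturbation, which is the robustness signal the four-quadrant tool visualizes.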
Related papers
- Few-shot Image Classification based on Gradual Machine Learning [6.935034849731568]
Few-shot image classification aims to accurately classify unlabeled images using only a few labeled samples.
We propose a novel approach based on the non-i.i.d. paradigm of gradual machine learning (GML).
We show that the proposed approach can improve the SOTA performance by 1-5% in terms of accuracy.
arXiv Detail & Related papers (2023-07-28T12:30:41Z) - Utilizing Class Separation Distance for the Evaluation of Corruption Robustness of Machine Learning Classifiers [0.6882042556551611]
We propose a test data augmentation method that uses a robustness distance $\epsilon$ derived from the dataset's minimal class separation distance.
The resulting MSCR metric allows a dataset-specific comparison of different classifiers with respect to their corruption robustness.
Our results indicate that robustness training through simple data augmentation can already slightly improve accuracy.
arXiv Detail & Related papers (2022-06-27T15:56:16Z) - Ensemble Classifier Design Tuned to Dataset Characteristics for Network Intrusion Detection [0.0]
Two new algorithms are proposed to address the class overlap issue in the dataset.
The proposed design is evaluated for both binary and multi-category classification.
arXiv Detail & Related papers (2022-05-08T21:06:42Z) - Zero-Shot Logit Adjustment [89.68803484284408]
Generalized Zero-Shot Learning (GZSL) is a semantic-descriptor-based learning technique.
In this paper, we propose a new generation-based technique to enhance the generator's effect while neglecting the improvement of the classifier.
Our experiments demonstrate that the proposed technique achieves state-of-the-art when combined with the basic generator, and it can improve various generative zero-shot learning frameworks.
arXiv Detail & Related papers (2022-04-25T17:54:55Z) - Treatment Learning Causal Transformer for Noisy Image Classification [62.639851972495094]
In this work, we incorporate this binary information of "existence of noise" as treatment into image classification tasks to improve prediction accuracy.
Motivated from causal variational inference, we propose a transformer-based architecture, that uses a latent generative model to estimate robust feature representations for noise image classification.
We also create new noisy image datasets incorporating a wide range of noise factors for performance benchmarking.
arXiv Detail & Related papers (2022-03-29T13:07:53Z) - Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation [4.016928101928335]
This paper adds to the fundamental body of work on benchmarking the robustness of DL classifiers on defective images.
We created a comprehensive collection of 69 benchmarking image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions.
arXiv Detail & Related papers (2022-03-02T03:53:21Z) - Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose *Prototypical*, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT, and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z) - Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z) - Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z) - Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
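The consistency regularization idea in the entry above can be illustrated with a small sketch: penalize disagreement between a classifier's predictive distributions on two noisy copies of the same input. This is a generic NumPy illustration using a symmetric KL penalty, an assumption for the example rather than the authors' exact training objective.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_a, logits_b, eps=1e-12):
    """Symmetric KL divergence between the predictive distributions
    produced for two noise draws of the same input. Driving this
    toward zero makes predictions agree across noise, which is the
    behavior a smoothed classifier certifies over."""
    p, q = softmax(logits_a), softmax(logits_b)
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)), axis=-1)
    return float(0.5 * (kl_pq + kl_qp).mean())
```

The loss is zero when both noisy copies yield identical predictions and grows as they diverge, so adding it to the training objective trades a little clean accuracy for prediction stability under noise.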
This list is automatically generated from the titles and abstracts of the papers in this site.