Comprehensive Assessment of the Performance of Deep Learning Classifiers
Reveals a Surprising Lack of Robustness
- URL: http://arxiv.org/abs/2308.04137v1
- Date: Tue, 8 Aug 2023 08:50:27 GMT
- Title: Comprehensive Assessment of the Performance of Deep Learning Classifiers
Reveals a Surprising Lack of Robustness
- Authors: Michael W. Spratling
- Abstract summary: This article advocates benchmarking performance using a wide range of different types of data.
It is found that current deep neural networks, including those trained with methods that are believed to produce state-of-the-art robustness, are extremely vulnerable to making mistakes on certain types of data.
- Score: 2.1320960069210484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reliable and robust evaluation methods are a necessary first step towards
developing machine learning models that are themselves robust and reliable.
Unfortunately, current evaluation protocols typically used to assess
classifiers fail to comprehensively evaluate performance as they tend to rely
on limited types of test data, and ignore others. For example, using the
standard test data fails to evaluate the predictions the classifier makes for
samples from classes it was not trained on. On the other hand, testing with
data containing samples from unknown classes fails to evaluate how well the
classifier can predict the labels for known classes. This article advocates
benchmarking performance using a wide range of different types of data and
using a single metric that can be applied to all such data types to produce a
consistent evaluation of performance. Using such a benchmark it is found that
current deep neural networks, including those trained with methods that are
believed to produce state-of-the-art robustness, are extremely vulnerable to
making mistakes on certain types of data. This means that such models will be
unreliable in real-world scenarios where they may encounter data from many
different domains, and that they are insecure as they can easily be fooled into
making the wrong decisions. It is hoped that these results will motivate the
wider adoption of more comprehensive testing methods that will, in turn, lead
to the development of more robust machine learning methods in the future.
Code is available at:
https://codeberg.org/mwspratling/RobustnessEvaluation
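As a rough illustration of the kind of evaluation the abstract advocates, the sketch below scores a classifier on several types of test data using one shared metric. It is not the paper's actual benchmark: the scikit-learn-style predict_proba interface, the confidence threshold, and the metric (correct label for known-class samples, rejection for unknown-class samples, averaged over all samples) are assumptions made for illustration only.

    # Minimal sketch: one metric applied uniformly to many test-data types.
    # Assumptions: the model exposes a predict_proba(x) method returning an
    # (N, num_classes) array of softmax outputs; the unified metric and the
    # confidence threshold are illustrative, not those used in the paper.
    import numpy as np

    REJECT = -1  # label used when the classifier abstains

    def predict_with_reject(model, x, threshold=0.5):
        """Return the predicted class, or REJECT when confidence is low."""
        probs = model.predict_proba(x)
        conf = probs.max(axis=1)
        preds = probs.argmax(axis=1)
        preds[conf < threshold] = REJECT
        return preds

    def unified_score(model, x, y):
        """Single metric for every data type: y holds the true class for
        known-class samples and REJECT for samples from unknown classes."""
        preds = predict_with_reject(model, x)
        return float(np.mean(preds == y))

    def evaluate(model, datasets):
        """datasets maps a data-type name (clean, corrupted, adversarial,
        unknown-class, ...) to an (x, y) pair; each gets the same metric."""
        return {name: unified_score(model, x, y) for name, (x, y) in datasets.items()}

Because the same score is computed for every data type, a model that excels on the clean test set but rejects nothing from unknown classes (or vice versa) is penalised rather than hidden by a metric that applies to only one data type.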
Related papers
- Provable Robustness for Streaming Models with a Sliding Window [51.85182389861261]
In deep learning applications such as online content recommendation and stock market analysis, models use historical data to make predictions.
We derive robustness certificates for models that use a fixed-size sliding window over the input stream.
Our guarantees hold for the average model performance across the entire stream and are independent of stream size, making them suitable for large data streams.
arXiv Detail & Related papers (2023-03-28T21:02:35Z)
- A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification [0.491574468325115]
We present a large-scale empirical study for the first time enabling benchmarking confidence scoring functions.
The finding that a simple softmax response baseline is the overall best performing method underlines the drastic shortcomings of current evaluation (a sketch of this baseline is given after this list).
arXiv Detail & Related papers (2022-11-28T12:25:27Z)
- Classification of datasets with imputed missing values: does imputation quality matter? [2.7646249774183]
Classifying samples in incomplete datasets is non-trivial.
We demonstrate how the commonly used measures for assessing quality are flawed.
We propose a new class of discrepancy scores which focus on how well the method recreates the overall distribution of the data.
arXiv Detail & Related papers (2022-06-16T22:58:03Z)
- Estimating Confidence of Predictions of Individual Classifiers and Their Ensembles for the Genre Classification Task [0.0]
Genre identification is a subclass of non-topical text classification.
Neural models based on pre-trained transformers, such as BERT or XLM-RoBERTa, demonstrate SOTA results in many NLP tasks.
arXiv Detail & Related papers (2022-06-15T09:59:05Z)
- Certifying Data-Bias Robustness in Linear Regression [12.00314910031517]
We present a technique for certifying whether linear regression models are pointwise-robust to label bias in a training dataset.
We show how to solve this problem exactly for individual test points, and provide an approximate but more scalable method.
We also unearth gaps in bias-robustness, such as high levels of non-robustness for certain bias assumptions on some datasets.
arXiv Detail & Related papers (2022-06-07T20:47:07Z)
- Smoothed Embeddings for Certified Few-Shot Learning [63.68667303948808]
We extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings.
Our results are confirmed by experiments on different datasets.
arXiv Detail & Related papers (2022-02-02T18:19:04Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
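For reference, the sketch below illustrates the softmax response baseline mentioned in the failure-detection entry above: the confidence score is simply the largest softmax probability, and predictions whose score falls below a threshold are flagged as likely failures. The array shapes and the threshold value are illustrative assumptions, not taken from that paper.

    # Maximum softmax response as a confidence score for failure detection.
    # Assumption: logits is an (N, num_classes) array of raw model outputs.
    import numpy as np

    def softmax(logits):
        z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def softmax_response(logits):
        """Confidence score: the maximum softmax probability per sample."""
        return softmax(logits).max(axis=1)

    def flag_likely_failures(logits, threshold=0.7):
        """Boolean mask of predictions whose confidence falls below threshold."""
        return softmax_response(logits) < threshold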
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.