Predicting the Reliability of an Image Classifier under Image Distortion
- URL: http://arxiv.org/abs/2412.16881v1
- Date: Sun, 22 Dec 2024 06:21:06 GMT
- Title: Predicting the Reliability of an Image Classifier under Image Distortion
- Authors: Dang Nguyen, Sunil Gupta, Kien Do, Svetha Venkatesh
- Abstract summary: In image classification tasks, deep learning models are vulnerable to image distortions.
For quality control purposes, it is important to predict whether the image-classifier is reliable or non-reliable under a given distortion level.
Our solution is to construct a training set consisting of distortion levels along with their "non-reliable" or "reliable" labels, and to train a machine learning predictive model (called a distortion-classifier) to classify unseen distortion levels.
- Score: 48.866196348385
- Abstract: In image classification tasks, deep learning models are vulnerable to image distortions, i.e., their accuracy drops significantly if the input images are distorted. An image-classifier is considered "reliable" if its accuracy on distorted images is above a user-specified threshold. For quality control purposes, it is important to predict whether a given distortion level makes the image-classifier "non-reliable" or "reliable". Our solution is to construct a training set consisting of distortion levels along with their "non-reliable" or "reliable" labels, and to train a machine learning predictive model (called a distortion-classifier) to classify unseen distortion levels. However, learning an effective distortion-classifier is challenging because the training set is highly imbalanced. To address this problem, we propose two Gaussian process based methods to rebalance the training set. Extensive experiments show that our method significantly outperforms several baselines on six popular image datasets.
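The pipeline in the abstract can be illustrated with a minimal sketch: sample distortion levels, label each as reliable or non-reliable by measuring the image-classifier's accuracy at that level, rebalance the (heavily imbalanced) set, and fit a distortion-classifier. Here `measure_accuracy` is a hypothetical stand-in for evaluating the real image-classifier on distorted images, and the GP-based oversampling is a generic illustration of rebalancing, not the paper's exact two methods.

```python
# A minimal sketch, assuming a 2-D distortion level (e.g. blur sigma and
# noise std) and scikit-learn; not the authors' exact implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def measure_accuracy(level):
    """Hypothetical: accuracy of the image-classifier on images distorted
    at `level`; stands in for running the classifier on a distorted set."""
    return float(np.exp(-np.linalg.norm(level)))  # placeholder response

threshold = 0.5                                   # user-specified accuracy bar
rng = np.random.default_rng(0)
levels = rng.uniform(0.0, 3.0, size=(200, 2))     # sampled distortion levels
acc = np.array([measure_accuracy(l) for l in levels])
labels = (acc >= threshold).astype(int)           # 1 = "reliable"

# The set is highly imbalanced: only levels near zero distortion are
# reliable. Rebalance by fitting a GP to the accuracies and accepting
# synthetic candidate levels whose predicted accuracy clears the bar.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(levels, acc)
deficit = int(len(labels) - 2 * labels.sum())     # rough minority shortfall
synth = []
for _ in range(20000):                            # bounded rejection sampling
    if len(synth) >= max(deficit, 0):
        break
    cand = rng.uniform(0.0, 3.0, size=(1, 2))
    if gp.predict(cand)[0] >= threshold:
        synth.append(cand[0])

X = np.vstack([levels, *synth]) if synth else levels
y = np.concatenate([labels, np.ones(len(synth), dtype=int)])

# The distortion-classifier labels unseen distortion levels.
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[0.1, 0.1], [2.5, 2.5]]))      # e.g. [1 0]
```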
Related papers
- Few-shot Algorithm Assurance [11.924406021826606]
Deep learning models are vulnerable to image distortion.
Model Assurance under Image Distortion is a classification task.
We propose a novel Conditional Level Set Estimation algorithm.
arXiv Detail & Related papers (2024-12-28T21:11:55Z)
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Discriminative Class Tokens for Text-to-Image Diffusion Models [102.88033622546251]
We propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text.
Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images.
We evaluate our method extensively, showing that the generated images (i) are more accurate and of higher quality than those from standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier.
arXiv Detail & Related papers (2023-03-30T05:25:20Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
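The contrastive pairwise objective mentioned in the entry above can be illustrated with a generic NT-Xent (SimCLR-style) loss in PyTorch; this is a minimal sketch of that family of objectives, not CONTRIQUE's exact formulation. Embeddings of two distorted views of the same image form the positive pair.

```python
# Generic NT-Xent contrastive loss sketch (assumed setup, not CONTRIQUE's
# exact objective): positives are two views of the same image.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / tau                                # cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)                 # positive = target

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
```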
- An Empirical Study of the Collapsing Problem in Semi-Supervised 2D Human Pose Estimation [80.02124918255059]
Semi-supervised learning aims to boost the accuracy of a model by exploring unlabeled images.
We learn two networks to mutually teach each other.
The more reliable predictions on easy images in each network are used to teach the other network to learn about the corresponding hard images.
arXiv Detail & Related papers (2020-11-25T03:29:52Z)
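A classification analogue of the mutual-teaching scheme in the entry above, as a minimal PyTorch sketch: each network pseudo-labels the easy view and supervises the other network on the hard view. The paper applies the idea to 2D pose estimation; the toy networks and views here are assumptions for illustration.

```python
# Minimal mutual-teaching sketch (classification analogue, assumed setup).
import torch
import torch.nn.functional as F

def mutual_teaching_step(net_a, net_b, opt_a, opt_b, x_easy, x_hard):
    """x_easy / x_hard: easy and hard views of the same unlabeled batch."""
    with torch.no_grad():                     # teachers predict on easy view
        pseudo_a = net_a(x_easy).argmax(dim=1)
        pseudo_b = net_b(x_easy).argmax(dim=1)
    # Each network learns the hard view from the OTHER network's labels.
    loss_b = F.cross_entropy(net_b(x_hard), pseudo_a)
    loss_a = F.cross_entropy(net_a(x_hard), pseudo_b)
    for opt, loss in ((opt_a, loss_a), (opt_b, loss_b)):
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss_a.item(), loss_b.item()

net_a, net_b = torch.nn.Linear(16, 4), torch.nn.Linear(16, 4)   # toy "networks"
opt_a = torch.optim.SGD(net_a.parameters(), lr=0.1)
opt_b = torch.optim.SGD(net_b.parameters(), lr=0.1)
x = torch.randn(8, 16)                                          # unlabeled batch
mutual_teaching_step(net_a, net_b, opt_a, opt_b, x, x + 0.3 * torch.randn(8, 16))
```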
- A Deep Ordinal Distortion Estimation Approach for Distortion Rectification [62.72089758481803]
We propose a novel distortion rectification approach that can obtain more accurate parameters with higher efficiency.
We design a local-global associated estimation network that learns the ordinal distortion to approximate the realistic distortion distribution.
Considering the redundancy of distortion information, our approach uses only part of the distorted image for ordinal distortion estimation.
arXiv Detail & Related papers (2020-07-21T10:03:42Z)
- Overinterpretation reveals image classification model pathologies [15.950659318117694]
Convolutional neural networks (CNNs) trained on popular benchmarks exhibit troubling pathologies that allow them to display high accuracy even in the absence of semantically salient features.
We demonstrate that neural networks trained on CIFAR-10 and ImageNet suffer from overinterpretation.
Although these patterns portend potential model fragility in real-world deployment, they are in fact valid statistical patterns of the benchmark that alone suffice to attain high test accuracy.
arXiv Detail & Related papers (2020-03-19T17:12:23Z)
- Distortion Agnostic Deep Watermarking [21.493370728114815]
We propose a new framework for distortion-agnostic watermarking.
The robustness of our system comes from two sources: adversarial training and channel coding.
arXiv Detail & Related papers (2020-01-14T01:04:59Z)