I Am Going MAD: Maximum Discrepancy Competition for Comparing
Classifiers Adaptively
- URL: http://arxiv.org/abs/2002.10648v1
- Date: Tue, 25 Feb 2020 03:32:29 GMT
- Title: I Am Going MAD: Maximum Discrepancy Competition for Comparing
Classifiers Adaptively
- Authors: Haotao Wang, Tianlong Chen, Zhangyang Wang and Kede Ma
- Abstract summary: We present the MAximum Discrepancy (MAD) competition, a framework for comparing image classifiers.
We adaptively sample a small test set from an arbitrarily large corpus of unlabeled images.
Human labeling on the resulting model-dependent image sets reveals the relative performance of the competing classifiers.
- Score: 135.7695909882746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The learning of hierarchical representations for image classification has
experienced an impressive series of successes due in part to the availability
of large-scale labeled data for training. On the other hand, the trained
classifiers have traditionally been evaluated on small and fixed sets of test
images, which are deemed to be extremely sparsely distributed in the space of
all natural images. It is thus questionable whether recent performance
improvements on the excessively re-used test sets generalize to real-world
natural images with much richer content variations. Inspired by efficient
stimulus selection for testing perceptual models in psychophysical and
physiological studies, we present an alternative framework for comparing image
classifiers, which we name the MAximum Discrepancy (MAD) competition. Rather
than comparing image classifiers using fixed test images, we adaptively sample
a small test set from an arbitrarily large corpus of unlabeled images so as to
maximize the discrepancies between the classifiers, measured by the distance
over WordNet hierarchy. Human labeling on the resulting model-dependent image
sets reveals the relative performance of the competing classifiers, and
provides useful insights on potential ways to improve them. We report the MAD
competition results of eleven ImageNet classifiers while noting that the
framework is readily extensible and cost-effective to add future classifiers
into the competition. Code can be found at https://github.com/TAMU-VITA/MAD.
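The adaptive sampling described in the abstract can be sketched as follows: rank an unlabeled corpus by the discrepancy between the two classifiers' predictions and keep the top-k images for human labeling. In the paper the discrepancy is a distance over the WordNet hierarchy; here the distance is an injected parameter so the sketch stays self-contained. All names are hypothetical, and this is only an illustration, not the authors' implementation (see the repository above).

```python
def mad_select(images, classifier_a, classifier_b, label_distance, k):
    """Return the k images on which the two classifiers disagree most.

    classifier_a / classifier_b map an image to a predicted label;
    label_distance measures the semantic discrepancy between two labels
    (in the paper, a distance over the WordNet hierarchy).
    """
    # Sort the corpus by prediction discrepancy, largest first,
    # and keep only the top-k most informative images.
    ranked = sorted(
        images,
        key=lambda img: label_distance(classifier_a(img), classifier_b(img)),
        reverse=True,
    )
    return ranked[:k]
```

Only these k model-dependent images then need human labels, which is what makes the competition cost-effective to extend with new classifiers.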
Related papers
- Efficient Exploration of Image Classifier Failures with Bayesian Optimization and Text-to-Image Models [4.59357989139429]
Performance evaluated on a validation set may not reflect performance in the real world.
Recent advances in text-to-image generative models make them valuable for benchmarking computer vision models.
arXiv Detail & Related papers (2024-04-26T06:22:43Z)
- Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
A prevalent strategy to bolster image classification performance is through augmenting the training set with synthetic images generated by T2I models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
arXiv Detail & Related papers (2024-03-28T17:23:45Z)
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Diversified in-domain synthesis with efficient fine-tuning for few-shot classification [64.86872227580866]
Few-shot image classification aims to learn an image classifier using only a small set of labeled examples per class.
We propose DISEF, a novel approach which addresses the generalization challenge in few-shot learning using synthetic data.
We validate our method in ten different benchmarks, consistently outperforming baselines and establishing a new state-of-the-art for few-shot classification.
arXiv Detail & Related papers (2023-12-05T17:18:09Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes gradients semantic-aware, enabling the synthesis of plausible images.
We show that our method is also applicable to text-to-image generation using image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- Mix-up Self-Supervised Learning for Contrast-agnostic Applications [33.807005669824136]
We present the first mix-up self-supervised learning framework for contrast-agnostic applications.
We address the low variance across images based on cross-domain mix-up and build the pretext task based on image reconstruction and transparency prediction.
arXiv Detail & Related papers (2022-04-02T16:58:36Z)
- Weakly-supervised Generative Adversarial Networks for medical image classification [1.479639149658596]
We propose a novel medical image classification algorithm called Weakly-Supervised Generative Adversarial Networks (WSGAN)
WSGAN only uses a small number of real images without labels to generate fake images or mask images to enlarge the sample size of the training set.
We show that WSGAN can obtain relatively high learning performance by using few labeled and unlabeled data.
arXiv Detail & Related papers (2021-11-29T15:38:48Z)
- Multi-Label Image Classification with Contrastive Learning [57.47567461616912]
We show that a direct application of contrastive learning can hardly improve performance in multi-label cases.
We propose a novel framework for multi-label classification with contrastive learning in a fully supervised setting.
arXiv Detail & Related papers (2021-07-24T15:00:47Z)
- Adaptive Label Smoothing [1.3198689566654107]
We present a novel approach to classification that combines the ideas of objectness and label smoothing during training.
We show extensive results using ImageNet to demonstrate that CNNs trained using adaptive label smoothing are much less likely to be overconfident in their predictions.
arXiv Detail & Related papers (2020-09-14T13:37:30Z)
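For reference, the standard fixed-factor form of label smoothing that the entry above builds on can be sketched as follows. The paper's adaptive variant changes the smoothing factor per image based on objectness, which is not reproduced here; this is only the generic technique.

```python
def smooth_labels(num_classes, true_class, epsilon):
    """Standard label smoothing: move epsilon of the probability mass
    from the true class to a uniform distribution over all classes,
    producing a soft target that discourages overconfident predictions."""
    uniform = epsilon / num_classes
    target = [uniform] * num_classes
    target[true_class] += 1.0 - epsilon
    return target
```

Training against such soft targets instead of one-hot labels is what reduces overconfidence; the adaptive variant varies epsilon per example rather than keeping it fixed.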
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all information) and is not responsible for any consequences of its use.