ImageNot: A contrast with ImageNet preserves model rankings
- URL: http://arxiv.org/abs/2404.02112v1
- Date: Tue, 2 Apr 2024 17:13:04 GMT
- Title: ImageNot: A contrast with ImageNet preserves model rankings
- Authors: Olawale Salaudeen, Moritz Hardt
- Abstract summary: We introduce ImageNot, a dataset designed to match the scale of ImageNet while differing drastically in other aspects.
Key model architectures developed for ImageNet over the years rank the same when trained and evaluated on ImageNot as they do on ImageNet.
- Score: 16.169858780154893
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce ImageNot, a dataset designed to match the scale of ImageNet while differing drastically in other aspects. We show that key model architectures developed for ImageNet over the years rank identically when trained and evaluated on ImageNot to how they rank on ImageNet. This is true when training models from scratch or fine-tuning them. Moreover, the relative improvements of each model over earlier models strongly correlate in both datasets. We further give evidence that ImageNot has a similar utility as ImageNet for transfer learning purposes. Our work demonstrates a surprising degree of external validity in the relative performance of image classification models. This stands in contrast with absolute accuracy numbers that typically drop sharply even under small changes to a dataset.
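Concretely, the ranking claim can be checked with a rank correlation between per-model accuracies on the two datasets. A minimal sketch of such a comparison (the model list and accuracy numbers below are illustrative placeholders, not figures from the paper):

```python
# Sketch: quantify how well model rankings transfer between two datasets
# using Spearman rank correlation. Accuracies are illustrative placeholders.
from scipy.stats import spearmanr

models = ["AlexNet", "VGG", "ResNet", "DenseNet", "ViT"]
acc_imagenet = [0.57, 0.71, 0.76, 0.77, 0.81]  # hypothetical top-1 accuracies
acc_imagenot = [0.33, 0.48, 0.55, 0.57, 0.62]  # hypothetical top-1 accuracies

rho, pval = spearmanr(acc_imagenet, acc_imagenot)
print(f"Spearman rank correlation: {rho:.3f} (p={pval:.3g})")
# rho == 1.0 would mean the two datasets order the models identically.
```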
Related papers
- ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object [78.58860252442045]
We introduce generative models as a data source for hard images that benchmark deep models' robustness.
We are able to generate images with more diversified backgrounds, textures, and materials than any prior work; we term this benchmark ImageNet-D.
Our work suggests that diffusion models can be an effective source to test vision models.
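Read at a high level, such a benchmark can be built by synthesizing images with varied backgrounds and textures and keeping those that fool a reference classifier. A rough sketch of that loop, assuming standard diffusers and torchvision APIs (the prompt, class index, and model choices are illustrative, not the authors' pipeline):

```python
# Sketch: generate a candidate image with a diffusion model and keep it
# only if a reference classifier gets it wrong (a "hard" example).
import torch
from diffusers import StableDiffusionPipeline
from torchvision import models, transforms

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
classifier = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

true_class = 414  # assumed ImageNet-1k index for "backpack"
image = pipe("a photo of a backpack on a wooden texture").images[0]

with torch.no_grad():
    pred = classifier(preprocess(image).unsqueeze(0)).argmax(1).item()
if pred != true_class:
    image.save("hard_example.png")  # candidate for the benchmark
```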
arXiv Detail & Related papers (2024-03-27T17:23:39Z)
- Evaluating Data Attribution for Text-to-Image Models [62.844382063780365]
We evaluate attribution through "customization" methods, which tune an existing large-scale model toward a given exemplar object or style.
Our key insight is that this allows us to efficiently create synthetic images that are computationally influenced by the exemplar by construction.
By taking into account the inherent uncertainty of the problem, we can assign soft attribution scores over a set of training images.
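One simple way to picture "soft attribution scores" is a temperature softmax over similarity scores between the synthesized image and candidate training images; the scoring rule below is an assumption for illustration, not the paper's exact method:

```python
# Sketch: turn raw similarity scores between a synthesized image and a set
# of training images into soft attribution scores via a temperature softmax.
import numpy as np

def soft_attribution(similarities: np.ndarray, temperature: float = 0.1) -> np.ndarray:
    """Map similarity scores to a probability distribution over training images."""
    z = similarities / temperature
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical cosine similarities to five training images.
sims = np.array([0.82, 0.34, 0.78, 0.12, 0.55])
print(soft_attribution(sims))  # higher similarity -> larger attribution mass
```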
arXiv Detail & Related papers (2023-06-15T17:59:51Z)
- ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing [45.14977000707886]
Higher accuracy on ImageNet usually leads to better robustness against different corruptions.
We create a toolkit for object editing with controls for backgrounds, sizes, positions, and directions.
We evaluate the performance of current deep learning models, including both convolutional neural networks and vision transformers.
arXiv Detail & Related papers (2023-03-30T02:02:32Z)
- Diverse, Difficult, and Odd Instances (D2O): A New Test Set for Object Classification [47.64219291655723]
We introduce a new test set, called D2O, which is sufficiently different from existing test sets.
Our dataset contains 8,060 images spread across 36 categories, out of which 29 appear in ImageNet.
The best Top-1 accuracy on our dataset is around 60%, much lower than the best Top-1 accuracy of 91% on ImageNet.
arXiv Detail & Related papers (2023-01-29T19:58:32Z)
- Does progress on ImageNet transfer to real-world datasets? [28.918770106968843]
We evaluate ImageNet pre-trained models with varying accuracy on six practical image classification datasets.
On multiple datasets, models with higher ImageNet accuracy do not consistently yield performance improvements.
We hope that future benchmarks will include more diverse datasets to encourage a more comprehensive approach to improving learning algorithms.
arXiv Detail & Related papers (2023-01-11T18:55:53Z)
- Identical Image Retrieval using Deep Learning [0.0]
We use the BigTransfer model, itself a state-of-the-art model.
We extract key features and fit a K-Nearest Neighbors model to retrieve the nearest neighbors.
Our model finds similar images, which is hard to achieve through text queries, at low inference time.
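A minimal sketch of this retrieval setup, with ResNet-50 features standing in for the BigTransfer model and placeholder image paths:

```python
# Sketch: embed images with a pretrained CNN, then retrieve near-duplicates
# with K-Nearest Neighbors. ResNet-50 is a stand-in for BigTransfer (BiT).
import torch
from torchvision import models, transforms
from sklearn.neighbors import NearestNeighbors
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()  # drop classifier head

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def embed(paths):
    with torch.no_grad():
        batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
        return backbone(batch).flatten(1).numpy()  # (N, 2048) feature vectors

gallery_paths = ["img0.jpg", "img1.jpg", "img2.jpg"]  # placeholder image files
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(embed(gallery_paths))
dist, idx = index.kneighbors(embed(["query.jpg"]))
print([gallery_paths[i] for i in idx[0]])  # most similar gallery images
```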
arXiv Detail & Related papers (2022-05-10T13:34:41Z)
- Is it Enough to Optimize CNN Architectures on ImageNet? [0.0]
We train 500 CNN architectures on ImageNet and 8 other image classification datasets.
The relationship between architecture and performance varies wildly depending on the dataset.
We identify two dataset-specific performance indicators: the cumulative width across layers as well as the total depth of the network.
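Both indicators can be computed directly from a network definition. The sketch below uses one plausible reading (cumulative width as the sum of convolutional output channels, depth as the count of parametric layers), which may differ from the paper's exact definitions:

```python
# Sketch: compute two architecture statistics from a torchvision model:
# cumulative width (sum of conv output channels) and total depth
# (number of conv/linear layers). Definitions are one plausible reading.
import torch.nn as nn
from torchvision import models

def width_and_depth(model: nn.Module):
    width, depth = 0, 0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            width += m.out_channels
            depth += 1
        elif isinstance(m, nn.Linear):
            depth += 1
    return width, depth

print(width_and_depth(models.resnet18()))  # (4800, 21)
print(width_and_depth(models.resnet50()))
```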
arXiv Detail & Related papers (2021-03-16T14:42:01Z)
- Contemplating real-world object classification [53.10151901863263]
We reanalyze the ObjectNet dataset recently proposed by Barbu et al., which contains objects in daily-life situations.
We find that applying deep models to the isolated objects, rather than the entire scene as is done in the original paper, results in around 20-30% performance improvement.
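A minimal sketch of that comparison, assuming an object bounding box is available (the image path and box coordinates are placeholders):

```python
# Sketch: compare classifying the full scene vs. an object crop,
# mirroring the isolated-object evaluation. Path and box are placeholders.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def top1(img):
    with torch.no_grad():
        return model(preprocess(img).unsqueeze(0)).argmax(1).item()

scene = Image.open("objectnet_example.jpg").convert("RGB")
box = (120, 80, 360, 300)  # hypothetical (left, top, right, bottom) object box
print("full scene prediction:", top1(scene))
print("isolated object prediction:", top1(scene.crop(box)))
```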
arXiv Detail & Related papers (2021-03-08T23:29:59Z)
- Rethinking Natural Adversarial Examples for Classification Models [43.87819913022369]
ImageNet-A is a famous dataset of natural adversarial examples.
We validate the hypothesis that background clutter contributes to their difficulty by reducing the background influence in ImageNet-A examples with object detection techniques.
Experiments show that object detection models with various classification models as backbones obtain much higher accuracy than the corresponding classification models alone.
arXiv Detail & Related papers (2021-02-23T14:46:48Z)
- Shape-Texture Debiased Neural Network Training [50.6178024087048]
Convolutional Neural Networks are often biased towards either texture or shape, depending on the training dataset.
We develop an algorithm for shape-texture debiased learning.
Experiments show that our method successfully improves model performance on several image recognition benchmarks.
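One way to picture debiased training is a shared backbone supervised with both a shape (content) label and a texture (style) label for each stylized image; the two-head setup below is an illustrative sketch, not the authors' exact architecture:

```python
# Sketch: a shared backbone with two heads, supervised by a shape label
# (image content) and a texture label (style source). Illustrative only.
import torch
import torch.nn as nn
from torchvision import models

class DebiasedNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        backbone = models.resnet18()
        backbone.fc = nn.Identity()  # expose the 512-dim feature vector
        self.backbone = backbone
        self.shape_head = nn.Linear(512, num_classes)
        self.texture_head = nn.Linear(512, num_classes)

    def forward(self, x):
        z = self.backbone(x)
        return self.shape_head(z), self.texture_head(z)

net = DebiasedNet(num_classes=10)
x = torch.randn(4, 3, 224, 224)        # dummy batch of stylized images
shape_y = torch.randint(0, 10, (4,))   # content class of each image
texture_y = torch.randint(0, 10, (4,)) # style/texture class of each image
shape_logits, texture_logits = net(x)
loss = nn.functional.cross_entropy(shape_logits, shape_y) \
     + nn.functional.cross_entropy(texture_logits, texture_y)
loss.backward()
```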
arXiv Detail & Related papers (2020-10-12T19:16:12Z)
- From ImageNet to Image Classification: Contextualizing Progress on Benchmarks [99.19183528305598]
We study how specific design choices in the ImageNet creation process impact the fidelity of the resulting dataset.
Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for.
arXiv Detail & Related papers (2020-05-22T17:39:16Z)