Toward Errorless Training ImageNet-1k
- URL: http://arxiv.org/abs/2508.04941v1
- Date: Wed, 06 Aug 2025 23:58:56 GMT
- Title: Toward Errorless Training ImageNet-1k
- Authors: Bo Deng, Levi Heath
- Abstract summary: We describe a feedforward artificial neural network trained on the ImageNet 2012 contest dataset. The best performing model uses 322,430,160 parameters, with 4 decimal places precision. We conjecture that the reason our model does not achieve a 100% accuracy rate is due to a double-labeling problem.
- Score: 0.4143603294943439
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we describe a feedforward artificial neural network trained on the ImageNet 2012 contest dataset [7] with the new method of [5] to an accuracy rate of 98.3% with a 99.69% Top-1 rate, and an average of 285.9 labels perfectly classified over the 10 batch partitions of the dataset. The best performing model uses 322,430,160 parameters, with 4 decimal places of precision. We conjecture that the reason our model does not achieve a 100% accuracy rate is a double-labeling problem, whereby duplicate images appear in the dataset with different labels.
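The double-labeling conjecture is straightforward to probe. Below is a minimal sketch, assuming an ImageNet-style directory layout (`root/<class>/<image>.JPEG`) and the third-party `imagehash` and `Pillow` packages (not anything the authors state they used), that flags duplicate images filed under different labels:

```python
# Hedged sketch: scan an ImageNet-style tree for perceptually identical
# images that carry different class labels. Layout and libraries are
# assumptions for illustration, not details from the paper.
from collections import defaultdict
from pathlib import Path

import imagehash
from PIL import Image

def find_cross_label_duplicates(root: str):
    seen = defaultdict(list)  # hash -> [(label, path), ...]
    for path in Path(root).rglob("*.JPEG"):
        label = path.parent.name  # class directory, e.g. a WordNet ID
        with Image.open(path) as img:
            h = str(imagehash.phash(img))  # 64-bit perceptual hash
        seen[h].append((label, path))
    # Keep only hashes whose copies carry more than one distinct label.
    return {h: entries for h, entries in seen.items()
            if len({label for label, _ in entries}) > 1}

if __name__ == "__main__":
    for h, entries in find_cross_label_duplicates("train").items():
        print(h, entries)
```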
Related papers
- On the Image-Based Detection of Tomato and Corn leaves Diseases : An in-depth comparative experiments [0.0]
The research introduces a plant disease detection model based on Convolutional Neural Networks (CNNs) for plant image classification.
The model classifies two distinct plant diseases into four categories, presenting a novel technique for plant disease identification.
arXiv Detail & Related papers (2023-12-14T05:11:30Z)
- Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement [68.44100784364987]
We propose a strategy to improve a dataset once such that the accuracy of any model architecture trained on the reinforced dataset is improved at no additional training cost for users.
We create a reinforced version of the ImageNet training dataset, called ImageNet+, as well as reinforced datasets CIFAR-100+, Flowers-102+, and Food-101+.
Models trained with ImageNet+ are more accurate, robust, and calibrated, and transfer well to downstream tasks.
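As a rough illustration of the reinforcement idea, the sketch below runs a pretrained teacher once over augmented samples and stores sparse soft labels for later students. The teacher choice, top-k sparsification, and storage format are assumptions of mine, not details from the paper:

```python
# Hedged sketch: precompute a teacher's soft labels for augmented samples
# so students can later train against them at no extra teacher cost.
import torch
import torchvision.transforms as T
from torchvision.models import resnet50, ResNet50_Weights

teacher = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()
augment = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def reinforce(image, k=10):
    """Return one augmented tensor plus the teacher's sparse soft label."""
    x = augment(image)
    probs = teacher(x.unsqueeze(0)).softmax(dim=1).squeeze(0)
    top_p, top_i = probs.topk(k)          # keep only the top-k entries
    return x, top_i, top_p / top_p.sum()  # renormalize the kept mass
```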
arXiv Detail & Related papers (2023-03-15T23:10:17Z)
- Invariant Learning via Diffusion Dreamed Distribution Shifts [121.71383835729848]
We propose a dataset called Diffusion Dreamed Distribution Shifts (D3S).
D3S consists of synthetic images generated through StableDiffusion using text prompts and image guides obtained by pasting a sample foreground image onto a background template image.
Due to the incredible photorealism of the diffusion model, our images are much closer to natural images than previous synthetic datasets.
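A minimal sketch of the described recipe using the `diffusers` library follows: paste a foreground onto a background template, then re-render the collage with Stable Diffusion's img2img pipeline guided by a text prompt. The model id, file names, prompt, and strength value are illustrative assumptions, not the D3S paper's settings:

```python
# Hedged sketch of collage-guided img2img generation.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

foreground = Image.open("tabby_cat.png").convert("RGBA")
background = Image.open("beach_template.jpg").convert("RGBA")
guide = background.copy()
guide.paste(foreground, (128, 128), mask=foreground)  # naive collage guide

image = pipe(prompt="a photo of a tabby cat on a beach",
             image=guide.convert("RGB").resize((512, 512)),
             strength=0.6).images[0]  # lower strength keeps the layout
image.save("d3s_sample.png")
```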
arXiv Detail & Related papers (2022-11-18T17:07:43Z)
- (Certified!!) Adversarial Robustness for Free! [116.6052628829344]
We certify 71% accuracy on ImageNet under adversarial perturbations constrained to be within a 2-norm of 0.5.
We obtain these results using only pretrained diffusion models and image classifiers, without requiring any fine-tuning or retraining of model parameters.
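One way to read this setup is randomized smoothing with a diffusion denoiser: add Gaussian noise, denoise with the pretrained diffusion model, classify, and take a majority vote. A minimal sketch, where `denoiser` and `classifier` are hypothetical stand-ins for the frozen pretrained models:

```python
# Hedged sketch of denoised randomized smoothing prediction.
import torch

@torch.no_grad()
def smoothed_predict(x, denoiser, classifier, sigma=0.5, n=100):
    """Majority-vote label for one image tensor x of shape (3, H, W)."""
    votes = torch.zeros(1000, dtype=torch.long)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)    # randomized smoothing
        denoised = denoiser(noisy.unsqueeze(0))    # one-shot diffusion denoise
        label = classifier(denoised).argmax(dim=1) # off-the-shelf classifier
        votes[label] += 1
    return votes.argmax().item()
```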
arXiv Detail & Related papers (2022-06-21T17:27:27Z)
- MIO : Mutual Information Optimization using Self-Supervised Binary Contrastive Learning [12.365801596593936]
We model our pre-training task as a binary classification problem to induce an implicit contrastive effect. Unlike existing methods, the proposed loss function optimizes the mutual information in positive and negative pairs. The proposed method outperforms SOTA self-supervised contrastive frameworks on benchmark datasets.
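A minimal sketch of a binary-contrastive objective in this spirit: score all embedding pairs by scaled cosine similarity and train them as a binary classification problem, with positives on the diagonal. The temperature and pairing scheme are my assumptions, not the paper's exact loss:

```python
# Hedged sketch: contrastive learning as pairwise binary classification.
import torch
import torch.nn.functional as F

def binary_contrastive_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature                # (N, N) similarities
    targets = torch.eye(len(z1), device=z1.device)  # diagonal = positive pairs
    return F.binary_cross_entropy_with_logits(logits, targets)
```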
arXiv Detail & Related papers (2021-11-24T17:51:29Z)
- Correlated Input-Dependent Label Noise in Large-Scale Image Classification [4.979361059762468]
We take a principled probabilistic approach to modelling input-dependent, also known as heteroscedastic, label noise in datasets.
We demonstrate that the learned covariance structure captures known sources of label noise between semantically similar and co-occurring classes.
We set a new state-of-the-art result on WebVision 1.0 with 76.6% top-1 accuracy.
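A minimal sketch of an input-dependent noise head: predict mean logits plus a per-class noise scale, sample noisy logits, and average the softmax over Monte Carlo samples. The paper learns a correlated (low-rank) covariance; the diagonal form here is a simplification of mine:

```python
# Hedged sketch of a heteroscedastic classification head.
import torch
import torch.nn as nn

class HeteroscedasticHead(nn.Module):
    def __init__(self, dim, classes, samples=10):
        super().__init__()
        self.mean = nn.Linear(dim, classes)
        self.log_scale = nn.Linear(dim, classes)
        self.samples = samples

    def forward(self, h):
        mu, scale = self.mean(h), self.log_scale(h).exp()
        # Sample noisy logits and average class probabilities over draws.
        eps = torch.randn(self.samples, *mu.shape, device=h.device)
        return (mu + scale * eps).softmax(dim=-1).mean(dim=0)
```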
arXiv Detail & Related papers (2021-05-19T17:30:59Z)
- Post-training deep neural network pruning via layer-wise calibration [70.65691136625514]
We propose a data-free extension of the approach for computer vision models based on automatically-generated synthetic fractal images.
When using real data, we are able to get a ResNet50 model on ImageNet with a 65% sparsity rate in 8-bit precision in a post-training setting.
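A minimal sketch of the layer-wise calibration step for a single linear layer: prune by magnitude, then re-fit the surviving weights so the pruned layer reproduces the dense layer's outputs on a small calibration batch. The sparsity level, optimizer, and restriction to a linear layer are assumptions; the real method also covers conv layers, quantization, and synthetic fractal calibration data:

```python
# Hedged sketch of per-layer output reconstruction after pruning.
import torch

def prune_and_calibrate(weight, x_calib, sparsity=0.65, steps=100):
    """weight: (out, in) dense weights; x_calib: (N, in) calibration inputs."""
    target = x_calib @ weight.T  # outputs of the original dense layer
    mask = (weight.abs() >= weight.abs().quantile(sparsity)).float()
    w = (weight * mask).clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=1e-3)
    for _ in range(steps):  # layer-wise reconstruction of dense outputs
        loss = ((x_calib @ (w * mask).T - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (w * mask).detach()
```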
arXiv Detail & Related papers (2021-04-30T14:20:51Z)
- With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations [87.72779294717267]
Using the nearest neighbor as a positive in contrastive losses significantly improves performance on ImageNet classification.
We demonstrate empirically that our method is less reliant on complex data augmentations.
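A minimal sketch of the nearest-neighbor positive trick: retrieve each embedding's nearest neighbor from a support queue of past embeddings and use it, instead of the second augmented view, as the positive in an InfoNCE-style loss. Queue handling and temperature here are assumptions:

```python
# Hedged sketch of a nearest-neighbor contrastive loss.
import torch
import torch.nn.functional as F

def nn_contrastive_loss(z1, z2, queue, temperature=0.1):
    """z1, z2: (N, D) two views; queue: (Q, D) past embeddings."""
    z1, z2, queue = (F.normalize(t, dim=1) for t in (z1, z2, queue))
    nn_idx = (z1 @ queue.T).argmax(dim=1)    # nearest neighbor of view 1
    positives = queue[nn_idx]                # neighbor replaces the view
    logits = positives @ z2.T / temperature  # (N, N) similarity logits
    labels = torch.arange(len(z1), device=z1.device)
    return F.cross_entropy(logits, labels)
```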
arXiv Detail & Related papers (2021-04-29T17:56:08Z)
- Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks [12.992191397900806]
We identify label errors in the test sets of 10 of the most commonly used computer vision, natural language, and audio datasets.
We estimate an average of 3.4% errors across the 10 datasets, where for example 2916 label errors comprise 6% of the ImageNet validation set.
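Label issues of this kind can be surfaced with confident learning; a minimal sketch using the `cleanlab` package's `find_label_issues` (a 2.x-era API; the file names and ranking choice are illustrative assumptions):

```python
# Hedged sketch: rank suspected label errors from out-of-sample
# predicted probabilities via confident learning.
import numpy as np
from cleanlab.filter import find_label_issues

labels = np.load("val_labels.npy")          # given (possibly noisy) labels
pred_probs = np.load("val_pred_probs.npy")  # out-of-sample model probabilities

issue_idx = find_label_issues(labels=labels, pred_probs=pred_probs,
                              return_indices_ranked_by="self_confidence")
print(f"{len(issue_idx)} suspected label errors; worst first:", issue_idx[:10])
```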
arXiv Detail & Related papers (2021-03-26T21:54:36Z)
- Fixing the train-test resolution discrepancy: FixEfficientNet [98.64315617109344]
This paper provides an analysis of the performance of the EfficientNet image classifiers with several recent training procedures.
The resulting network, called FixEfficientNet, significantly outperforms the initial architecture with the same number of parameters.
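A minimal sketch of the resolution mismatch this line of work addresses: training-time RandomResizedCrop makes objects appear larger than a plain test-time center crop does, so evaluating at a higher resolution restores the apparent object size. The resolutions below are illustrative, not the tuned FixEfficientNet values, and the brief classifier fine-tune at test resolution is omitted:

```python
# Hedged sketch of mismatched train/test preprocessing resolutions.
import torchvision.transforms as T

train_tf = T.Compose([
    T.RandomResizedCrop(224),  # zoomed-in training crops
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
test_tf = T.Compose([
    T.Resize(313),             # scale up at test time ...
    T.CenterCrop(288),         # ... and crop at a higher resolution
    T.ToTensor(),
])
```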
arXiv Detail & Related papers (2020-03-18T14:22:58Z)