MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks
- URL: http://arxiv.org/abs/2009.08453v2
- Date: Fri, 19 Mar 2021 17:40:19 GMT
- Title: MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks
- Authors: Zhiqiang Shen and Marios Savvides
- Abstract summary: We introduce a framework that is able to boost the vanilla ResNet-50 to 80%+ Top-1 accuracy on ImageNet without tricks.
Our method obtains 80.67% top-1 accuracy on ImageNet with a vanilla ResNet-50 using a single 224x224 crop.
On the smaller ResNet-18, our framework consistently improves top-1 accuracy from 69.76% to 73.19%.
- Score: 57.69809561405253
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a simple yet effective distillation framework that is able to
boost the vanilla ResNet-50 to 80%+ Top-1 accuracy on ImageNet without tricks.
We construct such a framework by analyzing the problems in existing
classification systems and simplifying the base method, ensemble knowledge
distillation via discriminators, in two ways: (1) adopting the similarity loss
and discriminator only on the final outputs and (2) using the average of softmax
probabilities from all teacher ensembles as stronger supervision.
Intriguingly, three novel perspectives are presented for distillation: (1)
weight decay can be weakened or even completely removed since the soft label
also has a regularization effect; (2) using a good initialization for students
is critical; and (3) one-hot/hard label is not necessary in the distillation
process if the weights are well initialized. We show that such a
straightforward framework can achieve state-of-the-art results without any
commonly used techniques, such as architecture modification, training data
beyond ImageNet, AutoAugment/RandAugment, cosine learning rate schedules,
mixup/CutMix training, or label smoothing. Our method obtains 80.67% top-1
accuracy on ImageNet using a single crop size of 224x224 with a vanilla
ResNet-50, outperforming the previous state of the art by a significant margin
under the same network structure. Our result can be regarded as a strong
baseline for knowledge distillation, and to the best of our knowledge, this is
also the first method that boosts a vanilla ResNet-50 past 80% top-1 accuracy
on ImageNet without architecture modification or additional training data. On
the smaller ResNet-18, our distillation framework consistently improves top-1
accuracy from 69.76% to 73.19%, which demonstrates its practical value in
real-world applications.
Our code and models are available at: https://github.com/szq0214/MEAL-V2.
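For concreteness, here is a minimal PyTorch-style sketch of the recipe the abstract describes, based only on our reading of it rather than the released code: the student is trained against the averaged teacher softmax with a KL similarity loss plus a discriminator on the final outputs, and no hard-label term. The tiny MLP discriminator, function names, and optimizer handling are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Tiny binary classifier that tries to tell teacher from student probabilities
    (illustrative stand-in for the paper's discriminator)."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 128), nn.ReLU(inplace=True), nn.Linear(128, 1))

    def forward(self, probs):
        return self.net(probs)

def distillation_step(student, teachers, discriminator, images,
                      opt_student, opt_disc):
    # Soft label: average of softmax probabilities over the teacher ensemble.
    with torch.no_grad():
        teacher_probs = torch.stack(
            [F.softmax(t(images), dim=1) for t in teachers]).mean(dim=0)

    student_logits = student(images)
    student_log_probs = F.log_softmax(student_logits, dim=1)

    # (1) Similarity loss on the final outputs only (KL divergence to the soft
    # label). Note: no one-hot/hard-label cross-entropy term is used.
    kl_loss = F.kl_div(student_log_probs, teacher_probs, reduction='batchmean')

    # (2) Adversarial loss: the student tries to make its output probabilities
    # indistinguishable from the teachers' for the discriminator.
    student_probs = student_log_probs.exp()
    adv_loss = F.binary_cross_entropy_with_logits(
        discriminator(student_probs),
        torch.ones(images.size(0), 1, device=images.device))

    opt_student.zero_grad()
    (kl_loss + adv_loss).backward()
    opt_student.step()

    # Discriminator update: real = teacher probabilities, fake = student probabilities.
    d_real = discriminator(teacher_probs)
    d_fake = discriminator(student_probs.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()
    return kl_loss.item(), d_loss.item()
```

Consistent with the three observations in the abstract, `opt_student` would typically be created with weight decay weakened or set to zero, and the student would start from ImageNet-pretrained weights (e.g. `torchvision.models.resnet50(pretrained=True)`).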
Related papers
- A Simple and Generic Framework for Feature Distillation via Channel-wise Transformation [35.233203757760066]
We propose a learnable nonlinear channel-wise transformation to align the features of the student and the teacher model.
Our method achieves significant performance improvements in various computer vision tasks.
arXiv Detail & Related papers (2023-03-23T12:13:29Z)
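As a rough illustration of the channel-wise feature-distillation idea summarized above (an assumption-based sketch, not that paper's actual transformation), the student's feature map is passed through a small learnable per-channel nonlinear map before being matched to the frozen teacher's features:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelwiseTransform(nn.Module):
    """Small learnable nonlinear transform applied independently to each channel
    of the student's feature map (illustrative; not the paper's exact design)."""
    def __init__(self, num_channels):
        super().__init__()
        self.w1 = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.b1 = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.w2 = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.b2 = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, x):  # x: (B, C, H, W)
        # Two per-channel affine maps with a nonlinearity in between; channels
        # are never mixed, so the transform stays strictly channel-wise.
        return self.w2 * F.relu(self.w1 * x + self.b1) + self.b2

def feature_distill_loss(student_feat, teacher_feat, transform):
    # Align the transformed student features with the frozen teacher features.
    return F.mse_loss(transform(student_feat), teacher_feat.detach())
```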
- Masked Autoencoders Enable Efficient Knowledge Distillers [31.606287119666572]
This paper studies the potential of distilling knowledge from pre-trained models, especially Masked Autoencoders.
We minimize the distance between the intermediate feature map of the teacher model and that of the student model.
Our method can robustly distill knowledge from teacher models even with extremely high masking ratios.
arXiv Detail & Related papers (2022-08-25T17:58:59Z)
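A hedged sketch of the masked feature-distillation objective summarized above: `student_encoder`, `teacher_encoder`, and `proj` are placeholder modules, the loss is applied to encoder outputs rather than a specific intermediate layer, and feeding the same visible patches to both models is our simplification.

```python
import torch
import torch.nn.functional as F

def masked_feature_distill(student_encoder, teacher_encoder, proj, patches,
                           mask_ratio=0.75):
    """Illustrative sketch: distill token features from a frozen, masked-autoencoder
    pre-trained teacher to a student using only a small set of visible patches.
    `patches` has shape (B, N, D); `proj` maps the student width to the teacher width."""
    B, N, _ = patches.shape
    num_keep = max(1, int(N * (1 - mask_ratio)))

    # Randomly keep a small subset of patches (extremely high masking ratio).
    keep = torch.rand(B, N, device=patches.device).argsort(dim=1)[:, :num_keep]
    visible = torch.gather(
        patches, 1, keep.unsqueeze(-1).expand(-1, -1, patches.size(-1)))

    with torch.no_grad():
        target = teacher_encoder(visible)      # (B, num_keep, D_teacher)
    pred = proj(student_encoder(visible))      # project to the teacher's width

    # Minimize the distance between teacher and student feature maps.
    return F.mse_loss(pred, target)
```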
- Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification [57.5041270212206]
We present BAtch Knowledge Ensembling (BAKE) to produce refined soft targets for anchor images.
BAKE achieves online knowledge ensembling across multiple samples with only a single network.
It requires minimal computational and memory overhead compared to existing knowledge ensembling methods.
arXiv Detail & Related papers (2021-04-27T16:11:45Z)
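The batch knowledge ensembling idea above can be pictured with the following sketch; the affinity computation, the single propagation step, and the `omega` mixing weight are our simplifications rather than the published procedure.

```python
import torch
import torch.nn.functional as F

def bake_soft_targets(features, logits, temperature=4.0, omega=0.5):
    """Illustrative sketch of batch knowledge ensembling: refine each sample's
    soft target by mixing in the predictions of similar samples from the same
    mini-batch, using only a single network."""
    with torch.no_grad():
        probs = F.softmax(logits / temperature, dim=1)

        # Pairwise affinities between L2-normalized features; self-affinity removed.
        z = F.normalize(features, dim=1)
        affinity = z @ z.t()
        affinity.fill_diagonal_(float('-inf'))
        weights = F.softmax(affinity, dim=1)   # row-normalized neighbor weights

        # Ensemble the neighbors' predictions into each sample's soft target.
        return omega * probs + (1.0 - omega) * (weights @ probs)

def bake_loss(logits, refined_targets, temperature=4.0):
    # Self-distillation toward the batch-ensembled soft targets.
    log_probs = F.log_softmax(logits / temperature, dim=1)
    return F.kl_div(log_probs, refined_targets, reduction='batchmean') * temperature ** 2
```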
- DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning [94.89221799550593]
Self-supervised representation learning (SSL) has received widespread attention from the community.
Recent research argues that its performance suffers a cliff-like drop when the model size decreases.
We propose a simple yet effective Distilled Contrastive Learning (DisCo) method to alleviate this issue by a large margin.
arXiv Detail & Related papers (2021-04-19T08:22:52Z)
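One way to read the distilled contrastive learning idea above is an embedding-consistency term between the lightweight student and a frozen, self-supervised-pretrained teacher; the exact loss and how it is combined with the student's contrastive objective are assumptions here.

```python
import torch
import torch.nn.functional as F

def disco_distill_loss(student_embed, teacher_embed):
    """Illustrative sketch: push the small student's projection embedding toward
    the frozen self-supervised teacher's embedding of the same augmented view.
    Shapes are assumed to be (B, D) after the projection heads; the published
    method pairs this with the student's usual contrastive objective."""
    s = F.normalize(student_embed, dim=1)
    t = F.normalize(teacher_embed.detach(), dim=1)
    # Squared Euclidean distance between unit vectors (2 - 2 * cosine similarity).
    return (2 - 2 * (s * t).sum(dim=1)).mean()
```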
- Beyond Self-Supervision: A Simple Yet Effective Network Distillation Alternative to Improve Backbones [40.33419553042038]
We propose to improve existing baseline networks via knowledge distillation from off-the-shelf pre-trained big powerful models.
Our solution performs distillation by only driving prediction of the student model consistent with that of the teacher model.
We empirically find that such a simple distillation setting is extremely effective; for example, the top-1 accuracy of MobileNetV3-large and ResNet50-D on the ImageNet-1k validation set can be significantly improved.
arXiv Detail & Related papers (2021-03-10T09:32:44Z)
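Since the summary above describes distillation driven purely by prediction consistency with an off-the-shelf teacher, a minimal sketch looks like the following; the temperature knob and the absence of any label or feature-level term are assumptions consistent with that summary.

```python
import torch
import torch.nn.functional as F

def prediction_consistency_loss(student_logits, teacher_logits, temperature=1.0):
    """Sketch: the only training signal is making the student's prediction
    consistent with that of a pretrained teacher (no ground-truth labels)."""
    teacher_probs = F.softmax(teacher_logits.detach() / temperature, dim=1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction='batchmean') * temperature ** 2
```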
- Improved Residual Networks for Image and Video Recognition [98.10703825716142]
Residual networks (ResNets) represent a powerful type of convolutional neural network (CNN) architecture.
We show consistent improvements in accuracy and learning convergence over the baseline.
Our proposed approach allows us to train extremely deep networks, while the baseline shows severe optimization issues.
arXiv Detail & Related papers (2020-04-10T11:09:50Z)
- Fixing the train-test resolution discrepancy: FixEfficientNet [98.64315617109344]
This paper provides an analysis of the performance of the EfficientNet image classifiers with several recent training procedures.
The resulting network, called FixEfficientNet, significantly outperforms the initial architecture with the same number of parameters.
arXiv Detail & Related papers (2020-03-18T14:22:58Z)
- Picking Winning Tickets Before Training by Preserving Gradient Flow [9.67608102763644]
We argue that efficient training requires preserving the gradient flow through the network.
We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet.
arXiv Detail & Related papers (2020-02-18T05:14:47Z)
- A Simple Framework for Contrastive Learning of Visual Representations [116.37752766922407]
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
We show that composition of data augmentations plays a critical role in defining effective predictive tasks.
We are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet.
arXiv Detail & Related papers (2020-02-13T18:50:45Z)
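For reference, a minimal sketch of the SimCLR-style contrastive (NT-Xent) loss described above; the projection heads and the augmentation pipeline are omitted, and `z1`, `z2` are assumed to be the (B, D) projected embeddings of the two augmented views.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Minimal sketch of the SimCLR objective: the two augmented views of each
    image are positives and every other image in the batch is a negative."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D)
    sim = (z @ z.t()) / temperature                      # scaled cosine similarities
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))           # exclude self-similarity

    # The positive for sample i is its other view at index (i + B) mod 2B.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)]).to(z.device)
    return F.cross_entropy(sim, targets)
```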