A Technical Report for VIPriors Image Classification Challenge
- URL: http://arxiv.org/abs/2007.08722v1
- Date: Fri, 17 Jul 2020 02:30:09 GMT
- Title: A Technical Report for VIPriors Image Classification Challenge
- Authors: Zhipeng Luo, Ge Li, Zhiguang Zhang
- Abstract summary: This paper is a brief report on our submission to the VIPriors Image Classification Challenge.
In this challenge, the difficulty lies in training the model from scratch without any pretrained weights.
The final Top-1 accuracy of our team DeepBlueAI is 0.7015, ranking second on the leaderboard.
- Score: 25.421167550087205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image classification has always been a hot and challenging task. This paper
is a brief report on our submission to the VIPriors Image Classification
Challenge. In this challenge, the difficulty lies in training the model from
scratch without any pretrained weights. In our method, several strong backbones
and multiple loss functions are used to learn more representative features. To
improve the models' generalization and robustness, efficient image augmentation
strategies such as AutoAugment and CutMix are utilized. Finally, ensemble
learning is used to further improve the performance of the models. The final
Top-1 accuracy of our team DeepBlueAI is 0.7015, ranking second on the leaderboard.
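To make the recipe above concrete, here is a minimal, illustrative sketch (not the authors' released code) of two of the named ingredients, CutMix augmentation with its mixed loss and softmax-averaged ensembling; the PyTorch-style function names below are hypothetical.

import numpy as np
import torch
import torch.nn.functional as F

def cutmix(images, labels, alpha=1.0):
    """CutMix: paste a random patch from a shuffled copy of the batch and mix
    the labels in proportion to the pasted area."""
    batch_size, _, height, width = images.size()
    lam = np.random.beta(alpha, alpha)                  # mixing ratio
    index = torch.randperm(batch_size)                  # random pairing of samples

    cut_ratio = np.sqrt(1.0 - lam)                      # patch covers (1 - lam) of the image area
    cut_h, cut_w = int(height * cut_ratio), int(width * cut_ratio)
    cy, cx = np.random.randint(height), np.random.randint(width)
    y1, y2 = np.clip(cy - cut_h // 2, 0, height), np.clip(cy + cut_h // 2, 0, height)
    x1, x2 = np.clip(cx - cut_w // 2, 0, width), np.clip(cx + cut_w // 2, 0, width)

    images[:, :, y1:y2, x1:x2] = images[index, :, y1:y2, x1:x2]
    lam = 1.0 - (y2 - y1) * (x2 - x1) / float(height * width)   # correct lam for clipping
    return images, labels, labels[index], lam

def cutmix_loss(logits, labels_a, labels_b, lam):
    """Cross-entropy mixed with the same ratio that was used for the pixels."""
    return lam * F.cross_entropy(logits, labels_a) + (1.0 - lam) * F.cross_entropy(logits, labels_b)

@torch.no_grad()
def ensemble_predict(models, images):
    """Average the softmax outputs of several independently trained backbones."""
    probs = torch.stack([F.softmax(m(images), dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1)

AutoAugment can be applied in the data pipeline in the same spirit (e.g., torchvision's transforms.AutoAugment), and the ensemble step simply averages the class probabilities of the individual backbones before taking the arg max.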
Related papers
- EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training [79.96741042766524]
We reformulate the training curriculum as a soft-selection function.
We show that gradually exposing the contents of natural images can be achieved simply by controlling the intensity of data augmentation.
The resulting method, EfficientTrain++, is simple, general, yet surprisingly effective.
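As a loose illustration of that idea (not the EfficientTrain++ implementation; the function name and the linear schedule are assumptions, and torchvision's RandAugment transform is assumed available), a curriculum can ramp augmentation intensity with training progress:

from torchvision import transforms

def curriculum_transform(epoch, total_epochs, max_magnitude=9, image_size=224):
    """Training transform whose augmentation intensity grows linearly with
    progress, so early epochs see less heavily distorted images."""
    progress = epoch / max(total_epochs - 1, 1)
    magnitude = int(round(progress * max_magnitude))     # 0 at the start, max at the end
    return transforms.Compose([
        transforms.RandomResizedCrop(image_size),
        transforms.RandomHorizontalFlip(),
        transforms.RandAugment(num_ops=2, magnitude=magnitude),
        transforms.ToTensor(),
    ])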
arXiv Detail & Related papers (2024-05-14T17:00:43Z)
- HiCD: Change Detection in Quality-Varied Images via Hierarchical Correlation Distillation [40.03785896317387]
We introduce an innovative training strategy grounded in knowledge distillation.
The core idea revolves around leveraging task knowledge acquired from high-quality image pairs to guide the model's learning.
We develop a hierarchical correlation distillation approach (involving self-correlation, cross-correlation, and global correlation)
arXiv Detail & Related papers (2024-01-19T15:21:51Z)
- Understanding Zero-Shot Adversarial Robustness for Large-Scale Models [31.295249927085475]
We identify and explore the problem of adapting large-scale models for zero-shot adversarial robustness.
We propose a text-guided contrastive adversarial training loss, which aligns the text embeddings and the adversarial visual features with contrastive learning.
Our approach significantly improves zero-shot adversarial robustness over CLIP, with an average improvement of over 31 points across ImageNet and 15 zero-shot datasets.
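Sketched loosely (this is not the paper's code; the CLIP-style setup and all names below are assumptions), such a loss can be written as a contrastive cross-entropy between adversarial image embeddings and per-class text embeddings:

import torch.nn.functional as F

def text_guided_contrastive_loss(adv_image_feats, text_feats, labels, temperature=0.07):
    """Pull each adversarial image embedding toward the text embedding of its
    class and push it away from the other classes' text embeddings."""
    adv_image_feats = F.normalize(adv_image_feats, dim=-1)   # (batch, dim)
    text_feats = F.normalize(text_feats, dim=-1)             # (num_classes, dim)
    logits = adv_image_feats @ text_feats.t() / temperature
    return F.cross_entropy(logits, labels)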
arXiv Detail & Related papers (2022-12-14T04:08:56Z)
- EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers)
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z)
- Get Fooled for the Right Reason: Improving Adversarial Robustness through a Teacher-guided Curriculum Learning Approach [17.654350836042813]
Current SOTA adversarially robust models are mostly based on adversarial training (AT) and differ only by some regularizers either at inner or outer minimization steps.
We propose a non-iterative method that enforces the following ideas during training.
Our method achieves significant performance gains with a little extra effort (10-20%) over existing AT models.
arXiv Detail & Related papers (2021-10-30T17:47:14Z)
- A Technical Report for ICCV 2021 VIPriors Re-identification Challenge [5.940699390639281]
This paper introduces our solution for the re-identification track in VIPriors Challenge 2021.
It employs state-of-the-art data processing strategies, model designs, and post-processing ensemble methods.
The final score of our team (ALONG) is 96.5154% mAP, ranking first on the leaderboard.
arXiv Detail & Related papers (2021-09-30T14:29:31Z)
- A Strong Baseline for the VIPriors Data-Efficient Image Classification Challenge [9.017660524497389]
We present a strong baseline for data-efficient image classification on the VIPriors challenge dataset.
Our baseline achieves 69.7% accuracy and outperforms 50% of submissions to the VIPriors 2021 challenge.
arXiv Detail & Related papers (2021-09-28T08:45:15Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leaves high-frequency artifacts that models can easily spot but that generalize poorly, we propose a new adversarial training method that attempts to blur out these specific artifacts.
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
- Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly [114.81028176850404]
Training generative adversarial networks (GANs) with limited data generally results in deteriorated performance and collapsed models.
We decompose the data-hungry GAN training into two sequential sub-problems.
Such a coordinated framework enables us to focus on lower-complexity and more data-efficient sub-problems.
arXiv Detail & Related papers (2021-02-28T05:20:29Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN)
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)