Dynamic Class Queue for Large Scale Face Recognition In the Wild
- URL: http://arxiv.org/abs/2105.11113v1
- Date: Mon, 24 May 2021 06:31:10 GMT
- Title: Dynamic Class Queue for Large Scale Face Recognition In the Wild
- Authors: Bi Li, Teng Xi, Gang Zhang, Haocheng Feng, Junyu Han, Jingtuo Liu,
Errui Ding, Wenyu Liu
- Abstract summary: This work focuses on computing resource constraints and the long-tailed class distribution.
We propose a dynamic class queue (DCQ) to tackle these two problems.
We empirically verify on large-scale datasets that 10% of the classes are sufficient to achieve performance similar to using all classes.
- Score: 45.3063075576461
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learning discriminative representation using large-scale face datasets in the
wild is crucial for real-world applications, yet it remains challenging. The
difficulties lie in many aspects, and this work focuses on computing resource
constraints and the long-tailed class distribution. Recently, classification-based
representation learning with deep neural networks and well-designed losses has
demonstrated good recognition performance. However, the computing and memory
cost scales linearly with the number of identities (classes) in the training
set, and the learning process suffers from unbalanced classes. In this work, we
propose a dynamic class queue (DCQ) to tackle these two problems. Specifically,
at each training iteration, a subset of classes is dynamically selected for
recognition, and their class weights are generated on-the-fly and stored in a
queue. Since only a subset of classes is
selected for each iteration, the computing requirement is reduced. By using a
single server without model parallelism, we empirically verify on large-scale
datasets that 10% of the classes are sufficient to achieve performance similar to
using all classes. Moreover, the class weights are dynamically generated in a
few-shot manner and therefore suitable for tail classes with only a few
instances. We show a clear improvement over a strong baseline on the largest
public dataset, MegaFace Challenge 2 (MF2), which has 672K identities, over 88%
of which have fewer than 10 instances. Code is available at
https://github.com/bilylee/DCQ
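For intuition, the training step described in the abstract can be sketched in a few lines of PyTorch. This is a minimal sketch, not the repository's implementation: it assumes a MoCo-style momentum-updated copy of the encoder serves as the few-shot weight generator, one support image per selected class, and cosine-similarity logits. The class name, hyperparameters, and queue bookkeeping are illustrative.

```python
import torch
import torch.nn.functional as F

class DCQSketch(torch.nn.Module):
    """Illustrative DCQ training step: class weights are generated on-the-fly
    from support images and kept in a fixed-size queue (assumptions as noted)."""

    def __init__(self, encoder, generator, queue_size=8192, dim=512, m=0.999):
        super().__init__()
        self.encoder = encoder      # f(x) -> (B, dim) embeddings, trained by backprop
        self.generator = generator  # momentum copy of the encoder; produces class weights
        self.m = m
        self.register_buffer("queue", F.normalize(torch.randn(queue_size, dim), dim=1))
        self.register_buffer("queue_ids", torch.full((queue_size,), -1, dtype=torch.long))
        self.ptr = 0

    @torch.no_grad()
    def _momentum_update(self):
        # slowly track the encoder, MoCo-style (an assumption, not stated in the abstract)
        for p_e, p_g in zip(self.encoder.parameters(), self.generator.parameters()):
            p_g.data.mul_(self.m).add_(p_e.data, alpha=1.0 - self.m)

    @torch.no_grad()
    def _enqueue(self, weights, ids):
        n = weights.shape[0]
        idx = (self.ptr + torch.arange(n, device=weights.device)) % self.queue.shape[0]
        self.queue[idx] = weights
        self.queue_ids[idx] = ids
        self.ptr = int((self.ptr + n) % self.queue.shape[0])

    def forward(self, images, support_images, ids, scale=30.0):
        feats = F.normalize(self.encoder(images), dim=1)  # (B, dim)
        with torch.no_grad():
            self._momentum_update()
            # few-shot weight generation: one support image per selected class
            weights = F.normalize(self.generator(support_images), dim=1)
            self._enqueue(weights, ids)
        # classify only against the classes currently held in the queue
        logits = scale * feats @ self.queue.t()  # (B, queue_size)
        # target = first queue slot holding this sample's class weight
        targets = (ids.unsqueeze(1) == self.queue_ids.unsqueeze(0)).float().argmax(dim=1)
        return F.cross_entropy(logits, targets)
```

Because the softmax runs only over the classes currently in the queue, compute and memory no longer scale with the total number of identities, and generating weights from support images gives tail classes usable weights even with very few instances.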
Related papers
- Knowledge Adaptation Network for Few-Shot Class-Incremental Learning [23.90555521006653]
Few-shot class-incremental learning aims to incrementally recognize new classes using a few samples.
One effective way to address this challenge is to construct prototypical evolution classifiers.
However, because the representations for new classes are weak and biased, we argue that such a strategy is suboptimal.
arXiv Detail & Related papers (2024-09-18T07:51:38Z)
- GMM-IL: Image Classification using Incrementally Learnt, Independent Probabilistic Models for Small Sample Sizes [0.4511923587827301]
We present a novel two-stage architecture that couples visual feature learning with probabilistic models to represent each class.
We outperform a benchmark of an equivalent network with a Softmax head, obtaining increased accuracy for sample sizes smaller than 12 and increased weighted F1 score for 3 imbalanced class profiles.
arXiv Detail & Related papers (2022-12-01T15:19:42Z)
- Data augmentation on-the-fly and active learning in data stream classification [9.367903089535684]
There is an emerging need for predictive models to be trained on-the-fly.
With on-the-fly augmentation, learning models have access to more labelled data without the need to increase the active learning budget.
Augmented Queues significantly improve performance in terms of learning quality and speed.
arXiv Detail & Related papers (2022-10-13T09:57:08Z)
- Class-Difficulty Based Methods for Long-Tailed Visual Recognition [6.875312133832079]
Long-tailed datasets are frequently encountered in real-world use cases, where a few classes have a much higher number of data samples than the others.
We propose a novel approach to dynamically measure the instantaneous difficulty of each class during the training phase of the model.
We also use the difficulty measure of each class to design a novel weighted loss technique called 'class-wise difficulty based weighted loss' and a novel data sampling technique called 'class-wise difficulty based sampling' (sketched in code after this list).
arXiv Detail & Related papers (2022-07-29T06:33:22Z)
- Do Deep Networks Transfer Invariances Across Classes? [123.84237389985236]
We show how a generative approach for learning the nuisance transformations can help transfer invariances across classes.
Our results provide one explanation for why classifiers generalize poorly on unbalanced and long-tailed distributions.
arXiv Detail & Related papers (2022-03-18T04:38:18Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- ResLT: Residual Learning for Long-tailed Recognition [64.19728932445523]
We propose a more fundamental perspective for long-tailed recognition, i.e., from the aspect of parameter space.
We design an effective residual fusion mechanism: one main branch is optimized to recognize images from all classes, while two residual branches are gradually fused and optimized to enhance recognition of the medium+tail classes and the tail classes, respectively.
We test our method on several benchmarks, i.e., long-tailed version of CIFAR-10, CIFAR-100, Places, ImageNet, and iNaturalist 2018.
arXiv Detail & Related papers (2021-01-26T08:43:50Z)
- Feature Space Augmentation for Long-Tailed Data [74.65615132238291]
Real-world data often follow a long-tailed distribution as the frequency of each class is typically different.
Class-balanced loss and advanced methods on data re-sampling and augmentation are among the best practices to alleviate the data imbalance problem.
We present a novel approach to address the long-tailed problem by augmenting the under-represented classes in the feature space with the features learned from the classes with ample samples.
arXiv Detail & Related papers (2020-08-09T06:38:00Z)
- Many-Class Few-Shot Learning on Multi-Granularity Class Hierarchy [57.68486382473194]
We study the many-class few-shot (MCFS) problem in both supervised learning and meta-learning settings.
In this paper, we leverage the class hierarchy as a prior knowledge to train a coarse-to-fine classifier.
The model, "memory-augmented hierarchical-classification network (MahiNet)", performs coarse-to-fine classification where each coarse class can cover multiple fine classes.
arXiv Detail & Related papers (2020-06-28T01:11:34Z)
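As a concrete illustration of one idea above, here is a minimal sketch of the class-wise difficulty based weighted loss referenced in the Class-Difficulty Based Methods entry. The difficulty measure below (one minus a running per-class accuracy) and its normalization are assumptions made for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

class DifficultyWeightedLoss:
    """Illustrative class-difficulty weighted cross-entropy: the difficulty
    definition and normalization are assumptions, not the paper's exact recipe."""

    def __init__(self, num_classes, momentum=0.9, tau=1.0):
        self.acc = torch.ones(num_classes)  # running per-class accuracy estimate
        self.momentum = momentum
        self.tau = tau                      # sharpness of the difficulty weighting

    def __call__(self, logits, targets):
        with torch.no_grad():
            preds = logits.argmax(dim=1)
            # update the running accuracy of every class seen in this batch
            for c in targets.unique():
                mask = targets == c
                batch_acc = (preds[mask] == c).float().mean()
                self.acc[c] = self.momentum * self.acc[c] + (1 - self.momentum) * batch_acc
            # harder classes (lower running accuracy) receive larger loss weights
            difficulty = (1.0 - self.acc).clamp(min=1e-3) ** self.tau
            weights = difficulty / difficulty.mean()
        return F.cross_entropy(logits, targets, weight=weights.to(logits.device))
```

Classes the model currently misclassifies more often receive proportionally larger loss weights, which is the core of the class-difficulty weighting idea; the companion sampling technique applies the same difficulty signal to data selection instead of the loss.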