Learning Multi-expert Distribution Calibration for Long-tailed Video
Classification
- URL: http://arxiv.org/abs/2205.10788v1
- Date: Sun, 22 May 2022 09:52:34 GMT
- Title: Learning Multi-expert Distribution Calibration for Long-tailed Video
Classification
- Authors: Yufan Hu, Junyu Gao, Changsheng Xu
- Abstract summary: We propose an end-to-end multi-expert distribution calibration method based on two-level distribution information.
By modeling this two-level distribution information, the model can jointly account for both head and tail classes.
Our method achieves state-of-the-art performance on the long-tailed video classification task.
- Score: 88.12433458277168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing state-of-the-art video classification methods assume
that the training data obey a uniform distribution. However, video data in the
real world typically exhibit an imbalanced, long-tailed class distribution,
which biases the model towards the head classes and leads to relatively low
performance on the tail classes. Moreover, current long-tailed classification
methods usually focus on image classification, and adapting them to video data
is not a trivial extension. We propose an end-to-end multi-expert distribution
calibration method based on two-level distribution information to address these
challenges. The method jointly considers the distribution of samples within
each class (intra-class distribution) and the diverse distributions of the
overall data (inter-class distribution) to solve the problem of imbalanced data
under a long-tailed distribution. By modeling this two-level distribution
information, the model can account for both the head and the tail classes and
effectively transfer knowledge from the head classes to improve the performance
of the tail classes. Extensive experiments verify that our method achieves
state-of-the-art performance on the long-tailed video classification task.
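The abstract describes the approach only at a high level. Below is a minimal sketch of feature-level distribution calibration in the spirit described above, assuming per-class Gaussian statistics and a simple nearest-neighbor transfer of head-class statistics to tail classes. All function names, and the `k` and `alpha` parameters, are illustrative rather than from the paper:

```python
import numpy as np

def class_statistics(features, labels, num_classes):
    """Estimate each class's mean and covariance (intra-class distribution)."""
    dim = features.shape[1]
    means, covs = [], []
    for c in range(num_classes):
        feats_c = features[labels == c]
        means.append(feats_c.mean(axis=0))
        # Tail classes may have too few samples for a stable covariance,
        # so add a small ridge term (and fall back to it for singletons).
        cov = np.cov(feats_c, rowvar=False) if len(feats_c) > 1 else np.zeros((dim, dim))
        covs.append(cov + 1e-3 * np.eye(dim))
    return np.stack(means), np.stack(covs)

def calibrate_tail_class(c, means, covs, head_classes, k=2, alpha=0.5):
    """Re-estimate a tail class's distribution by borrowing statistics from
    its k nearest head classes (inter-class transfer)."""
    dists = np.array([np.linalg.norm(means[c] - means[h]) for h in head_classes])
    nearest = [head_classes[i] for i in np.argsort(dists)[:k]]
    mean_c = alpha * means[c] + (1 - alpha) * np.mean([means[h] for h in nearest], axis=0)
    cov_c = alpha * covs[c] + (1 - alpha) * np.mean([covs[h] for h in nearest], axis=0)
    return mean_c, cov_c

def sample_virtual_features(mean, cov, n, seed=0):
    """Draw extra tail-class features from the calibrated Gaussian."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n)
```

Features sampled this way can augment tail-class training of the classifier head. The paper's multi-expert architecture and its video-specific modeling are not specified in the abstract and are omitted here.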
Related papers
- Adjusting Logit in Gaussian Form for Long-Tailed Visual Recognition [37.62659619941791]
We study the problem of long-tailed visual recognition from the feature-level perspective.
Two novel logit adjustment methods are proposed to improve model performance with modest computational overhead (a generic logit-adjustment sketch appears after this list).
Experiments on benchmark datasets demonstrate that the proposed method outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2023-05-18T02:06:06Z)
- Class-Balancing Diffusion Models [57.38599989220613]
Class-Balancing Diffusion Models (CBDM) address imbalanced training data by training the diffusion model with a distribution adjustment regularizer.
The method is benchmarked on the CIFAR100/CIFAR100LT dataset and shows outstanding performance on the downstream recognition task.
arXiv Detail & Related papers (2023-04-30T20:00:14Z)
- Transfer Knowledge from Head to Tail: Uncertainty Calibration under
Long-tailed Distribution [24.734851889816206]
Current calibration techniques treat different classes equally and implicitly assume that the distribution of training data is balanced.
We propose a novel knowledge-transferring-based calibration method by estimating the importance weights for samples of tail classes.
arXiv Detail & Related papers (2023-04-13T13:48:18Z)
- Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely skews data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm that progressively adjusts the label space and separates the head and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z)
- Label-Aware Distribution Calibration for Long-tailed Classification [25.588323749920324]
We propose a Label-Aware Distribution Calibration (LADC) approach to calibrate the distributions of tail classes.
Experiments on both image and text long-tailed datasets demonstrate that LADC significantly outperforms existing methods.
arXiv Detail & Related papers (2021-11-09T01:38:35Z)
- Improving Tail-Class Representation with Centroid Contrastive Learning [145.73991900239017]
We propose interpolative centroid contrastive learning (ICCL) to improve long-tailed representation learning.
ICCL interpolates two images drawn from a class-agnostic sampler and a class-aware sampler, and trains the model so that the representation of the interpolated image can be used to retrieve the centroids of both source classes (a minimal interpolation sketch appears after this list).
Our result shows a significant accuracy gain of 2.8% on the iNaturalist 2018 dataset, which has a real-world long-tailed distribution.
arXiv Detail & Related papers (2021-10-19T15:24:48Z)
- Long-tailed Recognition by Routing Diverse Distribution-Aware Experts [64.71102030006422]
We propose a new long-tailed classifier called RoutIng Diverse Experts (RIDE).
It reduces model variance with multiple experts, reduces model bias with a distribution-aware diversity loss, and reduces computational cost with a dynamic expert routing module (a minimal multi-expert sketch appears after this list).
RIDE outperforms the state-of-the-art by 5% to 7% on the CIFAR100-LT, ImageNet-LT, and iNaturalist 2018 benchmarks.
arXiv Detail & Related papers (2020-10-05T06:53:44Z)
- Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition
from a Domain Adaptation Perspective [98.70226503904402]
Object frequency in the real world often follows a power law, leading to a mismatch between the long-tailed class distributions seen in training datasets and the expectation that models perform well on all classes.
We propose to augment the classic class-balanced learning by explicitly estimating the differences between the class-conditioned distributions with a meta-learning approach.
arXiv Detail & Related papers (2020-03-24T11:28:42Z)
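For the logit-adjustment entry above: the summary does not describe the Gaussian-form method itself, so this sketch shows only the standard class-prior logit adjustment that such methods build on. The `tau` parameter and the toy numbers are illustrative:

```python
import numpy as np

def adjust_logits(logits, class_counts, tau=1.0):
    """Standard prior-based logit adjustment: subtract tau * log(prior)
    so rare classes are not dominated by frequent ones at decision time."""
    prior = class_counts / class_counts.sum()
    return logits - tau * np.log(prior)

# Example: a tail class (10 samples) vs. a head class (1000 samples).
logits = np.array([2.0, 2.1])          # raw scores: head class barely wins
counts = np.array([10.0, 1000.0])
print(adjust_logits(logits, counts).argmax())  # 0: tail class after adjustment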
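For the ICCL entry: a minimal sketch of the interpolation step, assuming mixup-style linear blending of an image from a class-agnostic sampler (uniform over samples) with one from a class-aware sampler (uniform over classes); the exact blending used by ICCL is not given in the summary. The representation of the mix would then be trained to retrieve both class centroids:

```python
import numpy as np

def class_agnostic_sample(images, rng):
    """Uniform over samples: head classes dominate."""
    return rng.integers(len(images))

def class_aware_sample(labels, num_classes, rng):
    """Uniform over classes first, then a sample from that class:
    tail classes appear as often as head classes."""
    c = rng.integers(num_classes)
    return rng.choice(np.flatnonzero(labels == c))

def interpolate(images, labels, num_classes, lam=0.7, seed=0):
    rng = np.random.default_rng(seed)
    i = class_agnostic_sample(images, rng)
    j = class_aware_sample(labels, num_classes, rng)
    mixed = lam * images[i] + (1 - lam) * images[j]
    # The model is trained so that the representation of `mixed`
    # retrieves the centroids of both labels[i] and labels[j].
    return mixed, labels[i], labels[j], lam
```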
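For the RIDE entry: a minimal multi-expert sketch in PyTorch, with a shared backbone, independent expert heads, and averaged logits. The distribution-aware diversity loss and the dynamic routing module are specific to the paper and only indicated in comments; layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class MultiExpert(nn.Module):
    """Shared backbone with several independent classifier heads ("experts").
    Averaging diverse experts reduces variance; RIDE additionally trains the
    experts with a distribution-aware diversity loss and routes easy samples
    through fewer experts to save compute (both omitted here)."""
    def __init__(self, in_dim=32, feat_dim=128, num_classes=100, num_experts=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.experts = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in range(num_experts))

    def forward(self, x):
        h = self.backbone(x)
        expert_logits = [e(h) for e in self.experts]   # one prediction per expert
        return torch.stack(expert_logits).mean(dim=0)  # ensemble by averaging

model = MultiExpert()
logits = model(torch.randn(4, 32))   # batch of 4 inputs with 32 features
print(logits.shape)                  # torch.Size([4, 100])
```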