Class-Aware Universum Inspired Re-Balance Learning for Long-Tailed
Recognition
- URL: http://arxiv.org/abs/2207.12808v1
- Date: Tue, 26 Jul 2022 11:03:39 GMT
- Title: Class-Aware Universum Inspired Re-Balance Learning for Long-Tailed
Recognition
- Authors: Enhao Zhang, Chuanxing Geng, and Songcan Chen
- Abstract summary: We propose Class-aware Universum Inspired Re-balance Learning (CaUIRL) for long-tailed recognition.
We develop a higher-order mixup approach that can automatically generate class-aware Universum (CaU) data without resorting to any external data.
- Score: 24.35287225775304
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data augmentation for minority classes is an effective strategy for
long-tailed recognition, and a large number of methods have been developed
accordingly. Although these methods all ensure balance in sample quantity, the
quality of the augmented samples is not always satisfactory for recognition,
being prone to problems such as over-fitting, lack of diversity, and semantic
drift. To address these issues, we propose Class-aware Universum Inspired
Re-balance Learning (CaUIRL) for long-tailed recognition, which endows the
Universum with class-aware ability to re-balance individual minority classes
in both sample quantity and quality. In particular, we theoretically prove
that the classifiers learned by CaUIRL are consistent with those learned under
the balanced condition from a Bayesian perspective. In addition, we develop a
higher-order mixup approach that can automatically generate class-aware
Universum (CaU) data without resorting to any external data. Unlike the
traditional Universum, the generated Universum additionally takes domain
similarity, class separability, and sample diversity into account. Extensive
experiments on benchmark datasets demonstrate the advantages of our method; in
particular, top-1 accuracy on minority classes improves by 1.9%-6% compared to
the state-of-the-art method.
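The abstract does not spell out the higher-order mixup itself; as a rough, hypothetical illustration of the underlying idea, a plain two-sample mixup between samples of different classes produces data belonging to neither class, which is the defining property of Universum data (function and variable names below are our own, not the authors' implementation):

```python
import numpy as np

def mixup_universum(x_a, x_b, lam=0.5):
    """Mix two samples drawn from *different* classes; the result belongs
    to neither class, i.e. it is Universum-style data. Illustrative only:
    the paper's higher-order mixup combines more than two samples and is
    class-aware."""
    return lam * x_a + (1.0 - lam) * x_b

rng = np.random.default_rng(0)
head_sample = rng.normal(size=(4,))  # stand-in for a majority-class feature
tail_sample = rng.normal(size=(4,))  # stand-in for a minority-class feature
u = mixup_universum(head_sample, tail_sample, lam=0.5)
```

The mixing coefficient `lam` controls how close the synthetic point sits to either class; the class-aware scheme in the paper additionally conditions this generation on the target minority class.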
Related papers
- Exploring Vacant Classes in Label-Skewed Federated Learning [113.65301899666645]
Label skews, characterized by disparities in local label distribution across clients, pose a significant challenge in federated learning.
This paper introduces FedVLS, a novel approach to label-skewed federated learning that integrates vacant-class distillation and logit suppression simultaneously.
arXiv Detail & Related papers (2024-01-04T16:06:31Z) - CUDA: Curriculum of Data Augmentation for Long-Tailed Recognition [10.441880303257468]
Class imbalance problems frequently occur in real-world tasks.
To mitigate this problem, many approaches aim to balance the given classes by re-weighting or re-sampling training samples.
These re-balancing methods increase the impact of minority classes and reduce the influence of majority classes on the model's output.
Several methods have also been developed that enrich the representations of minority samples using features of the majority samples.
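As a minimal sketch of the classic re-weighting baseline mentioned above (this is not CUDA itself, and the helper name is ours), inverse-frequency class weights up-weight minority classes in the loss:

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Classic re-weighting: weight each class inversely to its frequency
    so minority classes contribute more to the training loss."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    # Normalized so that the average weight over samples of a balanced
    # dataset would be 1.
    return counts.sum() / (num_classes * counts)

labels = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2])  # long-tailed label set
w = inverse_frequency_weights(labels, num_classes=3)
# Class 0 (6 samples) gets weight 0.5; class 2 (1 sample) gets weight 3.0.
```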
arXiv Detail & Related papers (2023-02-10T20:30:22Z) - Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely skews the data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm that progressively adjusts the label space and separates the head classes from the tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z) - Long-tail Recognition via Compositional Knowledge Transfer [60.03764547406601]
We introduce a novel strategy for long-tail recognition that addresses the tail classes' few-shot problem.
Our objective is to transfer knowledge acquired from information-rich common classes to semantically similar, and yet data-hungry, rare classes.
Experiments show that our approach can achieve significant performance boosts on rare classes while maintaining robust common class performance.
arXiv Detail & Related papers (2021-12-13T15:48:59Z) - Targeted Supervised Contrastive Learning for Long-Tailed Recognition [50.24044608432207]
Real-world data often exhibits long tail distributions with heavy class imbalance.
We show that while supervised contrastive learning can help improve performance, past baselines suffer from poor uniformity brought about by the imbalanced data distribution.
We propose targeted supervised contrastive learning (TSC), which improves the uniformity of the feature distribution on the hypersphere.
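TSC's exact target construction is not given in this summary; as a crude sketch of the underlying idea, per-class target vectors can be spread out on the unit hypersphere by mutual repulsion, after which features would be pulled toward their class target (all names and the optimization scheme here are our own simplification, not the TSC algorithm):

```python
import numpy as np

def uniform_targets(num_classes, dim, iters=500, lr=0.1, seed=0):
    """Spread `num_classes` unit vectors apart on the hypersphere by
    repeatedly pushing correlated targets away from each other and
    renormalizing, driving pairwise cosine similarities toward zero."""
    rng = np.random.default_rng(seed)
    t = rng.normal(size=(num_classes, dim))
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    for _ in range(iters):
        sim = t @ t.T                 # pairwise cosine similarities
        np.fill_diagonal(sim, 0.0)    # ignore self-similarity
        t -= lr * (sim @ t)           # descent on sum of squared similarities
        t /= np.linalg.norm(t, axis=1, keepdims=True)
    return t

targets = uniform_targets(num_classes=4, dim=8)
```

With more classes than dimensions, exact orthogonality is impossible and the repulsion settles into an approximately uniform layout, which is the regime TSC targets.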
arXiv Detail & Related papers (2021-11-27T22:40:10Z) - Long-Tailed Recognition Using Class-Balanced Experts [128.73438243408393]
We propose an ensemble of class-balanced experts that combines the strength of diverse classifiers.
Our ensemble of class-balanced experts reaches results close to state-of-the-art and an extended ensemble establishes a new state-of-the-art on two benchmarks for long-tailed recognition.
arXiv Detail & Related papers (2020-04-07T20:57:44Z) - Imbalanced Data Learning by Minority Class Augmentation using Capsule
Adversarial Networks [31.073558420480964]
We propose a method to restore balance in imbalanced image data by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalescing of capsule-GAN is effective at recognizing highly overlapping classes with much fewer parameters compared with the convolutional-GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z) - M2m: Imbalanced Classification via Major-to-minor Translation [79.09018382489506]
In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks trained on them struggle to generalize to a balanced testing criterion.
In this paper, we explore a novel yet simple way to alleviate this issue by augmenting less-frequent classes via translating samples from more-frequent classes.
Our experimental results on a variety of class-imbalanced datasets show that the proposed method improves the generalization on minority classes significantly compared to other existing re-sampling or re-weighting methods.
arXiv Detail & Related papers (2020-04-01T13:21:17Z)
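M2m's actual procedure uses a trained deep classifier and a rejection criterion; as a hedged linear-scorer sketch of the translation idea (the function name and the scorer are our own), a majority-class sample is perturbed along the direction that raises the minority class's score:

```python
import numpy as np

def translate_to_minority(x, w_minority, steps=10, step_size=0.5):
    """Move x along the gradient of a linear minority-class score
    w_minority @ x, so the perturbed sample scores higher as the
    minority class. For the linear score, the gradient w.r.t. x is
    simply w_minority."""
    x = x.copy()
    for _ in range(steps):
        x += step_size * w_minority
    return x

rng = np.random.default_rng(1)
w_min = rng.normal(size=3)    # stand-in minority-class weight vector
x_major = rng.normal(size=3)  # stand-in majority-class sample
x_aug = translate_to_minority(x_major, w_min)
```

In M2m proper, the perturbation is an adversarial-style optimization against a deep network rather than a closed-form linear step, but the goal is the same: re-label translated majority samples as minority data for training.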
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.