Combating Representation Learning Disparity with Geometric Harmonization
- URL: http://arxiv.org/abs/2310.17622v1
- Date: Thu, 26 Oct 2023 17:41:11 GMT
- Title: Combating Representation Learning Disparity with Geometric Harmonization
- Authors: Zhihan Zhou and Jiangchao Yao and Feng Hong and Ya Zhang and Bo Han
and Yanfeng Wang
- Abstract summary: We propose a novel Geometric Harmonization (GH) method to encourage category-level uniformity in representation learning.
Our proposal does not alter the setting of SSL and can be easily integrated into existing methods in a low-cost manner.
- Score: 50.29859682439571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning (SSL) as an effective paradigm of representation
learning has achieved tremendous success on various curated datasets in diverse
scenarios. Nevertheless, when facing the long-tailed distributions of
real-world applications, existing methods still struggle to capture
transferable and robust representations. Conventional SSL methods, by pursuing
sample-level uniformity, easily lead to representation learning disparity,
where head classes dominate the feature regime while tail classes passively
collapse. To address this problem, we propose a novel Geometric Harmonization
(GH) method to encourage category-level uniformity in representation learning,
which is more benign to the minority classes and barely hurts the majority
under a long-tailed distribution. Specifically, GH measures the population
statistics of the embedding space on top of self-supervised learning, and then
infers a fine-grained instance-wise calibration to constrain the space
expansion of head classes and
avoid the passive collapse of tail classes. Our proposal does not alter the
setting of SSL and can be easily integrated into existing methods in a low-cost
manner. Extensive results on a range of benchmark datasets demonstrate the
effectiveness of GH and its high tolerance to distribution skewness. Our code
is available at https://github.com/MediaBrain-SJTU/Geometric-Harmonization.
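As a rough illustration of the category-level calibration idea, the hypothetical sketch below estimates surrogate category statistics with learnable prototypes and reweights a per-sample SSL loss accordingly. The prototype count, the soft-assignment rule, and the inverse-mass weighting are illustrative assumptions, not the authors' actual GH objective; consult the released code for the real one.

```python
# Hypothetical sketch in the spirit of GH, not the authors' implementation.
import torch
import torch.nn.functional as F

def gh_style_weights(z: torch.Tensor, prototypes: torch.Tensor,
                     tau: float = 0.1) -> torch.Tensor:
    """z: (B, D) embeddings; prototypes: (K, D) surrogate category anchors."""
    z = F.normalize(z, dim=1)
    p = F.normalize(prototypes, dim=1)
    assign = (z @ p.t() / tau).softmax(dim=1)   # (B, K) soft assignments
    # Population statistics: estimated mass of each surrogate category.
    mass = assign.mean(dim=0).clamp_min(1e-6)   # (K,)
    # Instance-wise calibration: up-weight samples in sparse (tail-like)
    # regions, down-weight samples in crowded (head-like) regions.
    w = (assign / mass).sum(dim=1)              # (B,)
    return w / w.mean()                         # normalize to mean 1

# Usage: scale any per-sample SSL loss, e.g. a SimCLR-style objective.
# loss = (gh_style_weights(z, prototypes) * per_sample_loss).mean()
```

In this reading, the weights play the role of the fine-grained instance-wise calibration described in the abstract, which leaves the underlying SSL objective untouched.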
Related papers
- Covariance-based Space Regularization for Few-shot Class Incremental Learning [25.435192867105552]
Few-shot Class Incremental Learning (FSCIL) requires the model to continually learn new classes with limited labeled data.
Due to the limited data in incremental sessions, models are prone to overfitting new classes and suffering catastrophic forgetting of base classes.
Recent advancements resort to prototype-based approaches to constrain the base class distribution and learn discriminative representations of new classes.
arXiv Detail & Related papers (2024-11-02T08:03:04Z)
- Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely biases data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm by progressively adjusting label space and dividing the head classes and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z)
- Local overlap reduction procedure for dynamic ensemble selection [13.304462985219237]
Class imbalance is a characteristic known for making learning more challenging for classification models.
We propose a dynamic selection (DS) technique that attempts to minimize the effects of local class overlap during the classification procedure.
Experimental results show that the proposed technique can significantly outperform the baseline.
arXiv Detail & Related papers (2022-06-16T21:31:05Z)
- Targeted Supervised Contrastive Learning for Long-Tailed Recognition [50.24044608432207]
Real-world data often exhibits long-tailed distributions with heavy class imbalance.
We show that while supervised contrastive learning can help improve performance, past baselines suffer from poor uniformity brought on by the imbalanced data distribution.
We propose targeted supervised contrastive learning (TSC), which improves the uniformity of the feature distribution on the hypersphere.
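A minimal sketch of this idea follows, assuming the class targets are spread on the hypersphere by pairwise repulsion and that features are pulled to their class target by a cosine attraction term; the paper's exact target matching and loss may differ.

```python
# Hedged sketch of TSC-style uniform targets; details are assumptions.
import torch
import torch.nn.functional as F

def uniform_targets(num_classes: int, dim: int, steps: int = 200) -> torch.Tensor:
    """Spread class targets on the unit hypersphere via pairwise repulsion."""
    t = torch.randn(num_classes, dim, requires_grad=True)
    opt = torch.optim.SGD([t], lr=0.1)
    eye = torch.eye(num_classes)
    for _ in range(steps):
        tn = F.normalize(t, dim=1)
        loss = ((tn @ tn.t() - eye) ** 2).mean()  # push pairwise similarity to 0
        opt.zero_grad(); loss.backward(); opt.step()
    return F.normalize(t.detach(), dim=1)

def tsc_pull_loss(z: torch.Tensor, y: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """z: (B, D) L2-normalized features; y: (B,) labels; targets: (C, D)."""
    return (1.0 - (z * targets[y]).sum(dim=1)).mean()  # cosine attraction
```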
arXiv Detail & Related papers (2021-11-27T22:40:10Z)
- Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
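A minimal sketch of such a step, assuming the per-sample weights form a softmax over mini-batch losses with temperature lam (the paper's exact weighting may differ); a positive lam emphasizes hard or minority samples.

```python
# Hedged sketch of an ABSGD-style weighted step; the weighting rule is assumed.
import torch

def absgd_step(model, per_sample_loss_fn, batch, opt, lam: float = 1.0) -> float:
    losses = per_sample_loss_fn(model, batch)    # (B,) per-sample losses
    with torch.no_grad():
        w = torch.softmax(losses / lam, dim=0)   # attentional importance weights
    loss = (w * losses).sum()                    # weighted mini-batch objective
    opt.zero_grad(); loss.backward(); opt.step() # plain momentum-SGD update
    return loss.item()

# Usage with momentum SGD, matching the "simple modification" above:
# opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
```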
arXiv Detail & Related papers (2020-12-13T03:41:52Z)
- Unsupervised Feature Learning by Cross-Level Instance-Group Discrimination [68.83098015578874]
We integrate between-instance similarity into contrastive learning, not directly by instance grouping, but by cross-level discrimination (CLD).
CLD effectively brings unsupervised learning closer to natural data and real-world applications.
CLD sets a new state-of-the-art on self-supervision, semi-supervision, and transfer learning benchmarks, beating MoCo v2 and SimCLR on every reported metric.
arXiv Detail & Related papers (2020-08-09T21:13:13Z)
- Boosting Few-Shot Learning With Adaptive Margin Loss [109.03665126222619]
This paper proposes an adaptive margin principle to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems.
Extensive experiments demonstrate that the proposed method can boost the performance of current metric-based meta-learning approaches.
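One plausible instantiation, assuming a cosine-similarity prototype classifier in which the margin between two classes grows with their prototype similarity; the margin generator here is an assumption for illustration, not the paper's learned one.

```python
# Hedged sketch of an adaptive-margin loss for metric-based few-shot learning.
import torch
import torch.nn.functional as F

def adaptive_margin_loss(z, y, prototypes, scale: float = 10.0, alpha: float = 0.5):
    """z: (B, D), prototypes: (C, D), both L2-normalized; y: (B,) labels."""
    logits = scale * (z @ prototypes.t())               # cosine logits (B, C)
    # Margin grows with how similar a rival class is to the true class,
    # so semantically close classes are separated more aggressively.
    proto_sim = (prototypes @ prototypes.t()).detach()  # (C, C)
    margin = alpha * proto_sim[y]                       # (B, C)
    margin.scatter_(1, y.unsqueeze(1), 0.0)             # no margin on the target
    return F.cross_entropy(logits + scale * margin, y)
```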
arXiv Detail & Related papers (2020-05-28T07:58:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.