Long-Tailed Visual Recognition via Permutation-Invariant Head-to-Tail Feature Fusion
- URL: http://arxiv.org/abs/2506.00625v1
- Date: Sat, 31 May 2025 16:31:43 GMT
- Title: Long-Tailed Visual Recognition via Permutation-Invariant Head-to-Tail Feature Fusion
- Authors: Mengke Li, Zhikai Hu, Yang Lu, Weichao Lan, Yiu-ming Cheung, Hui Huang
- Abstract summary: The imbalanced distribution of long-tailed data presents a significant challenge for deep learning models. Two key factors contributing to low recognition accuracy are the deformed representation space and a biased classifier. We propose permutation-invariant and head-to-tail feature fusion (PI-H2T) to address these issues.
- Score: 37.62659619941791
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The imbalanced distribution of long-tailed data presents a significant challenge for deep learning models, causing them to prioritize head classes while neglecting tail classes. Two key factors contributing to low recognition accuracy are the deformed representation space and a biased classifier, stemming from insufficient semantic information in tail classes. To address these issues, we propose permutation-invariant and head-to-tail feature fusion (PI-H2T), a highly adaptable method. PI-H2T enhances the representation space through permutation-invariant representation fusion (PIF), yielding more clustered features and automatic class margins. Additionally, it adjusts the biased classifier by transferring semantic information from head to tail classes via head-to-tail fusion (H2TF), improving tail class diversity. Theoretical analysis and experiments show that PI-H2T optimizes both the representation space and decision boundaries. Its plug-and-play design ensures seamless integration into existing methods, providing a straightforward path to further performance improvements. Extensive experiments on long-tailed benchmarks confirm the effectiveness of PI-H2T.
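The head-to-tail fusion (H2TF) idea described above, transferring semantic information from head-class features into tail-class features, can be sketched minimally. This is an illustrative assumption of one plausible fusion scheme (grafting a random subset of feature channels), not the paper's exact implementation; the function name, fusion ratio, and channel-swap strategy are hypothetical.

```python
import numpy as np

def head_to_tail_fuse(tail_feat, head_feat, fuse_ratio=0.25, rng=None):
    """Hypothetical sketch of head-to-tail fusion: graft a random
    subset of channels from a head-class feature vector into a
    tail-class feature vector, keeping the tail-class label."""
    rng = np.random.default_rng(rng)
    fused = tail_feat.copy()
    n_channels = tail_feat.shape[0]
    n_swap = int(n_channels * fuse_ratio)
    # Pick which channels receive head-class semantics.
    idx = rng.choice(n_channels, size=n_swap, replace=False)
    fused[idx] = head_feat[idx]
    return fused

# Toy usage: an 8-channel tail feature gains 2 head-class channels.
tail = np.zeros(8)
head = np.ones(8)
fused = head_to_tail_fuse(tail, head, fuse_ratio=0.25, rng=0)
```

The fused feature retains the tail-class label during training, so the classifier sees more diverse samples for tail classes, which is the mechanism the abstract credits for adjusting the biased decision boundary.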
Related papers
- Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning and the classifier in an alternating optimization manner to effectively shift the biased decision boundary.
arXiv Detail & Related papers (2024-01-13T04:16:40Z) - Generalized Face Forgery Detection via Adaptive Learning for Pre-trained Vision Transformer [54.32283739486781]
We present a Forgery-aware Adaptive Vision Transformer (FA-ViT) under the adaptive learning paradigm.
FA-ViT achieves 93.83% and 78.32% AUC scores on Celeb-DF and DFDC datasets in the cross-dataset evaluation.
arXiv Detail & Related papers (2023-09-20T06:51:11Z) - Learning Diverse Features in Vision Transformers for Improved Generalization [15.905065768434403]
We show that vision transformers (ViTs) tend to extract robust and spurious features with distinct attention heads.
As a result of this modularity, their performance under distribution shifts can be significantly improved at test time.
We propose a method to further enhance the diversity and complementarity of the learned features by encouraging orthogonality of the attention heads' input gradients.
arXiv Detail & Related papers (2023-08-30T19:04:34Z) - Dual Compensation Residual Networks for Class Imbalanced Learning [98.35401757647749]
We propose Dual Compensation Residual Networks to better fit both tail and head classes.
An important factor causing overfitting is that there is severe feature drift between training and test data on tail classes.
We also propose a Residual Balanced Multi-Proxies classifier to alleviate the under-fitting issue.
arXiv Detail & Related papers (2023-08-25T04:06:30Z) - Feature Fusion from Head to Tail for Long-Tailed Visual Recognition [39.86973663532936]
The biased decision boundary caused by inadequate semantic information in tail classes is one of the key factors contributing to their low recognition accuracy.
We propose to augment tail classes by grafting the diverse semantic information from head classes, referred to as head-to-tail fusion (H2T).
Both theoretical analysis and practical experimentation demonstrate that H2T can contribute to a more optimized solution for the decision boundary.
arXiv Detail & Related papers (2023-06-12T08:50:46Z) - FF2: A Feature Fusion Two-Stream Framework for Punctuation Restoration [27.14686854704104]
We propose a Feature Fusion two-stream framework (FF2) for punctuation restoration.
Specifically, one stream leverages a pre-trained language model to capture the semantic feature, while another auxiliary module captures the feature at hand.
Without additional data, the experimental results on the popular benchmark IWSLT demonstrate that FF2 achieves new SOTA performance.
arXiv Detail & Related papers (2022-11-09T06:18:17Z) - Dual-branch Hybrid Learning Network for Unbiased Scene Graph Generation [87.13847750383778]
We propose a Dual-branch Hybrid Learning network (DHL) to take care of both head predicates and tail ones for Scene Graph Generation (SGG).
We show that our approach achieves a new state-of-the-art performance on VG and GQA datasets.
arXiv Detail & Related papers (2022-07-16T11:53:50Z) - Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.