Geometric Prior Guided Feature Representation Learning for Long-Tailed
Classification
- URL: http://arxiv.org/abs/2401.11436v1
- Date: Sun, 21 Jan 2024 09:16:29 GMT
- Title: Geometric Prior Guided Feature Representation Learning for Long-Tailed
Classification
- Authors: Yanbiao Ma, Licheng Jiao, Fang Liu, Shuyuan Yang, Xu Liu, Puhua Chen
- Abstract summary: We propose to leverage the geometric information of the feature distribution of the well-represented head class to guide the model to learn the underlying distribution of the tail class.
It aims to make the perturbed features cover the underlying distribution of the tail class as much as possible, thus improving the model's generalization performance in the test domain.
- Score: 49.90107582624604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world data are long-tailed, and the lack of tail samples
significantly limits the generalization ability of the model. Although
numerous class re-balancing approaches perform well under moderate class
imbalance, additional knowledge must be introduced to help the tail class
recover its underlying true distribution when the observed distribution,
formed from only a few tail samples, fails to represent it properly, thus
allowing the model to learn valuable information outside the observed
domain. In this work, we propose to leverage the geometric information
of the feature distribution of the well-represented head class to guide the
model to learn the underlying distribution of the tail class. Specifically, we
first systematically define the geometry of the feature distribution and the
similarity measures between the geometries, and discover four phenomena
regarding the relationship between the geometries of different feature
distributions. Then, based on these four phenomena, a feature uncertainty
representation is proposed to perturb the tail features by utilizing the
geometry of the head class feature distribution. It aims to make the perturbed
features cover the underlying distribution of the tail class as much as
possible, thus improving the model's generalization performance in the test
domain. Finally, we design a three-stage training scheme enabling feature
uncertainty modeling to be successfully applied. Experiments on
CIFAR-10/100-LT, ImageNet-LT, and iNaturalist2018 show that our proposed
approach outperforms other similar methods on most metrics. In addition, the
experimental phenomena we discovered are able to provide new perspectives and
theoretical foundations for subsequent studies.
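As an illustration of the core idea only, not the paper's actual algorithm, the following minimal sketch displaces each scarce tail feature with noise drawn from a head class's covariance, so the augmented samples spread over the presumed underlying tail distribution. The function name and parameters (perturb_tail_features, n_aug, scale) are hypothetical.

```python
import numpy as np

def perturb_tail_features(tail_feats, head_feats, n_aug=4, scale=1.0, seed=None):
    """Hypothetical sketch: augment tail-class features with noise whose
    shape (covariance) is borrowed from a well-represented head class.

    tail_feats: (n_tail, d) observed tail-class features
    head_feats: (n_head, d) features of a geometrically similar head class
    Returns (n_tail * n_aug, d) perturbed features.
    """
    rng = np.random.default_rng(seed)
    # The head class's feature geometry, summarized by its covariance.
    cov = np.cov(head_feats, rowvar=False)
    # Zero-mean noise with head-class geometry; `scale` controls how far
    # perturbed features are pushed beyond the observed tail samples.
    noise = rng.multivariate_normal(
        np.zeros(head_feats.shape[1]), scale * cov,
        size=tail_feats.shape[0] * n_aug)
    # Each tail feature spawns n_aug perturbed copies intended to cover
    # more of the tail class's underlying distribution.
    return np.repeat(tail_feats, n_aug, axis=0) + noise
```

In a scheme like the paper's three-stage training, such augmentation would plausibly be applied when re-training the classifier on top of learned features; the paper itself defines how the guiding head class is chosen via geometric similarity.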
Related papers
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
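The entry above mentions a spectral graph-theoretic estimate of the number of potential novel classes without giving details. A common device for such estimates is the eigengap heuristic on the graph Laplacian; the sketch below illustrates that generic heuristic, assuming an affinity matrix over samples is available, and is not the cited paper's exact estimator.

```python
import numpy as np

def estimate_num_classes(affinity, k_max=20):
    """Eigengap heuristic: build the normalized graph Laplacian from a
    symmetric affinity matrix and take the largest gap among its smallest
    eigenvalues as the estimated cluster count.
    (Generic illustration; the cited paper's estimator may differ.)
    """
    deg = affinity.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    lap = np.eye(len(affinity)) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]
    eigvals = np.sort(np.linalg.eigvalsh(lap))[:k_max]
    gaps = np.diff(eigvals)          # gaps between consecutive eigenvalues
    return int(np.argmax(gaps)) + 1  # count of near-zero eigenvalues
```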
- Unleashing the power of Neural Collapse for Transferability Estimation [42.09673383041276]
Well-trained models exhibit the phenomenon of Neural Collapse.
We propose a novel method termed Fair Collapse (FaCe) for transferability estimation.
FaCe yields state-of-the-art performance on different tasks including image classification, semantic segmentation, and text classification.
arXiv Detail & Related papers (2023-10-09T14:30:10Z)
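FaCe's transferability score is defined in the cited paper; as generic background on Neural Collapse only, a standard way to quantify how collapsed a representation is compares within-class to between-class feature variability, sketched below (illustrative, not FaCe itself).

```python
import numpy as np

def within_between_ratio(feats, labels):
    """Generic Neural Collapse indicator: ratio of within-class to
    between-class feature variability. A well-collapsed model drives
    this ratio toward zero. (Background sketch, not FaCe's score.)

    feats: (n, d) penultimate-layer features; labels: (n,) class ids.
    """
    global_mean = feats.mean(axis=0)
    within, between = 0.0, 0.0
    for c in np.unique(labels):
        fc = feats[labels == c]
        mu_c = fc.mean(axis=0)
        within += ((fc - mu_c) ** 2).sum()
        between += len(fc) * ((mu_c - global_mean) ** 2).sum()
    return within / between
```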
- Predicting and Enhancing the Fairness of DNNs with the Curvature of Perceptual Manifolds [44.79535333220044]
Recent studies have shown that tail classes are not always hard to learn, and model bias has been observed on sample-balanced datasets.
In this work, we first establish a geometric perspective for analyzing model fairness and then systematically propose a series of geometric measurements.
arXiv Detail & Related papers (2023-03-22T04:49:23Z)
- Modeling Uncertain Feature Representation for Domain Generalization [49.129544670700525]
We show that our method consistently improves the network generalization ability on multiple vision tasks.
Our methods are simple yet effective and can be readily integrated into networks without additional trainable parameters or loss constraints.
arXiv Detail & Related papers (2023-01-16T14:25:02Z)
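The summary above does not spell out the mechanism. One published formulation consistent with it treats instance-wise channel statistics as Gaussian random variables whose variance is estimated across the batch, then resamples them to re-normalize the features; the sketch below assumes that formulation, and all names are illustrative.

```python
import numpy as np

def perturb_feature_statistics(x, eps=1e-6, seed=None):
    """Sketch of uncertainty modeling over feature statistics
    (assumed formulation; the cited paper's details may differ).

    x: (B, C, H, W) feature maps. Channel means and standard deviations
    are resampled with batch-estimated variance, then used to
    re-normalize the features.
    """
    rng = np.random.default_rng(seed)
    mu = x.mean(axis=(2, 3), keepdims=True)        # (B, C, 1, 1)
    sig = x.std(axis=(2, 3), keepdims=True) + eps  # (B, C, 1, 1)
    # Uncertainty of the statistics themselves, estimated over the batch.
    sig_mu = mu.std(axis=0, keepdims=True)         # (1, C, 1, 1)
    sig_sig = sig.std(axis=0, keepdims=True)       # (1, C, 1, 1)
    # Resample plausible statistics and re-normalize the features.
    new_mu = mu + rng.standard_normal(mu.shape) * sig_mu
    new_sig = sig + rng.standard_normal(sig.shape) * sig_sig
    return new_sig * (x - mu) / sig + new_mu
```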
- Bias-inducing geometries: an exactly solvable data model with fairness implications [13.690313475721094]
We introduce an exactly solvable high-dimensional model of data imbalance.
We analytically unpack the typical properties of learning models trained in this synthetic framework.
We obtain exact predictions for the observables that are commonly employed for fairness assessment.
arXiv Detail & Related papers (2022-05-31T16:27:57Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
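To make the two-module structure above concrete, the toy construction below builds the kind of feature-data graph the upper GNN would message-pass over: data nodes on one side, feature nodes on the other, linked where a datum exhibits a feature. This is purely illustrative; the cited paper's graph construction may differ.

```python
import numpy as np

def feature_data_graph(X, thresh=0.0):
    """Toy bipartite feature-data graph: data point i is linked to
    feature j whenever |X[i, j]| > thresh. Returns a dense adjacency
    over n data nodes followed by d feature nodes.
    (Illustrative only; not the paper's exact construction.)
    """
    n, d = X.shape
    adj = np.zeros((n + d, n + d), dtype=np.float32)
    mask = np.abs(X) > thresh  # (n, d) data-feature incidence
    adj[:n, n:] = mask         # data -> feature edges
    adj[n:, :n] = mask.T       # feature -> data edges
    return adj
```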
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out of distribution samples as well as the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks [38.153825455980645]
Recent empirical evidence indicates that the practice of overparameterization not only benefits training large models, but also assists - perhaps counterintuitively - building lightweight models.
This paper sheds light on these empirical findings by theoretically characterizing the high-dimensional asymptotics of model pruning.
We analytically identify regimes in which, even if the location of the most informative features is known, we are better off fitting a large model and then pruning.
arXiv Detail & Related papers (2020-12-16T05:13:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.