The Devil is the Classifier: Investigating Long Tail Relation
Classification with Decoupling Analysis
- URL: http://arxiv.org/abs/2009.07022v1
- Date: Tue, 15 Sep 2020 12:47:00 GMT
- Title: The Devil is the Classifier: Investigating Long Tail Relation
Classification with Decoupling Analysis
- Authors: Haiyang Yu, Ningyu Zhang, Shumin Deng, Zonggang Yuan, Yantao Jia,
Huajun Chen
- Abstract summary: Long-tailed relation classification is a challenging problem as the head classes may dominate the training phase.
We propose a robust classifier with attentive relation routing, which assigns soft weights by automatically aggregating the relations.
- Score: 36.298869931803836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long-tailed relation classification is a challenging problem as the head
classes may dominate the training phase, thereby leading to the deterioration
of the tail performance. Existing solutions usually address this issue via
class-balancing strategies, e.g., data re-sampling and loss re-weighting, but
all of these methods entangle the learning of the representation and the
classifier. In this study, we conduct an in-depth empirical investigation into
the long-tailed problem and find that pre-trained models with
instance-balanced sampling already learn good representations for all classes;
moreover, it is possible to achieve better
long-tailed classification ability at low cost by only adjusting the
classifier. Inspired by this observation, we propose a robust classifier with
attentive relation routing, which assigns soft weights by automatically
aggregating the relations. Extensive experiments on two datasets demonstrate
the effectiveness of our proposed approach. Code and datasets are available at
https://github.com/zjunlp/deepke.
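The core idea above, attentive relation routing that assigns soft weights over relations, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the random features, the prototype construction, and the `route` function are hypothetical stand-ins that only show how a softmax over similarity scores yields soft weights over relation prototypes computed from frozen representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen sentence representations from a pre-trained
# encoder: one feature vector per training instance.
num_instances, dim, num_relations = 12, 8, 3
features = rng.normal(size=(num_instances, dim))
labels = np.arange(num_instances) % num_relations  # every relation covered

# Relation prototypes: mean feature of each relation's instances.
prototypes = np.stack([features[labels == r].mean(axis=0)
                       for r in range(num_relations)])

def route(x, prototypes, temperature=1.0):
    """Assign soft weights to a query over relation prototypes
    via a softmax over similarity scores (attention)."""
    scores = prototypes @ x / temperature
    weights = np.exp(scores - scores.max())  # stable softmax
    return weights / weights.sum()

# The soft weights form a probability distribution over relations,
# so tail relations always receive some routing mass.
soft_weights = route(features[0], prototypes)
```

Because only the prototypes and the routing weights are adjusted while the encoder stays frozen, this matches the decoupling observation: the classifier is tuned at low cost on top of representations learned with plain instance-balanced sampling.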
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
- How Re-sampling Helps for Long-Tail Learning? [45.187004699024435]
Long-tail learning has received significant attention due to the challenge posed by extremely imbalanced datasets.
Recent studies claim that re-sampling brings negligible performance improvements in modern long-tail learning tasks.
We propose a new context shift augmentation module that generates diverse training images for the tail class.
arXiv Detail & Related papers (2023-10-27T16:20:34Z)
- Adjusting Logit in Gaussian Form for Long-Tailed Visual Recognition [37.62659619941791]
We study the problem of long-tailed visual recognition from the perspective of feature level.
Two novel logit adjustment methods are proposed to improve model performance at a modest computational overhead.
Experiments conducted on benchmark datasets demonstrate the superior performance of the proposed method over the state-of-the-art ones.
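The paper's specific Gaussian-form adjustments are not reproduced here, but the general logit-adjustment idea it builds on can be sketched in a few lines: subtract scaled log class priors from the raw logits so that a class's low training frequency no longer depresses its score. The function name and the example counts below are hypothetical.

```python
import numpy as np

def adjust_logits(logits, class_counts, tau=1.0):
    """Generic post-hoc logit adjustment: subtract tau * log(prior)
    so rare (tail) classes are not penalized by their low frequency."""
    priors = np.asarray(class_counts) / np.sum(class_counts)
    return logits - tau * np.log(priors)

# Index 0 is a tail class (10 samples), index 1 a head class (1000).
logits = np.array([2.0, 2.1])          # raw scores: head class barely wins
adjusted = adjust_logits(logits, [10, 1000])
print(adjusted.argmax())               # -> 0: the tail class now wins
```

The adjustment only touches the classifier outputs, not the learned features, which is why such methods come at modest computational overhead.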
arXiv Detail & Related papers (2023-05-18T02:06:06Z)
- Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely skews the data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm by progressively adjusting label space and dividing the head classes and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Overcoming Classifier Imbalance for Long-tail Object Detection with Balanced Group Softmax [88.11979569564427]
We provide the first systematic analysis of the underperformance of state-of-the-art models on long-tailed distributions.
We propose a novel balanced group softmax (BAGS) module for balancing the classifiers within the detection frameworks through group-wise training.
Extensive experiments on the very recent long-tail large vocabulary object recognition benchmark LVIS show that our proposed BAGS significantly improves the performance of detectors.
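The group-wise training idea behind balanced group softmax can be sketched minimally: classes are partitioned into groups by training frequency, and the softmax is normalized within each group so that large head-class logits cannot crush tail-class probabilities. This sketch omits BAGS's per-group "others" category and detection machinery; the function and grouping below are illustrative assumptions.

```python
import numpy as np

def group_softmax(logits, groups):
    """Group-wise softmax: normalize logits only within each group of
    classes, so head classes cannot suppress tail classes through a
    single global softmax."""
    probs = np.empty_like(logits, dtype=float)
    for idx in groups:
        g = logits[idx]
        e = np.exp(g - g.max())   # stable softmax within the group
        probs[idx] = e / e.sum()
    return probs

# Hypothetical grouping by frequency: classes 0-1 head, 2-4 tail.
logits = np.array([5.0, 4.0, 0.5, 0.3, 0.2])
groups = [np.array([0, 1]), np.array([2, 3, 4])]
p = group_softmax(logits, groups)
# Each group normalizes to 1 independently, so the tail group's
# probabilities are unaffected by the much larger head logits.
```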
arXiv Detail & Related papers (2020-06-18T10:24:26Z)
- Long-Tailed Recognition Using Class-Balanced Experts [128.73438243408393]
We propose an ensemble of class-balanced experts that combines the strength of diverse classifiers.
Our ensemble of class-balanced experts reaches results close to state-of-the-art and an extended ensemble establishes a new state-of-the-art on two benchmarks for long-tailed recognition.
arXiv Detail & Related papers (2020-04-07T20:57:44Z)
- Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition from a Domain Adaptation Perspective [98.70226503904402]
Object frequency in the real world often follows a power law, leading to a mismatch between the long-tailed class distributions seen during training and the balanced performance expected of models across all classes.
We propose to augment the classic class-balanced learning by explicitly estimating the differences between the class-conditioned distributions with a meta-learning approach.
arXiv Detail & Related papers (2020-03-24T11:28:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.