Difficulty-Net: Learning to Predict Difficulty for Long-Tailed Recognition
- URL: http://arxiv.org/abs/2209.02960v1
- Date: Wed, 7 Sep 2022 07:04:08 GMT
- Title: Difficulty-Net: Learning to Predict Difficulty for Long-Tailed Recognition
- Authors: Saptarshi Sinha and Hiroki Ohashi
- Abstract summary: We propose Difficulty-Net, which learns to predict the difficulty of classes using the model's performance in a meta-learning framework.
We introduce two key concepts, namely the relative difficulty and the driver loss.
Experiments on popular long-tailed datasets demonstrated the effectiveness of the proposed method.
- Score: 5.977483447975081
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Long-tailed datasets, where head classes comprise many more training samples than tail classes, cause recognition models to become biased toward the head classes. Weighted loss is one of the most popular ways of mitigating this issue, and recent work has suggested that class difficulty might be a better clue than the conventionally used class frequency for deciding the distribution of weights. A heuristic formulation was used in that work to quantify difficulty, but we empirically find that the optimal formulation varies with the characteristics of the dataset. We therefore propose Difficulty-Net, which learns to predict the difficulty of classes from the model's performance within a meta-learning framework. To make it learn a reasonable difficulty for a class within the context of the other classes, we introduce two key concepts, namely relative difficulty and the driver loss. The former helps Difficulty-Net take other classes into account when computing the difficulty of a class, while the latter is indispensable for guiding the learning in a meaningful direction. Extensive experiments on popular long-tailed datasets demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on multiple long-tailed datasets.
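The abstract leaves the exact formulas to the paper, so the snippet below is only a minimal sketch of the weighting idea, under two assumptions: class difficulty is approximated as one minus per-class validation accuracy, and relative difficulty is that value normalized by the mean over classes. In the actual method a meta-learned network predicts the difficulty rather than this fixed mapping, and the driver loss guides its training.

```python
import torch
import torch.nn.functional as F

def relative_difficulty(class_accuracies: torch.Tensor) -> torch.Tensor:
    # Illustrative assumption: difficulty = 1 - accuracy, normalized by the
    # mean so each class's difficulty is expressed relative to the others.
    difficulty = 1.0 - class_accuracies
    return difficulty / difficulty.mean().clamp(min=1e-8)

def difficulty_weighted_loss(logits, targets, class_accuracies):
    # Weighted cross-entropy: harder classes receive larger weights.
    weights = relative_difficulty(class_accuracies)
    return F.cross_entropy(logits, targets, weight=weights)

# Toy usage with 4 classes; tail classes have lower validation accuracy.
logits = torch.randn(8, 4)
targets = torch.randint(0, 4, (8,))
acc = torch.tensor([0.95, 0.90, 0.60, 0.40])  # head -> tail
loss = difficulty_weighted_loss(logits, targets, acc)
```

Normalizing by the mean is what makes the difficulty relative: a class is weighted up only when it is harder than the average class, which matches the abstract's point that difficulty should be judged in the context of the other classes.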
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
- The Unreasonable Effectiveness of Easy Training Data for Hard Tasks [84.30018805150607]
We present the surprising conclusion that current pretrained language models often generalize relatively well from easy to hard data.
We demonstrate this kind of easy-to-hard generalization using simple methods such as in-context learning, linear classifier heads, and QLoRA.
We conclude that easy-to-hard generalization in LMs is surprisingly strong for the tasks studied.
arXiv Detail & Related papers (2024-01-12T18:36:29Z)
- APAM: Adaptive Pre-training and Adaptive Meta Learning in Language Model for Noisy Labels and Long-tailed Learning [9.433150673299163]
Practical natural language processing (NLP) tasks are commonly long-tailed with noisy labels.
Some commonly used resampling techniques, such as oversampling or undersampling, could easily lead to overfitting.
We propose a general framework to handle both the long-tail problem and noisy labels.
arXiv Detail & Related papers (2023-02-06T18:40:04Z)
- Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely skews data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm that progressively adjusts the label space and divides the head and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z)
- Class-Difficulty Based Methods for Long-Tailed Visual Recognition [6.875312133832079]
Long-tailed datasets are frequently encountered in real-world use cases, where a few classes or categories have a much higher number of data samples than the other classes.
We propose a novel approach to dynamically measure the instantaneous difficulty of each class during the training phase of the model.
We also use the difficulty measures of each class to design a novel weighted loss technique called 'class-wise difficulty based weighting' and a novel data sampling technique called 'class-wise difficulty based sampling'.
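As a hedged sketch of the sampling side (the difficulty measure here, one minus class accuracy, is an assumption rather than the paper's exact formulation), classes can be drawn with probability proportional to their current difficulty:

```python
import numpy as np

def difficulty_sampling_probs(class_accuracies: np.ndarray) -> np.ndarray:
    # Illustrative: harder (typically tail) classes get sampled more often.
    difficulty = 1.0 - class_accuracies
    return difficulty / difficulty.sum()

rng = np.random.default_rng(0)
acc = np.array([0.95, 0.90, 0.60, 0.40])  # head -> tail accuracies
sampled = rng.choice(len(acc), size=16, p=difficulty_sampling_probs(acc))
```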
arXiv Detail & Related papers (2022-07-29T06:33:22Z) - Prototype-Anchored Learning for Learning with Imperfect Annotations [83.7763875464011]
It is challenging to learn unbiased classification models from imperfectly annotated datasets.
We propose a prototype-anchored learning (PAL) method, which can be easily incorporated into various learning-based classification schemes.
We verify the effectiveness of PAL on class-imbalanced learning and noise-tolerant learning by extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-06-23T10:25:37Z) - Let the Model Decide its Curriculum for Multitask Learning [22.043291547405545]
We propose two classes of techniques to arrange training instances into a learning curriculum based on difficulty scores computed via model-based approaches.
We show that instance-level and dataset-level techniques result in strong representations as they lead to an average performance improvement of 4.17% and 3.15% over their respective baselines.
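A minimal sketch of one plausible model-based difficulty score, assuming the per-instance loss under the current model serves as the score (the paper's scoring functions and curriculum arrangements may differ):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def curriculum_order(model, inputs, targets):
    # Score each instance by its loss under the current model, then order
    # from easy (low loss) to hard (high loss) to form a curriculum.
    scores = F.cross_entropy(model(inputs), targets, reduction="none")
    return torch.argsort(scores)  # indices of instances, easiest first
```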
arXiv Detail & Related papers (2022-05-19T23:34:22Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- Class-Wise Difficulty-Balanced Loss for Solving Class-Imbalance [6.875312133832079]
We propose a novel loss function named Class-wise Difficulty-Balanced loss.
It dynamically assigns a weight to each sample according to the difficulty of the class that the sample belongs to.
The results show that CDB loss consistently outperforms the recently proposed loss functions on class-imbalanced datasets.
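A minimal sketch of this class-wise weighting, assuming difficulty is again measured as one minus per-class accuracy and a hypothetical exponent tau controls how strongly the weights track difficulty (the paper defines the exact form):

```python
import torch
import torch.nn.functional as F

def cdb_style_loss(logits, targets, class_accuracies, tau: float = 1.0):
    # Sketch: each sample is weighted by its class's difficulty,
    # sharpened or softened by the exponent tau.
    weights = (1.0 - class_accuracies).pow(tau)
    return F.cross_entropy(logits, targets, weight=weights)
```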
arXiv Detail & Related papers (2020-10-05T07:19:19Z)
- The Devil is the Classifier: Investigating Long Tail Relation Classification with Decoupling Analysis [36.298869931803836]
Long-tailed relation classification is a challenging problem as the head classes may dominate the training phase.
We propose a robust classifier with attentive relation routing, which assigns soft weights by automatically aggregating the relations.
arXiv Detail & Related papers (2020-09-15T12:47:00Z)
- Long-Tailed Recognition Using Class-Balanced Experts [128.73438243408393]
We propose an ensemble of class-balanced experts that combines the strength of diverse classifiers.
Our ensemble of class-balanced experts reaches results close to state-of-the-art and an extended ensemble establishes a new state-of-the-art on two benchmarks for long-tailed recognition.
arXiv Detail & Related papers (2020-04-07T20:57:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.