Leave No Stone Unturned: Mine Extra Knowledge for Imbalanced Facial
Expression Recognition
- URL: http://arxiv.org/abs/2310.19636v1
- Date: Mon, 30 Oct 2023 15:26:26 GMT
- Title: Leave No Stone Unturned: Mine Extra Knowledge for Imbalanced Facial
Expression Recognition
- Authors: Yuhang Zhang, Yaqi Li, Lixiong Qin, Xuannan Liu, Weihong Deng
- Abstract summary: Facial expression data is characterized by a significant imbalance, with most collected data showing happy or neutral expressions and fewer instances of fear or disgust.
This imbalance poses challenges to facial expression recognition (FER) models, hindering their ability to fully understand various human emotional states.
Existing FER methods typically report overall accuracy on highly imbalanced test sets but exhibit low performance in terms of the mean accuracy across all expression classes.
- Score: 39.08466869516571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial expression data is characterized by a significant imbalance, with most
collected data showing happy or neutral expressions and fewer instances of fear
or disgust. This imbalance poses challenges to facial expression recognition
(FER) models, hindering their ability to fully understand various human
emotional states. Existing FER methods typically report overall accuracy on
highly imbalanced test sets but exhibit low performance in terms of the mean
accuracy across all expression classes. In this paper, our aim is to address
the imbalanced FER problem. Existing methods primarily focus on learning
knowledge of minor classes solely from minor-class samples. However, we propose
a novel approach to extract extra knowledge related to the minor classes from
both major and minor class samples. Our motivation stems from the belief that
FER resembles a distribution learning task, wherein a sample may contain
information about multiple classes. For instance, a sample from the major class
surprise might also contain useful features of the minor class fear. Inspired
by that, we propose a novel method that leverages re-balanced attention maps to
regularize the model, enabling it to extract transformation invariant
information about the minor classes from all training samples. Additionally, we
introduce re-balanced smooth labels to regulate the cross-entropy loss, guiding
the model to pay more attention to the minor classes by utilizing the extra
information regarding the label distribution of the imbalanced training data.
Extensive experiments on different datasets and backbones show that the two
proposed modules work together to regularize the model and achieve
state-of-the-art performance under the imbalanced FER task. Code is available
at https://github.com/zyh-uaiaaaa.
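To make the label-side module concrete, here is a minimal sketch of re-balanced smooth labels in PyTorch. It assumes the smoothing mass eps is allocated across classes in proportion to inverse training-set frequency, so minor classes receive more probability mass than uniform label smoothing would give them; the function names, the allocation rule, and the default eps are illustrative, not taken from the authors' repository.

```python
import torch
import torch.nn.functional as F

def rebalanced_smooth_labels(targets, class_counts, eps=0.1):
    """Spread the smoothing mass `eps` across classes in proportion to
    inverse class frequency, so rare classes receive more of it than
    uniform label smoothing would allocate."""
    inv_freq = 1.0 / class_counts.float()                  # (C,)
    smooth = inv_freq / inv_freq.sum()                     # re-balanced smoothing distribution
    one_hot = F.one_hot(targets, class_counts.numel()).float()
    return (1.0 - eps) * one_hot + eps * smooth            # broadcast over the batch

def rebalanced_ce(logits, targets, class_counts):
    """Cross-entropy against the re-balanced soft labels."""
    soft = rebalanced_smooth_labels(targets, class_counts)
    return -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Example: 7 expression classes with a long-tailed training distribution.
counts = torch.tensor([5000, 4000, 2500, 1500, 800, 300, 150])
logits, targets = torch.randn(32, 7), torch.randint(0, 7, (32,))
loss = rebalanced_ce(logits, targets, counts)
```

The attention-side module, which enforces consistency between re-balanced attention maps of a sample and its transformed (e.g., flipped) version, depends on the backbone's attention extraction and is omitted here; see the linked repository for the authors' implementation.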
Related papers
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple Logits Retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
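The summary gives no mechanism, but the stated property (no per-class sample counts needed) is consistent with retargeting the one-hot label itself. A speculative sketch, with the uniform retargeting rule and all names chosen purely for illustration:

```python
import torch

def retargeted_labels(targets, num_classes, true_mass=0.1):
    # Hypothetical: keep a small probability on the true class and spread
    # the remaining mass uniformly over the other classes. Note that this
    # uses no per-class sample counts.
    base = (1.0 - true_mass) / (num_classes - 1)
    labels = torch.full((targets.size(0), num_classes), base)
    labels[torch.arange(targets.size(0)), targets] = true_mass
    return labels
```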
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- When Noisy Labels Meet Long Tail Dilemmas: A Representation Calibration
Method [40.25499257944916]
Real-world datasets are both noisily labeled and class-imbalanced.
We propose a representation calibration method, RCAL.
We derive theoretical results to discuss the effectiveness of our representation calibration.
arXiv Detail & Related papers (2022-11-20T11:36:48Z)
- Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely skews the data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm that progressively adjusts the label space and divides the head and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z)
- Transfer and Share: Semi-Supervised Learning from Long-Tailed Data [27.88381366842497]
We present TRAS (TRAnsfer and Share) to effectively utilize long-tailed semi-supervised data.
TRAS transforms the imbalanced pseudo-label distribution of a traditional SSL model.
It then transfers the distribution to a target model such that the minority class will receive significant attention.
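One plausible reading of "transforming the imbalanced pseudo-label distribution" is to rescale the teacher's class probabilities toward minority classes before distilling them into the target model. A sketch under that assumption (the scaling rule, tau, and all names are guesses, not TRAS's actual formulation):

```python
import torch

def rebalance_pseudo_labels(teacher_probs, class_counts, tau=1.0):
    # Hypothetical: up-weight minority-class probabilities by inverse
    # class frequency, then renormalize each row into a distribution.
    w = (1.0 / class_counts.float()) ** tau        # (C,)
    scaled = teacher_probs * w                     # broadcast over the batch
    return scaled / scaled.sum(dim=1, keepdim=True)
```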
arXiv Detail & Related papers (2022-05-26T13:37:59Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep
Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are commonly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
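As a stripped-down illustration of learning an explicit weighting scheme from data: a tiny MLP maps each sample's loss value to a weight in (0, 1). CMW-Net's class-aware mapping and the meta-gradient update that trains such a network on a small clean meta set are omitted; all names here are illustrative.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Hypothetical sketch: maps a per-sample loss value to a weight in (0, 1)."""
    def __init__(self, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, losses):                      # losses: (B,)
        return self.net(losses.unsqueeze(1)).squeeze(1)

# Usage: down- or up-weight per-sample losses before averaging.
weight_net = WeightNet()
per_sample_loss = torch.rand(32)                    # stand-in for CE losses
weighted_loss = (weight_net(per_sample_loss) * per_sample_loss).mean()
```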
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with
Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over the reweighted data set, where the sample weights are computed via influence functions using the validation set's sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and WebVision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
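A minimal sketch of a prototype classifier in this spirit: prototypes are per-class mean embeddings, and prediction is nearest-prototype by cosine similarity, so nothing beyond the embedding network is fit. The normalization choice is an assumption, not necessarily the paper's.

```python
import torch
import torch.nn.functional as F

def class_prototypes(embeddings, labels, num_classes):
    # One prototype per class: the normalized mean training embedding.
    protos = torch.stack([embeddings[labels == c].mean(dim=0)
                          for c in range(num_classes)])
    return F.normalize(protos, dim=1)

def prototype_predict(embeddings, protos):
    # Assign each sample to its nearest prototype by cosine similarity.
    sims = F.normalize(embeddings, dim=1) @ protos.t()
    return sims.argmax(dim=1)
```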
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Class-Wise Difficulty-Balanced Loss for Solving Class-Imbalance [6.875312133832079]
We propose a novel loss function named Class-wise Difficulty-Balanced loss.
It dynamically distributes weights to each sample according to the difficulty of the class that the sample belongs to.
The results show that CDB loss consistently outperforms the recently proposed loss functions on class-imbalanced datasets.
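To illustrate, a sketch where class difficulty is taken as one minus the class's running accuracy and a hyperparameter tau controls how sharply hard classes are up-weighted; the exact CDB formulation may differ from this guess.

```python
import torch
import torch.nn.functional as F

def cdb_class_weights(class_accuracy, tau=1.0):
    # Harder classes (lower accuracy) receive larger weights.
    difficulty = 1.0 - class_accuracy
    w = difficulty ** tau
    return w * w.numel() / w.sum()                 # normalize to mean 1

# Usage with a standard weighted cross-entropy:
acc = torch.tensor([0.95, 0.90, 0.80, 0.70, 0.55, 0.40, 0.30])
logits, targets = torch.randn(32, 7), torch.randint(0, 7, (32,))
loss = F.cross_entropy(logits, targets, weight=cdb_class_weights(acc))
```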
arXiv Detail & Related papers (2020-10-05T07:19:19Z)
- Imbalanced Data Learning by Minority Class Augmentation using Capsule
Adversarial Networks [31.073558420480964]
We propose a method to restore balance to imbalanced image data by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalesced capsule-GAN is effective at recognizing highly overlapping classes with far fewer parameters than the convolutional-GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)