Learn From All: Erasing Attention Consistency for Noisy Label Facial
Expression Recognition
- URL: http://arxiv.org/abs/2207.10299v1
- Date: Thu, 21 Jul 2022 04:30:33 GMT
- Authors: Yuhang Zhang, Chengrui Wang, Xu Ling and Weihong Deng
- Abstract summary: We explore dealing with noisy labels from a new feature-learning perspective.
We find that FER models remember noisy samples by focusing on a subset of features associated with the noisy labels.
Inspired by that, we propose a novel Erasing Attention Consistency (EAC) method to suppress the noisy samples.
- Score: 37.83484824542302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Noisy label Facial Expression Recognition (FER) is more challenging than
traditional noisy label classification tasks due to the inter-class similarity
and the annotation ambiguity. Recent works mainly tackle this problem by
filtering out large-loss samples. In this paper, we explore dealing with noisy
labels from a new feature-learning perspective. We find that FER models
remember noisy samples by focusing on a subset of features associated with
the noisy labels, instead of learning from the full set of features that
leads to the latent truth. Inspired by that, we propose a novel
Erasing Attention Consistency (EAC) method to suppress the noisy samples during
the training process automatically. Specifically, we first utilize the flip
semantic consistency of facial images to design an imbalanced framework. We
then randomly erase input images and use flip attention consistency to prevent
the model from focusing on a part of the features. EAC significantly
outperforms state-of-the-art noisy label FER methods and generalizes well to
other tasks with a large number of classes like CIFAR100 and Tiny-ImageNet. The
code is available at
https://github.com/zyh-uaiaaaa/Erasing-Attention-Consistency.
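The core mechanism, flip attention consistency, can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' implementation: attention maps and images are represented as 2D lists, and the names `hflip`, `random_erase`, and `flip_consistency_loss` are hypothetical.

```python
import random

def hflip(att):
    """Horizontally flip a 2D map (list of rows)."""
    return [list(reversed(row)) for row in att]

def random_erase(img, rng=None, fill=0.0):
    """Zero out a random rectangular patch of the image, as in the
    random-erasing augmentation the abstract describes."""
    rng = rng or random.Random()
    h, w = len(img), len(img[0])
    top, left = rng.randrange(h), rng.randrange(w)
    bottom = rng.randrange(top, h) + 1
    right = rng.randrange(left, w) + 1
    out = [row[:] for row in img]
    for i in range(top, bottom):
        for j in range(left, right):
            out[i][j] = fill
    return out

def flip_consistency_loss(att_orig, att_flipped):
    """Mean squared difference between the attention map computed on the
    flipped image and the flipped attention map of the original image.
    Noisy samples tend to violate this consistency, so penalizing the
    difference suppresses them during training."""
    target = hflip(att_orig)
    total, n = 0.0, 0
    for row_t, row_f in zip(target, att_flipped):
        for t, f in zip(row_t, row_f):
            total += (t - f) ** 2
            n += 1
    return total / n
```

In training, the total objective would combine the classification loss on the randomly erased images with this consistency term, weighted by a hyperparameter.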
Related papers
- Clean Label Disentangling for Medical Image Segmentation with Noisy Labels [25.180056839942345]
Current methods focusing on medical image segmentation suffer from incorrect annotations, which is known as the noisy label issue.
We propose a class-balanced sampling strategy to tackle the class-imbalanced problem.
We extend our clean label disentangling framework to a new noisy feature-aided clean label disentangling framework.
arXiv Detail & Related papers (2023-11-28T07:54:27Z)
- CAPro: Webly Supervised Learning with Cross-Modality Aligned Prototypes [93.71909293023663]
Cross-modality Aligned Prototypes (CAPro) is a unified contrastive learning framework to learn visual representations with correct semantics.
CAPro achieves new state-of-the-art performance and exhibits robustness to open-set recognition.
arXiv Detail & Related papers (2023-10-15T07:20:22Z)
- Generative Reasoning Integrated Label Noise Robust Deep Image Representation Learning [0.0]
We introduce a generative reasoning integrated label noise robust deep representation learning (GRID) approach.
Our approach aims to model the complementary characteristics of discriminative and generative reasoning for image representation learning (IRL) under noisy labels.
It learns discriminative image representations while preventing interference from noisy labels, independently of the IRL method selected.
arXiv Detail & Related papers (2022-12-02T15:57:36Z)
- Learning advisor networks for noisy image classification [22.77447144331876]
We introduce the novel concept of advisor network to address the problem of noisy labels in image classification.
We trained it with a meta-learning strategy so that it can adapt throughout the training of the main model.
We tested our method on CIFAR10 and CIFAR100 with synthetic noise, and on Clothing1M which contains real-world noise, reporting state-of-the-art results.
arXiv Detail & Related papers (2022-11-08T11:44:08Z)
- Large Loss Matters in Weakly Supervised Multi-Label Classification [50.262533546999045]
We first regard unobserved labels as negative labels, casting the weakly supervised multi-label (WSML) task into noisy multi-label classification.
We propose novel methods for WSML which reject or correct large-loss samples to prevent the model from memorizing noisy labels.
Our methodology works well in practice, validating that treating large losses properly matters in weakly supervised multi-label classification.
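The rejection step can be sketched as follows; the name `reject_large_loss` and the fixed rejection ratio are illustrative choices, not the paper's exact schedule.

```python
def reject_large_loss(losses, reject_ratio=0.2):
    """Return the indices of samples kept after discarding the fraction of
    samples with the largest losses, a common heuristic for filtering
    suspected noisy labels."""
    n_reject = int(len(losses) * reject_ratio)
    # Sort sample indices by ascending loss and keep the smallest-loss ones.
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    keep = order[:len(losses) - n_reject]
    return sorted(keep)
```

A correction variant would, instead of dropping a rejected sample, flip its assumed-negative label rather than discard it.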
arXiv Detail & Related papers (2022-06-08T08:30:24Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
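The intersection idea can be sketched as: keep a pixel's pseudo-label only where every augmented view (mapped back to the original image geometry) predicts the same class, and mark it as ignored otherwise. The names here (`intersect_pseudo_labels`, the `IGNORE` sentinel) are illustrative, not from the paper.

```python
IGNORE = -1  # sentinel for pixels whose pseudo-labels disagree across views

def intersect_pseudo_labels(label_maps):
    """Given per-view 2D pseudo-label maps (already aligned back to the
    original image geometry), keep a pixel's label only where all views
    agree; disagreeing pixels are marked IGNORE and excluded from training."""
    first, rest = label_maps[0], label_maps[1:]
    out = []
    for i, row in enumerate(first):
        out_row = []
        for j, lab in enumerate(row):
            agree = all(m[i][j] == lab for m in rest)
            out_row.append(lab if agree else IGNORE)
        out.append(out_row)
    return out
```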
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Facial Emotion Recognition with Noisy Multi-task Annotations [88.42023952684052]
We introduce a new problem of facial emotion recognition with noisy multi-task annotations.
For this new problem, we suggest a formulation from the perspective of joint distribution matching.
We exploit a new method to enable emotion prediction and joint distribution learning.
arXiv Detail & Related papers (2020-10-19T20:39:37Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- ExpertNet: Adversarial Learning and Recovery Against Noisy Labels [8.88412854076574]
We propose a novel framework, composed of Amateur and Expert, which iteratively learn from each other.
The trained Amateur and Expert proactively leverage the images and their noisy labels to infer image classes.
Our empirical evaluations on noisy versions of CIFAR-10, CIFAR-100 and real-world data of Clothing1M show that the proposed model can achieve robust classification against a wide range of noise ratios.
arXiv Detail & Related papers (2020-07-10T11:12:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.