Towards Semi-Supervised Deep Facial Expression Recognition with An
Adaptive Confidence Margin
- URL: http://arxiv.org/abs/2203.12341v2
- Date: Thu, 24 Mar 2022 02:40:17 GMT
- Authors: Hangyu Li, Nannan Wang, Xi Yang, Xiaoyu Wang, and Xinbo Gao
- Abstract summary: We learn an Adaptive Confidence Margin (Ada-CM) to fully leverage all unlabeled data for semi-supervised deep facial expression recognition.
All unlabeled samples are partitioned into two subsets by comparing their confidence scores with the adaptively learned confidence margin.
Our method achieves state-of-the-art performance, especially surpassing fully-supervised baselines in a semi-supervised manner.
- Score: 92.76372026435858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most semi-supervised learning methods select only part of the
unlabeled data to train models, namely the samples whose confidence scores
exceed a pre-defined threshold (i.e., the confidence margin). We argue that the
recognition performance should be further improved by making full use of all
unlabeled data. In this paper, we learn an Adaptive Confidence Margin (Ada-CM)
to fully leverage all unlabeled data for semi-supervised deep facial expression
recognition. All unlabeled samples are partitioned into two subsets by
comparing their confidence scores with the adaptively learned confidence margin
at each training epoch: (1) subset I including samples whose confidence scores
are no lower than the margin; (2) subset II including samples whose confidence
scores are lower than the margin. For samples in subset I, we constrain their
predictions to match pseudo labels. Meanwhile, samples in subset II participate
in the feature-level contrastive objective to learn effective facial expression
features. We extensively evaluate Ada-CM on four challenging datasets, showing
that our method achieves state-of-the-art performance, especially surpassing
fully-supervised baselines in a semi-supervised manner. Ablation study further
proves the effectiveness of our method. The source code is available at
https://github.com/hangyu94/Ada-CM.
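The partition-and-train scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and where Ada-CM adaptively learns its margin, a labeled-set mean confidence merely stands in.

```python
def partition_by_margin(confidences, margin):
    """Split unlabeled sample indices into subset I (confidence >= margin,
    constrained to match pseudo-labels) and subset II (confidence < margin,
    routed to the feature-level contrastive objective)."""
    subset_one = [i for i, c in enumerate(confidences) if c >= margin]
    subset_two = [i for i, c in enumerate(confidences) if c < margin]
    return subset_one, subset_two


def adaptive_margin(labeled_confidences):
    """Stand-in for the adaptively learned margin: the mean confidence
    on labeled data at the current epoch (the paper learns this value;
    the mean here is only an illustrative proxy)."""
    return sum(labeled_confidences) / len(labeled_confidences)


# One hypothetical epoch: confidences are softmax maxima from some model.
margin = adaptive_margin([0.9, 0.8, 0.7])              # about 0.8
subset_one, subset_two = partition_by_margin([0.95, 0.5, 0.85, 0.3], margin)
print(subset_one, subset_two)                          # [0, 2] [1, 3]
```

Because the margin is recomputed at each epoch, samples migrate between the two subsets as the model's confidence evolves, which is how all unlabeled data eventually contribute to training.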
Related papers
- Memory Consistency Guided Divide-and-Conquer Learning for Generalized
Category Discovery [56.172872410834664]
Generalized category discovery (GCD) aims at addressing a more realistic and challenging setting of semi-supervised learning.
We propose a Memory Consistency guided Divide-and-conquer Learning framework (MCDL)
Our method outperforms state-of-the-art models by a large margin on both seen and unseen classes in generic image recognition.
arXiv Detail & Related papers (2024-01-24T09:39:45Z)
- JointMatch: A Unified Approach for Diverse and Collaborative
Pseudo-Labeling to Semi-Supervised Text Classification [65.268245109828]
Semi-supervised text classification (SSTC) has gained increasing attention due to its ability to leverage unlabeled data.
Existing approaches based on pseudo-labeling suffer from the issues of pseudo-label bias and error accumulation.
We propose JointMatch, a holistic approach for SSTC that addresses these challenges by unifying ideas from recent semi-supervised learning.
arXiv Detail & Related papers (2023-10-23T05:43:35Z)
- Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z)
- Confidence Estimation Using Unlabeled Data [12.512654188295764]
We propose the first confidence estimation method for the semi-supervised setting, where most training labels are unavailable.
We use training consistency as a surrogate function and propose a consistency ranking loss for confidence estimation.
On both image classification and segmentation tasks, our method achieves state-of-the-art performances in confidence estimation.
arXiv Detail & Related papers (2023-07-19T20:11:30Z)
- Exploring the Boundaries of Semi-Supervised Facial Expression Recognition using In-Distribution, Out-of-Distribution, and Unconstrained Data [23.4909421082857]
We present a study of 11 of the most recent semi-supervised methods in the context of facial expression recognition (FER).
Our investigation covers semi-supervised learning from in-distribution, out-of-distribution, unconstrained, and very small unlabelled data.
With an equal number of labelled samples, semi-supervised learning delivers a considerable improvement over supervised learning.
arXiv Detail & Related papers (2023-06-02T01:40:08Z)
- SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised
Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
- FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning [46.95063831057502]
We propose FreeMatch to define and adjust the confidence threshold in a self-adaptive manner according to the model's learning status.
FreeMatch achieves 5.78%, 13.59%, and 1.28% error rate reduction over the latest state-of-the-art method FlexMatch on CIFAR-10 with 1 label per class.
arXiv Detail & Related papers (2022-05-15T10:07:52Z)
- SemiFed: Semi-supervised Federated Learning with Consistency and
Pseudo-Labeling [14.737638416823772]
Federated learning enables multiple clients, such as mobile phones and organizations, to collaboratively learn a shared model for prediction.
In this work, we focus on a new scenario for cross-silo federated learning, where data samples of each client are partially labeled.
We propose a new framework dubbed SemiFed that unifies two dominant approaches for semi-supervised learning: consistency regularization and pseudo-labeling.
arXiv Detail & Related papers (2021-08-21T01:14:27Z)
- Binary Classification from Positive Data with Skewed Confidence [85.18941440826309]
Positive-confidence (Pconf) classification is a promising weakly-supervised learning method.
In practice, the confidence may be skewed by bias arising in an annotation process.
We introduce a parameterized model of the skewed confidence and propose a method for selecting the hyperparameter.
arXiv Detail & Related papers (2020-01-29T00:04:36Z)
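Several entries above (Ada-CM, FreeMatch, SoftMatch) revolve around confidence thresholds that adapt to the model's learning status rather than staying fixed. One common mechanism of this kind can be sketched as an exponential moving average of batch confidence; all names and parameter values here are hypothetical and are not taken from any of the listed papers.

```python
class AdaptiveThreshold:
    """Toy self-adaptive confidence threshold: tracks the model's mean
    prediction confidence with an exponential moving average (EMA)."""

    def __init__(self, init=0.5, momentum=0.9):
        self.value = init          # starting threshold (hypothetical)
        self.momentum = momentum   # EMA decay factor (hypothetical)

    def update(self, batch_confidences):
        """Blend the previous threshold with the batch-mean confidence,
        so the threshold rises as the model becomes more confident."""
        batch_mean = sum(batch_confidences) / len(batch_confidences)
        self.value = self.momentum * self.value + (1 - self.momentum) * batch_mean
        return self.value


threshold = AdaptiveThreshold()
for batch in ([0.6, 0.7], [0.8, 0.9], [0.9, 0.95]):
    threshold.update(batch)
# the threshold has drifted upward from its 0.5 starting point
print(threshold.value)
```

Early in training, when confidences are low, such a threshold admits more pseudo-labels; as training progresses it tightens, which is the quantity-quality trade-off that SoftMatch and FreeMatch address in their own ways.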
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.