Boosting Facial Expression Recognition by A Semi-Supervised Progressive Teacher
- URL: http://arxiv.org/abs/2205.14361v1
- Date: Sat, 28 May 2022 07:47:53 GMT
- Title: Boosting Facial Expression Recognition by A Semi-Supervised Progressive Teacher
- Authors: Jing Jiang and Weihong Deng
- Abstract summary: We propose a semi-supervised learning algorithm named Progressive Teacher (PT) to utilize reliable FER datasets as well as large-scale unlabeled expression images for effective training.
Experiments on widely-used databases RAF-DB and FERPlus validate the effectiveness of our method, which achieves state-of-the-art performance with accuracy of 89.57% on RAF-DB.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we aim to improve the performance of in-the-wild Facial
Expression Recognition (FER) by exploiting semi-supervised learning.
Large-scale labeled data and deep learning methods have greatly improved the
performance of image recognition. However, the performance of FER is still not
ideal due to the lack of training data and incorrect annotations (e.g., label
noises). Among existing in-the-wild FER datasets, reliable ones contain
insufficient data to train robust deep models, while large-scale ones are
annotated with lower-quality labels. To address this problem, we propose a
semi-supervised learning algorithm named Progressive Teacher (PT) to utilize
reliable FER datasets as well as large-scale unlabeled expression images for
effective training. On the one hand, PT introduces a semi-supervised learning
method to relieve the shortage of data in FER. On the other hand, it selects
useful labeled training samples automatically and progressively to alleviate
label noise. PT uses selected clean labeled data for computing the supervised
classification loss and unlabeled data for unsupervised consistency loss.
Experiments on widely-used databases RAF-DB and FERPlus validate the
effectiveness of our method, which achieves state-of-the-art performance with
accuracy of 89.57% on RAF-DB. Additionally, even when the synthetic noise rate
reaches 30%, the performance of our PT algorithm degrades by only 4.37%.
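The abstract describes two combined objectives: a supervised classification loss computed only on automatically selected clean labeled samples, and an unsupervised consistency loss on unlabeled data. A minimal NumPy sketch of that combination follows; the small-loss selection rule, the `keep_ratio` parameter, and the MSE consistency term are illustrative assumptions, not the paper's exact progressive selection mechanism.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Per-sample cross-entropy for integer class labels."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def pt_style_loss(probs_labeled, labels, probs_u1, probs_u2,
                  keep_ratio=0.7, consistency_weight=1.0):
    """Supervised loss on a presumed-clean subset plus an unsupervised
    consistency loss, mirroring the abstract's description."""
    per_sample = cross_entropy(probs_labeled, labels)
    k = max(1, int(keep_ratio * len(per_sample)))
    clean = np.argsort(per_sample)[:k]   # low-loss samples are likely clean
    supervised = per_sample[clean].mean()
    # consistency: predictions for two augmented views of the same unlabeled image
    consistency = np.mean((probs_u1 - probs_u2) ** 2)
    return supervised + consistency_weight * consistency
```

In a real training loop the clean fraction would shrink or grow progressively over epochs; here it is fixed for clarity.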
Related papers
- Learning from Noisy Labels via Self-Taught On-the-Fly Meta Loss Rescaling [6.861041888341339]
We propose unsupervised on-the-fly meta loss rescaling to reweight training samples.
We are among the first to attempt on-the-fly training data reweighting on the challenging task of dialogue modeling.
Our strategy is robust in the face of noisy and clean data, handles class imbalance, and prevents overfitting to noisy labels.
arXiv Detail & Related papers (2024-12-17T14:37:50Z)
- Leveraging Semi-Supervised Learning to Enhance Data Mining for Image Classification under Limited Labeled Data [35.431340001608476]
Traditional data mining methods are inadequate when faced with large-scale, high-dimensional and complex data.
This study introduces semi-supervised learning methods, aiming to improve the algorithm's ability to utilize unlabeled data.
Specifically, we adopt a self-training method and combine it with a convolutional neural network (CNN) for image feature extraction and classification.
arXiv Detail & Related papers (2024-11-27T18:59:50Z)
- Incremental Self-training for Semi-supervised Learning [56.57057576885672]
IST is simple yet effective and fits existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbone, effectively improving the recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference with a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it appealing to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we suggest to incorporate semi-supervised and positive-unlabeled (PU) learning for exploiting unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z)
- Debiased Pseudo Labeling in Self-Training [77.83549261035277]
Deep neural networks achieve remarkable performances on a wide range of tasks with the aid of large-scale labeled datasets.
To mitigate the requirement for labeled data, self-training is widely used in both academia and industry by pseudo labeling on readily-available unlabeled data.
We propose Debiased, in which the generation and utilization of pseudo labels are decoupled by two independent heads.
arXiv Detail & Related papers (2022-02-15T02:14:33Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To keep the created dataset compact, we apply a dataset distillation strategy to compress it into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
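Several of the entries above (self-training, incremental self-training, debiased pseudo labeling) share one skeleton: fit a model on labeled data, pseudo-label the unlabeled samples the model is confident about, fold them into the labeled set, and refit. A minimal NumPy sketch of that loop follows; the nearest-centroid classifier and the distance-based confidence score are stand-ins chosen for brevity (the papers use deep networks), and `conf_thresh` and `rounds` are illustrative parameters.

```python
import numpy as np

def fit_centroids(X, y, n_classes):
    """Class centroids act as a stand-in for a trained deep classifier."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict_with_confidence(centroids, X):
    """Predicted class = nearest centroid; confidence decays with distance."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    conf = 1.0 / (1.0 + d.min(axis=1))  # crude confidence proxy (assumption)
    return d.argmin(axis=1), conf

def self_train(X_l, y_l, X_u, n_classes, conf_thresh=0.5, rounds=3):
    """Iteratively pseudo-label confident unlabeled points, fold them
    into the labeled set, and refit the classifier."""
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        centroids = fit_centroids(X_l, y_l, n_classes)
        pseudo, conf = predict_with_confidence(centroids, X_u)
        keep = conf >= conf_thresh
        if not keep.any():
            break
        X_l = np.vstack([X_l, X_u[keep]])
        y_l = np.concatenate([y_l, pseudo[keep]])
        X_u = X_u[~keep]
    return fit_centroids(X_l, y_l, n_classes)
```

The approaches above differ mainly in how this loop is stabilized: sample reweighting, incremental scheduling of the unlabeled pool, or decoupling the head that generates pseudo labels from the head trained on them.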
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.