A Systematic Evaluation of Domain Adaptation in Facial Expression
Recognition
- URL: http://arxiv.org/abs/2106.15453v1
- Date: Tue, 29 Jun 2021 14:41:19 GMT
- Authors: Yan San Kong, Varsha Suresh, Jonathan Soh, Desmond C. Ong
- Abstract summary: This paper provides a systematic evaluation of domain adaptation in facial expression recognition.
We use state-of-the-art transfer learning techniques and six commonly-used facial expression datasets.
We find the sobering result that transfer accuracy is not high and varies idiosyncratically with the target dataset.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Facial Expression Recognition is a commercially important application, but
one common limitation is that applications often require making predictions on
out-of-sample distributions, where target images may have very different
properties from the images that the model was trained on. How well, or badly,
do these models do on unseen target domains? In this paper, we provide a
systematic evaluation of domain adaptation in facial expression recognition.
Using state-of-the-art transfer learning techniques and six commonly-used
facial expression datasets (three collected in the lab and three
"in-the-wild"), we conduct extensive round-robin experiments to examine the
classification accuracies for a state-of-the-art CNN model. We also perform
multi-source experiments where we examine a model's ability to transfer from
multiple source datasets, in three configurations: (i) within-setting (e.g.,
lab to lab), (ii) cross-setting (e.g., in-the-wild to lab), and (iii)
mixed-setting (e.g., lab and in-the-wild to lab) transfer learning. We find
sobering results: transfer accuracy is not high, and it varies
idiosyncratically with the target dataset and, to a lesser extent, with the
source dataset. Generally, the
best settings for transfer include fine-tuning the weights of a pre-trained
model, and we find that training with more datasets, regardless of setting,
improves transfer performance. We end with a discussion of the need for more --
and regular -- systematic investigations into the generalizability of FER
models, especially for deployed applications.
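The round-robin and multi-source protocols described in the abstract can be sketched in a few lines of bookkeeping code. The dataset names and the lab/in-the-wild split below are illustrative assumptions, not the paper's actual datasets, and the setting-classification rule is a plain reading of the abstract's (i)-(iii) taxonomy:

```python
# Sketch of the round-robin and multi-source transfer protocols.
# Dataset names and setting labels are hypothetical placeholders.

# Assumed split of six datasets into lab vs. in-the-wild settings,
# mirroring the "three lab, three in-the-wild" description.
DATASETS = {
    "lab_A": "lab", "lab_B": "lab", "lab_C": "lab",
    "wild_A": "wild", "wild_B": "wild", "wild_C": "wild",
}

def round_robin_pairs(datasets):
    """Every directed (source, target) pair with source != target."""
    return [(s, t) for s in datasets for t in datasets if s != t]

def multi_source_setting(sources, target):
    """Classify a multi-source experiment as within-, cross-, or mixed-setting."""
    src_settings = {DATASETS[s] for s in sources}
    if len(src_settings) > 1:
        return "mixed-setting"        # e.g., lab and in-the-wild -> lab
    if src_settings == {DATASETS[target]}:
        return "within-setting"       # e.g., lab -> lab
    return "cross-setting"            # e.g., in-the-wild -> lab

pairs = round_robin_pairs(list(DATASETS))
print(len(pairs))  # 6 datasets yield 6 * 5 = 30 directed pairs
print(multi_source_setting(["lab_A", "wild_A"], "lab_B"))  # mixed-setting
```

With six datasets the round-robin produces 30 source-to-target evaluations, which is why such studies are rarely repeated at scale; the helper merely enumerates the experiment grid, while training and evaluation (fine-tuning the pre-trained CNN on the source, testing on the target) would fill in each cell.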
Related papers
- Validation of Practicality for CSI Sensing Utilizing Machine Learning
We develop and evaluate five distinct machine learning models for recognizing human postures.
We analyze how the accuracy of these models varied with different amounts of training data.
We evaluate the models' performance in a setting distinct from the one used for data collection.
arXiv Detail & Related papers (2024-09-09T09:25:08Z)
- Regularized Training with Generated Datasets for Name-Only Transfer of Vision-Language Models
Recent advancements in text-to-image generation have inspired researchers to generate datasets tailored for perception models using generative models.
We aim to fine-tune vision-language models to a specific classification model without access to any real images.
Despite the high fidelity of generated images, we observed a significant performance degradation when fine-tuning the model using the generated datasets.
arXiv Detail & Related papers (2024-06-08T10:43:49Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- Transfer Learning between Motor Imagery Datasets using Deep Learning -- Validation of Framework and Comparison of Datasets
We present a simple deep learning-based framework commonly used in computer vision.
We demonstrate its effectiveness for cross-dataset transfer learning in mental imagery decoding tasks.
arXiv Detail & Related papers (2023-09-04T20:58:57Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Self-Supervised In-Domain Representation Learning for Remote Sensing Image Scene Classification
Transferring the ImageNet pre-trained weights to the various remote sensing tasks has produced acceptable results.
Recent research has demonstrated that self-supervised learning methods capture visual features that are more discriminative and transferable.
We are motivated by these facts to pre-train the in-domain representations of remote sensing imagery using contrastive self-supervised learning.
arXiv Detail & Related papers (2023-02-03T15:03:07Z)
- CHALLENGER: Training with Attribution Maps
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Intra-domain and cross-domain transfer learning for time series data -- How transferable are the features?
This study aims to assess how transferable are the features between different domains of time series data.
The effects of transfer learning are observed in terms of predictive performance of the models and their convergence rate during training.
arXiv Detail & Related papers (2022-01-12T12:55:21Z)
- Multi-Domain Joint Training for Person Re-Identification
Deep learning-based person Re-IDentification (ReID) often requires a large amount of training data to achieve good performance.
It appears that collecting more training data from diverse environments tends to improve the ReID performance.
We propose an approach called Domain-Camera-Sample Dynamic network (DCSD) whose parameters can be adaptive to various factors.
arXiv Detail & Related papers (2022-01-06T09:20:59Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- Uniform Priors for Data-Efficient Transfer
We show that features that are most transferable have high uniformity in the embedding space.
We evaluate the regularization on its ability to facilitate adaptation to unseen tasks and data.
arXiv Detail & Related papers (2020-06-30T04:39:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.