Disentangled Source-Free Personalization for Facial Expression Recognition with Neutral Target Data
- URL: http://arxiv.org/abs/2503.20771v3
- Date: Sat, 05 Apr 2025 12:55:44 GMT
- Title: Disentangled Source-Free Personalization for Facial Expression Recognition with Neutral Target Data
- Authors: Masoumeh Sharafi, Emma Ollivier, Muhammad Osama Zeeshan, Soufiane Belharbi, Marco Pedersoli, Alessandro Lameiras Koerich, Simon Bacon, Eric Granger
- Abstract summary: Source-free domain adaptation (SFDA) methods are employed to adapt a pre-trained source model using only unlabeled target domain data. This paper introduces the Disentangled Source-Free Domain Adaptation (DSFDA) method to address the SFDA challenge posed by missing target expression data. Our method learns to disentangle features related to expressions and identity while generating the missing non-neutral target data.
- Score: 49.25159192831934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial Expression Recognition (FER) from videos is a crucial task in various application areas, such as human-computer interaction and health monitoring (e.g., pain, depression, fatigue, and stress). Beyond the challenges of recognizing subtle emotional or health states, the effectiveness of deep FER models is often hindered by the considerable variability of expressions among subjects. Source-free domain adaptation (SFDA) methods are employed to adapt a pre-trained source model using only unlabeled target domain data, thereby avoiding data privacy and storage issues. Typically, SFDA methods adapt to a target domain dataset corresponding to an entire population and assume it includes data from all recognition classes. However, collecting such comprehensive target data can be difficult or even impossible for FER in healthcare applications. In many real-world scenarios, it may be feasible to collect a short neutral control video (displaying only neutral expressions) for target subjects before deployment. These videos can be used to adapt a model to better handle the variability of expressions among subjects. This paper introduces the Disentangled Source-Free Domain Adaptation (DSFDA) method to address the SFDA challenge posed by missing target expression data. DSFDA leverages data from a neutral target control video for end-to-end generation and adaptation of target data with missing non-neutral data. Our method learns to disentangle features related to expressions and identity while generating the missing non-neutral target data, thereby enhancing model accuracy. Additionally, our self-supervision strategy improves model adaptation by reconstructing target images that maintain the same identity and source expression. Experimental results on the challenging BioVid and UNBC-McMaster pain datasets indicate that our DSFDA approach can outperform state-of-the-art adaptation methods.
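The abstract describes combining identity features from a neutral target frame with expression features from a source frame, then checking that a generated image preserves both factors. The toy NumPy sketch below illustrates only that recombination-and-reconstruction idea; the encoders, generator, feature sizes, and losses are all simplified stand-ins, not the authors' architecture.

```python
import numpy as np

# Toy illustration of the disentangle-and-recombine idea from the abstract.
# All functions and shapes here are assumptions for exposition, not DSFDA code.
rng = np.random.default_rng(0)

def encode_identity(x):
    # Stand-in identity encoder: takes the first half of the feature vector.
    return x[: x.size // 2]

def encode_expression(x):
    # Stand-in expression encoder: takes the second half.
    return x[x.size // 2 :]

def generate(id_feat, expr_feat):
    # Stand-in generator: recombines the two disentangled factors.
    return np.concatenate([id_feat, expr_feat])

# A neutral target frame supplies identity; a source frame supplies expression.
target_neutral = rng.normal(size=8)
source_expressive = rng.normal(size=8)

id_t = encode_identity(target_neutral)
expr_s = encode_expression(source_expressive)
generated = generate(id_t, expr_s)

# Self-supervision idea: the generated image should keep the target identity
# and the source expression, so each factor is compared against its origin.
id_loss = np.mean((encode_identity(generated) - id_t) ** 2)
expr_loss = np.mean((encode_expression(generated) - expr_s) ** 2)
```

In the actual method these encoders and the generator would be learned networks trained end-to-end; here the identity generator makes both losses exactly zero, which simply demonstrates the consistency objective being optimized.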
Related papers
- Progressive Multi-Source Domain Adaptation for Personalized Facial Expression Recognition [51.61979855488214]
Personalized facial expression recognition (FER) involves adapting a machine learning model using samples from labeled sources and unlabeled target domains.
We propose a progressive MSDA approach that gradually introduces information from subjects based on their similarity to the target subject.
Our experiments show the effectiveness of our proposed method on pain datasets: Biovid and UNBC-McMaster.
arXiv Detail & Related papers (2025-04-05T19:14:51Z)
- Towards Practical Emotion Recognition: An Unsupervised Source-Free Approach for EEG Domain Adaptation [0.5755004576310334]
We propose a novel SF-UDA approach for EEG-based emotion classification across domains.
We introduce Dual-Loss Adaptive Regularization (DLAR) to minimize prediction discrepancies and align predictions with expected pseudo-labels.
Our approach significantly outperforms state-of-the-art methods, achieving 65.84% accuracy when trained on DEAP and tested on SEED.
arXiv Detail & Related papers (2025-03-26T14:29:20Z)
- Bridge then Begin Anew: Generating Target-relevant Intermediate Model for Source-free Visual Emotion Adaptation [22.638915084704344]
Visual emotion recognition (VER) aims at understanding humans' emotional reactions toward different visual stimuli.
Domain adaptation offers an alternative solution by adapting models trained on labeled source data to unlabeled target data.
Due to privacy concerns, source emotional data may be inaccessible.
We propose a novel framework termed Bridge then Begin Anew (BBA), which consists of two steps: domain-bridged model generation (DMG) and target-related model adaptation (TMA).
arXiv Detail & Related papers (2024-12-18T07:51:35Z)
- Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging facial expression recognition task.
This paper introduces a new MSDA method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z)
- ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z)
- SFHarmony: Source Free Domain Adaptation for Distributed Neuroimaging Analysis [2.371982686172067]
Different MRI scanners produce images with different characteristics, resulting in a domain shift known as the 'harmonisation problem'.
We propose an Unsupervised Source-Free Domain Adaptation (SFDA) method, SFHarmony, to overcome these barriers.
Our method outperforms existing SFDA approaches across a range of realistic data scenarios.
arXiv Detail & Related papers (2023-03-28T13:35:10Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Instance Relation Graph Guided Source-Free Domain Adaptive Object Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model for the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z)
- Source-Free Domain Adaptation for Semantic Segmentation [11.722728148523366]
Unsupervised Domain Adaptation (UDA) can tackle the challenge that convolutional neural network-based approaches for semantic segmentation heavily rely on the pixel-level annotated data.
We propose a source-free domain adaptation framework for semantic segmentation, namely SFDA, in which only a well-trained source model and an unlabeled target domain dataset are available for adaptation.
arXiv Detail & Related papers (2021-03-30T14:14:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.