Domain Adaptation Gaze Estimation by Embedding with Prediction
Consistency
- URL: http://arxiv.org/abs/2011.07526v1
- Date: Sun, 15 Nov 2020 13:33:43 GMT
- Title: Domain Adaptation Gaze Estimation by Embedding with Prediction
Consistency
- Authors: Zidong Guo, Zejian Yuan, Chong Zhang, Wanchao Chi, Yonggen Ling, and
Shenghao Zhang
- Abstract summary: This paper proposes an unsupervised method for domain adaptation gaze estimation.
We employ source gaze to form a locally linear representation in the gaze space for each target domain prediction.
The same linear combinations are applied in the embedding space to generate a hypothesis embedding for the target domain sample.
- Score: 10.246471430786244
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Gaze is the essential manifestation of human attention. In recent years, a
series of work has achieved high accuracy in gaze estimation. However, the
inter-personal difference limits the reduction of the subject-independent gaze
estimation error. This paper proposes an unsupervised method for domain
adaptation gaze estimation to eliminate the impact of inter-personal diversity.
In domain adaptation, we design an embedding representation with prediction
consistency to ensure that the linear relationship between gaze directions in
different domains remains consistent in both the gaze space and the embedding space.
Specifically, we employ source gaze to form a locally linear representation in
the gaze space for each target domain prediction. Then the same linear
combinations are applied in the embedding space to generate a hypothesis
embedding for the target domain sample, thereby maintaining prediction consistency. The
deviation between the target and source domains is reduced by encouraging the
predicted embedding of the target domain sample to approximate its hypothesis
embedding. Guided by the proposed strategy, we design the Domain Adaptation Gaze Estimation Network (DAGEN),
which learns an embedding with prediction consistency and achieves
state-of-the-art results on both the MPIIGaze and the EYEDIAP datasets.
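As a rough illustration of the prediction-consistency idea described above, the following Python sketch forms a locally linear combination of nearby source gaze labels that reconstructs a target gaze prediction, applies the same weights to the source embeddings to obtain a hypothesis embedding, and measures how far the predicted target embedding deviates from it. The function names, the LLE-style weight solver, the choice of k, and the squared-L2 loss are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def locally_linear_weights(target_gaze, source_gazes, k=5, reg=1e-3):
    """Reconstruct a target gaze prediction as a locally linear combination
    of its k nearest source-domain gaze labels (LLE-style weights; k and reg
    are illustrative choices, not values from the paper)."""
    dists = np.linalg.norm(source_gazes - target_gaze, axis=1)
    idx = np.argsort(dists)[:k]                        # k nearest source gazes
    G = source_gazes[idx] - target_gaze                # centered neighbors, (k, 2)
    C = G @ G.T                                        # local covariance, (k, k)
    C += reg * (np.trace(C) + 1e-8) * np.eye(k)        # regularize for stability
    w = np.linalg.solve(C, np.ones(k))
    return idx, w / w.sum()                            # weights sum to one

def consistency_loss(target_embedding, source_embeddings, idx, w):
    """Hypothesis embedding = the same linear combination applied to the
    source embeddings; the loss pulls the predicted target embedding toward
    this hypothesis (squared L2 distance is an assumption here)."""
    hypothesis = w @ source_embeddings[idx]            # (d,)
    return np.sum((target_embedding - hypothesis) ** 2)

# Toy usage with random stand-in data.
rng = np.random.default_rng(0)
src_gaze, src_emb = rng.normal(size=(100, 2)), rng.normal(size=(100, 16))
tgt_gaze_pred, tgt_emb = rng.normal(size=2), rng.normal(size=16)
idx, w = locally_linear_weights(tgt_gaze_pred, src_gaze)
print(consistency_loss(tgt_emb, src_emb, idx, w))
```

In a full training loop such a term would be minimized together with the supervised gaze loss on source data, so that target embeddings are drawn toward positions consistent with their gaze predictions.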
Related papers
- Unsupervised Structural-Counterfactual Generation under Domain Shift [0.0]
We present a novel generative modeling challenge: generating counterfactual samples in a target domain based on factual observations from a source domain.
Our framework combines the posterior distribution of effect-intrinsic variables from the source domain with the prior distribution of domain-intrinsic variables from the target domain to synthesize the desired counterfactuals.
arXiv Detail & Related papers (2025-02-17T16:48:16Z)
- Exploiting Aggregation and Segregation of Representations for Domain Adaptive Human Pose Estimation [50.31351006532924]
Human pose estimation (HPE) has received increasing attention recently due to its wide application in motion analysis, virtual reality, healthcare, etc.
However, it suffers from a lack of labeled, diverse real-world datasets because annotation is time- and labor-intensive.
We introduce a novel framework that capitalizes on both representation aggregation and segregation for domain adaptive human pose estimation.
arXiv Detail & Related papers (2024-12-29T17:59:45Z)
- Causal Representation-Based Domain Generalization on Gaze Estimation [10.283904882611463]
We propose the Causal Representation-Based Domain Generalization on Gaze Estimation (CauGE) framework.
We employ adversarial training and an additional penalty term to extract domain-invariant features (a generic sketch of this adversarial setup appears below).
By leveraging these modules, CauGE ensures that the neural networks learn from representations that meet the causal mechanisms' general principles.
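The adversarial, domain-invariant feature learning mentioned above is commonly realized with a gradient reversal layer; the PyTorch sketch below shows that generic construction only. The module sizes, the GradReverse helper, and the domain classifier are hypothetical and are not CauGE's actual architecture or penalty term.

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; flips (and scales) the gradient in the
    backward pass, so the feature extractor learns to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Linear(128, 64), nn.ReLU())    # stand-in feature extractor
domain_head = nn.Linear(64, 2)                              # source-vs-target classifier

x = torch.randn(32, 128)                                    # a toy batch of inputs
z = features(x)
domain_logits = domain_head(GradReverse.apply(z, 1.0))
domain_loss = nn.functional.cross_entropy(domain_logits, torch.zeros(32, dtype=torch.long))
domain_loss.backward()                                      # gradients into `features` are reversed
```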
arXiv Detail & Related papers (2024-08-30T01:45:22Z)
- Learning When the Concept Shifts: Confounding, Invariance, and Dimension Reduction [5.38274042816001]
In observational data, the distribution shift is often driven by unobserved confounding factors.
This motivates us to study the domain adaptation problem with observational data.
We show that a model using the learned lower-dimensional subspace can achieve a nearly ideal gap between the target and source risks.
arXiv Detail & Related papers (2024-06-22T17:43:08Z)
- Optimal Aggregation of Prediction Intervals under Unsupervised Domain Shift [9.387706860375461]
A distribution shift occurs when the underlying data-generating process changes, leading to a deviation in the model's performance.
The prediction interval serves as a crucial tool for characterizing the uncertainty induced by the underlying distribution.
We propose methodologies for aggregating prediction intervals to obtain one with minimal width and adequate coverage on the target domain.
arXiv Detail & Related papers (2024-05-16T17:55:42Z)
- Variational Counterfactual Prediction under Runtime Domain Corruption [50.89405221574912]
The co-occurrence of domain shift and inaccessible variables, termed runtime domain corruption, seriously impairs the generalizability of a trained counterfactual predictor.
We build an adversarially unified variational causal effect model, named VEGAN, with a novel two-stage adversarial domain adaptation scheme.
We demonstrate that VEGAN outperforms other state-of-the-art baselines on individual-level treatment effect estimation in the presence of runtime domain corruption.
arXiv Detail & Related papers (2023-06-23T02:54:34Z)
- Label Alignment Regularization for Distribution Shift [63.228879525056904]
Recent work has highlighted the label alignment property (LAP) in supervised learning, where the vector of all labels in the dataset is mostly in the span of the top few singular vectors of the data matrix.
We propose a regularization method for unsupervised domain adaptation that encourages alignment between the target-domain predictions and the top singular vectors of the target data matrix (sketched below).
We report improved performance over domain adaptation baselines in well-known tasks such as MNIST-USPS domain adaptation and cross-lingual sentiment analysis.
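For intuition, here is one minimal NumPy reading of such a regularizer: it penalizes the component of the target predictions that falls outside the span of the top-k left singular vectors of the target data matrix. The function name, the choice of k, and the squared-norm penalty are assumptions for illustration and may differ from the paper's exact formulation.

```python
import numpy as np

def label_alignment_penalty(X_target, preds, k=1):
    """Penalize the part of the target predictions lying outside the span of
    the top-k left singular vectors of the target data matrix (k is illustrative)."""
    U, _, _ = np.linalg.svd(X_target, full_matrices=False)
    U_k = U[:, :k]                              # top-k left singular vectors, (n, k)
    residual = preds - U_k @ (U_k.T @ preds)    # component orthogonal to their span
    return np.sum(residual ** 2)

# Toy usage with random stand-in data.
rng = np.random.default_rng(0)
X_t, preds = rng.normal(size=(200, 32)), rng.normal(size=200)
print(label_alignment_penalty(X_t, preds, k=2))
```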
arXiv Detail & Related papers (2022-11-27T22:54:48Z)
- Jitter Does Matter: Adapting Gaze Estimation to New Domains [12.482427155726413]
We propose to utilize gaze jitter to analyze and optimize the gaze domain adaptation task.
We find that the high-frequency component (HFC) is an important factor that leads to jitter.
We employ contrastive learning to encourage the model to produce similar representations for the original and perturbed data (see the sketch below).
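As a simplified reading of that idea, the sketch below perturbs only the high-frequency band of an image in the Fourier domain and scores representation similarity with a negative cosine loss. The perturbation scheme, the cutoff and scale values, and the loss form are assumptions; the paper's actual HFC manipulation and contrastive objective may differ.

```python
import numpy as np

def add_high_frequency_perturbation(img, cutoff=0.25, scale=0.1, rng=None):
    """Add noise only to the high-frequency band of a 2D image via the FFT
    (a generic stand-in for the HFC perturbation described above)."""
    if rng is None:
        rng = np.random.default_rng()
    f = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))[:, None]   # cycles/pixel
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))[None, :]
    high = (np.abs(fy) > cutoff) | (np.abs(fx) > cutoff)          # high-frequency mask
    f[high] += scale * rng.normal(size=high.sum()) * np.abs(f[high])
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

def similarity_loss(z1, z2):
    """Encourage similar representations for original and perturbed inputs
    (negative cosine similarity; the paper's contrastive loss may differ)."""
    z1, z2 = z1 / np.linalg.norm(z1), z2 / np.linalg.norm(z2)
    return -float(z1 @ z2)

# Toy usage: perturb a random eye-patch-sized image.
img = np.random.default_rng(0).normal(size=(36, 60))
perturbed = add_high_frequency_perturbation(img)
```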
arXiv Detail & Related papers (2022-10-05T08:20:41Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods to guide the selection of a better hypothesis for the target domain.
Another method minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Relation Matters: Foreground-aware Graph-based Relational Reasoning for Domain Adaptive Object Detection [81.07378219410182]
We propose a new and general framework for domain adaptive object detection, named Foreground-aware Graph-based Relational Reasoning (FGRR).
FGRR incorporates graph structures into the detection pipeline to explicitly model the intra- and inter-domain foreground object relations.
Empirical results demonstrate that the proposed FGRR exceeds the state-of-the-art on four domain adaptive object detection benchmarks.
arXiv Detail & Related papers (2022-06-06T05:12:48Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation leverages well-established source domain information to handle an unlabeled target domain.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)