A Generalized Label Shift Perspective for Cross-Domain Gaze Estimation
- URL: http://arxiv.org/abs/2505.13043v1
- Date: Mon, 19 May 2025 12:33:52 GMT
- Title: A Generalized Label Shift Perspective for Cross-Domain Gaze Estimation
- Authors: Hao-Ran Yang, Xiaohui Chen, Chuan-Xian Ren
- Abstract summary: Cross-domain Gaze Estimation (CDGE) is developed for real-world application scenarios. We introduce a novel Generalized Label Shift (GLS) perspective to CDGE. To embed the reweighted source distribution into conditional invariant learning, we derive a probability-aware estimation of the conditional operator discrepancy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Aiming to generalize a well-trained gaze estimation model to new target domains, Cross-domain Gaze Estimation (CDGE) is developed for real-world application scenarios. Existing CDGE methods typically extract domain-invariant features to mitigate domain shift in feature space, which Generalized Label Shift (GLS) theory proves insufficient. In this paper, we introduce a novel GLS perspective on CDGE and model the cross-domain problem as a joint label shift and conditional shift problem. A GLS correction framework is presented together with a feasible realization, in which an importance reweighting strategy based on truncated Gaussian distributions is introduced to overcome the continuity challenges in label shift correction. To embed the reweighted source distribution into conditional invariant learning, we further derive a probability-aware estimation of the conditional operator discrepancy. Extensive experiments on standard CDGE tasks with different backbone models validate the superior cross-domain generalization capability and broad model applicability of the proposed method.
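The truncated-Gaussian importance reweighting described in the abstract can be sketched as follows. This is an illustrative approximation, not the paper's implementation: the moment-matching density fit, the one-dimensional label assumption, and the weight-clipping threshold are all assumptions made here for clarity.

```python
import numpy as np
from scipy.stats import truncnorm


def fit_truncated_gaussian(samples, lo, hi):
    """Fit a truncated Gaussian on [lo, hi] by simple moment matching.

    Approximation for illustration: the sample mean/std are used directly
    as the underlying Gaussian's parameters rather than solving the exact
    truncated-moment equations.
    """
    mu, sigma = samples.mean(), samples.std() + 1e-8
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return truncnorm(a, b, loc=mu, scale=sigma)


def importance_weights(source_labels, target_labels, lo=-1.0, hi=1.0, clip=10.0):
    """Estimate w(y) = p_T(y) / p_S(y) for continuous 1-D labels (e.g. a gaze angle).

    Weights are clipped to limit the high-variance estimates that plague
    importance reweighting in low-density regions of the source labels.
    """
    p_s = fit_truncated_gaussian(source_labels, lo, hi)
    p_t = fit_truncated_gaussian(target_labels, lo, hi)
    w = p_t.pdf(source_labels) / (p_s.pdf(source_labels) + 1e-12)
    return np.clip(w, 0.0, clip)
```

In a training loop, such weights would multiply the per-sample regression loss, so source samples whose labels are over-represented relative to the target label distribution contribute less.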
Related papers
- Generative Classifier for Domain Generalization [84.92088101715116]
Domain generalization aims to improve the generalizability of computer vision models under distribution shifts. We propose Generative Classifier-driven Domain Generalization (GCDG). GCDG consists of three key modules: Heterogeneity Learning (HLC), Spurious Correlation (SCB), and Diverse Component Balancing (DCB).
arXiv Detail & Related papers (2025-04-03T04:38:33Z) - Partial Transportability for Domain Generalization [56.37032680901525]
Building on the theory of partial identification and transportability, this paper introduces new results for bounding the value of a functional of the target distribution. Our contribution is to provide the first general estimation technique for transportability problems. We propose a gradient-based optimization scheme for making scalable inferences in practice.
arXiv Detail & Related papers (2025-03-30T22:06:37Z) - COD: Learning Conditional Invariant Representation for Domain Adaptation Regression [20.676363400841495]
Domain Adaptation Regression is developed to generalize label knowledge from a source domain to an unlabeled target domain.
Existing conditional distribution alignment theory and methods, which assume a discrete prior, are no longer applicable.
To minimize the discrepancy, a COD-based conditional invariant representation learning model is proposed.
arXiv Detail & Related papers (2024-08-13T05:08:13Z) - Cross Contrasting Feature Perturbation for Domain Generalization [11.863319505696184]
Domain generalization aims to learn a robust model from source domains that generalize well on unseen target domains.
Recent studies focus on generating novel domain samples or features to diversify distributions complementary to source domains.
We propose an online one-stage Cross Contrasting Feature Perturbation framework to simulate domain shift.
arXiv Detail & Related papers (2023-07-24T03:27:41Z) - Label Alignment Regularization for Distribution Shift [63.228879525056904]
Recent work has highlighted the label alignment property (LAP) in supervised learning, where the vector of all labels in the dataset is mostly in the span of the top few singular vectors of the data matrix.
We propose a regularization method for unsupervised domain adaptation that encourages alignment between the predictions in the target domain and its top singular vectors.
We report improved performance over domain adaptation baselines in well-known tasks such as MNIST-USPS domain adaptation and cross-lingual sentiment analysis.
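The label alignment property summarized above, that the label vector lies mostly in the span of the top singular vectors of the data matrix, can be sketched as a simple regularization penalty. This is an illustrative sketch rather than the paper's method: the function name, the centering step, and the choice of k are assumptions made here.

```python
import numpy as np


def label_alignment_penalty(X_target, preds, k=5):
    """Penalty encouraging a prediction vector to lie in the span of the
    top-k left singular vectors of the (centered) target data matrix.

    Projects preds onto the top-k singular subspace and returns the
    squared norm of the residual outside that subspace.
    """
    Xc = X_target - X_target.mean(axis=0, keepdims=True)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    Uk = U[:, :k]                          # top-k left singular vectors, shape (n, k)
    residual = preds - Uk @ (Uk.T @ preds)  # component outside the subspace
    return float(np.sum(residual ** 2))
```

Added to a task loss on unlabeled target data, the penalty is zero when predictions already align with the dominant singular directions and grows with the misaligned component.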
arXiv Detail & Related papers (2022-11-27T22:54:48Z) - Mitigating Both Covariate and Conditional Shift for Domain Generalization [14.91361835243516]
Domain generalization (DG) aims to learn a model on several source domains, hoping that the model can generalize well to unseen target domains.
In this paper, a novel DG method is proposed to deal with the distribution shift via Visual Alignment and Uncertainty-guided belief Ensemble (VAUE).
arXiv Detail & Related papers (2022-09-17T05:13:56Z) - Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).
Based on the transformation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
arXiv Detail & Related papers (2021-11-27T07:36:32Z) - Mapping conditional distributions for domain adaptation under generalized target shift [0.0]
We consider the problem of unsupervised domain adaptation (UDA) between a source and a target domain under conditional and label shift, a.k.a. Generalized Target Shift (GeTarS).
Recent approaches learn domain-invariant representations, yet they have practical limitations and rely on strong assumptions that may not hold in practice.
In this paper, we explore a novel and general approach to align pretrained representations, which circumvents existing drawbacks.
arXiv Detail & Related papers (2021-10-26T14:25:07Z) - Variational Disentanglement for Domain Generalization [68.85458536180437]
We propose to tackle the problem of domain generalization by delivering an effective framework named Variational Disentanglement Network (VDN).
VDN is capable of disentangling the domain-specific features and task-specific features, where the task-specific features are expected to be better generalized to unseen but related test data.
arXiv Detail & Related papers (2021-09-13T09:55:32Z) - Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.