Learning Fair Domain Adaptation with Virtual Label Distribution
- URL: http://arxiv.org/abs/2601.18171v1
- Date: Mon, 26 Jan 2026 05:48:47 GMT
- Title: Learning Fair Domain Adaptation with Virtual Label Distribution
- Authors: Yuguang Zhang, Lijun Sheng, Jian Liang, Ran He
- Abstract summary: Unsupervised Domain Adaptation (UDA) aims to mitigate performance degradation when training and testing data are sampled from different distributions. We propose Virtual Label-distribution-aware Learning (VILL) to improve worst-case performance while preserving high overall accuracy.
- Score: 35.20492905112689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) aims to mitigate performance degradation when training and testing data are sampled from different distributions. While significant progress has been made in enhancing overall accuracy, most existing methods overlook performance disparities across categories, an issue we refer to as category fairness. Our empirical analysis reveals that UDA classifiers tend to favor certain easy categories while neglecting difficult ones. To address this, we propose Virtual Label-distribution-aware Learning (VILL), a simple yet effective framework designed to improve worst-case performance while preserving high overall accuracy. The core of VILL is an adaptive re-weighting strategy that amplifies the influence of hard-to-classify categories. Furthermore, we introduce a KL-divergence-based re-balancing strategy, which explicitly adjusts decision boundaries to enhance category fairness. Experiments on commonly used datasets demonstrate that VILL can be seamlessly integrated as a plug-and-play module into existing UDA methods, significantly improving category fairness.
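Based only on the abstract, the two ingredients (adaptive re-weighting of hard categories, and KL-based re-balancing toward a virtual label distribution) might be sketched as below. All function names, the exponential weighting form, and the uniform re-balancing target are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def virtual_label_distribution(logits):
    # Average the softmax predictions over an unlabeled target batch:
    # a "virtual" estimate of the target label distribution.
    return softmax(logits).mean(axis=0)

def adaptive_class_weights(per_class_conf, tau=1.0):
    # Hard-to-classify categories (low estimated confidence) receive
    # larger weights; tau controls how aggressive the re-weighting is.
    w = np.exp(-per_class_conf / tau)
    return w / w.sum() * len(w)  # normalize so weights average to 1

def kl_rebalance_loss(virtual_dist, target_dist=None, eps=1e-8):
    # KL(virtual || target); a uniform target nudges decision boundaries
    # toward treating all categories evenly.
    if target_dist is None:
        target_dist = np.full_like(virtual_dist, 1.0 / len(virtual_dist))
    return float(np.sum(virtual_dist *
                        np.log((virtual_dist + eps) / (target_dist + eps))))
```

In a training loop, the class weights would multiply the per-sample adaptation loss and the KL term would be added as a regularizer; how the per-class confidence is estimated (e.g. from pseudo-label accuracy) is left open here.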
Related papers
- Reducing Class-Wise Performance Disparity via Margin Regularization [82.81746960548382]
Deep neural networks often exhibit substantial disparities in class-wise accuracy, even when trained on class-balanced data. We present Margin Regularization for Performance Disparity Reduction (MR$^2$), a theoretically principled regularization for classification. Our analysis reveals how per-class feature variability contributes to error, motivating the use of larger margins for hard classes.
arXiv Detail & Related papers (2026-01-30T12:56:08Z)
- Optimal Learning from Label Proportions with General Loss Functions [33.827617632719864]
We introduce a novel and versatile low-variance de-biasing methodology to learn from aggregate label information. Our approach exhibits remarkable flexibility, seamlessly accommodating a broad spectrum of practically relevant loss functions. We empirically validate the efficacy of our proposed approach across a diverse array of benchmark datasets.
arXiv Detail & Related papers (2025-09-18T16:53:32Z)
- Gradient-based Class Weighting for Unsupervised Domain Adaptation in Dense Prediction Visual Tasks [3.776249047528669]
This paper proposes a class-imbalance mitigation strategy that incorporates class weights into the UDA learning losses.
The weights are estimated dynamically from the loss gradient, defining Gradient-Based class Weighting (GBW) learning.
GBW naturally increases the contribution of classes whose learning is hindered by over-represented classes.
arXiv Detail & Related papers (2024-07-01T14:34:25Z)
- Enhancing cross-domain detection: adaptive class-aware contrastive transformer [15.666766743738531]
Insufficient labels in the target domain exacerbate issues of class imbalance and model performance degradation.
We propose a class-aware cross-domain detection transformer based on adversarial learning and the mean-teacher framework.
arXiv Detail & Related papers (2024-01-24T07:11:05Z)
- Understanding the Detrimental Class-level Effects of Data Augmentation [63.1733767714073]
Achieving optimal average accuracy can come at the cost of significantly hurting individual class accuracy, by as much as 20% on ImageNet.
We present a framework for understanding how DA interacts with class-level learning dynamics.
We show that simple class-conditional augmentation strategies improve performance on the negatively affected classes.
arXiv Detail & Related papers (2023-12-07T18:37:43Z)
- Better Practices for Domain Adaptation [62.70267990659201]
Domain adaptation (DA) aims to provide frameworks for adapting models to deployment data without using labels.
The lack of a clear validation protocol for DA has led to bad practices in the literature.
We show challenges across all three branches of domain adaptation methodology.
arXiv Detail & Related papers (2023-09-07T17:44:18Z)
- Balanced Classification: A Unified Framework for Long-Tailed Object Detection [74.94216414011326]
Conventional detectors suffer from performance degradation when dealing with long-tailed data due to a classification bias towards the majority head categories.
We introduce a unified framework called BAlanced CLassification (BACL), which enables adaptive rectification of inequalities caused by disparities in category distribution.
BACL consistently achieves performance improvements across various datasets with different backbones and architectures.
arXiv Detail & Related papers (2023-08-04T09:11:07Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning [20.66927648806676]
We propose a novel framework for semi-supervised semantic segmentation, named Adaptive Equalization Learning (AEL).
AEL balances the training of well- and badly-performing categories, using a confidence bank to track category-wise performance.
AEL outperforms the state-of-the-art methods by a large margin on the Cityscapes and Pascal VOC benchmarks.
arXiv Detail & Related papers (2021-10-11T17:59:55Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA and demonstrates performance superior to all comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
- Adaptive Adversarial Logits Pairing [65.51670200266913]
Adversarial Logits Pairing (ALP), an adversarial training method, tends to rely on fewer high-contribution features than vulnerable models do.
Motivated by these observations, we design an Adaptive Adversarial Logits Pairing (AALP) solution by modifying the training process and training target of ALP.
AALP consists of an adaptive feature optimization module with Guided Dropout to systematically pursue fewer high-contribution features.
arXiv Detail & Related papers (2020-05-25T03:12:20Z)
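For illustration, the gradient-based class weighting idea summarized in the GBW entry above can be sketched as follows. The inverse-gradient-share weighting below is an assumed simplification for cross-entropy, not the paper's exact estimator, and `gbw_weights` is a hypothetical name:

```python
import numpy as np

def gbw_weights(logits, labels, num_classes, eps=1e-8):
    """Gradient-based class weighting (illustrative sketch).

    Weights each class by the inverse of its share of the total
    cross-entropy gradient magnitude, so classes drowned out by
    over-represented ones are boosted.
    """
    # Softmax probabilities (numerically stable).
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    # Cross-entropy gradient w.r.t. logits: softmax(z) - one_hot(y).
    grad = p - np.eye(num_classes)[labels]
    # Total gradient magnitude contributed by each ground-truth class.
    mags = np.array([np.abs(grad[labels == c]).sum()
                     for c in range(num_classes)])
    share = mags / (mags.sum() + eps)
    w = 1.0 / (share + eps)     # under-learned classes get larger weights
    return w / w.mean()          # normalize so weights average to 1
```

On an imbalanced batch, the majority class dominates the accumulated gradient and is down-weighted, while minority classes receive proportionally larger weights; in practice such weights would be recomputed (or smoothed) as training proceeds.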
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.