Fair Bayesian Data Selection via Generalized Discrepancy Measures
- URL: http://arxiv.org/abs/2511.07032v1
- Date: Mon, 10 Nov 2025 12:28:04 GMT
- Title: Fair Bayesian Data Selection via Generalized Discrepancy Measures
- Authors: Yixuan Zhang, Jiabin Luo, Zhenggang Wang, Feng Zhou, Quyu Kong
- Abstract summary: We propose a data selection framework that ensures fairness by aligning group-specific posterior distributions of model parameters and sample weights with a shared central distribution. Our framework supports flexible alignment via various distributional discrepancy measures, including Wasserstein distance, maximum mean discrepancy, and $f$-divergence. Experiments on benchmark datasets show that our method consistently outperforms existing data selection and model-based fairness methods in both fairness and accuracy.
- Score: 11.013077130984973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness concerns are increasingly critical as machine learning models are deployed in high-stakes applications. While existing fairness-aware methods typically intervene at the model level, they often suffer from high computational costs, limited scalability, and poor generalization. To address these challenges, we propose a Bayesian data selection framework that ensures fairness by aligning group-specific posterior distributions of model parameters and sample weights with a shared central distribution. Our framework supports flexible alignment via various distributional discrepancy measures, including Wasserstein distance, maximum mean discrepancy, and $f$-divergence, allowing geometry-aware control without imposing explicit fairness constraints. This data-centric approach mitigates group-specific biases in training data and improves fairness in downstream tasks, with theoretical guarantees. Experiments on benchmark datasets show that our method consistently outperforms existing data selection and model-based fairness methods in both fairness and accuracy.
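To make the alignment idea concrete, here is a minimal numpy sketch (ours, not the authors' code) of one discrepancy the abstract names: a plug-in estimate of squared maximum mean discrepancy (MMD) between posterior samples drawn for two groups. The RBF kernel, bandwidth, and synthetic samples are illustrative placeholders.

```python
# Minimal sketch: squared MMD with an RBF kernel between posterior
# samples for two demographic groups. Illustrative, not the paper's code.
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """RBF kernel matrix between the rows of x and the rows of y."""
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2(x, y, bandwidth=1.0):
    """Plug-in estimate of the squared MMD between samples x and y."""
    kxx = rbf_kernel(x, x, bandwidth)
    kyy = rbf_kernel(y, y, bandwidth)
    kxy = rbf_kernel(x, y, bandwidth)
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

rng = np.random.default_rng(0)
post_group_a = rng.normal(0.0, 1.0, size=(500, 4))  # posterior samples, group A
post_group_b = rng.normal(0.3, 1.0, size=(500, 4))  # posterior samples, group B
print(f"MMD^2 between group posteriors: {mmd2(post_group_a, post_group_b):.4f}")
```

Driving such a discrepancy toward zero for every group, against a shared central distribution, is the kind of alignment the framework formalizes; Wasserstein distance or an $f$-divergence could be substituted for the MMD here.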
Related papers
- On the use of graph models to achieve individual and group fairness [0.6299766708197883]
We provide a theoretical framework based on Sheaf Diffusion to leverage tools from dynamical systems and homology to model fairness. We present a collection of network topologies handling different fairness metrics, leading to a unified method capable of dealing with both individual and group bias. The paper showcases the performance of the proposed models in terms of accuracy and fairness.
arXiv Detail & Related papers (2026-01-13T18:17:43Z)
- Class-Conditional Distribution Balancing for Group Robust Classification [11.525201208566925]
Spurious correlations that lead models to correct predictions for the wrong reasons pose a critical challenge for robust real-world generalization. We offer a novel perspective by reframing spurious correlations as imbalances or mismatches in class-conditional distributions. We propose a simple yet effective robust learning method that eliminates the need for both bias annotations and predictions.
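A hedged sketch of the reframing described above: reweight samples so that, within each class, the group mixture is uniform. Note this simplified version assumes group labels are available, whereas the paper's method avoids bias annotations; all names here are illustrative.

```python
# Simplified class-conditional rebalancing: weight each sample by
# 1 / P(g | y) so every class has a uniform group mixture. Assumes
# group labels g are observed (the paper's method does not).
import numpy as np

def class_conditional_weights(y, g):
    """Weight each sample by 1 / P(g | y), normalized to mean 1."""
    weights = np.empty(len(y), dtype=float)
    for c in np.unique(y):
        mask = (y == c)
        groups, counts = np.unique(g[mask], return_counts=True)
        p_g_given_y = dict(zip(groups, counts / counts.sum()))
        weights[mask] = np.array([1.0 / p_g_given_y[gi] for gi in g[mask]])
    return weights / weights.mean()

y = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # class labels
g = np.array([0, 0, 0, 1, 1, 1, 1, 0])  # group / bias attribute
print(class_conditional_weights(y, g))
```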
arXiv Detail & Related papers (2025-04-24T07:15:53Z)
- Noise-Adaptive Conformal Classification with Marginal Coverage [53.74125453366155]
We introduce an adaptive conformal inference method capable of efficiently handling deviations from exchangeability caused by random label noise. We validate our method through extensive numerical experiments demonstrating its effectiveness on synthetic and real data sets.
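For orientation, a minimal split-conformal classification sketch (the standard recipe, not the paper's noise-adaptive variant): calibrate a score threshold on held-out data so prediction sets cover the true label with probability about 1 - alpha.

```python
# Standard split-conformal classification: calibrate a quantile of
# nonconformity scores, then form prediction sets on test points.
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Prediction sets with ~(1 - alpha) marginal coverage."""
    n = len(cal_labels)
    # Nonconformity score: one minus the probability of the true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

rng = np.random.default_rng(1)
cal_probs = rng.dirichlet(np.ones(3), size=200)   # calibration predictions
cal_labels = rng.integers(0, 3, size=200)
test_probs = rng.dirichlet(np.ones(3), size=5)
for s in conformal_sets(cal_probs, cal_labels, test_probs):
    print(s)
```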
arXiv Detail & Related papers (2025-01-29T23:55:23Z)
- Navigating Towards Fairness with Data Selection [27.731128352096555]
We introduce a data selection method designed to efficiently and flexibly mitigate label bias. Our approach utilizes a zero-shot predictor as a proxy model that simulates training on a clean holdout set. In experimental evaluations, our modality-agnostic method proves efficient and effective at handling label bias and improving fairness across diverse datasets.
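A minimal sketch of proxy-based selection in this spirit (hypothetical names, not the paper's code): keep the training points whose observed labels the zero-shot proxy finds most plausible, on the view that low-plausibility points are likelier to carry label bias.

```python
# Proxy-based data selection: rank samples by the proxy's probability
# of the observed label and keep the top fraction. Illustrative only.
import numpy as np

def select_by_proxy(proxy_probs, labels, keep_frac=0.8):
    """Return indices of the keep_frac samples the proxy finds most plausible."""
    plausibility = proxy_probs[np.arange(len(labels)), labels]
    k = int(keep_frac * len(labels))
    return np.argsort(-plausibility)[:k]

rng = np.random.default_rng(2)
proxy_probs = rng.dirichlet(np.ones(2), size=10)  # e.g., zero-shot class scores
labels = rng.integers(0, 2, size=10)              # possibly biased labels
print(select_by_proxy(proxy_probs, labels, keep_frac=0.5))
```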
arXiv Detail & Related papers (2024-12-15T06:11:05Z)
- MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose MITA, a Meet-In-The-Middle approach that introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
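A speculative toy of the "opposing directions" idea, under the assumption that the energy is the negative log-sum-exp of the logits (a common convention; MITA's actual objective may differ): take gradient steps on both the input and the weights of a linear scorer.

```python
# Toy "meet in the middle" step: lower the energy E(x) = -logsumexp(W @ x)
# by adapting both the data x and the weights W. Assumed form, not MITA.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def energy_step(W, x, lr_x=0.1, lr_w=0.01):
    z = W @ x
    dE_dz = -softmax(z)                    # gradient of -logsumexp w.r.t. logits
    x_new = x - lr_x * (W.T @ dE_dz)       # adapt the data
    W_new = W - lr_w * np.outer(dE_dz, x)  # adapt the model
    return W_new, x_new

rng = np.random.default_rng(3)
W, x = rng.normal(size=(3, 5)), rng.normal(size=5)
for _ in range(3):
    W, x = energy_step(W, x)
    print(f"energy: {-np.log(np.exp(W @ x).sum()):.3f}")
```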
arXiv Detail & Related papers (2024-10-12T07:02:33Z)
- Achievable Fairness on Your Data With Utility Guarantees [16.78730663293352]
In machine learning fairness, training models that minimize disparity across different sensitive groups often leads to diminished accuracy.
We present a computationally efficient approach to approximate the fairness-accuracy trade-off curve tailored to individual datasets.
We introduce a novel methodology for quantifying uncertainty in our estimates, thereby providing practitioners with a robust framework for auditing model fairness.
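A toy illustration of tracing such a curve (not the paper's estimator or its uncertainty quantification): sweep a penalty weight, pick group-specific thresholds minimizing error plus the weighted demographic-parity gap, and record the resulting accuracy and disparity.

```python
# Trace a fairness-accuracy trade-off: for each penalty weight lam,
# choose per-group thresholds minimizing error + lam * parity gap.
import numpy as np

rng = np.random.default_rng(4)
g = rng.integers(0, 2, size=2000)             # sensitive group
scores = rng.beta(2 + g, 2, size=2000)        # model scores, shifted by group
y = (rng.random(2000) < scores).astype(int)   # labels consistent with scores

grid = np.linspace(0.05, 0.95, 19)
for lam in [0.0, 0.5, 2.0]:
    best = None
    for t0 in grid:
        for t1 in grid:
            pred = np.where(g == 0, scores > t0, scores > t1)
            acc = (pred == y).mean()
            gap = abs(pred[g == 0].mean() - pred[g == 1].mean())
            obj = (1 - acc) + lam * gap
            if best is None or obj < best[0]:
                best = (obj, acc, gap)
    print(f"lam={lam:.1f}  accuracy={best[1]:.3f}  disparity={best[2]:.3f}")
```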
arXiv Detail & Related papers (2024-02-27T00:59:32Z)
- Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework [12.734559823650887]
In the presence of distribution shifts, fair machine learning models may behave unfairly on test data.
Existing algorithms require full access to the data and cannot be used when only small batches are available.
This paper proposes the first distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph.
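A generic sketch of stochastic fairness-regularized ERM in the spirit described (not Dr. FERMI's actual objective): each small batch contributes a logistic-loss gradient plus the gradient of a penalty on the gap in mean predicted scores between groups, so no full-data access is required.

```python
# Batch-based fair logistic regression: SGD on logistic loss plus a
# penalty on the between-group gap in mean predictions. Illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 3))
g = rng.integers(0, 2, 1000)
y = (sigmoid(X @ np.array([1.0, -2.0, 0.5]) + g) > rng.random(1000)).astype(float)

w, lam, lr = np.zeros(3), 1.0, 0.1
for step in range(200):
    idx = rng.choice(1000, size=32, replace=False)   # small batch
    xb, yb, gb = X[idx], y[idx], g[idx]
    p = sigmoid(xb @ w)
    grad = xb.T @ (p - yb) / len(idx)                # logistic-loss gradient
    if (gb == 0).any() and (gb == 1).any():          # batch fairness penalty
        gap = p[gb == 1].mean() - p[gb == 0].mean()
        s = p * (1 - p)                              # d(sigmoid)/d(logit)
        dgap = (xb[gb == 1] * s[gb == 1, None]).mean(0) \
             - (xb[gb == 0] * s[gb == 0, None]).mean(0)
        grad += lam * np.sign(gap) * dgap            # gradient of |gap|
    w -= lr * grad
print("weights:", np.round(w, 3))
```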
arXiv Detail & Related papers (2023-09-20T23:25:28Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
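A minimal form such a consistency objective can take (generic, not the paper's exact loss): penalize the KL divergence between the model's predictions on two stochastic views of the same target sample.

```python
# Consistency regularization: KL divergence between predictions on a
# weakly and a strongly perturbed view of the same inputs.
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def consistency_loss(logits_weak, logits_strong):
    """KL(p_weak || p_strong), averaged over the batch."""
    log_p, log_q = log_softmax(logits_weak), log_softmax(logits_strong)
    p = np.exp(log_p)
    return np.mean(np.sum(p * (log_p - log_q), axis=-1))

rng = np.random.default_rng(6)
zw = rng.normal(size=(8, 5))              # logits on the weak view
zs = zw + 0.1 * rng.normal(size=(8, 5))   # logits on a perturbed view
print(f"consistency loss: {consistency_loss(zw, zs):.4f}")
```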
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
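A hedged sketch of the weight-perturbation idea (a generic sharpness-aware step motivated by the connection above, not the paper's RFR algorithm): evaluate the fairness penalty at weights perturbed along the penalty's gradient, so fairness is encouraged to hold in a neighborhood of the current weights, which serves as a proxy for distribution shift.

```python
# Evaluate a fairness penalty at adversarially perturbed weights
# w + rho * grad / ||grad||, for a linear scorer. Illustrative only.
import numpy as np

def fairness_penalty(w, X, g):
    """Absolute gap in mean linear scores between the two groups."""
    s = X @ w
    return abs(s[g == 1].mean() - s[g == 0].mean())

def perturbed_penalty(w, X, g, rho=0.05):
    """Penalty at the worst-case nearby weights (sharpness-aware style)."""
    d = X[g == 1].mean(axis=0) - X[g == 0].mean(axis=0)
    grad = np.sign(d @ w) * d                        # gradient of |gap| w.r.t. w
    w_adv = w + rho * grad / (np.linalg.norm(grad) + 1e-12)
    return fairness_penalty(w_adv, X, g)

rng = np.random.default_rng(8)
X, g = rng.normal(size=(500, 4)), rng.integers(0, 2, 500)
w = rng.normal(size=4)
print(f"penalty at w:      {fairness_penalty(w, X, g):.4f}")
print(f"penalty near w:    {perturbed_penalty(w, X, g):.4f}")
```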
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
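A toy version of the computation (illustrative, not the paper's algorithms): over a grid of "good" models, here score thresholds whose error is within epsilon of the best, report the range of group-level disparity those models attain.

```python
# Range of demographic disparity over the "set of good models":
# thresholds whose classification error is within eps of the minimum.
import numpy as np

rng = np.random.default_rng(7)
scores = rng.random(1000)
g = rng.integers(0, 2, 1000)
y = (scores + 0.1 * g > 0.55).astype(int)     # labels mildly tied to group

grid = np.linspace(0.05, 0.95, 91)
err = np.array([((scores > t).astype(int) != y).mean() for t in grid])
good = grid[err <= err.min() + 0.01]          # thresholds within eps of best
gaps = [abs((scores[g == 1] > t).mean() - (scores[g == 0] > t).mean())
        for t in good]
print(f"{len(good)} good models; disparity ranges over "
      f"[{min(gaps):.3f}, {max(gaps):.3f}]")
```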
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)