Fair Data Representation for Machine Learning at the Pareto Frontier
- URL: http://arxiv.org/abs/2201.00292v4
- Date: Fri, 24 Nov 2023 15:06:36 GMT
- Title: Fair Data Representation for Machine Learning at the Pareto Frontier
- Authors: Shizhou Xu, Thomas Strohmer
- Abstract summary: We propose a pre-processing algorithm for fair data representation via supervised learning.
We show that the Wasserstein-2 geodesics from the conditional (on sensitive information) distributions of the learning outcome to their barycenter characterize the frontier between $L^2$-loss and the average pairwise Wasserstein-2 distance.
Numerical simulations underscore the advantages: (1) the pre-processing step is composable with arbitrary conditional expectation estimation supervised learning methods and unseen data; (2) the fair representation protects the sensitive information by limiting the inference capability of the remaining data with respect to the sensitive data; and (3) the optimal affine maps are computationally efficient even for high-dimensional data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning powered decision-making becomes increasingly important in
our daily lives, it is imperative to strive for fairness in the underlying data
processing. We propose a pre-processing algorithm for fair data representation
via which supervised learning yields estimates of the Pareto frontier
between prediction error and statistical disparity. Particularly, the present
work applies the optimal affine transport to approach the post-processing
Wasserstein-2 barycenter characterization of the optimal fair $L^2$-objective
supervised learning via a pre-processing data deformation. Furthermore, we show
that the Wasserstein-2 geodesics from the conditional (on sensitive
information) distributions of the learning outcome to their barycenter
characterize the Pareto frontier between $L^2$-loss and the average pairwise
Wasserstein-2 distance among sensitive groups on the learning outcome.
Numerical simulations underscore the advantages: (1) the pre-processing step is
composable with arbitrary conditional expectation estimation supervised
learning methods and unseen data; (2) the fair representation protects the
sensitive information by limiting the inference capability of the remaining
data with respect to the sensitive data; (3) the optimal affine maps are
computationally efficient even for high-dimensional data.
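In the Gaussian case, the optimal affine maps of point (3) are the closed-form Wasserstein-2 Monge maps, and the barycenter covariance solves a fixed-point equation. The sketch below (our own illustration, not the authors' implementation; function names are ours) assumes each group-conditional distribution is summarized by its mean and covariance:

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_barycenter_cov(covs, weights, n_iter=50):
    """Fixed-point iteration for the covariance of the Wasserstein-2
    barycenter of Gaussians; the barycenter mean is simply the
    weighted average of the group means."""
    S = np.array(covs[0], dtype=float)
    for _ in range(n_iter):
        root = np.real(sqrtm(S))
        inv_root = np.linalg.inv(root)
        M = sum(w * np.real(sqrtm(root @ C @ root))
                for w, C in zip(weights, covs))
        S = inv_root @ M @ M @ inv_root
    return S

def w2_optimal_affine_map(mean_src, cov_src, mean_tgt, cov_tgt):
    """W2-optimal (Monge) map between two Gaussians, which is affine:
    T(x) = mean_tgt + A (x - mean_src), where
    A = cov_src^{-1/2} (cov_src^{1/2} cov_tgt cov_src^{1/2})^{1/2} cov_src^{-1/2}."""
    root = np.real(sqrtm(cov_src))
    inv_root = np.linalg.inv(root)
    A = inv_root @ np.real(sqrtm(root @ cov_tgt @ root)) @ inv_root
    return lambda x: mean_tgt + (x - mean_src) @ A.T
```

Mapping each group's data through its affine map toward the barycenter is the kind of pre-processing deformation the abstract describes; interpolating along the induced geodesics traces out the accuracy-fairness trade-off.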
Related papers
- Sliced-Wasserstein Distance-based Data Selection
We propose a new unsupervised anomaly detection method based on the sliced-Wasserstein distance.
Our filtering technique is useful for decision-making pipelines that deploy machine learning models in critical sectors.
We present the filtering patterns of our method on synthetic datasets and numerically benchmark our method for training data selection.
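The sliced-Wasserstein distance has a simple Monte-Carlo estimator: project both samples onto random directions and average the 1-D Wasserstein-2 costs, which for equal-size samples reduce to comparing sorted projections. A minimal sketch (our own illustration, not the paper's filtering method):

```python
import numpy as np

def sliced_wasserstein2(X, Y, n_proj=100, rng=None):
    """Monte-Carlo sliced Wasserstein-2 distance between two
    equal-size samples X, Y of shape (n, d)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)  # uniform direction on the sphere
        # 1-D W2^2 between empirical measures with equal sample sizes
        # is the mean squared difference of sorted projections.
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_proj)
```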
arXiv Detail & Related papers (2025-04-17T13:07:26Z)
- Targeted Learning for Data Fairness
We expand fairness inference by evaluating fairness in the data generating process itself.
We derive estimators for demographic parity, equal opportunity, and conditional mutual information.
To validate our approach, we perform several simulations and apply our estimators to real data.
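For intuition, a naive plug-in estimate of the demographic parity gap (a hypothetical minimal version, not the targeted-learning estimators derived in the paper) is just the spread of positive-prediction rates across groups:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Plug-in estimate: spread of positive-prediction rates
    across sensitive groups (0 means exact parity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)
```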
arXiv Detail & Related papers (2025-02-06T18:51:28Z)
- Capturing the Temporal Dependence of Training Data Influence
We formalize the concept of trajectory-specific leave-one-out influence, which quantifies the impact of removing a data point during training.
We propose data value embedding, a novel technique enabling efficient approximation of trajectory-specific LOO.
As data value embedding captures training data ordering, it offers valuable insights into model training dynamics.
arXiv Detail & Related papers (2024-12-12T18:28:55Z)
- LAVA: Data Valuation without Pre-Specified Learning Algorithms
We introduce a new framework that can value training data in a way that is oblivious to the downstream learning algorithm.
We develop a proxy for the validation performance associated with a training set based on a non-conventional class-wise Wasserstein distance between training and validation sets.
We show that the distance characterizes the upper bound of the validation performance for any given model under certain Lipschitz conditions.
arXiv Detail & Related papers (2023-04-28T19:05:16Z)
- An Operational Perspective to Fairness Interventions: Where and How to Intervene
We present a holistic framework for evaluating and contextualizing fairness interventions.
We demonstrate our framework with a case study on predictive parity.
We find predictive parity is difficult to achieve without using group data.
arXiv Detail & Related papers (2023-02-03T07:04:33Z)
- Fair Representation Learning using Interpolation Enabled Disentanglement
We propose a novel method to address two key issues: (a) Can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representation for downstream tasks, and (b) Can we provide theoretical insights into when the proposed approach will be both fair and accurate?
To address the former, we propose the method FRIED, Fair Representation learning using Interpolation Enabled Disentanglement.
arXiv Detail & Related papers (2021-07-31T17:32:12Z)
- Can Active Learning Preemptively Mitigate Fairness Issues?
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Double Robust Representation Learning for Counterfactual Prediction
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Evaluating representations by the complexity of learning low-loss predictors
We consider the problem of evaluating representations of data for use in solving a downstream task.
We propose to measure the quality of a representation by the complexity of learning a predictor on top of the representation that achieves low loss on a task of interest.
arXiv Detail & Related papers (2020-09-15T22:06:58Z)
- Graph Embedding with Data Uncertainty
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Provably Efficient Causal Reinforcement Learning with Confounded Observational Data
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.