An Operational Perspective to Fairness Interventions: Where and How to
Intervene
- URL: http://arxiv.org/abs/2302.01574v2
- Date: Thu, 23 Mar 2023 21:20:38 GMT
- Title: An Operational Perspective to Fairness Interventions: Where and How to
Intervene
- Authors: Brian Hsu, Xiaotong Chen, Ying Han, Hongseok Namkoong, Kinjal Basu
- Abstract summary: We present a holistic framework for evaluating and contextualizing fairness interventions.
We demonstrate our framework with a case study on predictive parity.
We find predictive parity is difficult to achieve without using group data.
- Score: 9.833760837977222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As AI-based decision systems proliferate, their successful operationalization
requires balancing multiple desiderata: predictive performance, disparity
across groups, safeguarding sensitive group attributes (e.g., race), and
engineering cost. We present a holistic framework for evaluating and
contextualizing fairness interventions with respect to the above desiderata.
The two key points of practical consideration are \emph{where} (pre-, in-,
post-processing) and \emph{how} (in what way the sensitive group data is used)
the intervention is introduced. We demonstrate our framework with a case study
on predictive parity. In it, we first propose a novel method for achieving
predictive parity fairness without using group data at inference time via
distributionally robust optimization. Then, we showcase the effectiveness of
these methods in a benchmarking study of close to 400 variations across two
major model types (XGBoost vs. Neural Net), ten datasets, and over twenty
unique methodologies. Methodological insights derived from our empirical study
inform the practical design of ML workflows with fairness as a central concern.
We find that predictive parity is difficult to achieve without using group data, and
that the distributionally robust methods we develop provide a significant Pareto
improvement despite requiring group data during model training (but not at
inference). Moreover, a plain XGBoost model often Pareto-dominates neural
networks with fairness interventions, highlighting the importance of model
inductive bias.
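Predictive parity here refers to equal positive predictive value (PPV) across sensitive groups, i.e., positive predictions are equally reliable regardless of group membership. The snippet below is a minimal sketch, not the authors' implementation, of how such a gap can be measured at a fixed decision threshold; the function name, threshold, and synthetic data are illustrative assumptions.

```python
import numpy as np

def predictive_parity_gap(y_true, y_score, group, threshold=0.5):
    """Max-min spread in positive predictive value (PPV) across groups."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    group = np.asarray(group)
    ppvs = []
    for g in np.unique(group):
        mask = (group == g) & (y_pred == 1)
        if mask.sum() == 0:
            continue  # group has no positive predictions; skip it
        ppvs.append(y_true[mask].mean())
    return float(max(ppvs) - min(ppvs)) if ppvs else 0.0

# Illustrative usage with synthetic data (assumed, not from the paper).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)   # true labels
s = rng.random(1000)                # model scores
a = rng.integers(0, 2, size=1000)   # binary sensitive attribute
print(predictive_parity_gap(y, s, a))
```

In the paper's setting, interventions aim to shrink this gap without access to the group attribute at inference time; per the abstract, the distributionally robust methods use group data only during training.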
Related papers
- Deriving Causal Order from Single-Variable Interventions: Guarantees & Algorithm [14.980926991441345]
We show that the causal order can be effectively extracted from datasets containing interventional data under realistic assumptions about the data distribution.
We introduce interventional faithfulness, which relies on comparisons between the marginal distributions of each variable across observational and interventional settings.
We also introduce Intersort, an algorithm designed to infer the causal order from datasets containing large numbers of single-variable interventions.
arXiv Detail & Related papers (2024-05-28T16:07:17Z) - Group Robust Classification Without Any Group Information [5.053622900542495]
This study contends that current bias-unsupervised approaches to group robustness continue to rely on group information to achieve optimal performance.
Bias labels are still crucial for effective model selection, restricting the practicality of these methods in real-world scenarios.
We propose a revised methodology for training and validating debiased models in an entirely bias-unsupervised manner.
arXiv Detail & Related papers (2023-10-28T01:29:18Z) - Improving Link Prediction in Social Networks Using Local and Global
Features: A Clustering-based Approach [0.0]
We propose an approach that combines the two groups of methods (local and global features) to tackle the link prediction problem.
Our two-phase developed method firstly determines new features related to the position and dynamic behavior of nodes.
Then, a subspace clustering algorithm is applied to group social objects based on the computed similarity measures.
arXiv Detail & Related papers (2023-05-17T14:45:02Z) - Cluster-level pseudo-labelling for source-free cross-domain facial
expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - DRFLM: Distributionally Robust Federated Learning with Inter-client
Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)