Algorithmic Decision Making with Conditional Fairness
- URL: http://arxiv.org/abs/2006.10483v5
- Date: Sun, 18 Jul 2021 17:20:24 GMT
- Title: Algorithmic Decision Making with Conditional Fairness
- Authors: Renzhe Xu, Peng Cui, Kun Kuang, Bo Li, Linjun Zhou, Zheyan Shen, Wei Cui
- Abstract summary: We define conditional fairness as a more sound fairness metric by conditioning on the fair variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
- Score: 48.76267073341723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays fairness issues have raised great concerns in decision-making
systems. Various fairness notions have been proposed to measure the degree to
which an algorithm is unfair. In practice, there frequently exists a certain
set of variables, which we term fair variables, that are pre-decision
covariates such as users' choices. The effects of fair variables are
irrelevant in assessing the fairness of the decision support algorithm. We
thus define conditional fairness as a more sound fairness metric by
conditioning on the fair variables. Given different prior knowledge of fair
variables, we demonstrate that traditional fairness notions, such as
demographic parity and equalized odds, are special cases of our conditional
fairness notion. Moreover, we propose a Derivable Conditional Fairness
Regularizer (DCFR), which can be integrated into any decision-making model,
to track the trade-off between precision and fairness of algorithmic
decision making. Specifically, an adversarial-representation-based
conditional independence loss is proposed in our DCFR to measure the degree
of unfairness. With extensive experiments on three real-world datasets, we
demonstrate the advantages of our conditional fairness notion and DCFR.
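In the abstract's terms (symbols assumed here: prediction \hat{Y}, sensitive attribute A, fair variables F, outcome Y), conditional fairness can be written as a conditional-independence requirement; a minimal LaTeX rendering:

```latex
% Conditional fairness: the prediction is independent of the sensitive
% attribute A once the fair variables F are conditioned on.
\hat{Y} \perp\!\!\!\perp A \mid F
% Special cases recovered by the choice of F, as claimed in the abstract:
%   F = \emptyset :  \hat{Y} \perp\!\!\!\perp A          (demographic parity)
%   F = Y         :  \hat{Y} \perp\!\!\!\perp A \mid Y   (equalized odds)
```

The adversarial, representation-based conditional-independence loss could be realized along the following lines; this is a minimal sketch under assumed module names, dimensions, and weighting, not the authors' DCFR implementation:

```python
import torch
import torch.nn as nn

# Sketch of an adversarial conditional-independence regularizer in the
# spirit of DCFR. All dimensions, module names, and the weighting scheme
# are illustrative assumptions, not the authors' code.

d_x, d_f, d_z = 16, 4, 8            # feature / fair-variable / representation dims
encoder = nn.Sequential(nn.Linear(d_x, d_z), nn.ReLU())
predictor = nn.Linear(d_z, 1)        # task head producing the decision logit
adversary = nn.Sequential(           # tries to recover the sensitive attribute
    nn.Linear(d_z + d_f, 8), nn.ReLU(), nn.Linear(8, 1)
)                                    # it also sees F, giving the conditioning on F

bce = nn.BCEWithLogitsLoss()

def dcfr_style_losses(x, f, a, y, lam=1.0):
    """Return (encoder/predictor loss, adversary loss) for one batch.

    x: features, f: fair variables, a: sensitive attribute in {0, 1},
    y: binary decision label; all float tensors.
    """
    z = encoder(x)
    task_loss = bce(predictor(z).squeeze(-1), y)
    # If the adversary cannot predict A from (Z, F), the representation
    # approximately satisfies independence of A given F.
    adv_logits = adversary(torch.cat([z, f], dim=-1)).squeeze(-1)
    adv_loss = bce(adv_logits, a)
    # Encoder/predictor minimize task_loss - lam * adv_loss (a min-max game);
    # the adversary separately minimizes adv_loss.
    return task_loss - lam * adv_loss, adv_loss
```

In an actual training loop, one optimizer would update the encoder and predictor on the combined loss while a second, alternating optimizer trains the adversary on adv_loss.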
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Remembering to Be Fair: Non-Markovian Fairness in Sequential Decision Making [15.289872878202399]
We explore the notion of non-Markovian fairness in the context of sequential decision making.
We identify properties of non-Markovian fairness, including notions of long-term, anytime, periodic, and bounded fairness.
We introduce the FairQCM algorithm, which can automatically augment its training data to improve sample efficiency in the synthesis of fair policies.
arXiv Detail & Related papers (2023-12-08T01:04:36Z)
- Causal Context Connects Counterfactual Fairness to Robust Prediction and Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
arXiv Detail & Related papers (2023-10-30T16:07:57Z)
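As a toy illustration of the testing claim above, a simple group-fairness metric such as the demographic parity gap is straightforward to measure; the function below is a generic sketch, not code from the paper:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-decision rates between two groups.

    y_pred: binary decisions in {0, 1}; group: binary group labels in {0, 1}.
    Under the causal conditions the paper identifies, a small gap on such a
    simple group metric can serve as a test of counterfactual fairness.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

# Hypothetical usage with a trained model's decisions:
# gap = demographic_parity_gap(decisions, sensitive_attribute)
```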
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
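For the zero-shot classification audit mentioned above, a per-group accuracy comparison with the openai/CLIP package could look like the sketch below; the prompt set, dataset, and group split are placeholders, not the paper's benchmark:

```python
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
prompts = ["a photo of a doctor", "a photo of a nurse"]  # assumed classes
text_tokens = clip.tokenize(prompts).to(device)

@torch.no_grad()
def zero_shot_accuracy(images, labels):
    """images: batch already passed through `preprocess`; labels: tensor of
    indices into `prompts`. Returns zero-shot top-1 accuracy on this batch."""
    img = model.encode_image(images.to(device))
    txt = model.encode_text(text_tokens)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    preds = (img @ txt.T).argmax(dim=-1).cpu()
    return (preds == labels).float().mean().item()

# Bias probe: accuracy gap between demographic subsets (group split assumed):
# gap = abs(zero_shot_accuracy(imgs_a, y_a) - zero_shot_accuracy(imgs_b, y_b))
```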
- BaBE: Enhancing Fairness via Estimation of Latent Explaining Variables [6.7932860553262415]
We consider the problem of unfair discrimination between two groups and propose a pre-processing method to achieve fairness.
BaBE is an approach based on a combination of Bayesian inference and the Expectation-Maximization (EM) method.
We show, by experiments on synthetic and real data sets, that our approach provides a good level of fairness as well as high accuracy.
arXiv Detail & Related papers (2023-07-06T09:53:56Z)
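The Bayes-plus-EM machinery in the summary can be illustrated with a generic latent-variable EM loop; the two-component Gaussian mixture below is a toy stand-in, not the BaBE estimator itself:

```python
import numpy as np

def _gaussian_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def em_two_gaussians(x, n_iter=50):
    """EM for a 1-D two-component Gaussian mixture; returns the posterior
    responsibilities r, i.e. a Bayes estimate of the latent component."""
    x = np.asarray(x, dtype=float)
    pi = 0.5
    mu = np.array([x.min(), x.max()])
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior probability that each point came from component 1
        p0 = (1.0 - pi) * _gaussian_pdf(x, mu[0], sd[0])
        p1 = pi * _gaussian_pdf(x, mu[1], sd[1])
        r = p1 / (p0 + p1)
        # M-step: re-estimate mixture weight, means, and standard deviations
        pi = r.mean()
        mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
        sd = np.array([
            np.sqrt(np.average((x - mu[0]) ** 2, weights=1 - r)),
            np.sqrt(np.average((x - mu[1]) ** 2, weights=r)),
        ])
    return pi, mu, sd, r
```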
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
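A hedged sketch of a smoothness-matching penalty across groups is given below; it uses a first-order gradient-norm proxy for curvature, so it illustrates the idea rather than reproducing the CUMA objective:

```python
import torch

def smoothness_gap_penalty(model, loss_fn, batch_a, batch_b):
    """Penalize the difference in loss-gradient norms (a smoothness proxy)
    computed on the two demographic groups; names here are assumptions."""
    def grad_norm(batch):
        x, y = batch
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, list(model.parameters()),
                                    create_graph=True)
        return torch.sqrt(sum(g.pow(2).sum() for g in grads))
    return (grad_norm(batch_a) - grad_norm(batch_b)) ** 2

# Hypothetical usage inside a training step:
# total = task_loss + beta * smoothness_gap_penalty(model, loss_fn, a_batch, b_batch)
```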
- On Learning and Testing of Counterfactual Fairness through Data Preprocessing [27.674565351048077]
Machine learning has become more important in real-life decision-making, but people are concerned about the ethical problems it may bring when used improperly.
Recent work brings the discussion of machine learning fairness into the causal framework and elaborates on the concept of Counterfactual Fairness.
We develop the Fair Learning through dAta Preprocessing (FLAP) algorithm to learn counterfactually fair decisions from biased training data.
arXiv Detail & Related papers (2022-02-25T00:21:46Z)
- Unfairness Despite Awareness: Group-Fair Classification with Strategic Agents [37.31138342300617]
We show that strategic agents may possess both the ability and the incentive to manipulate an observed feature vector in order to attain a more favorable outcome.
We further demonstrate that both the increased selectiveness of the fair classifier and, consequently, the loss of fairness arise when performing fair learning on domains in which the advantaged group is overrepresented.
arXiv Detail & Related papers (2021-12-06T02:42:43Z)