Longitudinal Fairness with Censorship
- URL: http://arxiv.org/abs/2203.16024v2
- Date: Thu, 31 Mar 2022 01:13:09 GMT
- Title: Longitudinal Fairness with Censorship
- Authors: Wenbin Zhang and Jeremy C. Weiss
- Abstract summary: We devise applicable fairness measures, propose a debiasing algorithm, and provide necessary theoretical constructs to bridge fairness with and without censorship.
Our experiments on four censored datasets confirm the utility of our approach.
- Score: 1.5688552250473473
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Recent works in artificial intelligence fairness attempt to mitigate
discrimination by proposing constrained optimization programs that achieve
parity for some fairness statistic. Most assume availability of the class
label, which is impractical in many real-world applications such as precision
medicine, actuarial analysis and recidivism prediction. Here we consider
fairness in longitudinal right-censored environments, where the time to event
might be unknown, resulting in censorship of the class label and
inapplicability of existing fairness studies. We devise applicable fairness
measures, propose a debiasing algorithm, and provide necessary theoretical
constructs to bridge fairness with and without censorship for these important
and socially-sensitive tasks. Our experiments on four censored datasets confirm
the utility of our approach.
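The abstract names two concrete ingredients: fairness measures that remain applicable when class labels are censored, and a debiasing algorithm. The paper's exact constructions are not reproduced here; as a rough illustration of the first ingredient only, the sketch below computes a group gap in Harrell's concordance index over right-censored survival data. The function names, the binary-group assumption, and the choice of the C-index as the per-group statistic are illustrative assumptions, not the authors' definitions.

```python
# Minimal sketch (assumed, not the paper's measure): a censorship-aware
# group fairness gap based on the concordance index (C-index), computed
# separately for each protected group over right-censored data.
import numpy as np

def c_index(time, event, risk):
    """Harrell-style concordance: fraction of comparable pairs that the
    risk score orders correctly. A pair (i, j) is comparable when the
    earlier time belongs to an uncensored subject (event == 1)."""
    concordant, comparable = 0.0, 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:   # i failed before j
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else np.nan

def censored_fairness_gap(time, event, risk, group):
    """Absolute C-index difference between two protected groups;
    a value near 0 means equally reliable ranking for both groups."""
    g = np.asarray(group)
    scores = [c_index(time[g == v], event[g == v], risk[g == v])
              for v in np.unique(g)]
    return abs(scores[0] - scores[1])

# Toy usage with synthetic right-censored data (event == 0 means censored).
rng = np.random.default_rng(0)
time = rng.exponential(10, size=200)
event = rng.integers(0, 2, size=200)        # 1 = event observed, 0 = censored
risk = -time + rng.normal(0, 2, size=200)   # higher risk -> earlier event
group = rng.integers(0, 2, size=200)        # binary protected attribute
print(censored_fairness_gap(time, event, risk, group))
```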
Related papers
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z) - Causal Fairness under Unobserved Confounding: A Neural Sensitivity Framework [24.91413609641092]
We analyze the sensitivity of causal fairness to unobserved confounding.
We propose a novel neural framework for learning fair predictions.
To the best of our knowledge, ours is the first work to study causal fairness under unobserved confounding.
arXiv Detail & Related papers (2023-11-30T11:11:26Z) - Individual Fairness under Uncertainty [26.183244654397477]
Algorithmic fairness is an established research area in machine learning (ML).
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z) - Fairness in Matching under Uncertainty [78.39459690570531]
The rise of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in these settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z) - Fair Decision-making Under Uncertainty [1.5688552250473473]
We study a longitudinal censored learning problem subject to fairness constraints.
We show how the newly devised fairness notions involving censored information and the general framework for fair predictions in the presence of censorship allow us to measure and mitigate discrimination under uncertainty.
arXiv Detail & Related papers (2023-01-29T05:42:39Z) - Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces; a rough kernel-dependence sketch appears after this list.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely-used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
arXiv Detail & Related papers (2022-05-05T01:57:58Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Beyond traditional assumptions in fair machine learning [5.029280887073969]
This thesis scrutinizes common assumptions underlying traditional machine learning approaches to fairness in consequential decision making.
We show that group fairness criteria purely based on statistical properties of observed data are fundamentally limited.
We overcome the assumption that sensitive data is readily available in practice.
arXiv Detail & Related papers (2021-01-29T09:02:15Z) - A Statistical Test for Probabilistic Fairness [11.95891442664266]
We propose a statistical hypothesis test for detecting unfair classifiers.
We show both theoretically and empirically that the proposed test is correct.
In addition, the proposed framework offers interpretability by identifying the most favorable perturbation of the data.
arXiv Detail & Related papers (2020-12-09T00:20:02Z) - Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
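The FairCOCCO entry above builds its fairness measure on cross-covariance operators in reproducing kernel Hilbert spaces. The sketch below is in the same spirit but is not the paper's definition: it uses a plain biased HSIC estimate between model scores and a sensitive attribute, with RBF kernels and a fixed bandwidth chosen purely for illustration.

```python
# Rough sketch (assumed) of a kernel-dependence fairness statistic in the
# spirit of cross-covariance-operator measures such as FairCOCCO.
# Uses the standard biased HSIC estimator, not the paper's normalisation.
import numpy as np

def rbf_gram(x, bandwidth=1.0):
    """RBF Gram matrix for a 1-D sample."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    sq = (x - x.T) ** 2
    return np.exp(-sq / (2 * bandwidth ** 2))

def hsic(scores, sensitive, bandwidth=1.0):
    """Biased HSIC estimate: trace(K H L H) / n^2. Values near 0 suggest the
    scores carry little kernel-detectable dependence on the attribute."""
    n = len(scores)
    K = rbf_gram(scores, bandwidth)
    L = rbf_gram(sensitive, bandwidth)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return float(np.trace(K @ H @ L @ H)) / n ** 2

# Toy usage: scores that leak the sensitive attribute yield a larger HSIC.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=300)                 # sensitive attribute
fair_scores = rng.normal(size=300)               # independent of a
biased_scores = fair_scores + 0.8 * a            # leaks the attribute
print(hsic(fair_scores, a), hsic(biased_scores, a))
```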
This list is automatically generated from the titles and abstracts of the papers on this site.