Beyond traditional assumptions in fair machine learning
- URL: http://arxiv.org/abs/2101.12476v1
- Date: Fri, 29 Jan 2021 09:02:15 GMT
- Title: Beyond traditional assumptions in fair machine learning
- Authors: Niki Kilbertus
- Abstract summary: This thesis scrutinizes common assumptions underlying traditional machine learning approaches to fairness in consequential decision making.
We show that group fairness criteria purely based on statistical properties of observed data are fundamentally limited.
We overcome the assumption that sensitive data is readily available in practice.
- Score: 5.029280887073969
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This thesis scrutinizes common assumptions underlying traditional machine
learning approaches to fairness in consequential decision making. After
challenging the validity of these assumptions in real-world applications, we
propose ways to move forward when they are violated. First, we show that group
fairness criteria purely based on statistical properties of observed data are
fundamentally limited. Revisiting this limitation from a causal viewpoint we
develop a more versatile conceptual framework, causal fairness criteria, and
first algorithms to achieve them. We also provide tools to analyze how
sensitive a believed-to-be causally fair algorithm is to misspecifications of
the causal graph. Second, we overcome the assumption that sensitive data is
readily available in practice. To this end we devise protocols based on secure
multi-party computation to train, validate, and contest fair decision
algorithms without requiring users to disclose their sensitive data or decision
makers to disclose their models. Finally, we also accommodate the fact that
outcome labels are often only observed when a certain decision has been made.
We suggest a paradigm shift away from training predictive models towards
directly learning decisions to relax the traditional assumption that labels can
always be recorded. The main contribution of this thesis is the development of
theoretically substantiated and practically feasible methods to move research
on fair machine learning closer to real-world applications.
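To see what "purely statistical" means here: a group fairness criterion such as demographic parity is a function of the observed joint distribution of decisions and group membership alone. A minimal sketch (data and names are illustrative, not from the thesis):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups.

    A purely observational group fairness criterion: it depends only on
    the joint distribution of decisions and the sensitive attribute,
    which is exactly the limitation the thesis analyzes.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Illustrative data: binary decisions for two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```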
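The secure multi-party computation protocols of the second part are beyond a short example, but the primitive such protocols typically build on, additive secret sharing over a finite ring, fits in a few lines (a generic textbook construction, not the thesis's actual protocol):

```python
import secrets

Q = 2**61 - 1  # prime modulus defining the ring for the shares

def share(x, n_parties=3):
    """Split integer x into n additive shares that sum to x mod Q.

    Any subset of fewer than n shares is uniformly random and reveals
    nothing about x, so each party can compute on its share without
    ever seeing the underlying sensitive value.
    """
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Parties can add shared values locally: shares of x plus shares of y
# reconstruct to x + y. Primitives like this are the building blocks
# for training and auditing a model on data no single party holds in
# the clear.
x_shares = share(42)
y_shares = share(100)
z_shares = [(a + b) % Q for a, b in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 142
```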
Related papers
- Parametric Fairness with Statistical Guarantees [0.46040036610482665]
We extend the concept of Demographic Parity to incorporate distributional properties in predictions, allowing expert knowledge to be used in the fair solution.
We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges.
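In that spirit, a minimal way to inspect distributional (rather than single-rate) disparities is to compare group-wise quantiles of the predicted wages. This generic sketch only illustrates the notion, not the paper's parametric method:

```python
import numpy as np

def quantile_gaps(pred_wages, group, qs=(0.25, 0.5, 0.75)):
    """Absolute gaps between group-wise quantiles of predicted wages.

    Demographic parity compares a single rate; checking several
    quantiles exposes distributional disparities that one number can
    hide. (Generic illustration, not the paper's method.)
    """
    pred_wages = np.asarray(pred_wages, dtype=float)
    group = np.asarray(group)
    q0 = np.quantile(pred_wages[group == 0], qs)
    q1 = np.quantile(pred_wages[group == 1], qs)
    return dict(zip(qs, np.abs(q0 - q1)))

wages = [22, 30, 35, 50, 20, 24, 28, 33]
grp   = [0,  0,  0,  0,  1,  1,  1,  1]
print(quantile_gaps(wages, grp))
```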
arXiv Detail & Related papers (2023-10-31T14:52:39Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
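For reference, the counterfactual fairness criterion of Kusner et al. (2017), which this paper and FLAP below both build on, requires that for an individual with features $X = x$ and sensitive attribute $A = a$, the prediction would have been distributed identically had the attribute counterfactually taken any other value $a'$:

```latex
P\left(\hat{Y}_{A \leftarrow a}(U) = y \,\middle|\, X = x, A = a\right)
  = P\left(\hat{Y}_{A \leftarrow a'}(U) = y \,\middle|\, X = x, A = a\right)
\quad \text{for all } y \text{ and } a'.
```

Evaluating the counterfactual quantities requires a causal model over $(U, X, A)$, which is exactly the prerequisite CLAIRE aims to drop.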
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
The rise of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
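A toy version of such an LP, with hypothetical utilities and an equal-merit parity constraint standing in for the paper's axioms:

```python
import numpy as np
from scipy.optimize import linprog

# Three possible allocations with (hypothetical) platform utilities.
utilities = np.array([3.0, 2.0, 1.0])

# Which of two equally meritorious candidates each allocation serves.
serves_a = np.array([1.0, 0.0, 1.0])
serves_b = np.array([0.0, 1.0, 1.0])

# Decision variable: a probability distribution p over allocations.
# Fairness (individual, under equal merit): equal service probability.
A_eq = np.vstack([np.ones(3),            # probabilities sum to 1
                  serves_a - serves_b])  # P(serve a) = P(serve b)
b_eq = np.array([1.0, 0.0])

# Maximize expected utility = minimize its negation.
res = linprog(-utilities, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3)
print(res.x, -res.fun)  # p = [0.5, 0.5, 0.0], expected utility 2.5
```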
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- On Learning and Testing of Counterfactual Fairness through Data Preprocessing [27.674565351048077]
Machine learning has become more important in real-life decision making, but people are concerned about the ethical problems it may bring when used improperly.
Recent work brings the discussion of machine learning fairness into the causal framework and elaborates on the concept of Counterfactual Fairness.
We develop the Fair Learning through dAta Preprocessing (FLAP) algorithm to learn counterfactually fair decisions from biased training data.
arXiv Detail & Related papers (2022-02-25T00:21:46Z)
- Knowledge-driven Active Learning [70.37119719069499]
Active learning strategies aim at minimizing the amount of labelled data required to train a Deep Learning model.
Most active learning strategies are based on uncertainty-driven sample selection, and are often restricted to samples lying close to the decision boundary.
Here we propose to take into consideration common domain-knowledge and enable non-expert users to train a model with fewer samples.
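For contrast, the uncertainty-driven baseline the authors move beyond can be sketched as follows (generic entropy-based selection, not the paper's knowledge-driven strategy):

```python
import numpy as np

def entropy_select(proba, k):
    """Pick the k unlabeled samples with the highest predictive entropy.

    `proba` is an (n_samples, n_classes) array of model probabilities.
    High entropy means the model is uncertain; for binary classifiers
    this concentrates selection near the decision boundary, which is
    the restriction the knowledge-driven approach tries to lift.
    """
    proba = np.clip(proba, 1e-12, 1.0)
    entropy = -(proba * np.log(proba)).sum(axis=1)
    return np.argsort(entropy)[-k:]

proba = np.array([[0.95, 0.05], [0.55, 0.45], [0.70, 0.30]])
print(entropy_select(proba, k=1))  # -> [1], the most uncertain sample
```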
arXiv Detail & Related papers (2021-10-15T06:11:53Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
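The quantification idea can be illustrated with the textbook adjusted classify-and-count estimator: rather than trusting per-individual predictions of the unobserved sensitive attribute, correct the aggregate prevalence using the attribute classifier's known error rates (a standard estimator, not necessarily the paper's exact method):

```python
def adjusted_prevalence(pred_positive_rate, tpr, fpr):
    """Adjusted classify-and-count estimate of a group's prevalence.

    If a sensitive-attribute classifier has true/false positive rates
    tpr and fpr, the observed rate r of positive predictions satisfies
    r = tpr * p + fpr * (1 - p), so solve for the true prevalence p.
    """
    p = (pred_positive_rate - fpr) / (tpr - fpr)
    return min(max(p, 0.0), 1.0)  # clip to a valid probability

# Classifier flags 40% of users; tpr=0.8 and fpr=0.1 are known from a
# small validated sample: estimated true group prevalence ~ 0.43.
print(adjusted_prevalence(0.40, tpr=0.8, fpr=0.1))
```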
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Just Label What You Need: Fine-Grained Active Selection for Perception and Prediction through Partially Labeled Scenes [78.23907801786827]
We introduce generalizations that make our approach both cost-aware and able to select examples at a fine granularity through partially labeled scenes.
Our experiments on a real-world, large-scale self-driving dataset suggest that fine-grained selection can improve the performance across perception, prediction, and downstream planning tasks.
arXiv Detail & Related papers (2021-04-08T17:57:41Z)
- All of the Fairness for Edge Prediction with Optimal Transport [11.51786288978429]
We study the problem of fairness for the task of edge prediction in graphs.
We propose an embedding-agnostic repairing procedure for the adjacency matrix of an arbitrary graph, with a trade-off between group and individual fairness.
arXiv Detail & Related papers (2020-10-30T15:33:13Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing concern accompanying the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
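The pseudo-labeling step can be sketched generically (illustrative scikit-learn usage; the paper's framework adds its fairness treatment on top):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(X_lab, y_lab, X_unlab, threshold=0.9):
    """Grow the labeled set with confident model predictions.

    Train on the labeled data, predict on the unlabeled pool, and keep
    only predictions whose confidence clears `threshold`; the enlarged
    dataset then feeds the fairness-aware pre-processing stage.
    """
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    proba = model.predict_proba(X_unlab)
    confident = proba.max(axis=1) >= threshold
    X_new = np.vstack([X_lab, X_unlab[confident]])
    y_pseudo = model.classes_[proba[confident].argmax(axis=1)]
    y_new = np.concatenate([y_lab, y_pseudo])
    return X_new, y_new
```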
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
- Fair Meta-Learning For Few-Shot Classification [7.672769260569742]
A machine learning algorithm trained on biased data tends to make unfair predictions.
We propose a novel fair, fast-adapted few-shot meta-learning approach that efficiently mitigates biases during meta-training.
We empirically demonstrate that our proposed approach efficiently mitigates biases on model output and generalizes both accuracy and fairness to unseen tasks.
arXiv Detail & Related papers (2020-09-23T22:33:47Z)
- Improving Fair Predictions Using Variational Inference In Causal Models [8.557308138001712]
The importance of algorithmic fairness grows with the increasing impact machine learning has on people's lives.
Recent work on fairness metrics shows the need for causal reasoning in fairness constraints.
This research aims to contribute to machine learning techniques which honour our ethical and legal boundaries.
arXiv Detail & Related papers (2020-08-25T08:27:11Z)