An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning
- URL: http://arxiv.org/abs/2104.14504v1
- Date: Thu, 29 Apr 2021 17:18:17 GMT
- Title: An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning
- Authors: Cyrus Cousins
- Abstract summary: We define malfare, measuring overall societal harm (rather than wellbeing).
We propose an equivalently-axiomatically justified alternative, and study the resulting computational and statistical learning questions.
We show broad conditions under which, with appropriate modifications, many standard PAC-learners may be converted to fair-PAC learners.
- Score: 5.634825161148484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address an inherent difficulty in welfare-theoretic fair machine learning,
proposing an equivalently-axiomatically justified alternative, and studying the
resulting computational and statistical learning questions. Welfare metrics
quantify overall wellbeing across a population of one or more groups, and
welfare-based objectives and constraints have recently been proposed to
incentivize fair machine learning methods to produce satisfactory solutions
that consider the diverse needs of multiple groups. Unfortunately, many
machine-learning problems are more naturally cast as loss minimization, rather
than utility maximization tasks, which complicates direct application of
welfare-centric methods to fair-ML tasks. In this work, we define a
complementary measure, termed malfare, measuring overall societal harm (rather
than wellbeing), with axiomatic justification via the standard axioms of
cardinal welfare. We then cast fair machine learning as a direct malfare
minimization problem, where a group's malfare is their risk (expected loss).
Surprisingly, the axioms of cardinal welfare (malfare) dictate that this is not
equivalent to simply defining utility as negative loss. Building upon these
concepts, we define fair-PAC learning, where a fair PAC-learner is an algorithm
that learns an $\varepsilon$-$\delta$ malfare-optimal model with bounded sample
complexity, for any data distribution, and for any malfare concept. We show
broad conditions under which, with appropriate modifications, many standard
PAC-learners may be converted to fair-PAC learners. This places fair-PAC
learning on firm theoretical ground, as it yields statistical, and in some
cases computational, efficiency guarantees for many well-studied
machine-learning models, and is also practically relevant, as it democratizes
fair ML by providing concrete training algorithms and rigorous generalization
guarantees for these models.
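A hedged illustration of the central object: in the paper, the standard axioms of cardinal welfare single out power means, so a malfare function takes the form $M_p(S; w) = (\sum_i w_i S_i^p)^{1/p}$ for $p \geq 1$, applied to per-group risks $S_i$ with group weights $w_i$. The sketch below is a minimal implementation of that form; the function name and example values are illustrative, not taken from the paper.
```python
import numpy as np

def malfare(risks, weights, p=2.0):
    """p-power-mean malfare of per-group risks (expected losses).

    Illustrative sketch: M_p(S; w) = (sum_i w_i * S_i**p) ** (1/p), p >= 1.
    p = 1 is the utilitarian case (weighted-average risk), and
    p -> infinity is the egalitarian case (worst-group risk).
    """
    risks = np.asarray(risks, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()      # normalize group weights
    if np.isinf(p):
        return float(risks.max())          # egalitarian limit
    return float((weights @ risks**p) ** (1.0 / p))

# Two groups with risks 0.1 and 0.5: larger p weighs the
# worse-off group's risk more heavily than the plain average.
risks, weights = [0.1, 0.5], [0.5, 0.5]
for p in (1.0, 2.0, np.inf):
    print(p, malfare(risks, weights, p))   # 0.3, then ~0.361, then 0.5
```
Read this way, the abstract's remark that malfare minimization is not welfare maximization with utility defined as negative loss becomes natural: the same axioms applied to wellbeing versus harm select different aggregators (roughly, $p \leq 1$ power means for welfare and $p \geq 1$ for malfare), so negating losses does not turn one into the other.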
Related papers
- Parametric Fairness with Statistical Guarantees [0.46040036610482665]
We extend the concept of Demographic Parity to incorporate distributional properties in predictions, allowing expert knowledge to be used in the fair solution.
We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges.
arXiv Detail & Related papers (2023-10-31T14:52:39Z)
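For context on the entry above: a minimal sketch of the standard Demographic Parity gap that the paper extends with distributional properties (the parametric extension itself is specific to that paper and is not reproduced here; the function name is illustrative).
```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    Demographic Parity asks P(Yhat = 1 | A = 0) = P(Yhat = 1 | A = 1);
    this gap is zero exactly when parity holds.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(rate0 - rate1)

# Group 1 receives positive predictions twice as often: gap = 0.5.
print(demographic_parity_gap([1, 0, 1, 1], [0, 0, 1, 1]))
```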
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only a few labeled samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-02-16T01:07:58Z)
- Individual Fairness under Uncertainty [26.183244654397477]
Algorithmic fairness is an established area of machine learning (ML) research.
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
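FairCOCCO's exact statistic (a normalized cross-covariance operator criterion) is not reproduced here; as a hedged stand-in, the sketch below computes the closely related unnormalized HSIC dependence between predictions and a sensitive attribute, under the assumption that near-zero kernel dependence is what such a fairness measure should report for fair predictors.
```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """RBF-kernel Gram matrix of a 1-D sample."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    return np.exp(-((x - x.T) ** 2) / (2 * sigma**2))

def hsic(preds, sensitive, sigma=1.0):
    """Biased empirical HSIC: (1/n^2) * tr(K H L H), H the centering matrix.

    Values near zero indicate (kernel-)independence of predictions
    from the sensitive attribute; larger values indicate dependence.
    """
    n = len(preds)
    K, L = rbf_gram(preds, sigma), rbf_gram(sensitive, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H) / n**2)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 200)                  # sensitive attribute
fair = rng.normal(size=200)                  # independent of a
unfair = a + 0.1 * rng.normal(size=200)      # strongly dependent on a
print(hsic(fair, a), hsic(unfair, a))        # small vs. much larger
```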
- Towards A Holistic View of Bias in Machine Learning: Bridging Algorithmic Fairness and Imbalanced Learning [8.602734307457387]
A key element in achieving algorithmic fairness with respect to protected groups is the simultaneous reduction of class and protected group imbalance in the underlying training data.
We propose a novel oversampling algorithm, Fair Oversampling, that addresses both skewed class distributions and protected features.
arXiv Detail & Related papers (2022-07-13T09:48:52Z)
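The paper's Fair Oversampling synthesizes new points (SMOTE-style); as a hedged simplification, the sketch below shows only the shared balancing idea: resample until every (class, protected-group) cell is as large as the largest one. All names are illustrative.
```python
import numpy as np

def fair_oversample(X, y, a, rng=None):
    """Duplicate examples until every (class, group) cell is equally large.

    Simplified stand-in for Fair Oversampling: it balances class labels y
    and the protected attribute a jointly, but duplicates existing rows
    rather than synthesizing new examples as the actual algorithm does.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    X, y, a = np.asarray(X), np.asarray(y), np.asarray(a)
    cells = [np.flatnonzero((y == c) & (a == g))
             for c in np.unique(y) for g in np.unique(a)]
    cells = [ix for ix in cells if len(ix) > 0]    # skip empty cells
    target = max(len(ix) for ix in cells)
    take = np.concatenate([rng.choice(ix, size=target, replace=True)
                           for ix in cells])
    return X[take], y[take], a[take]

# Six examples with under-represented (class=1, group) cells are
# rebalanced so all four non-empty cells contribute equally.
X = np.arange(12).reshape(6, 2)
y = np.array([0, 0, 0, 0, 1, 1])
a = np.array([0, 0, 1, 1, 0, 1])
Xb, yb, ab = fair_oversample(X, y, a)
print(len(yb))  # 8: four cells x target size 2
```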
- Fair Algorithm Design: Fair and Efficacious Machine Scheduling [0.0]
There is often a dichotomy between fairness and efficacy: fair algorithms may proffer low-social-welfare solutions, whereas welfare-optimizing algorithms may be very unfair.
In this paper, we prove that this dichotomy can be overcome if we allow for a negligible amount of bias.
arXiv Detail & Related papers (2022-04-13T14:56:22Z)
- Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z)
- Model-Augmented Q-learning [112.86795579978802]
We propose a model-free reinforcement learning (MFRL) framework that is augmented with components of model-based RL.
Specifically, we propose to estimate not only the $Q$-values but also both the transition and the reward with a shared network.
We show that the proposed scheme, called Model-augmented $Q$-learning (MQL), obtains a policy-invariant solution which is identical to the solution obtained by learning with true reward.
arXiv Detail & Related papers (2021-02-07T17:56:50Z)
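A hedged PyTorch sketch of the architectural idea in the entry above: one shared trunk whose heads estimate the Q-values, the transition, and the reward. MQL's actual losses, training loop, and policy-invariance argument are in the paper and not reproduced; dimensions and names here are illustrative.
```python
import torch
import torch.nn as nn

class ModelAugmentedQNet(nn.Module):
    """Shared trunk with Q-value, next-state, and reward heads."""

    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, n_actions)                 # Q(s, .)
        self.next_head = nn.Linear(hidden, n_actions * state_dim)  # s'(s, .)
        self.reward_head = nn.Linear(hidden, n_actions)            # r(s, .)

    def forward(self, state):
        h = self.trunk(state)
        q = self.q_head(h)
        next_state = self.next_head(h).view(-1, q.shape[1], state.shape[1])
        reward = self.reward_head(h)
        return q, next_state, reward

net = ModelAugmentedQNet(state_dim=4, n_actions=2)
q, s_next, r = net(torch.randn(8, 4))
print(q.shape, s_next.shape, r.shape)  # (8, 2), (8, 2, 4), (8, 2)
```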
- Fairness in Machine Learning [15.934879442202785]
We show how causal Bayesian networks can play an important role in reasoning about and addressing fairness.
We present a unified framework that encompasses methods that can deal with different settings and fairness criteria.
arXiv Detail & Related papers (2020-12-31T18:38:58Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
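The entry above learns a Mahalanobis-style metric whose sensitive subspace is estimated from data; the sketch below is a hedged single-direction simplification: fit a direction predictive of the sensitive attribute, then measure distances after projecting it out. The least-squares fit and names are illustrative choices, not the paper's exact method.
```python
import numpy as np

def fair_metric_from_data(X, sensitive):
    """Build a metric that (approximately) ignores the sensitive direction.

    Simplified sketch: estimate one direction w predictive of the
    sensitive attribute by least squares, then measure Euclidean
    distance in the subspace orthogonal to w, so points differing
    mainly along that direction end up close together.
    """
    X = np.asarray(X, dtype=float)
    s = np.asarray(sensitive, dtype=float)
    w, *_ = np.linalg.lstsq(X, s - s.mean(), rcond=None)
    w = w / np.linalg.norm(w)
    P = np.eye(X.shape[1]) - np.outer(w, w)   # projector onto w's complement

    def distance(x1, x2):
        diff = np.asarray(x1, float) - np.asarray(x2, float)
        return float(np.linalg.norm(P @ diff))
    return distance

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
s = (X[:, 0] > 0).astype(float)       # sensitive signal lives in feature 0
d = fair_metric_from_data(X, s)
print(d([1.0, 0, 0], [-1.0, 0, 0]))   # small: differs only along the sensitive direction
print(d([0, 1.0, 0], [0, -1.0, 0]))   # ~2.0: a non-sensitive difference is preserved
```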
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints does not make a learning problem harder, in the sense that any PAC-learnable class is also PAC constrained learnable.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
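The entry above works through an empirical dual problem; the toy sketch below is a hedged illustration of that primal-dual idea on a finite model class: alternate a best response to the Lagrangian with gradient ascent on the multiplier. Numbers and names are made up for illustration.
```python
import numpy as np

def constrained_erm(obj, con, c, lr=0.5, steps=200):
    """Toy primal-dual loop for: min_t obj[t]  subject to  con[t] <= c.

    obj[t] / con[t] are empirical objective / constraint risks of model t.
    Primal step: best response to the Lagrangian over the finite class.
    Dual step: raise the multiplier while the constraint is violated.
    """
    obj, con = np.asarray(obj, float), np.asarray(con, float)
    lam, t_star = 0.0, 0
    for _ in range(steps):
        t_star = int(np.argmin(obj + lam * (con - c)))   # primal best response
        lam = max(0.0, lam + lr * (con[t_star] - c))     # dual ascent
    return t_star, lam

# Three models: accurate-but-unfair, balanced, fair-but-inaccurate.
obj = [0.10, 0.20, 0.40]   # objective risks
con = [0.50, 0.30, 0.10]   # constraint risks, required <= 0.25
print(constrained_erm(obj, con, c=0.25))
```
In examples like this the best response can oscillate between the two models straddling the constraint boundary, which loosely mirrors the role randomized solutions play in the paper's analysis.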
This list is automatically generated from the titles and abstracts of the papers on this site.