Fair Policy Targeting
- URL: http://arxiv.org/abs/2005.12395v3
- Date: Wed, 29 Jun 2022 18:11:22 GMT
- Title: Fair Policy Targeting
- Authors: Davide Viviano, Jelena Bradic
- Abstract summary: One of the major concerns of targeting interventions on individuals in social welfare programs is discrimination.
This paper addresses the question of the design of fair and efficient treatment allocation rules.
- Score: 0.6091702876917281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the major concerns of targeting interventions on individuals in social
welfare programs is discrimination: individualized treatments may induce
disparities across sensitive attributes such as age, gender, or race. This
paper addresses the question of the design of fair and efficient treatment
allocation rules. We adopt the non-maleficence perspective of "first, do no harm":
we select the fairest allocation within the Pareto frontier. We cast the
optimization into a mixed-integer linear program formulation, which can be
solved using off-the-shelf algorithms. We derive regret bounds on the
unfairness of the estimated policy function and small sample guarantees on the
Pareto frontier under general notions of fairness. Finally, we illustrate our
method using an application from education economics.
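As a concrete illustration of this kind of formulation, the sketch below solves a simplified version of the problem with off-the-shelf tools: among allocations whose estimated welfare stays within a (1 - eps) factor of the unconstrained optimum, it picks the one with the smallest treatment-rate gap between two groups. The inputs `tau_hat` (estimated treatment effects) and `group` are hypothetical, and the program is a simplification, not the paper's exact MILP.

```python
# Simplified fair-targeting MILP (illustrative, not the paper's exact program):
# minimize the treatment-rate gap between two groups subject to keeping
# estimated welfare within a (1 - eps) factor of the unconstrained optimum.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n, eps = 40, 0.05
tau_hat = rng.normal(0.1, 1.0, n)            # hypothetical estimated treatment effects
group = rng.integers(0, 2, n)                # hypothetical binary sensitive attribute

w_star = np.maximum(tau_hat, 0).sum()        # welfare of the unconstrained optimum
n_a, n_b = (group == 0).sum(), (group == 1).sum()
a = np.where(group == 0, 1.0 / n_a, -1.0 / n_b)  # coefficients of the rate gap

# Decision vector x = (z_1, ..., z_n, t): z_i treats unit i, t bounds |gap|.
c = np.r_[np.zeros(n), 1.0]                  # minimize t
constraints = [
    LinearConstraint([np.r_[tau_hat, 0.0]], lb=(1 - eps) * w_star),  # welfare floor
    LinearConstraint([np.r_[a, -1.0]], ub=0.0),                      # gap <= t
    LinearConstraint([np.r_[-a, -1.0]], ub=0.0),                     # -gap <= t
]
res = milp(c, constraints=constraints,
           integrality=np.r_[np.ones(n), 0.0],                       # z binary, t real
           bounds=Bounds(np.zeros(n + 1), np.r_[np.ones(n), np.inf]))
z = np.round(res.x[:n]).astype(int)
print("treated:", z.sum(), "rate gap:", abs(a @ z))
```

Tightening `eps` trades welfare for fairness, tracing out a constrained version of the frontier.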
Related papers
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
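To see why a baseline correction can reduce variance without introducing bias, note that importance weights have unit mean under full support, so `w - 1` is a mean-zero control variate and the corrected estimator stays unbiased for any coefficient `b`; the variance-minimizing choice has a regression-style closed form. The sketch below illustrates this standard construction (not necessarily the exact estimator characterized in the paper):

```python
# Control-variate baseline for inverse propensity scoring (IPS), a sketch.
# With full support E[w] = 1, so w - 1 has mean zero and subtracting
# b * (w - 1) keeps the estimator unbiased for any b; the variance-optimal
# coefficient is Cov(w * r, w) / Var(w), estimated here by plug-in.
import numpy as np

def ips_with_baseline(rewards, weights):
    w, r = np.asarray(weights), np.asarray(rewards)
    b = np.cov(w * r, w, ddof=0)[0, 1] / np.var(w)
    return np.mean(w * r - b * (w - 1.0))

rng = np.random.default_rng(1)
r = rng.binomial(1, 0.3, 10_000).astype(float)   # hypothetical logged rewards
w = rng.lognormal(0.0, 0.5, 10_000)
w /= w.mean()                                    # hypothetical weights, mean one
print(ips_with_baseline(r, w))
```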
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- Bayes-Optimal Fair Classification with Linear Disparity Constraints via Pre-, In-, and Post-processing [32.5214395114507]
We develop methods for Bayes-optimal fair classification, aiming to minimize classification error subject to given group fairness constraints.
We show that several popular disparity measures -- the deviations from demographic parity, equality of opportunity, and predictive equality -- are bilinear.
Our methods control disparity directly while achieving near-optimal fairness-accuracy tradeoffs.
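Because the demographic-parity gap is linear in group-wise positive rates, the post-processing route can be sketched with one decision threshold per group, each set so the group's positive rate hits a common target. The data below are hypothetical and the method is a simple illustration, not the paper's procedure:

```python
# Sketch: enforce (approximate) demographic parity by post-processing with
# one decision threshold per group, set at the quantile that yields a
# common target positive rate in each group.
import numpy as np

rng = np.random.default_rng(2)
groups = rng.integers(0, 2, 5000)
scores = rng.beta(2 + groups, 2, 5000)        # hypothetical group-skewed scores
target = 0.3                                  # common positive rate

t0, t1 = (np.quantile(scores[groups == g], 1 - target) for g in (0, 1))
y_hat = scores >= np.where(groups == 0, t0, t1)
print([y_hat[groups == g].mean() for g in (0, 1)])   # both close to 0.30
```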
arXiv Detail & Related papers (2024-02-05T08:59:47Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
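A minimal example of the kind of disparity metric such an evaluation computes, assuming hypothetical arrays of zero-shot predictions and sensitive-attribute annotations in place of real model outputs:

```python
# Sketch of one simple disparity metric for zero-shot predictions: the
# per-class gap, across sensitive groups, in how often the class is
# predicted. `preds` and `attr` stand in for real model outputs and
# demographic annotations.
import numpy as np

def prediction_rate_gap(preds, attr, n_classes):
    rates = np.stack([np.bincount(preds[attr == a], minlength=n_classes)
                      / (attr == a).sum() for a in np.unique(attr)])
    return rates.max(axis=0) - rates.min(axis=0)

rng = np.random.default_rng(3)
preds = rng.integers(0, 4, 1000)   # e.g. zero-shot class ids from a CLIP model
attr = rng.integers(0, 2, 1000)    # sensitive-attribute annotation per image
print(prediction_rate_gap(preds, attr, n_classes=4))
```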
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Optimal and Fair Encouragement Policy Evaluation and Learning [11.712023983596914]
We study causal identification, statistical variance-reduced estimation, and robust estimation of optimal treatment rules.
We develop a two-stage algorithm for solving over parametrized policy classes under general constraints to obtain variance-sensitive regret bounds.
arXiv Detail & Related papers (2023-09-12T20:45:30Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
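A minimal sketch of such a distortion mechanism, assuming per-parameter Gaussian noise whose scale varies by client and round (an illustrative instance, not the paper's optimal mechanism):

```python
# Illustrative parameter-distortion mechanism: each client perturbs each
# model parameter before communication, with a noise scale that can differ
# per parameter, per client, and per round (the utility-privacy knob).
import numpy as np

def distort_update(update, sigma, rng):
    """Add Gaussian noise with per-parameter scales `sigma` to an update."""
    return update + rng.normal(0.0, sigma, size=update.shape)

rng = np.random.default_rng(4)
n_clients, dim = 5, 8
updates = [rng.normal(size=dim) for _ in range(n_clients)]
for rnd in range(3):                              # communication rounds
    sigmas = [0.1 * (c + 1) / (rnd + 1) * np.ones(dim) for c in range(n_clients)]
    noisy = [distort_update(u, s, rng) for u, s in zip(updates, sigmas)]
    agg = np.mean(noisy, axis=0)                  # server-side aggregation
    print(f"round {rnd}: ||aggregate|| = {np.linalg.norm(agg):.3f}")
```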
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Provable Offline Preference-Based Reinforcement Learning [95.00042541409901]
We investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback.
We consider the general reward setting where the reward can be defined over the whole trajectory.
We introduce a new single-policy concentrability coefficient, which can be upper bounded by the per-trajectory concentrability.
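For context, the trajectory-level preference model standard in PbRL is the Bradley-Terry link over whole-trajectory rewards (the usual formulation, shown here for concreteness rather than quoted from the paper):

```latex
% Standard Bradley-Terry preference model over whole trajectories:
% a labeler prefers trajectory tau^1 to tau^0 with probability
\[
  \Pr\left(\tau^{1} \succ \tau^{0}\right)
    = \frac{\exp\bigl(r(\tau^{1})\bigr)}
           {\exp\bigl(r(\tau^{0})\bigr) + \exp\bigl(r(\tau^{1})\bigr)},
  \qquad
  r(\tau) = \sum_{t=1}^{H} r(s_{t}, a_{t}),
\]
% and the reward model is fit to preference data by maximum likelihood.
```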
arXiv Detail & Related papers (2023-05-24T07:11:26Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
The growing use of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
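A sketch of such an LP under simplified assumptions: the variables are marginal assignment probabilities, the objective is expected utility, and individual fairness is approximated by a Lipschitz constraint tying each candidate's total allocation probability to their estimated merit (the paper's axioms and program differ in detail):

```python
# Sketch of an LP over random allocations: choose marginal assignment
# probabilities x[i, j] that maximize expected utility while candidates
# with similar estimated merit receive similar total allocation probability.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n_cand, n_slots, L = 6, 3, 2.0
merit = rng.random(n_cand)                     # estimated merits (uncertain)
util = merit[:, None] + 0.1 * rng.random((n_cand, n_slots))

c = -util.ravel()                              # maximize => minimize negative
A_eq = np.zeros((n_slots, n_cand * n_slots))   # each slot fully allocated
for j in range(n_slots):
    A_eq[j, j::n_slots] = 1.0
A_ub, b_ub = [], []
for i in range(n_cand):                        # each candidate gets <= 1 slot
    row = np.zeros(n_cand * n_slots)
    row[i * n_slots:(i + 1) * n_slots] = 1.0
    A_ub.append(row); b_ub.append(1.0)
for i in range(n_cand):                        # |p_i - p_k| <= L * |m_i - m_k|
    for k in range(i + 1, n_cand):
        row = np.zeros(n_cand * n_slots)
        row[i * n_slots:(i + 1) * n_slots] = 1.0
        row[k * n_slots:(k + 1) * n_slots] = -1.0
        cap = L * abs(merit[i] - merit[k])
        A_ub.append(row); b_ub.append(cap)
        A_ub.append(-row); b_ub.append(cap)
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=np.ones(n_slots), bounds=(0, 1))
print(res.x.reshape(n_cand, n_slots).round(2))
```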
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Equality of Effort via Algorithmic Recourse [3.3517146652431378]
This paper proposes a method for measuring fairness through equality of effort by applying algorithmic recourse through minimal interventions.
We extend the existing definition of equality of effort and present an algorithm for its assessment via algorithmic recourse.
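For a linear classifier, minimal-intervention effort has a closed form (the L2 distance to the decision boundary), which makes a group-level equality-of-effort comparison easy to sketch; the model and data below are hypothetical, not the paper's full algorithm:

```python
# Sketch: equality of effort for a linear classifier. The minimal
# intervention that flips a negative decision is the L2 distance to the
# decision boundary; comparing average effort across groups gives a
# simple disparity measure.
import numpy as np

def recourse_effort(X, theta, bias):
    """L2 distance to the boundary for points currently classified negative."""
    margin = X @ theta + bias
    return np.maximum(0.0, -margin) / np.linalg.norm(theta)

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 3))
group = rng.integers(0, 2, 2000)
X[group == 1] -= 0.5                        # hypothetical group shift
theta, bias = np.array([1.0, 0.5, -0.2]), -0.1
eff = recourse_effort(X, theta, bias)
neg = eff > 0                               # denied individuals needing recourse
print({g: eff[neg & (group == g)].mean() for g in (0, 1)})
```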
arXiv Detail & Related papers (2022-11-21T22:41:24Z)
- Learning to be Fair: A Consequentialist Approach to Equitable Decision-Making [21.152377319502705]
We present an alternative framework for designing equitable algorithms.
In our approach, one first elicits stakeholder preferences over the space of possible decisions.
We then optimize over the space of decision policies, making trade-offs in a way that maximizes the elicited utility.
arXiv Detail & Related papers (2021-09-18T00:30:43Z)
- Accuracy and Fairness Trade-offs in Machine Learning: A Stochastic Multi-Objective Approach [0.0]
In the application of machine learning to real-life decision-making systems, the prediction outcomes might discriminate against people with sensitive attributes, leading to unfairness.
The commonly used strategy in fair machine learning is to include fairness as a constraint or a penalization term in the minimization of the prediction loss.
In this paper, we introduce a new approach to handle fairness by formulating a multi-objective optimization problem.
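A minimal sketch of one way to trace such a fairness-accuracy front is weighted-sum scalarization: combine logistic loss with a squared demographic-parity penalty and sweep the weight `lam` (a simplification of the paper's stochastic multi-objective formulation):

```python
# Sketch: trace a fairness-accuracy Pareto front by weighted-sum
# scalarization of logistic loss and a squared demographic-parity penalty.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(7)
n, d = 2000, 3
X, s = rng.normal(size=(n, d)), rng.integers(0, 2, n)
y = (X[:, 0] + 0.8 * s + 0.3 * rng.normal(size=n) > 0.4).astype(float)

for lam in (0.0, 1.0, 10.0):
    w = np.zeros(d)
    for _ in range(500):                       # plain gradient descent
        p = sigmoid(X @ w)
        gap = p[s == 1].mean() - p[s == 0].mean()
        grad_loss = X.T @ (p - y) / n          # gradient of logistic loss
        dgap = (X[s == 1].T @ (p * (1 - p))[s == 1] / (s == 1).sum()
                - X[s == 0].T @ (p * (1 - p))[s == 0] / (s == 0).sum())
        w -= 0.5 * (grad_loss + lam * 2 * gap * dgap)
    acc = ((p > 0.5) == y).mean()
    print(f"lam={lam:4}: accuracy={acc:.3f}, parity gap={gap:.3f}")
```

Larger `lam` values shrink the parity gap at some cost in accuracy, sweeping out one point of the trade-off curve each.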
arXiv Detail & Related papers (2020-08-03T18:51:24Z)
- Bias no more: high-probability data-dependent regret bounds for adversarial bandits and MDPs [48.44657553192801]
We develop a new approach to obtaining high probability regret bounds for online learning with bandit feedback against an adaptive adversary.
Our approach relies on a simple increasing learning rate schedule, together with the help of logarithmically homogeneous self-concordant barriers and a strengthened Freedman's inequality.
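For context, a standard statement of Freedman's inequality for martingale difference sequences, the classical bound that the strengthened version builds on:

```latex
% Freedman's inequality (classical form): let X_1, ..., X_T be a martingale
% difference sequence with |X_t| <= b, and predictable variance
% V = sum_t E[X_t^2 | F_{t-1}]. Then for any delta in (0,1) and fixed V',
\[
  \Pr\!\left[\sum_{t=1}^{T} X_t \ge \sqrt{2 V' \ln(1/\delta)}
             + \tfrac{2b}{3}\ln(1/\delta)
             \;\text{ and }\; V \le V'\right] \le \delta .
\]
```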
arXiv Detail & Related papers (2020-06-14T22:09:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.