Fair Algorithm Design: Fair and Efficacious Machine Scheduling
- URL: http://arxiv.org/abs/2204.06438v2
- Date: Sun, 9 Jul 2023 16:16:43 GMT
- Title: Fair Algorithm Design: Fair and Efficacious Machine Scheduling
- Authors: April Niu, Agnes Totschnig, Adrian Vetta
- Abstract summary: There is often a dichotomy between fairness and efficacy: fair algorithms may proffer low social welfare solutions whereas welfare optimizing algorithms may be very unfair.
In this paper, we prove that this dichotomy can be overcome if we allow for a negligible amount of bias.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by a plethora of practical examples where bias is induced by
automated-decision making algorithms, there has been strong recent interest in
the design of fair algorithms. However, there is often a dichotomy between
fairness and efficacy: fair algorithms may proffer low social welfare solutions
whereas welfare optimizing algorithms may be very unfair. This issue is
exemplified in the machine scheduling problem where, for $n$ jobs, the social
welfare of any fair solution may be a factor $\Omega(n)$ worse than the optimal
welfare. In this paper, we prove that this dichotomy between fairness and
efficacy can be overcome if we allow for a negligible amount of bias: there
exist algorithms that are both "almost perfectly fair" and have a constant
factor efficacy ratio, that is, are guaranteed to output solutions that have
social welfare within a constant factor of optimal welfare. Specifically, for
any $\epsilon>0$, there exist mechanisms with efficacy ratio
$\Theta(\frac{1}{\epsilon})$ and where no agent is more than an $\epsilon$
fraction worse off than they are in the fairest possible solution (given by an
algorithm that does not use personal or type data). Moreover, these bicriteria
guarantees are tight and apply to both the single machine case and the multiple
machine case. The key to our results is the use of Pareto scheduling
mechanisms. These mechanisms, by the judicious use of personal or type data,
are able to exploit Pareto improvements that benefit every individual; such
Pareto improvements would typically be forbidden by fair scheduling algorithms
designed to satisfy standard statistical measures of group fairness. We
anticipate that this paradigm, the judicious use of personal data by a fair
algorithm to greatly improve performance at the cost of negligible bias, will
have wider application.
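To make the bicriteria trade-off concrete, here is a minimal sketch on a single machine. It is not the paper's Pareto scheduling mechanism: it simply mixes the fairest baseline (a uniformly random order, which uses no job data) with the welfare-optimal shortest-processing-time order, and the mixing weight `eps` is an illustrative stand-in for the paper's $\epsilon$.

```python
# A minimal sketch of the fairness/efficacy trade-off on a single machine.
# This is NOT the paper's Pareto scheduling mechanism: it only mixes the
# fairest baseline (a uniformly random order, which ignores job data) with
# the welfare-optimal shortest-processing-time (SPT) order.

def expected_completions_random(p):
    """Expected completion time of each job under a uniformly random order."""
    total = sum(p)
    # E[C_j] = p_j + (sum of the other jobs' processing times) / 2
    return [pj + (total - pj) / 2 for pj in p]

def completions_spt(p):
    """Completion times when jobs run in shortest-processing-time order."""
    done, comp = 0.0, [0.0] * len(p)
    for j in sorted(range(len(p)), key=lambda j: p[j]):
        done += p[j]
        comp[j] = done
    return comp

def mixture(p, eps):
    """Run a uniformly random order with probability eps, SPT otherwise."""
    fair, spt = expected_completions_random(p), completions_spt(p)
    return [eps * f + (1 - eps) * s for f, s in zip(fair, spt)]

p = [1] * 9 + [100]          # nine short jobs, one long job
fair = expected_completions_random(p)
for eps in (1.0, 0.1, 0.0):  # 1.0 = fairest, 0.0 = pure SPT
    mix = mixture(p, eps)
    worst = max(m / f for m, f in zip(mix, fair))
    print(f"eps={eps:>4}: total cost={sum(mix):7.1f}, "
          f"worst per-job ratio vs fairest={worst:.3f}")
```

On this toy instance, eps = 0.1 cuts total expected completion time by roughly a factor of three relative to the fairest order, while leaving no job more than about 4% worse off than under that order.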
Related papers
- Perturb-and-Project: Differentially Private Similarities and Marginals [73.98880839337873]
We revisit the input perturbations framework for differential privacy where noise is added to the input $A \in \mathcal{S}$.
We first design novel efficient algorithms to privately release pair-wise cosine similarities.
We derive a novel algorithm to compute $k$-way marginal queries over $n$ features.
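A minimal sketch of the input-perturbation idea for pair-wise cosine similarities, assuming a Gaussian-noise-then-project baseline; this is not the paper's calibrated algorithm, and `sigma` below is illustrative rather than a certified privacy parameter.

```python
# Minimal input-perturbation sketch for private pair-wise cosine
# similarities: add noise to the input, project back onto the feasible
# set, then release. A generic baseline in the spirit of the framework,
# *not* the Perturb-and-Project algorithm; sigma is NOT calibrated to
# any (epsilon, delta) guarantee.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 16))                   # 5 sensitive input rows
A /= np.linalg.norm(A, axis=1, keepdims=True)  # each row has unit norm

sigma = 0.1                                    # illustrative noise scale
A_noisy = A + rng.normal(scale=sigma, size=A.shape)

# "Project": renormalize so released rows lie on the unit sphere again.
A_noisy /= np.linalg.norm(A_noisy, axis=1, keepdims=True)

private_sims = A_noisy @ A_noisy.T             # noisy pair-wise cosines
true_sims = A @ A.T
print("max absolute error:", np.abs(private_sims - true_sims).max())
```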
arXiv Detail & Related papers (2024-06-07T12:07:16Z)
- Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games [66.2085181793014]
We show that a model-free stage-based Q-learning algorithm can enjoy the same optimality in the $H$ dependence as model-based algorithms.
Our algorithm features a key novel design of updating the reference value functions as the pair of optimistic and pessimistic value functions.
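A toy sketch of the highlighted design choice, keeping a pair of optimistic and pessimistic value estimates that bracket the truth; the bandit-style update below is illustrative bookkeeping, not the paper's stage-based Q-learning algorithm or its bonus terms.

```python
# Toy sketch of maintaining a *pair* of optimistic and pessimistic value
# estimates that bracket the true value. Illustrative only; not the
# paper's stage-based Q-learning algorithm.

import numpy as np

rng = np.random.default_rng(1)
n_actions, T = 3, 2000
true_means = np.array([0.2, 0.5, 0.8])

counts = np.zeros(n_actions)
sums = np.zeros(n_actions)
q_up = np.ones(n_actions)    # optimistic estimates (start at max reward)
q_low = np.zeros(n_actions)  # pessimistic estimates (start at min reward)

for t in range(T):
    a = int(np.argmax(q_up))                 # act greedily on the optimist
    r = float(rng.random() < true_means[a])  # Bernoulli reward
    counts[a] += 1
    sums[a] += r
    bonus = np.sqrt(2 * np.log(T) / counts[a])
    q_up[a] = min(1.0, sums[a] / counts[a] + bonus)
    q_low[a] = max(0.0, sums[a] / counts[a] - bonus)

# The gap q_up - q_low certifies how settled each estimate is.
print("optimistic: ", q_up.round(2))
print("pessimistic:", q_low.round(2))
```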
arXiv Detail & Related papers (2023-08-17T08:34:58Z)
- Designing Equitable Algorithms [1.9006392177894293]
Predictive algorithms are now used to help distribute a large share of our society's resources and sanctions.
These algorithms can improve the efficiency and equity of decision-making.
But they could entrench and exacerbate disparities, particularly along racial, ethnic, and gender lines.
arXiv Detail & Related papers (2023-02-17T22:00:44Z)
- Scalable Differentially Private Clustering via Hierarchically Separated Trees [82.69664595378869]
We show that our method computes a solution with cost at most $O(d^{3/2}\log n)\cdot OPT + O(k d^2 \log^2 n / \epsilon^2)$, where $\epsilon$ is the privacy guarantee.
Although the worst-case guarantee is worse than that of state-of-the-art private clustering methods, the algorithm we propose is practical.
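For intuition, a crude sketch of tree-structured private clustering: bucket points into one level of a spatial grid, add Laplace noise to the counts, and return the heaviest noisy cells as centers. The paper's HST-based algorithm is substantially more refined; everything below (grid resolution, the data, `eps` as the noisy-histogram privacy parameter) is an illustrative assumption.

```python
# Crude noisy-histogram sketch of tree-structured private clustering:
# one level of a spatial hierarchy, Laplace noise on cell counts, and the
# k heaviest noisy cells as centers. Not the paper's HST algorithm.

import numpy as np

def private_centers(points, k, eps, cells=8):
    rng = np.random.default_rng(2)
    counts = np.zeros((cells, cells))
    idx = np.clip((points * cells).astype(int), 0, cells - 1)
    for i, j in idx:
        counts[i, j] += 1
    noisy = counts + rng.laplace(scale=1.0 / eps, size=counts.shape)
    flat = np.argsort(noisy, axis=None)[-k:]           # k heaviest cells
    ii, jj = np.unravel_index(flat, noisy.shape)
    return (np.stack([ii, jj], axis=1) + 0.5) / cells  # cell midpoints

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0.25, 0.05, (100, 2)),
                       rng.normal(0.75, 0.05, (100, 2))])
print(private_centers(np.clip(data, 0, 1), k=2, eps=1.0))
```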
arXiv Detail & Related papers (2022-06-17T09:24:41Z)
- FAIRLEARN: Configurable and Interpretable Algorithmic Fairness [1.2183405753834557]
There is a need to mitigate any bias arising from either training samples or implicit assumptions made about the data samples.
Many approaches have been proposed to make learning algorithms fair by detecting and mitigating bias in different stages of optimization.
We propose the FAIRLEARN procedure that produces a fair algorithm by incorporating user constraints into the optimization procedure.
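A minimal sketch of the general recipe, folding a user-specified fairness constraint into the training objective as a penalty; this is not the FAIRLEARN procedure itself, and the demographic-parity penalty and weight `lam` are illustrative choices.

```python
# Sketch of incorporating a user fairness constraint into optimization:
# logistic regression with a demographic-parity penalty. Illustrative,
# not the FAIRLEARN procedure.

import numpy as np

rng = np.random.default_rng(4)
n, d = 400, 5
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, size=n)              # sensitive attribute
y = (X[:, 0] + 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(float)

w, lam, lr = np.zeros(d), 5.0, 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))                # predicted probabilities
    grad = X.T @ (p - y) / n                    # logistic-loss gradient
    gap = p[group == 1].mean() - p[group == 0].mean()
    s = p * (1 - p)                             # sigmoid derivative
    dgap = (X[group == 1] * s[group == 1][:, None]).mean(0) \
         - (X[group == 0] * s[group == 0][:, None]).mean(0)
    # Penalty lam * gap**2 steers training toward demographic parity.
    w -= lr * (grad + lam * 2 * gap * dgap)

p = 1 / (1 + np.exp(-X @ w))
print("final parity gap:", abs(p[group == 1].mean() - p[group == 0].mean()))
```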
arXiv Detail & Related papers (2021-11-17T03:07:18Z)
- Inducing Equilibria via Incentives: Simultaneous Design-and-Play Finds Global Optima [114.31577038081026]
We propose an efficient method that tackles the designer's and agents' problems simultaneously in a single loop.
Although the designer does not solve the equilibrium problem repeatedly, it can anticipate the overall influence of the incentives on the agents.
We prove that the algorithm converges to the global optima at a sublinear rate for a broad class of games.
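A toy single-loop sketch: the designer updates the incentive while the agent simultaneously best-responds by gradient steps, rather than solving the agents' equilibrium to convergence at each designer iteration. The quadratic game below is an illustrative stand-in, not the class of games the paper analyzes.

```python
# Toy single-loop design-and-play: designer and agent update in the same
# loop, instead of re-solving the agent's equilibrium per designer step.
# The quadratic game is illustrative only.

target = 3.0            # outcome the designer wants the agent to choose
theta, x = 0.0, 0.0     # incentive and agent action
lr_design, lr_play = 0.05, 0.2

for t in range(500):
    # Agent: one gradient step on its own utility -(x - theta)^2.
    x -= lr_play * 2 * (x - theta)
    # Designer: one gradient step on welfare -(x - target)^2, anticipating
    # that a larger incentive pulls the agent's action upward.
    theta -= lr_design * 2 * (x - target)

print(f"incentive={theta:.3f}, agent action={x:.3f}")  # both approach 3.0
```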
arXiv Detail & Related papers (2021-10-04T06:53:59Z)
- An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning [5.634825161148484]
We define malfare, measuring overall societal harm (rather than wellbeing).
We propose an equivalently axiomatically justified alternative, and study the resulting computational and statistical learning questions.
We show broad conditions under which, with appropriate modifications, many standard PAC-learners may be converted to fair-PAC learners.
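A sketch of a power-mean style malfare aggregator over per-group risks, the kind of harm-centric objective the paper axiomatizes (the exact weighted definition is in the paper; the form below is an assumption for illustration). p = 1 recovers average risk, and larger p weights the worst-off group more heavily.

```python
# Power-mean malfare over per-group risks (assumed illustrative form;
# see the paper for the exact weighted definition).

import numpy as np

def malfare(group_risks, p, weights=None):
    r = np.asarray(group_risks, dtype=float)
    w = np.full(len(r), 1 / len(r)) if weights is None else np.asarray(weights)
    if np.isinf(p):
        return r.max()              # egalitarian limit: worst-group risk
    return (w @ r**p) ** (1 / p)

risks = [0.05, 0.10, 0.40]          # per-group losses of some model
for p in (1, 2, 4, np.inf):
    print(f"p={p}: malfare={malfare(risks, p):.3f}")
```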
arXiv Detail & Related papers (2021-04-29T17:18:17Z)
- Individually Fair Gradient Boosting [86.1984206610373]
We consider the task of enforcing individual fairness in gradient boosting.
We show that our algorithm converges globally and generalizes.
We also demonstrate the efficacy of our algorithm on three ML problems susceptible to algorithmic bias.
arXiv Detail & Related papers (2021-03-31T03:06:57Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
This study is, to the authors' knowledge, the most comprehensive of its kind.
It considers the fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- The Fairness-Accuracy Pareto Front [19.556904444436523]
Algorithmic fairness seeks to identify and correct sources of bias in machine learning algorithms.
We provide formal tools for reconciling this fundamental tension in algorithm fairness.
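A generic sketch of tracing an empirical fairness-accuracy Pareto front by sweeping group-dependent decision thresholds and keeping the non-dominated (accuracy, parity-gap) pairs; this construction is for illustration and is not the paper's formal characterization of the front.

```python
# Empirical fairness-accuracy Pareto front via a threshold sweep.
# Illustrative construction, not the paper's formal tools.

import numpy as np

rng = np.random.default_rng(5)
n = 1000
g = rng.integers(0, 2, n)                      # group membership
score = rng.beta(2 + 2 * g, 2, n)              # group-skewed model scores
y = (rng.random(n) < score).astype(int)        # labels consistent with scores

points = []
for t0 in np.linspace(0.1, 0.9, 17):           # threshold for group 0
    for t1 in np.linspace(0.1, 0.9, 17):       # threshold for group 1
        pred = np.where(g == 0, score > t0, score > t1)
        acc = (pred == y).mean()
        gap = abs(pred[g == 1].mean() - pred[g == 0].mean())
        points.append((acc, gap))

# Keep points with no other point that is at least as accurate *and* fairer.
front = [p for p in points
         if not any(q[0] >= p[0] and q[1] <= p[1] and q != p for q in points)]
print(f"{len(front)} Pareto-optimal operating points out of {len(points)}")
```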
arXiv Detail & Related papers (2020-08-25T03:32:15Z)
- From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions can be automated efficiently. Using models with more than $8000$ variables, checking completes in a matter of seconds, with MaxSAT outperforming ILP in many cases.
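A brute-force sketch of a simplified, but-for style actual-causality check on a tiny binary model, the kind of query the paper instead encodes as MaxSAT/ILP to scale to thousands of variables; the model and the simplified check are illustrative.

```python
# Brute-force, but-for style actual-causality check on a tiny binary
# model. Illustrative; the paper encodes such checks as MaxSAT/ILP.

def forest_fire(lightning, match):
    """Structural equation: fire iff lightning strikes or a match is dropped."""
    return int(lightning or match)

fire = forest_fire(lightning=1, match=1)  # observed context: both happen

# Plain but-for causality fails (the match alone starts the fire), but
# under the contingency match=0, flipping lightning flips the outcome,
# so lightning=1 counts as a cause with that witness.
for witness_match in (0, 1):
    baseline = forest_fire(lightning=1, match=witness_match)
    flipped = forest_fire(lightning=0, match=witness_match)
    if baseline == fire and flipped != fire:
        print(f"lightning=1 is a cause under contingency match={witness_match}")
```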
arXiv Detail & Related papers (2020-06-05T10:56:52Z)