The Fairness-Accuracy Pareto Front
- URL: http://arxiv.org/abs/2008.10797v2
- Date: Thu, 18 Nov 2021 22:58:54 GMT
- Title: The Fairness-Accuracy Pareto Front
- Authors: Susan Wei, Marc Niethammer
- Abstract summary: Algorithmic fairness seeks to identify and correct sources of bias in machine learning algorithms.
We provide formal tools for reconciling this fundamental tension in algorithmic fairness.
- Score: 19.556904444436523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic fairness seeks to identify and correct sources of bias in machine
learning algorithms. Confoundingly, ensuring fairness often comes at the cost
of accuracy. We provide formal tools in this work for reconciling this
fundamental tension in algorithmic fairness. Specifically, we put to use the
concept of Pareto optimality from multi-objective optimization and seek the
fairness-accuracy Pareto front of a neural network classifier. We demonstrate
that many existing algorithmic fairness methods are performing the so-called
linear scalarization scheme which has severe limitations in recovering Pareto
optimal solutions. We instead apply the Chebyshev scalarization scheme, which
is provably superior at recovering Pareto optimal solutions and no more
computationally burdensome than the linear scheme.
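The limitation of linear scalarization can be made concrete with a small numerical sketch. The objective values below are hypothetical toy numbers, not the paper's actual fairness and accuracy losses: a Pareto-optimal point lying on a non-convex (unsupported) region of the front is never selected by any linear weighting, while the Chebyshev (weighted distance-to-ideal) scalarization recovers it.

```python
import numpy as np

# Hypothetical two-objective problem: each candidate has an
# (accuracy-loss, fairness-loss) pair, smaller being better.
# The middle point is Pareto optimal but lies on a non-convex
# (unsupported) region of the front: 0.6 + 0.6 > 1.
candidates = np.array([
    [0.0, 1.0],   # best accuracy, worst fairness
    [0.6, 0.6],   # balanced, on the non-convex part of the front
    [1.0, 0.0],   # best fairness, worst accuracy
])

def linear_scalarization(points, w):
    """Index of the candidate minimizing w*f1 + (1-w)*f2."""
    scores = w * points[:, 0] + (1 - w) * points[:, 1]
    return int(np.argmin(scores))

def chebyshev_scalarization(points, w, ideal=np.zeros(2)):
    """Index of the candidate minimizing max(w*(f1-z1), (1-w)*(f2-z2))."""
    scores = np.maximum(w * (points[:, 0] - ideal[0]),
                        (1 - w) * (points[:, 1] - ideal[1]))
    return int(np.argmin(scores))

# Sweep the trade-off weight and record which candidates are ever selected.
weights = np.linspace(0.05, 0.95, 19)
linear_hits = {linear_scalarization(candidates, w) for w in weights}
cheby_hits = {chebyshev_scalarization(candidates, w) for w in weights}

print(linear_hits)  # {0, 2}: the balanced point is never selected
print(cheby_hits)   # {0, 1, 2}: all three Pareto points are recovered
```

The balanced candidate sits above the line segment joining the two extremes, so no convex combination of the objectives ever prefers it; this is precisely the failure mode of linear scalarization that the Chebyshev scheme avoids.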
Related papers
- Towards Fair Class-wise Robustness: Class Optimal Distribution Adversarial Training [1.5565181723989001]
Adversarial training has proven to be a highly effective method for improving the robustness of deep neural networks against adversarial attacks.
However, adversarial training has been observed to exhibit a limitation in robust fairness: a significant disparity in robustness across different classes.
Recent efforts to mitigate this problem have turned to class-wise-weighted methods.
This paper proposes a novel min-max training framework, Class Optimal Distribution Adversarial Training.
arXiv Detail & Related papers (2025-01-08T14:19:03Z) - A Quantum Genetic Algorithm Framework for the MaxCut Problem [49.59986385400411]
The proposed method introduces a Quantum Genetic Algorithm (QGA) using a Grover-based evolutionary framework and divide-and-conquer principles.
On complete graphs, the proposed method consistently achieves the true optimal MaxCut values, outperforming the Semidefinite Programming (SDP) approach.
On Erdős-Rényi random graphs, the QGA demonstrates competitive performance, achieving median solutions within 92-96% of the SDP results.
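For reference, the MaxCut objective the QGA targets is simple to state and, for tiny graphs, solvable exactly by enumeration. The sketch below is a plain brute-force baseline for the complete graph K4, not the paper's quantum method.

```python
from itertools import product

def cut_value(edges, assignment):
    """Count edges whose endpoints fall on opposite sides of the partition."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

def brute_force_maxcut(n, edges):
    """Exact MaxCut by enumerating all 2^n vertex partitions (small n only)."""
    best = max(product([0, 1], repeat=n), key=lambda a: cut_value(edges, a))
    return cut_value(edges, best), best

# Complete graph K4: an optimal cut splits the vertices 2/2, cutting 4 edges.
k4_edges = [(u, v) for u in range(4) for v in range(u + 1, 4)]
value, partition = brute_force_maxcut(4, k4_edges)
print(value)  # 4
```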
arXiv Detail & Related papers (2025-01-02T05:06:16Z) - Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium [0.3350491650545292]
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes for both accuracy and fairness objectives.
arXiv Detail & Related papers (2024-10-21T18:53:39Z) - Efficient Fairness-Performance Pareto Front Computation [51.558848491038916]
We show that optimal fair representations possess several useful structural properties.
We then show that these approximation problems can be solved efficiently via concave programming methods.
arXiv Detail & Related papers (2024-09-26T08:46:48Z) - Overcoming Brittleness in Pareto-Optimal Learning-Augmented Algorithms [6.131022957085439]
We propose a new framework in which the smoothness in the performance of the algorithm is enforced by means of a user-specified profile.
We apply this new approach to a well-studied online problem, namely the one-way trading problem.
arXiv Detail & Related papers (2024-08-07T23:15:21Z) - Inference for an Algorithmic Fairness-Accuracy Frontier [0.9147443443422864]
We provide a consistent estimator for a theoretical fairness-accuracy frontier put forward by Liang, Lu and Mu (2023).
We propose inference methods to test hypotheses that have received much attention in the fairness literature.
We show that the estimated support function converges to a tight process as the sample size increases.
arXiv Detail & Related papers (2024-02-14T00:56:09Z) - Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm that preserves performance while generalizing better is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
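Reweighing in general assigns per-sample weights so that under- or over-represented groups contribute comparably to the training loss. The sketch below is a generic static group-balancing illustration, not the paper's adaptive priority scheme.

```python
import numpy as np

def group_balanced_weights(groups):
    """Hypothetical static reweighing: give each (protected) group equal
    total weight, so minority groups are not drowned out during training."""
    groups = np.asarray(groups)
    uniq, counts = np.unique(groups, return_counts=True)
    # Each group gets total weight 1/num_groups, split evenly among members.
    per_group = {g: 1.0 / (len(uniq) * c) for g, c in zip(uniq, counts)}
    return np.array([per_group[g] for g in groups])

# Group "b" has one member but receives the same total weight as group "a".
w = group_balanced_weights(["a", "a", "a", "b"])
print(w)        # members of "a" share 0.5; the lone "b" sample gets 0.5
print(w.sum())  # weights sum to 1.0
```

Adaptive schemes like the one proposed in the paper instead update such weights during training, e.g. in response to observed distribution shifts between training and test data.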
arXiv Detail & Related papers (2023-09-15T13:04:55Z) - Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - Pareto Efficient Fairness in Supervised Learning: From Extraction to Tracing [26.704236797908177]
Algorithmic decision-making systems are becoming more pervasive.
Due to the inherent trade-off between fairness measures and accuracy, it is desirable to control the trade-off between overall loss and fairness criteria explicitly.
We propose a notion that is fairness-definition-agnostic, meaning that any well-defined notion of fairness can be reduced to the Pareto Efficient Fairness (PEF) notion.
arXiv Detail & Related papers (2021-04-04T15:49:35Z) - Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.