Inference for an Algorithmic Fairness-Accuracy Frontier
- URL: http://arxiv.org/abs/2402.08879v1
- Date: Wed, 14 Feb 2024 00:56:09 GMT
- Title: Inference for an Algorithmic Fairness-Accuracy Frontier
- Authors: Yiqi Liu and Francesca Molinari
- Abstract summary: We provide a consistent estimator for a theoretical fairness-accuracy frontier put forward by Liang, Lu and Mu (2023).
We propose inference methods to test hypotheses that have received much attention in the fairness literature.
We show that the estimated support function converges to a tight Gaussian process as the sample size increases.
- Score: 0.9147443443422864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision-making processes increasingly rely on the use of algorithms. Yet,
algorithms' predictive ability frequently exhibits systematic variation across
subgroups of the population. While both fairness and accuracy are desirable
properties of an algorithm, they often come at the cost of one another. What
should a fairness-minded policymaker do then, when confronted with finite data?
In this paper, we provide a consistent estimator for a theoretical
fairness-accuracy frontier put forward by Liang, Lu and Mu (2023) and propose
inference methods to test hypotheses that have received much attention in the
fairness literature, such as (i) whether fully excluding a covariate from use
in training the algorithm is optimal and (ii) whether there are less
discriminatory alternatives to an existing algorithm. We also provide an
estimator for the distance between a given algorithm and the fairest point on
the frontier, and characterize its asymptotic distribution. We leverage the
fact that the fairness-accuracy frontier is part of the boundary of a convex
set that can be fully represented by its support function. We show that the
estimated support function converges to a tight Gaussian process as the sample
size increases, and then express policy-relevant hypotheses as restrictions on
the support function to construct valid test statistics.
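As a rough illustration of the support-function machinery described above (not the authors' estimator), the sketch below computes the empirical support function of a convex set spanned by sampled points and checks the linear inequalities that characterize membership of a candidate point; the sample, direction grid, and point `a` are invented for the example.

```python
import numpy as np

def support_function(points, directions):
    """Empirical support function h(q) = max over sampled points x of <q, x>."""
    return (directions @ points.T).max(axis=1)

rng = np.random.default_rng(0)
# Hypothetical sample of attainable (group error, overall error) pairs.
points = rng.uniform(0.0, 1.0, size=(500, 2))

# Evaluate the support function on a grid of directions on the unit circle.
angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)
h_hat = support_function(points, directions)

# A hypothesis such as "point a, achieved by an existing algorithm, lies in the
# estimated set" corresponds to <q, a> <= h(q) holding for every direction q.
a = np.array([0.4, 0.6])
violation = float(np.max(directions @ a - h_hat))  # positive => a lies outside
print(f"max violation of the support-function inequalities: {violation:.3f}")
```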
Related papers
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
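A minimal sketch in the spirit of this entry, assuming a well-specified Gaussian model with known variance: a running likelihood ratio with a plug-in (running-mean) numerator yields an anytime-valid confidence sequence via Ville's inequality. The construction and all parameter choices are generic, not taken from the paper.

```python
import numpy as np

def lr_confidence_sequence(xs, theta_grid, sigma=1.0, alpha=0.05):
    """Anytime-valid confidence sequence for a Gaussian mean from a running
    likelihood ratio (plug-in predictions in the numerator, Ville's inequality)."""
    log_num = 0.0                        # log-likelihood of the plug-in predictions
    log_den = np.zeros_like(theta_grid)  # log-likelihood under each candidate theta
    theta_hat, n = 0.0, 0                # running mean used as the plug-in estimate
    sets = []
    for x in xs:
        log_num += -0.5 * ((x - theta_hat) / sigma) ** 2
        log_den += -0.5 * ((x - theta_grid) / sigma) ** 2
        n += 1
        theta_hat += (x - theta_hat) / n
        # Keep every theta whose likelihood ratio stays below 1/alpha.
        sets.append(theta_grid[(log_num - log_den) < np.log(1.0 / alpha)])
    return sets

rng = np.random.default_rng(1)
xs = rng.normal(loc=0.3, scale=1.0, size=200)
grid = np.linspace(-1.0, 1.5, 501)
sets = lr_confidence_sequence(xs, grid)
print("final confidence interval approx.:", sets[-1].min(), sets[-1].max())
```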
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Online POMDP Planning with Anytime Deterministic Guarantees [11.157761902108692]
Planning under uncertainty can be mathematically formalized using partially observable Markov decision processes (POMDPs).
Finding an optimal plan for POMDPs can be computationally expensive and is feasible only for small tasks.
We derive a deterministic relationship between a simplified solution that is easier to obtain and the theoretically optimal one.
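For orientation only, here is the standard discrete-POMDP belief update that such planners build on; the toy transition and observation models are invented, and the paper's simplified-solution guarantees are not implemented.

```python
import numpy as np

def belief_update(belief, action, observation, T, Z):
    """One Bayes filter step for a discrete POMDP.
    T[a, s, s'] = P(s' | s, a); Z[a, s', o] = P(o | s', a)."""
    predicted = belief @ T[action]               # P(s' | history, action)
    updated = predicted * Z[action][:, observation]
    return updated / updated.sum()

# Tiny hypothetical 2-state, 2-action, 2-observation POMDP.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
Z = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.5, 0.5]]])
b = np.array([0.5, 0.5])
b = belief_update(b, action=0, observation=1, T=T, Z=Z)
print("posterior belief:", b)
```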
arXiv Detail & Related papers (2023-10-03T04:40:38Z)
- Provably Efficient Learning in Partially Observable Contextual Bandit [4.910658441596583]
We show how causal bounds can be applied to improving classical bandit algorithms.
This research has the potential to enhance the performance of contextual bandit agents in real-world applications.
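One hedged illustration of how such bounds can enter a classical algorithm: UCB1 with each arm's optimistic index clipped to an externally supplied causal upper bound. The bound values and reward model below are hypothetical, and this is not claimed to be the paper's algorithm.

```python
import numpy as np

def ucb_with_causal_bounds(reward_fn, causal_upper, n_rounds=2000, seed=0):
    """UCB1 where each arm's optimistic index is clipped to a causal upper bound
    on its mean reward, so arms ruled out by the bounds are pruned quickly."""
    rng = np.random.default_rng(seed)
    k = len(causal_upper)
    counts, means = np.zeros(k), np.zeros(k)
    for t in range(1, n_rounds + 1):
        if t <= k:
            arm = t - 1                                   # play every arm once
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(np.minimum(means + bonus, causal_upper)))
        r = reward_fn(arm, rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
    return counts

# Hypothetical 3-armed Bernoulli bandit with upper bounds from a prior causal study.
true_means = np.array([0.4, 0.6, 0.5])
causal_upper = np.array([0.45, 1.0, 0.55])   # rules out arms 0 and 2 fairly fast
pulls = ucb_with_causal_bounds(lambda a, g: g.binomial(1, true_means[a]), causal_upper)
print("pull counts per arm:", pulls)
```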
arXiv Detail & Related papers (2023-08-07T13:24:50Z)
- Online Heavy-tailed Change-point detection [6.7643284029102295]
We present an algorithm based on clipped Stochastic Gradient Descent (SGD) that works even if we only assume that the second moment of the data-generating process is bounded.
We derive guarantees on worst-case, finite-sample false-positive rate (FPR) over the family of all distributions with bounded second moment.
Our method is the first online change-point detection (OCPD) algorithm that guarantees finite-sample FPR, even if the data is high dimensional and the underlying distributions are heavy-tailed.
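A hedged sketch of the clipped-SGD idea: two robust mean trackers at different time scales, with a change declared when they disagree. The clipping level, learning rates, threshold, and detection rule are illustrative choices, not the paper's procedure.

```python
import numpy as np

def clipped_sgd_change_detector(stream, clip=2.0, lr=0.05, threshold=1.0, warmup=100):
    """Online change-point heuristic: track the mean with clipped SGD (robust to
    heavy tails) at a slow and a fast time scale; flag a change when they disagree."""
    slow = fast = 0.0
    for t, x in enumerate(stream):
        slow += (lr / 10.0) * np.clip(x - slow, -clip, clip)   # long-run estimate
        fast += lr * np.clip(x - fast, -clip, clip)            # recent estimate
        if t > warmup and abs(fast - slow) > threshold:
            return t                                           # declared change point
    return None

rng = np.random.default_rng(2)
# Heavy-tailed stream (Student-t, 3 dof) whose mean jumps from 0 to 3 at t = 500.
stream = np.concatenate([rng.standard_t(3, 500), 3.0 + rng.standard_t(3, 500)])
print("change detected at t =", clipped_sgd_change_detector(stream))
```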
arXiv Detail & Related papers (2023-06-15T23:39:05Z)
- Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
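A heavily simplified step in this spirit, assuming a direct KKT solve for the SQP direction and a fixed (rather than adaptively selected) trust-region radius; the toy problem and noise model are invented.

```python
import numpy as np

def tr_stochastic_sqp_step(x, grad_fn, hess_fn, c_fn, jac_fn, radius):
    """One simplified trust-region stochastic SQP step for min f(x) s.t. c(x)=0:
    solve the KKT system for the SQP direction, then shrink it to the radius."""
    g, B = grad_fn(x), hess_fn(x)          # stochastic gradient, Hessian model
    c, J = c_fn(x), jac_fn(x)
    n, m = x.size, c.size
    K = np.block([[B, J.T], [J, np.zeros((m, m))]])   # KKT matrix
    dx = np.linalg.solve(K, np.concatenate([-g, -c]))[:n]
    step_norm = np.linalg.norm(dx)
    if step_norm > radius:                 # enforce the trust-region constraint
        dx *= radius / step_norm
    return x + dx

# Hypothetical toy problem: minimize ||x||^2 subject to x0 + x1 = 1, with
# Gaussian noise standing in for a stochastic gradient oracle.
rng = np.random.default_rng(3)
x = np.array([2.0, -1.0])
for _ in range(200):
    x = tr_stochastic_sqp_step(
        x,
        grad_fn=lambda z: 2.0 * z + 0.1 * rng.normal(size=2),
        hess_fn=lambda z: 2.0 * np.eye(2),
        c_fn=lambda z: np.array([z[0] + z[1] - 1.0]),
        jac_fn=lambda z: np.array([[1.0, 1.0]]),
        radius=0.5,
    )
print("approximate solution:", np.round(x, 2))   # expected near [0.5, 0.5]
```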
arXiv Detail & Related papers (2022-11-29T05:52:17Z)
- Structural Estimation of Markov Decision Processes in High-Dimensional State Space with Finite-Time Guarantees [39.287388288477096]
We consider the task of estimating a structural model of dynamic decisions by a human agent based upon the observable history of implemented actions and visited states.
This problem has an inherent nested structure: in the inner problem, an optimal policy for a given reward function is identified while in the outer problem, a measure of fit is maximized.
We propose a single-loop estimation algorithm with finite time guarantees that is equipped to deal with high-dimensional state spaces.
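A toy sketch of the single-loop idea: one soft Bellman backup (inner problem) interleaved with one approximate likelihood-gradient step on the reward (outer problem). The gradient below ignores the reward's effect on continuation values, and the MDP and demonstrations are invented, so this conveys only the structure, not the paper's estimator.

```python
import numpy as np

def single_loop_structural_estimation(demos, P, n_iters=3000, gamma=0.9,
                                      lr_q=0.5, lr_theta=0.1):
    """Single-loop sketch: interleave ONE soft Bellman backup of Q under the
    current reward with ONE approximate likelihood-gradient step on the reward,
    instead of solving the inner (policy) problem to completion each time."""
    S, A = P.shape[0], P.shape[1]           # P[s, a, s'] is the transition kernel
    theta = np.zeros((S, A))                # reward parameters to be estimated
    Q = np.zeros((S, A))
    for _ in range(n_iters):
        # Inner step: one damped soft Bellman backup under the current reward.
        V = np.log(np.exp(Q).sum(axis=1))
        Q = (1 - lr_q) * Q + lr_q * (theta + gamma * P @ V)
        # Outer step: gradient of the demonstrations' log-likelihood under the
        # softmax choice policy, ignoring theta's effect on continuation values.
        pi = np.exp(Q - Q.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)
        grad = np.zeros((S, A))
        for s, a in demos:
            grad[s] -= pi[s]
            grad[s, a] += 1.0
        theta += lr_theta * grad / len(demos)
    return theta, Q

# Hypothetical 2-state, 2-action MDP; the demonstrated agent always picks action 1.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.5, 0.5], [0.2, 0.8]]])
demos = [(0, 1), (1, 1)] * 100
theta, _ = single_loop_structural_estimation(demos, P)
print("estimated reward advantage of action 1:", np.round(theta[:, 1] - theta[:, 0], 2))
```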
arXiv Detail & Related papers (2022-10-04T00:11:38Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
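A small sketch of the underlying CNML computation for binary logistic regression: refit with the query point appended under each candidate label, then normalize the resulting label probabilities. The amortized (ACNML) variant would replace each refit with an optimization against an approximate posterior; the data and regularization here are invented.

```python
import numpy as np
from scipy.optimize import minimize

def cnml_predict(x_query, X, y, reg=1.0):
    """CNML for binary logistic regression: for each candidate label, refit the
    model with (x_query, label) appended, score that label, then normalize."""
    def nll(w, Xa, ya):
        z = Xa @ w
        return np.logaddexp(0.0, -(2 * ya - 1) * z).sum() + reg * w @ w
    scores = np.zeros(2)
    for label in (0, 1):
        Xa = np.vstack([X, x_query])
        ya = np.append(y, label)
        w_hat = minimize(nll, np.zeros(X.shape[1]), args=(Xa, ya)).x
        p1 = 1.0 / (1.0 + np.exp(-x_query @ w_hat))
        scores[label] = p1 if label == 1 else 1.0 - p1
    return scores / scores.sum()

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=100) > 0).astype(int)
print("in-distribution query     :", np.round(cnml_predict(np.array([2.0, 0.0]), X, y), 2))
print("out-of-distribution query :", np.round(cnml_predict(np.array([0.0, 8.0]), X, y), 2))
```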
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Large-Scale Methods for Distributionally Robust Optimization [53.98643772533416]
We prove that our algorithms require a number of gradient evaluations independent of the training set size and the number of parameters.
Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9--36 times more efficient than full-batch methods.
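A sketch of a mini-batch DRO-style update whose per-step cost depends on the batch size rather than the training-set size, here a CVaR-type estimator that averages gradients over the worst losses in the batch; this is a generic construction, not necessarily the paper's estimator.

```python
import numpy as np

def cvar_dro_minibatch_step(w, X, y, batch_size=64, alpha=0.1, lr=0.1, rng=None):
    """One mini-batch step for CVaR-style DRO on linear least squares: keep the
    worst alpha-fraction of batch losses and step on their average gradient.
    Per-step cost depends on the batch size, not on the full training-set size."""
    rng = rng if rng is not None else np.random.default_rng(0)
    idx = rng.choice(len(y), size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    residuals = Xb @ w - yb
    losses = 0.5 * residuals ** 2
    k = max(1, int(alpha * batch_size))
    worst = np.argsort(losses)[-k:]              # indices of the worst batch losses
    grad = Xb[worst].T @ residuals[worst] / k
    return w - lr * grad

rng = np.random.default_rng(5)
X = rng.normal(size=(10_000, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=10_000)
w = np.zeros(5)
for _ in range(500):
    w = cvar_dro_minibatch_step(w, X, y, rng=rng)
print("estimated weights:", np.round(w, 2))      # should land close to true_w
```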
arXiv Detail & Related papers (2020-10-12T17:41:44Z)
- A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms [67.67377846416106]
We present a distributional approach to theoretical analyses of reinforcement learning algorithms with constant step sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-Learning have update rules which are contractive in the space of distributions of functions.
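For reference, a tabular TD($\lambda$) implementation with the kind of constant-step-size update rule this analysis concerns; the chain MDP is a made-up example.

```python
import numpy as np

def td_lambda(episodes, n_states, alpha=0.1, gamma=0.95, lam=0.8):
    """Tabular TD(lambda) with accumulating eligibility traces and a constant step size."""
    V = np.zeros(n_states)
    for episode in episodes:                      # episode = [(s, r, s_next), ...]
        e = np.zeros(n_states)                    # eligibility trace
        for s, r, s_next in episode:
            delta = r + gamma * V[s_next] - V[s]  # TD error
            e *= gamma * lam
            e[s] += 1.0
            V += alpha * delta * e
    return V

# Hypothetical 3-state chain 0 -> 1 -> 2 (terminal), reward 1 on the final step.
episodes = [[(0, 0.0, 1), (1, 1.0, 2)]] * 200
print("value estimates:", np.round(td_lambda(episodes, n_states=3), 2))
```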
arXiv Detail & Related papers (2020-03-27T05:13:29Z)
- Scalable Approximate Inference and Some Applications [2.6541211006790983]
In this thesis, we propose a new framework for approximate inference.
The four algorithms we propose are motivated by recent computational progress on Stein's method.
Results on simulated and real datasets indicate the statistical efficiency and wide applicability of our algorithms.
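As a concrete example of Stein's-method-based approximate inference (a standard routine, not necessarily one of the thesis's four algorithms), here is a short Stein variational gradient descent loop with an RBF kernel and median bandwidth heuristic; the particle mean and standard deviation should roughly match the standard normal target.

```python
import numpy as np

def svgd(x, grad_log_p, n_iters=1000, step=0.2):
    """Stein variational gradient descent with an RBF kernel (median bandwidth)."""
    for _ in range(n_iters):
        diffs = x[:, None, :] - x[None, :, :]        # diffs[i, j] = x_i - x_j
        sq = (diffs ** 2).sum(-1)
        h = np.median(sq) / np.log(len(x) + 1.0)     # median bandwidth heuristic
        k = np.exp(-sq / h)                          # kernel matrix k(x_i, x_j)
        grad_k = -2.0 / h * diffs * k[:, :, None]    # gradient of k wrt x_i
        # Attractive term (kernel-smoothed score) plus repulsive term.
        phi = (k @ grad_log_p(x) + grad_k.sum(axis=0)) / len(x)
        x = x + step * phi
    return x

# Approximate a standard normal, starting from badly placed particles.
rng = np.random.default_rng(6)
particles = rng.uniform(5.0, 6.0, size=(50, 1))
out = svgd(particles, grad_log_p=lambda z: -z)       # score of N(0, 1)
print("particle mean / std:", round(float(out.mean()), 2), round(float(out.std()), 2))
```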
arXiv Detail & Related papers (2020-03-07T04:33:27Z)