Active Fairness Auditing
- URL: http://arxiv.org/abs/2206.08450v1
- Date: Thu, 16 Jun 2022 21:12:00 GMT
- Title: Active Fairness Auditing
- Authors: Tom Yan and Chicheng Zhang
- Abstract summary: We study query-based auditing algorithms that can estimate the demographic parity of ML models in a query-efficient manner.
We propose an optimal deterministic algorithm, as well as a practical randomized, oracle-efficient algorithm with comparable guarantees.
Our first exploration of active fairness estimation aims to put AI governance on firmer theoretical foundations.
- Score: 22.301071549943064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The fast-spreading adoption of machine learning (ML) by companies across
industries poses significant regulatory challenges. One such challenge is
scalability: how can regulatory bodies efficiently audit these ML models,
ensuring that they are fair? In this paper, we initiate the study of
query-based auditing algorithms that can estimate the demographic parity of ML
models in a query-efficient manner. We propose an optimal deterministic
algorithm, as well as a practical randomized, oracle-efficient algorithm with
comparable guarantees. Furthermore, we make inroads into understanding the
optimal query complexity of randomized active fairness estimation algorithms.
Our first exploration of active fairness estimation aims to put AI governance
on firmer theoretical foundations.
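Demographic parity compares a black-box model's positive-prediction rates across demographic groups, and the paper's auditing algorithms estimate this quantity with as few model queries as possible. As a point of reference only, the sketch below is a naive Monte Carlo baseline that estimates the demographic parity gap by querying the model on uniformly sampled audit points; the `model_query` interface, the two-group setup, and the query budget are illustrative assumptions, not the paper's optimal deterministic or oracle-efficient randomized algorithms.

```python
import random
from typing import Callable, List, Tuple

def estimate_dp_gap(
    model_query: Callable[[dict], int],   # black-box model: features -> {0, 1}
    audit_pool: List[Tuple[dict, int]],   # (features, group) pairs available to the auditor
    budget: int,                          # total number of model queries allowed
    seed: int = 0,
) -> float:
    """Naive Monte Carlo estimate of the demographic parity gap
    |P(h(X)=1 | A=0) - P(h(X)=1 | A=1)| using at most `budget` queries."""
    rng = random.Random(seed)
    sample = rng.sample(audit_pool, min(budget, len(audit_pool)))

    # Query the black-box model once per sampled point and tally positives per group.
    positives = {0: 0, 1: 0}
    counts = {0: 0, 1: 0}
    for features, group in sample:
        counts[group] += 1
        positives[group] += model_query(features)

    rates = [positives[g] / counts[g] if counts[g] else 0.0 for g in (0, 1)]
    return abs(rates[0] - rates[1])

# Example usage with a hypothetical threshold model on synthetic audit data.
if __name__ == "__main__":
    rng = random.Random(1)
    pool = [({"score": rng.random()}, rng.randint(0, 1)) for _ in range(10_000)]
    blackbox = lambda x: int(x["score"] > 0.5)
    print(f"Estimated DP gap: {estimate_dp_gap(blackbox, pool, budget=500):.3f}")
```

A Hoeffding-style argument puts such a baseline at roughly O(1/eps^2) queries per group for an eps-accurate estimate; the paper studies how much better query complexity can be achieved with an optimal deterministic auditor and a practical randomized, oracle-efficient one.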
Related papers
- EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs (a baseline exploration sketch appears after this list).
arXiv Detail & Related papers (2024-10-08T17:54:03Z)
- On Uncertainty Quantification for Near-Bayes Optimal Algorithms [2.622066970118316]
We show that it is possible to recover the Bayesian posterior defined by the task distribution, which is unknown but optimal in this setting, by building a martingale posterior using the algorithm.
Experiments based on a variety of non-NN and NN algorithms demonstrate the efficacy of our method.
arXiv Detail & Related papers (2024-03-28T12:42:25Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm that retains strong predictive performance and generalizes better is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- Measuring, Interpreting, and Improving Fairness of Algorithms using Causal Inference and Randomized Experiments [8.62694928567939]
We present an algorithm-agnostic framework (MIIF) to Measure, Interpret, and Improve the Fairness of an algorithmic decision.
We measure the algorithm's bias using randomized experiments, which enables the simultaneous measurement of disparate treatment, disparate impact, and economic value.
We also develop an explainable machine learning model which accurately interprets and distills the beliefs of a black-box algorithm.
arXiv Detail & Related papers (2023-09-04T19:45:18Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- A Strong Baseline for Batch Imitation Learning [25.392006064406967]
We provide an easy-to-implement, novel algorithm for imitation learning under a strict data paradigm.
This paradigm allows our algorithm to be used for environments in which safety or cost is of critical concern.
arXiv Detail & Related papers (2023-02-06T14:03:33Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- FAIRLEARN: Configurable and Interpretable Algorithmic Fairness [1.2183405753834557]
There is a need to mitigate any bias arising from either training samples or implicit assumptions made about the data samples.
Many approaches have been proposed to make learning algorithms fair by detecting and mitigating bias in different stages of optimization.
We propose the FAIRLEARN procedure that produces a fair algorithm by incorporating user constraints into the optimization procedure.
arXiv Detail & Related papers (2021-11-17T03:07:18Z)
- Unpacking the Black Box: Regulating Algorithmic Decisions [1.283555556182245]
We propose a model of oversight over 'black-box' algorithms used in high-stakes applications such as lending, medical testing, or hiring.
We show that allowing for complex algorithms can improve welfare, but the gains depend on how the regulator regulates them.
arXiv Detail & Related papers (2021-10-05T23:20:25Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)
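For the EVOLvE entry above, which measures LLMs against optimal exploration algorithms in stateless bandits, the following is a minimal sketch of one such classical baseline, UCB1, for a multi-armed bandit; the Gaussian reward arms, horizon, and helper names are illustrative assumptions and this is not the EVOLvE method itself.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Minimal UCB1: pull each arm once, then repeatedly pick the arm with the
    highest empirical mean plus an optimism bonus that shrinks with more pulls."""
    rng = random.Random(seed)
    n_arms = len(arm_means)
    pulls = [0] * n_arms
    totals = [0.0] * n_arms

    def draw(a):
        # Stochastic reward: Gaussian noise around the arm's true mean (assumed model).
        return rng.gauss(arm_means[a], 1.0)

    for t in range(1, horizon + 1):
        if t <= n_arms:
            a = t - 1                        # initialization: try every arm once
        else:                                # optimism in the face of uncertainty
            a = max(range(n_arms),
                    key=lambda i: totals[i] / pulls[i]
                    + math.sqrt(2 * math.log(t) / pulls[i]))
        totals[a] += draw(a)
        pulls[a] += 1
    return pulls  # how often each arm was explored/exploited

if __name__ == "__main__":
    # The best arm (mean 0.8) should accumulate most of the pulls over the horizon.
    print(ucb1([0.2, 0.5, 0.8], horizon=2000))
```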
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.