Unpacking the Black Box: Regulating Algorithmic Decisions
- URL: http://arxiv.org/abs/2110.03443v3
- Date: Fri, 31 May 2024 23:47:21 GMT
- Title: Unpacking the Black Box: Regulating Algorithmic Decisions
- Authors: Laura Blattner, Scott Nelson, Jann Spiess
- Abstract summary: We propose a model of oversight over 'black-box' algorithms used in high-stakes applications such as lending, medical testing, or hiring.
We show that allowing for complex algorithms can improve welfare, but the gains depend on how the regulator regulates them.
- Score: 1.283555556182245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: What should regulators of complex algorithms regulate? We propose a model of oversight over 'black-box' algorithms used in high-stakes applications such as lending, medical testing, or hiring. In our model, a regulator is limited in how much she can learn about a black-box model deployed by an agent with misaligned preferences. The regulator faces two choices: first, whether to allow for the use of complex algorithms; and second, which key properties of algorithms to regulate. We show that limiting agents to algorithms that are simple enough to be fully transparent is inefficient as long as the misalignment is limited and complex algorithms have sufficiently better performance than simple ones. Allowing for complex algorithms can improve welfare, but the gains depend on how the regulator regulates them. Regulation that focuses on the overall average behavior of algorithms, for example based on standard explainer tools, will generally be inefficient. Targeted regulation that focuses on the source of incentive misalignment, e.g., excess false positives or racial disparities, can provide second-best solutions. We provide empirical support for our theoretical findings using an application in consumer lending, where we document that complex models regulated based on context-specific explanation tools outperform simple, fully transparent models. This gain from complex models represents a Pareto improvement across our empirical applications that is preferred both by the lender and by the financial regulator.
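As a rough illustration of the empirical exercise described in the abstract, the sketch below compares a simple, fully transparent model (logistic regression) with a complex one (gradient boosting) under a targeted, regulator-style audit metric: the gap in false-positive (wrongful denial) rates across groups. The synthetic data, models, and metric choices are assumptions for illustration, not the paper's data or estimators.

```python
# Sketch: comparing a simple transparent model with a complex one under a
# targeted audit metric. Synthetic data and metric choices are illustrative
# assumptions, not the paper's dataset or exact estimators.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=(n, 8))                      # borrower features
group = rng.integers(0, 2, size=n)               # protected attribute (0/1)
# Nonlinear "true" default risk so the complex model has an edge.
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.6 * X[:, 2] * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = default

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

def fpr_gap(y_true, y_pred, g):
    """Targeted audit metric: gap in false-positive (wrongful denial) rates."""
    rates = []
    for grp in (0, 1):
        mask = (g == grp) & (y_true == 0)        # non-defaulters in this group
        rates.append(y_pred[mask].mean())        # share predicted to default
    return abs(rates[0] - rates[1])

models = {
    "simple (logistic regression)": LogisticRegression(max_iter=1000),
    "complex (gradient boosting)": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: accuracy={model.score(X_te, y_te):.3f}, "
          f"FPR gap={fpr_gap(y_te, pred, g_te):.3f}")
```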
Related papers
- A General Framework for Learning from Weak Supervision [93.89870459388185]
This paper introduces a general framework for learning from weak supervision (GLWS) with a novel algorithm.
Central to GLWS is an Expectation-Maximization (EM) formulation, adeptly accommodating various weak supervision sources.
We also present an advanced algorithm that significantly simplifies the EM computational demands.
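The summary above centers on an EM formulation for weak supervision; below is a minimal, generic EM sketch that aggregates two noisy label sources into a posterior over a latent binary label. This is a Dawid-Skene-style illustration under assumed source accuracies, not the GLWS algorithm itself.

```python
# Minimal EM sketch for weak supervision: infer a latent binary label from
# two noisy labeling sources with unknown accuracies. Generic illustration,
# not the GLWS algorithm.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
true_z = rng.integers(0, 2, size=n)              # latent ground truth (unobserved)
true_acc = np.array([0.85, 0.70])                # assumed source accuracies
# Each source reports the true label with its own accuracy.
L = np.stack([np.where(rng.random(n) < a, true_z, 1 - true_z) for a in true_acc], axis=1)

pi, acc = 0.5, np.array([0.6, 0.6])              # initial guesses
for _ in range(50):
    # E-step: posterior that the latent label is 1, given the weak labels.
    log_p1 = np.log(pi) + np.sum(np.where(L == 1, np.log(acc), np.log(1 - acc)), axis=1)
    log_p0 = np.log(1 - pi) + np.sum(np.where(L == 0, np.log(acc), np.log(1 - acc)), axis=1)
    q = 1 / (1 + np.exp(log_p0 - log_p1))
    # M-step: re-estimate the class prior and per-source accuracies.
    pi = q.mean()
    acc = (q[:, None] * (L == 1) + (1 - q)[:, None] * (L == 0)).mean(axis=0)

pred = (q > 0.5).astype(int)
print("estimated accuracies:", acc.round(3), " prior:", round(pi, 3))
print("agreement with latent truth:", (pred == true_z).mean())
```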
arXiv Detail & Related papers (2024-02-02T21:48:50Z)
- On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z)
- Social Mechanism Design: A Low-Level Introduction [31.564788318133264]
We show that agents have preferences over both decision outcomes and the rules or procedures used to make decisions.
We identify simple, intuitive preference structures at low levels that can be generalized to form the building blocks of preferences at higher levels.
We analyze algorithms for acceptance in two different domains: asymmetric dichotomous choice and constitutional amendment.
arXiv Detail & Related papers (2022-11-15T20:59:34Z)
- Actor-Critic based Improper Reinforcement Learning [61.430513757337486]
We consider an improper reinforcement learning setting where a learner is given $M$ base controllers for an unknown Markov decision process.
We propose two algorithms: (1) a Policy Gradient-based approach; and (2) an algorithm that can switch between a simple Actor-Critic scheme and a Natural Actor-Critic scheme.
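A toy sketch of the general idea: instead of learning a policy directly, the learner keeps softmax weights over the $M$ base controllers and updates them with a REINFORCE-style gradient on observed returns. The stateless environment and the update rule are illustrative assumptions, not the paper's algorithms.

```python
# Toy sketch of "improper" learning over M base controllers: keep softmax
# weights over the controllers and update them with a REINFORCE-style policy
# gradient on observed returns. The stateless environment is an assumption.
import numpy as np

rng = np.random.default_rng(2)
M = 4
true_means = np.array([0.2, 0.5, 0.8, 0.35])     # unknown mean return per controller

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.zeros(M)                              # logits over base controllers
lr, avg_ret = 0.05, 0.0
for t in range(3000):
    probs = softmax(theta)
    k = rng.choice(M, p=probs)                   # run one base controller
    ret = true_means[k] + 0.1 * rng.normal()     # observe its noisy return
    avg_ret += 0.01 * (ret - avg_ret)            # running baseline for variance reduction
    grad = -probs                                # d log pi(k) / d theta = e_k - probs
    grad[k] += 1.0
    theta += lr * (ret - avg_ret) * grad

print("learned controller-selection probabilities:", softmax(theta).round(3))
```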
arXiv Detail & Related papers (2022-07-19T05:55:02Z)
- Active Fairness Auditing [22.301071549943064]
We study query-based auditing algorithms that can estimate the demographic parity of ML models in a query-efficient manner.
We propose an optimal deterministic algorithm, as well as a practical randomized, oracle-efficient algorithm with comparable guarantees.
Our first exploration of active fairness estimation aims to put AI governance on firmer theoretical foundations.
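A small sketch of query-based auditing: treat the model as a black box, spend a fixed query budget on inputs sampled from each group, and report an estimated demographic-parity gap with a Hoeffding-style confidence bound. The stand-in model, group distributions, and bound are assumptions; the paper's optimal and oracle-efficient algorithms are more refined.

```python
# Sketch of query-based demographic parity auditing: estimate the gap in
# positive-prediction rates between two groups using a fixed query budget.
# The black-box model here is a stand-in; only its predict() is used.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Stand-in black-box model trained on synthetic data (assumption for the demo).
X = rng.normal(size=(5000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
black_box = LogisticRegression().fit(X, y)

def audit_demographic_parity(model, sample_group, budget=1000, delta=0.05):
    """Estimate |P(f(x)=1 | g=0) - P(f(x)=1 | g=1)| from `budget` queries."""
    per_group = budget // 2
    rates = []
    for g in (0, 1):
        xs = sample_group(g, per_group)          # draw auditable inputs for group g
        rates.append(model.predict(xs).mean())
    # Hoeffding bound on each empirical rate, combined by a union bound.
    eps = np.sqrt(np.log(4 / delta) / (2 * per_group))
    return abs(rates[0] - rates[1]), 2 * eps

# Assumed group-conditional input distributions (shifted means).
def sample_group(g, m):
    shift = 0.0 if g == 0 else 0.4
    return rng.normal(loc=shift, size=(m, 5))

gap, half_width = audit_demographic_parity(black_box, sample_group)
print(f"estimated DP gap: {gap:.3f} +/- {half_width:.3f} (95% conf.)")
```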
arXiv Detail & Related papers (2022-06-16T21:12:00Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- FairXGBoost: Fairness-aware Classification in XGBoost [0.0]
We propose a fair variant of XGBoost that enjoys all the advantages of XGBoost, while also matching the levels of fairness from bias-mitigation algorithms.
We provide an empirical analysis of our proposed method on standard benchmark datasets used in the fairness community.
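One way such a fair variant can be wired into XGBoost is through a custom objective; the sketch below adds a penalty on the gap between group-mean predicted probabilities to the usual logistic loss. The penalty form, its weight, and the synthetic data are assumptions for illustration, not necessarily the regularizer proposed in the paper.

```python
# Sketch: fairness-aware gradient boosting via an XGBoost custom objective that
# adds a penalty on the gap between group-mean predicted probabilities.
# The penalty form, weight mu, and synthetic data are illustrative assumptions.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(4)
n = 10_000
s = rng.integers(0, 2, size=n)                   # sensitive attribute (kept out of X)
X = rng.normal(size=(n, 6)) + 0.5 * s[:, None]   # features correlated with s
y = (X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=n) > 0.5).astype(int)
dtrain = xgb.DMatrix(X, label=y)
mu = 2.0                                         # fairness penalty weight (assumption)

def fair_logistic_obj(preds, dtrain):
    """Logistic loss plus (mu/2) * (mean_{s=1} p - mean_{s=0} p)^2."""
    labels = dtrain.get_label()
    p = 1.0 / (1.0 + np.exp(-preds))             # preds are raw margins here
    grad = p - labels                             # gradient of the log loss
    hess = p * (1.0 - p)                          # penalty's Hessian ignored for simplicity
    gap = p[s == 1].mean() - p[s == 0].mean()
    sign = np.where(s == 1, 1.0 / (s == 1).sum(), -1.0 / (s == 0).sum())
    grad = grad + mu * gap * sign * p * (1.0 - p)
    return grad, hess

booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=100, obj=fair_logistic_obj)
p_hat = 1.0 / (1.0 + np.exp(-booster.predict(dtrain, output_margin=True)))
print("group mean-prediction gap:", round(abs(p_hat[s == 1].mean() - p_hat[s == 0].mean()), 3))
```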
arXiv Detail & Related papers (2020-09-03T04:08:23Z)
- Verification and Validation of Convex Optimization Algorithms for Model Predictive Control [1.5322124183968633]
This article discusses the formal verification of the Ellipsoid method, a convex optimization algorithm, and its code implementation.
The applicability and limitations of those code properties and proofs are presented as well.
Modifications to the algorithm are presented which can be used to control its numerical stability.
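For context on the algorithm being verified, a compact sketch of the ellipsoid method minimizing a convex function via subgradient cuts is given below; the test function and stopping rule are illustrative assumptions, and this is not the verified implementation discussed in the article.

```python
# Compact sketch of the ellipsoid method for unconstrained convex minimization:
# keep an ellipsoid {z : (z-x)^T P^{-1} (z-x) <= 1} containing a minimizer and
# shrink it with central subgradient cuts. Illustrative only.
import numpy as np

def ellipsoid_minimize(subgrad, x0, radius, iters=500):
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    P = (radius ** 2) * np.eye(n)                # initial ball of given radius
    best_x, best_val = x.copy(), np.inf
    for _ in range(iters):
        g, val = subgrad(x)                      # subgradient and value at the center
        if val < best_val:
            best_val, best_x = val, x.copy()
        Pg = P @ g
        denom = np.sqrt(g @ Pg)
        if denom < 1e-12:                        # (near-)zero subgradient: done
            break
        gt = Pg / denom
        # Central-cut ellipsoid update of the center and shape matrix.
        x = x - gt / (n + 1)
        P = (n ** 2 / (n ** 2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(gt, gt))
    return best_x, best_val

# Example: minimize f(x) = ||x - c||_1, with subgradient sign(x - c).
c = np.array([1.0, -2.0, 0.5])
f = lambda x: np.abs(x - c).sum()
sg = lambda x: (np.sign(x - c), f(x))
x_star, v = ellipsoid_minimize(sg, x0=np.zeros(3), radius=10.0)
print("approx minimizer:", x_star.round(3), " value:", round(v, 4))
```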
arXiv Detail & Related papers (2020-05-26T09:18:14Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
- XtracTree: a Simple and Effective Method for Regulator Validation of Bagging Methods Used in Retail Banking [0.0]
We propose XtracTree, an algorithm capable of efficiently converting an ML bagging classifier, such as a random forest, into simple "if-then" rules.
Our experiments demonstrate that using XtracTree, one can convert an ML model into a rule-based algorithm.
The proposed approach allowed our banking institution to reduce the delivery time of our AI solutions to end-users by up to 50%.
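A small sketch of this kind of conversion: walk the internal structure of one fitted scikit-learn tree from a random forest and print each root-to-leaf path as an "if-then" rule. This follows the general idea only and is a stand-in for XtracTree, not its implementation.

```python
# Sketch: convert one tree of a fitted random forest into "if-then" rules by
# walking its root-to-leaf paths. Illustrative stand-in for the XtracTree idea.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
forest = RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0).fit(X, y)

def tree_to_rules(tree, feature_names):
    """Yield one 'if <conditions> then predict <class>' string per leaf."""
    t = tree.tree_
    def walk(node, conds):
        if t.children_left[node] == -1:          # leaf node
            counts = t.value[node][0]
            yield f"if {' and '.join(conds) or 'True'} then predict class {int(np.argmax(counts))}"
            return
        name = feature_names[t.feature[node]]
        thr = t.threshold[node]
        yield from walk(t.children_left[node], conds + [f"{name} <= {thr:.3f}"])
        yield from walk(t.children_right[node], conds + [f"{name} > {thr:.3f}"])
    yield from walk(0, [])

features = [f"x{i}" for i in range(X.shape[1])]
for rule in tree_to_rules(forest.estimators_[0], features):
    print(rule)
```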
arXiv Detail & Related papers (2020-04-05T21:57:06Z)
- Improved Algorithms for Conservative Exploration in Bandits [113.55554483194832]
We study the conservative learning problem in the contextual linear bandit setting and introduce a novel algorithm, the Conservative Constrained LinUCB (CLUCB2).
We derive regret bounds for CLUCB2 that match existing results and empirically show that it outperforms state-of-the-art conservative bandit algorithms in a number of synthetic and real-world problems.
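A simplified sketch of the conservative setting: a LinUCB-style learner deviates from a known baseline arm only when a pessimistic estimate of its cumulative reward stays above a (1 - alpha) fraction of the baseline's. The environment, the safety check, and the parameters are illustrative assumptions, not CLUCB2 itself.

```python
# Simplified sketch of conservative exploration in linear bandits: play the
# LinUCB arm only when a pessimistic estimate of cumulative reward stays above
# (1 - alpha) times the baseline's cumulative expected reward; otherwise fall
# back to the baseline arm. Environment, bound, and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(5)
d, n_arms, T = 5, 10, 5000
beta, alpha = 1.0, 0.1                           # UCB width, allowed performance loss
theta_true = rng.normal(size=d)
arms = rng.normal(size=(n_arms, d))
baseline_arm = 0
baseline_mean = arms[baseline_arm] @ theta_true  # assumed known, as in this setting

A, b = np.eye(d), np.zeros(d)                    # ridge design matrix and response sum
cum_reward, cum_baseline = 0.0, 0.0
for t in range(1, T + 1):
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    width = beta * np.sqrt(np.einsum("ij,jk,ik->i", arms, A_inv, arms))
    means = arms @ theta_hat
    k = int(np.argmax(means + width))            # optimistic (LinUCB) choice
    # Conservative check with the pessimistic value of the optimistic arm.
    if cum_reward + (means[k] - width[k]) < (1 - alpha) * (cum_baseline + baseline_mean):
        k = baseline_arm
    x = arms[k]
    r = x @ theta_true + 0.1 * rng.normal()
    cum_reward += r
    cum_baseline += baseline_mean
    A += np.outer(x, x)
    b += r * x

print(f"average reward: {cum_reward / T:.3f}  (baseline mean: {baseline_mean:.3f})")
```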
arXiv Detail & Related papers (2020-02-08T19:35:01Z)