The Flaws of Policies Requiring Human Oversight of Government Algorithms
- URL: http://arxiv.org/abs/2109.05067v4
- Date: Mon, 24 Oct 2022 14:48:38 GMT
- Title: The Flaws of Policies Requiring Human Oversight of Government Algorithms
- Authors: Ben Green
- Abstract summary: I propose a shift from human oversight to institutional oversight as the central mechanism for regulating government algorithms.
First, agencies must justify that it is appropriate to incorporate an algorithm into decision-making.
Second, these justifications must receive democratic public review and approval before the agency can adopt the algorithm.
- Score: 2.741266294612776
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As algorithms become an influential component of government decision-making
around the world, policymakers have debated how governments can attain the
benefits of algorithms while preventing the harms of algorithms. One mechanism
that has become a centerpiece of global efforts to regulate government
algorithms is to require human oversight of algorithmic decisions. Despite the
widespread turn to human oversight, these policies rest on an uninterrogated
assumption: that people are able to effectively oversee algorithmic
decision-making. In this article, I survey 41 policies that prescribe human
oversight of government algorithms and find that they suffer from two
significant flaws. First, evidence suggests that people are unable to perform
the desired oversight functions. Second, as a result of the first flaw, human
oversight policies legitimize government uses of faulty and controversial
algorithms without addressing the fundamental issues with these tools. Thus,
rather than protect against the potential harms of algorithmic decision-making
in government, human oversight policies provide a false sense of security in
adopting algorithms and enable vendors and agencies to shirk accountability for
algorithmic harms. In light of these flaws, I propose a shift from human
oversight to institutional oversight as the central mechanism for regulating
government algorithms. This institutional approach operates in two stages.
First, agencies must justify that it is appropriate to incorporate an algorithm
into decision-making and that any proposed forms of human oversight are
supported by empirical evidence. Second, these justifications must receive
democratic public review and approval before the agency can adopt the
algorithm.
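The two-stage process is procedural rather than computational, but a minimal sketch can make its gating logic concrete. The Python below is purely illustrative (the class, field, and function names are mine, not the paper's): adoption is blocked unless the agency has justified the algorithm, grounded any proposed human oversight in empirical evidence, and obtained public approval.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmProposal:
    """Hypothetical record of an agency's Stage 1 justification."""
    name: str
    appropriateness_rationale: str  # why an algorithm belongs in this decision
    oversight_evidence: list = field(default_factory=list)  # studies backing each oversight mechanism

def stage_one_complete(p: AlgorithmProposal) -> bool:
    # Stage 1: the agency must justify the algorithm AND support any
    # proposed human oversight with empirical evidence.
    return bool(p.appropriateness_rationale) and len(p.oversight_evidence) > 0

def may_adopt(p: AlgorithmProposal, publicly_approved: bool) -> bool:
    # Stage 2: adoption is additionally gated on democratic public
    # review and approval.
    return stage_one_complete(p) and publicly_approved
```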
Related papers
- Persuasion, Delegation, and Private Information in Algorithm-Assisted Decisions [0.0]
A principal designs an algorithm that generates a publicly observable prediction of a binary state.
She must decide whether to act directly based on the prediction or to delegate the decision to an agent with private information but potential misalignment.
We study the optimal design of the prediction algorithm and the delegation rule in such environments.
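The core trade-off, acting on a public prediction versus delegating to a better-informed but potentially misaligned agent, can be illustrated with a toy Monte Carlo comparison. Everything below is an assumed toy model (the accuracy and bias parameters are invented), not the paper's formal mechanism-design analysis.

```python
import random

def simulate(n=100_000, pred_acc=0.75, sig_acc=0.9, agent_bias=0.15, seed=0):
    """Toy comparison: acting on the public prediction vs. delegating to a
    partially misaligned agent holding a more accurate private signal."""
    rng = random.Random(seed)
    direct = delegated = 0
    for _ in range(n):
        state = rng.random() < 0.5                               # binary state
        pred = state if rng.random() < pred_acc else not state   # public prediction
        signal = state if rng.random() < sig_acc else not state  # agent's private signal
        # Misalignment: with probability agent_bias the agent takes its
        # privately preferred action (here: True) regardless of the signal.
        action = True if rng.random() < agent_bias else signal
        direct += (pred == state)
        delegated += (action == state)
    return direct / n, delegated / n

print(simulate())  # roughly (0.75, 0.84): delegation can win despite misalignment
```

Shrinking sig_acc or raising agent_bias flips the comparison, which is the tension between information and alignment that the paper's optimal delegation rule must resolve.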
arXiv Detail & Related papers (2024-02-14T18:32:30Z)
- Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
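One concrete lever in this space is a reporting threshold on training compute; the 2023 U.S. Executive Order on AI, for example, used a trigger of 10^26 operations. The check below is a minimal sketch of that idea; the rule and its parameters are illustrative and not drawn from this paper.

```python
# Hypothetical compute-governance rule: models trained above a compute
# threshold must be reported to a regulator. The threshold mirrors the
# 2023 U.S. Executive Order's trigger but is used here only as an example.
REPORTING_THRESHOLD_OPS = 1e26

def requires_report(total_training_ops: float) -> bool:
    return total_training_ops >= REPORTING_THRESHOLD_OPS

print(requires_report(3.2e25))  # False: below the reporting trigger
print(requires_report(2.0e26))  # True: reporting obligation applies
```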
arXiv Detail & Related papers (2024-02-13T21:10:21Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
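For context, the setting can be stated as a standard sparse linear inverse problem; the notation below is assumed for illustration rather than quoted from the paper.

```latex
% Recover a signal x from noisy, underdetermined measurements y.
\begin{align*}
  y &= Ax + \eta, \qquad A \in \mathbb{R}^{m \times n},\ m < n,\\
  \hat{x} &\in \arg\min_{x} \; \|Ax - y\|_2^2 + \lambda \|x\|_1 .
\end{align*}
% The transparency question is whether a solver computing \hat{x} can be
% exactly realized in a given computing model; Blum-Shub-Smale machines
% compute over the reals exactly, sidestepping floating-point obstructions.
```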
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Human Control: Definitions and Algorithms [11.536162323162099]
We show that shutdown instructability implies appropriate shutdown behavior, retention of human autonomy, and avoidance of user harm.
We also analyse the related concepts of non-obstruction and shutdown alignment, three previously proposed algorithms for human control, and one new algorithm.
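As a loose behavioral gloss of shutdown instructability (the paper's definitions are formal; this sketch is only my simplification), the key property is that a human shutdown instruction reliably overrides whatever the base policy would otherwise do:

```python
# Illustrative wrapper (hypothetical, not the paper's algorithm): a policy
# is treated as shutdown instructable if the shutdown command always wins.
def shutdown_instructable(base_policy, shutdown_requested, observation):
    if shutdown_requested:
        return "SHUTDOWN"            # comply immediately; never obstruct
    return base_policy(observation)  # otherwise act as usual

# The human retains autonomy: any base policy can be halted on demand.
assert shutdown_instructable(lambda obs: "WORK", True, None) == "SHUTDOWN"
assert shutdown_instructable(lambda obs: "WORK", False, None) == "WORK"
```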
arXiv Detail & Related papers (2023-05-31T13:53:02Z)
- How do "technical" design-choices made when building algorithmic decision-making tools for criminal justice authorities create constitutional dangers? Part II [0.0]
We argue that seemingly "technical" choices made by developers of machine-learning based algorithmic tools can create serious constitutional dangers.
We show how public law principles and more specific legal duties are routinely overlooked in algorithmic tool-building and implementation.
We argue that technical developers must collaborate closely with public law experts to ensure that algorithmic decision-support tools are configured and implemented in a manner that is demonstrably compliant with public law principles and doctrine.
arXiv Detail & Related papers (2023-01-11T20:54:49Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
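The rough shape of platform-supported auditing can be sketched as a privacy-protected query interface: the platform answers auditors' aggregate questions with calibrated noise instead of exposing raw user data. The endpoint, data layout, and choice of Laplace noise below are illustrative assumptions, not the paper's actual protocol.

```python
import math
import random

def platform_audit_query(avg_exposure_by_group, epsilon=1.0, seed=0):
    """Hypothetical platform-side audit endpoint: release per-group average
    exposure to some content category with Laplace noise, letting auditors
    test for disparate outcomes without access to individual user data."""
    rng = random.Random(seed)
    def laplace(scale):
        u = rng.random() - 0.5  # inverse-CDF Laplace sampling
        return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return {g: v + laplace(1.0 / epsilon) for g, v in avg_exposure_by_group.items()}

print(platform_audit_query({"group_a": 0.42, "group_b": 0.31}))
```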
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
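A minimal version of this setup is a UCB-style bandit over candidate algorithms in which every observed runtime is right-censored at a timeout. The sketch below illustrates the idea under simple assumptions (run is an assumed callback returning true runtimes); the paper's runtime-oriented losses and horizon-independent complexity treatment are more refined.

```python
import math
import random

def bandit_algorithm_selection(run, n_algos, n_instances, timeout=10.0):
    """UCB-style online algorithm selection with right-censored runtimes:
    any run longer than `timeout` is cut off and observed as `timeout`."""
    counts = [0] * n_algos
    mean_loss = [0.0] * n_algos
    for t in range(1, n_instances + 1):
        def lcb(a):  # optimistic (lower) confidence bound on expected loss
            if counts[a] == 0:
                return -math.inf          # try every algorithm once
            return mean_loss[a] - math.sqrt(2 * math.log(t) / counts[a])
        a = min(range(n_algos), key=lcb)  # lowest bound = most promising
        observed = min(run(a, t), timeout)            # censored feedback
        counts[a] += 1
        mean_loss[a] += (observed - mean_loss[a]) / counts[a]
    return mean_loss, counts

# Toy usage: algorithm 0 is fast on average; algorithm 1 often times out.
random.seed(1)
runtime = lambda a, _inst: random.expovariate(1.0 if a == 0 else 0.08)
print(bandit_algorithm_selection(runtime, n_algos=2, n_instances=500))
```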
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Audit and Assurance of AI Algorithms: A framework to ensure ethical algorithmic practices in Artificial Intelligence [0.0]
The U.S. lacks strict legislative prohibitions or specified protocols for measuring damages.
From autonomous vehicles and banking to medical care, housing, and legal decisions, enormous numbers of algorithms will soon be in use.
Governments, businesses, and society would benefit from algorithm audits: systematic verification that algorithms are lawful, ethical, and secure.
arXiv Detail & Related papers (2021-07-14T15:16:40Z)
- Towards Meta-Algorithm Selection [78.13985819417974]
Instance-specific algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidates.
We show that meta-algorithm-selection can indeed prove beneficial in some cases.
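The meta-level idea fits in a few lines: treat entire algorithm selectors as the candidates and choose among them per instance. The sketch below is schematic; validation_score stands in for an assumed offline-learned estimate of each selector's performance.

```python
# Hypothetical meta-selector: pick the algorithm selector expected to do
# best on instances with these features, then let it pick the algorithm.
def meta_select(selectors, validation_score, instance_features):
    best = max(selectors, key=lambda sel: validation_score(sel, instance_features))
    return best(instance_features)
```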
arXiv Detail & Related papers (2020-11-17T17:27:33Z)
- Imposing Regulation on Advanced Algorithms [0.0]
The book examines universally applicable patterns in administrative decisions and judicial rulings.
It investigates the role and significance of national and indeed supranational regulatory bodies for advanced algorithms.
It considers ENISA, an EU agency that focuses on network and information security, as an interesting candidate for a European regulator of advanced algorithms.
arXiv Detail & Related papers (2020-05-16T20:26:54Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
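The adherence comparison behind this finding reduces to a simple conditional rate. The sketch below uses an invented data layout (not the study's), splitting how often screeners follow the tool's recommendation by whether the displayed score was a correct estimate of risk.

```python
def adherence_by_score_quality(cases):
    """cases: iterable of (followed_tool: bool, score_correct: bool) pairs."""
    stats = {True: [0, 0], False: [0, 0]}  # score_correct -> [followed, total]
    for followed, correct in cases:
        stats[correct][0] += int(followed)
        stats[correct][1] += 1
    return {("correct" if k else "incorrect"): f / t
            for k, (f, t) in stats.items() if t}

print(adherence_by_score_quality(
    [(True, True), (True, True), (False, False), (True, False)]))
# -> {'correct': 1.0, 'incorrect': 0.5}: lower adherence to incorrect scores
```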
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.