How do "technical" design-choices made when building algorithmic
decision-making tools for criminal justice authorities create constitutional
dangers?
- URL: http://arxiv.org/abs/2301.04713v1
- Date: Wed, 11 Jan 2023 20:49:40 GMT
- Title: How do "technical" design-choices made when building algorithmic
decision-making tools for criminal justice authorities create constitutional
dangers?
- Authors: Karen Yeung and Adam Harkens
- Abstract summary: We argue that seemingly "technical" choices made by developers of machine-learning based algorithmic tools can create serious constitutional dangers.
We show how public law principles and more specific legal duties are routinely overlooked in algorithmic tool-building and implementation.
We argue that technical developers must collaborate closely with public law experts to ensure that algorithmic decision-support tools are configured and implemented in a manner that is demonstrably compliant with public law principles and doctrine.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This two part paper argues that seemingly "technical" choices made by
developers of machine-learning based algorithmic tools used to inform decisions
by criminal justice authorities can create serious constitutional dangers,
enhancing the likelihood of abuse of decision-making power and the scope and
magnitude of injustice. Drawing on three algorithmic tools in use, or recently
used, to assess the "risk" posed by individuals to inform how they should be
treated by criminal justice authorities, we integrate insights from data
science and public law scholarship to show how public law principles and more
specific legal duties rooted in these principles are routinely
overlooked in algorithmic tool-building and implementation. We argue that
technical developers must collaborate closely with public law experts to ensure
that if algorithmic decision-support tools are to inform criminal justice
decisions, those tools are configured and implemented in a manner that is
demonstrably compliant with public law principles and doctrine, including
respect for human rights, throughout the tool-building process.
Related papers
- Mathematical Algorithm Design for Deep Learning under Societal and
Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw)
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
arXiv Detail & Related papers (2023-11-11T04:13:37Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- How do "technical" design-choices made when building algorithmic decision-making tools for criminal justice authorities create constitutional dangers? Part II [0.0]
We argue that seemingly "technical" choices made by developers of machine-learning based algorithmic tools can create serious constitutional dangers.
We show how public law principles and more specific legal duties are routinely overlooked in algorithmic tool-building and implementation.
We argue that technical developers must collaborate closely with public law experts to ensure that algorithmic decision-support tools are configured and implemented in a manner that is demonstrably compliant with public law principles and doctrine.
arXiv Detail & Related papers (2023-01-11T20:54:49Z)
- Transparency, Compliance, And Contestability When Code Is(n't) Law [91.85674537754346]
Both technical security mechanisms and legal processes serve as mechanisms to deal with misbehaviour according to a set of norms.
While they share general similarities, there are also clear differences in how they are defined, act, and the effect they have on subjects.
This paper considers the similarities and differences between both types of mechanisms as ways of dealing with misbehaviour.
arXiv Detail & Related papers (2022-05-08T18:03:07Z)
- Beyond Ads: Sequential Decision-Making Algorithms in Law and Public Policy [2.762239258559568]
We explore the promises and challenges of employing sequential decision-making algorithms in law and public policy.
Our main thesis is that law and public policy pose distinct methodological challenges that the machine learning community has not yet addressed.
We discuss a wide range of potential applications of sequential decision-making algorithms in regulation and governance.
arXiv Detail & Related papers (2021-12-13T17:45:21Z)
- The Flaws of Policies Requiring Human Oversight of Government Algorithms [2.741266294612776]
I propose a shift from human oversight to institutional oversight as the central mechanism for regulating government algorithms.
First, agencies must justify that it is appropriate to incorporate an algorithm into decision-making.
Second, these justifications must receive democratic public review and approval before the agency can adopt the algorithm.
arXiv Detail & Related papers (2021-09-10T18:58:45Z)
- Audit and Assurance of AI Algorithms: A framework to ensure ethical algorithmic practices in Artificial Intelligence [0.0]
The U.S. lacks strict legislative prohibitions or specified protocols for measuring algorithmic damages.
From autonomous vehicles and banking to medical care, housing, and legal decisions, algorithms will soon be ubiquitous.
Governments, businesses, and society would benefit from algorithm audits: systematic verification that algorithms are lawful, ethical, and secure.
arXiv Detail & Related papers (2021-07-14T15:16:40Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate the expert's knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- Imposing Regulation on Advanced Algorithms [0.0]
The book examines universally applicable patterns in administrative decisions and judicial rulings.
It investigates the role and significance of national and indeed supranational regulatory bodies for advanced algorithms.
It considers ENISA, an EU agency that focuses on network and information security, as an interesting candidate for a European regulator of advanced algorithms.
arXiv Detail & Related papers (2020-05-16T20:26:54Z)
- Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create rules and specialized organizations that can oversee compliance with such rules.
This work proposes the creation, at universities, of Ethics Committees or Commissions specializing in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.