Law Informs Code: A Legal Informatics Approach to Aligning Artificial
Intelligence with Humans
- URL: http://arxiv.org/abs/2209.13020v2
- Date: Thu, 29 Sep 2022 17:04:40 GMT
- Title: Law Informs Code: A Legal Informatics Approach to Aligning Artificial
Intelligence with Humans
- Authors: John J Nay
- Abstract summary: Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives.
"Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We are currently unable to specify human goals and societal values in a way
that reliably directs AI behavior. Law-making and legal interpretation form a
computational engine that converts opaque human values into legible directives.
"Law Informs Code" is the research agenda capturing complex computational legal
processes, and embedding them in AI. Similar to how parties to a legal contract
cannot foresee every potential contingency of their future relationship, and
legislators cannot predict all the circumstances under which their proposed
bills will be applied, we cannot ex ante specify rules that provably direct
good AI behavior. Legal theory and practice have developed arrays of tools to
address these specification problems. For instance, legal standards allow
humans to develop shared understandings and adapt them to novel situations. In
contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior
through the threat of sanction), when the law is leveraged as an expression of
how humans communicate their goals and of what society values, Law Informs Code.
We describe how data generated by legal processes (methods of law-making,
statutory interpretation, contract drafting, applications of standards, legal
reasoning, etc.) can facilitate the robust specification of inherently vague
human goals. This increases human-AI alignment and the local usefulness of AI.
Toward society-AI alignment, we present a framework for understanding law as
the applied philosophy of multi-agent alignment. Although law is partly a
reflection of historically contingent political power - and thus not a perfect
aggregation of citizen preferences - if properly parsed, its distillation
offers the most legitimate computational comprehension of societal values
available. If law eventually informs powerful AI, engaging in the deliberative
political process to improve law takes on even more meaning.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
Applying these concepts to the EU AI Act uncovers potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- The Reasonable Person Standard for AI [0.0]
The American legal system often invokes the "Reasonable Person Standard" to assess behavior.
This paper argues that the reasonable person standard provides useful guidelines for the type of behavior we should develop, probe, and stress-test in models.
arXiv Detail & Related papers (2024-06-07T06:35:54Z)
- A Case for AI Safety via Law [0.0]
How to make artificial intelligence (AI) systems safe and aligned with human values is an open research question.
Proposed solutions tend toward relying on human intervention in uncertain situations.
This paper makes a case that effective legal systems are the best way to address AI safety.
arXiv Detail & Related papers (2023-07-31T19:55:27Z)
- Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards [0.0]
Legal standards facilitate robust communication of inherently vague and underspecified goals.
Our research is an initial step toward a framework for evaluating AI understanding of legal standards more broadly.
arXiv Detail & Related papers (2023-01-24T16:03:20Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Aligning Artificial Intelligence with Humans through Public Policy [0.0]
This essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks.
We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy.
arXiv Detail & Related papers (2022-06-25T21:31:14Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release Lawformer, a Longformer-based pre-trained language model for understanding long Chinese legal documents.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z)
- AI and Legal Argumentation: Aligning the Autonomous Levels of AI Legal Reasoning [0.0]
Legal argumentation is a vital cornerstone of justice, underpinning an adversarial form of law.
Extensive research has attempted to augment or undertake legal argumentation through computer-based automation, including Artificial Intelligence (AI).
An innovative meta-approach is proposed to apply the Levels of Autonomy (LoA) of AI Legal Reasoning to the maturation of AI and Legal Argumentation (AILA).
arXiv Detail & Related papers (2020-09-11T22:05:40Z)
- Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
- How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence [81.04070052740596]
Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.
This paper introduces the history, the current state, and the future directions of research in LegalAI.
arXiv Detail & Related papers (2020-04-25T14:45:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.