A Case for AI Safety via Law
- URL: http://arxiv.org/abs/2309.12321v2
- Date: Tue, 3 Oct 2023 16:37:40 GMT
- Title: A Case for AI Safety via Law
- Authors: Jeffrey W. Johnston
- Abstract summary: How to make artificial intelligence (AI) systems safe and aligned with human values is an open research question.
Proposed solutions tend toward relying on human intervention in uncertain situations.
This paper makes a case that effective legal systems are the best way to address AI safety.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How to make artificial intelligence (AI) systems safe and aligned with human
values is an open research question. Proposed solutions tend toward relying on
human intervention in uncertain situations, learning human values and
intentions through training or observation, providing off-switches,
implementing isolation or simulation environments, or extrapolating what people
would want if they had more knowledge and more time to think. Law-based
approaches, such as those inspired by Isaac Asimov, have not been well regarded. This
paper makes a case that effective legal systems are the best way to address AI
safety. Law is defined as any rules that codify prohibitions and prescriptions
applicable to particular agents in specified domains/contexts and includes
processes for enacting, managing, enforcing, and litigating such rules.
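
The abstract's definition of law is concrete enough to render as a toy data structure. Below is a minimal, hypothetical sketch in Python; the paper proposes no implementation, and the names (Rule, violations) and the scope model are assumptions made purely for illustration. It encodes rules that codify prohibitions and prescriptions applicable to particular agents in specified domains/contexts, with a simple compliance check:

```python
from dataclasses import dataclass
from typing import Literal

# Toy encoding of the paper's definition of law: rules that codify
# prohibitions and prescriptions applicable to particular agents in
# specified domains/contexts. All names here are illustrative, not
# taken from the paper.

@dataclass(frozen=True)
class Rule:
    kind: Literal["prohibition", "prescription"]
    action: str               # the action the rule prohibits or prescribes
    agents: frozenset[str]    # agent classes the rule applies to
    contexts: frozenset[str]  # domains/contexts where the rule is in force

def violations(rule: Rule, agent: str, context: str, actions: set[str]) -> list[str]:
    """Return violated obligations for one agent acting in one context."""
    if agent not in rule.agents or context not in rule.contexts:
        return []  # rule out of scope: different agent class or domain
    if rule.kind == "prohibition" and rule.action in actions:
        return [f"prohibited action performed: {rule.action}"]
    if rule.kind == "prescription" and rule.action not in actions:
        return [f"prescribed action omitted: {rule.action}"]
    return []

# Usage: a hypothetical autonomous-vehicle rule checked against observed actions.
rule = Rule("prohibition", "exceed_speed_limit",
            frozenset({"autonomous_vehicle"}), frozenset({"public_roads"}))
print(violations(rule, "autonomous_vehicle", "public_roads",
                 {"exceed_speed_limit", "signal_turn"}))
# -> ['prohibited action performed: exceed_speed_limit']
```

The processes the abstract also names (enacting, managing, enforcing, and litigating rules) would sit above such records; this sketch covers only the codification layer.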
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act), using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence [0.0]
We explore the applicability of approval regulation -- that is, regulation of a product that combines experimental minima with government licensure conditioned partially or fully upon that experimentation -- to the regulation of frontier AI.
There are a number of reasons to believe that approval regulation, simplistically applied, would be inapposite for frontier AI risks.
We conclude by highlighting the role of policy learning and experimentation in regulatory development.
arXiv Detail & Related papers (2024-08-01T17:54:57Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- The Reasonable Person Standard for AI [0.0]
The American legal system often uses the "Reasonable Person Standard."
This paper argues that the reasonable person standard provides useful guidelines for the type of behavior we should develop, probe, and stress-test in models.
arXiv Detail & Related papers (2024-06-07T06:35:54Z)
- False Sense of Security in Explainable Artificial Intelligence (XAI) [3.298597939573779]
We argue that AI regulations and current market conditions threaten effective AI governance and safety.
Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements, AI governance risks becoming a vacuous "box-ticking" exercise.
arXiv Detail & Related papers (2024-05-06T20:02:07Z)
- Statutory Professions in AI governance and their consequences for explainable AI [2.363388546004777]
Intentional and accidental harms arising from the use of AI have impacted the health, safety and rights of individuals.
We propose that a statutory profession framework be introduced as a necessary part of the AI regulatory framework.
arXiv Detail & Related papers (2023-06-15T08:51:28Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans [0.0]
Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives.
"Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI.
arXiv Detail & Related papers (2022-09-14T00:49:09Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles that AI regulation should take to make the AI Act a success with respect to fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.