Is the U.S. Legal System Ready for AI's Challenges to Human Values?
- URL: http://arxiv.org/abs/2308.15906v3
- Date: Tue, 5 Sep 2023 01:01:58 GMT
- Title: Is the U.S. Legal System Ready for AI's Challenges to Human Values?
- Authors: Inyoung Cheong, Aylin Caliskan, Tadayoshi Kohno
- Abstract summary: This study investigates how effectively U.S. laws confront the challenges posed by Generative AI to human values.
We identify notable gaps and uncertainties within the existing legal framework regarding the protection of fundamental values.
We advocate for legal frameworks that evolve to recognize new threats and provide proactive, auditable guidelines to industry stakeholders.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Our interdisciplinary study investigates how effectively U.S. laws confront the challenges posed by Generative AI to human values. Through an analysis of diverse hypothetical scenarios crafted during an expert workshop, we have identified notable gaps and uncertainties within the existing legal framework regarding the protection of fundamental values, such as privacy, autonomy, dignity, diversity, equity, and physical/mental well-being. Constitutional and civil rights, it appears, may not provide sufficient protection against AI-generated discriminatory outputs. Furthermore, even if we exclude the liability shield provided by Section 230, proving causation for defamation and product liability claims is a challenging endeavor due to the intricate and opaque nature of AI systems. To address the unique and unforeseeable threats posed by Generative AI, we advocate for legal frameworks that evolve to recognize new threats and provide proactive, auditable guidelines to industry stakeholders. Addressing these issues requires deep interdisciplinary collaboration to identify harms, values, and mitigation strategies.
Related papers
- Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems [2.444630714797783]
We review and discuss the intricacies of AI biases, definitions, methods of detection and mitigation, and metrics for evaluating bias.
We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision making.
arXiv Detail & Related papers (2024-08-28T06:04:25Z)
- AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Ethics and Responsible AI Deployment [1.3597551064547502]
The article explores the need for ethical AI systems that safeguard individual privacy while complying with ethical standards.
Research examines innovative algorithmic techniques such as differential privacy, homomorphic encryption, federated learning, international regulatory frameworks, and ethical guidelines.
arXiv Detail & Related papers (2023-11-12T13:32:46Z)
- Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw)
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by Generative AI for law.
arXiv Detail & Related papers (2023-11-11T04:13:37Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Queering the ethics of AI [0.6993026261767287]
The chapter emphasizes the ethical concerns surrounding the potential for AI to perpetuate discrimination.
The chapter argues that a critical examination of the conception of equality that often underpins non-discrimination law is necessary.
arXiv Detail & Related papers (2023-08-25T17:26:05Z)
- Statutory Professions in AI governance and their consequences for explainable AI [2.363388546004777]
Intentional and accidental harms arising from the use of AI have impacted the health, safety and rights of individuals.
We propose that a statutory profession framework be introduced as a necessary part of the AI regulatory framework.
arXiv Detail & Related papers (2023-06-15T08:51:28Z)
- AI Ethics: An Empirical Study on the Views of Practitioners and Lawmakers [8.82540441326446]
Transparency, accountability, and privacy are the most critical AI ethics principles.
Lack of ethical knowledge, no legal frameworks, and lacking monitoring bodies are the most common AI ethics challenges.
arXiv Detail & Related papers (2022-06-30T17:24:29Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.