A+AI: Threats to Society, Remedies, and Governance
- URL: http://arxiv.org/abs/2409.02219v2
- Date: Sat, 7 Sep 2024 01:25:30 GMT
- Title: A+AI: Threats to Society, Remedies, and Governance
- Authors: Don Byrd
- Abstract summary: This document focuses on the threats, especially near-term threats, that Artificial Intelligence (AI) brings to society.
It includes a table showing which countermeasures are likely to mitigate which threats.
The paper lists specific actions government should take as soon as possible.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This document focuses on the threats, especially near-term threats, that Artificial Intelligence (AI) brings to society. Most of the threats discussed here can result from any algorithmic process, not just AI; in addition, defining AI is notoriously difficult. For both reasons, it is important to think of "A+AI": Algorithms and Artificial Intelligence. In addition to the threats, this paper discusses countermeasures to them, and it includes a table showing which countermeasures are likely to mitigate which threats. Thoughtful governance could manage the risks without seriously impeding progress; in fact, chances are it would accelerate progress by reducing the social chaos that would otherwise be likely. The paper lists specific actions government should take as soon as possible, namely:
  * Require all social media platforms accessible in the U.S. to offer users verification that their accounts are owned by citizens, and to display every account's verification status
  * Establish regulations to require that all products created or significantly modified with A+AI be clearly labeled as such; to restrict use of generative AI to create likenesses of persons; and to require creators of generative AI software to disclose materials used to train their software and to compensate the creators of any copyrighted material used
  * Fund a crash project of research on mitigating the threats
  * Fund educational campaigns to raise awareness of the threats
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - A Survey on Offensive AI Within Cybersecurity [1.8206461789819075]
This survey paper on offensive AI will comprehensively cover various aspects related to attacks against and using AI systems.
It will delve into the impact of offensive AI practices on different domains, including consumer, enterprise, and public digital infrastructure.
The paper will explore adversarial machine learning, attacks against AI models, infrastructure, and interfaces, along with offensive techniques like information gathering, social engineering, and weaponized AI.
arXiv Detail & Related papers (2024-09-26T17:36:22Z) - Mapping Technical Safety Research at AI Companies: A literature review and incentives analysis [0.0]
This report analyzes the technical research into safe AI development being conducted by three leading AI companies: Anthropic, Google DeepMind, and OpenAI.
We defined safe AI development as developing AI systems that are unlikely to pose large-scale misuse or accident risks.
arXiv Detail & Related papers (2024-09-12T09:34:55Z) - Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z) - AI Safety: Necessary, but insufficient and possibly problematic [1.6797508081737678]
This article critically examines the recent hype around AI safety.
We consider what 'AI safety' actually means, and outline the dominant concepts that the digital footprint of AI safety aligns with.
We share our concerns on how AI safety may normalize AI that advances structural harm through providing exploitative and harmful AI with a veneer of safety.
arXiv Detail & Related papers (2024-03-26T06:18:42Z) - A Technological Perspective on Misuse of Available AI [41.94295877935867]
Potential malicious misuse of civilian artificial intelligence (AI) poses serious threats to security on a national and international level.
We show how already existing and openly available AI technology could be misused.
We develop three exemplary use cases of potentially misused AI that threaten political, digital and physical security.
arXiv Detail & Related papers (2024-03-22T16:30:58Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI [0.0]
Coalitions in all sectors are acting worldwide to resist harmful applications of AI.
There are biased, wrongful, and disturbing assumptions embedded in AI algorithms.
Perhaps one of the greatest contributions of AI will be to make us understand how important human wisdom truly is in life on earth.
arXiv Detail & Related papers (2021-07-11T18:51:29Z) - The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI on organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.