A Technological Perspective on Misuse of Available AI
- URL: http://arxiv.org/abs/2403.15325v1
- Date: Fri, 22 Mar 2024 16:30:58 GMT
- Title: A Technological Perspective on Misuse of Available AI
- Authors: Lukas Pöhler, Valentin Schrader, Alexander Ladwein, Florian von Keller,
- Abstract summary: Potential malicious misuse of civilian artificial intelligence (AI) poses serious threats to security on a national and international level.
We show how already existing and openly available AI technology could be misused.
We develop three exemplary use cases of potentially misused AI that threaten political, digital and physical security.
- Score: 41.94295877935867
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Potential malicious misuse of civilian artificial intelligence (AI) poses serious threats to security on a national and international level. Besides defining autonomous systems from a technological viewpoint and explaining how AI development is characterized, we show how already existing and openly available AI technology could be misused. To underline this, we developed three exemplary use cases of potentially misused AI that threaten political, digital and physical security. The use cases can be built from existing AI technologies and components from academia, the private sector and the developer-community. This shows how freely available AI can be combined into autonomous weapon systems. Based on the use cases, we deduce points of control and further measures to prevent the potential threat through misused AI. Further, we promote the consideration of malicious misuse of civilian AI systems in the discussion on autonomous weapon systems (AWS).
Related papers
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework that lets laypeople speculate about and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Killer Apps: Low-Speed, Large-Scale AI Weapons [2.2899177316144943]
Artificial Intelligence (AI) and Machine Learning (ML) advancements present new challenges and opportunities in warfare and security.
This paper explores the concept of AI weapons, their deployment, detection, and potential countermeasures.
arXiv Detail & Related papers (2024-01-14T12:09:40Z) - A Red Teaming Framework for Securing AI in Maritime Autonomous Systems [0.0]
We propose one of the first red team frameworks for evaluating the AI security of maritime autonomous systems.
This framework is a multi-part checklist, which can be tailored to different systems and requirements.
We demonstrate that a red team can use this framework effectively to uncover numerous vulnerabilities within the AI of a real-world maritime autonomous system.
arXiv Detail & Related papers (2023-12-08T14:59:07Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - AI Deception: A Survey of Examples, Risks, and Potential Solutions [20.84424818447696]
This paper argues that a range of current AI systems have learned how to deceive humans.
We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth.
arXiv Detail & Related papers (2023-08-28T17:59:35Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS)
Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow users to examine and test AI system predictions, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Structured access to AI capabilities: an emerging paradigm for safe AI
deployment [0.0]
Instead of openly disseminating AI systems, developers facilitate controlled, arm's length interactions with their AI systems.
The aim is to prevent dangerous AI capabilities from being widely accessible, whilst preserving access to AI capabilities that can be used safely.
arXiv Detail & Related papers (2022-01-13T19:30:16Z) - AI Failures: A Review of Underlying Issues [0.0]
We focus on AI failures that stem from flaws in conceptualization, design, and deployment.
We find that AI systems fail because of errors of omission and commission in their design.
An AI system is quite likely to fail in situations where, in effect, it is called upon to deliver moral judgments.
arXiv Detail & Related papers (2020-07-18T15:31:29Z)