Towards AI Forensics: Did the Artificial Intelligence System Do It?
- URL: http://arxiv.org/abs/2005.13635v3
- Date: Fri, 11 Aug 2023 10:17:03 GMT
- Authors: Johannes Schneider and Frank Breitinger
- Abstract summary: We focus on AI that is potentially "malicious by design" and on grey-box analysis.
Our evaluation using convolutional neural networks illustrates challenges and ideas for identifying malicious AI.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) makes decisions that impact our daily lives in an increasingly autonomous manner. Its actions might cause accidents, harm, or, more generally, violate regulations. Determining whether an AI caused a specific event and, if so, what triggered the AI's action are key forensic questions. We provide a conceptualization of the problems and strategies for forensic investigation. We focus on AI that is potentially "malicious by design" and on grey-box analysis. Our evaluation using convolutional neural networks illustrates challenges and ideas for identifying malicious AI.
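The abstract ships no code, but to make the idea of a grey-box behavioural probe concrete, here is a minimal, purely illustrative sketch: given query access to a suspect CNN, paste a candidate trigger patch onto held-out inputs and measure how often the prediction flips, a symptom one might expect from a model that is "malicious by design". All names (SuspectCNN, the patch size and position) are assumptions for illustration, not the paper's method.

```python
# Hypothetical grey-box forensic probe: query a suspect CNN with and without
# a candidate trigger patch and measure how often its predictions flip.
# Everything here (SuspectCNN, the patch, its placement) is an illustrative
# assumption, not the method from the paper.
import torch
import torch.nn as nn

class SuspectCNN(nn.Module):
    """Stand-in for the model under investigation (hypothetical)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def trigger_flip_rate(model: nn.Module, images: torch.Tensor,
                      patch: torch.Tensor, top: int = 0, left: int = 0) -> float:
    """Fraction of inputs whose predicted class changes once the candidate
    trigger patch is pasted on. A rate near 1.0 across many unrelated inputs
    is backdoor-like; a rate near the model's ordinary sensitivity to small
    occlusions is consistent with benign behaviour."""
    model.eval()
    with torch.no_grad():
        clean_pred = model(images).argmax(dim=1)
        patched = images.clone()
        h, w = patch.shape[-2:]
        patched[:, :, top:top + h, left:left + w] = patch  # overlay the patch
        patched_pred = model(patched).argmax(dim=1)
    return (clean_pred != patched_pred).float().mean().item()

# Usage: probe 32 surrogate images with a 4x4 all-white candidate trigger.
model = SuspectCNN()
images = torch.rand(32, 3, 32, 32)
patch = torch.ones(3, 4, 4)
print(f"flip rate under candidate trigger: {trigger_flip_rate(model, images, patch):.2f}")
```

A probe like this needs only query access plus the ability to craft inputs, one plausible reading of the grey-box setting; deciding what flip rate counts as evidence of malice is where the challenges the paper points to begin.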
Related papers
- Imagining and building wise machines: The centrality of AI metacognition
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
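One concrete, if simplistic, reading of a metacognitive capability is a model that monitors its own confidence and defers when unsure; the hypothetical sketch below illustrates the idea (the names and the 0.8 threshold are assumptions, not the paper's proposal).

```python
# Hypothetical "metacognitive" wrapper: the system reports its own predictive
# confidence and abstains (defers to a human) when that confidence is low.
# The 0.8 threshold and all names are illustrative assumptions.
import torch
import torch.nn.functional as F

def predict_or_defer(model: torch.nn.Module, x: torch.Tensor,
                     threshold: float = 0.8):
    """Return (class_index, confidence); class_index is None when the
    self-assessed confidence falls below the threshold."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
        confidence, label = probs.max(dim=1)
    if confidence.item() < threshold:
        return None, confidence.item()  # abstain rather than guess
    return label.item(), confidence.item()

# Usage with a stand-in classifier:
dummy = torch.nn.Linear(4, 3)
label, conf = predict_or_defer(dummy, torch.randn(1, 4))
print(f"label={label}, confidence={conf:.2f}")
```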
arXiv Detail & Related papers (2024-11-04)

- AI incidents and 'networked trouble': The case for a research agenda
I argue for a research agenda focused on AI incidents and how they are constructed in online environments.
I take up the example of an AI incident from September 2020, when a Twitter user created a 'horrible experiment' to demonstrate the racist bias of Twitter's algorithm for cropping images.
I argue that AI incidents like this are a significant means of participating in AI systems and warrant further research.
arXiv Detail & Related papers (2024-01-07)

- Seamful XAI: Operationalizing Seamful Design in Explainable AI
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12)

- AI Ethics Issues in Real World: Evidence from AI Incident Database
We identify 13 application areas which often see unethical use of AI, with intelligent service robots, language/vision models and autonomous driving taking the lead.
Ethical issues appear in 8 different forms, from inappropriate use and racial discrimination to physical safety and unfair algorithms.
arXiv Detail & Related papers (2022-06-15)

- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26)

- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
We conduct a mixed-methods study of how two different groups, people with and without an AI background, perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28)

- Trustworthy AI: A Computational Perspective
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12)

- The Threat of Offensive AI to Organizations
This survey explores the threat of offensive AI to organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30)

- Socially Responsible AI Algorithms: Issues, Purposes, and Challenges
Technologists and AI researchers have a responsibility to develop trustworthy AI systems.
To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness.
arXiv Detail & Related papers (2021-01-01)

- AI Failures: A Review of Underlying Issues
We focus on AI failures on account of flaws in conceptualization, design and deployment.
We find that AI systems fail due to errors of omission and commission in their design.
An AI system is quite likely to fail in situations where, in effect, it is called upon to deliver moral judgments.
arXiv Detail & Related papers (2020-07-18)

- Evidence-based explanation to promote fairness in AI systems
People make decisions and usually need to explain those decisions to others in some manner.
To explain decisions made with AI support, people need to understand how the AI contributed to the decision.
We have been exploring an evidence-based explanation design approach to 'tell the story of a decision'.
arXiv Detail & Related papers (2020-03-03)