Epistemic considerations when AI answers questions for us
- URL: http://arxiv.org/abs/2304.14352v1
- Date: Sun, 23 Apr 2023 08:26:42 GMT
- Title: Epistemic considerations when AI answers questions for us
- Authors: Johan F. Hoorn and Juliet J.-Y. Chen
- Abstract summary: We argue that careless reliance on AI to answer our questions and to judge our output is a violation of Grice's Maxim of Quality and Lemoine's legal Maxim of Innocence.
What is missing in the focus on output and results of AI-generated and AI-evaluated content is, apart from paying proper tribute, the demand to follow a person's thought process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this position paper, we argue that careless reliance on AI to answer our
questions and to judge our output is a violation of Grice's Maxim of Quality as
well as a violation of Lemoine's legal Maxim of Innocence, performing an
(unwarranted) authority fallacy, and while lacking assessment signals,
committing Type II errors that result from fallacies of the inverse. What is
missing in the focus on output and results of AI-generated and AI-evaluated
content is, apart from paying proper tribute, the demand to follow a person's
thought process (or a machine's decision processes). In deliberately avoiding
Neural Networks that cannot explain how they come to their conclusions, we
introduce logic-symbolic inference to handle any possible epistemics any human
or artificial information processor may have. Our system can deal with various
belief systems and shows how decisions may differ for what is true, false,
realistic, unrealistic, literal, or anomalous. As is, state-of-the-art AI such
as ChatGPT is a sorcerer's apprentice.
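To make the belief-relative evaluation concrete, here is a minimal, hypothetical sketch in Python. It is not the authors' system; the belief bases, facts, and label wordings are invented for illustration and cover only a subset of the verdicts the abstract mentions (true, false, realistic, anomalous). It shows how a simple logic-symbolic evaluator could return different epistemic verdicts for the same claim under different belief systems:

```python
# Toy sketch (not the paper's actual system): a logic-symbolic evaluator that
# labels a claim relative to an agent's belief base. All facts are invented.

from dataclasses import dataclass, field


@dataclass
class BeliefBase:
    """One agent's belief system: facts held true, facts held false,
    and claims the agent merely considers plausible."""
    held_true: set = field(default_factory=set)
    held_false: set = field(default_factory=set)
    plausible: set = field(default_factory=set)

    def evaluate(self, claim: str) -> str:
        """Return an epistemic label for `claim` relative to this belief base."""
        # "not <claim>" is a toy stand-in for real logical negation.
        negation = claim[4:] if claim.startswith("not ") else f"not {claim}"
        if claim in self.held_true:
            return "true (within this belief system)"
        if claim in self.held_false or negation in self.held_true:
            return "false (contradicts this belief system)"
        if claim in self.plausible:
            return "realistic (consistent, but not established)"
        return "anomalous (no grounding in this belief system)"


# Two agents with different belief systems reach different verdicts about the
# same claims: the abstract's point that decisions may differ across human or
# artificial information processors.
scientist = BeliefBase(held_true={"water boils at 100C at sea level"},
                       plausible={"it will rain tomorrow"})
storyteller = BeliefBase(held_true={"dragons breathe fire"})

for name, agent in [("scientist", scientist), ("storyteller", storyteller)]:
    for claim in ["water boils at 100C at sea level", "dragons breathe fire"]:
        print(f"{name}: {claim!r} -> {agent.evaluate(claim)}")
```

Running the sketch, "dragons breathe fire" comes out anomalous for the scientist but true for the storyteller, the kind of belief-relative divergence the abstract attributes to differing human or artificial information processors.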
Related papers
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
CRP asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems [0.9542023122304099]
We argue that in order to fully understand a decision, not only knowledge about relevant features is needed, but that the awareness of irrelevant information also highly contributes to the creation of a user's mental model of an AI system.
Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered.
We show that alterfactual explanations are suited to convey an understanding of different aspects of the AI's reasoning than established counterfactual explanation methods.
arXiv Detail & Related papers (2022-07-19T16:20:37Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Some Critical and Ethical Perspectives on the Empirical Turn of AI Interpretability [0.0]
We consider two issues currently faced by Artificial Intelligence development: the lack of ethics and interpretability of AI decisions.
We experimentally show that the empirical and liberal turn of the production of explanations tends to select AI explanations with a low denunciatory power.
We propose two scenarios for the future development of ethical AI: more external regulation or more liberalization of AI explanations.
arXiv Detail & Related papers (2021-09-20T14:41:50Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Playing the Blame Game with Robots [0.0]
We find that people are willing to ascribe moral blame to AI systems in contexts of recklessness.
The higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.
arXiv Detail & Related papers (2021-02-08T20:53:42Z)
- Towards AI Forensics: Did the Artificial Intelligence System Do It? [2.5991265608180396]
We focus on AI that is potentially "malicious by design" and on grey-box analysis.
Our evaluation using convolutional neural networks illustrates challenges and ideas for identifying malicious AI.
arXiv Detail & Related papers (2020-05-27T20:28:19Z)
- Self-explaining AI as an alternative to interpretable AI [0.0]
Double descent indicates that deep neural networks operate by smoothly interpolating between data points.
Neural networks trained on complex real world data are inherently hard to interpret and prone to failure if asked to extrapolate.
Self-explaining AIs are capable of providing a human-understandable explanation along with confidence levels for both the decision and explanation.
arXiv Detail & Related papers (2020-02-12T18:50:11Z)
- Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making [48.66982301902923]
We examined the effect of feedback from false AI on moral decision-making about donor kidney allocation.
We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback on participants' own decision-making that they perceived to come from an AI.
arXiv Detail & Related papers (2020-01-13T14:15:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.