The Language Labyrinth: Constructive Critique on the Terminology Used in
the AI Discourse
- URL: http://arxiv.org/abs/2307.10292v1
- Date: Tue, 18 Jul 2023 14:32:21 GMT
- Title: The Language Labyrinth: Constructive Critique on the Terminology Used in
the AI Discourse
- Authors: Rainer Rehak
- Abstract summary: This paper claims that AI debates are still characterised by a lack of critical distance to metaphors like 'training', 'learning' or 'deciding'.
As a consequence, reflections regarding responsibility or potential use-cases are greatly distorted.
It is a conceptual work at the intersection of critical computer science and philosophy of language.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the interdisciplinary field of artificial intelligence (AI), the
problem of clear terminology is especially momentous. This paper claims that AI
debates are still characterised by a lack of critical distance to metaphors
like 'training', 'learning' or 'deciding'. As a consequence, reflections
regarding responsibility or potential use-cases are greatly distorted. Yet, if
relevant decision-makers are convinced that AI can develop an 'understanding'
of or properly 'interpret' issues, its regular use for sensitive tasks like
deciding about social benefits or judging court cases looms. The chapter argues
its claim by analysing central notions of the AI debate and contributes by
proposing more fitting terminology, thereby enabling more fruitful debates. It
is a conceptual work at the intersection of critical computer science and
philosophy of language.
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Position Paper: An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience [4.524832437237367]
Inner Interpretability is a promising field tasked with uncovering the inner mechanisms of AI systems.
Recent critiques raise issues that question its usefulness to advance the broader goals of AI.
Here we draw the relevant connections and highlight lessons that can be transferred productively between fields.
arXiv Detail & Related papers (2024-06-03T14:16:56Z)
- Contestable AI needs Computational Argumentation [15.15970495693702]
State-of-the-art approaches predominantly neglect the need for AI systems to be contestable.
We argue that contestable AI requires dynamic (human-machine and/or machine-machine) explainability and decision-making processes.
arXiv Detail & Related papers (2024-05-17T12:23:18Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Expanding the Set of Pragmatic Considerations in Conversational AI [0.26206189324400636]
We discuss several pragmatic limitations of current conversational AI systems.
We label our complaints as "Turing Test Triggers" (TTTs).
We develop a taxonomy of pragmatic considerations intended to identify what pragmatic competencies a conversational AI system requires.
arXiv Detail & Related papers (2023-10-27T19:21:50Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- Discourse over Discourse: The Need for an Expanded Pragmatic Focus in Conversational AI [0.5884031187931463]
We discuss several challenges in both summarization of conversations and other conversational AI applications.
We illustrate the importance of pragmatics with so-called star sentences.
Because the baseline for quality of AI is indistinguishability from human behavior, we label our complaints as "Turing Test Triggers".
arXiv Detail & Related papers (2023-04-27T21:51:42Z)
- The Debate Over Understanding in AI's Large Language Models [0.18275108630751835]
We survey a current, heated debate in the AI research community on whether large pre-trained language models can be said to "understand" language.
We argue that a new science of intelligence can be developed that will provide insight into distinct modes of understanding.
arXiv Detail & Related papers (2022-10-14T17:04:29Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.