The Language Labyrinth: Constructive Critique on the Terminology Used in
the AI Discourse
- URL: http://arxiv.org/abs/2307.10292v1
- Date: Tue, 18 Jul 2023 14:32:21 GMT
- Title: The Language Labyrinth: Constructive Critique on the Terminology Used in
the AI Discourse
- Authors: Rainer Rehak
- Abstract summary: This paper claims that AI debates are still characterised by a lack of critical distance to metaphors like 'training', 'learning' or 'deciding'.
As a consequence, reflections regarding responsibility or potential use cases are greatly distorted.
It is a conceptual work at the intersection of critical computer science and philosophy of language.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the interdisciplinary field of artificial intelligence (AI), the
problem of clear terminology is especially momentous. This paper claims that AI
debates are still characterised by a lack of critical distance to metaphors
like 'training', 'learning' or 'deciding'. As a consequence, reflections
regarding responsibility or potential use cases are greatly distorted. Yet, if
relevant decision-makers are convinced that AI can develop an 'understanding'
or properly 'interpret' issues, its regular use for sensitive tasks like
deciding about social benefits or judging court cases looms. The chapter argues
its claim by analysing central notions of the AI debate and contributes by
proposing more fitting terminology, thereby enabling more fruitful debates. It
is a conceptual work at the intersection of critical computer science and
philosophy of language.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Do great minds think alike? Investigating Human-AI Complementarity in Question Answering with CAIMIRA [43.116608441891096]
Humans outperform AI systems in knowledge-grounded abductive and conceptual reasoning.
State-of-the-art LLMs like GPT-4 and LLaMA show superior performance on targeted information retrieval.
arXiv Detail & Related papers (2024-10-09T03:53:26Z)
- AI Thinking: A framework for rethinking artificial intelligence in practice [2.9805831933488127]
A growing range of disciplines are now involved in studying, developing, and assessing the use of AI in practice.
New, interdisciplinary approaches are needed to bridge competing conceptualisations of AI in practice.
I propose a novel conceptual framework called AI Thinking, which models key decisions and considerations involved in AI use across disciplinary perspectives.
arXiv Detail & Related papers (2024-08-26T04:41:21Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Expanding the Set of Pragmatic Considerations in Conversational AI [0.26206189324400636]
We discuss several pragmatic limitations of current conversational AI systems.
We label our complaints as "Turing Test Triggers" (TTTs).
We develop a taxonomy of pragmatic considerations intended to identify what pragmatic competencies a conversational AI system requires.
arXiv Detail & Related papers (2023-10-27T19:21:50Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- Discourse over Discourse: The Need for an Expanded Pragmatic Focus in Conversational AI [0.5884031187931463]
We discuss several challenges in both summarization of conversations and other conversational AI applications.
We illustrate the importance of pragmatics with so-called star sentences.
Because the baseline for quality of AI is indistinguishability from human behavior, we label our complaints as "Turing Test Triggers".
arXiv Detail & Related papers (2023-04-27T21:51:42Z)
- The Debate Over Understanding in AI's Large Language Models [0.18275108630751835]
We survey a current, heated debate in the AI research community on whether large pre-trained language models can be said to "understand" language.
We argue that a new science of intelligence can be developed that will provide insight into distinct modes of understanding.
arXiv Detail & Related papers (2022-10-14T17:04:29Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.