Human Heuristics for AI-Generated Language Are Flawed
- URL: http://arxiv.org/abs/2206.07271v3
- Date: Mon, 7 Nov 2022 09:47:29 GMT
- Title: Human Heuristics for AI-Generated Language Are Flawed
- Authors: Maurice Jakesch, Jeffrey Hancock, Mor Naaman
- Abstract summary: We study how humans discern whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI.
We experimentally demonstrate that intuitive but flawed heuristics make human judgment of AI-generated language predictable and manipulable.
We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human communication is increasingly intermixed with language generated by AI.
Across chat, email, and social media, AI systems produce smart replies,
autocompletes, and translations. AI-generated language is often not identified
as such but presented as language written by humans, raising concerns about
novel forms of deception and manipulation. Here, we study how humans discern
whether verbal self-presentations, one of the most personal and consequential
forms of language, were generated by AI. In six experiments, participants (N =
4,600) were unable to detect self-presentations generated by state-of-the-art
AI language models in professional, hospitality, and dating contexts. A
computational analysis of language features shows that human judgments of
AI-generated language are handicapped by intuitive but flawed heuristics such
as associating first-person pronouns, spontaneous wording, or family topics
with human-written language. We experimentally demonstrate that these
heuristics make human judgment of AI-generated language predictable and
manipulable, allowing AI systems to produce language perceived as more human
than human. We discuss solutions, such as AI accents, to reduce the deceptive
potential of language generated by AI, limiting the subversion of human
intuition.
Related papers
- Generative AI, Pragmatics, and Authenticity in Second Language Learning [0.0]
There are obvious benefits to integrating generative AI (artificial intelligence) into language learning and teaching.
However, because of how AI systems understand human language, they lack the lived experience needed to use language with the same social awareness as humans.
They have built-in linguistic and cultural biases based on their training data, which is mostly in English and predominantly from Western sources.
arXiv Detail & Related papers (2024-10-18T11:58:03Z)
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- Empirical evidence of Large Language Model's influence on human spoken communication [25.09136621615789]
Artificial Intelligence (AI) agents now interact with billions of humans in natural language.
This raises the question of whether AI has the potential to shape a fundamental aspect of human culture: the way we speak.
Recent analyses revealed that scientific publications already exhibit evidence of AI-specific language.
arXiv Detail & Related papers (2024-09-03T10:01:51Z)
- Distributed agency in second language learning and teaching through generative AI [0.0]
ChatGPT can provide informal second language practice through chats in written or voice forms.
Instructors can use AI to build learning and assessment materials in a variety of media.
arXiv Detail & Related papers (2024-03-29T14:55:40Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Can Machines Imitate Humans? Integrative Turing Tests for Vision and Language Demonstrate a Narrowing Gap [45.6806234490428]
We benchmark current AIs in their abilities to imitate humans in three language tasks and three vision tasks.
Experiments involved 549 human agents plus 26 AI agents for dataset creation, and 1,126 human judges plus 10 AI judges.
Results reveal that current AIs are not far from being able to impersonate humans in complex language and vision challenges.
arXiv Detail & Related papers (2022-11-23T16:16:52Z)
- What Artificial Neural Networks Can Tell Us About Human Language Acquisition [47.761188531404066]
Rapid progress in machine learning for natural language processing has the potential to transform debates about how humans learn language.
To increase the relevance of learnability results from computational models, we need to train model learners without significant advantages over humans.
arXiv Detail & Related papers (2022-08-17T00:12:37Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Human Evaluation of Interpretability: The Case of AI-Generated Music Knowledge [19.508678969335882]
We focus on evaluating AI-discovered knowledge/rules in the arts and humanities.
We present an experimental procedure to collect and assess human-generated verbal interpretations of AI-generated music theory/rules rendered as sophisticated symbolic/numeric objects.
arXiv Detail & Related papers (2020-04-15T06:03:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.