When AI companions become witty: Can human brain recognize AI-generated irony?
- URL: http://arxiv.org/abs/2510.17168v1
- Date: Mon, 20 Oct 2025 05:15:00 GMT
- Title: When AI companions become witty: Can human brain recognize AI-generated irony?
- Authors: Xiaohui Rao, Hanlin Wu, Zhenguang G. Cai
- Abstract summary: This study investigates whether people adopt the intentional stance (attributing mental states to explain behavior) toward AI during irony comprehension. We compared behavioral and neural responses to ironic statements from AI versus human sources. Results demonstrate that people do not fully adopt the intentional stance toward AI-generated irony.
- Score: 2.859021383061256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As Large Language Models (LLMs) are increasingly deployed as social agents and trained to produce humor and irony, a question emerges: when encountering witty AI remarks, do people interpret these as intentional communication or as mere computational output? This study investigates whether people adopt the intentional stance (attributing mental states to explain behavior) toward AI during irony comprehension. Irony provides an ideal paradigm because it requires distinguishing intentional contradictions from unintended errors through effortful semantic reanalysis. We compared behavioral and neural responses to ironic statements from AI versus human sources using established ERP components: the P200, reflecting early incongruity detection, and the P600, indexing the cognitive effort of reinterpreting incongruity as deliberate irony. Results demonstrate that people do not fully adopt the intentional stance toward AI-generated irony. Behaviorally, participants attributed incongruity to deliberate communication for both sources, though significantly less for AI than for human sources, showing a greater tendency to interpret AI incongruities as computational errors. Neural data revealed attenuated P200 and P600 effects for AI-generated irony, suggesting reduced effortful detection and reanalysis, consistent with diminished attribution of communicative intent. Notably, people who perceived AI as more sincere showed larger P200 and P600 effects for AI-generated irony, suggesting that adoption of the intentional stance is calibrated by specific mental models of artificial agents. These findings reveal that source attribution shapes the neural processing of social-communicative phenomena. Despite current LLMs' linguistic sophistication, achieving genuine social agency requires more than linguistic competence; it necessitates a shift in how humans perceive and attribute intentionality to artificial agents.
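The core analysis the abstract describes, comparing mean ERP amplitudes in the P200 and P600 latency windows across AI and human source conditions, can be illustrated with a short sketch. The snippet below is not the authors' pipeline: it simulates single-channel, per-participant ERPs and runs paired t-tests on windowed mean amplitudes. The sampling rate, the latency windows (150-250 ms for the P200, 500-800 ms for the P600), and the effect magnitudes are all illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: comparing ERP mean amplitudes (P200, P600) between
# AI- and human-attributed irony conditions. Single-channel data are
# simulated; windows, sampling rate, and amplitudes are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sfreq = 250                                # assumed sampling rate (Hz)
times = np.arange(-0.2, 1.0, 1 / sfreq)    # epoch: -200 ms to 1000 ms
n_participants = 30

def simulate_erp(p200_amp, p600_amp):
    """Simulate per-participant ERPs as Gaussian deflections plus noise."""
    p200 = p200_amp * np.exp(-((times - 0.20) ** 2) / (2 * 0.03 ** 2))
    p600 = p600_amp * np.exp(-((times - 0.65) ** 2) / (2 * 0.10 ** 2))
    noise = rng.normal(0, 0.5, size=(n_participants, times.size))
    return p200 + p600 + noise             # shape: (participants, samples)

# Attenuated components for the AI source, as the abstract reports.
human = simulate_erp(p200_amp=3.0, p600_amp=4.0)
ai = simulate_erp(p200_amp=2.0, p600_amp=2.5)

def window_mean(data, lo, hi):
    """Mean amplitude in a latency window (seconds), per participant."""
    mask = (times >= lo) & (times <= hi)
    return data[:, mask].mean(axis=1)

for name, (lo, hi) in {"P200": (0.15, 0.25), "P600": (0.50, 0.80)}.items():
    t, p = stats.ttest_rel(window_mean(human, lo, hi),
                           window_mean(ai, lo, hi))
    print(f"{name}: human vs AI paired t = {t:.2f}, p = {p:.4f}")
```

In a real EEG workflow, these windowed means would be computed from baseline-corrected, artifact-rejected epochs (for example, via a toolbox such as MNE-Python) rather than from simulated arrays; the comparison logic, however, follows this same window-and-test pattern.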
Related papers
- Explainable AI as a Double-Edged Sword in Dermatology: The Impact on Clinicians versus The Public [46.86429592892395]
Explainable AI (XAI) addresses this by providing insight into AI decision-making. We present results from two large-scale experiments combining a fairness-based diagnostic AI model with different XAI explanations.
arXiv Detail & Related papers (2025-12-14T00:06:06Z)
- A perceptual bias of AI Logical Argumentation Ability in Writing [3.1238547837436115]
The ability to reason logically like humans is often used as a criterion to assess whether a machine can think. This study explores whether human biases influence evaluations of the reasoning abilities of AI.
arXiv Detail & Related papers (2025-11-27T06:39:11Z)
- Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence [31.666988490509237]
We show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. We find that models are highly sycophantic, affirming users' actions 50% more often than humans do. Participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again.
arXiv Detail & Related papers (2025-10-01T19:26:01Z)
- A funny companion: Distinct neural responses to perceived AI- versus human-generated humor [2.859021383061256]
This study used electroencephalography (EEG) to compare how people process humor from AI versus human sources. Results showed that AI humor elicited a smaller N400 effect, suggesting reduced cognitive effort during the processing of incongruity. These findings indicate that the brain responds to AI humor with surprisingly positive and intense reactions.
arXiv Detail & Related papers (2025-09-13T15:05:57Z)
- Hallucination vs interpretation: rethinking accuracy and precision in AI-assisted data extraction for knowledge synthesis [0.9898534984111934]
We developed an extraction platform using large language models (LLMs) to automate data extraction. We compared AI and human responses across 187 publications and 17 extraction questions from a published scoping review. Findings suggest that AI variability depends more on interpretation than on hallucination.
arXiv Detail & Related papers (2025-08-13T03:33:30Z)
- Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing [55.2480439325792]
This study systematically evaluates twelve state-of-the-art AI-text detectors using our AI-Polished-Text Evaluation dataset. Our findings reveal that detectors frequently flag even minimally polished text as AI-generated, struggle to differentiate between degrees of AI involvement, and exhibit biases against older and smaller models.
arXiv Detail & Related papers (2025-02-21T18:45:37Z)
- Human Bias in the Face of AI: Examining Human Judgment Against Text Labeled as AI Generated [48.70176791365903]
This study explores how bias shapes the perception of AI- versus human-generated content. We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
The Consistent Reasoning Paradox (CRP) asserts that consistent reasoning implies fallibility; in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies to examine how people react and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Can Machines Imitate Humans? Integrative Turing-like tests for Language and Vision Demonstrate a Narrowing Gap [56.611702960809644]
We benchmarked AI's ability to imitate humans in three language tasks and three vision tasks. We then conducted 72,191 Turing-like tests with 1,916 human judges and 10 AI judges. Imitation ability showed minimal correlation with conventional AI performance metrics.
arXiv Detail & Related papers (2022-11-23T16:16:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.