React to This (RTT): A Nonverbal Turing Test for Embodied AI
- URL: http://arxiv.org/abs/2507.10812v1
- Date: Mon, 14 Jul 2025 21:16:12 GMT
- Title: React to This (RTT): A Nonverbal Turing Test for Embodied AI
- Authors: Chuxuan Zhang, Yasaman Etesam, Angelica Lim
- Abstract summary: We propose an approach to test embodied AI agents for interaction awareness and believability. We introduce the React to This (RTT) test for nonverbal behaviors, presenting results from an initial experiment.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an approach to test embodied AI agents for interaction awareness and believability, particularly in scenarios where humans push them to their limits. Turing introduced the Imitation Game as a way to explore the question: "Can machines think?" The Total Turing Test later expanded this concept beyond purely verbal communication, incorporating perceptual and physical interaction. Building on this, we propose a new guiding question: "Can machines react?" and introduce the React to This (RTT) test for nonverbal behaviors, presenting results from an initial experiment.
Related papers
- Playpen: An Environment for Exploring Learning Through Conversational Interaction [81.67330926729015]
We investigate whether Dialogue Games can also serve as a source of feedback signals for learning. We introduce Playpen, an environment for off- and online learning through Dialogue Game self-play. We find that imitation learning through SFT improves performance on unseen instances, but negatively impacts other skills.
arXiv Detail & Related papers (2025-04-11T14:49:33Z)
- HERO: Human Reaction Generation from Videos [54.602947113980655]
HERO is a framework for Human rEaction geneRation from videOs. HERO considers both global and frame-level local representations of the video to extract the interaction intention. Local visual representations are continuously injected into the model to maximize the exploitation of the dynamic properties inherent in videos.
arXiv Detail & Related papers (2025-03-11T10:39:32Z)
- Talking Turns: Benchmarking Audio Foundation Models on Turn-Taking Dynamics [54.03209351287654]
We propose a novel evaluation protocol that can assess spoken dialogue systems' turn-taking capabilities. We present the first comprehensive user study that evaluates existing spoken dialogue systems on their ability to perform turn-taking events. We will open-source our evaluation platform to promote the development of advanced conversational AI systems.
arXiv Detail & Related papers (2025-03-03T04:46:04Z)
- Beyond Turn-taking: Introducing Text-based Overlap into Human-LLM Interactions [16.854609012936155]
We propose a novel approach that incorporates overlapping messages, mirroring natural human conversations. Our user study revealed that OverlapBot was perceived as more communicative and immersive than traditional turn-taking chatbots. We provide recommendations for implementing overlap-capable AI interactions to enhance the fluidity and engagement of text-based conversations.
arXiv Detail & Related papers (2025-01-30T03:01:01Z)
- X-TURING: Towards an Enhanced and Efficient Turing Test for Long-Term Dialogue Agents [56.64615470513102]
The Turing test examines whether AIs exhibit human-like behaviour in natural language conversations. The traditional setting limits each participant to one message at a time and requires constant human participation. This paper proposes X-Turing, which enhances the original test with a burst dialogue pattern.
arXiv Detail & Related papers (2024-08-19T09:57:28Z)
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting Interaction (MPI).
The experimental results demonstrate that MPI exhibits remarkable improvements of 10% to 64% over the previous state of the art on real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- Can I say, now machines can think? [0.0]
We analyzed and explored the capabilities of artificial intelligence-enabled machines.
The Turing Test is a critical means of evaluating machines' abilities.
There are other aspects of intelligence too, and AI machines exhibit most of these aspects.
arXiv Detail & Related papers (2023-07-11T11:44:09Z)
- The Human-or-Machine Matter: Turing-Inspired Reflections on an Everyday Issue [4.309879785418976]
We sidestep the question of whether a machine can be labeled intelligent, or can be said to match human capabilities in a given context.
We first draw attention to the seemingly simpler question a person may ask themselves in an everyday interaction: "Am I interacting with a human or with a machine?"
arXiv Detail & Related papers (2023-05-07T15:41:11Z)
- e-Inu: Simulating A Quadruped Robot With Emotional Sentience [4.15623340386296]
This paper discusses the design and virtual simulation of a quadruped robot capable of detecting and understanding human emotions.
We use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions.
The video emotion detection system produced results that are almost on par with the state of the art, with an accuracy of 99.66%.
arXiv Detail & Related papers (2023-01-03T06:28:45Z)
- Can Machines Imitate Humans? Integrative Turing Tests for Vision and Language Demonstrate a Narrowing Gap [45.6806234490428]
We benchmark current AIs in their abilities to imitate humans in three language tasks and three vision tasks.
Experiments involved 549 human agents plus 26 AI agents for dataset creation, and 1,126 human judges plus 10 AI judges.
Results reveal that current AIs are not far from being able to impersonate humans in complex language and vision challenges.
arXiv Detail & Related papers (2022-11-23T16:16:52Z)
- The Meta-Turing Test [17.68987003293372]
We propose an alternative to the Turing test that removes the inherent asymmetry between humans and machines.
In this new test, both humans and machines judge each other.
arXiv Detail & Related papers (2022-05-11T04:54:14Z)
- EmpBot: A T5-based Empathetic Chatbot focusing on Sentiments [75.11753644302385]
Empathetic conversational agents should not only understand what is being discussed, but also acknowledge the implied feelings of the conversation partner.
We propose a method based on a transformer pretrained language model (T5).
We evaluate our model on the EmpatheticDialogues dataset using both automated metrics and human evaluation.
arXiv Detail & Related papers (2021-10-30T19:04:48Z)
- CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically across multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were finetuned by deep reinforcement learning.
To support empathetic responses, we develop a simulating agent, the Conceptual Human Model, which aids CheerBots during training by accounting for future changes in the user's emotional states to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.