Human or Machine? A Preliminary Turing Test for Speech-to-Speech Interaction
- URL: http://arxiv.org/abs/2602.24080v2
- Date: Mon, 02 Mar 2026 08:18:24 GMT
- Title: Human or Machine? A Preliminary Turing Test for Speech-to-Speech Interaction
- Authors: Xiang Li, Jiabao Gao, Sipei Lin, Xuan Zhou, Chi Zhang, Bo Cheng, Jiale Han, Benyou Wang
- Abstract summary: We conduct the first Turing test for S2S systems, collecting 2,968 human judgments on dialogues between 9 state-of-the-art S2S systems and 28 human participants. No existing evaluated S2S system passes the test, revealing a significant gap in human-likeness. We develop a fine-grained taxonomy of 18 human-likeness dimensions and crowd-annotate our collected dialogues accordingly.
- Score: 32.28977425466535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The pursuit of human-like conversational agents has long been guided by the Turing test. For modern speech-to-speech (S2S) systems, a critical yet unanswered question is whether they can converse like humans. To tackle this, we conduct the first Turing test for S2S systems, collecting 2,968 human judgments on dialogues between 9 state-of-the-art S2S systems and 28 human participants. Our results deliver a clear finding: no existing evaluated S2S system passes the test, revealing a significant gap in human-likeness. To diagnose this failure, we develop a fine-grained taxonomy of 18 human-likeness dimensions and crowd-annotate our collected dialogues accordingly. Our analysis shows that the bottleneck is not semantic understanding but stems from paralinguistic features, emotional expressivity, and conversational persona. Furthermore, we find that off-the-shelf AI models perform unreliably as Turing test judges. In response, we propose an interpretable model that leverages the fine-grained human-likeness ratings and delivers accurate and transparent human-vs-machine discrimination, offering a powerful tool for automatic human-likeness evaluation. Our work establishes the first human-likeness evaluation for S2S systems and moves beyond binary outcomes to enable detailed diagnostic insights, paving the way for human-like improvements in conversational AI systems.
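The abstract's proposed "interpretable model" maps the 18 fine-grained human-likeness ratings to a human-vs-machine decision. The paper does not specify the model class here, so the following is a minimal illustrative sketch (not the authors' actual model), assuming crowd ratings per dialogue on 18 dimensions and using a logistic regression, whose per-dimension weights make the discrimination transparent. All data below is synthetic stand-in data.

```python
# Illustrative sketch only: an interpretable human-vs-machine discriminator
# over 18 human-likeness dimension ratings, via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_DIMS = 18  # fine-grained human-likeness dimensions from the taxonomy

# Synthetic stand-in ratings (1-5 scale-like values): per the paper's finding,
# machines lag mainly on paralinguistic/emotional dimensions, so we give
# machine dialogues lower means. Real inputs would be crowd annotations.
human_ratings = rng.normal(loc=4.0, scale=0.6, size=(200, N_DIMS))
machine_ratings = rng.normal(loc=3.0, scale=0.6, size=(200, N_DIMS))
X = np.vstack([human_ratings, machine_ratings])
y = np.array([1] * 200 + [0] * 200)  # 1 = human, 0 = machine

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Interpretability: each coefficient shows how strongly a dimension
# pushes the decision toward "human", so the verdict can be explained.
top = np.argsort(-np.abs(clf.coef_[0]))[:3]
print("most discriminative dimensions:", top)
print("training accuracy:", clf.score(X, y))
```

The appeal of a linear model here is that the human/machine verdict decomposes into per-dimension contributions, matching the abstract's goal of "accurate and transparent" discrimination rather than a black-box judge.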
Related papers
- Stephanie2: Thinking, Waiting, and Making Decisions Like Humans in Step-by-Step AI Social Chat [60.51107098103245]
Stephanie2 is a novel next-generation step-wise decision-making dialogue agent. With active waiting and message-pace adaptation, Stephanie2 explicitly decides at each step whether to send or wait. Experiments show that Stephanie2 clearly outperforms Stephanie1 on metrics such as naturalness and engagement.
arXiv Detail & Related papers (2026-01-09T09:27:17Z) - The ICASSP 2026 HumDial Challenge: Benchmarking Human-like Spoken Dialogue Systems in the LLM Era [95.35748535806744]
We launch the first Human-like Spoken Dialogue Systems Challenge (HumDial) at ICASSP 2026. This paper summarizes the dataset, track configurations, and the final results.
arXiv Detail & Related papers (2026-01-09T06:32:30Z) - ESPnet-SDS: Unified Toolkit and Demo for Spoken Dialogue Systems [57.806797579986075]
We introduce an open-source, user-friendly toolkit to build unified web interfaces for various cascaded and E2E spoken dialogue systems. Using the evaluation metrics, we compare various cascaded and E2E spoken dialogue systems with a human-human conversation dataset as a proxy. Our analysis demonstrates that the toolkit allows researchers to effortlessly compare and contrast different technologies.
arXiv Detail & Related papers (2025-03-11T15:24:02Z) - Pragmatic Embodied Spoken Instruction Following in Human-Robot Collaboration with Theory of Mind [51.45478233267092]
We present a cognitively inspired neurosymbolic model, Spoken Instruction Following through Theory of Mind (SIFToM). SIFToM uses a Vision-Language Model with model-based mental inference to enable robots to pragmatically follow human instructions under diverse speech conditions. Results show that SIFToM can significantly improve the performance of a lightweight base VLM (Gemini 2.5 Flash), outperforming state-of-the-art VLMs (Gemini 2.5 Pro) and approaching human-level accuracy on challenging spoken instruction following tasks.
arXiv Detail & Related papers (2024-09-17T02:36:10Z) - People cannot distinguish GPT-4 from a human in a Turing test [0.913127392774573]
GPT-4 was judged to be a human 54% of the time, outperforming ELIZA (22%) but lagging behind actual humans (67%).
Results have implications for debates around machine intelligence and, more urgently, suggest that deception by current AI systems may go undetected.
arXiv Detail & Related papers (2024-05-09T04:14:09Z) - A Vector Quantized Approach for Text to Speech Synthesis on Real-World Spontaneous Speech [94.64927912924087]
We train TTS systems using real-world speech from YouTube and podcasts.
The recent Text-to-Speech architecture is designed for multiple code generation and monotonic alignment. We show that this architecture outperforms existing TTS systems in several objective and subjective measures.
arXiv Detail & Related papers (2023-02-08T17:34:32Z) - Can Machines Imitate Humans? Integrative Turing-like tests for Language and Vision Demonstrate a Narrowing Gap [56.611702960809644]
We benchmark AI's ability to imitate humans in three language tasks and three vision tasks. Next, we conducted 72,191 Turing-like tests with 1,916 human judges and 10 AI judges. Imitation ability showed minimal correlation with conventional AI performance metrics.
arXiv Detail & Related papers (2022-11-23T16:16:52Z) - Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems [64.10696852552103]
Highly anthropomorphic responses might make users uncomfortable or implicitly deceive them into thinking they are interacting with a human.
We collect human ratings on the feasibility of approximately 900 two-turn dialogs sampled from 9 diverse data sources.
arXiv Detail & Related papers (2022-10-22T12:10:44Z) - The R-U-A-Robot Dataset: Helping Avoid Chatbot Deception by Detecting User Questions About Human or Non-Human Identity [41.43519695929595]
We aim to understand how system designers might allow their systems to confirm their non-human identity.
We collect over 2,500 phrasings related to the intent of "Are you a robot?"
We compare classifiers to recognize the intent and discuss the precision/recall and model complexity tradeoffs.
arXiv Detail & Related papers (2021-06-04T20:04:33Z) - Can you hear me $\textit{now}$? Sensitive comparisons of human and machine perception [3.8580784887142774]
We explore how this asymmetry can cause comparisons to misestimate the overlap in human and machine perception.
In five experiments, we adapt task designs from the human psychophysics literature to show that even when subjects cannot freely transcribe such speech commands, they often can demonstrate other forms of understanding.
We recommend the adoption of such "sensitive tests" when comparing human and machine perception.
arXiv Detail & Related papers (2020-03-27T16:24:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.