What Questions Should Robots Be Able to Answer? A Dataset of User Questions for Explainable Robotics
- URL: http://arxiv.org/abs/2510.16435v1
- Date: Sat, 18 Oct 2025 10:16:45 GMT
- Title: What Questions Should Robots Be Able to Answer? A Dataset of User Questions for Explainable Robotics
- Authors: Lennart Wachowiak, Andrew Coles, Gerard Canal, Oya Celiktutan
- Abstract summary: We introduce a dataset of 1,893 user questions for household robots. Most work in explainable robotics focuses on why-questions. We find that users who identify as novices in robotics ask different questions than more experienced users.
- Score: 6.292766967410994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the growing use of large language models and conversational interfaces in human-robot interaction, robots' ability to answer user questions is more important than ever. We therefore introduce a dataset of 1,893 user questions for household robots, collected from 100 participants and organized into 12 categories and 70 subcategories. Most work in explainable robotics focuses on why-questions. In contrast, our dataset provides a wide variety of questions, from questions about simple execution details to questions about how the robot would act in hypothetical scenarios -- thus giving roboticists valuable insights into what questions their robot needs to be able to answer. To collect the dataset, we created 15 video stimuli and 7 text stimuli, depicting robots performing varied household tasks. We then asked participants on Prolific what questions they would want to ask the robot in each portrayed situation. In the final dataset, the most frequent categories are questions about task execution details (22.5%), the robot's capabilities (12.7%), and performance assessments (11.3%). Although questions about how robots would handle potentially difficult scenarios and ensure correct behavior are less frequent, users rank them as the most important for robots to be able to answer. Moreover, we find that users who identify as novices in robotics ask different questions than more experienced users. Novices are more likely to inquire about simple facts, such as what the robot did or the current state of the environment. As robots enter environments shared with humans and language becomes central to instruction and interaction, this dataset provides a valuable foundation for (i) identifying the information robots need to log and expose to conversational interfaces, (ii) benchmarking question-answering modules, and (iii) designing explanation strategies that align with user expectations.
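The category shares reported in the abstract (22.5% execution details, 12.7% capabilities, 11.3% performance assessments) are simple frequency statistics, so they can be recomputed from any labeled copy of such a dataset. A minimal sketch in Python, assuming the data is available as (question, category) pairs; the sample questions and category labels below are illustrative placeholders, not entries from the actual dataset:

```python
from collections import Counter

def category_distribution(questions):
    """Given (question, category) pairs, return each category's share in percent."""
    counts = Counter(category for _, category in questions)
    total = sum(counts.values())
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

# Toy sample mirroring the paper's top-level categories (labels are illustrative).
sample = [
    ("What did you just pick up?", "execution details"),
    ("Can you carry heavy objects?", "capabilities"),
    ("How well did the cleaning go?", "performance assessment"),
    ("What did you do before that?", "execution details"),
]

dist = category_distribution(sample)
```

The same per-category tally, split by a self-reported expertise field, would reproduce the paper's novice-versus-experienced comparison.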
Related papers
- Who Owns The Robot?: Four Ethical and Socio-technical Questions about Wellbeing Robots in the Real World through Community Engagement [9.005689230245432]
We undertake a community-centered investigation to examine three different communities' perspectives on using robotic wellbeing coaches in real-world environments. We conducted workshops with three communities who are under-represented in robotics development. We identify four themes regarding key ethical and socio-technical questions about the real-world use of wellbeing robots.
arXiv Detail & Related papers (2025-09-01T13:38:50Z) - A roadmap for AI in robotics [55.87087746398059]
We are witnessing growing excitement in robotics at the prospect of leveraging the potential of AI to tackle some of the outstanding barriers to the full deployment of robots in our daily lives. This article offers an assessment of what AI for robotics has achieved since the 1990s and proposes a short- and medium-term research roadmap listing challenges and promises.
arXiv Detail & Related papers (2025-07-26T15:18:28Z) - Existential Crisis: A Social Robot's Reason for Being [0.0]
This study aims to investigate how the user perception of robots is influenced by displays of personality. Using LLMs and speech-to-text technology, we designed a within-subject study to compare two conditions.
arXiv Detail & Related papers (2025-01-06T20:30:15Z) - $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks in zero shot after pre-training, follow language instructions from people, and its ability to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z) - Robotics Meets Software Engineering: A First Look at the Robotics Discussions on Stackoverflow [0.0]
This study seeks to identify the challenges encountered by robot developers by analyzing questions posted on StackOverflow.
We created a filtered dataset of 500 robotics-related questions and examined their characteristics.
We identified 11 major themes, with questions about robot movement being the most frequent.
arXiv Detail & Related papers (2024-10-05T23:03:56Z) - Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification. Our approach calibrates object properties by using information from the robot, without relying on data from the object itself. We demonstrate the effectiveness of our method on a low-cost robotic platform.
arXiv Detail & Related papers (2024-10-04T20:48:38Z) - Quriosity: Analyzing Human Questioning Behavior and Causal Inquiry through Curiosity-Driven Queries [92.1651731484397]
We present Quriosity, a collection of 13.5K naturally occurring questions from three diverse sources. Our analysis reveals a significant presence of causal questions (up to 42%) in the dataset.
arXiv Detail & Related papers (2024-05-30T17:55:28Z) - Learning to Summarize and Answer Questions about a Virtual Robot's Past Actions [3.088519122619879]
We demonstrate the task of learning to summarize and answer questions about a robot agent's past actions using natural language alone.
A single system with a large language model at its core is trained to both summarize and answer questions about action sequences given ego-centric video frames of a virtual robot and a question prompt.
arXiv Detail & Related papers (2023-06-16T15:47:24Z) - Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary.
We develop a simple approach, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z) - Semantics for Robotic Mapping, Perception and Interaction: A Survey [93.93587844202534]
The study of semantic understanding concerns what the world "means" to a robot.
With humans and robots increasingly operating in the same world, the prospects of human-robot interaction also bring semantics into the picture.
Driven by need, as well as by enablers like increasing availability of training data and computational resources, semantics is a rapidly growing research area in robotics.
arXiv Detail & Related papers (2021-01-02T12:34:39Z) - Model Elicitation through Direct Questioning [22.907680615911755]
We show how a robot can interact to localize the human model from a set of models.
We show how to generate questions to refine the robot's understanding of the teammate's model.
arXiv Detail & Related papers (2020-11-24T18:17:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.