Enabling a Social Robot to Process Social Cues to Detect when to Help a User
- URL: http://arxiv.org/abs/2110.11075v1
- Date: Mon, 18 Oct 2021 22:45:31 GMT
- Title: Enabling a Social Robot to Process Social Cues to Detect when to Help a User
- Authors: Jason R. Wilson, Phyo Thuta Aung, Isabelle Boucher
- Abstract summary: Social robots need to be able to recognize human needs in real time so that they can provide timely assistance.
We propose an architecture that uses social cues to determine when a robot should provide assistance.
- Score: 0.3867363075280543
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: It is important for socially assistive robots to be able to recognize when a user needs and wants help. Such robots need to recognize human needs in real time so that they can provide timely assistance. We propose an architecture that uses social cues to determine when a robot should provide assistance. Based on a multimodal fusion approach over eye gaze and language modalities, our architecture is trained and evaluated on data collected in a robot-assisted Lego building task. By focusing on social cues, our architecture has minimal dependencies on the specifics of a given task, enabling it to be applied in many different contexts. Enabling a social robot to recognize a user's needs through social cues can help it adapt to user behaviors and preferences, which in turn will lead to improved user experiences.
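The abstract describes the fusion architecture only at a high level. As a rough, hypothetical illustration of the idea, the sketch below late-fuses a gaze feature vector and a language feature vector into a single classifier that predicts whether the user needs help; every feature definition, the toy data, and the classifier choice are assumptions for illustration, not details from the paper.

```python
# A minimal late-fusion sketch: gaze and language features are concatenated
# and fed to one classifier that predicts "user needs help".
# Feature names, dimensions, and the fusion strategy are illustrative
# assumptions, not the architecture from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def gaze_features(gaze_events):
    """Summarize a window of gaze events into a fixed-length vector.
    Hypothetical features: share of gaze at the robot, share at the task,
    and the rate of gaze shifts within the window."""
    total = max(len(gaze_events), 1)
    at_robot = sum(e == "robot" for e in gaze_events) / total
    at_task = sum(e == "task" for e in gaze_events) / total
    shifts = sum(a != b for a, b in zip(gaze_events, gaze_events[1:])) / total
    return np.array([at_robot, at_task, shifts])

def language_features(utterance):
    """Count simple lexical cues of confusion or an explicit help request."""
    text = utterance.lower()
    cues = ["help", "stuck", "how", "what", "hmm"]
    return np.array([text.count(c) for c in cues], dtype=float)

def fuse(gaze_events, utterance):
    """Late fusion: concatenate the per-modality feature vectors."""
    return np.concatenate([gaze_features(gaze_events), language_features(utterance)])

# Toy training windows standing in for annotated task recordings.
X = np.stack([
    fuse(["task", "task", "robot", "robot"], "hmm how does this piece fit"),
    fuse(["task", "task", "task", "task"], "this part is going well"),
    fuse(["robot", "robot", "robot", "task"], "i am stuck can you help"),
    fuse(["task", "task", "task", "robot"], "almost done with this section"),
])
y = np.array([1, 0, 1, 0])  # 1 = user needs assistance

clf = LogisticRegression().fit(X, y)
print(clf.predict([fuse(["robot", "task", "robot", "robot"], "what do i do here")]))
```

In a real pipeline the toy window summaries would be replaced by features from a gaze tracker and a speech recognizer, and the classifier would be trained on annotated recordings of the assistance task.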
Related papers
- Reimagining Social Robots as Recommender Systems: Foundations, Framework, and Applications [10.149175659152474]
Personalization in social robots refers to the ability of the robot to meet the needs and/or preferences of an individual user. Existing approaches fall short of comprehensively capturing user preferences. We propose drawing on recommender systems (RSs), which specialize in modeling user preferences and providing personalized recommendations.
arXiv Detail & Related papers (2026-01-27T16:25:56Z) - Towards Multimodal Social Conversations with Robots: Using Vision-Language Models [0.034530027457861996]
We argue that vision-language models are able to process this wide range of visual information in a sufficiently general manner for autonomous social robots. We describe how to adapt them to this setting, which technical challenges remain, and briefly discuss evaluation practices.
arXiv Detail & Related papers (2025-07-25T12:06:53Z) - Building Knowledge from Interactions: An LLM-Based Architecture for Adaptive Tutoring and Social Reasoning [42.09560737219404]
Large Language Models show promise in human-like communication, but their standalone use is hindered by memory constraints and contextual incoherence.
This work presents a multimodal, cognitively inspired framework that enhances LLM-based autonomous decision-making in social and task-oriented Human-Robot Interaction.
To further enhance autonomy and personalization, we introduce a memory system for selecting, storing and retrieving experiences.
arXiv Detail & Related papers (2025-04-02T10:45:41Z) - Project Report: Requirements for a Social Robot as an Information Provider in the Public Sector [0.0]
We have devised an application scenario for integrating a humanoid social robot into an official environment.
We developed a corresponding robot application and carried out initial tests and evaluations in a project together with the Kiel City Council.
One of the most important insights gained in the project was that a humanoid robot with natural language processing capabilities was much preferred by users.
We propose connecting the ACT-R cognitive architecture with the robot, where an ACT-R model interacts with the robot application to cognitively process and enhance the dialogue between human and robot.
arXiv Detail & Related papers (2024-12-06T13:07:06Z) - Socially Pertinent Robots in Gerontological Healthcare [78.35311825198136]
This paper attempts to partially answer this question via two waves of experiments with patients and companions in a day-care gerontological facility in Paris, using a full-sized humanoid robot endowed with social and conversational interaction capabilities.
Overall, the users are receptive to this technology, especially when the robot's perception and action skills are robust to environmental clutter and flexible enough to handle a plethora of different interactions.
arXiv Detail & Related papers (2024-04-11T08:43:37Z) - HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z) - Developing Social Robots with Empathetic Non-Verbal Cues Using Large Language Models [2.5489046505746704]
We design and label four types of empathetic non-verbal cues, abbreviated as SAFE: Speech, Action (gesture), Facial expression, and Emotion, in a social robot.
Preliminary results show distinct patterns in the robot's responses, such as a preference for calm and positive social emotions like 'joy' and 'lively', and frequent nodding gestures.
Our work lays the groundwork for future studies on human-robot interactions, emphasizing the essential role of both verbal and non-verbal cues in creating social and empathetic robots.
arXiv Detail & Related papers (2023-08-31T08:20:04Z) - Proceeding of the 1st Workshop on Social Robots Personalisation At the crossroads between engineering and humanities (CONCATENATE) [37.838596863193565]
This workshop aims to raise an interdisciplinary discussion on personalisation in robotics, bringing researchers from different fields together to propose guidelines for personalisation.
arXiv Detail & Related papers (2023-07-10T11:11:24Z) - CASPER: Cognitive Architecture for Social Perception and Engagement in Robots [0.5918643136095765]
We present CASPER: a symbolic cognitive architecture that uses qualitative spatial reasoning to anticipate the pursued goal of another agent and to calculate the best collaborative behavior.
We have tested this architecture in a simulated kitchen environment, and the collected results show that the robot is able both to recognize an ongoing goal and to collaborate properly towards its achievement.
arXiv Detail & Related papers (2022-09-01T10:15:03Z) - A ROS Architecture for Personalised HRI with a Bartender Social Robot [61.843727637976045]
The BRILLO project has the overall goal of creating an autonomous robotic bartender that can interact with customers while accomplishing its bartending tasks.
We present the developed three-layer ROS architecture, which integrates a perception layer managing the processing of different social signals, a decision-making layer handling multi-party interactions, and an execution layer controlling the behaviour of a complex robot composed of arms and a face (a minimal sketch of this layering appears after this list).
arXiv Detail & Related papers (2022-03-13T11:33:06Z) - A MultiModal Social Robot Toward Personalized Emotion Interaction [1.2183405753834562]
This study demonstrates a multimodal human-robot interaction (HRI) framework with reinforcement learning to enhance the robotic interaction policy.
The goal is to apply this framework in social scenarios so that the robot can generate more natural and engaging interactions.
arXiv Detail & Related papers (2021-10-08T00:35:44Z) - Learning and Executing Re-usable Behaviour Trees from Natural Language Instruction [1.4824891788575418]
Behaviour trees can be used in conjunction with natural language instruction to provide a robust and modular control architecture.
We show how behaviour trees generated using our approach can be generalised to novel scenarios.
We validate this approach against an existing corpus of natural language instructions (a minimal behaviour-tree sketch also follows the list below).
arXiv Detail & Related papers (2021-06-03T07:47:06Z) - Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve both collaboration performance and users' perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z) - A Reference Software Architecture for Social Robots [64.86618385090416]
We propose a series of principles that social robots may benefit from.
These principles lay also the foundations for the design of a reference software architecture for Social Robots.
arXiv Detail & Related papers (2020-07-09T17:03:21Z)
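As referenced in the BRILLO entry above, here is a plain-Python sketch of a three-layer (perception, decision-making, execution) architecture. It deliberately avoids a ROS dependency so it runs stand-alone; in a real deployment each layer would be a ROS node exchanging messages over topics, and all class, field, and action names here are invented for the example.

```python
# Plain-Python sketch of the perception / decision / execution layering
# from the BRILLO entry above. No ROS dependency; all names are invented.
from dataclasses import dataclass

@dataclass
class SocialSignal:
    """Fused output of the perception layer for one observed event."""
    speaker_id: str
    utterance: str
    looking_at_robot: bool

class PerceptionLayer:
    def process(self, raw_event: dict) -> SocialSignal:
        # Stand-in for speech recognition, face tracking, and gaze estimation.
        return SocialSignal(**raw_event)

class DecisionLayer:
    def decide(self, signal: SocialSignal) -> str:
        # Stand-in for multi-party dialogue management.
        if "order" in signal.utterance.lower():
            return "take_order"
        if signal.looking_at_robot:
            return "greet"
        return "idle"

class ExecutionLayer:
    def act(self, action: str) -> None:
        # Stand-in for the arm and face controllers.
        print(f"executing behaviour: {action}")

perception, decision, execution = PerceptionLayer(), DecisionLayer(), ExecutionLayer()
raw = {"speaker_id": "guest_1", "utterance": "Can I order a drink?", "looking_at_robot": True}
execution.act(decision.decide(perception.process(raw)))
```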
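And as referenced in the behaviour-tree entry above, the following minimal sketch shows why behaviour trees make a modular control architecture: a Sequence node ticks its children in order and fails fast, so subtrees generated from separate natural-language instructions can be reused and recombined. The node types and example task are illustrative assumptions, not the paper's method.

```python
# Minimal behaviour-tree sketch: a Sequence node ticks children in order
# and returns failure as soon as one child fails. The example task is a
# stand-in for a tree generated from a natural-language instruction.
SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node wrapping a callable that returns True on success."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Composite node: succeeds only if every child succeeds, in order."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

# Tree that might be generated from "pick up the cup and place it on the table".
world = {"holding": None}

def pick_cup():
    world["holding"] = "cup"
    return True

def place_on_table():
    return world["holding"] == "cup"

tree = Sequence([Action("pick_cup", pick_cup), Action("place_on_table", place_on_table)])
print(tree.tick())  # -> success
```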
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.