CASPER: Cognitive Architecture for Social Perception and Engagement in
Robots
- URL: http://arxiv.org/abs/2209.01012v1
- Date: Thu, 1 Sep 2022 10:15:03 GMT
- Title: CASPER: Cognitive Architecture for Social Perception and Engagement in
Robots
- Authors: Samuele Vinanzi and Angelo Cangelosi
- Abstract summary: We present CASPER: a symbolic cognitive architecture that uses qualitative spatial reasoning to anticipate the pursued goal of another agent and to calculate the best collaborative behavior.
We tested this architecture in a simulated kitchen environment, and the collected results show that the robot is able both to recognize an ongoing goal and to collaborate properly towards its achievement.
- Score: 0.5918643136095765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Our world is being increasingly pervaded by intelligent robots with varying
degrees of autonomy. To seamlessly integrate themselves in our society, these
machines should possess the ability to navigate the complexities of our daily
routines even in the absence of direct human input. In other words, we want
these robots to understand their partners' intentions in order to predict the
best way to help them. In this paper, we present CASPER
(Cognitive Architecture for Social Perception and Engagement in Robots): a
symbolic cognitive architecture that uses qualitative spatial reasoning to
anticipate the pursued goal of another agent and to calculate the best
collaborative behavior. This is performed through an ensemble of parallel
processes that model low-level action recognition and high-level goal
understanding, both of which are formally verified. We have tested this
architecture in a simulated kitchen environment, and the collected results
show that the robot is able both to recognize an ongoing goal and to
collaborate properly towards its achievement. This demonstrates a new use of
Qualitative Spatial Relations applied to the problem of intention reading in
the domain of human-robot interaction.
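To make the core idea concrete, below is a minimal sketch of how qualitative spatial reasoning could support intention reading in a kitchen scene. This is not CASPER's implementation: the QDC distance thresholds, the goal library, and all function names are illustrative assumptions. The sketch mirrors the two-level design described in the abstract: a low-level process abstracts the metric trajectory of the human's hand into qualitative symbols, and a high-level process scores candidate goals against that symbolic evidence.

```python
import math

# Qualitative Distance Calculus (QDC): discretize metric distance into symbols.
# Thresholds (in metres) are illustrative assumptions, not CASPER's values.
QDC_BINS = [(0.1, "touch"), (0.4, "near"), (1.0, "medium")]

def qdc(p, q):
    """Map the Euclidean distance between two 2-D points to a QDC symbol."""
    d = math.dist(p, q)
    for threshold, label in QDC_BINS:
        if d <= threshold:
            return label
    return "far"

def qtc_sign(prev_d, curr_d, eps=1e-3):
    """Qualitative Trajectory Calculus-style sign: approaching, receding, stable."""
    if curr_d < prev_d - eps:
        return "approaching"
    if curr_d > prev_d + eps:
        return "receding"
    return "stable"

# Hypothetical goal library: the objects an agent is expected to engage with
# while pursuing each goal (a stand-in for the paper's verified goal models).
GOALS = {
    "make_tea":    ["kettle", "cup", "teabag"],
    "make_coffee": ["kettle", "cup", "coffee_jar"],
    "wash_dishes": ["sink", "sponge"],
}

def score_goals(hand_track, objects):
    """Low level: derive QSR symbols per object from the hand trajectory.
    High level: score each goal by the fraction of its objects that the
    hand is currently touching or approaching."""
    evidence = {}
    for name, pos in objects.items():
        d_prev = math.dist(hand_track[-2], pos)
        d_curr = math.dist(hand_track[-1], pos)
        evidence[name] = (qdc(hand_track[-1], pos), qtc_sign(d_prev, d_curr))
    return {
        goal: sum(1 for obj in needed if obj in evidence and
                  ("touch" in evidence[obj] or "approaching" in evidence[obj]))
              / len(needed)
        for goal, needed in GOALS.items()
    }

if __name__ == "__main__":
    objects = {"kettle": (0.2, 0.0), "cup": (1.5, 0.3), "teabag": (1.4, 0.5),
               "coffee_jar": (1.6, 0.4), "sink": (3.0, 1.0), "sponge": (3.1, 1.1)}
    hand_track = [(1.0, 0.0), (0.5, 0.0)]   # the hand moves towards the kettle
    print(score_goals(hand_track, objects))
```

In this toy run the hand approaches only the kettle, so make_tea and make_coffee each receive partial evidence while wash_dishes receives none; in a fuller system, subsequent observations would disambiguate between the remaining candidate goals.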
Related papers
- HARMONIC: Cognitive and Control Collaboration in Human-Robotic Teams [0.0]
We demonstrate a cognitive strategy for robots in human-robot teams that incorporates metacognition, natural language communication, and explainability.
The system is embodied using the HARMONIC architecture that flexibly integrates cognitive and control capabilities.
arXiv Detail & Related papers (2024-09-26T16:48:21Z) - Nadine: An LLM-driven Intelligent Social Robot with Affective Capabilities and Human-like Memory [3.3906920519220054]
We describe our approach to developing an intelligent and robust social robotic system for the Nadine platform.
We achieve this by integrating Large Language Models (LLMs) and leveraging their powerful reasoning and instruction-following capabilities.
This approach is novel compared to current state-of-the-art LLM-based agents, which do not implement human-like long-term memory or sophisticated emotional appraisal.
arXiv Detail & Related papers (2024-05-30T15:55:41Z) - Real-time Addressee Estimation: Deployment of a Deep-Learning Model on
the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z) - Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z) - SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z) - CoGrasp: 6-DoF Grasp Generation for Human-Robot Collaboration [0.0]
We propose a novel, deep neural network-based method called CoGrasp that generates human-aware robot grasps.
In real robot experiments, our method achieves about 88% success rate in producing stable grasps.
Our approach enables a safe, natural, and socially aware co-grasping experience between humans and robots.
arXiv Detail & Related papers (2022-10-06T19:23:25Z) - Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and
Robotics Together [68.44697646919515]
This paper presents several human-robot systems that utilize spatial computing to enable novel robot use cases.
The combination of spatial computing and egocentric sensing on mixed reality devices enables them to capture and understand human actions and translate these to actions with spatial meaning.
arXiv Detail & Related papers (2022-02-03T10:04:26Z) - Cognitive architecture aided by working-memory for self-supervised
multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot
Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and the user's perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)