Interaction between humanoid robots: developing autonomous collaboration and communication
- URL: http://arxiv.org/abs/2410.17450v2
- Date: Thu, 24 Oct 2024 17:38:44 GMT
- Title: Interaction between humanoid robots: developing autonomous collaboration and communication
- Authors: Moraes Pablo, Rodríguez Mónica, Peters Christopher, Sodre Hiago, Mazondo Ahilen, Sandin Vincent, Barcelona Sebastian, Moraes William, Fernández Santiago, Assunção Nathalie, de Vargas Bruna, Dörnbach Tobias, Kelbouscas André, Grando Ricardo
- Abstract summary: The study investigates the interaction between humanoid robots NAO and Pepper, emphasizing their potential applications in educational settings.
Through a series of programmed interactions, the robots demonstrated their ability to communicate and coordinate actions autonomously.
- Abstract: This study investigates the interaction between humanoid robots NAO and Pepper, emphasizing their potential applications in educational settings. NAO, widely used in education, and Pepper, designed for social interactions, offer new opportunities for autonomous communication and collaboration. Through a series of programmed interactions, the robots demonstrated their ability to communicate and coordinate actions autonomously, highlighting their potential as tools for enhancing learning environments. The research also explores the integration of emerging technologies, such as artificial intelligence, into these systems, allowing robots to learn from each other and adapt their behavior. The findings suggest that NAO and Pepper can significantly contribute to both technical learning and the development of social and emotional skills in students, offering innovative pedagogical approaches through the use of humanoid robotics.
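The paper does not publish its code, but a minimal sketch can illustrate the kind of scripted turn-taking coordination the abstract describes. The sketch below assumes both robots run Aldebaran's NAOqi framework (which NAO and Pepper ship with) and are reachable on the local network; the IP addresses, the dialogue lines, and the coordinate_greeting helper are hypothetical, not taken from the paper.

```python
# Minimal sketch of turn-taking between NAO and Pepper via the NAOqi
# Python SDK. Assumes both robots are on the local network; addresses
# and dialogue below are hypothetical.
from naoqi import ALProxy

NAO_IP = "192.168.1.10"     # hypothetical address of the NAO robot
PEPPER_IP = "192.168.1.11"  # hypothetical address of the Pepper robot
PORT = 9559                 # default NAOqi broker port

def coordinate_greeting():
    """Run a simple scripted exchange in which each robot speaks in turn."""
    nao_tts = ALProxy("ALTextToSpeech", NAO_IP, PORT)
    pepper_tts = ALProxy("ALTextToSpeech", PEPPER_IP, PORT)

    # Each say() call blocks until the speech finishes, so alternating
    # the calls yields coordinated turn-taking without extra signaling.
    nao_tts.say("Hello Pepper, shall we start the lesson?")
    pepper_tts.say("Yes NAO, I will introduce the topic.")
    nao_tts.say("Then I will demonstrate the exercise.")

if __name__ == "__main__":
    coordinate_greeting()
```

A real deployment would replace the fixed script with message passing between the robots (for example, via each robot's ALMemory service) so the turn order can adapt at run time rather than being hard-coded.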
Related papers
- Human-Robot Mutual Learning through Affective-Linguistic Interaction and Differential Outcomes Training [Pre-Print] [0.3811184252495269]
We test how affective-linguistic communication, in combination with differential outcomes training (DOT; a minimal sketch of the idea appears after this list), affects mutual learning in a human-robot context.
Taking inspiration from child-caregiver dynamics, our human-robot interaction setup consists of a (simulated) robot attempting to learn how best to communicate internal, homeostatically-controlled needs.
arXiv Detail & Related papers (2024-07-01T13:35:08Z)
- A Human-Robot Mutual Learning System with Affect-Grounded Language Acquisition and Differential Outcomes Training [0.1812164955222814]
The paper presents a novel human-robot interaction setup for identifying robot homeostatic needs.
We adopted a differential outcomes training protocol whereby the robot provides feedback specific to its internal needs.
We found evidence that DOT can enhance the human's learning efficiency, which in turn enables more efficient robot language acquisition.
arXiv Detail & Related papers (2023-10-20T09:41:31Z)
- A Sign Language Recognition System with Pepper, Lightweight-Transformer, and LLM [0.9775599530257609]
This research explores using lightweight deep neural network architectures to enable the humanoid robot Pepper to understand American Sign Language (ASL).
We introduce a lightweight and efficient model for ASL understanding optimized for embedded systems, ensuring rapid sign recognition while conserving computational resources.
We tailor interactions to allow the Pepper Robot to generate natural Co-Speech Gesture responses, laying the foundation for more organic and intuitive humanoid-robot dialogues.
arXiv Detail & Related papers (2023-09-28T23:54:41Z)
- World Models and Predictive Coding for Cognitive and Developmental Robotics: Frontiers and Challenges [51.92834011423463]
We focus on the two concepts of world models and predictive coding.
In neuroscience, predictive coding proposes that the brain continuously predicts its inputs and adapts to model its own dynamics and control behavior in its environment.
arXiv Detail & Related papers (2023-01-14T06:38:14Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and Robotics Together [68.44697646919515]
This paper presents several human-robot systems that utilize spatial computing to enable novel robot use cases.
The combination of spatial computing and egocentric sensing on mixed reality devices enables them to capture and understand human actions and translate these to actions with spatial meaning.
arXiv Detail & Related papers (2022-02-03T10:04:26Z)
- A MultiModal Social Robot Toward Personalized Emotion Interaction [1.2183405753834562]
This study demonstrates a multimodal human-robot interaction (HRI) framework with reinforcement learning to enhance the robotic interaction policy.
The goal is to apply this framework in social scenarios so that robots can generate more natural and engaging interactions.
arXiv Detail & Related papers (2021-10-08T00:35:44Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
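Two of the entries above rely on differential outcomes training (DOT). As a minimal sketch of the idea, under DOT each correct stimulus-response pairing produces its own distinctive outcome, rather than one generic reward shared by all pairs, which is what is thought to speed up learning. The needs, responses, and outcomes below are illustrative toy values, not taken from either paper.

```python
# Toy sketch of differential outcomes training (DOT): each correct
# stimulus -> response pairing yields its own distinctive outcome,
# instead of one generic "correct!" signal shared by all pairs.
import random

# Stimulus -> correct response (e.g., a robot's internal need -> the
# action a human partner should take). Illustrative values only.
CORRECT_RESPONSE = {"hungry": "feed", "tired": "rest", "bored": "play"}

# DOT condition: a unique outcome per stimulus.
DIFFERENTIAL_OUTCOME = {
    "hungry": "purring sound",
    "tired": "dimmed lights",
    "bored": "happy dance",
}

def trial(stimulus: str, response: str, differential: bool) -> str:
    """Return the feedback the learner receives for one trial."""
    if response != CORRECT_RESPONSE[stimulus]:
        return "no outcome"
    # Control condition returns the same generic outcome for every success.
    return DIFFERENTIAL_OUTCOME[stimulus] if differential else "correct!"

if __name__ == "__main__":
    stimulus = random.choice(list(CORRECT_RESPONSE))
    print(stimulus, "->", trial(stimulus, "feed", differential=True))
```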
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences arising from its use.