iCub! Do you recognize what I am doing?: multimodal human action
recognition on multisensory-enabled iCub robot
- URL: http://arxiv.org/abs/2212.08859v1
- Date: Sat, 17 Dec 2022 12:40:54 GMT
- Title: iCub! Do you recognize what I am doing?: multimodal human action
recognition on multisensory-enabled iCub robot
- Authors: Kas Kniesmeijer and Murat Kirtay
- Abstract summary: We show that the proposed multimodal ensemble learning leverages the complementary characteristics of three color cameras and one depth sensor.
The results indicate that the proposed models can be deployed on the iCub robot for tasks that require multimodal action recognition.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This study uses multisensory data (i.e., color and depth) to recognize human
actions in the context of multimodal human-robot interaction. Here we employed
the iCub robot to observe human partners performing predefined actions with
four different tools on 20 objects. We show that the proposed multimodal
ensemble learning leverages the complementary characteristics of three color
cameras and one depth sensor, improving recognition accuracy in most cases
compared to models trained on a single modality. The results indicate that the
proposed models can be deployed on the iCub robot for tasks that require
multimodal action recognition, including social tasks such as partner-specific
adaptation and contextual behavior understanding, to mention a few.
Related papers
- A Multi-Modal Explainability Approach for Human-Aware Robots in Multi-Party Conversation [39.87346821309096]
We present an addressee estimation model with improved performance compared with the previous state of the art (SOTA).
We also propose several ways to incorporate explainability and transparency in the aforementioned architecture.
arXiv Detail & Related papers (2024-05-20T13:09:32Z) - Real-time Addressee Estimation: Deployment of a Deep-Learning Model on
the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z) - Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with their environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
arXiv Detail & Related papers (2023-10-04T17:59:38Z) - Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z) - Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (see the sketch after this list).
arXiv Detail & Related papers (2022-11-16T16:26:48Z) - Application-Driven AI Paradigm for Human Action Recognition [2.0342996661888995]
This paper presents a unified human action recognition framework composed of two modules, i.e., multi-form human detection and corresponding action classification.
Experimental results show that the unified framework is effective across various application scenarios.
arXiv Detail & Related papers (2022-09-30T07:22:01Z) - Continuous ErrP detections during multimodal human-robot interaction [2.5199066832791535]
We implement a multimodal human-robot interaction (HRI) scenario, in which a simulated robot communicates with its human partner through speech and gestures.
The human partner, in turn, evaluates whether the robot's verbal announcement (intention) matches the action (pointing gesture) chosen by the robot.
Intrinsic evaluations of robot actions by humans, evident in the EEG, were recorded in real time, continuously segmented online, and classified asynchronously.
arXiv Detail & Related papers (2022-07-25T15:39:32Z) - Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is typically not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z) - A robot that counts like a child: a developmental model of counting and
pointing [69.26619423111092]
A novel neuro-robotics model capable of counting real items is introduced.
The model allows us to investigate the interaction between embodiment and numerical cognition.
The trained model is able to count a set of items while simultaneously pointing to them.
arXiv Detail & Related papers (2020-08-05T21:06:27Z) - Gesture Recognition for Initiating Human-to-Robot Handovers [2.1614262520734595]
It is important to recognize when a human intends to initiate handovers, so that the robot does not try to take objects from humans when a handover is not intended.
We pose handover gesture recognition as a binary classification problem on a single RGB image.
Our results show that the handover gestures are correctly identified with an accuracy of over 90%.
arXiv Detail & Related papers (2020-07-20T08:49:34Z)
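The related entry "Learning Reward Functions for Robotic Manipulation by Observing Humans" describes rewards based on distances to a goal in an embedding space learned with a time-contrastive objective. A minimal sketch of that idea follows, with the encoder left as a placeholder; all names here are hypothetical and not taken from that paper.

```python
import numpy as np

def embedding_distance_reward(encoder, observation, goal_observation):
    """Reward as the negative distance to the goal in a learned embedding space.

    encoder: callable mapping an observation (e.g., an image) to an embedding
             vector; assumed to have been trained with a time-contrastive
             objective. Here it is only a placeholder.
    """
    z_obs = np.asarray(encoder(observation), dtype=float)
    z_goal = np.asarray(encoder(goal_observation), dtype=float)
    # The closer the current observation is to the goal embedding,
    # the higher (less negative) the reward.
    return -float(np.linalg.norm(z_obs - z_goal))
```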
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.