Human Haptic Gesture Interpretation for Robotic Systems
- URL: http://arxiv.org/abs/2012.01959v3
- Date: Wed, 10 Mar 2021 02:43:43 GMT
- Title: Human Haptic Gesture Interpretation for Robotic Systems
- Authors: Elizabeth Bibit Bianchini, Prateek Verma and Kenneth Salisbury
- Abstract summary: Physical human-robot interactions (pHRI) are less efficient and communicative than human-human interactions.
A key reason is a lack of informative sense of touch in robotic systems.
This work presents four proposed touch gesture classes that cover the majority of the gesture characteristics identified in the literature.
- Score: 3.888848425698769
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physical human-robot interactions (pHRI) are less efficient and communicative
than human-human interactions, and a key reason is a lack of informative sense
of touch in robotic systems. Interpreting human touch gestures is a nuanced,
challenging task with extreme gaps between human and robot capability. Among
prior works that demonstrate human touch recognition capability, differences in
sensors, gesture classes, feature sets, and classification algorithms yield a
conglomerate of non-transferable results and a glaring lack of a standard. To
address this gap, this work presents 1) four proposed touch gesture classes
that cover the majority of the gesture characteristics identified in the
literature, 2) the collection of an extensive force dataset on a common pHRI
robotic arm with only its internal wrist force-torque sensor, and 3) an
exhaustive performance comparison of combinations of feature sets and
classification algorithms on this dataset. We demonstrate high classification
accuracies among our proposed gesture definitions on a test set, emphasizing
that neural network classifiers on the raw data outperform other combinations
of feature sets and algorithms.
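To make contribution 3) concrete, here is a minimal sketch of a neural-network classifier operating directly on raw wrist force-torque windows, in the spirit of the finding that neural networks on raw data outperform hand-engineered feature sets. The four class names, the assumed sampling rate, the window length, and the 1-D CNN architecture below are illustrative assumptions, not the configuration used in the paper.
```python
import torch
import torch.nn as nn

# Hypothetical gesture labels; the paper defines four classes, but the
# names, window length, and sampling rate here are illustrative assumptions.
GESTURES = ["class_0", "class_1", "class_2", "class_3"]
WINDOW = 200      # e.g. ~2 s of force-torque samples at an assumed 100 Hz
CHANNELS = 6      # Fx, Fy, Fz, Tx, Ty, Tz from the wrist force-torque sensor

class RawFTClassifier(nn.Module):
    """Small 1-D CNN mapping a raw force-torque window to a gesture class."""
    def __init__(self, n_classes: int = len(GESTURES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(CHANNELS, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, CHANNELS, WINDOW) raw, unengineered sensor samples
        return self.head(self.features(x).squeeze(-1))

if __name__ == "__main__":
    model = RawFTClassifier()
    dummy_window = torch.randn(1, CHANNELS, WINDOW)  # stand-in for real data
    logits = model(dummy_window)
    print("predicted gesture:", GESTURES[int(logits.argmax(dim=1))])
```
Given a trained model, a sliding window over the arm's streaming force-torque signal would yield one gesture prediction per window.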
Related papers
- The Role of Functional Muscle Networks in Improving Hand Gesture Perception for Human-Machine Interfaces [2.367412330421982]
Surface electromyography (sEMG) has been explored for its rich informational context and accessibility.
This paper proposes the decoding of muscle synchronization rather than individual muscle activation.
It achieves an accuracy of 85.1%, demonstrating improved performance compared to existing methods.
arXiv Detail & Related papers (2024-08-05T15:17:34Z)
- Interactive Multi-Robot Flocking with Gesture Responsiveness and Musical Accompaniment [0.7659052547635159]
This work presents a compelling multi-robot task in which the main aim is to enthrall and interest a human participant.
In this task, the goal is for a human to be drawn to move alongside and participate in a dynamic, expressive robot flock.
Towards this aim, the research team created algorithms for robot movements and engaging interaction modes such as gestures and sound.
arXiv Detail & Related papers (2024-03-30T18:16:28Z)
- Inter-X: Towards Versatile Human-Human Interaction Analysis [100.254438708001]
We propose Inter-X, a dataset with accurate body movements and diverse interaction patterns.
The dataset includes 11K interaction sequences and more than 8.1M frames.
We also equip Inter-X with versatile annotations of more than 34K fine-grained human part-level textual descriptions.
arXiv Detail & Related papers (2023-12-26T13:36:05Z)
- HODN: Disentangling Human-Object Feature for HOI Detection [51.48164941412871]
We propose a Human and Object Disentangling Network (HODN) to model the Human-Object Interaction (HOI) relationships explicitly.
Considering that human features contribute more to interactions, we propose a Human-Guide Linking method to make sure the interaction decoder focuses on the human-centric regions.
Our proposed method achieves competitive performance on both the V-COCO and HICO-Det datasets.
arXiv Detail & Related papers (2023-08-20T04:12:50Z)
- Online Recognition of Incomplete Gesture Data to Interface Collaborative Robots [0.0]
This paper introduces an HRI framework to classify large vocabularies of interwoven static gestures (SGs) and dynamic gestures (DGs) captured with wearable sensors.
The recognized gestures are used to teleoperate a robot in a collaborative process that consists of preparing a breakfast meal.
arXiv Detail & Related papers (2023-04-13T18:49:08Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
In most prior work, planning motions that take human comfort into account is not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been shown to be suitable tools for addressing such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
- TactileSGNet: A Spiking Graph Neural Network for Event-based Tactile Object Recognition [17.37142241982902]
New advances in flexible, event-driven, electronic skins may soon endow robots with touch perception capabilities similar to those of humans.
These unique features may render current deep learning approaches such as convolutional feature extractors unsuitable for tactile learning.
We propose a novel spiking graph neural network for event-based tactile object recognition.
arXiv Detail & Related papers (2020-08-01T03:35:15Z)
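As a rough illustration of the event-based, graph-structured tactile learning that TactileSGNet targets, the sketch below applies a normalized graph convolution over a toy taxel adjacency matrix and passes the result through a leaky integrate-and-fire unit. The taxel layout, layer sizes, threshold, and decay values are assumptions made for illustration; this is not the authors' TactileSGNet architecture.
```python
import torch
import torch.nn as nn

# Toy taxel layout: 4 taxels in a line; a real sensor's graph would follow
# its actual spatial layout (assumption for illustration only).
ADJ = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 1., 0.],
                    [0., 1., 1., 1.],
                    [0., 0., 1., 1.]])
DEG_INV_SQRT = torch.diag(ADJ.sum(1).pow(-0.5))
A_HAT = DEG_INV_SQRT @ ADJ @ DEG_INV_SQRT   # symmetrically normalized adjacency

class SpikingGraphLayer(nn.Module):
    """One graph-convolution step followed by a leaky integrate-and-fire unit."""
    def __init__(self, in_dim, out_dim, threshold=1.0, decay=0.9):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.threshold, self.decay = threshold, decay

    def forward(self, event_frames):
        # event_frames: (time_bins, n_taxels, in_dim) binned tactile event counts
        membrane = torch.zeros(event_frames.shape[1], self.lin.out_features)
        spikes_per_step = []
        for frame in event_frames:
            # leaky integration of graph-aggregated input
            membrane = self.decay * membrane + A_HAT @ self.lin(frame)
            spikes = (membrane >= self.threshold).float()   # emit spikes
            membrane = membrane * (1 - spikes)              # reset fired neurons
            spikes_per_step.append(spikes)
        return torch.stack(spikes_per_step)

if __name__ == "__main__":
    layer = SpikingGraphLayer(in_dim=2, out_dim=8)
    events = torch.randint(0, 3, (10, 4, 2)).float()   # fake event counts
    out_spikes = layer(events)
    print("spike counts per taxel:", out_spikes.sum(dim=(0, 2)))
```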