Quantitative analysis of robot gesticulation behavior
- URL: http://arxiv.org/abs/2010.11614v1
- Date: Thu, 22 Oct 2020 11:17:18 GMT
- Title: Quantitative analysis of robot gesticulation behavior
- Authors: Unai Zabala, Igor Rodriguez, José María Martínez-Otzeta, Itziar
Irigoien, Elena Lazkano
- Abstract summary: The aim is to measure characteristics such as fidelity to the original training data, but at the same time keep track of the degree of originality of the produced gestures.
A new Fréchet Gesture Distance is proposed by adapting the Fréchet Inception Distance to gestures.
- Score: 2.9048924265579124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social robot capabilities, such as talking gestures, are best produced using
data-driven approaches to avoid being repetitive and to show trustworthiness.
However, there is a lack of robust quantitative methods that allow such
approaches to be compared beyond visual evaluation. In this paper a quantitative
analysis is performed that compares two Generative Adversarial Network-based
gesture generation approaches. The aim is to measure characteristics such as
fidelity to the original training data, while at the same time keeping track of
the degree of originality of the produced gestures. Principal Coordinate
Analysis and Procrustes statistics are performed, and a new Fréchet Gesture
Distance is proposed by adapting the Fréchet Inception Distance to gestures.
These three techniques are taken together to assess the fidelity/originality of
the generated gestures.
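The proposed Fréchet Gesture Distance follows the Fréchet Inception Distance formula: fit a Gaussian to each set of feature vectors and compute d² = ||μ₁ − μ₂||² + Tr(C₁ + C₂ − 2(C₁C₂)^½). Below is a minimal sketch, not the authors' implementation, assuming gesture sequences have already been embedded as fixed-length feature vectors; the Procrustes comparison uses SciPy's standard routine:

```python
import numpy as np
from scipy import linalg
from scipy.spatial import procrustes

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets.

    Each argument is an (n_samples, n_features) array of gesture embeddings.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; discard tiny
    # imaginary components introduced by numerical error.
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Procrustes statistic: disparity after optimally translating, scaling,
# and rotating one point configuration onto another (0 = identical shape).
pose_ref = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
pose_gen = 2.0 * pose_ref + 1.0  # scaled and translated copy
_, _, disparity = procrustes(pose_ref, pose_gen)
```

A generated gesture set that merely copies the training data yields a near-zero distance (high fidelity, no originality), while a distance that grows too large signals gestures drifting away from human-like motion; the paper uses the three measures jointly to locate that trade-off.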
Related papers
- Wearable Sensor-Based Few-Shot Continual Learning on Hand Gestures for Motor-Impaired Individuals via Latent Embedding Exploitation [6.782362178252351]
We introduce the Latent Embedding Exploitation (LEE) mechanism in our replay-based Few-Shot Continual Learning framework.
Our method produces a diversified latent feature space by leveraging a preserved latent embedding known as gesture prior knowledge.
Our method helps motor-impaired persons leverage wearable devices, and their unique styles of movement can be learned and applied.
arXiv Detail & Related papers (2024-05-14T21:20:27Z) - AQ-GT: a Temporally Aligned and Quantized GRU-Transformer for Co-Speech
Gesture Synthesis [0.0]
We present an approach to pre-train partial gesture sequences using a generative adversarial network with a quantization pipeline.
By learning the mapping of a latent space representation as opposed to directly mapping it to a vector representation, this framework facilitates the generation of highly realistic and expressive gestures.
arXiv Detail & Related papers (2023-05-02T07:59:38Z) - Variational Voxel Pseudo Image Tracking [127.46919555100543]
Uncertainty estimation is an important task for critical problems, such as robotics and autonomous driving.
We propose a Variational Neural Network-based version of a Voxel Pseudo Image Tracking (VPIT) method for 3D Single Object Tracking.
arXiv Detail & Related papers (2023-02-12T13:34:50Z) - An Omnidirectional Approach to Touch-based Continuous Authentication [6.83780085440235]
This paper focuses on how touch interactions on smartphones can provide a continuous user authentication service through behaviour captured by a touchscreen.
We present an omnidirectional approach which outperforms the traditional method independent of the touch direction.
We find that the TouchAlytics feature set outperforms others when using our approach when combining three or more strokes.
arXiv Detail & Related papers (2023-01-13T13:58:06Z) - Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards
Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z) - Contact-Aware Retargeting of Skinned Motion [49.71236739408685]
This paper introduces a motion estimation method that preserves self-contacts and prevents interpenetration.
The method identifies self-contacts and ground contacts in the input motion, and optimizes the motion to apply it to the output skeleton.
In experiments, our results quantitatively outperform previous methods and we conduct a user study where our retargeted motions are rated as higher-quality than those produced by recent works.
arXiv Detail & Related papers (2021-09-15T17:05:02Z) - Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z) - High-Robustness, Low-Transferability Fingerprinting of Neural Networks [78.2527498858308]
This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks.
It features high-robustness to the base model against model pruning as well as low-transferability to unassociated models.
arXiv Detail & Related papers (2021-05-14T21:48:23Z) - Domain Adaptive Robotic Gesture Recognition with Unsupervised
Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations in multi-modal data toward gesture recognition.
Results show that our approach recovers the performance with substantial gains, up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z) - Decoupling entrainment from consistency using deep neural networks [14.823143667165382]
Isolating the effect of consistency, i.e., speakers adhering to their individual styles, is a critical part of the analysis of entrainment.
We propose to treat speakers' initial vocal features as confounds for the prediction of subsequent outputs.
Using two existing neural approaches to deconfounding, we define new measures of entrainment that control for consistency.
arXiv Detail & Related papers (2020-11-03T17:30:05Z) - Moving fast and slow: Analysis of representations and post-processing in
speech-driven automatic gesture generation [7.6857153840014165]
We extend recent deep-learning-based, data-driven methods for speech-driven gesture generation by incorporating representation learning.
Our model takes speech as input and produces gestures as output, in the form of a sequence of 3D coordinates.
We conclude that it is important to take both motion representation and post-processing into account when designing an automatic gesture-production method.
arXiv Detail & Related papers (2020-07-16T07:32:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.