Signs of Language: Embodied Sign Language Fingerspelling Acquisition
from Demonstrations for Human-Robot Interaction
- URL: http://arxiv.org/abs/2209.05135v3
- Date: Mon, 5 Jun 2023 12:56:14 GMT
- Title: Signs of Language: Embodied Sign Language Fingerspelling Acquisition
from Demonstrations for Human-Robot Interaction
- Authors: Federico Tavella and Aphrodite Galata and Angelo Cangelosi
- Abstract summary: We propose an approach for learning dexterous motor imitation from video examples without additional information.
We first build a URDF model of a robotic hand with a single actuator for each joint.
We then leverage pre-trained deep vision models to extract the 3D pose of the hand from RGB videos.
- Score: 1.0166477175169308
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning fine-grained movements is a challenging topic in robotics,
particularly in the context of robotic hands. One specific instance of this
challenge is the acquisition of fingerspelling sign language in robots. In this
paper, we propose an approach for learning dexterous motor imitation from video
examples without additional information. To achieve this, we first build a URDF
model of a robotic hand with a single actuator for each joint. We then leverage
pre-trained deep vision models to extract the 3D pose of the hand from RGB
videos. Next, using state-of-the-art reinforcement learning algorithms for
motion imitation (namely, proximal policy optimization and soft actor-critic),
we train a policy to reproduce the movement extracted from the demonstrations.
We identify the optimal set of hyperparameters for imitation based on a
reference motion. Finally, we demonstrate the generalizability of our approach
by testing it on six different tasks, corresponding to fingerspelled letters.
Our results show that our approach is able to successfully imitate these
fine-grained movements without additional information, highlighting its
potential for real-world applications in robotics.
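The abstract outlines a three-stage pipeline: extract 3D hand poses from RGB video with a pre-trained vision model, treat the extracted trajectory as a reference motion, and train an RL policy (PPO or SAC) to reproduce it. The sketch below illustrates one way the pose-extraction and imitation stages could be wired together. MediaPipe Hands, the toy kinematic environment, the exponential tracking reward, and the file name `letter_a.mp4` are illustrative assumptions, not the authors' actual implementation (which simulates a URDF hand model with one actuator per joint).

```python
# Minimal sketch, assuming MediaPipe Hands for pose extraction and
# stable-baselines3 PPO for motion imitation; a real pipeline would retarget
# the landmarks to the URDF hand's joint angles and simulate its dynamics.
import cv2
import numpy as np
import mediapipe as mp
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


def extract_hand_trajectory(video_path: str) -> np.ndarray:
    """Return an array of shape (T, 21, 3): 21 3D hand landmarks per frame."""
    hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
    cap, frames = cv2.VideoCapture(video_path), []
    while True:
        ok, bgr = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
        if result.multi_hand_world_landmarks:
            lm = result.multi_hand_world_landmarks[0].landmark
            frames.append([[p.x, p.y, p.z] for p in lm])
    cap.release()
    return np.asarray(frames, dtype=np.float32)


class FingerspellImitationEnv(gym.Env):
    """Toy kinematic stand-in for the simulated hand: the state is a vector of
    joint targets, actions are small deltas, and the reward tracks the
    reference motion frame by frame."""

    def __init__(self, reference: np.ndarray):
        super().__init__()
        self.reference = reference                    # (T, D) reference targets
        self.horizon, self.dof = reference.shape
        self.action_space = spaces.Box(-0.1, 0.1, (self.dof,), np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf,
                                            (2 * self.dof + 1,), np.float32)

    def _obs(self):
        phase = np.array([self.t / self.horizon], dtype=np.float32)
        return np.concatenate([self.q, self.reference[self.t], phase])

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.q = 0, self.reference[0].copy()
        return self._obs(), {}

    def step(self, action):
        self.q = self.q + action                      # kinematic joint update
        err = np.linalg.norm(self.q - self.reference[self.t])
        reward = float(np.exp(-5.0 * err))            # pose-tracking reward
        terminated = self.t >= self.horizon - 1
        self.t = min(self.t + 1, self.horizon - 1)
        return self._obs(), reward, terminated, False, {}


if __name__ == "__main__":
    traj = extract_hand_trajectory("letter_a.mp4")    # hypothetical demo video
    # Flatten landmarks into a per-frame target vector; a real pipeline would
    # first retarget the landmarks to the robot hand's joint angles.
    reference = traj.reshape(len(traj), -1)
    env = FingerspellImitationEnv(reference)
    policy = PPO("MlpPolicy", env, verbose=0)
    policy.learn(total_timesteps=50_000)
```

The exponential tracking reward is one common choice for motion imitation; the paper's exact reward formulation and hyperparameters are not given in the abstract.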
Related papers
- OKAMI: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation [35.97702591413093]
We introduce OKAMI, a method that generates a manipulation plan from a single RGB-D video.
OKAMI uses open-world vision models to identify task-relevant objects and retarget the body motions and hand poses separately.
arXiv Detail & Related papers (2024-10-15T17:17:54Z)
- Latent Action Pretraining from Videos [156.88613023078778]
We introduce Latent Action Pretraining for general Action models (LAPA).
LAPA is an unsupervised method for pretraining Vision-Language-Action (VLA) models without ground-truth robot action labels.
We propose a method to learn from internet-scale videos that do not have robot action labels.
arXiv Detail & Related papers (2024-10-15T16:28:09Z)
- DemoStart: Demonstration-led auto-curriculum applied to sim-to-real with multi-fingered robots [15.034811470942962]
We present DemoStart, a novel auto-curriculum reinforcement learning method capable of learning complex manipulation behaviors on an arm equipped with a three-fingered robotic hand.
Learning from simulation drastically reduces the development cycle of behavior generation, and domain randomization techniques are leveraged to achieve successful zero-shot sim-to-real transfer.
arXiv Detail & Related papers (2024-09-10T16:05:25Z)
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training [69.54948297520612]
Learning a generalist embodied agent poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets.
We introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion to combine generative pre-training on human videos and policy fine-tuning on a small number of action-labeled robot videos.
Our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-02-22T09:48:47Z)
- VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models [38.503337052122234]
Large language models (LLMs) are shown to possess a wealth of actionable knowledge that can be extracted for robot manipulation.
We aim to synthesize robot trajectories for a variety of manipulation tasks given an open-set of instructions and an open-set of objects.
We demonstrate how the proposed framework can benefit from online experiences by efficiently learning a dynamics model for scenes that involve contact-rich interactions.
arXiv Detail & Related papers (2023-07-12T07:40:48Z)
- Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We train our policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their observations and actions.
We develop a simple approach, MOO, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Cross-Domain Transfer via Semantic Skill Imitation [49.83150463391275]
We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g. human videos, to accelerate reinforcement learning (RL).
Instead of imitating low-level actions like joint velocities, our approach imitates the sequence of demonstrated semantic skills, such as "opening the microwave" or "turning on the stove".
arXiv Detail & Related papers (2022-12-14T18:46:14Z)
- From One Hand to Multiple Hands: Imitation Learning for Dexterous Manipulation from Single-Camera Teleoperation [26.738893736520364]
We introduce a novel single-camera teleoperation system to collect the 3D demonstrations efficiently with only an iPad and a computer.
We construct a customized robot hand for each user in the physical simulator: a manipulator that matches the kinematic structure and shape of the operator's hand.
With imitation learning on our data, we show large improvements over baselines on multiple complex manipulation tasks.
arXiv Detail & Related papers (2022-04-26T17:59:51Z)
- DexMV: Imitation Learning for Dexterous Manipulation from Human Videos [11.470141313103465]
We propose a new platform and pipeline, DexMV, for imitation learning to bridge the gap between computer vision and robot learning.
We design a platform with: (i) a simulation system for complex dexterous manipulation tasks with a multi-finger robot hand and (ii) a computer vision system to record large-scale demonstrations of a human hand conducting the same tasks.
We show that the demonstrations can indeed improve robot learning by a large margin and solve the complex tasks which reinforcement learning alone cannot solve.
arXiv Detail & Related papers (2021-08-12T17:51:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.