Style-transfer based Speech and Audio-visual Scene Understanding for
Robot Action Sequence Acquisition from Videos
- URL: http://arxiv.org/abs/2306.15644v1
- Date: Tue, 27 Jun 2023 17:37:53 GMT
- Title: Style-transfer based Speech and Audio-visual Scene Understanding for
Robot Action Sequence Acquisition from Videos
- Authors: Chiori Hori, Puyuan Peng, David Harwath, Xinyu Liu, Kei Ota, Siddarth
Jain, Radu Corcodel, Devesh Jha, Diego Romeres, Jonathan Le Roux
- Abstract summary: This paper introduces a method for robot action sequence generation from instruction videos.
We built a system that accomplishes various cooking actions, where an arm robot executes a sequence acquired from a cooking video.
- Score: 40.012813353904875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To realize human-robot collaboration, robots need to execute actions for new
tasks according to human instructions given finite prior knowledge. Human
experts can share their knowledge of how to perform a task with a robot through
multi-modal instructions in their demonstrations, showing a sequence of
short-horizon steps to achieve a long-horizon goal. This paper introduces a
method for robot action sequence generation from instruction videos using (1)
an audio-visual Transformer that converts audio-visual features and instruction
speech to a sequence of robot actions called dynamic movement primitives (DMPs)
and (2) style-transfer-based training that employs multi-task learning with
video captioning and weakly-supervised learning with a semantic classifier to
exploit unpaired video-action data. We built a system that accomplishes various
cooking actions, where an arm robot executes a DMP sequence acquired from a
cooking video using the audio-visual Transformer. Experiments with
Epic-Kitchen-100, YouCookII, QuerYD, and in-house instruction video datasets
show that the proposed method improves the quality of DMP sequences, achieving
2.3 times the METEOR score of a baseline video-to-action Transformer. The model
also achieved a 32% task success rate when given task knowledge of the object.
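For background, a dynamic movement primitive is commonly defined by a second-order transformation system driven by a first-order canonical system, as in the standard formulation of Ijspeert et al.; the abstract does not give the paper's exact parameterization, so the equations below are a generic sketch rather than the authors' definition:
\tau \dot{z} = \alpha_z \bigl( \beta_z (g - y) - z \bigr) + f(x), \qquad \tau \dot{y} = z, \qquad \tau \dot{x} = -\alpha_x x,
f(x) = \frac{\sum_i \psi_i(x)\, w_i}{\sum_i \psi_i(x)}\, x\, (g - y_0), \qquad \psi_i(x) = \exp\bigl(-h_i (x - c_i)^2\bigr).
Here y is the controlled state (e.g., an end-effector coordinate), g the goal, x a phase variable decaying from 1 to 0, and the weights w_i of the radial basis functions \psi_i shape the forcing term f. Under this reading, the "DMP sequence" predicted by the audio-visual Transformer can be understood as a chain of such primitives, each specified at least by a goal and its learned parameters, which the arm robot executes step by step.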
Related papers
- Chain-of-Modality: Learning Manipulation Programs from Multimodal Human Videos with Vision-Language-Models [49.4824734958566]
Chain-of-Modality (CoM) enables Vision Language Models to reason about multimodal human demonstration data.
CoM refines a task plan and generates detailed control parameters, enabling robots to perform manipulation tasks based on a single multimodal human video prompt.
arXiv Detail & Related papers (2025-04-17T21:31:23Z)
- VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation [53.63540587160549]
VidBot is a framework enabling zero-shot robotic manipulation using 3D affordances learned from in-the-wild, monocular, RGB-only human videos.
VidBot paves the way for leveraging everyday human videos to make robot learning more scalable.
arXiv Detail & Related papers (2025-03-10T10:04:58Z)
- Vid2Robot: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers [36.497624484863785]
We introduce Vid2Robot, an end-to-end video-conditioned policy that takes human videos demonstrating manipulation tasks as input and produces robot actions.
Our model is trained with a large dataset of prompt video-robot trajectory pairs to learn unified representations of human and robot actions from videos.
We evaluate Vid2Robot on real-world robots and observe over 20% improvement over BC-Z when using human prompt videos.
arXiv Detail & Related papers (2024-03-19T17:47:37Z)
- Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training [69.54948297520612]
Learning a generalist embodied agent poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets.
We introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion to combine generative pre-training on human videos and policy fine-tuning on a small number of action-labeled robot videos.
Our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-02-22T09:48:47Z)
- Learning to Act from Actionless Videos through Dense Correspondences [87.1243107115642]
We present an approach to construct a video-based robot policy capable of reliably executing diverse tasks across different robots and environments.
Our method leverages images as a task-agnostic representation, encoding both the state and action information, and text as a general representation for specifying robot goals.
We demonstrate the efficacy of our approach in learning policies on table-top manipulation and navigation tasks.
arXiv Detail & Related papers (2023-10-12T17:59:23Z)
- Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned Policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We learn our policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z)
- Perceive, Represent, Generate: Translating Multimodal Information to Robotic Motion Trajectories [1.0499611180329804]
Perceive-Represent-Generate (PRG) is a framework that maps perceptual information of different modalities to an adequate sequence of movements to be executed by a robot.
We evaluate our pipeline in the context of a novel robotic handwriting task, where the robot receives as input a word through different perceptual modalities (e.g., image, sound) and generates the corresponding motion trajectory to write it.
arXiv Detail & Related papers (2022-04-06T19:31:18Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- Understanding Action Sequences based on Video Captioning for Learning-from-Observation [14.467714234267307]
We propose a Learning-from-Observation framework that splits and understands a video of a human demonstration with verbal instructions to extract accurate action sequences.
The splitting is done based on local minimum points of the hand velocity, which align human daily-life actions with object-centered face contact transitions.
We match the motion descriptions with the verbal instructions to understand the correct human intent and ignore the unintended actions inside the video.
arXiv Detail & Related papers (2020-12-09T05:22:01Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
- Caption Generation of Robot Behaviors based on Unsupervised Learning of Action Segments [10.356412004005767]
Bridging robot action sequences and their natural language captions is an important task for increasing the explainability of human-assisting robots.
In this paper, we propose a system for generating natural language captions that describe the behaviors of human-assisting robots.
arXiv Detail & Related papers (2020-03-23T03:44:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.