Human-oriented Representation Learning for Robotic Manipulation
- URL: http://arxiv.org/abs/2310.03023v1
- Date: Wed, 4 Oct 2023 17:59:38 GMT
- Title: Human-oriented Representation Learning for Robotic Manipulation
- Authors: Mingxiao Huo, Mingyu Ding, Chenfeng Xu, Thomas Tian, Xinghao Zhu, Yao
Mu, Lingfeng Sun, Masayoshi Tomizuka, Wei Zhan
- Abstract summary: Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with the environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
- Score: 64.59499047836637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans inherently possess generalizable visual representations that empower
them to efficiently explore and interact with the environments in manipulation
tasks. We advocate that such a representation automatically arises from
simultaneously learning about multiple simple perceptual skills that are
critical for everyday scenarios (e.g., hand detection, state estimation, etc.)
and is better suited for learning robot manipulation policies compared to
current state-of-the-art visual representations purely based on self-supervised
objectives. We formalize this idea through the lens of human-oriented
multi-task fine-tuning on top of pre-trained visual encoders, where each task
is a perceptual skill tied to human-environment interactions. We introduce Task
Fusion Decoder as a plug-and-play embedding translator that utilizes the
underlying relationships among these perceptual skills to guide the
representation learning towards encoding meaningful structure for what's
important for all perceptual skills, ultimately empowering learning of
downstream robotic manipulation tasks. Extensive experiments across a range of
robotic tasks and embodiments, in both simulations and real-world environments,
show that our Task Fusion Decoder consistently improves the representation of
three state-of-the-art visual encoders including R3M, MVP, and EgoVLP, for
downstream manipulation policy-learning. Project page:
https://sites.google.com/view/human-oriented-robot-learning
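
The abstract only outlines the approach, so the sketch below is a loose illustration (not the authors' implementation) of what human-oriented multi-task fine-tuning with a plug-and-play task-fusion decoder on top of a frozen pre-trained encoder could look like. The module names, the three task heads (hand box, object state, contact), the feature dimensions, and the transformer-decoder formulation are all assumptions made for this example.

```python
# Minimal sketch (assumptions only): a plug-and-play decoder fine-tuned with
# several human-oriented perceptual skills on top of a frozen visual encoder.
import torch
import torch.nn as nn


class TaskFusionDecoderSketch(nn.Module):
    def __init__(self, feat_dim=768, num_heads=8, num_layers=2):
        super().__init__()
        # One learnable query per assumed perceptual skill; letting the queries
        # attend jointly to the encoder tokens is how this sketch models the
        # "underlying relationships among perceptual skills".
        self.task_queries = nn.Parameter(torch.randn(3, feat_dim) * 0.02)
        layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=num_heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.heads = nn.ModuleList([
            nn.Linear(feat_dim, 4),   # hand box (x, y, w, h) -- assumed head
            nn.Linear(feat_dim, 16),  # object-state logits   -- assumed head
            nn.Linear(feat_dim, 1),   # contact logit         -- assumed head
        ])

    def forward(self, encoder_tokens):
        # encoder_tokens: (B, N, feat_dim) tokens from a frozen pre-trained
        # encoder such as R3M, MVP, or EgoVLP (shape is an assumption here).
        batch = encoder_tokens.shape[0]
        queries = self.task_queries.unsqueeze(0).repeat(batch, 1, 1)
        fused = self.decoder(tgt=queries, memory=encoder_tokens)  # (B, 3, feat_dim)
        task_outputs = [head(fused[:, i]) for i, head in enumerate(self.heads)]
        return fused, task_outputs


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 768)        # stand-in for ViT patch features
    fused, outputs = TaskFusionDecoderSketch()(tokens)
    print(fused.shape, [o.shape for o in outputs])
```

In this reading, supervision from each perceptual skill shapes a shared set of fused features, which is the representation a downstream manipulation policy would consume after fine-tuning.
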
Related papers
- HRP: Human Affordances for Robotic Pre-Training [15.92416819748365]
We present a framework for pre-training representations on hand, object, and contact affordances.
We experimentally demonstrate (using 3000+ robot trials) that this affordance pre-training scheme boosts performance by a minimum of 15% on 5 real-world tasks.
arXiv Detail & Related papers (2024-07-26T17:59:52Z)
- A Backpack Full of Skills: Egocentric Video Understanding with Diverse Task Perspectives [5.515192437680944]
We seek a unified approach to video understanding which combines shared temporal modelling of human actions with minimal overhead.
We propose EgoPack, a solution that creates a collection of task perspectives that can be carried across downstream tasks and used as a potential source of additional insights.
We demonstrate the effectiveness and efficiency of our approach on four Ego4D benchmarks, outperforming current state-of-the-art methods.
arXiv Detail & Related papers (2024-03-05T15:18:02Z)
- The Power of the Senses: Generalizable Manipulation from Vision and Touch through Masked Multimodal Learning [60.91637862768949]
We propose Masked Multimodal Learning (M3L) to fuse visual and tactile information in a reinforcement learning setting.
M3L learns a policy and visual-tactile representations based on masked autoencoding.
We evaluate M3L on three simulated environments with both visual and tactile observations.
arXiv Detail & Related papers (2023-11-02T01:33:00Z)
- RH20T: A Comprehensive Robotic Dataset for Learning Diverse Skills in One-Shot [56.130215236125224]
A key challenge in robotic manipulation in open domains is how to acquire diverse and generalizable skills for robots.
Recent research in one-shot imitation learning has shown promise in transferring trained policies to new tasks based on demonstrations.
This paper aims to unlock the potential for an agent to generalize to hundreds of real-world skills with multi-modal perception.
arXiv Detail & Related papers (2023-07-02T15:33:31Z)
- Language-Driven Representation Learning for Robotics [115.93273609767145]
Recent work in visual representation learning for robotics demonstrates the viability of learning from large video datasets of humans performing everyday tasks.
We introduce Voltron, a framework for language-driven representation learning from human videos and captions.
We find that Voltron's language-driven learning outperforms the prior state-of-the-art, especially on targeted problems requiring higher-level control.
arXiv Detail & Related papers (2023-02-24T17:29:31Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (a toy sketch of this recipe appears after this list).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos [59.58105314783289]
Domain-agnostic Video Discriminator (DVD) learns multitask reward functions by training a discriminator to classify whether two videos are performing the same task (see the rough sketch after this list).
DVD can generalize by virtue of learning from a small amount of robot data with a broad dataset of human videos.
DVD can be combined with visual model predictive control to solve robotic manipulation tasks on a real WidowX200 robot in an unseen environment from a single human demo.
arXiv Detail & Related papers (2021-03-31T05:25:05Z)
- Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos [28.712673809577076]
We present an approach for physical imitation from human videos for robot manipulation tasks.
We design a perception module that learns to translate human videos to the robot domain followed by unsupervised keypoint detection.
We evaluate the effectiveness of our approach on five robot manipulation tasks, including reaching, pushing, sliding, coffee making, and drawer closing.
arXiv Detail & Related papers (2021-01-18T18:50:32Z)
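
For the "Learning Reward Functions for Robotic Manipulation by Observing Humans" entry above, the toy sketch below illustrates the general recipe its summary describes: embed frames with a time-contrastive objective (rendered here as a triplet loss) and define the reward as the negative distance to a goal image in that embedding space. The tiny encoder, the triplet formulation, and all shapes are placeholders, not the paper's actual model.

```python
# Toy sketch (assumptions only): time-contrastive embedding + goal-distance reward.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder image encoder; the real work would use a conv/ViT backbone.
embed = nn.Sequential(nn.Flatten(),
                      nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
                      nn.Linear(256, 128))

def time_contrastive_loss(anchor, positive, negative, margin=1.0):
    # Frames close in time are pulled together; temporally distant frames from
    # the same video are pushed apart (a triplet-style reading of the objective).
    return F.triplet_margin_loss(embed(anchor), embed(positive), embed(negative),
                                 margin=margin)

def reward(frame, goal_frame):
    # Task-agnostic reward: negative distance to the goal image in embedding space.
    with torch.no_grad():
        return -torch.norm(embed(frame) - embed(goal_frame), dim=-1)

anchor = torch.randn(4, 3, 64, 64)
near = torch.randn(4, 3, 64, 64)      # temporally nearby frames (positives)
far = torch.randn(4, 3, 64, 64)       # temporally distant frames (negatives)
print(time_contrastive_loss(anchor, near, far).item(),
      reward(anchor, torch.randn(4, 3, 64, 64)).shape)
```
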
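Likewise, for the "Learning Generalizable Robotic Reward Functions from 'In-The-Wild' Human Videos" entry, here is a rough sketch of the discriminator idea: score whether two clips show the same task, and use that score as a reward for a robot rollout paired with a human demonstration of the desired task. The MLP discriminator and the clip-feature dimensions are assumptions made only for illustration.

```python
# Rough sketch (assumed architecture, not the DVD authors' code).
import torch
import torch.nn as nn

class SameTaskDiscriminator(nn.Module):
    # An MLP over a pair of clip-level features that outputs a
    # "these two videos show the same task" logit.
    def __init__(self, clip_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * clip_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, clip_a, clip_b):
        return self.net(torch.cat([clip_a, clip_b], dim=-1)).squeeze(-1)

disc = SameTaskDiscriminator()
# Training would use same-task / different-task video pairs with a BCE loss;
# at deployment, the score against a human demo of the target task is the reward.
human_demo = torch.randn(8, 512)   # placeholder features of human demo clips
robot_clip = torch.randn(8, 512)   # placeholder features of robot rollouts
reward = torch.sigmoid(disc(human_demo, robot_clip))
print(reward.shape)
```
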
This list is automatically generated from the titles and abstracts of the papers on this site.