Aligning Robot Representations with Humans
- URL: http://arxiv.org/abs/2205.07882v1
- Date: Sun, 15 May 2022 15:51:05 GMT
- Title: Aligning Robot Representations with Humans
- Authors: Andreea Bobu, Andi Peng
- Abstract summary: A key question is how to best transfer knowledge learned in one environment to another, where shifting constraints and human preferences render adaptation challenging.
We postulate that because humans will be the ultimate evaluator of system success in the world, they are best suited to communicating the aspects of the tasks that matter to the robot.
We highlight three areas where we can use this approach to build interactive systems and offer future directions of work to better create advanced collaborative robots.
- Score: 5.482532589225552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As robots are increasingly deployed in real-world scenarios, a key question
is how to best transfer knowledge learned in one environment to another, where
shifting constraints and human preferences render adaptation challenging. A
central challenge remains that often, it is difficult (perhaps even impossible)
to capture the full complexity of the deployment environment, and therefore the
desired tasks, at training time. Consequently, the representation, or
abstraction, of the tasks the human hopes for the robot to perform in one
environment may be misaligned with the representation of the tasks that the
robot has learned in another. We postulate that because humans will be the
ultimate evaluator of system success in the world, they are best suited to
communicating the aspects of the tasks that matter to the robot. Our key
insight is that effective learning from human input requires first explicitly
learning good intermediate representations and then using those representations
for solving downstream tasks. We highlight three areas where we can use this
approach to build interactive systems and offer future directions of work to
better create advanced collaborative robots.
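To make this two-stage recipe concrete, here is a minimal, hypothetical Python sketch: an intermediate representation is first fit from (simulated) human feature annotations, and only then is a downstream objective fit on top of it. The data, the linear models, and every name below are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the two-stage recipe: (1) learn an intermediate
# representation phi from human input, then (2) reuse phi to fit a
# downstream objective (here, a linear reward). All data are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: learn phi(state) -> features from human-provided labels.
# "Human input" is simulated as noisy feature annotations, and phi is
# a linear map fit by least squares.
states = rng.normal(size=(200, 6))             # raw robot states
true_map = rng.normal(size=(6, 2))             # unknown state -> feature map
human_features = states @ true_map + 0.1 * rng.normal(size=(200, 2))
W_phi, *_ = np.linalg.lstsq(states, human_features, rcond=None)

def phi(state):
    """Learned intermediate representation of a raw state."""
    return state @ W_phi

# Stage 2: solve a downstream task on top of phi. Here we fit a reward
# that is linear in the learned features, from scalar human ratings.
ratings = human_features @ np.array([1.5, -0.5])  # simulated human scores
theta, *_ = np.linalg.lstsq(phi(states), ratings, rcond=None)

def reward(state):
    """Downstream reward defined over the learned representation."""
    return phi(state) @ theta

print("reward of a new state:", reward(rng.normal(size=6)))
```

The structural point, per the abstract's key insight, is that the downstream reward never touches raw states directly; it is defined entirely over the explicitly learned representation.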
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model on its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
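As a rough illustration of the flow-matching idea mentioned in this summary, the sketch below shows a standard conditional flow-matching loss for a small action head; the actual $π_0$ architecture, its VLM backbone, and its training setup are not reproduced, and all module names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class FlowHead(nn.Module):
    """Toy action head: predicts a velocity field conditioned on an
    observation embedding, a noisy action, and a time t in [0, 1]."""
    def __init__(self, obs_dim=32, act_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, act_dim))

    def forward(self, obs, noisy_action, t):
        return self.net(torch.cat([obs, noisy_action, t], dim=-1))

def flow_matching_loss(model, obs, action):
    # Straight-line probability path from noise to the target action.
    noise = torch.randn_like(action)
    t = torch.rand(action.shape[0], 1)
    x_t = (1 - t) * noise + t * action
    target_velocity = action - noise  # d x_t / d t along the path
    pred = model(obs, x_t, t)
    return ((pred - target_velocity) ** 2).mean()

model = FlowHead()
loss = flow_matching_loss(model, torch.randn(8, 32), torch.randn(8, 7))
```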
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Grounding Robot Policies with Visuomotor Language Guidance [15.774237279917594]
We propose an agent-based framework for grounding robot policies to the current context.
The proposed framework is composed of a set of conversational agents designed for specific roles.
We demonstrate that our approach can effectively guide manipulation policies to achieve significantly higher success rates.
arXiv Detail & Related papers (2024-10-09T02:00:37Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Extended Reality for Enhanced Human-Robot Collaboration: a Human-in-the-Loop Approach [2.336967926255341]
Human-robot collaboration combines the strength and precision of machines with human ingenuity and perceptual understanding.
We propose an implementation framework for an autonomous, machine learning-based manipulator that incorporates human-in-the-loop principles.
The conceptual framework envisions direct human involvement in the robot learning process, resulting in higher adaptability and better task generalization.
arXiv Detail & Related papers (2024-03-21T17:50:22Z)
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with the environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy learning.
arXiv Detail & Related papers (2023-10-04T17:59:38Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
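A hypothetical sketch of this reward construction follows: the reward is the negative distance to the goal in an embedding space, with a triplet-style time-contrastive loss standing in for the training objective. The network, dimensions, and names are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Toy embedding network over pre-extracted 64-dim observation features.
embed = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))

def time_contrastive_loss(anchor, positive, negative, margin=1.0):
    """Frames close in time (anchor/positive) should embed closer
    together than frames far apart in time (negative)."""
    d_pos = (embed(anchor) - embed(positive)).norm(dim=-1)
    d_neg = (embed(anchor) - embed(negative)).norm(dim=-1)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

def reward(obs, goal_obs):
    """Task-agnostic reward: negative embedding distance to the goal."""
    with torch.no_grad():
        return -(embed(obs) - embed(goal_obs)).norm(dim=-1)

obs, goal = torch.randn(1, 64), torch.randn(1, 64)
print(reward(obs, goal))
```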
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Semantic-Aware Environment Perception for Mobile Human-Robot Interaction [2.309914459672557]
We present a vision-based system that gives mobile robots semantic-aware perception of their environment without additional a priori knowledge.
We deploy the system on a mobile humanoid robot, which lets us test our methods in real-world applications.
arXiv Detail & Related papers (2022-11-07T08:49:45Z)
- Dual-Arm Adversarial Robot Learning [0.6091702876917281]
We propose dual-arm settings as platforms for robot learning.
We will discuss the potential benefits of this setup as well as the challenges and research directions that can be pursued.
arXiv Detail & Related papers (2021-10-15T12:51:57Z)
- Auditing Robot Learning for Safety and Compliance during Deployment [4.742825811314168]
We study how best to audit robot learning algorithms to check their compatibility with humans.
We believe that this is a challenging problem that will require efforts from the entire robot learning community.
arXiv Detail & Related papers (2021-10-12T02:40:11Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
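As a toy illustration of the retain-and-reuse idea in this summary, the sketch below keeps every task's experience and trains each new policy on the accumulated data. The stubs and names are illustrative assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class SkillSet:
    datasets: dict = field(default_factory=dict)  # task_id -> experience
    policies: dict = field(default_factory=dict)  # task_id -> policy

    def learn_task(self, task_id, new_data, train_fn):
        # Retain the new experience alongside all previous tasks' data.
        self.datasets[task_id] = new_data
        # Train on the union of old and new experience, so earlier
        # tasks keep informing later ones instead of being discarded.
        combined = [x for d in self.datasets.values() for x in d]
        self.policies[task_id] = train_fn(combined)
        return self.policies[task_id]

# Usage with a stub trainer that just reports how much data it saw:
skills = SkillSet()
skills.learn_task("open_drawer", [1, 2, 3], train_fn=len)
policy = skills.learn_task("close_drawer", [4, 5], train_fn=len)
print(policy)  # 5: trained on all retained experience
```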
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.