Aligning Robot and Human Representations
- URL: http://arxiv.org/abs/2302.01928v2
- Date: Sun, 28 Jan 2024 15:33:23 GMT
- Title: Aligning Robot and Human Representations
- Authors: Andreea Bobu, Andi Peng, Pulkit Agrawal, Julie Shah, Anca D. Dragan
- Abstract summary: We argue that current representation learning approaches in robotics should be studied from the perspective of how well they accomplish the objective of representation alignment.
We mathematically define the problem, identify its key desiderata, and situate current methods within this formalism.
- Score: 50.070982136315784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To act in the world, robots rely on a representation of salient task aspects:
for example, to carry a coffee mug, a robot may consider movement efficiency or
mug orientation in its behavior. However, if we want robots to act for and with
people, their representations must not be just functional but also reflective
of what humans care about, i.e. they must be aligned. We observe that current
learning approaches suffer from representation misalignment, where the robot's
learned representation does not capture the human's representation. We suggest
that because humans are the ultimate evaluator of robot performance, we must
explicitly focus our efforts on aligning learned representations with humans,
in addition to learning the downstream task. We advocate that current
representation learning approaches in robotics should be studied from the
perspective of how well they accomplish the objective of representation
alignment. We mathematically define the problem, identify its key desiderata,
and situate current methods within this formalism. We conclude by suggesting
future directions for exploring open challenges.
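To make the alignment objective concrete, the following is a minimal formalization sketch; the notation (states s, human features phi_H, robot features phi_R, task models f_H and f_R, and a mismatch measure d) is assumed for illustration and is not necessarily the paper's own.

```latex
% A minimal sketch of representation alignment (assumed notation,
% not necessarily the paper's). The human evaluates states s through
% features \phi_H; the robot evaluates them through learned features \phi_R.
r_H(s) = f_H\big(\phi_H(s)\big), \qquad r_R(s) = f_R\big(\phi_R(s)\big)
% Learning the downstream task fits f_R; representation alignment
% additionally asks that \phi_R capture what \phi_H captures, e.g. by
% minimizing an expected mismatch d over a state distribution \mathcal{D}:
\min_{\phi_R} \; \mathbb{E}_{s \sim \mathcal{D}}\left[ d\big(\phi_H(s), \phi_R(s)\big) \right]
```

Under this reading, misalignment is the gap between phi_R and phi_H that can persist even when the downstream task loss is low.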
Related papers
- HRP: Human Affordances for Robotic Pre-Training [15.92416819748365]
We present a framework for pre-training representations on hand, object, and contact.
We experimentally demonstrate (using 3000+ robot trials) that this affordance pre-training scheme boosts performance by a minimum of 15% on 5 real-world tasks.
arXiv Detail & Related papers (2024-07-26T17:59:52Z)
- What Matters to You? Towards Visual Representation Alignment for Robot Learning [81.30964736676103]
When operating in service of people, robots need to optimize rewards aligned with end-user preferences.
We propose Representation-Aligned Preference-based Learning (RAPL), a method for solving the visual representation alignment problem (a minimal sketch of the idea follows this entry).
arXiv Detail & Related papers (2023-10-11T23:04:07Z)
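As a rough illustration of the idea behind preference-based visual representation alignment, here is a hedged Python sketch; the encoder architecture, triplet loss, and all names are assumptions for illustration, not RAPL's actual implementation.

```python
# Illustrative sketch (not RAPL's actual implementation): shape a visual
# encoder with human preference queries so that embedding distances
# reflect what the end-user cares about.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualEncoder(nn.Module):
    """Maps images to the embedding the human's comparisons should shape."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return self.net(x)

def preference_alignment_loss(encoder, anchor, preferred, rejected, margin=1.0):
    """Triplet-style loss: the behavior clip the human preferred should
    embed closer to the anchor (e.g. a goal image) than the rejected one."""
    z_a = encoder(anchor)
    z_p = encoder(preferred)
    z_r = encoder(rejected)
    d_pos = F.pairwise_distance(z_a, z_p)
    d_neg = F.pairwise_distance(z_a, z_r)
    return F.relu(d_pos - d_neg + margin).mean()
```

Once aligned, distances in such an embedding can serve as the reward signal for downstream preference-based policy learning.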
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with the environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy learning (a generic sketch of the underlying fine-tuning recipe follows this entry).
arXiv Detail & Related papers (2023-10-04T17:59:38Z)
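Here is a hedged sketch of the generic recipe the entry above builds on, multi-task fine-tuning on a frozen pre-trained encoder; this is not the paper's Task Fusion Decoder, and all names and task heads are illustrative.

```python
# Illustrative sketch (not the paper's Task Fusion Decoder): fine-tune
# lightweight task heads on top of a frozen pre-trained visual encoder,
# so human-relevant auxiliary tasks shape the usable representation.
import torch.nn as nn

class MultiTaskFineTuner(nn.Module):
    def __init__(self, backbone, feat_dim, task_dims):
        super().__init__()
        self.backbone = backbone  # pre-trained visual encoder
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep the encoder frozen
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, dim) for name, dim in task_dims.items()}
        )

    def forward(self, images):
        feats = self.backbone(images)
        return {name: head(feats) for name, head in self.heads.items()}

# Hypothetical usage: joint loss over human-relevant auxiliary tasks.
# model = MultiTaskFineTuner(encoder, feat_dim=512,
#                            task_dims={"grasp_point": 2, "object_class": 10})
# loss = sum(criteria[t](outputs[t], labels[t]) for t in outputs)
```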
- A Review of Scene Representations for Robot Manipulators [0.0]
We focus on representations which are built from real world sensing and are used to inform some downstream task.
Scene representations vary widely depending on the type of robot, the sensing modality, and the task that the robot is designed to do.
arXiv Detail & Related papers (2022-12-22T20:32:19Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
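The reward construction described in the entry above can be sketched briefly; the function names and margin are assumptions, and the paper's actual objective and architecture may differ.

```python
# Illustrative sketch of a time-contrastive reward (assumed names):
# frames close in time should embed close, distant frames far apart,
# and the policy's reward is negative embedding distance to the goal.
import torch
import torch.nn.functional as F

def time_contrastive_loss(encoder, frame_t, frame_near, frame_far, margin=0.5):
    """Pull temporally nearby frames together, push distant ones apart."""
    z_t = encoder(frame_t)
    z_near = encoder(frame_near)  # a few steps away in the same video
    z_far = encoder(frame_far)    # many steps away, or another video
    d_pos = F.pairwise_distance(z_t, z_near)
    d_neg = F.pairwise_distance(z_t, z_far)
    return F.relu(d_pos - d_neg + margin).mean()

def goal_distance_reward(encoder, observation, goal_image):
    """Task-agnostic reward: negative distance to the goal embedding."""
    with torch.no_grad():
        z_obs = encoder(observation)
        z_goal = encoder(goal_image)
    return -F.pairwise_distance(z_obs, z_goal)
```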
- Reasoning about Counterfactuals to Improve Human Inverse Reinforcement Learning [5.072077366588174]
Humans naturally infer other agents' beliefs and desires by reasoning about their observable behavior.
We propose to incorporate the learner's current understanding of the robot's decision making into our model of human IRL.
We also propose a novel measure for estimating the difficulty for a human to predict instances of a robot's behavior in unseen environments.
arXiv Detail & Related papers (2022-03-03T17:06:37Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that convey information about the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
- Quantifying Hypothesis Space Misspecification in Learning from Human-Robot Demonstrations and Physical Corrections [34.53709602861176]
Recent work focuses on how robots can use human input, such as demonstrations and physical corrections, to learn intended objectives.
We demonstrate our method on a 7 degree-of-freedom robot manipulator in learning from two important types of human input.
arXiv Detail & Related papers (2020-02-03T18:59:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.