Projection Mapping Implementation: Enabling Direct Externalization of
Perception Results and Action Intent to Improve Robot Explainability
- URL: http://arxiv.org/abs/2010.02263v3
- Date: Wed, 4 Nov 2020 16:06:52 GMT
- Title: Projection Mapping Implementation: Enabling Direct Externalization of
Perception Results and Action Intent to Improve Robot Explainability
- Authors: Zhao Han, Alexander Wilkinson, Jenna Parrillo, Jordan Allspaw, Holly
A. Yanco
- Abstract summary: Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not accurately present a robot's internal states.
Projecting the states directly onto a robot's operating environment has the advantages of being direct, accurate, and more salient.
- Score: 62.03014078810652
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not
accurately present a robot's internal states such as perception results and
action intent. Projecting the states directly onto a robot's operating
environment has the advantages of being direct, accurate, and more salient,
eliminating mental inference about the robot's intention. However, there is a
lack of tools for projection mapping in robotics, compared to established
motion planning libraries (e.g., MoveIt). In this paper, we detail the
implementation of projection mapping to enable researchers and practitioners to
push the boundaries for better interaction between robots and humans. We also
provide practical documentation and code for a sample manipulation projection
mapping on GitHub: https://github.com/uml-robotics/projection_mapping.
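The linked repository contains the authors' actual ROS-based implementation. As a rough, self-contained illustration of the underlying idea only (a camera-to-projector homography used to draw a perception result directly onto the work surface), the sketch below uses OpenCV; the calibration points, window setup, and the externalize_detection helper are hypothetical placeholders, not taken from the paper or repository.

```python
# Minimal sketch of externalizing a perception result via projection mapping.
# NOT the paper's implementation: the calibration points, window setup, and
# helper function here are hypothetical placeholders.
import numpy as np
import cv2

# Four reference points on the work surface as seen by the camera (pixels)...
camera_pts = np.float32([[112, 84], [498, 90], [505, 402], [101, 396]])
# ...and where those same physical points must land in the projector image.
projector_pts = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])

# Homography mapping camera pixel coordinates to projector pixel coordinates.
H = cv2.getPerspectiveTransform(camera_pts, projector_pts)

def externalize_detection(polygon_cam, label, proj_size=(1280, 720)):
    """Render a detected object's outline and label in projector space."""
    canvas = np.zeros((proj_size[1], proj_size[0], 3), dtype=np.uint8)
    # Warp the detection polygon from camera pixels into projector pixels.
    poly = cv2.perspectiveTransform(polygon_cam.reshape(-1, 1, 2), H)
    poly = poly.astype(np.int32)
    cv2.polylines(canvas, [poly], isClosed=True, color=(0, 255, 0), thickness=4)
    org = (int(poly[0, 0, 0]), int(poly[0, 0, 1]) - 10)  # label above first vertex
    cv2.putText(canvas, label, org, cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    return canvas

# Example: highlight a detected cup directly on the tabletop.
detection = np.float32([[220, 150], [300, 150], [300, 230], [220, 230]])
frame = externalize_detection(detection, "cup")
cv2.namedWindow("projector", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("projector", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
cv2.imshow("projector", frame)
cv2.waitKey(0)
```

Action intent could be externalized the same way: warp planned end-effector waypoints or a grasp target through the same homography and draw them as markers before the robot moves.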
Related papers
- Polaris: Open-ended Interactive Robotic Manipulation via Syn2Real Visual Grounding and Large Language Models [53.22792173053473]
We introduce an interactive robotic manipulation framework called Polaris.
Polaris integrates perception and interaction by utilizing GPT-4 alongside grounded vision models.
We propose a novel Synthetic-to-Real (Syn2Real) pose estimation pipeline.
arXiv Detail & Related papers (2024-08-15T06:40:38Z)
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts how points in an image should move in future time steps, conditioned on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z) - What Matters to You? Towards Visual Representation Alignment for Robot
Learning [81.30964736676103]
When operating in service of people, robots need to optimize rewards aligned with end-user preferences.
We propose Representation-Aligned Preference-based Learning (RAPL), a method for solving the visual representation alignment problem.
arXiv Detail & Related papers (2023-10-11T23:04:07Z) - ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space [9.806227900768926]
This paper introduces a novel deep-learning approach for human-to-robot motion retargeting.
Our method does not require paired human-to-robot data, which facilitates its translation to new robots.
Our model outperforms existing works on human-to-robot similarity in both efficiency and precision.
arXiv Detail & Related papers (2023-09-11T08:55:04Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Synthesis and Execution of Communicative Robotic Movements with
Generative Adversarial Networks [59.098560311521034]
We focus on transferring to two different robotic platforms the same kinematic modulation that humans adopt when manipulating delicate objects.
We modulate the velocity profile of the robots' end-effectors, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
arXiv Detail & Related papers (2022-03-29T15:03:05Z) - Reasoning with Scene Graphs for Robot Planning under Partial
Observability [7.121002367542985]
We develop an algorithm called scene analysis for robot planning (SARP) that enables robots to reason with visual contextual information.
Experiments have been conducted using multiple 3D environments in simulation, and a dataset collected by a real robot.
arXiv Detail & Related papers (2022-02-21T18:45:56Z) - Learning User-Preferred Mappings for Intuitive Robot Control [28.183430654834307]
We propose a method for learning the human's preferred or preconceived mapping from a few robot queries.
We make this approach data-efficient by recognizing that human mappings have strong priors.
Our simulated and experimental results suggest that learning the mapping between inputs and robot actions improves objective and subjective performance.
arXiv Detail & Related papers (2020-07-22T18:54:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.