A Review of Scene Representations for Robot Manipulators
- URL: http://arxiv.org/abs/2301.11275v1
- Date: Thu, 22 Dec 2022 20:32:19 GMT
- Title: A Review of Scene Representations for Robot Manipulators
- Authors: Carter Sifferman
- Abstract summary: We focus on representations which are built from real world sensing and are used to inform some downstream task.
Scene representations vary widely depending on the type of robot, the sensing modality, and the task that the robot is designed to do.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For a robot to act intelligently, it needs to sense the world around it.
Increasingly, robots build an internal representation of the world from sensor
readings. This representation can then be used to inform downstream tasks, such
as manipulation, collision avoidance, or human interaction. In practice, scene
representations vary widely depending on the type of robot, the sensing
modality, and the task that the robot is designed to do. This review provides
an overview of the scene representations used for robot manipulators (robot
arms). We focus primarily on representations which are built from real world
sensing and are used to inform some downstream robotics task.
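To make the idea of a sensed scene representation concrete, here is a minimal sketch (not from the paper; the camera intrinsics fx, fy, cx, cy are placeholder values) that back-projects a depth image into a 3D point cloud with the standard pinhole camera model, one of the simplest representations a manipulator can build from a depth sensor.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an Nx3 point cloud
    in the camera frame, using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example with placeholder intrinsics and synthetic depth (assumed values, not from the paper)
depth = np.random.uniform(0.5, 2.0, size=(480, 640))
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```

Such a point cloud can then be voxelized, meshed, or fed to a learned model to support downstream tasks like collision checking or grasp planning.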
Related papers
- HuBo-VLM: Unified Vision-Language Model designed for HUman roBOt interaction tasks [5.057755436092344]
Human-robot interaction is an exciting task that aims to guide robots to follow instructions from humans.
HuBo-VLM is proposed to tackle perception tasks associated with human-robot interaction.
arXiv Detail & Related papers (2023-08-24T03:47:27Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations [16.321053835017942]
We present a system for automatically generating executable robot control programs from human task demonstrations in virtual reality (VR).
We leverage common-sense knowledge and game engine-based physics to semantically interpret human VR demonstrations.
We demonstrate our approach in the context of force-sensitive fetch-and-place for a robotic shopping assistant.
arXiv Detail & Related papers (2023-06-05T09:37:53Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, Manipulation of Open-World Objects (MOO), which leverages a pre-trained vision-language model to extract object-identifying information from the language command and image.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
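As a rough sketch of the general idea in this entry (not MOO's actual pipeline), the snippet below uses an off-the-shelf open-vocabulary detector, OWL-ViT via the Hugging Face transformers library, to localize an object named in a free-form command; the checkpoint name, image path, and query text are assumed placeholders.

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

# Assumed off-the-shelf open-vocabulary detector; the paper's exact pipeline may differ.
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("scene.png")        # RGB image from the robot's camera (placeholder path)
queries = [["a pink stuffed whale"]]   # object named in the language command

inputs = processor(text=queries, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to boxes and scores in image coordinates
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.1, target_sizes=target_sizes)[0]

best = results["scores"].argmax()
print("box:", results["boxes"][best].tolist(), "score:", results["scores"][best].item())
# The detected box (or its center) can then be handed to the manipulation policy.
```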
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Zero-Shot Robot Manipulation from Passive Human Videos [59.193076151832145]
We develop a framework for extracting agent-agnostic action representations from human videos.
Our framework is based on predicting plausible human hand trajectories.
We deploy the trained model zero-shot for physical robot manipulation tasks.
arXiv Detail & Related papers (2023-02-03T21:39:52Z)
- Aligning Robot and Human Representations [50.070982136315784]
We argue that current representation learning approaches in robotics should be studied from the perspective of how well they accomplish the objective of representation alignment.
We mathematically define the problem, identify its key desiderata, and situate current methods within this formalism.
arXiv Detail & Related papers (2023-02-03T18:59:55Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness, or the lack of it, during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Explain yourself! Effects of Explanations in Human-Robot Interaction [10.389325878657697]
Explanations of robot decisions could affect user perceptions, justify a robot's reliability, and increase trust.
The effects on human perceptions of robots that explain their decisions have not been studied thoroughly.
This study demonstrates the need for and potential of explainable human-robot interaction.
arXiv Detail & Related papers (2022-04-09T15:54:27Z)
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematics modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained with human kinematics examples, to generalize over them and generate new and meaningful velocity profiles.
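The GAN itself is not reproduced here; as a simple illustration of what modulating an end-effector velocity profile can mean, the sketch below contrasts a standard minimum-jerk velocity profile with a slower "careful" variant. The minimum-jerk formula is standard, while the distance, durations, and slowdown factor are arbitrary assumptions rather than values from the paper.

```python
import numpy as np

def min_jerk_velocity(distance, duration, n=100):
    """Velocity profile of a minimum-jerk point-to-point motion:
    s(tau) = 10*tau^3 - 15*tau^4 + 6*tau^5, differentiated with respect to time."""
    tau = np.linspace(0.0, 1.0, n)
    return distance / duration * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

# "Careful" transport: same distance, longer duration -> lower, flatter peak velocity.
# The 1.8x slowdown is an illustrative choice, not a value from the paper.
v_normal = min_jerk_velocity(distance=0.4, duration=1.0)
v_careful = min_jerk_velocity(distance=0.4, duration=1.8)
print(f"peak velocity: normal {v_normal.max():.2f} m/s, careful {v_careful.max():.2f} m/s")
```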
arXiv Detail & Related papers (2022-03-29T15:03:05Z)
- Intelligent Motion Planning for a Cost-effective Object Follower Mobile Robotic System with Obstacle Avoidance [0.2062593640149623]
We propose a robotic system that uses robot vision and deep learning to compute the required linear and angular velocities.
The proposed methodology accurately detects the position of a uniquely coloured object under any kind of lighting (a simple illustrative colour-detection sketch follows this entry).
arXiv Detail & Related papers (2021-09-06T19:19:47Z)
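Following up on the last entry, here is a minimal sketch of one simple way to detect a uniquely coloured object and turn its image offset into a velocity command, using HSV thresholding in OpenCV and a proportional controller; the HSV range and gain are assumed placeholder values, and the paper's deep-learning pipeline is not reproduced here.

```python
import cv2
import numpy as np

def track_colored_object(frame_bgr, hsv_low, hsv_high, k_ang=1.5):
    """Detect the blob in the given HSV range and return a simple
    proportional angular-velocity command that centers it in the image."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_low, hsv_high)
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:          # object not visible
        return None
    cx = m["m10"] / m["m00"]     # blob centroid (x, in pixels)
    width = frame_bgr.shape[1]
    error = (cx - width / 2) / (width / 2)   # normalized horizontal offset in [-1, 1]
    return -k_ang * error                    # turn toward the object

# Example with an assumed HSV range for an orange object (placeholder values)
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cmd = track_colored_object(frame,
                           hsv_low=np.array([5, 100, 100]),
                           hsv_high=np.array([15, 255, 255]))
print("angular velocity command:", cmd)
```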
This list is automatically generated from the titles and abstracts of the papers on this site.