Single RGB-D Camera Teleoperation for General Robotic Manipulation
- URL: http://arxiv.org/abs/2106.14396v2
- Date: Wed, 30 Jun 2021 22:18:50 GMT
- Title: Single RGB-D Camera Teleoperation for General Robotic Manipulation
- Authors: Quan Vuong, Yuzhe Qin, Runlin Guo, Xiaolong Wang, Hao Su, Henrik Christensen
- Abstract summary: We propose a teleoperation system that uses a single RGB-D camera as the human motion capture device.
Our system can perform general manipulation tasks such as cloth folding, hammering, and peg-in-hole insertion with 3 mm clearance.
- Score: 25.345197924615793
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a teleoperation system that uses a single RGB-D camera as the
human motion capture device. Our system can perform general manipulation tasks
such as cloth folding, hammering, and peg-in-hole insertion with 3 mm clearance.
We propose the use of a non-Cartesian oblique coordinate frame, dynamic motion
scaling, and repositioning of operator frames to increase the flexibility of our
teleoperation system. We hypothesize that lowering the barrier of entry to
teleoperation will allow for wider deployment of supervised autonomy systems,
which will in turn generate realistic datasets that unlock the potential of
machine learning for robotic manipulation.
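As a concrete illustration of the abstract's ingredients, below is a minimal Python sketch of how an oblique (non-orthogonal) operator frame and dynamic motion scaling might compose when mapping hand displacements to end-effector steps; the basis, scaling rule, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class ObliqueFrameMapper:
    """Decompose operator hand motion along a (possibly non-orthogonal)
    oblique frame, scale it, and recompose a robot end-effector step.
    All names and constants here are assumptions for illustration."""

    def __init__(self, basis: np.ndarray):
        # Columns of `basis` are the oblique frame's axes, expressed in
        # the robot base frame; they need not be mutually orthogonal.
        self.basis = basis  # shape (3, 3), invertible

    def to_robot_step(self, hand_delta: np.ndarray, hand_speed: float) -> np.ndarray:
        # Oblique coordinates of the hand displacement: solve B @ c = dx.
        coords = np.linalg.solve(self.basis, hand_delta)
        # Dynamic motion scaling (assumed form): attenuate slow, precise
        # motion and amplify fast, gross motion.
        scale = np.clip(hand_speed / 0.2, 0.25, 2.0)
        # Recompose the scaled coordinates along the oblique axes.
        return self.basis @ (scale * coords)

# Example: the second axis is tilted to follow a sloped task surface.
basis = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.3, 1.0]])
mapper = ObliqueFrameMapper(basis)
step = mapper.to_robot_step(np.array([0.0, 0.01, 0.0]), hand_speed=0.05)
print(step)  # a small, attenuated end-effector step
```

Decomposing along oblique axes lets one axis follow a tilted task surface, so scaling acts along task-relevant directions rather than the world axes.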
Related papers
- Sitcom-Crafter: A Plot-Driven Human Motion Generation System in 3D Scenes [83.55301458112672]
Sitcom-Crafter is a system for human motion generation in 3D space.
Central to the motion generation modules is a novel 3D scene-aware human-human interaction module.
Augmentation modules encompass plot comprehension for command generation and motion synchronization for seamless integration of different motion types.
arXiv Detail & Related papers (2024-10-14T17:56:19Z)
- Zero-Cost Whole-Body Teleoperation for Mobile Manipulation [8.71539730969424]
MoMa-Teleop is a novel teleoperation method that delegates the base motions to a reinforcement learning agent.
We demonstrate that our approach results in a significant reduction in task completion time across a variety of robots and tasks.
arXiv Detail & Related papers (2024-09-23T15:09:45Z)
- ACE: A Cross-Platform Visual-Exoskeletons System for Low-Cost Dexterous Teleoperation [25.679146657293778]
Building efficient teleoperation systems across diverse robot platforms has become more crucial than ever.
We develop ACE, a cross-platform visual-exoskeleton system for low-cost dexterous teleoperation.
Compared to previous systems, our single system can generalize to humanoid hands, arm-hands, arm-gripper, and quadruped-gripper systems with high-precision teleoperation.
arXiv Detail & Related papers (2024-08-21T17:48:31Z)
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- Open-TeleVision: Teleoperation with Immersive Active Visual Feedback [17.505318269362512]
Open-TeleVision allows operators to actively perceive the robot's surroundings in a stereoscopic manner.
The system mirrors the operator's arm and hand movements on the robot, creating an immersive experience.
We validate the effectiveness of our system by collecting data and training imitation learning policies on four long-horizon, precise tasks.
arXiv Detail & Related papers (2024-07-01T17:55:35Z)
- AnyTeleop: A General Vision-Based Dexterous Robot Arm-Hand Teleoperation System [51.48191418148764]
Vision-based teleoperation can endow robots with human-level intelligence to interact with the environment.
Current vision-based teleoperation systems are designed and engineered towards a particular robot model and deployment environment.
We propose AnyTeleop, a unified and general teleoperation system to support multiple different arms, hands, realities, and camera configurations within a single system.
arXiv Detail & Related papers (2023-07-10T14:11:07Z)
- A Flexible Framework for Virtual Omnidirectional Vision to Improve Operator Situation Awareness [2.817412580574242]
We present a flexible framework for virtual projections to increase situation awareness, based on a novel method to fuse the views of multiple cameras mounted anywhere on the robot.
We propose a complementary approach to improve scene understanding by fusing camera images and geometric 3D Lidar data to obtain a colorized point cloud.
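A minimal sketch of the colorized-point-cloud fusion step described above, assuming a pinhole camera model; the function name, shapes, and calibration inputs are illustrative, not the paper's code.

```python
import numpy as np

def colorize_point_cloud(points_lidar, image, K, T_cam_lidar):
    """points_lidar: (N, 3) LiDAR points; image: (H, W, 3) RGB;
    K: (3, 3) pinhole intrinsics; T_cam_lidar: (4, 4) LiDAR-to-camera pose.
    Returns per-point RGB colors and a mask of points visible in the image."""
    n = points_lidar.shape[0]
    homogeneous = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ homogeneous.T).T[:, :3]   # into the camera frame
    uv = np.full((n, 2), -1.0)
    front = pts_cam[:, 2] > 0.1                        # points ahead of the camera
    proj = (K @ pts_cam[front].T).T
    uv[front] = proj[:, :2] / proj[:, 2:3]             # perspective divide
    h, w = image.shape[:2]
    valid = front & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
                  & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((n, 3), dtype=image.dtype)
    px = uv[valid].astype(int)
    colors[valid] = image[px[:, 1], px[:, 0]]          # sample RGB per point
    return colors, valid
```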
arXiv Detail & Related papers (2023-02-01T10:40:05Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- RGB-Only Reconstruction of Tabletop Scenes for Collision-Free Manipulator Control [71.51781695764872]
We present a system for collision-free control of a robot manipulator that uses only RGB views of the world.
Perceptual input of a tabletop scene is provided by multiple images of an RGB camera that is either handheld or mounted on the robot end effector.
A NeRF-like process is used to reconstruct the 3D geometry of the scene, from which the Euclidean signed distance function (ESDF) is computed.
A model predictive control algorithm is then used to control the manipulator to reach a desired pose while avoiding obstacles in the ESDF.
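A minimal sketch of how the ESDF could feed the controller's obstacle term, as suggested by the pipeline above; the grid layout, nearest-voxel lookup, and hinge cost are assumptions, not the paper's formulation.

```python
import numpy as np

def esdf_lookup(esdf, origin, voxel_size, point):
    """Nearest-voxel distance lookup; a real system would interpolate.
    esdf: (X, Y, Z) grid of signed distances in meters; origin: world
    position of voxel (0, 0, 0); point: (3,) world position."""
    idx = np.clip(((point - origin) / voxel_size).astype(int),
                  0, np.array(esdf.shape) - 1)
    return esdf[tuple(idx)]

def collision_cost(trajectory, esdf, origin, voxel_size, margin=0.05):
    """Hinge penalty on predicted waypoints that come within `margin`
    meters of an obstacle; zero in free space."""
    dists = np.array([esdf_lookup(esdf, origin, voxel_size, p)
                      for p in trajectory])
    return float(np.sum(np.maximum(0.0, margin - dists) ** 2))
```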
arXiv Detail & Related papers (2022-10-21T01:45:08Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU [25.451864296962288]
We present a novel vision-based hand pose regression network (Transteleop) and an IMU-based arm tracking method.
Transteleop observes the human hand through a low-cost depth camera and generates depth images of paired robot hand poses.
A wearable camera holder enables simultaneous hand-arm control and facilitates the mobility of the whole teleoperation system.
arXiv Detail & Related papers (2020-03-11T10:57:24Z)
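A minimal sketch of one control cycle combining the two input streams the Transteleop summary describes; the callables and shapes are placeholders, not Transteleop's API.

```python
import numpy as np

def teleop_step(depth_image, imu_orientation, hand_pose_net, arm_tracker):
    """One assumed control cycle. `hand_pose_net` maps a depth image of
    the operator's hand to robot hand joint targets; `arm_tracker` maps
    an IMU orientation to arm joint targets. Both are placeholders."""
    hand_joints = hand_pose_net(depth_image)      # e.g. shape (n_hand_dof,)
    arm_joints = arm_tracker(imu_orientation)     # e.g. shape (n_arm_dof,)
    return np.concatenate([arm_joints, hand_joints])
```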
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.