A Vision-Based Shared-Control Teleoperation Scheme for Controlling the Robotic Arm of a Four-Legged Robot
- URL: http://arxiv.org/abs/2508.14994v2
- Date: Sat, 11 Oct 2025 16:33:28 GMT
- Title: A Vision-Based Shared-Control Teleoperation Scheme for Controlling the Robotic Arm of a Four-Legged Robot
- Authors: Murilo Vinicius da Silva, Matheus Hipolito Carvalho, Juliano Negri, Thiago Segreto, Gustavo J. G. Lahr, Ricardo V. Godoy, Marcelo Becker
- Abstract summary: This work proposes an intuitive remote control scheme by leveraging a vision-based pose estimation pipeline. The system maps the operator's wrist movements into robotic arm commands to control the robot's arm in real-time. A trajectory planner ensures safe teleoperation by detecting and preventing collisions with both obstacles and the robotic arm itself.
- Score: 0.9699673328328621
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In hazardous and remote environments, robotic systems perform critical tasks demanding improved safety and efficiency. Among these, quadruped robots with manipulator arms offer mobility and versatility for complex operations. However, teleoperating quadruped robots is challenging due to the lack of integrated obstacle detection and intuitive control methods for the robotic arm, increasing collision risks in confined or dynamically changing workspaces. Teleoperation via joysticks or pads can be non-intuitive and demands a high level of expertise due to its complexity, culminating in a high cognitive load on the operator. To address this challenge, a teleoperation approach that directly maps human arm movements to the robotic manipulator offers a simpler and more accessible solution. This work proposes an intuitive remote control by leveraging a vision-based pose estimation pipeline that utilizes an external camera with a machine learning-based model to detect the operator's wrist position. The system maps these wrist movements into robotic arm commands to control the robot's arm in real-time. A trajectory planner ensures safe teleoperation by detecting and preventing collisions with both obstacles and the robotic arm itself. The system was validated on the real robot, demonstrating robust performance in real-time control. This teleoperation approach provides a cost-effective solution for industrial applications where safety, precision, and ease of use are paramount, ensuring reliable and intuitive robotic control in high-risk environments.
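The pipeline the abstract describes, detecting the operator's wrist with an external camera, mapping the detected position into an arm command, and gating motion through a collision check, can be sketched roughly as follows. This is a minimal illustration under assumed conventions: the function names, workspace limits, depth handling, and the sphere-obstacle model are all hypothetical stand-ins, not the paper's actual implementation.

```python
# Hedged sketch: map a detected wrist position (normalized camera
# coordinates plus an estimated depth) to a clamped Cartesian target
# inside the arm's workspace, then reject targets near obstacles.
# All limits and the obstacle model are illustrative assumptions.
import math
from dataclasses import dataclass


@dataclass
class Workspace:
    x: tuple  # (min, max) reach along the forward axis, metres
    y: tuple  # (min, max) lateral range, metres
    z: tuple  # (min, max) height range, metres


def wrist_to_target(u, v, depth, ws):
    """Map normalized image coords (u, v in [0, 1]) and a normalized
    depth estimate to a clamped end-effector target (x, y, z)."""
    x = ws.x[0] + depth * (ws.x[1] - ws.x[0])       # depth -> forward reach
    y = ws.y[0] + (1.0 - u) * (ws.y[1] - ws.y[0])   # image u -> lateral axis
    z = ws.z[0] + (1.0 - v) * (ws.z[1] - ws.z[0])   # image v -> height (inverted)
    clamp = lambda val, lo, hi: max(lo, min(hi, val))
    return (clamp(x, *ws.x), clamp(y, *ws.y), clamp(z, *ws.z))


def collision_free(target, obstacles, margin=0.05):
    """Reject targets within `margin` of any spherical obstacle,
    a crude stand-in for the paper's trajectory planner."""
    for centre, radius in obstacles:
        if math.dist(target, centre) < radius + margin:
            return False
    return True
```

In a real-time loop, the wrist landmark would come from a pose-estimation model each frame, and a command would be sent only when `collision_free` accepts the mapped target; otherwise the arm holds its last safe pose.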
Related papers
- Interpretable Multimodal Gesture Recognition for Drone and Mobile Robot Teleoperation via Log-Likelihood Ratio Fusion [14.332919759770645]
Vision-based gesture recognition has been explored as one method for hands-free teleoperation. We propose a multimodal gesture recognition framework that integrates inertial data from Apple Watches on both wrists with capacitive sensing signals from custom gloves. We show that our framework achieves performance comparable to a state-of-the-art vision-based baseline.
arXiv Detail & Related papers (2026-02-27T05:52:04Z)
- HACTS: a Human-As-Copilot Teleoperation System for Robot Learning [47.9126187195398]
We introduce HACTS (Human-As-Copilot Teleoperation System), a novel system that establishes bilateral, real-time joint synchronization between a robot arm and teleoperation hardware. This simple yet effective feedback mechanism, akin to a steering wheel in autonomous vehicles, enables the human copilot to intervene seamlessly while collecting action-correction data for future learning.
arXiv Detail & Related papers (2025-03-31T13:28:13Z)
- Whole-body End-Effector Pose Tracking [10.426087117345096]
We introduce a whole-body RL formulation for end-effector pose tracking in a large workspace on rough, unstructured terrains. Our proposed method involves a terrain-aware sampling strategy for the robot's initial configuration and end-effector pose commands. On deployment, it achieves a pose-tracking error of 2.64 cm and 3.64 degrees, outperforming existing competitive baselines.
arXiv Detail & Related papers (2024-09-24T12:51:32Z)
- Controlling diverse robots by inferring Jacobian fields with deep networks [48.279199537720714]
Mirroring the complex structures and diverse functions of natural organisms is a long-standing challenge in robotics. We introduce a method that uses deep neural networks to map a video stream of a robot to its visuomotor Jacobian field. Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- Learning Variable Compliance Control From a Few Demonstrations for Bimanual Robot with Haptic Feedback Teleoperation System [5.497832119577795]
Performing dexterous, contact-rich manipulation tasks with rigid robots is a significant challenge in robotics.
Compliance control schemes have been introduced to mitigate these issues by controlling forces via external sensors.
Learning from Demonstrations offers an intuitive alternative, allowing robots to learn manipulations through observed actions.
arXiv Detail & Related papers (2024-06-21T09:03:37Z)
- AnyTeleop: A General Vision-Based Dexterous Robot Arm-Hand Teleoperation System [51.48191418148764]
Vision-based teleoperation can endow robots with human-level intelligence to interact with the environment.
Current vision-based teleoperation systems are designed and engineered towards a particular robot model and deploy environment.
We propose AnyTeleop, a unified and general teleoperation system to support multiple different arms, hands, realities, and camera configurations within a single system.
arXiv Detail & Related papers (2023-07-10T14:11:07Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Vision-Based Safety System for Barrierless Human-Robot Collaboration [0.0]
This paper proposes a safety system that implements Speed and Separation Monitoring (SSM) type of operation.
A deep learning-based computer vision system detects, tracks, and estimates the 3D position of operators close to the robot.
Three different operation modes in which the human and robot interact are presented.
arXiv Detail & Related papers (2022-08-03T12:31:03Z)
- SERA: Safe and Efficient Reactive Obstacle Avoidance for Collaborative Robotic Planning in Unstructured Environments [1.5229257192293197]
We propose a novel methodology for reactive whole-body obstacle avoidance.
Our approach allows a robotic arm to proactively avoid obstacles of arbitrary 3D shapes without direct contact.
Our methodology provides a robust and effective solution for safe human-robot collaboration in non-stationary environments.
arXiv Detail & Related papers (2022-03-24T21:11:43Z)
- Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.