6N-DoF Pose Tracking for Tensegrity Robots
- URL: http://arxiv.org/abs/2205.14764v1
- Date: Sun, 29 May 2022 20:55:29 GMT
- Title: 6N-DoF Pose Tracking for Tensegrity Robots
- Authors: Shiyang Lu, William R. Johnson III, Kun Wang, Xiaonan Huang, Joran
Booth, Rebecca Kramer-Bottiglio, Kostas Bekris
- Abstract summary: Tensegrity robots are composed of rigid compressive elements (rods) and flexible tensile elements (e.g., cables).
This work aims to address the pose tracking of tensegrity robots through a markerless, vision-based method.
An iterative optimization process is proposed to estimate the 6-DoF poses of each rigid element of a tensegrity robot from an RGB-D video.
- Score: 5.398092221687385
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensegrity robots, which are composed of rigid compressive elements (rods)
and flexible tensile elements (e.g., cables), have a variety of advantages,
including flexibility, light weight, and resistance to mechanical impact.
Nevertheless, the hybrid soft-rigid nature of these robots also complicates the
ability to localize and track their state. This work aims to address what has
been recognized as a grand challenge in this domain, i.e., the pose tracking of
tensegrity robots through a markerless, vision-based method, as well as novel,
onboard sensors that can measure the length of the robot's cables. In
particular, an iterative optimization process is proposed to estimate the 6-DoF
poses of each rigid element of a tensegrity robot from an RGB-D video as well
as endcap distance measurements from the cable sensors. To ensure the pose
estimates of rigid elements are physically feasible, i.e., they do not result
in collisions between rods or with the environment, physical
constraints are introduced during the optimization. Real-world experiments are
performed with a 3-bar tensegrity robot, which performs locomotion gaits. Given
ground truth data from a motion capture system, the proposed method achieves
less than 1 cm translation error and 3 degrees rotation error, which
significantly outperforms alternatives. At the same time, the approach can
provide pose estimates throughout the robot's motion, while motion capture
often fails due to occlusions.
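As a rough illustration, the sketch below poses each frame's estimate as a cost minimization over the rod poses, combining an endcap-detection term, a cable-length term from the onboard sensors, and a penalty that keeps rods from interpenetrating. It is a minimal sketch under stated assumptions, not the authors' implementation: the rod geometry, cost weights, sampled segment-distance approximation, and choice of BFGS are all guesses.

```python
# Minimal sketch of per-frame pose estimation for a 3-bar tensegrity.
# Geometry, weights, and the collision penalty form are assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation as R

N_RODS, ROD_LEN, ROD_DIAM = 3, 0.36, 0.04   # rod geometry (values assumed)

def endcaps(pose):
    """pose = [x, y, z, rx, ry, rz] (rotation vector) -> the rod's two endcaps."""
    center, rot = pose[:3], R.from_rotvec(pose[3:])
    half = rot.apply([0.0, 0.0, ROD_LEN / 2.0])
    return center - half, center + half

def seg_dist(a, b, samples=20):
    """Approximate closest distance between two rod segments by sampling."""
    t = np.linspace(0.0, 1.0, samples)[:, None]
    pa, pb = a[0] + t * (a[1] - a[0]), b[0] + t * (b[1] - b[0])
    return np.min(np.linalg.norm(pa[:, None] - pb[None], axis=-1))

def cost(x, detections, cable_lengths):
    poses = x.reshape(N_RODS, 6)
    caps = [c for p in poses for c in endcaps(p)]   # 6 endcap positions
    # Visual term: endcap positions detected in the current RGB-D frame.
    c_vis = sum(np.sum((caps[i] - d) ** 2) for i, d in detections.items())
    # Cable term: onboard sensors report endcap-to-endcap distances.
    c_cab = sum((np.linalg.norm(caps[i] - caps[j]) - l) ** 2
                for (i, j), l in cable_lengths.items())
    # Feasibility term: penalize rod-rod interpenetration.
    c_pen = sum(max(0.0, ROD_DIAM - seg_dist(endcaps(poses[a]), endcaps(poses[b]))) ** 2
                for a in range(N_RODS) for b in range(a + 1, N_RODS))
    return c_vis + c_cab + 100.0 * c_pen            # weights are guesses

# Per frame: warm-start from the previous frame's estimate and iterate.
# x_new = minimize(cost, x_prev, args=(detections, cable_lengths), method="BFGS").x
```

Warm-starting each frame from the previous estimate is what turns this single-frame optimization into a tracker, and it is also why such a method can keep producing estimates through occlusions that break motion capture.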
Related papers
- Robot See Robot Do: Imitating Articulated Object Manipulation with Monocular 4D Reconstruction [51.49400490437258]
This work develops a method for imitating articulated object manipulation from a single monocular RGB human demonstration.
We first propose 4D Differentiable Part Models (4D-DPM), a method for recovering 3D part motion from a monocular video.
Given this 4D reconstruction, the robot replicates object trajectories by planning bimanual arm motions that induce the demonstrated object part motion.
We evaluate 4D-DPM's 3D tracking accuracy on ground truth annotated 3D part trajectories and RSRD's physical execution performance on 9 objects across 10 trials each on a bimanual YuMi robot.
arXiv Detail & Related papers (2024-09-26T17:57:16Z)
- Exploring 3D Human Pose Estimation and Forecasting from the Robot's Perspective: The HARPER Dataset [52.22758311559]
We introduce HARPER, a novel dataset for 3D body pose estimation and forecasting in dyadic interactions between users and Spot.
The key-novelty is the focus on the robot's perspective, i.e., on the data captured by the robot's sensors.
The scenario underlying HARPER includes 15 actions, of which 10 involve physical contact between the robot and users.
arXiv Detail & Related papers (2024-03-21T14:53:50Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- Real-time Holistic Robot Pose Estimation with Unknown States [30.41806081818826]
Estimating robot pose from RGB images is a crucial problem in computer vision and robotics.
Previous methods presume full knowledge of the robot's internal state, e.g., ground-truth joint angles.
This work introduces an efficient framework for real-time robot pose estimation from RGB images without requiring known robot states.
arXiv Detail & Related papers (2024-02-08T13:12:50Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
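As a rough picture of what operating on sensorimotor tokens can look like, the sketch below projects vision, proprioception, and action streams to a shared width, masks a fraction of the tokens, and trains a Transformer to reconstruct them. The modality dimensions, masking ratio, and reconstruction loss here are assumptions, not the paper's specification.

```python
# Hedged sketch of masked sensorimotor pre-training (dims/masking assumed).
import torch
import torch.nn as nn

D = 256                                              # shared token width (assumed)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True), num_layers=4)
proj = nn.ModuleDict({"vision": nn.Linear(512, D),   # per-modality projections
                      "proprio": nn.Linear(14, D),   # (feature sizes assumed)
                      "action": nn.Linear(7, D)})
head = nn.Linear(D, D)
mask_token = nn.Parameter(torch.zeros(D))

def pretrain_step(vision, proprio, action, p_mask=0.25):
    # Interleave the three modalities into one token sequence per window.
    toks = torch.stack([proj["vision"](vision), proj["proprio"](proprio),
                        proj["action"](action)], dim=2).flatten(1, 2)
    target = toks.detach()
    mask = torch.rand(toks.shape[:2]) < p_mask
    toks = torch.where(mask[..., None], mask_token, toks)
    # Reconstruct masked tokens from the surrounding sensorimotor context.
    pred = head(encoder(toks))
    return ((pred - target) ** 2)[mask].mean()
```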
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Image-based Pose Estimation and Shape Reconstruction for Robot Manipulators and Soft, Continuum Robots via Differentiable Rendering [20.62295718847247]
State estimation from measured data is crucial for robotic applications, as autonomous systems rely on sensors to capture their motion and localize themselves in the 3D world.
In this work, we achieve image-based robot pose estimation and shape reconstruction from camera images.
We demonstrate that our method of using geometrical shape primitives can achieve high accuracy in shape reconstruction for a soft continuum robot and pose estimation for a robot manipulator.
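A toy version of that idea is sketched below: render a soft silhouette from sphere primitives, compare it to an observed mask, and update the pose by gradient descent. The pinhole intrinsics, soft rasterization, and translation-only pose are illustrative assumptions, not the paper's renderer.

```python
# Toy differentiable-rendering pose fit (camera and primitives assumed).
import torch

K = torch.tensor([[500., 0., 160.], [0., 500., 120.], [0., 0., 1.]])

def soft_silhouette(centers, radius, H=240, W=320, sharp=0.5):
    uv = (K @ centers.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # pinhole projection
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys], -1).reshape(-1, 2)
    d = torch.cdist(pix, uv).min(dim=1).values   # distance to nearest primitive
    return torch.sigmoid(sharp * (radius - d)).reshape(H, W)

base = torch.tensor([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])  # primitive centers
pose = torch.zeros(3, requires_grad=True)                 # translation only (toy)
opt = torch.optim.Adam([pose], lr=1e-2)

def fit_step(observed_mask):                     # (H, W) float mask in [0, 1]
    sil = soft_silhouette(base + pose, radius=20.0)
    loss = ((sil - observed_mask) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```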
arXiv Detail & Related papers (2023-02-27T18:51:29Z)
- Real2Sim2Real Transfer for Control of Cable-driven Robots via a Differentiable Physics Engine [9.268539775233346]
Tensegrity robots exhibit high strength-to-weight ratios and significant deformations.
They are hard to control, however, due to high dimensionality, complex dynamics, and a coupled architecture.
This paper describes a Real2Sim2Real (R2S2R) strategy for modeling tensegrity robots.
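The Real2Sim half of such a strategy reduces to gradient-based system identification through a differentiable simulator. The toy rollout below (a single linear cable on a unit mass, fitted with Adam) is an assumed stand-in for the paper's engine, meant only to show the shape of that loop.

```python
# Toy Real2Sim parameter identification via a differentiable rollout.
import torch

k = torch.tensor(50.0, requires_grad=True)    # cable stiffness to identify
c = torch.tensor(0.5, requires_grad=True)     # damping to identify
opt = torch.optim.Adam([k, c], lr=1e-2)

def rollout(x0, v0, steps, dt=0.01, rest=0.3):
    x, v, xs = x0, v0, []
    for _ in range(steps):
        f = -k * (x - rest) - c * v           # linear cable model (assumed)
        v = v + dt * f                        # unit mass, explicit Euler
        x = x + dt * v
        xs.append(x)
    return torch.stack(xs)

def real2sim_step(x0, v0, real_traj):
    loss = ((rollout(x0, v0, len(real_traj)) - real_traj) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()   # nudge k, c toward reality
    return loss.item()
```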
arXiv Detail & Related papers (2022-09-13T18:51:26Z)
- Regularized Deep Signed Distance Fields for Reactive Motion Generation [30.792481441975585]
Distance-based constraints are fundamental for enabling robots to plan their actions and act safely.
We propose Regularized Deep Signed Distance Fields (ReDSDF), a single neural implicit function that can compute smooth distance fields at any scale.
We demonstrate the effectiveness of our approach in representative simulated tasks for whole-body control (WBC) and safe Human-Robot Interaction (HRI) in shared workspaces.
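Functionally, such a field is a coordinate network that is queried for a distance and differentiated for an avoidance direction. The minimal sketch below assumes a plain MLP and omits ReDSDF's conditioning and regularization.

```python
# Minimal neural distance-field query (plain MLP; ReDSDF specifics omitted).
import torch
import torch.nn as nn

class DistanceField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, xyz):                    # (N, 3) query points
        return self.net(xyz).squeeze(-1)       # one distance per point

field = DistanceField()
pts = torch.randn(4, 3, requires_grad=True)
dist = field(pts)
# The field's spatial gradient supplies repulsion directions for control.
(grad,) = torch.autograd.grad(dist.sum(), pts)
```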
arXiv Detail & Related papers (2022-03-09T14:21:32Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
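One way to read this is as sampling-based predictive control: imagine candidate action sequences in simulation, score their outcomes, and execute only the first action of the best sequence. The simulate and reward callables and the planar push action space below are hypothetical placeholders.

```python
# Hedged sketch of sampling-based predictive pushing (simulator assumed).
import numpy as np

def mpc_push_step(state, simulate, reward, n_samples=64, horizon=10):
    """simulate(state, actions) -> imagined final state; reward scores it."""
    best_r, best_actions = -np.inf, None
    for _ in range(n_samples):
        actions = np.random.uniform(-1.0, 1.0, size=(horizon, 2))  # planar push
        r = reward(simulate(state, actions))   # evaluate the imagined outcome
        if r > best_r:
            best_r, best_actions = r, actions
    return best_actions[0]                     # re-plan after every step
```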
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- Autonomous Navigation of Underactuated Bipedal Robots in Height-Constrained Environments [20.246040671823554]
This paper presents an end-to-end autonomous navigation framework for bipedal robots.
A vertically-actuated Spring-Loaded Inverted Pendulum (vSLIP) model is introduced to capture the robot's coupled dynamics of planar walking and vertical walking height.
A variable walking height controller is leveraged to enable the bipedal robot to maintain stable periodic walking gaits while following the planned trajectory.
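For intuition, SLIP-family models treat the robot as a point mass on a massless spring leg; the planar stance-phase sketch below lets the vertical actuation enter as a time-varying rest length u(t). The parameter values and this reduction are assumptions, not the paper's model.

```python
# Planar spring-loaded inverted pendulum stance phase (parameters assumed).
import numpy as np
from scipy.integrate import solve_ivp

M, K, G = 32.0, 8000.0, 9.81          # mass, leg stiffness, gravity (assumed)

def stance(t, s, foot, u):
    x, z, vx, vz = s
    leg = np.array([x - foot[0], z - foot[1]])
    r = np.linalg.norm(leg)
    f = K * (u(t) - r) * leg / r       # spring force along the actuated leg
    return [vx, vz, f[0] / M, f[1] / M - G]

# u(t) = 1.0 keeps a fixed rest length; varying it changes walking height.
sol = solve_ivp(stance, (0.0, 0.3), [0.0, 0.9, 0.5, 0.0],
                args=(np.array([0.0, 0.0]), lambda t: 1.0), max_step=1e-3)
```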
arXiv Detail & Related papers (2021-09-13T05:36:14Z)
- Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7-DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
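The heart of such a query rule is trading expected information against the motion needed to reach a candidate configuration; a minimal version is sketched below, with the value-per-cost score and the uncertainty input both assumed.

```python
# Minimal cost-sensitive query selection (score and cost model assumed).
import numpy as np

def select_config(candidates, q_now, uncertainty):
    """candidates: (N, 7) joint configurations; q_now: (7,) current pose;
    uncertainty: (N,) model uncertainty at each candidate."""
    move_cost = np.linalg.norm(candidates - q_now, axis=1)  # joint-space travel
    score = uncertainty / (1e-6 + move_cost)                # info per unit motion
    return candidates[np.argmax(score)]
```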
arXiv Detail & Related papers (2021-01-26T16:01:02Z)