Kinematically-Decoupled Impedance Control for Fast Object Visual
Servoing and Grasping on Quadruped Manipulators
- URL: http://arxiv.org/abs/2307.04918v1
- Date: Mon, 10 Jul 2023 21:51:06 GMT
- Title: Kinematically-Decoupled Impedance Control for Fast Object Visual
Servoing and Grasping on Quadruped Manipulators
- Authors: Riccardo Parosi, Mattia Risiglione, Darwin G. Caldwell, Claudio
Semini, Victor Barasuol
- Abstract summary: We propose a control pipeline for SAG (Searching, Approaching, and Grasping) of objects, based on a decoupled arm kinematic chain and impedance control.
The kinematic decoupling allows for fast end-effector motions and recovery, leading to robust visual servoing.
We demonstrate the performance and robustness of the proposed approach with various experiments on our 140 kg HyQReal quadruped robot equipped with a 7-DoF manipulator arm.
- Score: 18.279073092727025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a control pipeline for SAG (Searching, Approaching, and Grasping)
of objects, based on a decoupled arm kinematic chain and impedance control,
which integrates image-based visual servoing (IBVS). The kinematic decoupling
allows for fast end-effector motions and recovery, leading to robust visual
servoing. The whole approach and pipeline can be generalized to any mobile
platform (wheeled or tracked vehicles), but is most suitable for dynamically
moving quadruped manipulators thanks to their reactivity against disturbances.
The compliance of the impedance controller makes the robot safer for
interactions with humans and the environment. We demonstrate the performance
and robustness of the proposed approach with various experiments on our 140 kg
HyQReal quadruped robot equipped with a 7-DoF manipulator arm. The experiments
consider dynamic locomotion, tracking under external disturbances, and fast
motions of the target object.
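As a rough illustration of the two ingredients named in the abstract (not the paper's actual implementation), a Cartesian impedance law and an IBVS image-feature error can be sketched as follows; all gains, poses, and feature values are made-up example numbers:

```python
import numpy as np

def impedance_wrench(x, x_dot, x_des, x_des_dot, K, D):
    """Virtual spring-damper pulling the end-effector toward a desired
    pose: F = K (x_d - x) + D (x'_d - x').  Compliance comes from the
    finite stiffness K rather than a stiff position loop."""
    return K @ (x_des - x) + D @ (x_des_dot - x_dot)

def ibvs_error(features, features_des):
    """IBVS regulates the error between measured and desired image
    features (here, pixel coordinates of tracked points)."""
    return features - features_des

# Example: 3-DoF translational impedance with diagonal gains (illustrative values)
K = np.diag([300.0, 300.0, 300.0])  # stiffness, N/m
D = np.diag([30.0, 30.0, 30.0])     # damping, N*s/m

x         = np.array([0.10, 0.00, 0.50])  # current end-effector position, m
x_dot     = np.zeros(3)                   # current end-effector velocity, m/s
x_des     = np.array([0.20, 0.00, 0.55])  # target grasp position, m
x_des_dot = np.zeros(3)

F = impedance_wrench(x, x_dot, x_des, x_des_dot, K, D)
print(F)  # -> [30.  0. 15.]
```

The spring-damper gains trade tracking accuracy against safety during contact; lower stiffness makes interaction with humans and the environment more forgiving, as the abstract notes.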
Related papers
- Sitcom-Crafter: A Plot-Driven Human Motion Generation System in 3D Scenes [83.55301458112672]
Sitcom-Crafter is a system for human motion generation in 3D space.
Central to the function generation modules is our novel 3D scene-aware human-human interaction module.
Augmentation modules encompass plot comprehension for command generation, motion synchronization for seamless integration of different motion types.
arXiv Detail & Related papers (2024-10-14T17:56:19Z)
- Vision Transformers for End-to-End Vision-Based Quadrotor Obstacle Avoidance [13.467819526775472]
We demonstrate the capabilities of an attention-based end-to-end approach for high-speed vision-based quadrotor obstacle avoidance.
We train and compare convolutional, U-Net, and recurrent architectures against vision transformer (ViT) models for depth image-to-control in high-fidelity simulation.
arXiv Detail & Related papers (2024-05-16T18:36:43Z)
- Visual Whole-Body Control for Legged Loco-Manipulation [22.50054654508986]
We study the problem of mobile manipulation using legged robots equipped with an arm.
We propose a framework that can conduct the whole-body control autonomously with visual observations.
arXiv Detail & Related papers (2024-03-25T17:26:08Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- TLControl: Trajectory and Language Control for Human Motion Synthesis [68.09806223962323]
We present TLControl, a novel method for realistic human motion synthesis.
It incorporates both low-level Trajectory and high-level Language semantics controls.
It is practical for interactive and high-quality animation generation.
arXiv Detail & Related papers (2023-11-28T18:54:16Z)
- Learning Low-Frequency Motion Control for Robust and Dynamic Robot Locomotion [10.838285018473725]
We demonstrate robust and dynamic locomotion with a learned motion controller executing at as low as 8 Hz on a real ANYmal C quadruped.
The robot is able to robustly and repeatably achieve a high heading velocity of 1.5 m/s, traverse uneven terrain, and resist unexpected external perturbations.
arXiv Detail & Related papers (2022-09-29T15:55:33Z)
- Unified Control Framework for Real-Time Interception and Obstacle Avoidance of Fast-Moving Objects with Diffusion Variational Autoencoder [2.5642257132861923]
Real-time interception of fast-moving objects by robotic arms in dynamic environments poses a formidable challenge.
This paper introduces a unified control framework to address the challenge by simultaneously intercepting dynamic objects and avoiding moving obstacles.
arXiv Detail & Related papers (2022-09-27T18:46:52Z)
- QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars [80.05743236282564]
Real-time tracking of human body motion is crucial for immersive experiences in AR/VR.
We present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers.
We show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments.
arXiv Detail & Related papers (2022-09-20T00:25:54Z)
- VAE-Loco: Versatile Quadruped Locomotion by Learning a Disentangled Gait Representation [78.92147339883137]
We show that learning a latent space capturing the key stance phases constituting a particular gait is pivotal in increasing controller robustness.
We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, footstep height and full stance duration.
The use of a generative model facilitates the detection and mitigation of disturbances to provide a versatile and robust planning framework.
arXiv Detail & Related papers (2022-05-02T19:49:53Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- On robot compliance. A cerebellar control approach [0.0]
The work presented here is a novel biological approach for the compliant control of a robotic arm in real time (RT).
We integrate a spiking cerebellar network at the core of a feedback control loop performing torque-driven control.
We show that our compliant approach exceeds the accuracy of the default factory-installed position control in a set of tasks used for addressing cerebellar motor behavior.
arXiv Detail & Related papers (2020-03-02T17:06:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.