Catch the Ball: Accurate High-Speed Motions for Mobile Manipulators via
Inverse Dynamics Learning
- URL: http://arxiv.org/abs/2003.07489v1
- Date: Tue, 17 Mar 2020 01:33:07 GMT
- Title: Catch the Ball: Accurate High-Speed Motions for Mobile Manipulators via
Inverse Dynamics Learning
- Authors: Ke Dong, Karime Pereida, Florian Shkurti, Angela P. Schoellig
- Abstract summary: Mobile manipulators are deployed in slow-motion collaborative robot scenarios.
In this paper, we consider scenarios where accurate high-speed motions are required.
We introduce a framework for this regime of tasks including two main components.
- Score: 20.655003319777368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile manipulators consist of a mobile platform equipped with one or more
robot arms and are of interest for a wide array of challenging tasks because of
their extended workspace and dexterity. Typically, mobile manipulators are
deployed in slow-motion collaborative robot scenarios. In this paper, we
consider scenarios where accurate high-speed motions are required. We introduce
a framework for this regime of tasks including two main components: (i) a
bi-level motion optimization algorithm for real-time trajectory generation,
which relies on Sequential Quadratic Programming (SQP) and Quadratic
Programming (QP), respectively; and (ii) a learning-based controller optimized
for precise tracking of high-speed motions via a learned inverse dynamics
model. We evaluate our framework with a mobile manipulator platform through
numerous high-speed ball catching experiments, where we show a success rate of
85.33%. To the best of our knowledge, this success rate exceeds the reported
performance of existing related systems and sets a new state of the art.
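The second component above, a controller that tracks high-speed motions via a learned inverse dynamics model, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a hypothetical 1-DOF joint with unknown mass `m_true` and viscous friction `b_true`, fits the inverse mapping from motion to command with least squares, and uses the fit as a feedforward term:

```python
# Minimal sketch (not the paper's system): learn the inverse dynamics
# u = m*qddot + b*qdot of a 1-DOF joint from data, then use the learned
# model as a feedforward command for a desired motion.
import numpy as np

rng = np.random.default_rng(0)
m_true, b_true = 2.0, 0.5  # hypothetical ground-truth parameters

# Collect training data: random commands and the accelerations they produce.
qdot = rng.uniform(-1.0, 1.0, size=200)
u = rng.uniform(-5.0, 5.0, size=200)
qddot = (u - b_true * qdot) / m_true  # forward dynamics of the toy joint

# Fit the linear inverse model u ~ [qddot, qdot] @ theta via least squares.
X = np.column_stack([qddot, qdot])
theta, *_ = np.linalg.lstsq(X, u, rcond=None)

def feedforward(qdot_des, qddot_des):
    """Feedforward command for a desired velocity/acceleration pair."""
    return float(np.dot([qddot_des, qdot_des], theta))

print(theta)                  # recovers approximately [m_true, b_true]
print(feedforward(1.0, 3.0))  # ~ 2.0*3.0 + 0.5*1.0 = 6.5
```

In practice such a learned model replaces or augments an analytic rigid-body model, so that tracking errors from unmodeled effects shrink at high speeds; a real system would use a richer function class (e.g. a neural network) over the full joint state.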
Related papers
- KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills [50.34487144149439]
This paper presents a physics-based humanoid control framework, aiming to master highly-dynamic human behaviors such as Kungfu and dancing. For motion processing, we design a pipeline to extract, filter, correct, and retarget motions, while ensuring compliance with physical constraints. For motion imitation, we formulate a bi-level optimization problem to dynamically adjust the tracking accuracy tolerance. In experiments, we train whole-body control policies to imitate a set of highly-dynamic motions.
arXiv Detail & Related papers (2025-06-15T13:58:53Z) - SAIL: Faster-than-Demonstration Execution of Imitation Learning Policies [9.945756965776932]
Offline Imitation Learning (IL) methods are effective at acquiring complex robotic manipulation skills. Existing IL-trained policies are confined to executing the task at the same speed as shown in the demonstration data. We introduce and formalize the novel problem of enabling faster-than-demonstration execution of visuomotor policies.
arXiv Detail & Related papers (2025-06-13T16:58:20Z) - PALo: Learning Posture-Aware Locomotion for Quadruped Robots [29.582249837902427]
We propose an end-to-end deep reinforcement learning framework for posture-aware locomotion named PALo.
PALo handles simultaneous linear and angular velocity tracking and real-time adjustments of body height, pitch, and roll angles.
PALo achieves agile posture-aware locomotion control in simulated environments and successfully transfers to real-world settings without fine-tuning.
arXiv Detail & Related papers (2025-03-06T14:13:59Z) - Self-Supervised Learning of Grasping Arbitrary Objects On-the-Move [8.445514342786579]
This study introduces three fully convolutional neural network (FCN) models to predict static grasp primitive, dynamic grasp primitive, and residual moving velocity error from visual inputs.
The proposed method achieved the highest grasping accuracy and pick-and-place efficiency.
arXiv Detail & Related papers (2024-11-15T02:59:16Z) - Learning to enhance multi-legged robot on rugged landscapes [7.956679144631909]
Multi-legged robots offer a promising solution for navigating rugged landscapes.
Recent studies have shown that a linear controller can ensure reliable mobility on challenging terrains.
We develop a MuJoCo-based simulator tailored to this robotic platform and use the simulation to develop a reinforcement learning-based control framework.
arXiv Detail & Related papers (2024-09-14T15:53:08Z) - Guided Decoding for Robot On-line Motion Generation and Adaption [44.959409835754634]
We present a novel motion generation approach for robot arms, with high degrees of freedom, in complex settings that can adapt online to obstacles or new via points.
We train a transformer architecture, based on conditional variational autoencoder, on a large dataset of simulated trajectories used as demonstrations.
We show that our model successfully generates motion from different initial and target points and that is capable of generating trajectories that navigate complex tasks across different robotic platforms.
arXiv Detail & Related papers (2024-03-22T14:32:27Z) - TLControl: Trajectory and Language Control for Human Motion Synthesis [68.09806223962323]
We present TLControl, a novel method for realistic human motion synthesis.
It incorporates both low-level Trajectory and high-level Language semantics controls.
It is practical for interactive and high-quality animation generation.
arXiv Detail & Related papers (2023-11-28T18:54:16Z) - Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z) - Simultaneous Contact-Rich Grasping and Locomotion via Distributed
Optimization Enabling Free-Climbing for Multi-Limbed Robots [60.06216976204385]
We present an efficient motion planning framework for simultaneously solving locomotion, grasping, and contact problems.
We demonstrate our proposed framework in hardware experiments, showing that the multi-limbed robot is able to realize various motions, including free-climbing at a 45° slope angle with a much shorter planning time.
arXiv Detail & Related papers (2022-07-04T13:52:10Z) - Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located in poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z) - Consolidating Kinematic Models to Promote Coordinated Mobile
Manipulations [96.03270112422514]
We construct a Virtual Kinematic Chain (VKC) that consolidates the kinematics of the mobile base, the arm, and the object to be manipulated in mobile manipulations.
A mobile manipulation task is represented by altering the state of the constructed VKC, which can be converted to a motion planning problem.
arXiv Detail & Related papers (2021-08-03T02:59:41Z) - Success Weighted by Completion Time: A Dynamics-Aware Evaluation
Criteria for Embodied Navigation [42.978177196888225]
We present Success weighted by Completion Time (SCT), a new metric for evaluating navigation performance for mobile robots.
We also present RRT*-Unicycle, an algorithm for unicycle dynamics that estimates the fastest collision-free path and completion time.
arXiv Detail & Related papers (2021-03-14T20:13:06Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.