Consolidating Kinematic Models to Promote Coordinated Mobile Manipulations
- URL: http://arxiv.org/abs/2108.01264v3
- Date: Tue, 10 Aug 2021 07:57:11 GMT
- Title: Consolidating Kinematic Models to Promote Coordinated Mobile Manipulations
- Authors: Ziyuan Jiao, Zeyu Zhang, Xin Jiang, David Han, Song-Chun Zhu, Yixin
Zhu, Hangxin Liu
- Abstract summary: We construct a Virtual Kinematic Chain (VKC) that consolidates the kinematics of the mobile base, the arm, and the object to be manipulated in mobile manipulations.
A mobile manipulation task is represented by altering the state of the constructed VKC, which can be converted to a motion planning problem.
- Score: 96.03270112422514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We construct a Virtual Kinematic Chain (VKC) that readily consolidates the
kinematics of the mobile base, the arm, and the object to be manipulated in
mobile manipulations. Accordingly, a mobile manipulation task is represented by
altering the state of the constructed VKC, which can be converted to a motion
planning problem, formulated, and solved by trajectory optimization. This new
VKC perspective of mobile manipulation allows a service robot to (i) produce
well-coordinated motions, suitable for complex household environments, and (ii)
perform intricate multi-step tasks while interacting with multiple objects
without an explicit definition of intermediate goals. In simulated experiments,
we validate these advantages by comparing the VKC-based approach with baselines
that solely optimize individual components. The results show that VKC-based
joint modeling and planning improve task success rates and produce more
efficient trajectories.
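The abstract describes the mechanism concretely enough to sketch: stack the base, arm, and object into one configuration vector, state the goal only on the object's joint, and let trajectory optimization coordinate everything through a chain-closure constraint. The Python sketch below illustrates that idea with an invented planar base, 2-link arm, and hinged door; the geometry, costs, and SLSQP solver are illustrative assumptions, not the authors' implementation.

```python
# Minimal VKC-flavored sketch (assumptions: planar base, 2-link arm, hinged
# door; scipy's SLSQP stands in for the paper's trajectory optimizer).
import numpy as np
from scipy.optimize import minimize

T = 10                                   # trajectory waypoints
L1, L2 = 0.6, 0.5                        # arm link lengths
HINGE, DOOR_LEN = np.array([2.0, 1.0]), 0.8

def arm_offset(q1, q2):
    """Gripper position relative to the base for a planar 2-link arm."""
    return (L1 * np.array([np.cos(q1), np.sin(q1)])
            + L2 * np.array([np.cos(q1 + q2), np.sin(q1 + q2)]))

def gripper_pos(q):                      # q = [x, y, q1, q2, door_angle]
    return q[:2] + arm_offset(q[2], q[3])

def handle_pos(phi):
    """Door handle swept on a circle around the hinge."""
    return HINGE + DOOR_LEN * np.array([np.cos(phi), np.sin(phi)])

# Start with the gripper already on the handle and the door closed.
q0 = np.concatenate([handle_pos(0.0) - arm_offset(0.3, 0.3), [0.3, 0.3, 0.0]])
GOAL = np.pi / 3                         # the whole task: open door 60 degrees

def cost(x):                             # smoothness over the entire VKC
    return np.sum(np.diff(x.reshape(T, 5), axis=0) ** 2)

cons = [{"type": "eq", "fun": lambda x: x[:5] - q0},
        {"type": "eq", "fun": lambda x: x.reshape(T, 5)[-1, 4] - GOAL}]
for t in range(T):
    # Chain closure at every waypoint: the gripper stays on the moving
    # handle, which forces base, arm, and door to move in coordination.
    cons.append({"type": "eq",
                 "fun": lambda x, t=t: gripper_pos(x.reshape(T, 5)[t])
                                       - handle_pos(x.reshape(T, 5)[t, 4])})

x0 = np.tile(q0, T) + 1e-3 * np.random.default_rng(0).standard_normal(5 * T)
res = minimize(cost, x0, constraints=cons, method="SLSQP")
print("solved:", res.success, "| final door angle:",
      round(res.x.reshape(T, 5)[-1, 4], 3))
```

Because the goal is stated on the door's joint alone, the optimizer is free to trade base motion against arm motion, which is exactly the coordination the abstract claims.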
Related papers
- CAS-ViT: Convolutional Additive Self-attention Vision Transformers for Efficient Mobile Applications [59.193626019860226]
Vision Transformers (ViTs) mark a revolutionary advance in neural networks thanks to the powerful global-context capability of their token mixer.
We introduce CAS-ViT: Convolutional Additive Self-attention Vision Transformers.
We show that CAS-ViT achieves a competitive performance when compared to other state-of-the-art backbones.
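The summary names the operator but not its exact form; the PyTorch sketch below shows one plausible additive token mixer, where the query/key interaction is an elementwise sum passed through a depthwise convolution and a sigmoid gate, so cost grows linearly with token count. All layer shapes are invented, not CAS-ViT's.

```python
# Hedged sketch of a convolutional additive self-attention token mixer
# (illustrative structure only, not the CAS-ViT operator).
import torch
import torch.nn as nn

class AdditiveTokenMixer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)
        self.k = nn.Conv2d(dim, dim, 1)
        self.v = nn.Conv2d(dim, dim, 1)
        # depthwise conv injects local (convolutional) context
        self.local = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.q(x), self.k(x), self.v(x)
        # additive interaction: a gated sum replaces the QK^T softmax
        context = torch.sigmoid(self.local(q + k))
        return self.proj(context * v)

x = torch.randn(1, 64, 14, 14)           # (batch, channels, height, width)
print(AdditiveTokenMixer(64)(x).shape)   # torch.Size([1, 64, 14, 14])
```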
arXiv Detail & Related papers (2024-08-07T11:33:46Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
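The paper's Riemannian cost and Real-to-Sim reward analysis are not reproduced here; the sketch below only shows the generic closed-loop, sampling-based predictive-control pattern the summary describes, with toy point-pushing dynamics as a stand-in.

```python
# Generic sampling-based predictive pusher (toy dynamics; assumptions, not
# the paper's controller): sample plans, roll out, execute the first action.
import numpy as np

rng = np.random.default_rng(0)
GOAL = np.array([1.0, 0.5])
HORIZON, SAMPLES = 8, 256

def step(obj_xy, push_dir):
    """Toy dynamics: a push moves the object 2 cm along push_dir."""
    return obj_xy + 0.02 * push_dir / (np.linalg.norm(push_dir) + 1e-9)

def rollout_cost(obj_xy, actions):
    for a in actions:
        obj_xy = step(obj_xy, a)
    return np.linalg.norm(obj_xy - GOAL)        # terminal distance to goal

obj = np.array([0.0, 0.0])
for t in range(200):                            # closed loop: replan each step
    plans = rng.normal(size=(SAMPLES, HORIZON, 2))
    costs = [rollout_cost(obj, p) for p in plans]
    best = plans[int(np.argmin(costs))]
    obj = step(obj, best[0])                    # execute only the first action
    if np.linalg.norm(obj - GOAL) < 0.03:
        print(f"reached goal at step {t}")
        break
```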
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- Efficient Task Planning for Mobile Manipulation: a Virtual Kinematic Chain Perspective [88.25410628450453]
We present a Virtual Kinematic Chain perspective to improve task planning efficacy for mobile manipulation.
By consolidating the kinematics of the mobile base, the arm, and the object being manipulated into a single chain, this novel VKC perspective naturally defines abstract actions.
In experiments, we implement a task planner using the Planning Domain Definition Language (PDDL) with VKC.
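To show how a VKC yields abstract actions at the PDDL level, here is an invented, minimal domain in that spirit; every predicate and action name is this sketch's assumption, not the paper's domain, and it is written out from Python to keep the examples in one language.

```python
# Illustrative PDDL-style domain: with a VKC, "grasp" extends the chain with
# the object's kinematics and "actuate" alters one VKC joint, so no
# intermediate base/arm goals need to be spelled out.
DOMAIN = """
(define (domain vkc-mobile-manipulation)
  (:requirements :strips :typing)
  (:types item joint)
  (:predicates (reachable ?o - item)
               (in-chain ?o - item)
               (hand-free)
               (at-goal ?j - joint))
  (:action grasp                         ; consolidate object into the VKC
    :parameters (?o - item)
    :precondition (and (reachable ?o) (hand-free))
    :effect (and (in-chain ?o) (not (hand-free))))
  (:action actuate                       ; alter the object's joint state
    :parameters (?o - item ?j - joint)
    :precondition (in-chain ?o)
    :effect (at-goal ?j))
  (:action release
    :parameters (?o - item)
    :precondition (in-chain ?o)
    :effect (and (hand-free) (not (in-chain ?o)))))
"""
with open("vkc_domain.pddl", "w") as f:
    f.write(DOMAIN)
```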
arXiv Detail & Related papers (2021-08-03T02:49:18Z)
- EAN: Event Adaptive Network for Enhanced Action Recognition [66.81780707955852]
We propose a unified action recognition framework to investigate the dynamic nature of video content.
First, when extracting local cues, we generate dynamic-scale spatiotemporal kernels to adaptively fit diverse events.
Second, to accurately aggregate these cues into a global video representation, we propose a Transformer that mines interactions only among a few selected foreground objects.
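A hedged PyTorch sketch of those two steps follows, with invented sizes and a crude norm-based foreground selector standing in for EAN's learned components: a per-sample dynamic depthwise kernel for local cues, then a Transformer layer over only the selected tokens.

```python
# Sketch of (1) input-conditioned dynamic depthwise convolution and
# (2) Transformer aggregation over a few selected tokens (assumed sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicDepthwise(nn.Module):
    def __init__(self, dim: int, k: int = 3):
        super().__init__()
        self.dim, self.k = dim, k
        self.gen = nn.Linear(dim, dim * k * k)   # kernel generator

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        weights = self.gen(x.mean(dim=(2, 3)))   # one kernel set per sample
        weights = weights.view(b * c, 1, self.k, self.k)
        out = F.conv2d(x.reshape(1, b * c, h, w), weights,
                       padding=self.k // 2, groups=b * c)
        return out.view(b, c, h, w)

dim = 32
mixer = DynamicDepthwise(dim)
attn = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
feats = mixer(torch.randn(2, dim, 8, 8))
tokens = feats.flatten(2).transpose(1, 2)        # (B, H*W, C)
idx = tokens.norm(dim=-1).topk(8, dim=1).indices # crude foreground pick
top = tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, dim))
print(attn(top).shape)                           # torch.Size([2, 8, 32])
```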
arXiv Detail & Related papers (2021-07-22T15:57:18Z)
- Articulated Object Interaction in Unknown Scenes with Whole-Body Mobile Manipulation [16.79185733369416]
We propose a two-stage architecture for autonomous interaction with large articulated objects in unknown environments.
The first stage uses a learned model to estimate the articulated model of a target object from an RGB-D input and predicts an action-conditional sequence of states for interaction.
The second stage comprises a whole-body motion controller to manipulate the object along the generated kinematic plan.
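A structural sketch of such a two-stage pipeline follows; the perception stage is stubbed with a fixed hinge estimate and the controller is a toy first-order tracker, so every interface and name here is an assumption rather than the paper's API.

```python
# Two-stage skeleton: stage 1 estimates an articulation model and a state
# sequence; stage 2 tracks that sequence with a (toy) whole-body controller.
import numpy as np
from dataclasses import dataclass

@dataclass
class ArticulationModel:
    axis_origin: np.ndarray     # a point on the hinge axis
    axis_dir: np.ndarray        # unit direction of the hinge axis
    radius: float               # handle distance from the axis

def stage1_estimate(rgbd) -> tuple[ArticulationModel, np.ndarray]:
    """Stand-in for the learned model: returns a hinge estimate plus an
    action-conditional sequence of door angles to realize."""
    model = ArticulationModel(np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, 0.0, 1.0]), 0.7)
    return model, np.linspace(0.0, np.pi / 2, 20)   # open to 90 degrees

def handle_target(m: ArticulationModel, angle: float) -> np.ndarray:
    """Handle position on the circle swept around the hinge axis."""
    return m.axis_origin + m.radius * np.array(
        [np.cos(angle), np.sin(angle), 0.0])

def stage2_track(model, angles, gain=0.5):
    """Toy whole-body controller: first-order tracking of handle targets."""
    ee = handle_target(model, angles[0])   # end effector starts grasped
    for a in angles[1:]:
        ee = ee + gain * (handle_target(model, a) - ee)
    return ee

model, plan = stage1_estimate(rgbd=None)
print("final end-effector position:", stage2_track(model, plan))
```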
arXiv Detail & Related papers (2021-03-18T21:32:18Z)
- Meta-Reinforcement Learning for Adaptive Motor Control in Changing Robot Dynamics and Environments [3.5309638744466167]
This work develops a meta-learning approach that adapts the control policy on the fly to changing conditions for robust locomotion.
The proposed method constantly updates the interaction model, samples feasible sequences of actions, estimates the resulting state-action trajectories, and then applies the optimal actions to maximize the reward.
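That adapt-then-plan loop is easy to make concrete; below is a toy Python version in which scalar dynamics and a least-squares fit stand in for the paper's meta-learned interaction model (all of which are this sketch's assumptions).

```python
# Adaptive MPC loop: refit the model every step, plan by random shooting,
# execute the first action. Dynamics change midway to force adaptation.
import numpy as np

rng = np.random.default_rng(1)
true_a, true_b = 0.9, 0.5           # unknown dynamics: s' = a*s + b*u
target = 1.0
history, s = [], 0.0
theta = np.array([1.0, 0.0])        # current model estimate [a_hat, b_hat]

for t in range(60):
    if t == 30:                     # environment change the model must track
        true_b = -0.5
    # 1. constantly update the interaction model from recent transitions
    if len(history) >= 5:
        X = np.array([[s0, u] for s0, u, _ in history[-20:]])
        y = np.array([s1 for _, _, s1 in history[-20:]])
        theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # 2. sample action sequences and score the predicted trajectories
    plans = rng.uniform(-1, 1, size=(128, 5))
    def ret(plan, s0=s):
        r = 0.0
        for u in plan:
            s0 = theta[0] * s0 + theta[1] * u
            r -= (s0 - target) ** 2
        return r
    best = plans[int(np.argmax([ret(p) for p in plans]))]
    # 3. apply the first optimal action, observe, store the transition
    s_next = true_a * s + true_b * best[0] + 0.01 * rng.standard_normal()
    history.append((s, best[0], s_next))
    s = s_next
print("final state (target 1.0):", round(s, 3))
```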
arXiv Detail & Related papers (2021-01-19T12:57:12Z)
- Goal-Conditioned End-to-End Visuomotor Control for Versatile Skill Primitives [89.34229413345541]
We propose a conditioning scheme that avoids common pitfalls by learning the controller and its conditioning in an end-to-end manner.
Our model predicts complex action sequences based directly on a dynamic image representation of the robot motion.
We report significant improvements in task success over representative MPC and IL baselines.
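The conditioning scheme can be pictured with a small PyTorch sketch; the architecture below (goal image stacked as extra input channels, one shared encoder, an action-sequence head) is an invented approximation rather than the paper's network, and plain frames stand in for its dynamic-image representation.

```python
# Goal-conditioned visuomotor controller, learned end to end (assumed sizes).
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    def __init__(self, action_dim=7, horizon=10):
        super().__init__()
        self.encoder = nn.Sequential(            # shared conv encoder
            nn.Conv2d(6, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, horizon * action_dim))
        self.horizon, self.action_dim = horizon, action_dim

    def forward(self, obs, goal):
        # conditioning is learned end to end: goal enters as extra channels
        z = self.encoder(torch.cat([obs, goal], dim=1))
        return self.head(z).view(-1, self.horizon, self.action_dim)

policy = GoalConditionedPolicy()
obs = torch.randn(2, 3, 64, 64)                  # current camera frames
goal = torch.randn(2, 3, 64, 64)                 # goal images
print(policy(obs, goal).shape)                   # torch.Size([2, 10, 7])
```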
arXiv Detail & Related papers (2020-03-19T15:04:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.