Active-Perceptive Motion Generation for Mobile Manipulation
- URL: http://arxiv.org/abs/2310.00433v2
- Date: Mon, 4 Mar 2024 12:31:46 GMT
- Title: Active-Perceptive Motion Generation for Mobile Manipulation
- Authors: Snehal Jauhri, Sophie Lueth, Georgia Chalvatzaki
- Abstract summary: We introduce an active perception pipeline for mobile manipulators to generate motions that are informative toward manipulation tasks.
Our proposed approach, ActPerMoMa, generates robot paths in a receding horizon fashion by sampling paths and computing path-wise utilities.
We show the efficacy of our method in simulated experiments with a dual-arm TIAGo++ MoMa robot performing mobile grasping in cluttered scenes with obstacles.
- Score: 6.952045528182883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile Manipulation (MoMa) systems incorporate the benefits of mobility and
dexterity, due to the enlarged space in which they can move and interact with
their environment. However, even when equipped with onboard sensors, e.g., an
embodied camera, extracting task-relevant visual information in unstructured
and cluttered environments, such as households, remains challenging. In this
work, we introduce an active perception pipeline for mobile manipulators to
generate motions that are informative toward manipulation tasks, such as
grasping in unknown, cluttered scenes. Our proposed approach, ActPerMoMa,
generates robot paths in a receding horizon fashion by sampling paths and
computing path-wise utilities. These utilities trade off maximizing the visual
Information Gain (IG) for scene reconstruction against the task-oriented
objective, e.g., grasp success, by maximizing grasp reachability. We show the efficacy of
our method in simulated experiments with a dual-arm TIAGo++ MoMa robot
performing mobile grasping in cluttered scenes with obstacles. We empirically
analyze the contribution of various utilities and parameters, and compare
against representative baselines both with and without active perception
objectives. Finally, we demonstrate the transfer of our mobile grasping
strategy to the real world, indicating a promising direction for
active-perceptive MoMa.
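To make the receding-horizon scheme concrete, here is a minimal toy sketch of sampling base paths and scoring them with a path-wise utility that trades off information gain against grasp reachability. All names, proxy utilities, and weights (sample_paths, information_gain, grasp_reachability, w_ig, w_task) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_paths(start, n_paths=16, horizon=5, step=0.2):
    """Sample straight-line candidate base paths as 2-D waypoint sequences."""
    headings = rng.uniform(0.0, 2.0 * np.pi, size=n_paths)
    return [np.array([start + (i + 1) * step * np.array([np.cos(h), np.sin(h)])
                      for i in range(horizon)])
            for h in headings]

def information_gain(waypoint, unseen_voxels, view_radius=1.0):
    """Proxy IG: number of still-unseen scene voxels visible from a waypoint."""
    return float(np.sum(np.linalg.norm(unseen_voxels - waypoint, axis=1) < view_radius))

def grasp_reachability(waypoint, grasp_xy, arm_reach=0.9):
    """Proxy task utility: fraction of grasp candidates within arm reach."""
    return float(np.mean(np.linalg.norm(grasp_xy - waypoint, axis=1) < arm_reach))

def path_utility(path, unseen_voxels, grasp_xy, w_ig=1.0, w_task=2.0):
    """Path-wise utility trading off exploration (IG) and grasp reachability."""
    ig = sum(information_gain(p, unseen_voxels) for p in path)
    return w_ig * ig + w_task * grasp_reachability(path[-1], grasp_xy)

# Receding horizon: score all sampled paths, execute only the first waypoint,
# then re-sample; a real system would also update the reconstruction (and
# hence the unseen-voxel set) after every step.
base = np.zeros(2)
unseen = rng.uniform(-2.0, 2.0, size=(500, 2))  # stand-in for unknown scene voxels
grasps = rng.uniform(-1.0, 1.0, size=(8, 2))    # stand-in for grasp candidates
for _ in range(3):
    candidates = sample_paths(base)
    best = max(candidates, key=lambda p: path_utility(p, unseen, grasps))
    base = best[0]
```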
Related papers
- Zero-Cost Whole-Body Teleoperation for Mobile Manipulation [8.71539730969424]
MoMa-Teleop is a novel teleoperation method that delegates the base motions to a reinforcement learning agent.
We demonstrate that our approach results in a significant reduction in task completion time across a variety of robots and tasks.
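A minimal sketch of the delegation idea: the operator commands only the arm while a learned policy drives the base. The base_policy stub and the command interface are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def base_policy(obs: np.ndarray) -> np.ndarray:
    """Stand-in for a trained RL policy mapping observations to (v, omega)."""
    return np.tanh(obs[:2])  # placeholder; a real policy would be a network

def teleop_step(operator_arm_cmd: np.ndarray, obs: np.ndarray) -> dict:
    # The operator supplies only the end-effector command; base motion is
    # generated autonomously, so no extra interface or operator effort is needed.
    return {"arm": operator_arm_cmd, "base": base_policy(obs)}

cmd = teleop_step(np.array([0.05, 0.0, 0.02]), np.zeros(4))
```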
arXiv Detail & Related papers (2024-09-23T15:09:45Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of the human motion in a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
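The two-stage recipe (imitate, then distill) can be sketched as a toy distillation loop; the frozen teacher below stands in for the learned imitator, and the encoder/decoder pair for the latent skill representation. Network sizes, losses, and data are illustrative assumptions.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, latent_dim = 32, 12, 8

# frozen "imitator" (teacher) and the distilled latent-skill encoder/decoder
teacher = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(obs_dim + latent_dim, 64), nn.Tanh(),
                        nn.Linear(64, act_dim))

opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
for _ in range(100):
    obs = torch.randn(256, obs_dim)        # stand-in for motion-dataset states
    with torch.no_grad():
        target = teacher(obs)              # actions of the frozen imitator
    z = encoder(obs)                       # latent "skill" code
    pred = decoder(torch.cat([obs, z], dim=-1))
    loss = nn.functional.mse_loss(pred, target)   # distillation objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```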
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Contrastive Learning for Enhancing Robust Scene Transfer in Vision-based Agile Flight [21.728935597793473]
This work proposes an adaptive multi-pair contrastive learning strategy for visual representation learning that enables zero-shot scene transfer and real-world deployment.
We demonstrate the performance of our approach on the task of agile, vision-based quadrotor flight.
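As a generic stand-in for the adaptive multi-pair strategy, the sketch below implements a multi-positive InfoNCE-style loss where every pair of embeddings sharing a scene label counts as a positive; the paper's adaptive pair weighting is not reproduced here.

```python
import torch
import torch.nn.functional as F

def multi_positive_info_nce(z: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    """z: (N, D) embeddings; labels: (N,) scene ids. Same label => positive pair."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                                # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float('-inf'))      # drop self-similarity
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)  # per-anchor softmax
    return -log_prob[pos].mean()                         # average over all positives

loss = multi_positive_info_nce(torch.randn(16, 128), torch.randint(0, 4, (16,)))
```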
arXiv Detail & Related papers (2023-09-18T15:25:59Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT.
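A learnable motion predictor can be sketched as a small network regressing the next box from a history of past boxes, in place of a hand-tuned Kalman filter; the architecture and feature layout below are illustrative assumptions.

```python
import torch
import torch.nn as nn

history, box_dim = 5, 4  # (cx, cy, w, h) over the last 5 frames

predictor = nn.Sequential(
    nn.Flatten(),                      # (B, history * box_dim)
    nn.Linear(history * box_dim, 64),
    nn.ReLU(),
    nn.Linear(64, box_dim),            # regressed next box
)

past_boxes = torch.randn(8, history, box_dim)  # 8 tracklets
next_boxes = predictor(past_boxes)             # used for detection association
```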
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- Causal Policy Gradient for Whole-Body Mobile Manipulation [39.3461626518495]
We introduce Causal MoMa, a new reinforcement learning framework to train policies for typical MoMa tasks.
We evaluate the performance of Causal MoMa on three types of simulated robots across different MoMa tasks.
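The core idea of a causally factored policy gradient can be sketched as follows: each action dimension is updated only with the advantage of the reward terms it can influence. The causal mask and factored rewards below are toy assumptions, not the paper's task setup.

```python
import torch

n_act, n_rew = 3, 2  # e.g., (base_v, base_w, arm_joint) x (navigation, arm reward)
# causal_mask[i, j] = 1 if action dimension i influences reward term j
causal_mask = torch.tensor([[1., 0.],
                            [1., 0.],
                            [0., 1.]])

log_probs = torch.randn(64, n_act, requires_grad=True)  # per-dimension log-probs
advantages = torch.randn(64, n_rew)                     # per-reward-term advantages

# A standard policy gradient would weight every dimension by the summed
# advantage; here each dimension sees only the terms it causally affects.
per_dim_adv = advantages @ causal_mask.t()              # (64, n_act)
loss = -(log_probs * per_dim_adv.detach()).mean()
loss.backward()
```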
arXiv Detail & Related papers (2023-05-04T23:23:47Z)
- N$^2$M$^2$: Learning Navigation for Arbitrary Mobile Manipulation Motions in Unseen and Dynamic Environments [9.079709086741987]
We introduce Neural Navigation for Mobile Manipulation (N$^2$M$^2$), which extends the decomposition of mobile manipulation into navigation and manipulation skills to complex obstacle environments.
The resulting approach can perform unseen, long-horizon tasks in unexplored environments while instantly reacting to dynamic obstacles and environmental changes.
We demonstrate the capabilities of our proposed approach in extensive simulation and real-world experiments on multiple kinematically diverse mobile manipulators.
arXiv Detail & Related papers (2022-06-17T12:52:41Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
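The closed-loop predictive pushing can be sketched as sampling-based predictive control: imagine candidate pushes with a forward model, score them, execute the best, and re-plan. The toy dynamics and reward below are stand-ins for the paper's Real-to-Sim reward analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
goal = np.array([1.0, 0.0])

def simulate_push(obj_xy, action):
    """Toy forward model: a push translates the object with some slippage."""
    return obj_xy + 0.8 * action

obj = np.array([0.0, 0.0])
for _ in range(10):                                    # closed loop: re-plan each step
    actions = rng.uniform(-0.1, 0.1, size=(64, 2))     # sampled candidate pushes
    rewards = [-np.linalg.norm(simulate_push(obj, a) - goal) for a in actions]
    obj = simulate_push(obj, actions[int(np.argmax(rewards))])
```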
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- Property-Aware Robot Object Manipulation: a Generative Approach [57.70237375696411]
In this work, we focus on how to generate robot motion adapted to the hidden properties of the manipulated objects.
We explore the possibility of leveraging Generative Adversarial Networks to synthesize new actions coherent with the properties of the object.
Our results show that Generative Adversarial Nets can be a powerful tool for the generation of novel and meaningful transportation actions.
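A conditional GAN that maps an object-property code to motion parameters gives a flavor of the generative approach; the dimensions, networks, and data below are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

prop_dim, noise_dim, motion_dim = 4, 8, 16   # property code, latent, motion params

G = nn.Sequential(nn.Linear(prop_dim + noise_dim, 64), nn.ReLU(), nn.Linear(64, motion_dim))
D = nn.Sequential(nn.Linear(prop_dim + motion_dim, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for _ in range(100):
    props = torch.rand(32, prop_dim)                   # e.g., weight/fragility codes
    real = torch.randn(32, motion_dim)                 # stand-in for recorded motions
    fake = G(torch.cat([props, torch.randn(32, noise_dim)], -1))
    # discriminator: real motions vs generated ones, conditioned on properties
    d_loss = bce(D(torch.cat([props, real], -1)), torch.ones(32, 1)) + \
             bce(D(torch.cat([props, fake.detach()], -1)), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: produce motions the discriminator accepts for those properties
    g_loss = bce(D(torch.cat([props, fake], -1)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```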
arXiv Detail & Related papers (2021-06-08T14:15:36Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating great potential for transfer to real robots.
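The subgoal-plus-motion-generator decomposition can be sketched as a two-level loop; the trivial straight-line interpolator below stands in for a real motion planner, and subgoal_policy for a trained network.

```python
import numpy as np

def subgoal_policy(obs):
    """Stand-in for the learned policy; a real one would be a trained network."""
    return obs["robot_xy"] + np.array([0.3, 0.0])

def motion_generator(start, subgoal, n_steps=10):
    """Stand-in generator: straight-line interpolation instead of a planner."""
    return [start + t * (subgoal - start) for t in np.linspace(0.0, 1.0, n_steps)]

obs = {"robot_xy": np.zeros(2)}
for _ in range(5):                 # the RL agent acts in subgoal space
    sg = subgoal_policy(obs)
    for wp in motion_generator(obs["robot_xy"], sg):
        obs["robot_xy"] = wp       # execute the generated motion waypoint-by-waypoint
```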
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) is a powerful tool for solving complex robotic tasks.
However, policies trained in simulation often fail when deployed directly in the real world, a challenge known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
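A minimal sketch of a point-cloud observation pipeline with randomization: the sampling size, noise model, and permutation-invariant pooling below are assumptions for illustration, not the paper's encoder.

```python
import numpy as np

rng = np.random.default_rng(2)

def randomize_and_encode(points: np.ndarray, n_sample: int = 256) -> np.ndarray:
    """Subsample, jitter, and pool a point cloud into a fixed-size feature."""
    idx = rng.choice(len(points), size=n_sample, replace=len(points) < n_sample)
    pts = points[idx] + rng.normal(0.0, 0.005, size=(n_sample, 3))  # sensor noise
    # simple permutation-invariant pooling; a real encoder might be PointNet-like
    return np.concatenate([pts.mean(axis=0), pts.max(axis=0), pts.min(axis=0)])

cloud = rng.uniform(-1.0, 1.0, size=(2048, 3))   # stand-in for a depth-camera cloud
obs = randomize_and_encode(cloud)                # fixed-size RL observation
```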
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
- Affective Movement Generation using Laban Effort and Shape and Hidden Markov Models [6.181642248900806]
This paper presents an approach for automatic affective movement generation that makes use of two movement abstractions: 1) Laban movement analysis (LMA), and 2) hidden Markov modeling.
LMA provides a systematic tool for an abstract representation of the kinematic and expressive characteristics of movements.
An HMM abstraction of the identified movements is obtained and used with the desired motion path to generate a novel movement that conveys the target emotion.
The efficacy of the proposed approach in generating movements with recognizable target emotions is assessed using a validated automatic recognition model and a user study.
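The HMM half of the pipeline can be sketched as sampling a state path and emitting poses from per-state Gaussians; the transition matrix and emission parameters below are toy assumptions (the LMA-based abstraction is omitted).

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.9, 0.1, 0.0],       # state-transition matrix
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
means = np.array([[0.0, 0.0],        # per-state 2-D pose means
                  [0.5, 0.2],
                  [1.0, 0.0]])
std = 0.02

state, trajectory = 0, []
for _ in range(50):                  # walk the chain, emitting a pose per step
    trajectory.append(rng.normal(means[state], std))
    state = rng.choice(3, p=A[state])
trajectory = np.asarray(trajectory)  # (50, 2) generated movement
```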
arXiv Detail & Related papers (2020-06-10T21:24:26Z)