ManipLLM: Embodied Multimodal Large Language Model for Object-Centric
Robotic Manipulation
- URL: http://arxiv.org/abs/2312.16217v1
- Date: Sun, 24 Dec 2023 06:38:11 GMT
- Title: ManipLLM: Embodied Multimodal Large Language Model for Object-Centric
Robotic Manipulation
- Authors: Xiaoqi Li, Mingxu Zhang, Yiran Geng, Haoran Geng, Yuxing Long, Yan
Shen, Renrui Zhang, Jiaming Liu, Hao Dong
- Abstract summary: We introduce an innovative approach for robot manipulation that leverages the robust reasoning capabilities of Multimodal Large Language Models (MLLMs).
By fine-tuning the injected adapters, we preserve the inherent common sense and reasoning ability of the MLLMs while equipping them with the ability for manipulation.
Experiments in simulation and the real world show the promising performance of ManipLLM.
- Score: 22.071450379253235
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robot manipulation relies on accurately predicting contact points and
end-effector directions to ensure successful operation. However, learning-based
robot manipulation, trained on a limited set of categories within a simulator, often
struggles to generalize, especially when confronted with a wide range of
categories. Therefore, we introduce an innovative approach for robot
manipulation that leverages the robust reasoning capabilities of Multimodal
Large Language Models (MLLMs) to enhance the stability and generalization of
manipulation. By fine-tuning the injected adapters, we preserve the inherent
common sense and reasoning ability of the MLLMs while equipping them with the
ability for manipulation. The fundamental insight lies in the introduced
fine-tuning paradigm, encompassing object category understanding, affordance
prior reasoning, and object-centric pose prediction to stimulate the reasoning
ability of the MLLM in manipulation. During inference, our approach uses an RGB
image and a text prompt to predict the end-effector's pose in a chain-of-thought manner.
After the initial contact is established, an active impedance adaptation policy
is introduced to plan the upcoming waypoints in a closed-loop manner. Moreover,
in the real world, we design a test-time adaptation (TTA) strategy for manipulation
to enable the model to better adapt to the current real-world scene configuration.
Experiments in simulation and the real world show the promising performance of
ManipLLM. More details and demonstrations can be found at
https://sites.google.com/view/manipllm.
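The abstract describes a two-stage pipeline: an adapter-tuned MLLM first predicts a contact point and end-effector direction from an RGB image and a text prompt via chain-of-thought prompting, and an active impedance adaptation policy then plans the subsequent waypoints in a closed loop. The sketch below is a minimal, hypothetical illustration of that flow; `query_mllm`, `EndEffectorPose`, `impedance_waypoints`, and all constants are assumed placeholders, not the paper's actual interfaces.
```python
# Minimal sketch of the inference flow described in the abstract.
# The MLLM call and the waypoint policy are hypothetical stand-ins.
from dataclasses import dataclass
import numpy as np


@dataclass
class EndEffectorPose:
    contact_xyz: np.ndarray   # 3D contact point in the robot frame
    direction: np.ndarray     # unit approach direction for the gripper


def query_mllm(rgb: np.ndarray, prompt: str) -> EndEffectorPose:
    """Hypothetical call into the adapter-tuned MLLM.

    The real model would answer a chain-of-thought prompt (object category ->
    affordance prior -> contact point + direction); here we return a dummy pose.
    """
    return EndEffectorPose(contact_xyz=np.array([0.5, 0.0, 0.3]),
                           direction=np.array([0.0, 0.0, -1.0]))


def impedance_waypoints(start: np.ndarray, direction: np.ndarray,
                        stiffness: float = 200.0, steps: int = 10,
                        step_size: float = 0.01) -> list[np.ndarray]:
    """Stand-in for the active impedance adaptation policy after initial contact.

    Each step advances along the commanded direction, scaled by a compliance
    term: higher stiffness tracks the direction closely, lower stiffness yields
    softer, shorter motions.
    """
    waypoints, pos = [], start.copy()
    compliance = stiffness / (stiffness + 50.0)   # assumed damping constant
    for _ in range(steps):
        pos = pos + compliance * step_size * direction
        waypoints.append(pos.copy())
    return waypoints


if __name__ == "__main__":
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder image
    pose = query_mllm(rgb, "Specify the contact point and gripper direction for opening the drawer.")
    plan = impedance_waypoints(pose.contact_xyz, pose.direction)
    print(f"contact={pose.contact_xyz}, first waypoint={plan[0]}")
```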
Related papers
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction.
The experimental results demonstrate that MPI achieves a remarkable improvement of 10% to 64% over the previous state of the art on real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- Self-Corrected Multimodal Large Language Model for End-to-End Robot Manipulation [30.54275273155153]
Multimodal Large Language Models (MLLMs) have shown promise in visual instruction following.
We introduce a Self-Corrected (SC)-MLLM, equipping our model not only to predict end-effector poses but also to autonomously recognize and correct failure actions.
The SC-MLLM agent significantly improves manipulation accuracy compared to the previous state-of-the-art robotic MLLM (ManipLLM).
arXiv Detail & Related papers (2024-05-27T17:58:48Z)
- Track2Act: Predicting Point Tracks from Internet Videos enables Diverse Zero-shot Robot Manipulation [65.46610405509338]
Track2Act predicts tracks of how points in an image should move in future time-steps based on a goal.
We use these 2D track predictions to infer a sequence of rigid transforms of the object to be manipulated, and obtain robot end-effector poses.
We show that this approach of combining scalably learned track prediction with a residual policy enables zero-shot robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- MOKA: Open-Vocabulary Robotic Manipulation through Mark-Based Visual Prompting [106.53784213239479]
We present MOKA (Marking Open-vocabulary Keypoint Affordances), an approach that employs vision language models to solve robotic manipulation tasks.
At the heart of our approach is a compact point-based representation of affordance and motion that bridges the VLM's predictions on RGB images and the robot's motions in the physical world.
We evaluate and analyze MOKA's performance on a variety of manipulation tasks specified by free-form language descriptions.
arXiv Detail & Related papers (2024-03-05T18:08:45Z)
- VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models [38.503337052122234]
Large language models (LLMs) are shown to possess a wealth of actionable knowledge that can be extracted for robot manipulation.
We aim to synthesize robot trajectories for a variety of manipulation tasks given an open-set of instructions and an open-set of objects.
We demonstrate how the proposed framework can benefit from online experiences by efficiently learning a dynamics model for scenes that involve contact-rich interactions.
arXiv Detail & Related papers (2023-07-12T07:40:48Z)
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation masks generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- Programmatically Grounded, Compositionally Generalizable Robotic Manipulation [35.12811184353626]
We show that the conventional pretraining-finetuning pipeline for integrating semantic representations entangles the learning of domain-specific action information.
We propose a modular approach to better leverage pretrained models by exploiting the syntactic and semantic structures of language instructions.
Our model successfully disentangles action and perception, translating to improved zero-shot and compositional generalization in a variety of manipulation behaviors.
arXiv Detail & Related papers (2023-04-26T20:56:40Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)