Adaptive Articulated Object Manipulation On The Fly with Foundation Model Reasoning and Part Grounding
- URL: http://arxiv.org/abs/2507.18276v1
- Date: Thu, 24 Jul 2025 10:25:58 GMT
- Title: Adaptive Articulated Object Manipulation On The Fly with Foundation Model Reasoning and Part Grounding
- Authors: Xiaojie Zhang, Yuanfei Wang, Ruihai Wu, Kunqi Xu, Yu Li, Liuyu Xiang, Hao Dong, Zhaofeng He
- Abstract summary: Articulated objects pose diverse manipulation challenges for robots. Since their internal structures are not directly observable, robots must adaptively explore and refine actions to generate successful manipulation trajectories. AdaRPG is a novel framework that leverages foundation models to extract object parts, which exhibit greater local geometric similarity than entire objects.
- Score: 18.52792284421002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Articulated objects pose diverse manipulation challenges for robots. Since their internal structures are not directly observable, robots must adaptively explore and refine actions to generate successful manipulation trajectories. While existing works have attempted cross-category generalization in adaptive articulated object manipulation, two major challenges persist: (1) the geometric diversity of real-world articulated objects complicates visual perception and understanding, and (2) variations in object functions and mechanisms hinder the development of a unified adaptive manipulation strategy. To address these challenges, we propose AdaRPG, a novel framework that leverages foundation models to extract object parts, which exhibit greater local geometric similarity than entire objects, thereby enhancing visual affordance generalization for functional primitive skills. To support this, we construct a part-level affordance annotation dataset to train the affordance model. Additionally, AdaRPG utilizes the common knowledge embedded in foundation models to reason about complex mechanisms and generate high-level control codes that invoke primitive skill functions based on part affordance inference. Simulation and real-world experiments demonstrate AdaRPG's strong generalization ability across novel articulated object categories.
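The abstract describes AdaRPG generating high-level control code that invokes primitive skill functions based on part-affordance inference. A minimal sketch of what such generated control code might look like is shown below; the skill names (`pull`, `rotate`), their signatures, and the plan format are all illustrative assumptions, not AdaRPG's actual API, and the affordance model that would select target parts is omitted.

```python
from typing import Callable

# Hypothetical primitive skills. In AdaRPG these would command a robot and be
# grounded in a learned part-affordance model; here they just log the action.
def pull(part: str, distance_m: float) -> str:
    return f"pull {part} by {distance_m} m"

def rotate(part: str, angle_deg: float) -> str:
    return f"rotate {part} by {angle_deg} deg"

SKILLS: dict[str, Callable[..., str]] = {"pull": pull, "rotate": rotate}

def execute_plan(plan: list[dict]) -> list[str]:
    """Run a foundation-model-generated plan: each step names a primitive
    skill and its arguments (target part chosen via affordance inference)."""
    return [SKILLS[step["skill"]](**step["args"]) for step in plan]

# Example plan a foundation model might emit for opening a microwave door:
plan = [
    {"skill": "pull", "args": {"part": "handle", "distance_m": 0.05}},
    {"skill": "rotate", "args": {"part": "door", "angle_deg": 90.0}},
]
log = execute_plan(plan)
```

The point of this structure is that the foundation model reasons only at the level of skill names and part references, while geometric grounding stays inside the primitive skills.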
Related papers
- Is an object-centric representation beneficial for robotic manipulation ? [45.75998994869714]
Object-centric representation (OCR) has recently become a subject of interest in the computer vision community for learning a structured representation of images and videos. We evaluate one classical object-centric method across several generalization scenarios and compare its results against several state-of-the-art holistic representations. Our results show that existing methods are prone to failure in difficult scenarios involving complex scene structures, whereas object-centric methods help overcome these challenges.
arXiv Detail & Related papers (2025-06-24T08:23:55Z) - Learning Video Generation for Robotic Manipulation with Collaborative Trajectory Control [72.00655365269]
We present RoboMaster, a novel framework that models inter-object dynamics through a collaborative trajectory formulation. Unlike prior methods that decompose objects, our core is to decompose the interaction process into three sub-stages: pre-interaction, interaction, and post-interaction. Our method outperforms existing approaches, establishing new state-of-the-art performance in trajectory-controlled video generation for robotic manipulation.
arXiv Detail & Related papers (2025-06-02T17:57:06Z) - IAAO: Interactive Affordance Learning for Articulated Objects in 3D Environments [56.85804719947]
We present IAAO, a framework that builds an explicit 3D model for intelligent agents to gain understanding of articulated objects in their environment through interaction. We first build hierarchical features and label fields for each object state using 3D Gaussian Splatting (3DGS) by distilling mask features and view-consistent labels from multi-view images. We then perform object- and part-level queries on the 3D Gaussian primitives to identify static and articulated elements, estimating global transformations and local articulation parameters along with affordances.
arXiv Detail & Related papers (2025-04-09T12:36:48Z) - ArtGS: Building Interactable Replicas of Complex Articulated Objects via Gaussian Splatting [66.29782808719301]
Building articulated objects is a key challenge in computer vision. Existing methods often fail to effectively integrate information across different object states. We introduce ArtGS, a novel approach that leverages 3D Gaussians as a flexible and efficient representation.
arXiv Detail & Related papers (2025-02-26T10:25:32Z) - AdaManip: Adaptive Articulated Object Manipulation Environments and Policy Learning [25.331956706253614]
Articulated object manipulation is a critical capability for robots to perform various tasks in real-world scenarios. Previous datasets and simulation environments for articulated objects have primarily focused on simple manipulation mechanisms. We build a novel articulated object manipulation environment and equip it with 9 categories of objects. Based on the environment and objects, we propose an adaptive demonstration collection and 3D visual diffusion-based imitation learning pipeline.
arXiv Detail & Related papers (2025-02-16T13:45:10Z) - GAPartManip: A Large-scale Part-centric Dataset for Material-Agnostic Articulated Object Manipulation [11.880519765681408]
This paper introduces a large-scale part-centric dataset for articulated object manipulation. It features photo-realistic material randomization and detailed annotations of part-oriented, scene-level actionable interaction poses. We propose a novel modular framework that delivers superior and robust performance for generalizable articulated object manipulation.
arXiv Detail & Related papers (2024-11-27T12:11:23Z) - Kinematic-aware Prompting for Generalizable Articulated Object Manipulation with LLMs [53.66070434419739]
Generalizable articulated object manipulation is essential for home-assistant robots.
We propose a kinematic-aware prompting framework that prompts Large Language Models with kinematic knowledge of objects to generate low-level motion waypoints.
Our framework outperforms traditional methods on 8 seen categories and shows a powerful zero-shot capability for 8 unseen articulated object categories.
arXiv Detail & Related papers (2023-11-06T03:26:41Z) - GAMMA: Generalizable Articulation Modeling and Manipulation for Articulated Objects [53.965581080954905]
We propose a novel framework of Generalizable Articulation Modeling and Manipulation for Articulated Objects (GAMMA).
GAMMA learns both articulation modeling and grasp pose affordance from diverse articulated objects with different categories.
Results show that GAMMA significantly outperforms SOTA articulation modeling and manipulation algorithms on unseen and cross-category articulated objects.
arXiv Detail & Related papers (2023-09-28T08:57:14Z) - Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation masks generated by internet-scale foundation models. Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning. Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.