DreamArt: Generating Interactable Articulated Objects from a Single Image
- URL: http://arxiv.org/abs/2507.05763v1
- Date: Tue, 08 Jul 2025 08:06:51 GMT
- Title: DreamArt: Generating Interactable Articulated Objects from a Single Image
- Authors: Ruijie Lu, Yu Liu, Jiaxiang Tang, Junfeng Ni, Yuxiang Wang, Diwen Wan, Gang Zeng, Yixin Chen, Siyuan Huang
- Abstract summary: We introduce DreamArt, a novel framework for generating high-fidelity, interactable articulated assets from single-view images. DreamArt employs a three-stage pipeline: it reconstructs part-segmented and complete 3D object meshes through a combination of image-to-3D generation, mask-prompted 3D segmentation, and part amodal completion. Experimental results demonstrate that DreamArt effectively generates high-quality articulated objects, possessing accurate part shape, high appearance fidelity, and plausible articulation.
- Score: 40.66232231077524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating articulated objects, such as laptops and microwaves, is a crucial yet challenging task with extensive applications in Embodied AI and AR/VR. Current image-to-3D methods primarily focus on surface geometry and texture, neglecting part decomposition and articulation modeling. Meanwhile, neural reconstruction approaches (e.g., NeRF or Gaussian Splatting) rely on dense multi-view or interaction data, limiting their scalability. In this paper, we introduce DreamArt, a novel framework for generating high-fidelity, interactable articulated assets from single-view images. DreamArt employs a three-stage pipeline: first, it reconstructs part-segmented and complete 3D object meshes through a combination of image-to-3D generation, mask-prompted 3D segmentation, and part amodal completion. Second, we fine-tune a video diffusion model to capture part-level articulation priors, leveraging movable part masks as prompts and amodal images to mitigate ambiguities caused by occlusion. Finally, DreamArt optimizes the articulation motion, represented by a dual quaternion, and conducts global texture refinement and repainting to ensure coherent, high-quality textures across all parts. Experimental results demonstrate that DreamArt effectively generates high-quality articulated objects, possessing accurate part shape, high appearance fidelity, and plausible articulation, thereby providing a scalable solution for articulated asset generation. Our project page is available at https://dream-art-0.github.io/DreamArt/.
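The articulation motion in the final stage is parameterized as a dual quaternion, i.e., a pair (q_r, q_d) where the unit quaternion q_r carries the rotation and q_d = 0.5 * t * q_r (a quaternion product with the pure-translation quaternion t) encodes the translation. Below is a minimal sketch, not the authors' code: the NumPy dependency, the function names, and the laptop-hinge example are illustrative assumptions showing how such a dual quaternion could be built for a revolute joint and applied to the vertices of a movable part.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_to_matrix(q):
    """Rotation matrix of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def revolute_dual_quaternion(axis, pivot, angle):
    """Dual quaternion (q_r, q_d) for rotating by `angle` about a hypothetical
    revolute joint through `pivot` with direction `axis`."""
    axis = axis / np.linalg.norm(axis)
    q_r = np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])
    # Rotating about an off-origin pivot induces the translation t = pivot - R @ pivot.
    t = pivot - quat_to_matrix(q_r) @ pivot
    q_d = 0.5 * quat_mul(np.concatenate([[0.0], t]), q_r)  # q_d = 1/2 * t * q_r
    return q_r, q_d

def apply_dual_quaternion(q_r, q_d, points):
    """Transform an (N, 3) array of part vertices by the dual quaternion."""
    R = quat_to_matrix(q_r)
    conj_r = q_r * np.array([1.0, -1.0, -1.0, -1.0])
    t = 2.0 * quat_mul(q_d, conj_r)[1:]  # recover the translation vector
    return points @ R.T + t

# Example: open a laptop lid by 60 degrees about a hinge along the x-axis at the origin.
q_r, q_d = revolute_dual_quaternion(np.array([1.0, 0.0, 0.0]),
                                    np.array([0.0, 0.0, 0.0]),
                                    np.deg2rad(60.0))
lid_vertices = np.array([[0.0, 0.30, 0.00],
                         [0.0, 0.30, 0.02]])
print(apply_dual_quaternion(q_r, q_d, lid_vertices))
```

Dual quaternions are convenient for this kind of optimization because the full rigid motion of a part, including the translation induced by an off-origin hinge, becomes a single smooth eight-parameter quantity.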
Related papers
- ArtLLM: Generating Articulated Assets via 3D LLM [19.814132638278547]
ArtLLM is a novel framework for generating high-quality articulated assets directly from complete 3D meshes. At its core is a 3D multimodal large language model trained on a large-scale articulation dataset. Experiments show that ArtLLM significantly outperforms state-of-the-art methods in both part layout accuracy and joint prediction.
arXiv Detail & Related papers (2026-03-01T15:07:46Z) - UniArt: Unified 3D Representation for Generating 3D Articulated Objects with Open-Set Articulation [14.687459506970301]
UniArt is a diffusion-based framework that synthesizes fully articulated 3D objects from a single image in an end-to-end manner. We introduce a reversible joint-to-voxel embedding, which spatially aligns articulation features with volumetric geometry. Experiments on the PartNet-Mobility benchmark demonstrate that UniArt achieves state-of-the-art mesh quality and articulation accuracy.
arXiv Detail & Related papers (2025-11-26T20:09:11Z) - HiScene: Creating Hierarchical 3D Scenes with Isometric View Generation [50.206100327643284]
HiScene is a novel hierarchical framework that bridges the gap between 2D image generation and 3D object generation. We generate 3D content that aligns with 2D representations while maintaining compositional structure.
arXiv Detail & Related papers (2025-04-17T16:33:39Z) - DecompDreamer: Advancing Structured 3D Asset Generation with Multi-Object Decomposition and Gaussian Splatting [24.719972380079405]
DecompDreamer is a training routine designed to generate high-quality 3D compositions. It decomposes scenes into structured components and their relationships. It effectively generates intricate 3D compositions with superior object disentanglement.
arXiv Detail & Related papers (2025-03-15T03:37:25Z) - PartGen: Part-level 3D Generation and Reconstruction with Multi-View Diffusion Models [63.1432721793683]
We introduce PartGen, a novel approach that generates 3D objects composed of meaningful parts starting from text, an image, or an unstructured 3D object. We evaluate our method on generated and real 3D assets and show that it outperforms segmentation and part-extraction baselines by a large margin.
arXiv Detail & Related papers (2024-12-24T18:59:43Z) - MTFusion: Reconstructing Any 3D Object from Single Image Using Multi-word Textual Inversion [10.912989885886617]
We propose MTFusion, which leverages both image data and textual descriptions for high-fidelity 3D reconstruction.
Our approach consists of two stages. First, we adopt a novel multi-word textual inversion technique to extract a detailed text description.
Then, we use this description and the image to generate a 3D model with FlexiCubes.
arXiv Detail & Related papers (2024-11-19T03:29:18Z) - HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image [94.11473240505534]
We introduce HyperDreamer, a tool for creating 3D content from a single image.
It is hyper-realistic enough for post-generation usage, as users can view, render, and edit the resulting 3D content from a full range of viewpoints.
We demonstrate the effectiveness of HyperDreamer in modeling region-aware materials with high-resolution textures and enabling user-friendly editing.
arXiv Detail & Related papers (2023-12-07T18:58:09Z) - IPDreamer: Appearance-Controllable 3D Object Generation with Complex Image Prompts [90.49024750432139]
We present IPDreamer, a novel method that captures intricate appearance features from complex $\textbf{I}$mage $\textbf{P}$rompts and aligns the synthesized 3D object with these extracted features.
Our experiments demonstrate that IPDreamer consistently generates high-quality 3D objects that align with both the textual and complex image prompts.
arXiv Detail & Related papers (2023-10-09T03:11:08Z) - NAP: Neural 3D Articulation Prior [31.875925637190328]
We propose Neural 3D Articulation Prior (NAP), the first 3D deep generative model to synthesize 3D articulated object models.
To generate articulated objects, we first design a novel articulation tree/graph parameterization and then apply a diffusion-denoising probabilistic model over this representation.
In order to capture both the geometry and the motion structure whose distribution will affect each other, we design a graph-attention denoising network for learning the reverse diffusion process.
arXiv Detail & Related papers (2023-05-25T17:59:35Z) - GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z)