Neural Shape Compiler: A Unified Framework for Transforming between
Text, Point Cloud, and Program
- URL: http://arxiv.org/abs/2212.12952v2
- Date: Thu, 6 Apr 2023 20:39:16 GMT
- Title: Neural Shape Compiler: A Unified Framework for Transforming between
Text, Point Cloud, and Program
- Authors: Tiange Luo, Honglak Lee, Justin Johnson
- Abstract summary: This paper presents a unified framework to translate between pairs of shape abstractions.
We propose $\textbf{Neural Shape Compiler}$ to model the abstraction transformation as a conditional generation process.
On the Text2Shape, ShapeGlot, ABO, Genre, and Program Synthetic datasets, Neural Shape Compiler shows strengths in text-to-point-cloud, point-cloud-to-text, point-cloud-to-program, and point cloud completion tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D shapes have complementary abstractions from low-level geometry to
part-based hierarchies to languages, which convey different levels of
information. This paper presents a unified framework to translate between pairs
of shape abstractions: $\textit{Text}$ $\Longleftrightarrow$ $\textit{Point
Cloud}$ $\Longleftrightarrow$ $\textit{Program}$. We propose $\textbf{Neural
Shape Compiler}$ to model the abstraction transformation as a conditional
generation process. It converts 3D shapes of three abstract types into unified
discrete shape code, transforms each shape code into code of other abstract
types through the proposed $\textit{ShapeCode Transformer}$, and decodes them
to output the target shape abstraction. Point Cloud code is obtained in a
class-agnostic way by the proposed $\textit{Point}$VQVAE. On Text2Shape,
ShapeGlot, ABO, Genre, and Program Synthetic datasets, Neural Shape Compiler
shows strengths in $\textit{Text}$ $\Longrightarrow$ $\textit{Point Cloud}$,
$\textit{Point Cloud}$ $\Longrightarrow$ $\textit{Text}$, $\textit{Point
Cloud}$ $\Longrightarrow$ $\textit{Program}$, and Point Cloud Completion tasks.
Additionally, Neural Shape Compiler benefits from jointly training on all
heterogeneous data and tasks.
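Read operationally, the abstract describes a three-stage conditional generation pipeline: encode the source abstraction into a discrete shape code, rewrite that code into the target abstraction's code with the ShapeCode Transformer, and decode the result. Below is a minimal sketch of that flow; the class and method names (including the injected codecs and the `generate` interface) are assumptions for illustration, not the paper's released API.

```python
# Illustrative sketch of the Text <-> Point Cloud <-> Program pipeline.
# Every module here is an assumed placeholder: only the overall
# encode -> transform -> decode structure follows the abstract.
import torch.nn as nn


class NeuralShapeCompilerSketch(nn.Module):
    def __init__(self, point_vqvae, text_codec, program_codec, shapecode_transformer):
        super().__init__()
        self.point_vqvae = point_vqvae            # point cloud <-> discrete shape code (PointVQVAE role)
        self.text_codec = text_codec              # text <-> discrete shape code
        self.program_codec = program_codec        # program <-> discrete shape code
        self.transformer = shapecode_transformer  # conditionally generates target codes from source codes

    def text_to_point_cloud(self, text):
        src_code = self.text_codec.encode(text)                           # 1) source abstraction -> code
        tgt_code = self.transformer.generate(src_code, target="points")   # 2) code -> code
        return self.point_vqvae.decode(tgt_code)                          # 3) code -> target abstraction

    def point_cloud_to_program(self, points):
        src_code = self.point_vqvae.encode(points)                        # class-agnostic discrete code
        tgt_code = self.transformer.generate(src_code, target="program")
        return self.program_codec.decode(tgt_code)

    def point_cloud_to_text(self, points):
        src_code = self.point_vqvae.encode(points)
        tgt_code = self.transformer.generate(src_code, target="text")
        return self.text_codec.decode(tgt_code)
```

The design point conveyed by the abstract is that all three abstractions share one unified discrete code space, so a single transformer can serve every pairwise translation and be trained jointly on heterogeneous data and tasks.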
Related papers
- RefComp: A Reference-guided Unified Framework for Unpaired Point Cloud Completion [53.28542050638217]
The unpaired point cloud completion task aims to complete a partial point cloud by using models trained with no ground truth.
Existing unpaired point cloud completion methods are class-aware, i.e., a separate model is needed for each object class.
We propose a novel unpaired point cloud completion framework, namely the Reference-guided Completion (RefComp) framework.
arXiv Detail & Related papers (2025-04-18T16:40:16Z)
- HyperSDFusion: Bridging Hierarchical Structures in Language and Geometry for Enhanced 3D Text2Shape Generation [55.95329424826433]
We propose HyperSDFusion, a dual-branch diffusion model that generates 3D shapes from a given text.
We learn the hierarchical representations of text and 3D shapes in hyperbolic space.
Our method is the first to explore the hyperbolic hierarchical representation for text-to-shape generation.
arXiv Detail & Related papers (2024-03-01T08:57:28Z)
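For context on the hyperbolic representations mentioned in the HyperSDFusion summary above: hierarchical embeddings are commonly learned in the Poincaré ball, whose distance grows rapidly toward the boundary and thus naturally encodes tree-like structure. The standard Poincaré-ball distance (a general formula; the paper's exact parameterization may differ) is
$$ d_{\mathbb{B}}(\mathbf{x},\mathbf{y}) = \operatorname{arcosh}\!\left(1 + \frac{2\,\lVert\mathbf{x}-\mathbf{y}\rVert^{2}}{\bigl(1-\lVert\mathbf{x}\rVert^{2}\bigr)\bigl(1-\lVert\mathbf{y}\rVert^{2}\bigr)}\right). $$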
- NAISR: A 3D Neural Additive Model for Interpretable Shape Representation [10.284366517948929]
We propose a 3D Neural Additive Model for Interpretable Shape Representation ($\texttt{NAISR}$) for scientific shape discovery.
Our approach captures shape population trends and allows for patient-specific predictions through shape transfer.
Our experiments demonstrate that $\textit{Starman}$ achieves excellent shape reconstruction performance while retaining interpretability.
arXiv Detail & Related papers (2023-03-16T11:18:04Z)
- PatchRD: Detail-Preserving Shape Completion by Learning Patch Retrieval and Deformation [59.70430570779819]
We introduce a data-driven shape completion approach that focuses on completing geometric details of missing regions of 3D shapes.
Our key insight is to copy and deform patches from the partial input to complete missing regions.
We leverage repeating patterns by retrieving patches from the partial input, and learn global structural priors by using a neural network to guide the retrieval and deformation steps.
arXiv Detail & Related papers (2022-07-24T18:59:09Z)
- ShapeCrafter: A Recursive Text-Conditioned 3D Shape Generation Model [16.431391515731367]
Existing methods to generate text-conditioned 3D shapes consume an entire text prompt to generate a 3D shape in a single step.
We introduce a method to generate a 3D shape distribution conditioned on an initial phrase, that gradually evolves as more phrases are added.
Results show that our method can generate shapes consistent with text descriptions, and shapes evolve gradually as more phrases are added.
arXiv Detail & Related papers (2022-07-19T17:59:01Z)
- Towards Implicit Text-Guided 3D Shape Generation [81.22491096132507]
This work explores the challenging task of generating 3D shapes from text.
We propose a new approach for text-guided 3D shape generation, capable of producing high-fidelity shapes with colors that match the given text description.
arXiv Detail & Related papers (2022-03-28T10:20:03Z)
- SPAGHETTI: Editing Implicit Shapes Through Part Aware Generation [85.09014441196692]
We introduce a method for $\mathbf{E}$diting $\mathbf{I}$mplicit $\mathbf{S}$hapes $\mathbf{T}$hrough part-aware generation.
Our architecture allows implicit shapes to be manipulated by transforming, interpolating, and combining shape segments.
arXiv Detail & Related papers (2022-01-31T12:31:41Z)
- Parts2Words: Learning Joint Embedding of Point Clouds and Texts by Bidirectional Matching between Parts and Words [32.47815081044594]
We propose to learn joint embedding of point clouds and texts by bidirectional matching between parts from shapes and words from texts.
Specifically, we first segment the point clouds into parts, and then leverage an optimal transport method to match parts and words in an optimized feature space.
Experiments demonstrate that our method achieves a significant improvement in accuracy over state-of-the-art methods on multi-modal retrieval tasks.
arXiv Detail & Related papers (2021-07-05T08:55:34Z)
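As a concrete illustration of the optimal-transport matching step summarized above, the sketch below aligns part embeddings with word embeddings via a plain entropic Sinkhorn iteration; the function name, cost choice, and hyperparameters are assumptions of this sketch rather than the Parts2Words implementation.

```python
# Illustrative Sinkhorn-based matching between part features and word features.
# Names and hyperparameters are assumptions for this sketch, not Parts2Words code.
import numpy as np


def sinkhorn_match(part_feats: np.ndarray, word_feats: np.ndarray,
                   eps: float = 0.05, n_iters: int = 100) -> np.ndarray:
    """Return a soft transport plan (n_parts x n_words) aligning parts to words."""
    # Cosine cost: low cost where a part embedding is similar to a word embedding.
    p = part_feats / np.linalg.norm(part_feats, axis=1, keepdims=True)
    w = word_feats / np.linalg.norm(word_feats, axis=1, keepdims=True)
    cost = 1.0 - p @ w.T

    # Uniform marginals: each part / word carries equal mass.
    a = np.full(cost.shape[0], 1.0 / cost.shape[0])
    b = np.full(cost.shape[1], 1.0 / cost.shape[1])

    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):         # alternating Sinkhorn scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    return np.diag(u) @ K @ np.diag(v)


if __name__ == "__main__":
    # Example usage with random features: 4 parts, 6 words, 32-dim embeddings.
    rng = np.random.default_rng(0)
    plan = sinkhorn_match(rng.normal(size=(4, 32)), rng.normal(size=(6, 32)))
    print(plan.shape, plan.sum())    # (4, 6), total mass ~= 1.0
```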
- ShapeAssembly: Learning to Generate Programs for 3D Shape Structure Synthesis [38.27280837835169]
We propose ShapeAssembly, a domain-specific "assembly-language" for 3D shape structures.
We show how to extract ShapeAssembly programs from existing shape structures in the PartNet dataset.
We evaluate our approach by comparing shapes output by our generated programs to those from other recent shape structure models.
arXiv Detail & Related papers (2020-09-17T02:26:45Z)
- PT2PC: Learning to Generate 3D Point Cloud Shapes from Part Tree Conditions [66.87405921626004]
This paper investigates the novel problem of generating 3D shape point cloud geometry from a symbolic part tree representation.
We propose a conditional GAN "part tree"-to-"point cloud" model (PT2PC) that disentangles the structural and geometric factors.
arXiv Detail & Related papers (2020-03-19T08:27:25Z)
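To make the symbolic part-tree conditioning above concrete, here is a minimal sketch of a part tree and a toy sampler that keeps the structural factor (which parts exist) separate from a geometric noise factor; the node layout and the sampler are illustrative assumptions, not PT2PC's actual conditional GAN architecture.

```python
# Illustrative part-tree structure for "part tree"-to-"point cloud" conditioning.
# The node fields and the toy sampler are assumptions for this sketch, not PT2PC's model.
from dataclasses import dataclass, field
from typing import List
import numpy as np


@dataclass
class PartNode:
    label: str                               # semantic part name, e.g. "chair_back"
    children: List["PartNode"] = field(default_factory=list)


def leaves(node: PartNode) -> List[PartNode]:
    return [node] if not node.children else [p for c in node.children for p in leaves(c)]


def sample_point_cloud(tree: PartNode, z: np.ndarray, pts_per_part: int = 256) -> np.ndarray:
    """Toy stand-in for a structure-conditioned generator: the tree fixes which parts
    exist (structural factor), while z perturbs their geometry (geometric factor)."""
    rng = np.random.default_rng(0)           # fixed seed for a reproducible toy example
    clouds = []
    for i, part in enumerate(leaves(tree)):
        center = rng.normal(size=3) + 0.1 * z[i % len(z)]           # part placement
        clouds.append(center + 0.05 * rng.normal(size=(pts_per_part, 3)))
    return np.concatenate(clouds, axis=0)


if __name__ == "__main__":
    chair = PartNode("chair", [PartNode("back"), PartNode("seat"),
                               PartNode("base", [PartNode(f"leg_{i}") for i in range(4)])])
    pc = sample_point_cloud(chair, z=np.random.default_rng(1).normal(size=8))
    print(pc.shape)  # (6 parts * 256 points, 3)
```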