Shape Inference and Grammar Induction for Example-based Procedural
Generation
- URL: http://arxiv.org/abs/2109.10217v1
- Date: Tue, 21 Sep 2021 14:41:56 GMT
- Title: Shape Inference and Grammar Induction for Example-based Procedural
Generation
- Authors: Gillis Hermans, Thomas Winters, Luc De Raedt
- Abstract summary: We propose SIGI, a novel method for inferring shapes and inducing a shape grammar from grid-based 3D building examples.
Applied to Minecraft buildings, we show how the shape grammar can be used to automatically generate new buildings in a similar style.
- Score: 12.789308303237277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designers increasingly rely on procedural generation for automatic generation
of content in various industries. These techniques require extensive knowledge
of the desired content and of how to implement such procedural methods.
Algorithms for learning interpretable generative models from example
content could alleviate both difficulties. We propose SIGI, a novel method for
inferring shapes and inducing a shape grammar from grid-based 3D building
examples. This interpretable grammar is well-suited for co-creative design.
Applied to Minecraft buildings, we show how the shape grammar can be used to
automatically generate new buildings in a similar style.
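To make the idea of a grid-based shape grammar concrete, here is a minimal, illustrative sketch (not the SIGI method itself, and the rule below is hypothetical): shapes are sets of labelled voxels, and a rule rewrites a matched labelled voxel into a new labelled sub-shape, growing a building derivation step by step.

```python
# Illustrative sketch of a grid-based shape grammar (NOT the SIGI algorithm):
# a shape is a set of ((x, y, z), label) voxels; a rule replaces one voxel
# whose label matches the rule's left-hand side with the right-hand-side voxels.

def apply_rule(shape, rule):
    """Rewrite the first voxel labelled rule['lhs'] into rule['rhs'] voxels."""
    for (pos, label) in sorted(shape):
        if label == rule["lhs"]:
            new_shape = set(shape)
            new_shape.discard((pos, label))
            x, y, z = pos
            for (dx, dy, dz), new_label in rule["rhs"]:
                new_shape.add(((x + dx, y + dy, z + dz), new_label))
            return new_shape
    return shape  # rule not applicable, shape unchanged

# Axiom: a single "wall" voxel at the origin.
building = {((0, 0, 0), "wall")}

# Hypothetical rule: the matched wall voxel becomes brick, a new wall voxel
# appears above it (keeping growth going), and a window appears beside it.
grow = {"lhs": "wall",
        "rhs": [((0, 0, 0), "brick"),
                ((0, 1, 0), "wall"),
                ((1, 0, 0), "window")]}

for _ in range(3):  # derive three steps of the grammar
    building = apply_rule(building, grow)

print(sorted(building))
```

Repeated rule application yields a small tower of bricks with windows alongside; an induced grammar in the paper's setting would contain many such rules learned from example buildings rather than written by hand.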
Related papers
- Instruct-SCTG: Guiding Sequential Controlled Text Generation through
Instructions [42.67608830386934]
Instruct-SCTG is a sequential framework that harnesses instruction-tuned language models to generate structurally coherent text.
Our framework generates articles in a section-by-section manner, aligned with the desired human structure using natural language instructions.
arXiv Detail & Related papers (2023-12-19T16:20:49Z)
- EXIM: A Hybrid Explicit-Implicit Representation for Text-Guided 3D Shape Generation [124.27302003578903]
This paper presents a new text-guided technique for generating 3D shapes.
We leverage a hybrid 3D representation, namely EXIM, combining the strengths of explicit and implicit representations.
We demonstrate the applicability of our approach to generate indoor scenes with consistent styles using text-induced 3D shapes.
arXiv Detail & Related papers (2023-11-03T05:01:51Z)
- ZeroForge: Feedforward Text-to-Shape Without 3D Supervision [24.558721379714694]
We present ZeroForge, an approach for zero-shot text-to-shape generation that avoids both pitfalls.
To achieve open-vocabulary shape generation, we require careful architectural adaptation of existing feed-forward approaches.
arXiv Detail & Related papers (2023-06-14T00:38:14Z)
- DreamStone: Image as Stepping Stone for Text-Guided 3D Shape Generation [105.97545053660619]
We present a new text-guided 3D shape generation approach DreamStone.
It uses images as a stepping stone to bridge the gap between text and shape modalities for generating 3D shapes without requiring paired text and 3D data.
Our approach is generic, flexible, and scalable, and it can be easily integrated with various SVR models to expand the generative space and improve the generative fidelity.
arXiv Detail & Related papers (2023-03-24T03:56:23Z)
- TAPS3D: Text-Guided 3D Textured Shape Generation from Pseudo Supervision [114.56048848216254]
We present a novel framework, TAPS3D, to train a text-guided 3D shape generator with pseudo captions.
Based on rendered 2D images, we retrieve relevant words from the CLIP vocabulary and construct pseudo captions using templates.
Our constructed captions provide high-level semantic supervision for generated 3D shapes.
arXiv Detail & Related papers (2023-03-23T13:53:16Z)
- What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components [77.87794937143511]
This paper introduces a collection of hands-on training materials for explaining data-driven predictive models.
These resources cover the three core building blocks of this technique: interpretable representation composition, data sampling and explanation generation.
arXiv Detail & Related papers (2022-09-08T13:33:25Z)
- ShapeCrafter: A Recursive Text-Conditioned 3D Shape Generation Model [16.431391515731367]
Existing methods to generate text-conditioned 3D shapes consume an entire text prompt to generate a 3D shape in a single step.
We introduce a method to generate a 3D shape distribution conditioned on an initial phrase, that gradually evolves as more phrases are added.
Results show that our method can generate shapes consistent with text descriptions, and shapes evolve gradually as more phrases are added.
arXiv Detail & Related papers (2022-07-19T17:59:01Z)
- Autoregressive 3D Shape Generation via Canonical Mapping [92.91282602339398]
Transformers have shown remarkable performance in a variety of generative tasks such as image, audio, and text generation.
In this paper, we aim to further exploit the power of transformers and employ them for the task of 3D point cloud generation.
Our model can be easily extended to multi-modal shape completion as an application for conditional shape generation.
arXiv Detail & Related papers (2022-04-05T03:12:29Z)
- Towards Implicit Text-Guided 3D Shape Generation [81.22491096132507]
This work explores the challenging task of generating 3D shapes from text.
We propose a new approach for text-guided 3D shape generation, capable of producing high-fidelity shapes with colors that match the given text description.
arXiv Detail & Related papers (2022-03-28T10:20:03Z)
- ShapeAssembly: Learning to Generate Programs for 3D Shape Structure Synthesis [38.27280837835169]
We propose ShapeAssembly, a domain-specific "assembly-language" for 3D shape structures.
We show how to extract ShapeAssembly programs from existing shape structures in the PartNet dataset.
We evaluate our approach by comparing shapes output by our generated programs to those from other recent shape structure models.
arXiv Detail & Related papers (2020-09-17T02:26:45Z)
- Discovering Textual Structures: Generative Grammar Induction using Template Trees [17.37350034483191]
We introduce a novel grammar induction algorithm for learning interpretable grammars for generative purposes, called Gitta.
By using existing human-created grammars, we found that the algorithm can reasonably approximate these grammars using only a few examples.
arXiv Detail & Related papers (2020-09-09T19:31:04Z)
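As a toy illustration of template-based grammar induction (not the Gitta algorithm itself; the slot naming scheme is an assumption for this sketch), two example sentences of equal length can be merged into a single template whose differing positions become slots, each slot collecting the values observed in the examples.

```python
# Toy sketch of template-based grammar induction (NOT the Gitta algorithm):
# same-length token sequences are merged into one template; positions where
# the examples agree stay literal, positions where they differ become slots.

def induce_template(examples):
    """Merge equal-length token sequences into a template plus slot values."""
    tokenized = [e.split() for e in examples]
    length = len(tokenized[0])
    assert all(len(t) == length for t in tokenized), "equal length assumed"
    template, slots = [], {}
    for i in range(length):
        values = {t[i] for t in tokenized}
        if len(values) == 1:                 # shared token: keep it literally
            template.append(values.pop())
        else:                                # differing token: open a slot
            name = f"<S{len(slots)}>"
            template.append(name)
            slots[name] = sorted(values)
    return " ".join(template), slots

template, slots = induce_template(["I like cats", "I like dogs"])
print(template)  # "I like <S0>"
print(slots)     # {"<S0>": ["cats", "dogs"]}
```

Recombining slot values then generates new sentences ("I like cats", "I like dogs") beyond the literal examples, which is the generative use of induced grammars that both Gitta and SIGI aim at, in text and 3D shapes respectively.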
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.