Building LEGO Using Deep Generative Models of Graphs
- URL: http://arxiv.org/abs/2012.11543v1
- Date: Mon, 21 Dec 2020 18:24:40 GMT
- Title: Building LEGO Using Deep Generative Models of Graphs
- Authors: Rylee Thompson, Elahe Ghalebi, Terrance DeVries, Graham W. Taylor
- Abstract summary: We advocate LEGO as a platform for developing generative models of sequential assembly.
We develop a generative model based on graph-structured neural networks that can learn from human-built structures and produce visually compelling designs.
- Score: 22.926487008829668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models are now used to create a variety of high-quality digital
artifacts. Yet their use in designing physical objects has received far less
attention. In this paper, we advocate for the construction toy, LEGO, as a
platform for developing generative models of sequential assembly. We develop a
generative model based on graph-structured neural networks that can learn from
human-built structures and produce visually compelling designs. Our code is
released at: https://github.com/uoguelph-mlrg/GenerativeLEGO.
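The abstract describes learning from human-built structures to generate assemblies step by step. A minimal sketch (not the authors' released code; the class and method names are illustrative) of how a LEGO build can be represented as a graph that grows one brick at a time, with bricks as nodes and stud connections as edges:

```python
class LegoGraph:
    """A LEGO assembly as a sequentially grown graph (illustrative sketch)."""

    def __init__(self):
        self.nodes = []   # one entry per placed brick: (brick_type, position)
        self.edges = []   # (i, j) pairs: brick j attaches to earlier brick i

    def add_brick(self, brick_type, position, connect_to=()):
        """Place a new brick and record which earlier bricks it attaches to."""
        idx = len(self.nodes)
        self.nodes.append((brick_type, position))
        for j in connect_to:
            self.edges.append((j, idx))
        return idx

# A generative model would predict each (brick_type, position, connect_to)
# decision from the partial graph; here we hard-code a two-brick tower.
g = LegoGraph()
base = g.add_brick("2x4", (0, 0, 0))
top = g.add_brick("2x4", (0, 0, 1), connect_to=(base,))
```

In a graph-structured neural network, each such step would be conditioned on an embedding of the partial graph built so far, rather than hard-coded.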
Related papers
- Shaping Realities: Enhancing 3D Generative AI with Fabrication Constraints [36.65470465480772]
Generative AI tools are becoming more prevalent in 3D modeling, enabling users to manipulate or create new models with text or images as inputs.
These methods focus on the aesthetic quality of the 3D models, refining them to look similar to the prompts provided by the user.
When creating 3D models intended for fabrication, designers need to trade off the aesthetic qualities of a 3D model against its intended physical properties.
arXiv Detail & Related papers (2024-04-15T21:22:57Z)
- 3DGEN: A GAN-based approach for generating novel 3D models from image data [5.767281919406463]
We present 3DGEN, a model that leverages the recent work on both Neural Radiance Fields for object reconstruction and GAN-based image generation.
We show that the proposed architecture can generate plausible meshes for objects of the same category as the training images and compare the resulting meshes with the state-of-the-art baselines.
arXiv Detail & Related papers (2023-12-13T12:24:34Z)
- Breathing New Life into 3D Assets with Generative Repainting [74.80184575267106]
Diffusion-based text-to-image models ignited immense attention from the vision community, artists, and content creators.
Recent works have proposed various pipelines powered by the entanglement of diffusion models and neural fields.
We explore the power of pretrained 2D diffusion models and standard 3D neural radiance fields as independent, standalone tools.
Our pipeline accepts any legacy renderable geometry, such as textured or untextured meshes, and orchestrates the interaction between 2D generative refinement and 3D consistency enforcement tools.
arXiv Detail & Related papers (2023-09-15T16:34:51Z)
- RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model [93.8067369210696]
Text-to-image generation (TTI) refers to the use of models that process text input and generate high-fidelity images from text descriptions.
Diffusion models are one prominent type of generative model that produces images through the systematic introduction of noise over repeated steps.
In the era of large models, scaling up model size and integrating with large language models have further improved the performance of TTI models.
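The "systematic introduction of noise with repeating steps" can be sketched concretely. Below is a minimal, illustrative forward-diffusion process on a scalar (not any specific paper's implementation; the function name and noise schedule are assumptions for illustration):

```python
import math
import random

def forward_diffusion(x0, betas, rng):
    """Return the trajectory x_0, x_1, ..., x_T of the noising process.

    At each step the signal is attenuated and Gaussian noise is mixed in:
    x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps,  eps ~ N(0, 1).
    """
    xs = [x0]
    x = x0
    for beta in betas:
        x = math.sqrt(1.0 - beta) * x + math.sqrt(beta) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

rng = random.Random(0)               # seeded for reproducibility
traj = forward_diffusion(5.0, [0.02] * 100, rng)
# After many steps the original signal is heavily attenuated toward noise;
# a diffusion model is trained to reverse this process step by step.
```

Generating an image (or, in the graph setting below, a graph) amounts to learning the reverse of this noising chain.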
arXiv Detail & Related papers (2023-09-02T03:27:20Z)
- Towards A Visual Programming Tool to Create Deep Learning Models [15.838427479984926]
DeepBlocks is a visual programming tool that allows Deep Learning developers to design, train, and evaluate models without relying on specific programming languages.
We derived design goals from a formative interview with five participants and validated the first implementation of the tool through a typical use case.
arXiv Detail & Related papers (2023-03-22T16:47:48Z)
- Generative Diffusion Models on Graphs: Methods and Applications [50.44334458963234]
Diffusion models, as a novel generative paradigm, have achieved remarkable success in various image generation tasks.
Graph generation is a crucial computational task on graphs with numerous real-world applications.
arXiv Detail & Related papers (2023-02-06T06:58:17Z)
- Break and Make: Interactive Structural Understanding Using LEGO Bricks [61.01136603613139]
We build a fully interactive 3D simulator that allows learning agents to assemble, disassemble and manipulate LEGO models.
We take a first step towards solving this problem using sequence-to-sequence models.
arXiv Detail & Related papers (2022-07-27T18:33:09Z)
- InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images into the latent space of a high-quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z)
- Image2Lego: Customized LEGO Set Generation from Images [50.87935634904456]
We implement a system that generates a LEGO brick model from 2D images.
Models are obtained by algorithmic conversion of the 3D voxelized model to bricks.
We generate step-by-step building instructions and animations for LEGO models of objects and human faces.
arXiv Detail & Related papers (2021-08-19T03:42:58Z)
- Models Genesis [10.929445262793116]
Transfer learning from natural images to medical images has been established as one of the most practical paradigms in deep learning for medical image analysis.
To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis.
Our experiments demonstrate that our Models Genesis significantly outperform learning from scratch and existing pre-trained 3D models in all five target 3D applications.
arXiv Detail & Related papers (2020-04-09T20:37:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.