Building-GAN: Graph-Conditioned Architectural Volumetric Design Generation
- URL: http://arxiv.org/abs/2104.13316v1
- Date: Tue, 27 Apr 2021 16:49:34 GMT
- Title: Building-GAN: Graph-Conditioned Architectural Volumetric Design Generation
- Authors: Kai-Hung Chang, Chin-Yi Cheng, Jieliang Luo, Shingo Murata, Mehdi Nourbakhsh, Yoshito Tsuji
- Abstract summary: This paper focuses on volumetric design generation conditioned on an input program graph.
Instead of outputting dense 3D voxels, we propose a new 3D representation named voxel graph that is both compact and expressive for building geometries.
Our generator is a cross-modal graph neural network that uses a pointer mechanism to connect the input program graph and the output voxel graph, and the whole pipeline is trained using the adversarial framework.
- Score: 10.024367148266721
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Volumetric design is the first and critical step for professional building
design, where architects not only depict the rough 3D geometry of the building
but also specify the programs to form a 2D layout on each floor. Though 2D
layout generation for a single story has been widely studied, there is no
developed method for multi-story buildings. This paper focuses on volumetric
design generation conditioned on an input program graph. Instead of outputting
dense 3D voxels, we propose a new 3D representation named voxel graph that is
both compact and expressive for building geometries. Our generator is a
cross-modal graph neural network that uses a pointer mechanism to connect the
input program graph and the output voxel graph, and the whole pipeline is
trained using the adversarial framework. The generated designs are evaluated
qualitatively by a user study and quantitatively using three metrics: quality,
diversity, and connectivity accuracy. We show that our model generates
realistic 3D volumetric designs and outperforms previous methods and baselines.
Related papers
- InstructLayout: Instruction-Driven 2D and 3D Layout Synthesis with Semantic Graph Prior [23.536285325566013]
Comprehending natural language instructions is a desirable property for both 2D and 3D layout synthesis systems.
Existing methods implicitly model object joint distributions and express object relations, hindering the controllability of generation.
We introduce InstructLayout, a novel generative framework that integrates a semantic graph prior and a layout decoder.
arXiv Detail & Related papers (2024-07-10T12:13:39Z) - ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning [125.90002884194838]
ConceptGraphs is an open-vocabulary graph-structured representation for 3D scenes.
It is built by leveraging 2D foundation models and fusing their output to 3D by multi-view association.
We demonstrate the utility of this representation through a number of downstream planning tasks.
arXiv Detail & Related papers (2023-09-28T17:53:38Z) - Vitruvio: 3D Building Meshes via Single Perspective Sketches [0.8001739956625484]
We introduce the first deep learning method focused only on buildings that aims to convert a single sketch to a 3D building mesh: Vitruvio.
First, it accelerates the inference process by more than 26% (from 0.5s to 0.37s).
Second, it increases the reconstruction accuracy (measured by the Chamfer Distance) by 18%.
arXiv Detail & Related papers (2022-10-24T22:24:58Z) - Geometric Understanding of Sketches [0.0]
I explore two methods that help a system provide a geometric machine-understanding of sketches and, in turn, help a user accomplish a downstream task.
The first work deals with the interpretation of a 2D line drawing as a graph structure, and illustrates its effectiveness through physical reconstruction by a robot.
In the second work, we test the 3D-geometric understanding of a sketch-based system without explicit access to the information about 3D-geometry.
arXiv Detail & Related papers (2022-04-13T23:55:51Z) - Hierarchical Graph Networks for 3D Human Pose Estimation [50.600944798627786]
Recent 2D-to-3D human pose estimation works tend to utilize the graph structure formed by the topology of the human skeleton.
We argue that this skeletal topology is too sparse to reflect the body structure and suffers from a serious 2D-to-3D ambiguity problem.
We propose a novel graph convolution network architecture, Hierarchical Graph Networks, to overcome these weaknesses.
arXiv Detail & Related papers (2021-11-23T15:09:03Z) - 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z) - Dense Graph Convolutional Neural Networks on 3D Meshes for 3D Object Segmentation and Classification [0.0]
We present new designs of graph convolutional neural networks (GCNs) on 3D meshes for 3D object classification and segmentation.
We use the faces of the mesh as basic processing units and represent a 3D mesh as a graph where each node corresponds to a face.
arXiv Detail & Related papers (2021-06-30T02:17:16Z) - Translational Symmetry-Aware Facade Parsing for 3D Building Reconstruction [11.263458202880038]
In this paper, we present a novel translational symmetry-based approach to improving deep neural networks for facade parsing.
We propose a novel scheme to fuse anchor-free detection into a single-stage network, which enables efficient training and better convergence.
We employ an off-the-shelf rendering engine like Blender to reconstruct the realistic high-quality 3D models using procedural modeling.
arXiv Detail & Related papers (2021-06-02T03:10:51Z) - Improved Modeling of 3D Shapes with Multi-view Depth Maps [48.8309897766904]
We present a general-purpose framework for modeling 3D shapes using CNNs.
Using just a single depth image of the object, we can output a dense multi-view depth map representation of 3D objects.
arXiv Detail & Related papers (2020-09-07T17:58:27Z) - Interactive Annotation of 3D Object Geometry using 2D Scribbles [84.51514043814066]
In this paper, we propose an interactive framework for annotating 3D object geometry from point cloud data and RGB imagery.
Our framework targets naive users without artistic or graphics expertise.
arXiv Detail & Related papers (2020-08-24T21:51:29Z) - House-GAN: Relational Generative Adversarial Networks for Graph-constrained House Layout Generation [59.86153321871127]
The main idea is to encode the constraint into the graph structure of its relational networks.
We have demonstrated the proposed architecture for a new house layout generation problem.
arXiv Detail & Related papers (2020-03-16T03:16:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.