Synthetic 3D Data Generation Pipeline for Geometric Deep Learning in
Architecture
- URL: http://arxiv.org/abs/2104.12564v1
- Date: Mon, 26 Apr 2021 13:32:03 GMT
- Title: Synthetic 3D Data Generation Pipeline for Geometric Deep Learning in
Architecture
- Authors: Stanislava Fedorova, Alberto Tono, Meher Shashwat Nigam, Jiayao Zhang,
Amirhossein Ahmadnia, Cecilia Bolognesi, Dominik L. Michels
- Abstract summary: We create a synthetic data generation pipeline that generates an arbitrary amount of 3D data along with the associated 2D and 3D annotations.
The variety of annotations, the flexibility to customize the generated building and dataset parameters make this framework suitable for multiple deep learning tasks.
All code and data are made publicly available.
- Score: 6.383666639192481
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the growing interest in deep learning algorithms and computational
design in the architectural field, the need for large, accessible and diverse
architectural datasets increases. We decided to tackle this problem by
constructing a field-specific synthetic data generation pipeline that generates
an arbitrary amount of 3D data along with the associated 2D and 3D annotations.
The variety of annotations, the flexibility to customize the generated building
and dataset parameters make this framework suitable for multiple deep learning
tasks, including geometric deep learning that requires direct 3D supervision.
In creating our building data generation pipeline, we leveraged architectural
knowledge from experts to construct a framework that is modular and extendable
and provides a sufficient number of class-balanced data samples. Moreover, we
purposefully involve the researcher in the dataset customization, allowing the
introduction of additional building components, material textures, and building
classes, as well as control over the number and type of annotations and the
number of views per 3D model sample. In this way, the framework satisfies
different research requirements and is adaptable to a large variety of tasks.
All code and data are made publicly available.
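To make the customization concrete, here is a minimal Python sketch of a class-balanced generation manifest driven by the parameters the abstract lists (building classes, material textures, annotations, views per model). All class and field names are hypothetical illustrations, not the authors' actual API:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DatasetConfig:
    """Illustrative knobs mirroring the customization the abstract describes."""
    num_buildings: int = 1000
    building_classes: List[str] = field(
        default_factory=lambda: ["house", "church", "skyscraper"])
    material_textures: List[str] = field(
        default_factory=lambda: ["brick", "plaster", "glass"])
    annotations: List[str] = field(
        default_factory=lambda: ["2d_segmentation", "3d_part_labels"])
    views_per_model: int = 8

def build_manifest(cfg: DatasetConfig) -> List[Dict]:
    """Return a class-balanced manifest; each entry would drive one
    procedural-modeling, rendering, and annotation-export job."""
    manifest = []
    for i in range(cfg.num_buildings):
        manifest.append({
            "id": i,
            # round-robin over classes keeps the dataset class-balanced
            "class": cfg.building_classes[i % len(cfg.building_classes)],
            "texture": cfg.material_textures[i % len(cfg.material_textures)],
            "views": cfg.views_per_model,
            "annotations": list(cfg.annotations),
        })
    return manifest

print(build_manifest(DatasetConfig(num_buildings=3)))
```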
Related papers
- ARCH2S: Dataset, Benchmark and Challenges for Learning Exterior Architectural Structures from Point Clouds [0.0]
This paper introduces a semantically enriched, photo-realistic dataset and benchmark of 3D architectural models for semantic segmentation.
It features real-world buildings serving four different purposes, as well as an open architectural landscape in Hong Kong.
arXiv Detail & Related papers (2024-06-03T14:02:23Z)
- Serving Deep Learning Model in Relational Databases [70.53282490832189]
Serving deep learning (DL) models on relational data has become a critical requirement across diverse commercial and scientific domains.
We highlight three pivotal paradigms: the state-of-the-art DL-centric architecture offloads DL computations to dedicated DL frameworks; the potential UDF-centric architecture encapsulates one or more tensor computations into User Defined Functions (UDFs) within the relational database management system (RDBMS); and the potential relation-centric architecture aims to represent large-scale tensor computations through relational operators (a minimal UDF sketch follows this entry).
arXiv Detail & Related papers (2023-10-07T06:01:35Z)
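As a rough illustration of the UDF-centric paradigm, the sketch below registers a Python function as a scalar UDF in SQLite so that inference runs inside the query engine. The linear "model" is a toy stand-in for a real DL inference call, not the paper's system:

```python
import sqlite3

# Toy "model": a linear scorer standing in for real DL inference.
WEIGHT, BIAS = 0.75, -1.0

def predict(x: float) -> float:
    return WEIGHT * x + BIAS

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(1, 2.0), (2, 4.0), (3, 8.0)])

# Register the Python function as a scalar UDF named 'predict';
# the RDBMS, not an external DL framework, drives the per-row computation.
conn.create_function("predict", 1, predict)

for row in conn.execute("SELECT id, predict(value) FROM readings"):
    print(row)
```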
- Robust Geometry-Preserving Depth Estimation Using Differentiable Rendering [93.94371335579321]
We propose a learning framework that trains models to predict geometry-preserving depth without requiring extra data or annotations.
Comprehensive experiments underscore our framework's superior generalization capabilities.
Our innovative loss functions empower the model to autonomously recover domain-specific scale-and-shift coefficients.
arXiv Detail & Related papers (2023-09-18T12:36:39Z)
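The paper above learns to recover scale-and-shift coefficients; as background, a common closed-form baseline recovers them by ordinary least squares. A minimal NumPy sketch on synthetic data (a baseline for intuition, not the paper's method):

```python
import numpy as np

def recover_scale_shift(pred: np.ndarray, target: np.ndarray):
    """Least-squares fit of s, t minimizing ||s * pred + t - target||^2."""
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return s, t

rng = np.random.default_rng(0)
depth = rng.uniform(1.0, 10.0, size=(4, 4))           # ground-truth depth
pred = 0.5 * depth + 2.0 + rng.normal(0, 0.01, depth.shape)  # distorted prediction
s, t = recover_scale_shift(pred, depth)
print(s, t)  # s ~= 2.0, t ~= -4.0: inverts the synthetic 0.5x + 2.0 distortion
```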
- A Systematic Survey in Geometric Deep Learning for Structure-based Drug Design [63.30166298698985]
Structure-based drug design (SBDD) utilizes the three-dimensional geometry of proteins to identify potential drug candidates.
Recent developments in geometric deep learning, focusing on the integration and processing of 3D geometric data, have greatly advanced the field of structure-based drug design.
arXiv Detail & Related papers (2023-06-20T14:21:58Z)
- UniG3D: A Unified 3D Object Generation Dataset [75.49544172927749]
UniG3D is a unified 3D object generation dataset constructed by employing a universal data transformation pipeline on ShapeNet datasets.
This pipeline converts each raw 3D model into a comprehensive multi-modal data representation.
The selection of data sources for our dataset is based on their scale and quality.
arXiv Detail & Related papers (2023-06-19T07:03:45Z)
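One core step of such a mesh-to-multi-modal conversion is sampling a point cloud from a raw triangle mesh. Below is a self-contained NumPy sketch of area-weighted surface sampling, an assumed stand-in rather than UniG3D's actual pipeline code:

```python
import numpy as np

def sample_points(vertices: np.ndarray, faces: np.ndarray, n: int) -> np.ndarray:
    """Area-weighted uniform sampling of n points on a triangle mesh."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Triangle areas weight the choice so sampling is uniform over the surface.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    rng = np.random.default_rng(0)
    tri = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # Uniform barycentric coordinates within each chosen triangle.
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    return v0[tri] + u[:, None] * (v1[tri] - v0[tri]) + v[:, None] * (v2[tri] - v0[tri])

# A unit tetrahedron as a stand-in for a raw 3D model.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(sample_points(verts, faces, 1024).shape)  # (1024, 3)
```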
- FoldingNet Autoencoder model to create a geospatial grouping of CityGML building dataset [0.0]
This study uses 'FoldingNet,' a 3D autoencoder, to generate the latent representation of each building from the LoD 2 CityGML dataset.
The efficacy of the embeddings is analyzed by dataset reconstruction, latent spread visualization, and hierarchical clustering methods.
A geospatial model is created to iteratively find the geographical groupings of buildings.
arXiv Detail & Related papers (2022-12-28T17:16:23Z)
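To illustrate the grouping step described above, the sketch below hierarchically clusters synthetic stand-ins for per-building latent codes; it uses SciPy's Ward linkage as an assumed proxy, not the study's actual geospatial procedure:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# Stand-in for per-building latent codes from a 3D autoencoder:
# two synthetic groups of 512-D embeddings.
embeddings = np.concatenate([
    rng.normal(0.0, 0.1, size=(20, 512)),
    rng.normal(1.0, 0.1, size=(20, 512)),
])

# Ward linkage builds the dendrogram; cutting it yields building groups.
Z = linkage(embeddings, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # the two synthetic groups separate cleanly
```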
- IDD-3D: Indian Driving Dataset for 3D Unstructured Road Scenes [79.18349050238413]
Preparation and training of deployable deep learning architectures require the models to be suited to different traffic scenarios.
An unstructured and complex driving layout found in several developing countries such as India poses a challenge to these models.
We build a new dataset, IDD-3D, which consists of multi-modal data from multiple cameras and LiDAR sensors with 12k annotated driving LiDAR frames.
arXiv Detail & Related papers (2022-10-23T23:03:17Z)
- BuildingNet: Learning to Label 3D Buildings [19.641000866952815]
BuildingNet provides: (a) large-scale 3D building models whose exteriors are consistently labeled, and (b) a neural network that labels building meshes by analyzing spatial and structural relations of their geometric primitives.
The dataset covers several categories, such as houses, churches, skyscrapers, town halls, and castles.
arXiv Detail & Related papers (2021-10-11T01:45:26Z)
- Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z)
- RISA-Net: Rotation-Invariant Structure-Aware Network for Fine-Grained 3D Shape Retrieval [46.02391761751015]
Fine-grained 3D shape retrieval aims to retrieve 3D shapes similar to a query shape in a repository with models belonging to the same class.
We introduce a novel deep architecture, RISA-Net, which learns rotation invariant 3D shape descriptors.
Our method is able to learn the importance of geometric and structural information of all the parts when generating the final compact latent feature of a 3D shape.
arXiv Detail & Related papers (2020-10-02T13:06:12Z)
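For intuition about rotation invariance, the sketch below computes the classic D2 shape distribution (a histogram of pairwise point distances), which is unchanged under rotation; it is a toy baseline, not RISA-Net's learned descriptor:

```python
import numpy as np

def d2_descriptor(points: np.ndarray, bins: int = 32) -> np.ndarray:
    """Histogram of pairwise point distances (the D2 shape distribution).
    Distances are preserved by rotation, so the descriptor is invariant."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    d = dists[np.triu_indices(len(points), k=1)]
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max()), density=True)
    return hist

rng = np.random.default_rng(0)
pts = rng.random((256, 3))
# A random orthogonal transform leaves the descriptor (numerically) unchanged.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
print(np.allclose(d2_descriptor(pts), d2_descriptor(pts @ q.T)))  # True
```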
- Generating synthetic photogrammetric data for training deep learning based 3D point cloud segmentation models [0.0]
At I/ITSEC 2019, the authors presented a fully-automated workflow to segment 3D photogrammetric point-clouds/meshes and extract object information.
The ultimate goal is to create realistic virtual environments and provide the necessary information for simulation.
arXiv Detail & Related papers (2020-08-21T18:50:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.