ArchShapeNet: An Interpretable 3D-CNN Framework for Evaluating Architectural Shapes
- URL: http://arxiv.org/abs/2506.14832v1
- Date: Sat, 14 Jun 2025 06:43:59 GMT
- Title: ArchShapeNet: An Interpretable 3D-CNN Framework for Evaluating Architectural Shapes
- Authors: Jun Yin, Jing Zhong, Pengyu Zeng, Peilin Li, Zixuan Dai, Miao Zhang, Shuai Lu
- Abstract summary: ArchForms-4000 is a dataset containing 2,000 architect-designed and 2,000 Evomass-generated 3D forms. The proposed ArchShapeNet is a 3D convolutional neural network tailored for classifying and analyzing architectural forms. The model outperforms human experts in distinguishing form origins, achieving 94.29% accuracy, 96.2% precision, and 98.51% recall.
- Score: 24.731262578136057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In contemporary architectural design, the growing complexity and diversity of design demands have made generative plugin tools essential for quickly producing initial concepts and exploring novel 3D forms. However, objectively analyzing the differences between human-designed and machine-generated 3D forms remains a challenge, limiting our understanding of their respective strengths and hindering the advancement of generative tools. To address this, we built ArchForms-4000, a dataset containing 2,000 architect-designed and 2,000 Evomass-generated 3D forms; proposed ArchShapeNet, a 3D convolutional neural network tailored for classifying and analyzing architectural forms, incorporating a saliency module to highlight key spatial features aligned with architectural reasoning; and conducted comparative experiments showing our model outperforms human experts in distinguishing form origins, achieving 94.29% accuracy, 96.2% precision, and 98.51% recall. This study not only highlights the distinctive advantages of human-designed forms in spatial organization, proportional harmony, and detail refinement but also provides valuable insights for enhancing generative design tools in the future.
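The abstract does not include code, and the paper's actual architecture and saliency module are not specified here. As a purely illustrative sketch of the general idea (a voxel-based form classifier plus a saliency map highlighting spatially important regions), the pipeline can be caricatured with a toy untrained model and occlusion-based saliency; all names and shapes below are assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3d_valid(vol, kernel):
    """Naive 'valid' 3D convolution of a cubic volume with a cubic kernel."""
    k = kernel.shape[0]
    out = np.empty(tuple(s - k + 1 for s in vol.shape))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(vol[i:i+k, j:j+k, l:l+k] * kernel)
    return out

class TinyFormClassifier:
    """Toy stand-in for a voxel-based form classifier (random, untrained weights)."""
    def __init__(self, vox=16, k=3):
        self.kernel = rng.normal(size=(k, k, k)) * 0.1
        feat_dim = (vox - k + 1) ** 3
        self.w = rng.normal(size=feat_dim) * 0.01

    def score(self, vol):
        """Sigmoid pseudo-probability that a voxel grid is 'architect-designed'."""
        feats = np.maximum(conv3d_valid(vol, self.kernel), 0).ravel()  # ReLU
        return 1.0 / (1.0 + np.exp(-feats @ self.w))

def occlusion_saliency(model, vol, patch=4):
    """Score drop when each voxel patch is zeroed: larger drop = more salient."""
    base = model.score(vol)
    sal = np.zeros_like(vol)
    for i in range(0, vol.shape[0], patch):
        for j in range(0, vol.shape[1], patch):
            for l in range(0, vol.shape[2], patch):
                occluded = vol.copy()
                occluded[i:i+patch, j:j+patch, l:l+patch] = 0.0
                sal[i:i+patch, j:j+patch, l:l+patch] = base - model.score(occluded)
    return sal

if __name__ == "__main__":
    # A random 16^3 occupancy grid stands in for a voxelized building massing.
    voxels = (rng.random((16, 16, 16)) > 0.7).astype(float)
    model = TinyFormClassifier()
    saliency = occlusion_saliency(model, voxels)
    print("score:", model.score(voxels))
    print("most salient patch corner:", np.unravel_index(np.argmax(saliency), saliency.shape))
```

A trained model would replace the random weights, and a gradient-based saliency module (as the abstract suggests) would replace occlusion, but the interpretability interface is the same: a per-voxel importance volume over the input form.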
Related papers
- 3D Shape Generation: A Survey [0.6445605125467574]
Recent advances in deep learning have transformed the field of 3D shape generation. This survey organizes the discussion around three core components: shape representations, generative modeling approaches, and evaluation protocols. We identify open challenges and outline future research directions that could drive progress in controllable, efficient, and high-quality 3D shape generation.
arXiv Detail & Related papers (2025-06-27T23:06:06Z)
- DeepWheel: Generating a 3D Synthetic Wheel Dataset for Design and Performance Evaluation [3.3148826359547523]
This study proposes a synthetic design-performance dataset generation framework using generative AI. The framework first generates 2D rendered images using Stable Diffusion, and then reconstructs the 3D geometry through 2.5D depth estimation. The final dataset, named DeepWheel, consists of over 6,000 photo-realistic images and 900 structurally analyzed 3D models.
arXiv Detail & Related papers (2025-04-15T16:20:00Z)
- JADE: Joint-aware Latent Diffusion for 3D Human Generative Modeling [62.77347895550087]
We introduce JADE, a generative framework that learns the variations of human shapes with fine-grained control. Our key insight is a joint-aware latent representation that decomposes human bodies into skeleton structures. To generate coherent and plausible human shapes under our proposed decomposition, we also present a cascaded pipeline.
arXiv Detail & Related papers (2024-12-29T14:18:35Z)
- DiffDesign: Controllable Diffusion with Meta Prior for Efficient Interior Design Generation [2.806426655599813]
We propose DiffDesign, a controllable diffusion model with meta priors for efficient interior design generation. Specifically, we utilize the generative priors of a 2D diffusion model pre-trained on a large image dataset as our rendering backbone. We further guide the denoising process by disentangling cross-attention control over design attributes, such as appearance, pose, and size, and introduce an optimal transfer-based alignment module to enforce view consistency.
arXiv Detail & Related papers (2024-11-25T11:36:34Z)
- Part-aware Shape Generation with Latent 3D Diffusion of Neural Voxel Fields [50.12118098874321]
We introduce a latent 3D diffusion process for neural voxel fields, enabling generation at significantly higher resolutions. A part-aware shape decoder is introduced to integrate the part codes into the neural voxel fields, guiding the accurate part decomposition. The results demonstrate the superior generative capabilities of our proposed method in part-aware shape generation, outperforming existing state-of-the-art methods.
arXiv Detail & Related papers (2024-05-02T04:31:17Z)
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, and seek a stronger ability of 3D shape generation by improving auto-regressive models at capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- Human as Points: Explicit Point-based 3D Human Reconstruction from Single-view RGB Images [71.91424164693422]
We introduce an explicit point-based human reconstruction framework called HaP. Our approach is featured by fully-explicit point cloud estimation, manipulation, generation, and refinement in the 3D geometric space. Our results may indicate a paradigm rollback to the fully-explicit and geometry-centric algorithm design.
arXiv Detail & Related papers (2023-11-06T05:52:29Z)
- Geometric Deep Learning for Structure-Based Drug Design: A Survey [83.87489798671155]
Structure-based drug design (SBDD) leverages the three-dimensional geometry of proteins to identify potential drug candidates.
Recent advancements in geometric deep learning, which effectively integrate and process 3D geometric data, have significantly propelled the field forward.
arXiv Detail & Related papers (2023-06-20T14:21:58Z)
- Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We have developed a model with an astounding 3.6 billion trainable parameters, establishing it as the largest 3D shape generation model to date, named Argus-3D.
arXiv Detail & Related papers (2023-06-20T13:01:19Z)
- Towards AI-Architecture Liberty: A Comprehensive Survey on Design and Generation of Virtual Architecture by Deep Learning [23.58793497403681]
3D shape generation techniques leveraging deep learning have garnered significant interest from both the computer vision and architectural design communities.
We review 149 related articles covering architectural design, 3D shape techniques, and virtual environments.
We highlight four important enablers of ubiquitous interaction with immersive systems in deep learning-assisted architectural generation.
arXiv Detail & Related papers (2023-04-30T15:38:36Z)
- Architext: Language-Driven Generative Architecture Design [1.393683063795544]
Architext enables design generation with only natural language prompts, given to large-scale Language Models, as input.
We conduct a thorough quantitative evaluation of Architext's downstream task performance, focusing on semantic accuracy and diversity for a number of pre-trained language models.
Architext models are able to learn the specific design task, generating valid residential layouts at a near 100% rate.
arXiv Detail & Related papers (2023-03-13T23:11:05Z)
- Dynamically Grown Generative Adversarial Networks [111.43128389995341]
We propose a method to dynamically grow a GAN during training, optimizing the network architecture and its parameters together with automation.
The method embeds architecture search techniques as an interleaving step with gradient-based training to periodically seek the optimal architecture-growing strategy for the generator and discriminator.
arXiv Detail & Related papers (2021-06-16T01:25:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.