Deep Generative Design for Mass Production
- URL: http://arxiv.org/abs/2403.12098v1
- Date: Sat, 16 Mar 2024 01:32:00 GMT
- Title: Deep Generative Design for Mass Production
- Authors: Jihoon Kim, Yongmin Kwon, Namwoo Kang
- Abstract summary: We introduce an innovative framework addressing manufacturability concerns by integrating constraints pertinent to die casting and injection molding into Generative Design.
This method simplifies intricate 3D geometries into manufacturable profiles, removing infeasible features such as non-manufacturable overhangs.
We further enhance this approach by adopting an advanced 2D generative model, which offers a more efficient alternative to traditional 3D shape generation methods.
- Score: 17.60251862841578
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Generative Design (GD) has evolved as a transformative design approach, employing advanced algorithms and AI to create diverse and innovative solutions beyond traditional constraints. Despite its success, GD faces significant challenges regarding the manufacturability of complex designs, often necessitating extensive manual modifications due to limitations in standard manufacturing processes and the reliance on additive manufacturing, which is not ideal for mass production. Our research introduces an innovative framework addressing these manufacturability concerns by integrating constraints pertinent to die casting and injection molding into GD through the use of 2D depth images. This method simplifies intricate 3D geometries into manufacturable profiles, removing infeasible features such as non-manufacturable overhangs and allowing direct consideration of essential manufacturing aspects like thickness and rib design. Consequently, designs previously unsuitable for mass production are transformed into viable solutions. We further enhance this approach by adopting an advanced 2D generative model, which offers a more efficient alternative to traditional 3D shape generation methods. Our results substantiate the efficacy of this framework, demonstrating the production of innovative and, importantly, manufacturable designs. This shift towards integrating practical manufacturing considerations into GD represents a pivotal advancement, transitioning from purely inspirational concepts to actionable, production-ready solutions. Our findings underscore the usefulness and potential of GD for broader industry adoption, marking a significant step forward in aligning GD with the demands of real-world manufacturing.
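Why depth images help: a solid reconstructed from a single-direction depth map is, by construction, free of overhangs and internal voids, which is exactly the class of geometry a single-draw-direction die or mold can release. The NumPy sketch below illustrates that round trip on a voxel grid; it is a minimal illustration with our own helper names and a single draw direction, not the authors' pipeline (which pairs the depth-image representation with a 2D generative model).

```python
import numpy as np

def to_depth_image(voxels):
    """Project a binary voxel grid (X, Y, Z) onto a 2D depth image along +z.

    Each pixel stores the height of the topmost solid voxel in its column
    (0 where the column is empty).
    """
    z = np.arange(voxels.shape[2])[None, None, :]
    return np.max(np.where(voxels.astype(bool), z + 1, 0), axis=2)

def from_depth_image(depth, nz):
    """Rebuild a solid by filling each column up to its depth value.

    The result is extrudable along z by construction: overhangs and internal
    voids cannot survive the round trip, so the shape can release from a
    single-draw-direction mold.
    """
    z = np.arange(nz)[None, None, :]
    return z < depth[:, :, None]

# Round trip: the floating slab (an unmoldable overhang) gets fused to the base.
part = np.zeros((64, 64, 32), dtype=bool)
part[16:48, 16:48, :8] = True        # base plate
part[24:40, 24:40, 12:16] = True     # floating slab above an air gap
castable = from_depth_image(to_depth_image(part), nz=32)
assert castable[24, 24, 10]          # the gap under the slab is now filled
```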
Related papers
- McGAN: Generating Manufacturable Designs by Embedding Manufacturing Rules into Conditional Generative Adversarial Network [9.482982161281999]
We propose a novel Generative Design (GD) approach that uses deep neural networks to encode design for manufacturing (DFM) rules.
A conditional generative adversarial network (cGAN), Pix2Pix, transforms unmanufacturable subregions into manufacturable ones.
The results show that McGAN can transform existing unmanufacturable designs to generate their corresponding manufacturable counterparts automatically.
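For context, Pix2Pix couples a conditional adversarial loss with an L1 reconstruction term. The sketch below is a generic reminder of that objective with a toy stand-in generator; the layer sizes and names are illustrative assumptions, not McGAN's architecture.

```python
import torch
import torch.nn as nn

# Toy stand-in for a Pix2Pix generator: maps a grayscale design image with
# unmanufacturable subregions to a corrected design image.
generator = nn.Sequential(
    nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
)

def pix2pix_generator_loss(disc, x, y, fake, lam=100.0):
    """Conditional GAN loss plus L1 reconstruction (Isola et al., 2017).

    x: input design image, y: manufacturable ground truth, fake: generator(x).
    The discriminator scores concatenated (input, output) pairs.
    """
    logits = disc(torch.cat([x, fake], dim=1))
    adv = nn.functional.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
    return adv + lam * nn.functional.l1_loss(fake, y)
```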
arXiv Detail & Related papers (2024-07-24T02:23:02Z)
- Exploring the Potentials and Challenges of Deep Generative Models in Product Design Conception [0.0]
We analyze deep generative model (DGM) families (VAE, GAN, Diffusion, Transformer, Radiance Field), assessing their strengths, weaknesses, and general applicability for product design conception.
Our objective is to provide insights that simplify the decision-making process for engineers, helping them determine which method might be most effective for their specific challenges.
arXiv Detail & Related papers (2024-07-15T14:28:50Z)
- GECO: Generative Image-to-3D within a SECOnd [51.20830808525894]
We introduce GECO, a novel method for high-quality 3D generative modeling that operates within a second.
GECO achieves high-quality image-to-3D mesh generation with an unprecedented level of efficiency.
arXiv Detail & Related papers (2024-05-30T17:58:00Z)
- SMPLer: Taming Transformers for Monocular 3D Human Shape and Pose Estimation [74.07836010698801]
We propose an SMPL-based Transformer framework (SMPLer) for monocular 3D human shape and pose estimation.
SMPLer incorporates two key ingredients: a decoupled attention operation and an SMPL-based target representation.
Extensive experiments demonstrate the effectiveness of SMPLer against existing 3D human shape and pose estimation methods.
arXiv Detail & Related papers (2024-04-23T17:59:59Z)
- UniDream: Unifying Diffusion Priors for Relightable Text-to-3D Generation [101.2317840114147]
We present UniDream, a text-to-3D generation framework by incorporating unified diffusion priors.
Our approach consists of three main components: (1) a dual-phase training process to obtain albedo-normal aligned multi-view diffusion and reconstruction models, (2) a progressive generation procedure for geometry and albedo textures based on Score Distillation Sampling (SDS) using the trained reconstruction and diffusion models, and (3) an innovative application of SDS for finalizing PBR generation while keeping a fixed albedo, based on the Stable Diffusion model.
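For readers unfamiliar with SDS: the trick is to noise a differentiable render, ask a frozen diffusion U-Net to denoise it, and push the mismatch back through the renderer as a gradient. A generic, framework-agnostic sketch follows; the render/unet signatures are our assumptions, and UniDream's actual pipeline adds the albedo-normal and PBR stages on top.

```python
import torch

def sds_step(render, params, unet, alphas_cumprod, text_emb):
    """One Score Distillation Sampling update of 3D scene parameters.

    render(params) -> image tensor; unet(x_t, t, text_emb) -> predicted noise.
    Implements the DreamFusion-style gradient w(t) * (eps_hat - eps) pushed
    back through the differentiable renderer.
    """
    x = render(params)                           # differentiable render
    t = torch.randint(20, 980, (1,))             # random diffusion timestep
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x)
    x_t = a.sqrt() * x + (1 - a).sqrt() * eps    # forward-noise the render
    with torch.no_grad():                        # the score network stays frozen
        eps_hat = unet(x_t, t, text_emb)
    grad = (1 - a) * (eps_hat - eps)             # SDS gradient w.r.t. the image
    x.backward(gradient=grad)                    # accumulates into params.grad
```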
arXiv Detail & Related papers (2023-12-14T09:07:37Z)
- Adversarial Latent Autoencoder with Self-Attention for Structural Image Synthesis [4.619979201312323]
We propose a novel model, the Self-Attention Adversarial Latent Autoencoder (SA-ALAE), which allows generating feasible design images of complex engineering parts.
With SA-ALAE, users can not only explore novel variants of an existing design, but also control the generation process by operating in latent space.
arXiv Detail & Related papers (2023-07-19T17:50:03Z)
- Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We develop Argus-3D, a model with 3.6 billion trainable parameters, the largest 3D shape generation model to date.
arXiv Detail & Related papers (2023-06-20T13:01:19Z)
- Deep Generative Models for Geometric Design Under Uncertainty [8.567987231153966]
We propose a Generative Adversarial Network-based Design under Uncertainty Framework (GAN-DUF).
GAN-DUF contains a deep generative model that simultaneously learns a compact representation of nominal (ideal) designs and the conditional distribution of fabricated designs.
We demonstrate the framework on two real-world engineering design examples and show that it can find solutions that perform better after fabrication.
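The design-under-uncertainty idea reduces to optimizing a nominal design against its expected post-fabrication performance. Below is a hypothetical Monte-Carlo sketch of that evaluation; all function names are our assumptions, not GAN-DUF's API.

```python
import torch

def robust_performance(nominal_z, cond_sampler, evaluate, n_samples=64):
    """Monte-Carlo estimate of expected post-fabrication performance.

    cond_sampler(nominal_z, noise) draws one fabricated variant conditioned on
    the nominal design's latent code; evaluate() scores a design. Optimizing
    nominal_z against this estimate prefers designs that remain good after
    fabrication, rather than only in their idealized form.
    """
    scores = [evaluate(cond_sampler(nominal_z, torch.randn_like(nominal_z)))
              for _ in range(n_samples)]
    return torch.stack(scores).mean()
```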
arXiv Detail & Related papers (2021-12-15T18:00:46Z)
- A Generic Approach for Enhancing GANs by Regularized Latent Optimization [79.00740660219256]
We introduce a generic framework called "generative-model inference" that is capable of enhancing pre-trained GANs effectively and seamlessly.
Our basic idea is to efficiently infer the optimal latent distribution for the given requirements using Wasserstein gradient flow techniques.
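In the simplest discretization, that amounts to gradient steps on a frozen generator's latent code with a prior-keeping regularizer and injected noise. A hedged sketch follows; the names and the specific L2 regularizer are our assumptions, not the paper's exact formulation.

```python
import torch

def infer_latent(generator, score, z0, steps=200, lr=0.05, reg=1e-3, noise=1e-2):
    """Requirement-driven latent optimization for a frozen, pretrained GAN.

    Maximizes score(generator(z)) while an L2 penalty keeps z near the
    Gaussian prior; the injected noise makes each update a crude discrete
    step of a stochastic gradient flow.
    """
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -score(generator(z)) + reg * z.pow(2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            z.add_(noise * torch.randn_like(z))   # stochastic flow term
    return z.detach()
```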
arXiv Detail & Related papers (2021-12-07T05:22:50Z)
- Modeling and Optimizing Laser-Induced Graphene [59.8912133964006]
We provide datasets describing the optimization of laser-induced graphene production.
We pose three challenges based on the datasets we provide.
We present illustrative results, along with the code used to generate them, as a starting point for interested users.
arXiv Detail & Related papers (2021-07-29T18:08:24Z)
- MO-PaDGAN: Generating Diverse Designs with Multivariate Performance Enhancement [13.866787416457454]
Deep generative models have proven useful for automatic design synthesis and design space exploration.
They face three challenges when applied to engineering design: 1) generated designs lack diversity, 2) it is difficult to explicitly improve all the performance measures of generated designs, and 3) existing models generally do not generate high-performance novel designs.
We propose MO-PaDGAN, which incorporates a new Determinantal Point Process (DPP)-based loss function for probabilistic modeling of diversity and performance.
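The DPP construction weights a similarity kernel by per-design quality, so maximizing the kernel's log-determinant rewards batches of designs that are simultaneously diverse and high-performing. Below is a minimal sketch of such a loss; the RBF kernel and gamma are illustrative choices, not necessarily MO-PaDGAN's exact parameterization.

```python
import torch

def dpp_quality_diversity_loss(designs, qualities, gamma=1.0, eps=1e-6):
    """DPP-style loss tying diversity to performance.

    Builds the kernel L_ij = q_i * k(x_i, x_j) * q_j from an RBF similarity k
    and per-design quality scores q, then penalizes -log det(L): the loss
    falls when a batch of generated designs is both spread out and
    high-performing.
    """
    x = designs.flatten(1)                              # (B, D) flattened designs
    k = torch.exp(-gamma * torch.cdist(x, x).pow(2))    # RBF similarity kernel
    q = qualities.view(-1, 1)
    L = q * k * q.t()                                   # quality-weighted kernel
    L = L + eps * torch.eye(L.shape[0])                 # jitter for stable logdet
    return -torch.logdet(L)
```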
arXiv Detail & Related papers (2020-07-07T21:57:29Z)