McGAN: Generating Manufacturable Designs by Embedding Manufacturing Rules into Conditional Generative Adversarial Network
- URL: http://arxiv.org/abs/2407.16943v1
- Date: Wed, 24 Jul 2024 02:23:02 GMT
- Title: McGAN: Generating Manufacturable Designs by Embedding Manufacturing Rules into Conditional Generative Adversarial Network
- Authors: Zhichao Wang, Xiaoliang Yan, Shreyes Melkote, David Rosen
- Abstract summary: We propose a novel Generative Design (GD) approach that uses deep neural networks to encode design for manufacturing (DFM) rules.
A conditional generative adversarial network (cGAN), Pix2Pix, transforms unmanufacturable subregions into manufacturable subregions.
The results show that McGAN can automatically transform existing unmanufacturable designs into their corresponding manufacturable counterparts.
- Score: 9.482982161281999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative design (GD) methods aim to automatically generate a wide variety of designs that satisfy functional or aesthetic design requirements. However, research to date generally lacks consideration of the manufacturability of the generated designs. To this end, we propose a novel GD approach that uses deep neural networks to encode design for manufacturing (DFM) rules, thereby modifying part designs to make them manufacturable by a given manufacturing process. Specifically, a three-step approach is proposed: first, an instance segmentation method, Mask R-CNN, is used to decompose a part design into subregions. Second, a conditional generative adversarial network (cGAN), Pix2Pix, transforms unmanufacturable decomposed subregions into manufacturable subregions. The transformed subregions are subsequently reintegrated into a unified manufacturable design. These three steps, Mask R-CNN, Pix2Pix, and reintegration, form the basis of the proposed Manufacturable conditional GAN (McGAN) framework. Experimental results show that McGAN can automatically transform existing unmanufacturable designs into manufacturable counterparts that realize the specified manufacturing rules in an efficient and robust manner. The effectiveness of McGAN is demonstrated through two-dimensional design case studies of an injection molding process.
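The three-step pipeline maps naturally onto a small amount of glue code. The sketch below is a minimal illustration, not the authors' implementation: `segment_regions` stands in for Mask R-CNN, `make_manufacturable` for the Pix2Pix translator, and `reintegrate` for the compositing step; all names and shapes are hypothetical.

```python
import numpy as np

def segment_regions(design: np.ndarray) -> list[dict]:
    """Step 1 (placeholder): decompose a 2D design into subregions.
    McGAN uses Mask R-CNN here; returning the whole image as one
    region keeps the sketch self-contained and runnable."""
    mask = design > 0
    return [{"mask": mask, "crop": design * mask}]

def make_manufacturable(subregion: np.ndarray) -> np.ndarray:
    """Step 2 (placeholder): Pix2Pix would translate an unmanufacturable
    subregion into a manufacturable one; identity keeps the sketch runnable."""
    return subregion

def reintegrate(canvas_shape, regions, transformed) -> np.ndarray:
    """Step 3: paste transformed subregions back into one unified design."""
    canvas = np.zeros(canvas_shape, dtype=np.float32)
    for region, patch in zip(regions, transformed):
        canvas[region["mask"]] = patch[region["mask"]]
    return canvas

design = np.random.rand(256, 256).astype(np.float32)  # stand-in 2D design
regions = segment_regions(design)
transformed = [make_manufacturable(r["crop"]) for r in regions]
manufacturable = reintegrate(design.shape, regions, transformed)
```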
Related papers
- Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction [88.65168366064061]
We introduce Discrete Denoising Posterior Prediction (DDPP), a novel framework that casts the task of steering pre-trained MDMs as a problem of probabilistic inference.
Our framework leads to a family of three novel objectives that are all simulation-free, and thus scalable.
We substantiate our designs via wet-lab validation, where we observe transient expression of reward-optimized protein sequences.
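One common way to read "steering as probabilistic inference" is as sampling from a reward-tilted posterior over sequences; the formula below is a standard formalization offered as an assumption, not the paper's exact derivation.

```latex
% Reward-tilted posterior over sequences x, with pre-trained model p_\theta,
% reward r(x), and temperature \tau; DDPP's simulation-free objectives can be
% read as ways to approximate sampling from this distribution.
p^{\ast}(x) \;\propto\; p_{\theta}(x)\, \exp\!\big(r(x)/\tau\big)
```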
arXiv Detail & Related papers (2024-10-10T17:18:30Z) - Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network that combines convolutional neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared to state-of-the-art methods.
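As a rough illustration of the CNN-plus-transformer hybrid pattern (the paper's actual architecture and its prototype-learning guidance are not reproduced), a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class HybridSegNet(nn.Module):
    """Minimal CNN encoder -> transformer layers -> conv head.
    An illustrative hybrid, not the paper's architecture."""
    def __init__(self, channels=64, n_heads=4, n_layers=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(channels, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Conv2d(channels, 1, 1)  # per-pixel segmentation logit

    def forward(self, x):
        f = self.cnn(x)                        # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, HW, C) for the transformer
        tokens = self.transformer(tokens)
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(f)

logits = HybridSegNet()(torch.randn(1, 1, 64, 64))  # -> (1, 1, 16, 16)
```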
arXiv Detail & Related papers (2024-08-11T15:46:00Z) - Pre-training Point Cloud Compact Model with Partial-aware Reconstruction [51.403810709250024]
We present a pre-trained Point cloud Compact Model with Partial-aware Reconstruction, named Point-CPR.
Our model exhibits strong performance across various tasks, especially surpassing the leading MPM-based model PointGPT-B with only 2% of its parameters.
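A hedged sketch of the generic partial-aware reconstruction pretext task: hide part of a point cloud, encode the visible remainder, and regress the hidden points. Point-CPR's compact encoder and exact masking strategy are not reproduced; all module names are illustrative.

```python
import torch
import torch.nn as nn

class PointReconstructor(nn.Module):
    """Encode visible points with a permutation-invariant encoder and
    regress the coordinates of the hidden points."""
    def __init__(self, dim=128, n_missing=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.decoder = nn.Linear(dim, n_missing * 3)
        self.n_missing = n_missing

    def forward(self, visible):                         # (B, N_vis, 3)
        feat = self.encoder(visible).max(dim=1).values  # max-pool over points
        return self.decoder(feat).view(-1, self.n_missing, 3)

pts = torch.randn(2, 1024, 3)
visible, missing = pts[:, :768], pts[:, 768:]  # crude partition as the "partial" view
model = PointReconstructor()
loss = nn.functional.mse_loss(model(visible), missing)
```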
arXiv Detail & Related papers (2024-07-12T15:18:14Z) - Part-aware Shape Generation with Latent 3D Diffusion of Neural Voxel Fields [50.12118098874321]
We introduce a latent 3D diffusion process for neural voxel fields, enabling generation at significantly higher resolutions.
A part-aware shape decoder is introduced to integrate the part codes into the neural voxel fields, guiding the accurate part decomposition.
The results demonstrate the superior generative capabilities of our proposed method in part-aware shape generation, outperforming existing state-of-the-art methods.
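A minimal sketch of the "part codes condition the voxel decoder" idea, assuming a learned embedding per part concatenated to the latent before decoding; the paper's actual decoder and voxel-field parameterization are not reproduced.

```python
import torch
import torch.nn as nn

class PartAwareVoxelDecoder(nn.Module):
    """Decode a latent into one occupancy grid per part by pairing the
    latent with a learned per-part code. Illustrative only."""
    def __init__(self, latent_dim=64, n_parts=4, part_dim=16, res=16):
        super().__init__()
        self.part_codes = nn.Embedding(n_parts, part_dim)
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + part_dim, 128), nn.ReLU(),
            nn.Linear(128, res ** 3),          # occupancy logits per part
        )
        self.res = res

    def forward(self, z):                      # z: (B, latent_dim)
        parts = self.part_codes.weight         # (P, part_dim)
        b, p = z.shape[0], parts.shape[0]
        zp = torch.cat([z.unsqueeze(1).expand(b, p, -1),
                        parts.unsqueeze(0).expand(b, p, -1)], dim=-1)
        return self.mlp(zp).view(b, p, self.res, self.res, self.res)

occ = PartAwareVoxelDecoder()(torch.randn(2, 64))  # -> (2, 4, 16, 16, 16)
```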
arXiv Detail & Related papers (2024-05-02T04:31:17Z) - SMPLer: Taming Transformers for Monocular 3D Human Shape and Pose Estimation [74.07836010698801]
We propose an SMPL-based Transformer framework (SMPLer) to address this issue.
SMPLer incorporates two key ingredients: a decoupled attention operation and an SMPL-based target representation.
Extensive experiments demonstrate the effectiveness of SMPLer against existing 3D human shape and pose estimation methods.
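A heavily hedged sketch of an SMPL-based target representation: learned query tokens for pose and shape cross-attend to image features, and small heads regress the SMPL parameters. SMPLer's decoupled attention operation is more specific than this; every name and dimension here is an assumption.

```python
import torch
import torch.nn as nn

class SMPLQueryHead(nn.Module):
    """Learned queries (one per joint, plus one for shape) cross-attend
    to image features; heads regress SMPL pose/shape parameters."""
    def __init__(self, feat_dim=256, n_pose=24):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_pose + 1, feat_dim))
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)
        self.pose_head = nn.Linear(feat_dim, 6)    # 6D rotation per joint
        self.shape_head = nn.Linear(feat_dim, 10)  # SMPL betas

    def forward(self, img_feats):                  # (B, HW, feat_dim)
        q = self.queries.unsqueeze(0).expand(img_feats.size(0), -1, -1)
        out, _ = self.attn(q, img_feats, img_feats)
        return self.pose_head(out[:, :-1]), self.shape_head(out[:, -1])

pose, shape = SMPLQueryHead()(torch.randn(2, 196, 256))  # (2,24,6), (2,10)
```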
arXiv Detail & Related papers (2024-04-23T17:59:59Z) - Deep Generative Design for Mass Production [17.60251862841578]
We introduce an innovative framework addressing manufacturability concerns by integrating constraints pertinent to die casting and injection molding into Generative Design.
This method simplifies intricate 3D geometries into manufacturable profiles, removing unfeasible features such as non-manufacturable overhangs.
We further enhance this approach by adopting an advanced 2D generative model, which offers a more efficient alternative to traditional 3D shape generation methods.
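As a toy illustration of the kind of manufacturability constraint involved (an assumption, not the paper's method): with a two-part mold and a fixed vertical pull direction, a 2D profile must have no undercuts, which can be checked column by column.

```python
import numpy as np

def has_undercut(profile: np.ndarray) -> bool:
    """Toy moldability check: with a horizontal parting line, each pixel
    column of the solid must be one contiguous run, otherwise the two
    mold halves cannot separate. A simplified stand-in for real
    die-casting / injection-molding constraints."""
    for col in profile.T:                       # iterate over pixel columns
        filled = np.flatnonzero(col)
        if filled.size and (filled[-1] - filled[0] + 1) != filled.size:
            return True                         # gap inside a column => undercut
    return False

ok = np.zeros((8, 8), int); ok[2:6, 2:6] = 1    # solid block: moldable
bad = ok.copy(); bad[3:5, 3:5] = 0              # internal void: undercut
print(has_undercut(ok), has_undercut(bad))      # False True
```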
arXiv Detail & Related papers (2024-03-16T01:32:00Z) - Latent Diffusion Models for Structural Component Design [11.342098118480802]
This paper proposes a framework for the generative design of structural components.
We employ a Latent Diffusion model to generate potential designs of a component that can satisfy a set of problem-specific loading conditions.
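A schematic of conditional sampling in a latent diffusion model, where the denoiser receives an embedding of the loading conditions at every reverse step; the schedule, shapes, and condition encoding are all illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Toy DDPM-style reverse loop in a 64-d latent space, conditioned on an
# 8-d loading-condition embedding appended to the denoiser input.
T = 50
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

denoiser = nn.Sequential(nn.Linear(64 + 1 + 8, 128), nn.ReLU(), nn.Linear(128, 64))

def sample(cond, n=4):                  # cond: (8,) loading-condition embedding
    z = torch.randn(n, 64)
    for t in reversed(range(T)):
        t_in = torch.full((n, 1), t / T)
        c_in = cond.expand(n, -1)
        eps = denoiser(torch.cat([z, t_in, c_in], dim=-1))  # predicted noise
        z = (z - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            z = z + betas[t].sqrt() * torch.randn_like(z)
    return z                            # latent designs; decode afterwards

latents = sample(torch.randn(8))
```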
arXiv Detail & Related papers (2023-09-20T19:28:45Z) - Adversarial Latent Autoencoder with Self-Attention for Structural Image Synthesis [4.619979201312323]
We propose a novel model Self-Attention Adversarial Latent Autoencoder (SA-ALAE), which allows generating feasible design images of complex engineering parts.
With SA-ALAE, users can not only explore novel variants of an existing design, but also control the generation process by operating in latent space.
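Latent-space control of the kind described can be illustrated by interpolating between the codes of two existing designs; `encode` and `decode` below are hypothetical stand-ins for the SA-ALAE encoder and generator.

```python
import torch

def explore(encode, decode, design_a, design_b, steps=5):
    """Decode points along the line between two designs' latent codes."""
    za, zb = encode(design_a), encode(design_b)
    return [decode((1 - t) * za + t * zb) for t in torch.linspace(0, 1, steps)]

# Toy stand-ins keep the sketch runnable end to end.
variants = explore(lambda x: x.flatten(), lambda z: z.view(8, 8),
                   torch.rand(8, 8), torch.rand(8, 8))
```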
arXiv Detail & Related papers (2023-07-19T17:50:03Z) - Sem2NeRF: Converting Single-View Semantic Masks to Neural Radiance Fields [49.41982694533966]
We introduce a new task, Semantic-to-NeRF translation, conditioned on one single-view semantic mask as input.
In particular, Sem2NeRF addresses the highly challenging task by encoding the semantic mask into the latent code that controls the 3D scene representation of a pretrained decoder.
We verify the efficacy of the proposed Sem2NeRF and demonstrate it outperforms several strong baselines on two benchmark datasets.
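A schematic of the wiring Sem2NeRF describes: a trainable encoder maps a single-view semantic mask to the latent code of a frozen pretrained decoder. The stand-in decoder below is a plain linear layer, not a NeRF renderer; all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Encoder: semantic mask -> latent code controlling the 3D scene.
mask_encoder = nn.Sequential(
    nn.Conv2d(1, 32, 4, stride=4), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 512),
)
pretrained_decoder = nn.Linear(512, 3 * 64 * 64)  # stand-in for a NeRF renderer
for p in pretrained_decoder.parameters():
    p.requires_grad = False                       # only the encoder is trained

semantic_mask = torch.randint(0, 20, (1, 1, 64, 64)).float()
z = mask_encoder(semantic_mask)                   # latent code
rendered = pretrained_decoder(z).view(1, 3, 64, 64)
```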
arXiv Detail & Related papers (2022-03-21T09:15:58Z) - MO-PaDGAN: Generating Diverse Designs with Multivariate Performance Enhancement [13.866787416457454]
Deep generative models have proven useful for automatic design synthesis and design space exploration.
They face three challenges when applied to engineering design: 1) generated designs lack diversity, 2) it is difficult to explicitly improve all the performance measures of generated designs, and 3) existing models generally do not generate high-performance novel designs.
We propose MO-PaDGAN, which contains a new Determinantal Point Processes based loss function for probabilistic modeling of diversity and performances.
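A sketch of a DPP-style loss in the spirit of the PaDGAN family: an RBF similarity kernel is modulated by per-sample quality, and the negative log-determinant rewards batches that are simultaneously diverse and high-performing. The exact kernel and weighting used in MO-PaDGAN are assumptions not reproduced here.

```python
import torch

def dpp_loss(samples, quality, sigma=1.0, eps=1e-4):
    """samples: (B, D) generated designs; quality: (B,) performance scores."""
    d2 = torch.cdist(samples, samples).pow(2)
    S = torch.exp(-d2 / (2 * sigma ** 2))     # RBF similarity kernel
    q = quality.clamp_min(eps)
    L = S * q.unsqueeze(0) * q.unsqueeze(1)   # quality-modulated kernel
    eye = eps * torch.eye(len(q))
    return -torch.logdet(L + eye)             # diverse + high quality => low loss

loss = dpp_loss(torch.randn(16, 8), torch.rand(16))
```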
arXiv Detail & Related papers (2020-07-07T21:57:29Z) - PaDGAN: A Generative Adversarial Network for Performance Augmented Diverse Designs [13.866787416457454]
We develop a variant of the Generative Adversarial Network, named "Performance Augmented Diverse Generative Adversarial Network" or PaDGAN, which can generate novel high-quality designs with good coverage of the design space.
In comparison to a vanilla Generative Adversarial Network, on average, it generates samples with a 28% higher mean quality score with larger diversity and without the mode collapse issue.
arXiv Detail & Related papers (2020-02-26T04:53:39Z)