Adversarial Latent Autoencoder with Self-Attention for Structural Image Synthesis
- URL: http://arxiv.org/abs/2307.10166v2
- Date: Wed, 02 Oct 2024 08:24:08 GMT
- Title: Adversarial Latent Autoencoder with Self-Attention for Structural Image Synthesis
- Authors: Jiajie Fan, Laure Vuaille, Hao Wang, Thomas Bäck
- Abstract summary: We propose a novel model, the Self-Attention Adversarial Latent Autoencoder (SA-ALAE), which can generate feasible design images of complex engineering parts.
With SA-ALAE, users can not only explore novel variants of an existing design, but also control the generation process by operating in latent space.
- Score: 4.619979201312323
- Abstract: Generative Engineering Design approaches driven by Deep Generative Models (DGMs) have been proposed to facilitate industrial engineering processes. In such processes, designs often come in the form of images, such as blueprints, engineering drawings, and CAD models, depending on the level of detail. DGMs have been successfully employed for the synthesis of natural images, e.g., of animals, human faces, and landscapes. However, industrial design images are fundamentally different from natural scenes in that they contain rich structural patterns and long-range dependencies, which are challenging for convolution-based DGMs to generate. Moreover, the DGM-driven generation process is typically triggered by random noise inputs, which yields unpredictable samples and thus prevents efficient industrial design exploration. We tackle these challenges by proposing a novel model, the Self-Attention Adversarial Latent Autoencoder (SA-ALAE), which allows generating feasible design images of complex engineering parts. With SA-ALAE, users can not only explore novel variants of an existing design, but also control the generation process by operating in the latent space. The potential of SA-ALAE is demonstrated by generating engineering blueprints in a real automotive design task.
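The paper's reference implementation is not reproduced here, but its core architectural move, inserting self-attention into an otherwise convolutional adversarial latent autoencoder so the generator can model long-range structure, is well captured by a SAGAN-style attention block. The sketch below is illustrative only; the module name and the channel-reduction factor are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over 2D feature maps (illustrative).

    Each spatial position attends to every other position, capturing the
    long-range dependencies of structured engineering drawings that plain
    convolutions struggle with. Assumes channels >= 8.
    """

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features to query/key/value spaces.
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, h*w)
        attn = F.softmax(q @ k, dim=-1)               # (b, h*w, h*w)
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection
```

In an ALAE, generation and reconstruction are trained adversarially with the reconstruction loss placed in latent space rather than pixel space; a block like this would presumably sit at intermediate resolutions of the generator, where feature maps are large enough for long-range structure to matter.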
Related papers
- McGAN: Generating Manufacturable Designs by Embedding Manufacturing Rules into Conditional Generative Adversarial Network [9.482982161281999]
We propose a novel Generative Design (GD) approach by using deep neural networks to encode design for manufacturing (DFM) rules.
A conditional generative adversarial network (cGAN), Pix2Pix, transforms unmanufacturable subregions into manufacturable subregions.
The results show that McGAN can automatically transform existing unmanufacturable designs into their manufacturable counterparts (sketched below).
arXiv Detail & Related papers (2024-07-24T02:23:02Z)
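Pix2Pix itself is a published image-to-image cGAN, so the objective McGAN builds on can be sketched directly: an adversarial term on (input, output) pairs plus an L1 term pulling the output toward the target. `G` and `D` below are placeholders for any U-Net generator and PatchGAN discriminator; pairing unmanufacturable inputs with manufacturable targets is this entry's use case, not code from the paper.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
LAMBDA = 100.0  # L1 weight used in the original Pix2Pix paper

def pix2pix_losses(G, D, x, y):
    """x: unmanufacturable input image, y: manufacturable target image."""
    fake = G(x)

    # Discriminator sees (input, output) pairs: real pairs -> 1, fakes -> 0.
    real_logits = D(x, y)
    fake_logits = D(x, fake.detach())
    d_loss = (bce(real_logits, torch.ones_like(real_logits))
              + bce(fake_logits, torch.zeros_like(fake_logits)))

    # Generator tries to fool D while staying close to the target in L1.
    adv_logits = D(x, fake)
    g_loss = (bce(adv_logits, torch.ones_like(adv_logits))
              + LAMBDA * l1(fake, y))
    return d_loss, g_loss
```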
- SceneX: Procedural Controllable Large-scale Scene Generation via Large-language Models [53.961002112433576]
We introduce a large-scale scene generation framework, SceneX, which can automatically produce high-quality procedural models according to designers' textual descriptions.
Our SceneX can generate a city spanning 2.5 km × 2.5 km with delicate geometric layout and structures, drastically reducing the time cost from several weeks for professional PCG engineers to just a few hours for an ordinary user.
arXiv Detail & Related papers (2024-03-23T03:23:29Z)
- Deep Generative Design for Mass Production [17.60251862841578]
We introduce an innovative framework addressing manufacturability concerns by integrating constraints pertinent to die casting and injection molding into Generative Design.
This method simplifies intricate 3D geometries into manufacturable profiles, removing infeasible features such as non-manufacturable overhangs.
We further enhance this approach by adopting an advanced 2D generative model, which offers a more efficient alternative to traditional 3D shape generation methods.
arXiv Detail & Related papers (2024-03-16T01:32:00Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid the adversarial examples that arise when design inputs are optimized directly against a learned surrogate model.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes (sketched below).
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
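The compositional machinery is specific to that paper, but its central move, doing test-time gradient descent on design variables against a learned energy (low on the data manifold) plus the design objective, can be sketched generically. `energy_model` and `objective` below are stand-ins, not the authors' API.

```python
import torch

def inverse_design(energy_model, objective, design, steps=200, lr=1e-2):
    """Optimize design variables by gradient descent on a combined score.

    energy_model: learned energy, low on the data manifold (stand-in)
    objective:    differentiable design objective to minimize (stand-in)
    design:       initial design tensor, e.g., boundary-shape parameters
    """
    design = design.clone().requires_grad_(True)
    opt = torch.optim.Adam([design], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Penalizing high energy keeps the optimizer near the data
        # manifold, which is what suppresses adversarial "designs".
        loss = objective(design) + energy_model(design)
        loss.backward()
        opt.step()
    return design.detach()
```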
- A Meta-Generation framework for Industrial System Generation [0.0]
Generative design is an increasingly important tool in the industrial world.
Deep Generative Models are gaining popularity amongst Generative Design technologies.
However, developing and evaluating these models can be challenging.
We propose a Meta-VAE capable of producing multi-component industrial systems.
arXiv Detail & Related papers (2023-06-08T11:47:02Z)
- Spatial Steerability of GANs via Self-Supervision from Discriminator [123.27117057804732]
We propose a self-supervised approach to improve the spatial steerability of GANs without searching for steerable directions in the latent space.
Specifically, we design randomly sampled Gaussian heatmaps to be encoded into the intermediate layers of generative models as spatial inductive bias.
During inference, users can interact with the spatial heatmaps in an intuitive manner, editing the output image by adjusting the scene layout and moving or removing objects (sketched below).
arXiv Detail & Related papers (2023-01-20T07:36:29Z)
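The paper's exact scheme for encoding heatmaps into intermediate layers is not reproduced here; the sketch below shows the generic idea, render a Gaussian heatmap at a layer's spatial resolution and use it to modulate the feature map. The multiplicative fusion is an assumption for illustration.

```python
import torch

def gaussian_heatmap(h, w, cx, cy, sigma):
    """Render an (h, w) heatmap with a Gaussian bump centered at (cx, cy)."""
    ys = torch.arange(h).float().unsqueeze(1)   # (h, 1)
    xs = torch.arange(w).float().unsqueeze(0)   # (1, w)
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

# One way to inject the map as spatial inductive bias: scale an
# intermediate generator feature map by (1 + heatmap), so activations
# near the bump are emphasized and the object tends to appear there.
feats = torch.randn(1, 256, 16, 16)             # stand-in features
heat = gaussian_heatmap(16, 16, cx=4.0, cy=10.0, sigma=2.0)
steered = feats * (1.0 + heat)                  # broadcast over channels
```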
- LayoutDETR: Detection Transformer Is a Good Multimodal Layout Designer [80.61492265221817]
Graphic layout designs play an essential role in visual communication.
Yet handcrafting layout designs is skill-demanding, time-consuming, and non-scalable to batch production.
Generative models have emerged to make design automation scalable, but it remains non-trivial to produce designs that comply with designers' desires.
arXiv Detail & Related papers (2022-12-19T21:57:35Z)
- Design Space Exploration and Explanation via Conditional Variational Autoencoders in Meta-model-based Conceptual Design of Pedestrian Bridges [52.77024349608834]
This paper provides a performance-driven design exploration framework to augment the human designer through a Conditional Variational Autoencoder (CVAE).
The CVAE is trained on 18,000 synthetically generated instances of a pedestrian bridge in Switzerland (see the sketch below).
arXiv Detail & Related papers (2022-11-29T17:28:31Z)
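The bridge parameterization and conditioning variables are specific to that work; a minimal CVAE skeleton, in which a condition vector c (e.g., performance targets) enters both encoder and decoder, illustrates the mechanism. All layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    """Minimal conditional VAE: x = design vector, c = condition vector."""

    def __init__(self, x_dim, c_dim, z_dim=16, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + c_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim),
        )

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar
```

For design exploration, one would sample z ~ N(0, I), fix a desired condition c, and decode candidate designs that (approximately) meet the targets.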
- DiVAE: Photorealistic Images Synthesis with Denoising Diffusion Decoder [73.1010640692609]
We propose a VQ-VAE-based model with a diffusion decoder (DiVAE) to serve as the reconstruction component in image synthesis.
Our model achieves state-of-the-art results and generates notably more photorealistic images.
arXiv Detail & Related papers (2022-06-01T10:39:12Z)
- Deep Generative Models in Engineering Design: A Review [1.933681537640272]
We present a review and analysis of Deep Generative Learning models in engineering design.
Recent DGMs have shown promising results in design applications like structural optimization, materials design, and shape synthesis.
arXiv Detail & Related papers (2021-10-21T02:50:10Z)
- CreativeGAN: Editing Generative Adversarial Networks for Creative Design Synthesis [1.933681537640272]
This paper proposes an automated method, named CreativeGAN, for generating novel designs.
It does so by identifying components that make a design unique and modifying a GAN model such that it becomes more likely to generate designs with identified unique components.
Using a dataset of bicycle designs, we demonstrate that the method can create new bicycle designs with unique frames and handles, and introduce rare novelties into a broad set of designs.
arXiv Detail & Related papers (2021-03-10T18:22:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.