TopoDiff: A Performance and Constraint-Guided Diffusion Model for
Topology Optimization
- URL: http://arxiv.org/abs/2208.09591v1
- Date: Sat, 20 Aug 2022 03:26:00 GMT
- Title: TopoDiff: A Performance and Constraint-Guided Diffusion Model for
Topology Optimization
- Authors: François Mazé, Faez Ahmed
- Abstract summary: TopoDiff is a conditional diffusion-model-based architecture for performance-aware and manufacturability-aware topology optimization.
Our method significantly outperforms a state-of-the-art conditional GAN by reducing the average error on physical performance by a factor of eight.
- Score: 4.091593765662773
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Structural topology optimization, which aims to find the optimal physical
structure that maximizes mechanical performance, is vital in engineering design
applications in aerospace, mechanical, and civil engineering. Generative
adversarial networks (GANs) have recently emerged as a popular alternative to
traditional iterative topology optimization methods. However, these models are
often difficult to train, have limited generalizability, and due to their goal
of mimicking optimal topologies, neglect manufacturability and performance
objectives like mechanical compliance. We propose TopoDiff, a conditional
diffusion-model-based architecture to perform performance-aware and
manufacturability-aware topology optimization that overcomes these issues. Our
model introduces a surrogate model-based guidance strategy that actively favors
structures with low compliance and good manufacturability. Our method
significantly outperforms a state-of-the-art conditional GAN by reducing the
average error on physical performance by a factor of eight and by producing 11
times fewer infeasible samples. By introducing diffusion models to topology
optimization, we show that conditional diffusion models have the ability to
outperform GANs in engineering design synthesis applications too. Our work also
suggests a general framework for engineering optimization problems using
diffusion models and external performance and constraint-aware guidance.
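The surrogate-guided sampling described in the abstract resembles classifier guidance: at each reverse-diffusion step, the denoising mean is shifted against the gradient of a surrogate performance predictor. The following toy sketch illustrates the idea; the quadratic surrogate, parameter values, and update form are illustrative assumptions, not the authors' trained networks or exact update rule.

```python
def surrogate_compliance(x):
    # Hypothetical differentiable surrogate (an assumption, not the paper's
    # trained regressor): penalizes densities far from 0.5.
    return sum((xi - 0.5) ** 2 for xi in x)

def surrogate_grad(x):
    # Analytic gradient of the toy surrogate above.
    return [2.0 * (xi - 0.5) for xi in x]

def guided_denoise_step(x_t, denoise_mean, sigma, guidance_scale):
    # Classifier-guidance-style update: shift the model's predicted denoising
    # mean down the surrogate's gradient, favoring low-compliance samples.
    g = surrogate_grad(x_t)
    return [m - guidance_scale * sigma ** 2 * gi
            for m, gi in zip(denoise_mean, g)]
```

In practice the surrogate would be a neural network evaluated on the noisy (or denoised) sample, and a second surrogate could penalize manufacturability violations in the same way.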
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- Cliqueformer: Model-Based Optimization with Structured Transformers [102.55764949282906]
We develop a model that learns the structure of an MBO task and empirically leads to improved designs.
We evaluate Cliqueformer on various tasks, ranging from high-dimensional black-box functions to real-world tasks of chemical and genetic design.
arXiv Detail & Related papers (2024-10-17T00:35:47Z)
- Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z)
- Diffusion Model for Data-Driven Black-Box Optimization [54.25693582870226]
We focus on diffusion models, a powerful generative AI technology, and investigate their potential for black-box optimization.
We study two practical types of labels: 1) noisy measurements of a real-valued reward function and 2) human preference based on pairwise comparisons.
Our proposed method reformulates the design optimization problem into a conditional sampling problem, which allows us to leverage the power of diffusion models.
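Pairwise human-preference labels like those mentioned above are commonly fitted with a Bradley-Terry model when learning a scalar reward. The sketch below shows that standard construction; it is a generic illustration, not necessarily the exact likelihood used in the paper.

```python
import math

def bradley_terry_prob(r_a, r_b):
    # Probability that design A beats design B given scalar reward estimates
    # (Bradley-Terry model, a standard choice for pairwise comparisons).
    return 1.0 / (1.0 + math.exp(-(r_a - r_b)))

def pairwise_nll(comparisons, reward):
    # Negative log-likelihood of observed preferences. `comparisons` is a
    # list of (winner, loser) design indices; `reward` maps index -> score.
    return -sum(math.log(bradley_terry_prob(reward[w], reward[l]))
                for w, l in comparisons)
```

Minimizing this loss over a parameterized reward function recovers scores consistent with the observed comparisons, which can then condition the diffusion sampler.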
arXiv Detail & Related papers (2024-03-20T00:41:12Z)
- Diffusion Generative Inverse Design [28.04683283070957]
Inverse design refers to the problem of optimizing the input of an objective function in order to enact a target outcome.
Recent developments in learned graph neural networks (GNNs) can be used for accurate, efficient, differentiable estimation of simulator dynamics.
We show how denoising diffusion models can be used to solve inverse design problems efficiently and propose a particle sampling algorithm for further improving their efficiency.
arXiv Detail & Related papers (2023-09-05T08:32:07Z)
- Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation [17.164961143132473]
We introduce a learning framework that demonstrates the efficacy of aligning the sampling trajectory of diffusion models with the optimization trajectory derived from traditional physics-based methods.
Our method allows for generating feasible and high-performance designs in as few as two steps without the need for expensive preprocessing, external surrogate models, or additional labeled data.
Our results demonstrate that TA outperforms state-of-the-art deep generative models on in-distribution configurations and halves the inference computational cost.
arXiv Detail & Related papers (2023-05-29T09:16:07Z)
- Diffusing the Optimal Topology: A Generative Optimization Approach [6.375982344506753]
Topology optimization seeks to find the best design that satisfies a set of constraints while maximizing system performance.
Traditional iterative optimization methods like SIMP can be computationally expensive and get stuck in local minima.
We propose a Generative Optimization method that integrates classic optimization like SIMP as a refining mechanism for the topology generated by a deep generative model.
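The SIMP method referenced above interpolates material stiffness with a power law, E(ρ) = E_min + ρ^p (E_solid − E_min), so intermediate densities are penalized and the optimizer is pushed toward binary void/solid designs. A minimal sketch of that interpolation (the parameter values shown are conventional defaults, not the paper's settings):

```python
def simp_stiffness(density, p=3.0, e_min=1e-9, e_solid=1.0):
    # SIMP power-law interpolation: with p > 1, a half-dense element
    # contributes far less stiffness per unit material than a solid one,
    # making intermediate densities inefficient for the optimizer.
    return e_min + density ** p * (e_solid - e_min)
```

For example, with the default penalization p = 3, a density of 0.5 yields only 12.5% of the solid stiffness while consuming 50% of the material budget.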
arXiv Detail & Related papers (2023-03-17T03:47:10Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefit the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
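The lower-bounding idea behind COMs can be sketched as a two-term loss: fit the objective on observed designs while pushing predictions down on out-of-distribution candidates. This is a schematic illustration of the principle, not the paper's exact objective; the weight `alpha` and the choice of OOD candidates are assumptions.

```python
def conservative_loss(pred_data, true_vals, pred_ood, alpha=1.0):
    # COM-style objective (sketch): mean-squared error on in-distribution
    # data, plus a penalty that lowers predictions on out-of-distribution
    # inputs so the learned model lower-bounds the true objective there.
    mse = sum((p - t) ** 2 for p, t in zip(pred_data, true_vals)) / len(true_vals)
    penalty = sum(pred_ood) / len(pred_ood)
    return mse + alpha * penalty
```

Because optimizing against the learned model can only exploit regions where it over-predicts, penalizing OOD predictions makes the downstream design search more reliable.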
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- PcDGAN: A Continuous Conditional Diverse Generative Adversarial Network For Inverse Design [10.50166876879424]
We introduce the Performance Conditioned Diverse Generative Adversarial Network (PcDGAN).
PcDGAN uses a new self-reinforcing score called the Lambert Log Exponential Transition Score (LLETS) for improved conditioning.
Experiments on synthetic problems and a real-world airfoil design problem demonstrate that PcDGAN outperforms state-of-the-art GAN models.
arXiv Detail & Related papers (2021-06-07T13:45:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.