TopologyGAN: Topology Optimization Using Generative Adversarial Networks
Based on Physical Fields Over the Initial Domain
- URL: http://arxiv.org/abs/2003.04685v2
- Date: Wed, 11 Mar 2020 05:59:28 GMT
- Title: TopologyGAN: Topology Optimization Using Generative Adversarial Networks
Based on Physical Fields Over the Initial Domain
- Authors: Zhenguo Nie, Tong Lin, Haoliang Jiang, Levent Burak Kara
- Abstract summary: We propose a new data-driven topology optimization model called TopologyGAN.
TopologyGAN takes advantage of various physical fields computed on the original, unoptimized material domain, as inputs to the generator of a conditional generative adversarial network (cGAN).
Compared to a baseline cGAN, TopologyGAN achieves a nearly $3\times$ reduction in the mean squared error and a $2.5\times$ reduction in the mean absolute error on test problems involving previously unseen boundary conditions.
- Score: 2.0263791972068628
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In topology optimization using deep learning, load and boundary conditions
represented as vectors or sparse matrices often miss the opportunity to encode
a rich view of the design problem, leading to less than ideal generalization
results. We propose a new data-driven topology optimization model called
TopologyGAN that takes advantage of various physical fields computed on the
original, unoptimized material domain, as inputs to the generator of a
conditional generative adversarial network (cGAN). Compared to a baseline cGAN,
TopologyGAN achieves a nearly $3\times$ reduction in the mean squared error and
a $2.5\times$ reduction in the mean absolute error on test problems involving
previously unseen boundary conditions. Built on several existing network
models, we also introduce a hybrid network called
U-SE(Squeeze-and-Excitation)-ResNet for the generator that further increases
the overall accuracy. We publicly share our full implementation and trained
network.
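As a rough illustration of the idea described in the abstract, the sketch below (PyTorch; not the authors' released implementation) stacks physical-field channels computed on the unoptimized domain with load/boundary-condition channels and feeds them to a generator built from Squeeze-and-Excitation residual blocks. The specific field choices, channel counts, and the simplified encoder-decoder without skip connections are illustrative assumptions, not the paper's exact U-SE-ResNet.

```python
# Minimal sketch (not the authors' code) of a field-conditioned cGAN generator
# with a Squeeze-and-Excitation residual block. Channel counts and field
# choices (e.g., stress / strain energy) are illustrative assumptions.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: channel-wise re-weighting of feature maps."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = x.mean(dim=(2, 3))           # squeeze: global average pooling
        w = self.fc(w)[..., None, None]  # excitation: per-channel weights
        return x * w


class SEResBlock(nn.Module):
    """Residual block with an SE layer, as in an SE-ResNet."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.se = SEBlock(channels)

    def forward(self, x):
        return torch.relu(x + self.se(self.body(x)))


class FieldConditionedGenerator(nn.Module):
    """Encoder-decoder generator whose input channels stack load/boundary
    encodings with physical fields evaluated on the initial, unoptimized
    domain (skip connections of the full U-SE-ResNet omitted for brevity)."""
    def __init__(self, in_channels=5, base=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_channels, base, 4, 2, 1), nn.ReLU(inplace=True))
        self.bottleneck = nn.Sequential(SEResBlock(base), SEResBlock(base))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(base, base, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 3, padding=1), nn.Sigmoid(),  # predicted density field
        )

    def forward(self, conditions):
        return self.dec(self.bottleneck(self.enc(conditions)))


if __name__ == "__main__":
    # 2 channels for loads/BCs + 3 field channels (illustrative), on a 64x64 grid.
    x = torch.randn(4, 5, 64, 64)
    print(FieldConditionedGenerator()(x).shape)  # torch.Size([4, 1, 64, 64])
```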
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning parameters that describe the distribution of the underlying solution space.
We propose a new framework, which leverages intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
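For readers unfamiliar with the architecture studied in the entry above, the following sketch shows a generic unfolded (learned) ISTA network in PyTorch. The softplus-based smooth soft-threshold and the learned W1/W2 parameterization are illustrative assumptions, not the exact construction analyzed in the paper.

```python
# Minimal sketch of an unfolded ISTA network for sparse recovery y = A x + noise.
# The smooth soft-threshold is one common softplus-based surrogate; it is an
# assumption here, not necessarily the paper's exact smoothing.
import torch
import torch.nn as nn
import torch.nn.functional as F


def smooth_soft_threshold(x, lam, beta=10.0):
    # Smooth surrogate of sign(x) * max(|x| - lam, 0); exact as beta -> infinity.
    return F.softplus(x - lam, beta=beta) - F.softplus(-x - lam, beta=beta)


class UnfoldedISTA(nn.Module):
    """K unrolled ISTA iterations with learnable weights and thresholds."""
    def __init__(self, m, n, num_layers=8):
        super().__init__()
        self.num_layers = num_layers
        self.W1 = nn.ModuleList([nn.Linear(m, n, bias=False) for _ in range(num_layers)])
        self.W2 = nn.ModuleList([nn.Linear(n, n, bias=False) for _ in range(num_layers)])
        self.lam = nn.Parameter(torch.full((num_layers,), 0.1))

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.W1[0].out_features, device=y.device)
        for k in range(self.num_layers):
            # Classic ISTA step x <- eta_lam(x - gamma A^T (A x - y)), with A^T
            # and (I - gamma A^T A) replaced by learned matrices W1, W2.
            x = smooth_soft_threshold(self.W1[k](y) + self.W2[k](x), self.lam[k])
        return x


if __name__ == "__main__":
    m, n = 50, 200                      # measurements, signal dimension
    y = torch.randn(16, m)
    x_hat = UnfoldedISTA(m, n)(y)       # estimated sparse codes, shape (16, 200)
    print(x_hat.shape)
```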
- Classification of Data Generated by Gaussian Mixture Models Using Deep ReLU Networks [28.437011792990347]
This paper studies the binary classification of data from $\mathbb{R}^d$ generated under Gaussian Mixture Models.
We obtain, for the first time, convergence rates for deep ReLU network classifiers in this setting.
Results provide a theoretical verification of deep neural networks in practical classification problems.
arXiv Detail & Related papers (2023-08-15T20:40:42Z)
- Does a sparse ReLU network training problem always admit an optimum? [0.0]
We show that the existence of an optimal solution is not always guaranteed, especially in the context of sparse ReLU neural networks.
In particular, we first show that optimization problems involving deep networks with certain sparsity patterns do not always have optimal parameters.
arXiv Detail & Related papers (2023-06-05T08:01:50Z)
- Topology Optimization using Neural Networks with Conditioning Field Initialization for Improved Efficiency [2.575019247624295]
We show that by using a prior initial field on the unoptimized domain, the efficiency of neural network based topology optimization can be further improved.
We employ the strain energy field calculated on the initial design domain as an additional conditioning field input to the neural network throughout the optimization.
arXiv Detail & Related papers (2023-05-17T07:42:24Z)
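To make the conditioning-field idea in the entry above concrete, here is a small NumPy sketch of stacking a strain energy density field (computed once on the initial, full-material domain) with load and boundary-condition masks as network input channels. The array shapes, Voigt ordering, and normalization are assumptions for illustration; the paper's exact pipeline may differ.

```python
# Minimal sketch of building a conditioning tensor that includes a strain
# energy field from the unoptimized domain. The stress/strain arrays are
# assumed to come from a standard linear-elastic FE solve; this helper only
# combines precomputed quantities.
import numpy as np


def strain_energy_density(stress, strain):
    """0.5 * sigma : epsilon per cell; stress/strain have shape (H, W, 3) with
    components (xx, yy, xy), engineering shear strain in the last slot."""
    return 0.5 * np.sum(stress * strain, axis=-1)


def build_condition_tensor(load_mask, bc_mask, stress, strain):
    """Stack load/BC encodings with the strain energy field into an (H, W, C)
    conditioning input for a topology optimization network."""
    energy = strain_energy_density(stress, strain)
    energy = energy / (energy.max() + 1e-12)        # normalize for the network
    return np.stack([load_mask, bc_mask, energy], axis=-1)


if __name__ == "__main__":
    H, W = 64, 128
    stress = np.random.rand(H, W, 3)                # placeholder FE outputs
    strain = np.random.rand(H, W, 3)
    cond = build_condition_tensor(np.zeros((H, W)), np.zeros((H, W)), stress, strain)
    print(cond.shape)                               # (64, 128, 3)
```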
- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow one to construct model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
- On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z)
- Regularized Training of Intermediate Layers for Generative Models for Inverse Problems [9.577509224534323]
We introduce a principle that if a generative model is intended for inversion using an algorithm based on optimization of intermediate layers, it should be trained in a way that regularizes those intermediate layers.
We instantiate this principle for two notable recent inversion algorithms: Intermediate Layer Optimization and the Multi-Code GAN prior.
For both of these inversion algorithms, we introduce a new regularized GAN training algorithm and demonstrate that the learned generative model results in lower reconstruction errors across a wide range of under-sampling ratios.
arXiv Detail & Related papers (2022-03-08T20:30:49Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
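For the SE-GAN entry just above, the sketch below shows a common way to implement the self-ensembling part, with the teacher maintained as an exponential moving average (EMA) of the student. The EMA rule, decay value, and toy networks are assumptions for illustration; the adversarial and segmentation losses from the paper are not reproduced.

```python
# Minimal sketch of a mean-teacher style self-ensembling update: the teacher's
# weights track an EMA of the student's weights. This is a common
# implementation of self-ensembling, assumed here rather than quoted from the
# paper; the GAN discriminator and segmentation losses are omitted.
import copy
import torch
import torch.nn as nn


@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.999):
    """teacher <- decay * teacher + (1 - decay) * student, parameter-wise."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)


if __name__ == "__main__":
    student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 2, 1))
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)          # the teacher is never trained directly

    # After each student optimization step (supervised + adversarial losses),
    # the teacher is refreshed from the student:
    ema_update(teacher, student)
```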