Robust Topology Optimization Using Variational Autoencoders
- URL: http://arxiv.org/abs/2107.10661v1
- Date: Mon, 19 Jul 2021 20:40:51 GMT
- Title: Robust Topology Optimization Using Variational Autoencoders
- Authors: Rini Jasmine Gladstone, Mohammad Amin Nabian, Vahid Keshavarzzadeh,
Hadi Meidani
- Abstract summary: In this work, we use neural network surrogates to enable a faster solution approach via surrogate-based optimization.
We also build a Variational Autoencoder (VAE) to transform the high dimensional design space into a low dimensional one.
The resulting gradient-based optimization algorithm produces optimal designs with lower robust compliances than those observed in the training set.
- Score: 2.580765958706854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Topology Optimization is the process of finding the optimal arrangement of
materials within a design domain by minimizing a cost function, subject to some
performance constraints. Robust topology optimization (RTO) also incorporates
the effect of input uncertainties and produces a design with the best average
performance of the structure while reducing the response sensitivity to input
uncertainties. It is computationally expensive to carry out RTO using finite
element and Monte Carlo sampling. In this work, we use neural network
surrogates to enable a faster solution approach via surrogate-based
optimization and build a Variational Autoencoder (VAE) to transform the
high dimensional design space into a low dimensional one. Furthermore, finite
element solvers will be replaced by a neural network surrogate. Also, to
further facilitate the design exploration, we limit our search to a subspace,
which consists of designs that are solutions to deterministic topology
optimization problems under different realizations of input uncertainties. With
these neural network approximations, a gradient-based optimization approach is
formed to minimize the predicted objective function over the low dimensional
design subspace. We demonstrate the effectiveness of the proposed approach on
two compliance minimization problems and show that VAE performs well on
learning the features of the design from minimal training data, and that
converting the design space into a low dimensional latent space makes the
problem computationally efficient. The resulting gradient-based optimization
algorithm produces optimal designs with lower robust compliances than those
observed in the training set.
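The abstract's workflow, gradient-based minimization of a surrogate objective over a low-dimensional latent space, can be sketched as follows. Here `decode` and `surrogate` are toy stand-ins (a random linear map and a sum of squares), not the paper's trained VAE decoder or compliance surrogate:

```python
import numpy as np

rng = np.random.default_rng(0)
W_dec = rng.normal(size=(64, 8))  # toy linear "decoder" weights

def decode(z):
    """Map a low-dimensional latent vector to a high-dimensional design."""
    return np.tanh(W_dec @ z)

def surrogate(x):
    """Toy surrogate objective: a smooth proxy for robust compliance."""
    return float(np.sum(x ** 2))

def grad_fd(f, z, eps=1e-5):
    """Finite-difference gradient of f(decode(z)) w.r.t. the latent z."""
    g = np.zeros_like(z)
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        g[i] = (f(decode(z + dz)) - f(decode(z - dz))) / (2 * eps)
    return g

# Gradient descent carried out entirely in the 8-dimensional latent space,
# never in the 64-dimensional design space.
z = rng.normal(size=8)
f0 = surrogate(decode(z))
for _ in range(500):
    z -= 1e-3 * grad_fd(surrogate, z)
f_final = surrogate(decode(z))
print(f0, "->", f_final)
```

In the paper both networks are differentiable, so the gradient would come from backpropagation rather than finite differences; the sketch only illustrates why searching an 8-dimensional latent space is cheaper than searching the full design field.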
Related papers
- Diffusion Generative Inverse Design [28.04683283070957]
Inverse design refers to the problem of optimizing the input of an objective function in order to enact a target outcome.
Recent developments in learned graph neural networks (GNNs) can be used for accurate, efficient, differentiable estimation of simulator dynamics.
We show how denoising diffusion models can be used to solve inverse design problems efficiently and propose a particle sampling algorithm for further improving their efficiency.
arXiv Detail & Related papers (2023-09-05T08:32:07Z)
- DADO -- Low-Cost Query Strategies for Deep Active Design Optimization [1.6298921134113031]
We present two selection strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems.
We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance.
arXiv Detail & Related papers (2023-07-10T13:01:27Z)
- Diffusing the Optimal Topology: A Generative Optimization Approach [6.375982344506753]
Topology optimization seeks to find the best design that satisfies a set of constraints while maximizing system performance.
Traditional iterative optimization methods like SIMP can be computationally expensive and get stuck in local minima.
We propose a Generative Optimization method that integrates classic optimization like SIMP as a refining mechanism for the topology generated by a deep generative model.
arXiv Detail & Related papers (2023-03-17T03:47:10Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
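The lower-bounding idea can be illustrated with a minimal sketch: a linear model whose training loss additionally pushes predictions down at points found by gradient ascent on the model itself. The dataset, model form, and coefficients are invented for illustration, not the COMs implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy offline dataset: designs x near the origin, objective y = 1 - ||x||^2.
X = rng.normal(scale=0.3, size=(128, 2))
y = 1.0 - np.sum(X ** 2, axis=1)

def train(alpha, steps=2000, lr=0.05, eta=0.5):
    """Fit f(x) = w.x + b by gradient descent on the MSE, optionally plus a
    conservative term that lowers f at points reached by gradient ascent on
    f itself (for a linear model, eta * w away from the data)."""
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        err = X @ w + b - y
        gw = 2 * err @ X / len(X)
        gb = 2 * err.mean()
        if alpha > 0:
            # d/dw and d/db of  alpha * mean_i [ w.(x_i + eta*w) + b ]
            gw += alpha * (X.mean(axis=0) + 2 * eta * w)
            gb += alpha
        w -= lr * gw
        b -= lr * gb
    return w, b

w_plain, b_plain = train(alpha=0.0)
w_cons, b_cons = train(alpha=0.2)

# The conservative model should be more pessimistic far out of distribution.
X_ood = rng.normal(scale=5.0, size=(1000, 2))
m_plain = float((X_ood @ w_plain + b_plain).mean())
m_cons = float((X_ood @ w_cons + b_cons).mean())
print(m_plain, m_cons)
```

The point of the penalty is exactly this gap: an optimizer searching against the conservative model is not rewarded for wandering into regions the data never covered.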
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- A mechanistic-based data-driven approach to accelerate structural topology optimization through finite element convolutional neural network (FE-CNN) [5.469226380238751]
A mechanistic data-driven approach is proposed to accelerate structural topology optimization.
Our approach can be divided into two stages: offline training, and online optimization.
Numerical examples demonstrate that this approach can accelerate optimization by up to an order of magnitude in computational time.
arXiv Detail & Related papers (2021-06-25T14:11:45Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data [68.8204255655161]
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling as input data.
arXiv Detail & Related papers (2020-12-11T14:33:27Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline yields predictions of the unobserved parameters that lead to higher-quality decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Objective-Sensitive Principal Component Analysis for High-Dimensional Inverse Problems [0.0]
We present a novel approach for adaptive, differentiable parameterization of large-scale random fields.
The developed technique is based on principal component analysis (PCA) but modifies a purely data-driven basis of principal components considering objective function behavior.
Three algorithms for optimal parameter decomposition are presented and applied to an objective of 2D synthetic history matching.
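The plain data-driven PCA parameterization that the method starts from can be sketched as follows; the objective-sensitive basis modification itself is beyond this toy example, and the ensemble of 1-D "fields" is an invented stand-in for large-scale random fields:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ensemble of random fields: noisy sinusoids on a 50-point grid.
t = np.linspace(0, 1, 50)
fields = np.array([
    rng.uniform(0.5, 2.0) * np.sin(2 * np.pi * (t + rng.uniform()))
    + rng.normal(scale=0.05, size=t.size)
    for _ in range(200)
])

mean = fields.mean(axis=0)
# Principal components via SVD of the centered ensemble.
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
k = 4                # keep a few components as the low-dim parameterization
basis = Vt[:k]       # shape (k, 50)

def reconstruct(field):
    """Project a field onto the k-dim PCA parameterization and back."""
    coeff = basis @ (field - mean)
    return mean + coeff @ basis

# Relative reconstruction error of one ensemble member.
err = np.linalg.norm(fields[0] - reconstruct(fields[0])) / np.linalg.norm(fields[0])
print(err)
```

A field on 50 grid points is then controlled by k = 4 coefficients, which is what makes gradient-based history matching over the parameterization tractable; the paper's contribution is to bend this purely data-driven basis toward directions the objective function is sensitive to.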
arXiv Detail & Related papers (2020-06-02T18:51:17Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using conventional methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
- Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.