Topology Optimization using Neural Networks with Conditioning Field
Initialization for Improved Efficiency
- URL: http://arxiv.org/abs/2305.10460v1
- Date: Wed, 17 May 2023 07:42:24 GMT
- Title: Topology Optimization using Neural Networks with Conditioning Field
Initialization for Improved Efficiency
- Authors: Hongrui Chen, Aditya Joglekar, Levent Burak Kara
- Abstract summary: We show that by using a prior initial field on the unoptimized domain, the efficiency of neural network based topology optimization can be further improved.
We employ the strain energy field calculated on the initial design domain as an additional conditioning field input to the neural network throughout the optimization.
- Score: 2.575019247624295
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose conditioning field initialization for neural network based
topology optimization. In this work, we focus on (1) improving upon existing
neural network based topology optimization, (2) demonstrating that by using a
prior initial field on the unoptimized domain, the efficiency of neural network
based topology optimization can be further improved. Our approach consists of a
topology neural network that is trained on a case-by-case basis to represent
the geometry of a single topology optimization problem. It takes domain
coordinates as input and outputs the density at each coordinate, so that the
topology is represented by a continuous density field. The displacement is
solved through a finite element solver. We employ the strain energy field
calculated on the initial design domain as an additional conditioning field
input to the neural network throughout the optimization. The addition of the
strain energy field input improves the convergence speed compared to standalone
neural network based topology optimization.
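The coordinate-network-with-conditioning-field idea can be sketched as follows. This is a minimal, untrained illustration in NumPy; the analytic bump standing in for the FE-computed strain energy field, the layer sizes, and all names are hypothetical, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the strain energy field computed once by a
# finite element solve on the unoptimized (fully solid) domain;
# here a simple analytic bump centered in the unit square.
def strain_energy(xy):
    return np.exp(-10.0 * np.sum((xy - 0.5) ** 2, axis=-1, keepdims=True))

# Tiny coordinate MLP: input is (x, y) plus the conditioning field value,
# output is a density in (0, 1). Weights are random, i.e. untrained.
W1 = rng.normal(size=(3, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)

def density(xy):
    # the strain energy value is appended as an extra input feature,
    # conditioning the network on the initial design domain
    feats = np.concatenate([xy, strain_energy(xy)], axis=-1)
    h = np.tanh(feats @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid keeps rho in (0, 1)

pts = rng.uniform(size=(100, 2))  # sample domain coordinates
rho = density(pts)
print(rho.shape)  # (100, 1): one density per sampled coordinate
```

In the actual method the network weights would be updated each optimization step against a compliance objective evaluated by the FE solver, with the conditioning field held fixed.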
Related papers
- Neural Networks for Generating Better Local Optima in Topology Optimization [0.4543820534430522]
We show how neural network material discretizations can, under certain conditions, find better local optima in more challenging optimization problems.
We emphasize that the neural network material discretization's advantage comes from the interplay with its current limitations.
arXiv Detail & Related papers (2024-07-25T11:24:44Z) - DMF-TONN: Direct Mesh-free Topology Optimization using Neural Networks [4.663709549795511]
We propose a direct mesh-free method for performing topology optimization by integrating a density field approximation neural network with a displacement field approximation neural network.
We show that this direct integration approach can give comparable results to conventional topology optimization techniques.
arXiv Detail & Related papers (2023-05-06T18:04:51Z) - Concurrent build direction, part segmentation, and topology optimization
for additive manufacturing using neural networks [2.2911466677853065]
We propose a neural network-based approach to topology optimization that aims to reduce the use of support structures in additive manufacturing.
Our approach uses a network architecture that allows the simultaneous determination of an optimized (1) part segmentation, (2) topology for each part, and (3) build direction for each part.
arXiv Detail & Related papers (2022-10-04T02:17:54Z) - Acceleration techniques for optimization over trained neural network
ensembles [1.0323063834827415]
We study optimization problems where the objective function is modeled through feedforward neural networks with rectified linear unit activation.
We present a mixed-integer linear program based on existing popular big-$M$ formulations for optimizing over a single neural network.
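The big-$M$ formulation for a single ReLU neuron is the standard building block of such programs. The sketch below (plain Python, no solver, all names hypothetical) enumerates the binary variable and checks that the constraints pin the output to $\max(0, a)$:

```python
# Big-M encoding of a single ReLU neuron y = max(0, a), where the
# pre-activation a = w.x + b is assumed bounded by |a| <= M:
#   y >= a,  y >= 0,  y <= a + M*(1 - z),  y <= M*z,  z in {0, 1}
# The binary z selects the active (z=1) or inactive (z=0) piece.

def feasible_ys(a, M):
    """Return (lower, upper) bounds on y for each feasible binary z."""
    ys = []
    for z in (0, 1):
        lo = max(a, 0.0)                    # y >= a and y >= 0
        hi = min(a + M * (1 - z), M * z)    # y <= a + M(1-z) and y <= M z
        if lo <= hi:
            ys.append((lo, hi))
    return ys

M = 10.0
for a in (-3.0, -0.5, 0.0, 0.5, 3.0):
    for lo, hi in feasible_ys(a, M):
        # every feasible assignment forces y = max(0, a) exactly
        assert abs(lo - max(0.0, a)) < 1e-12 and abs(hi - max(0.0, a)) < 1e-12
print("big-M encoding recovers ReLU on all test inputs")
```

Chaining one such encoding per neuron, layer by layer, yields the mixed-integer linear program over the whole network; the tightness of the chosen $M$ bounds is what the acceleration techniques in the paper target.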
arXiv Detail & Related papers (2021-12-13T20:50:54Z) - Physics informed neural networks for continuum micromechanics [68.8204255655161]
Recently, physics informed neural networks have successfully been applied to a broad variety of problems in applied mathematics and engineering.
Because they build a global approximation, physics informed neural networks have difficulty resolving localized effects and strongly nonlinear solutions through optimization.
It is shown that the domain decomposition approach is able to accurately resolve nonlinear stress, displacement and energy fields in heterogeneous microstructures obtained from real-world $\mu$CT scans.
arXiv Detail & Related papers (2021-10-14T14:05:19Z) - Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z) - Optimization Theory for ReLU Neural Networks Trained with Normalization
Layers [82.61117235807606]
The success of deep neural networks is in part due to the use of normalization layers.
Our analysis shows how the introduction of normalization changes the optimization landscape and can enable faster convergence.
arXiv Detail & Related papers (2020-06-11T23:55:54Z) - The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural
Networks: an Exact Characterization of the Optimal Solutions [51.60996023961886]
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints.
Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into convex spaces.
arXiv Detail & Related papers (2020-06-10T15:38:30Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study large-scale distributed stochastic AUC maximization with a deep neural network.
Our algorithm requires far fewer communication rounds than existing methods while retaining comparable theoretical guarantees.
Experiments on several datasets demonstrate the effectiveness of our method and corroborate the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - TopologyGAN: Topology Optimization Using Generative Adversarial Networks
Based on Physical Fields Over the Initial Domain [2.0263791972068628]
We propose a new data-driven topology optimization model called TopologyGAN.
TopologyGAN takes advantage of various physical fields computed on the original, unoptimized material domain, as inputs to the generator of a conditional generative adversarial network (cGAN)
Compared to a baseline cGAN, TopologyGAN achieves a nearly $3\times$ reduction in the mean squared error and a $2.5\times$ reduction in the mean absolute error on test problems involving previously unseen boundary conditions.
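The conditioning mechanism amounts to stacking the precomputed physical fields as extra input channels to the generator, the way an image-to-image model stacks color planes. A minimal sketch, with placeholder arrays standing in for the real FE-computed fields and all names hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for physical fields computed on the original,
# unoptimized domain (in the paper these come from a finite element
# solve; here they are placeholder arrays of the right shape).
H, W = 64, 64
strain_energy_density = rng.random((H, W))
von_mises_stress = rng.random((H, W))
load_encoding = np.zeros((H, W))
load_encoding[0, :] = 1.0  # e.g. loads applied along the top edge

# The cGAN generator conditions on these fields by stacking them as
# input channels of a single tensor.
generator_input = np.stack(
    [strain_energy_density, von_mises_stress, load_encoding], axis=0
)
print(generator_input.shape)  # (3, 64, 64): channels x height x width
```

The channel stack is what distinguishes TopologyGAN from a baseline cGAN conditioned only on boundary condition encodings.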
arXiv Detail & Related papers (2020-03-05T14:40:11Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based methods combined with nonconvexity renders learning susceptible to the choice of initialization.
We propose fusing neighboring layers of deeper networks that are initialized with random weights.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.