DMF-TONN: Direct Mesh-free Topology Optimization using Neural Networks
- URL: http://arxiv.org/abs/2305.04107v2
- Date: Fri, 22 Sep 2023 18:59:58 GMT
- Title: DMF-TONN: Direct Mesh-free Topology Optimization using Neural Networks
- Authors: Aditya Joglekar, Hongrui Chen, Levent Burak Kara
- Abstract summary: We propose a direct mesh-free method for performing topology optimization by integrating a density field approximation neural network with a displacement field approximation neural network.
We show that this direct integration approach can give comparable results to conventional topology optimization techniques.
- Score: 4.663709549795511
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a direct mesh-free method for performing topology optimization by
integrating a density field approximation neural network with a displacement
field approximation neural network. We show that this direct integration
approach can give comparable results to conventional topology optimization
techniques, with the added advantage of seamless integration with
post-processing software and the potential for topology optimization with
objectives where meshing and Finite Element Analysis (FEA) may be expensive or
unsuitable. Our approach (DMF-TONN) takes as inputs the boundary
conditions and domain coordinates and finds the optimum density field for
minimizing the loss function of compliance and volume fraction constraint
violation. The mesh-free nature is enabled by a physics-informed displacement
field approximation neural network to solve the linear elasticity partial
differential equation and replace the FEA conventionally used for calculating
the compliance. We show that, with a suitable Fourier Features neural network
architecture and hyperparameters, the density field approximation neural
network can learn the weights representing the optimal density field for the
given domain and boundary conditions by directly backpropagating the loss
gradient through the displacement field approximation neural network. Unlike
prior work, there is no need for a sensitivity filter, an optimality criterion
method, or a separate training of the density network in each topology
optimization iteration.
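The pipeline described in the abstract can be sketched in a few lines: map domain coordinates through Fourier features, produce a density in (0, 1), and minimize compliance plus a penalized volume-fraction constraint violation. This is a minimal NumPy sketch; the layer sizes, Fourier feature scale, and penalty weight are illustrative assumptions, not the paper's hyperparameters, and a placeholder scalar stands in for the compliance that the physics-informed displacement network would supply.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, B):
    """Map domain coordinates x (n, d) through random Fourier features."""
    proj = 2.0 * np.pi * x @ B                        # (n, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)

def density_net(x, B, W, b):
    """Density field approximation: Fourier features -> linear -> sigmoid."""
    logits = fourier_features(x, B) @ W + b
    return 1.0 / (1.0 + np.exp(-logits))              # densities in (0, 1)

def loss(compliance, rho, target_vf, penalty=100.0):
    """Compliance plus a penalized volume-fraction constraint violation."""
    vf_violation = max(rho.mean() - target_vf, 0.0)
    return compliance + penalty * vf_violation**2

# toy 2D domain sampled uniformly
x = rng.uniform(size=(256, 2))
B = rng.normal(scale=10.0, size=(2, 64))              # feature frequencies
W = rng.normal(scale=0.1, size=(128, 1))
b = np.zeros(1)

rho = density_net(x, B, W, b)
# in DMF-TONN the compliance comes from the displacement network solving
# the linear elasticity PDE; here a fixed scalar stands in for it
L = loss(compliance=1.0, rho=rho, target_vf=0.3)
```

In the actual method, the loss gradient would be backpropagated through both networks jointly, which is what removes the need for a separate density-network training loop per iteration.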
Related papers
- G-Adaptive mesh refinement -- leveraging graph neural networks and differentiable finite element solvers [21.82887690060956]
Mesh relocation (r-adaptivity) seeks to optimise the position of a fixed number of mesh points to obtain the best FE solution accuracy.
Recent machine learning approaches to r-adaptivity have mainly focused on the construction of fast surrogates for such classical methods.
Our new approach combines a graph neural network (GNN) powered architecture, with training based on direct minimisation of the FE solution error.
arXiv Detail & Related papers (2024-07-05T13:57:35Z)
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
- N-Adaptive Ritz Method: A Neural Network Enriched Partition of Unity for Boundary Value Problems [1.2200609701777907]
This work introduces a novel neural network-enriched Partition of Unity (NN-PU) approach for solving boundary value problems via artificial neural networks.
The NN enrichment is constructed by combining pre-trained feature-encoded NN blocks with an untrained NN block.
The proposed method offers accurate solutions while notably reducing the computational cost compared to the conventional adaptive refinement in the mesh-based methods.
arXiv Detail & Related papers (2024-01-16T18:11:14Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- Topology Optimization using Neural Networks with Conditioning Field Initialization for Improved Efficiency [2.575019247624295]
We show that by using a prior initial field on the unoptimized domain, the efficiency of neural network based topology optimization can be further improved.
We employ the strain energy field calculated on the initial design domain as an additional conditioning field input to the neural network throughout the optimization.
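The conditioning idea above amounts to feeding a precomputed field alongside the coordinates at every optimization iteration. A minimal sketch, assuming the strain energy field has been evaluated at the same sample points as the coordinates; the normalization step and function name are illustrative assumptions:

```python
import numpy as np

def conditioned_input(coords, strain_energy):
    """Concatenate per-point coordinates with a conditioning field so the
    network sees it at every optimization iteration."""
    # normalize the conditioning field to [0, 1] so its scale matches
    # the (unit-square) coordinates
    se = (strain_energy - strain_energy.min()) / (np.ptp(strain_energy) + 1e-12)
    return np.concatenate([coords, se[:, None]], axis=1)

coords = np.random.default_rng(1).uniform(size=(100, 2))
strain_energy = np.random.default_rng(2).uniform(size=100)
x_in = conditioned_input(coords, strain_energy)  # one extra input channel
```

The network's input dimension grows by one per conditioning field, while the rest of the topology optimization loop is unchanged.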
arXiv Detail & Related papers (2023-05-17T07:42:24Z)
- Concurrent build direction, part segmentation, and topology optimization for additive manufacturing using neural networks [2.2911466677853065]
We propose a neural network-based approach to topology optimization that aims to reduce the use of support structures in additive manufacturing.
Our approach uses a network architecture that allows the simultaneous determination of an optimized: (1) part segmentation, (2) the topology of each part, and (3) the build direction of each part.
arXiv Detail & Related papers (2022-10-04T02:17:54Z)
- Physics informed neural networks for continuum micromechanics [68.8204255655161]
Recently, physics informed neural networks have successfully been applied to a broad variety of problems in applied mathematics and engineering.
Due to their global approximation, physics-informed neural networks have difficulty resolving localized effects and strongly nonlinear solutions via optimization.
It is shown that the domain decomposition approach is able to accurately resolve nonlinear stress, displacement, and energy fields in heterogeneous microstructures obtained from real-world $\mu$CT-scans.
arXiv Detail & Related papers (2021-10-14T14:05:19Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- De-homogenization using Convolutional Neural Networks [1.0323063834827415]
This paper presents a deep learning-based de-homogenization method for structural compliance minimization.
For an appropriate choice of parameters, the de-homogenized designs perform within 7-25% of the homogenization-based solution.
arXiv Detail & Related papers (2021-05-10T09:50:06Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.