De-homogenization using Convolutional Neural Networks
- URL: http://arxiv.org/abs/2105.04232v1
- Date: Mon, 10 May 2021 09:50:06 GMT
- Title: De-homogenization using Convolutional Neural Networks
- Authors: Martin O. Elingaard, Niels Aage, J. Andreas Bærentzen, Ole Sigmund
- Abstract summary: This paper presents a deep learning-based de-homogenization method for structural compliance minimization.
For an appropriate choice of parameters, the de-homogenized designs perform within $7-25\%$ of the homogenization-based solution.
- Score: 1.0323063834827415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a deep learning-based de-homogenization method for
structural compliance minimization. By using a convolutional neural network to
parameterize the mapping from a set of lamination parameters on a coarse mesh
to a one-scale design on a fine mesh, we avoid solving the least-squares
problems associated with traditional de-homogenization approaches and save time
correspondingly. To train the neural network, a two-step custom loss function
has been developed which ensures a periodic output field that follows the local
lamination orientations. A key feature of the proposed method is that the
training is carried out without any use of or reference to the underlying
structural optimization problem, which renders the proposed method robust and
insensitive with respect to domain size, boundary conditions, and loading. A
post-processing procedure utilizing a distance transform on the output field
skeleton is used to project the desired lamination widths onto the output field
while ensuring a predefined minimum length-scale and volume fraction. To
demonstrate that the deep learning approach has excellent generalization
properties, numerical examples are shown for several different load and
boundary conditions. For an appropriate choice of parameters, the
de-homogenized designs perform within $7-25\%$ of the homogenization-based
solution at a fraction of the computational cost. With several options for
further improvements, the scheme may provide the basis for future interactive
high-resolution topology optimization.
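As a rough illustration of the post-processing step described above (a hypothetical sketch, not the authors' implementation), a distance transform on the skeleton of the output field gives every pixel its distance to the nearest skeleton pixel; thresholding that distance at half the desired lamination width, floored at a minimum length-scale, projects the widths onto a binary design. All names and parameters below are illustrative:

```python
import numpy as np

def project_widths(skeleton, width, min_width):
    """Project a desired lamination width onto a binary skeleton field.

    skeleton  : 2D bool array marking one-pixel-wide centrelines
    width     : desired lamination width (in pixels)
    min_width : minimum admissible length-scale (in pixels)

    Returns a binary design with material wherever the distance to the
    nearest skeleton pixel is at most half the length-scale-limited width.
    """
    # Brute-force Euclidean distance transform; in practice
    # scipy.ndimage.distance_transform_edt would be the natural choice.
    pts = np.argwhere(skeleton)
    ii, jj = np.indices(skeleton.shape)
    dist = np.sqrt(
        (ii[..., None] - pts[:, 0]) ** 2 + (jj[..., None] - pts[:, 1]) ** 2
    ).min(axis=-1)
    return dist <= max(width, min_width) / 2.0
```

In a full pipeline the width argument would itself be tuned (e.g. by bisection) so that the resulting design meets a prescribed volume fraction.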
Related papers
- A neural network approach for solving the Monge-Ampère equation with transport boundary condition [0.0]
This paper introduces a novel neural network-based approach to solving the Monge-Ampère equation with the transport boundary condition.
We leverage multilayer perceptron networks to learn approximate solutions by minimizing a loss function that encompasses the equation's residual, boundary conditions, and convexity constraints.
arXiv Detail & Related papers (2024-10-25T11:54:00Z)
- Improving Generalization of Deep Neural Networks by Optimum Shifting [33.092571599896814]
We propose a novel method called "optimum shifting", which changes the parameters of a neural network from a sharp minimum to a flatter one.
Our method is based on the observation that when the input and output of a neural network are fixed, the matrix multiplications within the network can be treated as systems of under-determined linear equations.
arXiv Detail & Related papers (2024-05-23T02:31:55Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues: following the ODE produces uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver which prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- A mechanistic-based data-driven approach to accelerate structural topology optimization through finite element convolutional neural network (FE-CNN) [5.469226380238751]
A mechanistic data-driven approach is proposed to accelerate structural topology optimization.
Our approach can be divided into two stages: offline training, and online optimization.
Numerical examples demonstrate that this approach can accelerate optimization by up to an order of magnitude in computational time.
arXiv Detail & Related papers (2021-06-25T14:11:45Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource limited devices.
Previous unstructured or structured weight pruning methods can hardly truly accelerate inference.
We propose a generalized weight unification framework at a hardware compatible micro-structured level to achieve high amount of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
- NTopo: Mesh-free Topology Optimization using Implicit Neural Representations [35.07884509198916]
We present a novel machine learning approach to tackle topology optimization problems.
We use multilayer perceptrons (MLPs) to parameterize both density and displacement fields.
As we show through our experiments, a major benefit of our approach is that it enables self-supervised learning of continuous solution spaces.
arXiv Detail & Related papers (2021-02-22T05:25:22Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data [68.8204255655161]
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling as input data.
arXiv Detail & Related papers (2020-12-11T14:33:27Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can be trained to find a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
- An Outer-approximation Guided Optimization Approach for Constrained Neural Network Inverse Problems [0.0]
Constrained neural network inverse problems refer to the optimization problem of finding the best set of input values for a given trained neural network.
This paper analyzes the characteristics of optimal solutions of neural network inverse problems with rectified activation units.
Experiments demonstrate the superiority of the proposed algorithm compared to a projected gradient method.
arXiv Detail & Related papers (2020-02-24T17:49:24Z)
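To make the "optimum shifting" observation above concrete: when a layer's input and output are held fixed, its weight matrix is one solution of an under-determined linear system, so the weights can be re-chosen (for instance, to the minimum-norm solution) without changing the layer's output on that input. A hypothetical numpy sketch of this idea, not the paper's actual algorithm:

```python
import numpy as np

def shift_layer_weights(W, x):
    """Given a layer's weights W and a fixed input x, return the
    minimum-Frobenius-norm weight matrix that produces the same
    output y = W @ x.  Each row w_i solves the under-determined
    system  w_i . x = y_i, whose least-norm solution is
    y_i * x / (x . x)."""
    y = W @ x
    return np.outer(y, x) / (x @ x)
```

Row-wise, the least-norm solution can never have larger norm than the original row (which also satisfies the constraint), so the shifted matrix has Frobenius norm no greater than the original while leaving the layer's output on x unchanged.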
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.