Deep convolutional neural network for shape optimization using level-set
approach
- URL: http://arxiv.org/abs/2201.06210v1
- Date: Mon, 17 Jan 2022 04:41:51 GMT
- Title: Deep convolutional neural network for shape optimization using level-set
approach
- Authors: Wrik Mallik, Neil Farvolden, Rajeev K. Jaiman and Jasmin Jelovica
- Abstract summary: This article presents a reduced-order modeling methodology for shape optimization applications via deep convolutional neural networks (CNNs)
A CNN-based reduced-order model (ROM) is constructed in a completely data-driven manner, and suited for non-intrusive applications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This article presents a reduced-order modeling methodology for shape
optimization applications via deep convolutional neural networks (CNNs). The
CNN provides a nonlinear mapping between the shapes and their associated
attributes while conserving the equivariance of these attributes to the shape
translations. To implicitly represent complex shapes via a CNN-applicable
Cartesian structured grid, a level-set method is employed. The CNN-based
reduced-order model (ROM) is constructed in a completely data-driven manner,
and suited for non-intrusive applications. We demonstrate our complete
ROM-based shape optimization on a gradient-based three-dimensional shape
optimization problem to minimize the induced drag of a wing in potential flow.
We show a satisfactory comparison between ROM-based optima for the aerodynamic
coefficients compared to their counterparts obtained via a potential flow
solver. The predicted behavior of our ROM-based global optima closely matches
the theoretical predictions. We also present the learning mechanism of the deep
CNN model in a physically interpretable manner. The CNN-ROM-based shape
optimization algorithm exhibits significant computational efficiency compared
to full order model-based online optimization applications. Thus, it promises a
tractable solution for shape optimization of complex configurations and physical
problems.
Related papers
- The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are used in a wide range of applications.
In this paper we examine convex neural network recovery models.
We show that all stationary points of the nonconvex objective can be characterized as the global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Mixed-Integer Optimisation of Graph Neural Networks for Computer-Aided
Molecular Design [4.593587844188084]
ReLU neural networks have been modelled as constraints in mixed integer linear programming (MILP)
We propose a formulation for ReLU Graph Convolutional Neural Networks and a MILP formulation for ReLU GraphSAGE models.
These formulations enable solving optimisation problems with trained GNNs embedded to global optimality.
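The standard trick behind such MILP formulations is the big-M encoding of each ReLU unit, where a binary variable switches the neuron between active and inactive regimes. The sketch below (not the paper's GraphSAGE formulation; the tolerance and bound `M` are illustrative) checks that the true ReLU output always satisfies the encoding:

```python
import numpy as np

def relu_bigM_constraints(pre, y, z, M, tol=1e-9):
    """Check the standard big-M MILP encoding of y = ReLU(pre) for a
    given pre-activation pre, candidate output y, and binary z."""
    return (y >= pre - tol                  # y >= Wx + b
            and y <= pre + M * (1 - z) + tol  # tight when z = 1 (active)
            and y <= M * z + tol              # forces y = 0 when z = 0
            and y >= -tol)                    # y >= 0

rng = np.random.default_rng(0)
M = 10.0  # a valid bound on |pre-activation| over the input domain
for pre in rng.uniform(-5, 5, size=100):
    y = max(pre, 0.0)          # true ReLU output
    z = 1 if pre > 0 else 0    # binary "neuron active" indicator
    assert relu_bigM_constraints(pre, y, z, M)
print("big-M encoding consistent with ReLU on all samples")
```

Embedding these linear constraints for every neuron of a trained network is what lets an MILP solver certify global optimality over the network's input space.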
arXiv Detail & Related papers (2023-12-02T21:10:18Z) - Joint inference and input optimization in equilibrium networks [68.63726855991052]
A deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
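The fixed-point computation that defines a deep equilibrium model can be sketched with a single tanh layer. The weights here are random and scaled to keep the layer contractive so that plain fixed-point iteration converges (real DEQs typically use Newton or Anderson-type solvers):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# One nonlinear layer f(z, x) = tanh(W z + U x + b); the 0.3 scaling
# keeps ||W|| < 1 so the map is contractive and iteration converges.
W = 0.3 * rng.standard_normal((d, d)) / np.sqrt(d)
U = rng.standard_normal((d, d)) / np.sqrt(d)
b = rng.standard_normal(d)
x = rng.standard_normal(d)

def f(z, x):
    return np.tanh(W @ z + U @ x + b)

# The DEQ "output" is the fixed point z* = f(z*, x), found by
# iterating the single layer instead of stacking explicit layers.
z = np.zeros(d)
for _ in range(200):
    z_next = f(z, x)
    if np.linalg.norm(z_next - z) < 1e-10:
        break
    z = z_next

print(np.linalg.norm(f(z, x) - z))  # ~0: z is a fixed point
```

Because the output is defined implicitly, gradients with respect to both the weights and the input `x` flow through the same fixed-point condition, which is the synergy between inference and input optimization the summary refers to.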
arXiv Detail & Related papers (2021-11-25T19:59:33Z) - ResNet-LDDMM: Advancing the LDDMM Framework Using Deep Residual Networks [86.37110868126548]
In this work, we make use of deep residual neural networks to solve the non-stationary ODE (flow equation) based on an Euler discretization scheme.
We illustrate these ideas on diverse registration problems of 3D shapes under complex topology-preserving transformations.
arXiv Detail & Related papers (2021-02-16T04:07:13Z) - Enhanced data efficiency using deep neural networks and Gaussian
processes for aerodynamic design optimization [0.0]
Adjoint-based optimization methods are attractive for aerodynamic shape design.
They can become prohibitively expensive when multiple optimization problems are being solved.
We propose a machine learning enabled, surrogate-based framework that replaces the expensive adjoint solver.
arXiv Detail & Related papers (2020-08-15T15:09:21Z) - Implicit Convex Regularizers of CNN Architectures: Convex Optimization
of Two- and Three-Layer Networks in Polynomial Time [70.15611146583068]
We study training of Convolutional Neural Networks (CNNs) with ReLU activations.
We introduce exact convex optimization formulations with polynomial complexity with respect to the number of data samples, the number of neurons, and the data dimension.
arXiv Detail & Related papers (2020-06-26T04:47:20Z) - Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv)
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z) - Deep learning-based topological optimization for representing a
user-specified design area [0.060917028769172814]
We propose a new deep learning model to generate an optimized structure for a given design domain and other boundary conditions without iteration.
The resolution of the optimized structure is 32 × 32 pixels, and the design conditions are the design area, volume fraction, distribution of external forces, and load.
Compared with a CNN model that does not use batch normalization (BN) and SPADE, our proposed model achieves smaller mean absolute error (MAE), mean compliance error, and volume error relative to the optimized topology generated by MATLAB code.
arXiv Detail & Related papers (2020-04-11T18:54:07Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2-5 orders of magnitude compared with directly using conventional optimization methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.