Physics-consistent deep learning for structural topology optimization
- URL: http://arxiv.org/abs/2012.05359v1
- Date: Wed, 9 Dec 2020 23:05:55 GMT
- Title: Physics-consistent deep learning for structural topology optimization
- Authors: Jaydeep Rade, Aditya Balu, Ethan Herron, Jay Pathak, Rishikesh Ranade,
Soumik Sarkar, Adarsh Krishnamurthy
- Abstract summary: Topology optimization has emerged as a popular approach to refine a component's design and increase its performance.
Current state-of-the-art topology optimization frameworks are compute-intensive.
In this paper, we explore a deep learning-based framework for performing topology optimization for three-dimensional geometries with a reasonably fine (high) resolution.
- Score: 8.391633158275692
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Topology optimization has emerged as a popular approach to refine a
component's design and increase its performance. However, current
state-of-the-art topology optimization frameworks are compute-intensive, mainly
due to multiple finite element analysis iterations required to evaluate the
component's performance during the optimization process. Recently, machine
learning-based topology optimization methods have been explored by researchers
to alleviate this issue. However, previous approaches have mainly been
demonstrated on simple two-dimensional applications with low-resolution
geometry. Further, current approaches are based on a single machine learning
model for end-to-end prediction, which requires a large dataset for training.
These challenges make it non-trivial to extend the current approaches to higher
resolutions. In this paper, we explore a deep learning-based framework for
performing topology optimization for three-dimensional geometries with a
reasonably fine (high) resolution. We are able to achieve this by training
multiple networks, each trying to learn a different aspect of the overall
topology optimization methodology. We demonstrate the application of our
framework on both 2D and 3D geometries. The results show that our approach
predicts the final optimized design better than current ML-based topology
optimization methods.
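The abstract describes splitting the end-to-end prediction across multiple networks, each learning a different aspect of the topology optimization pipeline. As a rough illustration of that idea only (not the authors' released code; the two-stage split, layer widths, and input channels below are assumptions), a minimal PyTorch sketch might look like:

```python
# Hypothetical sketch (not the authors' architecture): a two-network pipeline,
# assuming one network advances an intermediate density field and a second
# refines it toward a near-binary design. Shapes and channels are illustrative.
import torch
import torch.nn as nn

class DensityPropagator(nn.Module):
    """Predicts a later-iteration density field from an early optimization state."""
    def __init__(self, in_channels=2, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, 1, 3, padding=1), nn.Sigmoid(),  # densities in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class DesignRefiner(nn.Module):
    """Sharpens the propagated density field into the final design."""
    def __init__(self, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rho):
        return self.net(rho)

if __name__ == "__main__":
    # Toy 3D design domain: batch of 1, two input fields, 32^3 voxels.
    state = torch.rand(1, 2, 32, 32, 32)
    propagator, refiner = DensityPropagator(), DesignRefiner()
    final_design = refiner(propagator(state))
    print(final_design.shape)  # torch.Size([1, 1, 32, 32, 32])
```

In the paper's setting, intermediate quantities from the optimization process would presumably replace the random tensors used here, with each network trained on its own sub-task rather than a single end-to-end mapping.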
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective to investigate the design of large language models (LLMs)-based prompts.
We identify two pivotal factors in model parameter learning: update direction and update method.
In particular, we borrow the theoretical framework and learning methods from gradient-based optimization to design improved strategies.
arXiv Detail & Related papers (2024-02-27T15:05:32Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- A Survey on Multi-Objective based Parameter Optimization for Deep Learning [1.3223682837381137]
We focus on exploring the effectiveness of multi-objective optimization strategies for parameter optimization in conjunction with deep neural networks.
The two methods are combined to provide valuable insights into the generation of predictions and analysis in multiple applications.
arXiv Detail & Related papers (2023-05-17T07:48:54Z)
- Tile Networks: Learning Optimal Geometric Layout for Whole-page Recommendation [14.951408879079272]
We show it is possible to solve configuration optimization problems for whole-page recommendation using reinforcement learning.
The proposed Tile Networks is a neural architecture that optimizes 2D geometric configurations by arranging items in proper positions.
arXiv Detail & Related papers (2023-03-03T02:18:55Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- RL-PGO: Reinforcement Learning-based Planar Pose-Graph Optimization [1.4884785898657995]
This paper presents a state-of-the-art Deep Reinforcement Learning (DRL) based environment and a proposed agent for 2D pose-graph optimization.
We demonstrate that the pose-graph optimization problem can be modeled as a partially observable Markov Decision Process and evaluate performance on real-world and synthetic datasets.
arXiv Detail & Related papers (2022-02-26T20:10:14Z)
- Real-Time Topology Optimization in 3D via Deep Transfer Learning [0.0]
We introduce a transfer learning method based on a convolutional neural network.
We show it can handle high-resolution 3D design domains of various shapes and topologies.
Our experiments achieved an average binary accuracy of around 95% at real-time prediction rates.
arXiv Detail & Related papers (2021-02-11T21:09:58Z)
- An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data [68.8204255655161]
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling as input data.
arXiv Detail & Related papers (2020-12-11T14:33:27Z)
- A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning [95.85269649177336]
Zeroth-order (ZO) optimization iteratively performs three major steps: gradient estimation, descent direction computation, and solution update (a minimal sketch of this loop appears after this list).
We demonstrate promising applications of ZO optimization, such as evaluating and generating explanations from black-box deep learning models, and efficient online sensor management.
arXiv Detail & Related papers (2020-06-11T06:50:35Z)
- Objective-Sensitive Principal Component Analysis for High-Dimensional Inverse Problems [0.0]
We present a novel approach for adaptive, differentiable parameterization of large-scale random fields.
The developed technique is based on principal component analysis (PCA) but modifies a purely data-driven basis of principal components considering objective function behavior.
Three algorithms for optimal parameter decomposition are presented and applied to an objective of 2D synthetic history matching.
arXiv Detail & Related papers (2020-06-02T18:51:17Z)
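For the zeroth-order optimization entry above, the following is a minimal NumPy sketch (not taken from the cited primer) of the three-step loop it names, using a two-point random-direction gradient estimator; the step size, smoothing radius, and number of probing directions are arbitrary choices:

```python
# Hypothetical sketch: one zeroth-order (ZO) iteration built from function
# evaluations only, with a two-point random-direction gradient estimator.
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_dirs=20, rng=None):
    """Estimate grad f(x) from function values only (no analytic gradient)."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)          # random probing direction
        grad += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return grad / num_dirs

def zo_step(f, x, lr=0.05, **kwargs):
    """One ZO iteration: gradient estimation, descent direction, solution update."""
    g = zo_gradient(f, x, **kwargs)               # 1) gradient estimation
    direction = -g                                # 2) descent direction
    return x + lr * direction                     # 3) solution update

if __name__ == "__main__":
    f = lambda x: np.sum((x - 1.0) ** 2)          # black-box objective
    x = np.zeros(5)
    for _ in range(200):
        x = zo_step(f, x)
    print(np.round(x, 2))                         # approaches the optimum at 1.0
```

This gradient-free estimator is what makes ZO methods applicable to black-box objectives, such as querying a deep model for explanations, where only function evaluations are available.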
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.