Topology Optimization via Machine Learning and Deep Learning: A Review
- URL: http://arxiv.org/abs/2210.10782v2
- Date: Mon, 5 Jun 2023 15:01:43 GMT
- Title: Topology Optimization via Machine Learning and Deep Learning: A Review
- Authors: Seungyeon Shin, Dongju Shin, Namwoo Kang
- Abstract summary: Topology optimization (TO) is a method of deriving an optimal design that satisfies a given load and boundary conditions within a design domain.
This study reviews and analyzes previous research on machine learning-based TO (MLTO).
- Score: 4.447467536572626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Topology optimization (TO) is a method of deriving an optimal design that
satisfies a given load and boundary conditions within a design domain. This
method enables effective design without an initial design, but its use has been
limited by high computational costs. At the same time, machine learning (ML)
methodology including deep learning has made great progress in the 21st
century, and accordingly, many studies have been conducted to enable effective
and rapid optimization by applying ML to TO. Therefore, this study reviews and
analyzes previous research on ML-based TO (MLTO). Two different perspectives of
MLTO are used to review studies: (1) TO and (2) ML perspectives. The TO
perspective addresses "why" to use ML for TO, while the ML perspective
addresses "how" to apply ML to TO. In addition, the limitations of current MLTO
research and future research directions are examined.
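As a toy illustration of the material-distribution idea behind density-based TO (a hypothetical sketch, not from the paper; the names `E`, `F`, `V` and the exponent `p` are assumptions), the snippet below shows SIMP-style penalization on a two-member structure: raising member stiffness to a power p > 1 makes intermediate "gray" densities inefficient, so the compliance-optimal design concentrates material in one member, which is precisely a topology decision.

```python
# Hypothetical two-member sketch of SIMP-style penalization (illustrative only).
# Two parallel members share a material budget V; each member's stiffness is
# penalized as k_i = E * x_i**p. For p > 1, splitting the volume evenly gives
# less total stiffness than concentrating it, so the optimum is a 0/1 layout.

E, F, V = 1.0, 1.0, 1.0  # Young's modulus, applied load, volume budget

def compliance(x1, p):
    """Compliance F**2 / k_total with SIMP-penalized member stiffnesses."""
    k_total = E * (x1**p + (V - x1)**p)
    return F**2 / k_total

gray = compliance(0.5, 3)      # half the volume in each member
solid = compliance(0.999, 3)   # nearly all volume in member 1
print(f"p=3: gray design compliance {gray:.2f} vs near-solid {solid:.2f}")
# With p = 1 (no penalization) every volume split performs identically.
```

With p = 1 the compliance is the same for any split, so nothing pushes the design toward solid/void; the penalization is what turns a sizing problem into a topology one.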
Related papers
- Common pitfalls to avoid while using multiobjective optimization in machine learning [1.2499537119440245]
There has been an increasing interest in exploring the application of multiobjective optimization (MOO) in machine learning (ML).
Despite its potential, there is a noticeable lack of satisfactory literature that could serve as an entry-level guide for ML practitioners who want to use MOO.
We critically review previous studies, particularly those involving MOO in deep learning (using Physics-Informed Neural Networks (PINNs) as a guiding example) and identify misconceptions that highlight the need for a better grasp of MOO principles in ML.
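One such misconception can be shown in a few lines (a hypothetical example, not taken from the paper): minimizing a weighted sum of objectives, the default way losses are combined in practice, cannot reach Pareto-optimal trade-offs that lie on a non-convex part of the front.

```python
# Hypothetical illustration of a common MOO pitfall: minimizing
# w*f1 + (1-w)*f2 misses Pareto-optimal points on non-convex parts of the
# front. Point C below is Pareto-optimal, yet no weight w ever selects it.
points = {"A": (0.0, 1.0), "B": (1.0, 0.0), "C": (0.6, 0.6)}

def dominated(p, others):
    """True if some other point is at least as good in both objectives."""
    return any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in others)

pareto = [name for name, p in points.items() if not dominated(p, points.values())]

winners = set()  # points that minimize the weighted sum for some w in [0, 1]
for w in [i / 100 for i in range(101)]:
    winners.add(min(points, key=lambda n: w * points[n][0] + (1 - w) * points[n][1]))

print("Pareto-optimal:", pareto)                       # A, B, and C
print("Reachable via weighted sum:", sorted(winners))  # C is never selected
```

The weighted sum of C is 0.6 for every w, while the better of A and B always scores min(w, 1 - w) <= 0.5, so C never wins despite being non-dominated.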
arXiv Detail & Related papers (2024-05-02T17:12:25Z)
- Towards Optimal Learning of Language Models [124.65669486710992]
We present a theory for the optimal learning of language models (LMs).
We derive a theorem, named Learning Law, to reveal the properties of the dynamics in the optimal learning process under our objective.
We empirically verify that the optimal learning of LMs essentially stems from the improvement of the coefficients in the scaling law of LMs.
arXiv Detail & Related papers (2024-02-27T18:52:19Z)
- Learning to optimize by multi-gradient for multi-objective optimization [0.0]
We introduce a new automatic learning paradigm for optimizing MOO problems, and propose a multi-gradient learning to optimize (ML2O) method.
As a learning-based method, ML2O acquires knowledge of local landscapes by leveraging information from the current step.
We show that our learned optimizer outperforms hand-designed competitors on training multi-task learning (MTL) neural networks.
arXiv Detail & Related papers (2023-11-01T14:55:54Z)
- Symbolic Learning to Optimize: Towards Interpretability and Scalability [113.23813868412954]
Recent studies on Learning to Optimize (L2O) suggest a promising path to automating and accelerating the optimization procedure for complicated tasks.
Existing L2O models parameterize optimization rules by neural networks, and learn those numerical rules via meta-training.
In this paper, we establish a holistic symbolic representation and analysis framework for L2O.
We propose a lightweight L2O model that can be meta-trained on large-scale problems and outperforms human-designed and tuned optimizers.
arXiv Detail & Related papers (2022-03-13T06:04:25Z)
- MAML is a Noisy Contrastive Learner [72.04430033118426]
Model-agnostic meta-learning (MAML) is one of the most popular and widely-adopted meta-learning algorithms nowadays.
We provide a new perspective on the working mechanism of MAML and discover that MAML is analogous to a meta-learner using a supervised contrastive objective function.
We propose a simple but effective technique, zeroing trick, to alleviate such interference.
arXiv Detail & Related papers (2021-06-29T12:52:26Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
This requires new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Learning by Design: Structuring and Documenting the Human Choices in Machine Learning Development [6.903929927172917]
We present a method consisting of eight design questions that outline the deliberation and normative choices going into creating a machine learning model.
Our method affords several benefits, such as supporting critical assessment through methodological transparency.
We believe that our method can help ML practitioners structure and justify their choices and assumptions when developing ML models.
arXiv Detail & Related papers (2021-05-03T08:47:45Z)
- Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
- Stress Testing of Meta-learning Approaches for Few-shot Learning [2.733700237741334]
Meta-learning (ML) has emerged as a promising learning method under resource constraints such as few-shot learning.
We measure the performance of ML approaches for few-shot learning against increasing task complexity.
arXiv Detail & Related papers (2021-01-21T13:00:10Z)
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to learn patterns from big data with comparable performance efficiently.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.