Enhancing Inverse Problem Solutions with Accurate Surrogate Simulators
and Promising Candidates
- URL: http://arxiv.org/abs/2304.13860v2
- Date: Fri, 17 Nov 2023 07:54:27 GMT
- Title: Enhancing Inverse Problem Solutions with Accurate Surrogate Simulators
and Promising Candidates
- Authors: Akihiro Fujii, Hideki Tsunashima, Yoshihiro Fukuhara, Koji Shimizu,
Satoshi Watanabe
- Abstract summary: The impact of surrogate simulators' accuracy on the solutions in the neural adjoint (NA) method remains uncertain.
We develop an extension of the NA method, named Neural Lagrangian (NeuLag) method, capable of efficiently optimizing a sufficient number of solution candidates.
- Score: 0.7499722271664147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep-learning inverse techniques have attracted significant attention in
recent years. Among them, the neural adjoint (NA) method, which employs a
neural network surrogate simulator, has demonstrated impressive performance in
the design tasks of artificial electromagnetic materials (AEM). However, the
impact of the surrogate simulators' accuracy on the solutions in the NA method
remains uncertain. Furthermore, achieving sufficient optimization becomes
challenging in this method when the surrogate simulator is large, and
computational resources are limited. Additionally, the behavior under
constraints has not been studied, despite its importance from the engineering
perspective. In this study, we investigated the impact of surrogate simulators'
accuracy on the solutions and discovered that the more accurate the surrogate
simulator is, the better the solutions become. We then developed an extension
of the NA method, named Neural Lagrangian (NeuLag) method, capable of
efficiently optimizing a sufficient number of solution candidates. We then
demonstrated that the NeuLag method can find optimal solutions even when
handling sufficient candidates is difficult due to the use of a large and
accurate surrogate simulator. The resimulation errors of the NeuLag method were
approximately 1/50 of those of previous methods for three AEM tasks. Finally,
we performed optimization under constraint using NA and NeuLag, and confirmed
their potential in optimization with soft or hard constraints. We believe our
method holds potential in areas that require large and accurate surrogate
simulators.
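As the abstract describes, the NA method fixes a trained surrogate simulator and optimizes its inputs by gradient descent from many candidate starting points, keeping the best candidates. A minimal sketch of that idea, with a toy analytic surrogate standing in for a trained neural network (the quadratic surrogate and all names here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def surrogate(x):
    # Toy differentiable "surrogate simulator": y = x^2
    # (stands in for a trained neural network forward model)
    return x ** 2

def surrogate_grad(x):
    # Analytic gradient of the toy surrogate (autodiff in a real setting)
    return 2 * x

def neural_adjoint(y_target, n_candidates=8, steps=200, lr=0.01, seed=0):
    """Gradient-descend many candidate inputs through a frozen surrogate
    to match a target output, then keep the best candidate (the NA idea)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3.0, 3.0, size=n_candidates)  # random starting candidates
    for _ in range(steps):
        residual = surrogate(x) - y_target           # prediction error per candidate
        grad = 2 * residual * surrogate_grad(x)      # chain rule: d/dx (f(x) - y)^2
        x -= lr * grad                               # update candidates in parallel
    losses = (surrogate(x) - y_target) ** 2
    return x[np.argmin(losses)]                      # most promising candidate

best = neural_adjoint(4.0)  # seek x with surrogate(x) ≈ 4, i.e. x ≈ ±2
```

The NeuLag extension additionally concentrates the optimization budget on a few promising candidates and handles constraints via Lagrangian terms; this sketch only shows the shared multi-start gradient-descent core.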
Related papers
- Synergistic Development of Perovskite Memristors and Algorithms for Robust Analog Computing [53.77822620185878]
We propose a synergistic methodology to concurrently optimize perovskite memristor fabrication and develop robust analog DNNs.
We develop "BayesMulti", a training strategy utilizing BO-guided noise injection to improve the resistance of analog DNNs to memristor imperfections.
Our integrated approach enables use of analog computing in much deeper and wider networks, achieving up to 100-fold improvements.
arXiv Detail & Related papers (2024-12-03T19:20:08Z) - Large Language Models as Surrogate Models in Evolutionary Algorithms: A Preliminary Study [5.6787965501364335]
Surrogate-assisted selection is a core step in evolutionary algorithms to solve expensive optimization problems.
Traditionally, this has relied on conventional machine learning methods, leveraging historical evaluations to predict the performance of new solutions.
In this work, we propose a novel surrogate model based purely on LLM inference capabilities, eliminating the need for training.
arXiv Detail & Related papers (2024-06-15T15:54:00Z) - PETScML: Second-order solvers for training regression problems in Scientific Machine Learning [0.22499166814992438]
In recent years, we have witnessed the emergence of scientific machine learning as a data-driven tool for scientific analysis.
We introduce software built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc) to bridge the gap between deep-learning software and conventional machine-learning techniques.
arXiv Detail & Related papers (2024-03-18T18:59:42Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as the loss function, driving solution estimates toward primal feasibility and emulating a Dual Ascent method.
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z) - Exploring the effectiveness of surrogate-assisted evolutionary
algorithms on the batch processing problem [0.0]
This paper introduces a simulation of a well-known batch processing problem in the literature.
Evolutionary algorithms such as the Genetic Algorithm (GA) and Differential Evolution (DE) are used to find the optimal schedule for the simulation.
We then compare the quality of solutions obtained by the surrogate-assisted versions of the algorithms against the baseline algorithms.
arXiv Detail & Related papers (2022-10-31T09:00:39Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Constrained multi-objective optimization of process design parameters in
settings with scarce data: an application to adhesive bonding [48.7576911714538]
Finding the optimal process parameters for an adhesive bonding process is challenging.
Traditional evolutionary approaches (such as genetic algorithms) are then ill-suited to solve the problem.
In this research, we successfully applied specific machine learning techniques to emulate the objective and constraint functions.
arXiv Detail & Related papers (2021-12-16T10:14:39Z) - Application of an automated machine learning-genetic algorithm
(AutoML-GA) coupled with computational fluid dynamics simulations for rapid
engine design optimization [0.0]
The present work describes and validates an automated active learning approach, AutoML-GA, for surrogate-based optimization of internal combustion engines.
A genetic algorithm is employed to locate the design optimum on the machine learning surrogate surface.
It is demonstrated that AutoML-GA leads to a better optimum with a lower number of CFD simulations.
arXiv Detail & Related papers (2021-01-07T17:50:52Z) - Surrogate Assisted Evolutionary Algorithm for Medium Scale Expensive
Multi-Objective Optimisation Problems [4.338938227238059]
Building a surrogate model of an objective function has shown to be effective to assist evolutionary algorithms (EAs) to solve real-world complex optimisation problems.
We propose a Gaussian process surrogate model assisted EA for medium-scale expensive multi-objective optimisation problems with up to 50 decision variables.
The effectiveness of our proposed algorithm is validated on benchmark problems with 10, 20, 50 variables, comparing with three state-of-the-art SAEAs.
arXiv Detail & Related papers (2020-02-08T12:06:08Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems: compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with direct optimization methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
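Several of the related papers above share the surrogate-assisted pattern: rank candidates with a cheap learned model and spend expensive true evaluations only on the most promising ones. A minimal sketch of one such selection step, under stated assumptions (both objective functions below are illustrative stand-ins, not any paper's actual benchmark or model):

```python
import numpy as np

def expensive_objective(x):
    # Stand-in for a costly evaluation (e.g., a CFD or FEM simulation)
    return np.sum((x - 1.0) ** 2)

def cheap_surrogate(x):
    # Stand-in for a trained regression model approximating the objective;
    # the sine term mimics surrogate approximation error
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5 * x))

def surrogate_assisted_step(population, rng, top_k=3):
    """One generation: mutate, rank offspring with the surrogate,
    and spend true evaluations only on the most promising candidates."""
    offspring = population + rng.normal(0, 0.2, population.shape)
    scores = np.array([cheap_surrogate(c) for c in offspring])
    promising = offspring[np.argsort(scores)[:top_k]]       # surrogate pre-selection
    true_scores = np.array([expensive_objective(c) for c in promising])
    best = promising[np.argmin(true_scores)]
    return np.tile(best, (len(population), 1))              # seed next population

rng = np.random.default_rng(0)
pop = rng.uniform(-2, 2, (10, 2))
for _ in range(50):
    pop = surrogate_assisted_step(pop, rng)
```

Here each generation costs only `top_k` true evaluations instead of one per offspring, which is the economy that makes surrogate-assisted selection attractive when each evaluation is a full simulation.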
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.