A novel machine learning-based optimization algorithm (ActivO) for
accelerating simulation-driven engine design
- URL: http://arxiv.org/abs/2012.04649v2
- Date: Mon, 4 Jan 2021 22:02:05 GMT
- Title: A novel machine learning-based optimization algorithm (ActivO) for
accelerating simulation-driven engine design
- Authors: Opeoluwa Owoyele, Pinaki Pal
- Abstract summary: The proposed approach is a surrogate-based scheme, where the predictions of a weak learner and a strong learner are utilized within an active learning loop.
ActivO reduces the number of function evaluations needed to reach the global optimum, and thereby time-to-design by 80%.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A novel design optimization approach (ActivO) that employs an ensemble of
machine learning algorithms is presented. The proposed approach is a
surrogate-based scheme, where the predictions of a weak learner and a strong
learner are utilized within an active learning loop. The weak learner is used
to identify promising regions within the design space to explore, while the
strong learner is used to determine the exact location of the optimum within
promising regions. For each design iteration, exploration is done by randomly
selecting evaluation points within regions where the weak learner-predicted
fitness is high. The global optimum obtained by using the strong learner as a
surrogate is also evaluated to enable rapid convergence once the most promising
region has been identified. First, the performance of ActivO was compared
against five other optimizers on a cosine mixture function with 25 local optima
and one global optimum. In the second problem, the objective was to minimize
indicated specific fuel consumption of a compression-ignition internal
combustion (IC) engine while adhering to desired constraints associated with
in-cylinder pressure and emissions. Here, the efficacy of the proposed approach
is compared to that of a genetic algorithm, which is widely used within the
internal combustion engine community for engine optimization, showing that
ActivO reduces the number of function evaluations needed to reach the global
optimum, and thereby time-to-design by 80%. Furthermore, the optimization of
engine design parameters leads to savings of around 1.9% in energy consumption,
while maintaining operability and acceptable pollutant emissions.
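The active-learning loop described in the abstract (a weak learner flags promising regions to explore; a strong learner pinpoints the optimum within them) can be sketched roughly as follows. This is an illustrative sketch only: the specific learner choices (a random forest standing in as the weak learner, a Gaussian process as the strong learner), the exact cosine mixture form, and all hyperparameters are assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor

def cosine_mixture(x):
    # Standard cosine mixture test function (maximization form):
    # many local maxima, global maximum at the origin with value 0.1 * dim.
    return 0.1 * np.sum(np.cos(5 * np.pi * x), axis=-1) - np.sum(x**2, axis=-1)

rng = np.random.default_rng(0)
dim = 2
X = rng.uniform(-1, 1, size=(20, dim))  # initial design-of-experiments points
y = cosine_mixture(X)

for it in range(30):
    # Refit both surrogates on all evaluations so far.
    weak = RandomForestRegressor(n_estimators=10, max_depth=3,
                                 random_state=0).fit(X, y)
    strong = GaussianProcessRegressor().fit(X, y)

    # Exploration: random candidates kept only where the
    # weak-learner-predicted fitness is high (top decile here).
    cand = rng.uniform(-1, 1, size=(500, dim))
    pred = weak.predict(cand)
    promising = cand[pred >= np.quantile(pred, 0.9)]
    explore = promising[rng.choice(len(promising), size=2, replace=False)]

    # Exploitation: evaluate the strong learner's predicted optimum
    # so convergence is rapid once the best region is found.
    grid = rng.uniform(-1, 1, size=(2000, dim))
    exploit = grid[np.argmax(strong.predict(grid))][None, :]

    X_new = np.vstack([explore, exploit])
    X = np.vstack([X, X_new])
    y = np.concatenate([y, cosine_mixture(X_new)])

best = X[np.argmax(y)]  # incumbent design after the loop
```

The split mirrors the abstract's division of labor: the cheap, coarse weak learner only has to rank regions, while the smoother strong learner resolves the optimum's location inside them.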
Related papers
- Understanding Optimization in Deep Learning with Central Flows [53.66160508990508]
We show that RMSProp's implicit behavior can be explicitly captured by a "central flow": a differential equation.
We show that these flows can empirically predict long-term optimization trajectories of generic neural networks.
arXiv Detail & Related papers (2024-10-31T17:58:13Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO)
ZOPO incorporates a Neural Tangent Kernel-based derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Beyond Single-Model Views for Deep Learning: Optimization versus Generalizability of Stochastic Optimization Algorithms [13.134564730161983]
This paper adopts a novel approach to deep learning optimization, focusing on stochastic gradient descent (SGD) and its variants.
We show that SGD and its variants demonstrate performance on par with flat-minima optimizers like SAM, albeit with half the gradient evaluations.
Our study uncovers several key findings regarding the relationship between training loss and hold-out accuracy, as well as the comparable performance of SGD and noise-enabled variants.
arXiv Detail & Related papers (2024-03-01T14:55:22Z)
- Characterization of Locality in Spin States and Forced Moves for Optimizations [0.36868085124383626]
In optimization problems, the existence of local minima in energy landscapes makes it difficult to reach the global minimum.
We develop an algorithm that escapes local minima efficiently, although it does not yield exact sampling.
As the proposed algorithm is based on a rejection-free algorithm, the computational cost is low.
arXiv Detail & Related papers (2023-12-05T07:21:00Z)
- Finding the Optimum Design of Large Gas Engines Prechambers Using CFD and Bayesian Optimization [5.381050729919025]
The turbulent jet ignition concept using prechambers is a promising solution to achieve stable combustion at lean conditions in large gas engines.
Due to the wide range of design and operating parameters for large gas engine prechambers, the preferred method for evaluating different designs is computational fluid dynamics (CFD).
The present study deals with the computationally efficient Bayesian optimization of large gas engine prechambers design using CFD simulation.
arXiv Detail & Related papers (2023-08-03T13:07:46Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Application of Monte Carlo Stochastic Optimization (MOST) to Deep Learning [0.0]
In this paper, we apply the Monte Carlo stochastic optimization (MOST) method proposed by the authors to deep learning of an XOR gate.
As a result, we confirmed that it converges faster than the existing method.
arXiv Detail & Related papers (2021-09-02T05:52:26Z)