Accelerating genetic optimization of nonlinear model predictive control by learning optimal search space size
- URL: http://arxiv.org/abs/2305.08094v2
- Date: Mon, 13 Jan 2025 14:53:11 GMT
- Title: Accelerating genetic optimization of nonlinear model predictive control by learning optimal search space size
- Authors: Eslam Mostafa, Hussein A. Aly, Ahmed Elliethy
- Abstract summary: A genetic algorithm (GA) is typically used to solve the optimization problem of nonlinear model predictive control (NMPC).
This paper proposes accelerating the genetic optimization of NMPC by learning optimal search space size.
The proposed approach reduces the GA's computational time, improves the chance of convergence to better control inputs, and provides a stable and feasible solution.
- Score: 0.40964539027092917
- Abstract: A genetic algorithm (GA) is typically used to solve the optimization problem of nonlinear model predictive control (NMPC). However, the size of the search space in which the GA searches for the optimal control inputs is crucial for its applicability to fast-response systems. This paper proposes accelerating the genetic optimization of NMPC by learning the optimal search space size. The approach trains a multivariate regression model to adaptively predict the smallest sufficient size of the search space in every control cycle. The proposed approach reduces the GA's computational time, improves the chance of convergence to better control inputs, and provides a stable and feasible solution. The proposed approach was evaluated on three nonlinear systems and compared to four other evolutionary algorithms implemented in a processor-in-the-loop fashion. The results show that the proposed approach provides a 17-45% reduction in computational time and increases the convergence rate by 35-47%. The source code is available on GitHub.
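To make the idea concrete, here is a minimal sketch of a GA-based NMPC step whose search space is centered on the previous control input and sized by a learned regressor. This is not the authors' implementation: the toy dynamics, the cost function, `predict_space_size` (with hard-coded weights), and all GA parameters are assumptions for illustration only.

```python
import numpy as np

HORIZON = 5            # prediction horizon (assumed)
POP, GENS = 30, 20     # GA population size and generation count (assumed)
rng = np.random.default_rng(42)

def dynamics(x, u):
    """Toy nonlinear plant standing in for the real system model."""
    return x + 0.1 * (np.sin(x) + u)

def cost(x0, u_seq, ref=1.0):
    """Reference-tracking cost accumulated over the horizon."""
    x, c = x0, 0.0
    for u in u_seq:
        x = dynamics(x, u)
        c += (x - ref) ** 2 + 0.01 * u ** 2
    return c

def predict_space_size(features, w=np.array([0.3, 0.2]), b=0.05):
    """Hypothetical pretrained multivariate regressor: maps cycle
    features to the smallest sufficient search-space half-width."""
    return float(b + features @ w)

def ga_nmpc_step(x0, u_prev):
    # Center the search space on the previous input; size it with the regressor.
    delta = predict_space_size(np.array([abs(x0), abs(u_prev)]))
    lo, hi = u_prev - delta, u_prev + delta
    pop = rng.uniform(lo, hi, size=(POP, HORIZON))
    for _ in range(GENS):
        fit = np.array([cost(x0, ind) for ind in pop])
        elite = pop[np.argsort(fit)[: POP // 2]]                    # selection
        parents = elite[rng.integers(len(elite), size=(POP - len(elite), 2))]
        children = parents.mean(axis=1)                             # crossover
        children += rng.normal(0.0, 0.05 * delta, children.shape)  # mutation
        pop = np.clip(np.vstack([elite, children]), lo, hi)
    best = pop[np.argmin([cost(x0, ind) for ind in pop])]
    return best[0]   # receding horizon: apply only the first input

# Closed-loop simulation for a few control cycles.
x, u = 0.0, 0.0
for _ in range(10):
    u = ga_nmpc_step(x, u)
    x = dynamics(x, u)
print(f"final state: {x:.3f}")
```

The design point the sketch highlights is that `delta` shrinks the interval the GA must cover, so fewer generations suffice per cycle; the regressor that predicts it per control cycle is what the paper proposes to learn offline.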
Related papers
- Scalable Bayesian Optimization via Focalized Sparse Gaussian Processes [8.40647440727154]
We argue that Bayesian optimization algorithms with sparse GPs can more efficiently allocate their representational power to relevant regions of the search space.
We show that FocalBO can efficiently leverage large amounts of offline and online data to achieve state-of-the-art performance on robot morphology design and to control a 585-dimensional musculoskeletal system.
arXiv Detail & Related papers (2024-12-29T06:36:15Z) - Frog-Snake prey-predation Relationship Optimization (FSRO) : A novel nature-inspired metaheuristic algorithm for feature selection [0.0]
This study proposes the Frog-Snake prey-predation Relationship Optimization (FSRO) algorithm.
It is inspired by the prey-predation relationship between frogs and snakes for application to discrete optimization problems.
The study conducts computational experiments on feature selection using 26 machine learning datasets.
arXiv Detail & Related papers (2024-02-13T06:39:15Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Genetically Modified Wolf Optimization with Stochastic Gradient Descent
for Optimising Deep Neural Networks [0.0]
This research aims to analyze an alternative approach to optimizing neural network (NN) weights, with the use of population-based metaheuristic algorithms.
A hybrid between the Grey Wolf Optimizer (GWO) and Genetic Algorithms (GA) is explored, in conjunction with Stochastic Gradient Descent (SGD).
This algorithm allows for a combination between exploitation and exploration, whilst also tackling the issue of high-dimensionality.
arXiv Detail & Related papers (2023-01-21T13:22:09Z) - High-dimensional Bayesian Optimization Algorithm with Recurrent Neural
Network for Disease Control Models in Time Series [1.9371782627708491]
We propose a new high-dimensional Bayesian optimization algorithm that incorporates a recurrent neural network.
The proposed RNN-BO algorithm can solve optimal control problems in a lower-dimensional space.
We also discuss the impacts of different numbers of the RNN layers and training epochs on the trade-off between solution quality and related computational efforts.
arXiv Detail & Related papers (2022-01-01T08:40:17Z) - High dimensional Bayesian Optimization Algorithm for Complex System in
Time Series [1.9371782627708491]
This paper presents a novel high dimensional Bayesian optimization algorithm.
Based on the time-dependent or dimension-dependent characteristics of the model, the proposed algorithm can reduce the dimension evenly.
To increase the final accuracy of the optimal solution, the proposed algorithm adds a local search based on a series of Adam-based steps at the final stage.
arXiv Detail & Related papers (2021-08-04T21:21:17Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation for Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best available convergence rate for non-PL objectives, while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2-5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments (see the sketch after this list).
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
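The DNN-in-the-loop pattern described in that last entry can also be sketched compactly. The following is purely illustrative and not the paper's code: `fem_evaluate` is a cheap stand-in for a finite element solve, and the tiny network, its training loop, and the random candidate search are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fem_evaluate(design):
    """Stand-in for an expensive FEM call (toy smooth objective)."""
    return float(np.sum((design - 0.3) ** 2) + 0.1 * np.sin(design).sum())

def fit_surrogate(X, y, hidden=16, epochs=500, lr=1e-2):
    """Fit a one-hidden-layer network to the designs evaluated so far."""
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):                  # plain batch gradient descent
        h = np.tanh(X @ W1 + b1)
        err = (h @ W2 + b2).ravel() - y
        gW2 = h.T @ err[:, None] / len(y)
        gb2 = np.array([err.mean()])
        gh = (err[:, None] @ W2.T) * (1.0 - h ** 2)
        gW1 = X.T @ gh / len(y)
        gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda x: (np.tanh(x @ W1 + b1) @ W2 + b2).item()

dim = 4
X = rng.uniform(0.0, 1.0, (8, dim))           # initial evaluated designs
y = np.array([fem_evaluate(x) for x in X])
for _ in range(10):                           # self-directed online loop
    surrogate = fit_surrogate(X, y)
    cand = rng.uniform(0.0, 1.0, (256, dim))  # cheap search on the surrogate
    best = cand[np.argmin([surrogate(c) for c in cand])]
    X = np.vstack([X, best])                  # query the expensive solver
    y = np.append(y, fem_evaluate(best))      # only at the winning candidate
print(f"best objective found: {y.min():.4f}")
```

The savings come from spending almost all evaluations on the cheap surrogate and calling the expensive solver only once per loop iteration, which is the mechanism behind the orders-of-magnitude speedups the entry reports.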