Boosting the Efficiency of Metaheuristics Through Opposition-Based Learning in Optimum Locating of Control Systems in Tall Buildings
- URL: http://arxiv.org/abs/2411.05864v1
- Date: Thu, 07 Nov 2024 13:05:40 GMT
- Title: Boosting the Efficiency of Metaheuristics Through Opposition-Based Learning in Optimum Locating of Control Systems in Tall Buildings
- Authors: Salar Farahmand-Tabar, Sina Shirgir
- Abstract summary: Opposition-based learning is an effective approach to improve the performance of metaheuristic algorithms.
Case studies on the application of opposition strategies in engineering problems are provided.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Opposition-based learning (OBL) is an effective approach to improve the performance of metaheuristic optimization algorithms, which are commonly used for solving complex engineering problems. This chapter provides a comprehensive review of the literature on the use of opposition strategies in metaheuristic optimization algorithms, discussing the benefits and limitations of this approach. An overview of the opposition strategy concept, its various implementations, and its impact on the performance of metaheuristic algorithms is presented. Furthermore, case studies on the application of opposition strategies in engineering problems are provided, including the optimum locating of control systems in tall buildings. A shear frame with a Magnetorheological (MR) fluid damper is considered as a case study. The results demonstrate that the incorporation of opposition strategies in metaheuristic algorithms significantly enhances the quality and speed of the optimization process. This chapter aims to provide a clear understanding of the opposition strategy in metaheuristic optimization algorithms and its engineering applications, with the ultimate goal of facilitating its adoption in real-world engineering problems.
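The core idea of OBL is simple: for a candidate x in the bounded interval [a, b], its opposite is x_opp = a + b - x, and an opposition-enhanced metaheuristic evaluates both a random population and its opposite, keeping the fitter points. A minimal sketch of opposition-based initialization, using NumPy and a standard sphere test objective (function and parameter names here are illustrative, not from the paper):

```python
import numpy as np

def sphere(x):
    # Simple test objective: f(x) = sum(x_i^2), minimized at the origin.
    return np.sum(x ** 2, axis=-1)

def opposition_init(n_pop, lo, hi, objective, rng):
    """Opposition-based initialization: draw a random population,
    form its opposite (x_opp = lo + hi - x), and keep the fitter half."""
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(n_pop, dim))
    opp = lo + hi - pop                       # opposite points
    merged = np.vstack([pop, opp])            # 2 * n_pop candidates
    fitness = objective(merged)
    best = np.argsort(fitness)[:n_pop]        # keep the n_pop fittest
    return merged[best]

rng = np.random.default_rng(0)
lo, hi = np.full(5, -10.0), np.full(5, 10.0)
pop = opposition_init(20, lo, hi, sphere, rng)
```

The same opposition step can also be applied periodically during the search (generation jumping), not only at initialization.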
Related papers
- Advancing CMA-ES with Learning-Based Cooperative Coevolution for Scalable Optimization [12.899626317088885]
This paper introduces LCC, a pioneering learning-based cooperative coevolution framework.
LCC dynamically schedules decomposition strategies during optimization processes.
It offers certain advantages over state-of-the-art baselines in terms of optimization effectiveness and resource consumption.
arXiv Detail & Related papers (2025-04-24T14:09:22Z) - Learning Strategies in Particle Swarm Optimizer: A Critical Review and Performance Analysis [0.6437284704257459]
Particle swarm optimization (PSO) is widely adopted among SI algorithms due to its simplicity and efficiency.
We review and classify various learning strategies to address this gap, assessing their impact on optimization performance.
We discuss open challenges and future directions, emphasizing the need for self-adaptive, intelligent PSO variants.
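For context, the learning strategies surveyed above all build on the canonical global-best PSO update, in which each particle is pulled toward its personal best and the swarm best. A compact sketch (parameter values are conventional defaults, not taken from the paper):

```python
import numpy as np

def pso_minimize(objective, lo, hi, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pfit = x.copy(), objective(x)          # personal bests
    gbest = pbest[np.argmin(pfit)]                # swarm best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                # keep particles in bounds
        fit = objective(x)
        improved = fit < pfit
        pbest[improved], pfit[improved] = x[improved], fit[improved]
        gbest = pbest[np.argmin(pfit)]
    return gbest, pfit.min()

lo, hi = np.full(3, -5.0), np.full(3, 5.0)
best, best_f = pso_minimize(lambda x: np.sum(x ** 2, axis=-1), lo, hi)
```

The learning-strategy variants discussed in the survey modify how `pbest`/`gbest` exemplars are chosen or how `w`, `c1`, and `c2` adapt over the run.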
arXiv Detail & Related papers (2025-04-16T06:50:02Z) - A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation.
However, deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency.
This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z) - Deep Reinforcement Learning for Online Optimal Execution Strategies [49.1574468325115]
This paper tackles the challenge of learning non-Markovian optimal execution strategies in dynamic financial markets.
We introduce a novel actor-critic algorithm based on Deep Deterministic Policy Gradient (DDPG).
We show that our algorithm successfully approximates the optimal execution strategy.
arXiv Detail & Related papers (2024-10-17T12:38:08Z) - An Efficient Learning-based Solver Comparable to Metaheuristics for the Capacitated Arc Routing Problem [67.92544792239086]
We introduce an NN-based solver to significantly narrow the gap with advanced metaheuristics.
First, we propose direction-aware facilitating attention model (DaAM) to incorporate directionality into the embedding process.
Second, we design a supervised reinforcement learning scheme that involves supervised pre-training to establish a robust initial policy.
arXiv Detail & Related papers (2024-03-11T02:17:42Z) - Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution [27.607740475924448]
We propose a deep reinforcement learning-based dynamic algorithm selection framework to accomplish this task.
We employ a sophisticated deep neural network model to infer the optimal action, ensuring informed algorithm selections.
As a proof-of-principle study, we apply this framework to a group of Differential Evolution algorithms.
arXiv Detail & Related papers (2024-03-04T15:40:28Z) - Explainable Benchmarking for Iterative Optimization Heuristics [0.8192907805418583]
We introduce the IOH-Xplainer software framework, for analyzing and understanding the performance of various optimization algorithms.
We examine the impact of different algorithmic components and configurations, offering insights into their performance across diverse scenarios.
arXiv Detail & Related papers (2024-01-31T14:02:26Z) - An Invariant Information Geometric Method for High-Dimensional Online Optimization [9.538618632613714]
We introduce SynCMA, a fully invariance-oriented evolution strategies algorithm derived from the corresponding framework.
We benchmark SynCMA against leading algorithms in Bayesian optimization and evolution strategies.
In all scenarios, SynCMA demonstrates great competence, if not dominance, over other algorithms in sample efficiency.
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Efficient Inverse Design Optimization through Multi-fidelity Simulations, Machine Learning, and Search Space Reduction Strategies [0.8646443773218541]
This paper introduces a methodology designed to augment the inverse design optimization process in scenarios constrained by limited compute.
The proposed methodology is analyzed on two distinct engineering inverse design problems: airfoil inverse design and the scalar field reconstruction problem.
Notably, this method is adaptable to any inverse design application, facilitating a synergy between a representative low-fidelity ML model and high-fidelity simulation, and can be seamlessly applied with any population-based optimization algorithm.
arXiv Detail & Related papers (2023-12-06T18:20:46Z) - DADO -- Low-Cost Query Strategies for Deep Active Design Optimization [1.6298921134113031]
We present two selection strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems.
We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance.
arXiv Detail & Related papers (2023-07-10T13:01:27Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Socio-cognitive Optimization of Time-delay Control Problems using Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are general-purpose optimization algorithms intended for difficult problems that classical approaches cannot solve.
In this paper we construct a novel socio-cognitive metaheuristic based on castes and apply several versions of this algorithm to the optimization of a time-delay system model.
arXiv Detail & Related papers (2022-10-23T22:21:10Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.