An Adaptive Dual-Domain Prediction Strategy based on Second-order Derivatives for Dynamic Multi-Objective Optimization
- URL: http://arxiv.org/abs/2410.05787v1
- Date: Tue, 8 Oct 2024 08:13:49 GMT
- Title: An Adaptive Dual-Domain Prediction Strategy based on Second-order Derivatives for Dynamic Multi-Objective Optimization
- Authors: Ru Lei, Lin Li, Rustam Stolkin, Bin Feng
- Abstract summary: This paper demonstrates new approaches to change prediction strategies within an evolutionary algorithm paradigm.
We propose a novel adaptive prediction strategy, which utilizes the concept of second-order derivatives adaptively in different domains.
We compare the performance of the proposed algorithm against four other state-of-the-art algorithms from the literature, using DMOPs benchmark problems.
- Score: 7.272641346606365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses dynamic multi-objective optimization problems (DMOPs) by demonstrating new approaches to change-prediction strategies within an evolutionary algorithm paradigm. Because the objectives of such problems change over time, the Pareto optimal set (PS) and Pareto optimal front (PF) are also dynamic. To accurately track the changing PS and PF in the decision and objective spaces, we propose a novel adaptive prediction strategy, which utilizes the concept of second-order derivatives adaptively in different domains. Firstly, we propose a dual-domain method, which takes into account changes in both the PS and the PF simultaneously; an adaptive strategy is adopted to self-adjust the proportion of the search space. Secondly, a second-order derivative prediction strategy is proposed to predictively re-initialize the population. We compare the performance of the proposed algorithm against four other state-of-the-art algorithms from the literature on DMOP benchmark problems. Experimental results show that the proposed method outperforms the other algorithms on most of the test problems.
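As a rough illustration of the second-order idea described in the abstract, a prediction step can extrapolate population movement using discrete first- and second-order differences of historical centroids. The following is a minimal sketch under assumed conventions (centroid-based motion, a Gaussian diversity term, and all function names are illustrative, not the paper's exact operator):

```python
import numpy as np

def second_order_predict(pop_t2, pop_t1, pop_t0, noise_scale=0.01, rng=None):
    """Predictively re-initialize a population from three historical
    populations using discrete first- and second-order differences.

    pop_t2, pop_t1, pop_t0: (N, d) arrays of decision vectors at times
    t-2, t-1, and t (hypothetical interface).
    """
    rng = rng or np.random.default_rng()
    # Centroid of the population at each of the last three environments.
    c2, c1, c0 = (p.mean(axis=0) for p in (pop_t2, pop_t1, pop_t0))
    v_prev = c1 - c2             # first-order difference at t-1
    v_curr = c0 - c1             # first-order difference at t
    accel = v_curr - v_prev      # second-order difference (curvature of the move)
    step = v_curr + 0.5 * accel  # Taylor-like second-order extrapolation
    # Shift every individual by the predicted step, plus a small Gaussian
    # perturbation to preserve diversity after the environment change.
    return pop_t0 + step + noise_scale * rng.standard_normal(pop_t0.shape)
```

If the centroid moved by 1 unit and then by 2 units per dimension, the predicted step is 2.5 units, reflecting the accelerating trend rather than a constant-velocity (first-order) guess.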
Related papers
- Deep Reinforcement Learning for Online Optimal Execution Strategies [49.1574468325115]
This paper tackles the challenge of learning non-Markovian optimal execution strategies in dynamic financial markets.
We introduce a novel actor-critic algorithm based on Deep Deterministic Policy Gradient (DDPG)
We show that our algorithm successfully approximates the optimal execution strategy.
arXiv Detail & Related papers (2024-10-17T12:38:08Z) - Analyzing and Overcoming Local Optima in Complex Multi-Objective Optimization by Decomposition-Based Evolutionary Algorithms [5.153202024713228]
Decomposition-based Multi-objective Evolutionary Algorithms (MOEA/Ds) often converge to local optima, limiting solution diversity.
We introduce an innovative RP selection strategy, the Vector-Guided Weight-Hybrid method, designed to overcome the local optima issue.
Our research comprises two main experimental components: an ablation study involving 14 algorithms within the MOEA/D framework from 2014 to 2022 to validate our theoretical framework, and a series of empirical tests to evaluate the effectiveness of our proposed method against both traditional and cutting-edge alternatives.
arXiv Detail & Related papers (2024-04-12T14:29:45Z) - Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective to investigate the design of large language models (LLMs)-based prompts.
We identify two pivotal factors in model parameter learning: update direction and update method.
In particular, we borrow the theoretical framework and learning methods from gradient-based optimization to design improved strategies.
arXiv Detail & Related papers (2024-02-27T15:05:32Z) - Combining Kernelized Autoencoding and Centroid Prediction for Dynamic Multi-objective Optimization [3.431120541553662]
This paper proposes a unified paradigm, which combines kernelized autoencoding evolutionary search with centroid-based prediction.
The proposed method is compared with five state-of-the-art algorithms on a number of complex benchmark problems.
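For context, the centroid-based prediction component mentioned above is commonly realized as a first-order translation: shift the current population by the movement of its centroid between two environment changes. A minimal sketch (the function name and interface are assumptions; the kernelized autoencoding component is not shown):

```python
import numpy as np

def centroid_predict(pop_prev, pop_curr):
    """Centroid-based prediction: translate the current population by the
    displacement of the population centroid between two consecutive
    environment changes (a generic first-order sketch)."""
    shift = pop_curr.mean(axis=0) - pop_prev.mean(axis=0)
    return pop_curr + shift
```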
arXiv Detail & Related papers (2023-12-02T00:24:22Z) - Bidirectional Looking with A Novel Double Exponential Moving Average to Adaptive and Non-adaptive Momentum Optimizers [109.52244418498974]
We propose a novel Admeta (a Double exponential Moving average Adaptive and non-adaptive momentum) framework.
We provide two implementations, AdmetaR and AdmetaS, the former based on RAdam and the latter based on SGDM.
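The double-exponential-moving-average idea can be sketched generically: an inner EMA accumulates gradients (SGDM-like momentum) and an outer EMA smooths that momentum before the parameter step. This is an illustrative sketch only; the hyperparameters, the exact update rule, and the function name are assumptions and do not reproduce Admeta's actual definition:

```python
import numpy as np

def dema_momentum_step(param, grad, state, lr=0.02, beta1=0.9, beta2=0.9):
    """One update of a double-exponential-moving-average momentum rule
    (generic sketch of the double-EMA idea, not the paper's optimizer)."""
    m = state.get("m", np.zeros_like(param))
    s = state.get("s", np.zeros_like(param))
    m = beta1 * m + (1.0 - beta1) * grad  # inner EMA of gradients (momentum)
    s = beta2 * s + (1.0 - beta2) * m     # outer EMA smooths the momentum itself
    state["m"], state["s"] = m, s
    return param - lr * s

# Minimising f(x) = x**2 (gradient 2x): x drifts toward the minimiser 0.
x, state = np.array([2.0]), {}
for _ in range(1500):
    x = dema_momentum_step(x, 2.0 * x, state)
```

The second smoothing stage trades responsiveness for stability, which is one way to read the "bidirectional looking" framing in the title.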
arXiv Detail & Related papers (2023-07-02T18:16:06Z) - Acceleration in Policy Optimization [50.323182853069184]
We work towards a unifying paradigm for accelerating policy optimization methods in reinforcement learning (RL) by integrating foresight in the policy improvement step via optimistic and adaptive updates.
We define optimism as predictive modelling of the future behavior of a policy, and adaptivity as taking immediate and anticipatory corrective actions to mitigate errors from overshooting predictions or delayed responses to change.
We design an optimistic policy gradient algorithm, adaptive via meta-gradient learning, and empirically highlight several design choices pertaining to acceleration, in an illustrative task.
arXiv Detail & Related papers (2023-06-18T15:50:57Z) - A novel multiobjective evolutionary algorithm based on decomposition and multi-reference points strategy [14.102326122777475]
Multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been regarded as a significantly promising approach for solving multiobjective optimization problems (MOPs).
We propose an improved MOEA/D algorithm by virtue of the well-known Pascoletti-Serafini scalarization method and a new strategy of multi-reference points.
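The Pascoletti-Serafini scalarization used here has a simple unconstrained form: for a reference point a and a positive direction r, minimising max_i (f_i(x) - a_i) / r_i over x is equivalent to the constrained problem min t subject to a + t*r - f(x) >= 0. A minimal sketch (the function name and how a and r are attached to subproblems are assumptions):

```python
import numpy as np

def pascoletti_serafini(f_x, a, r):
    """Pascoletti-Serafini scalarization value of an objective vector f(x)
    for reference point a and positive direction r. Smaller is better."""
    f_x, a, r = map(np.asarray, (f_x, a, r))
    return np.max((f_x - a) / r)

# Ranking two candidate objective vectors for one reference point/direction:
a = np.array([0.0, 0.0])
r = np.array([1.0, 1.0])
pascoletti_serafini([0.2, 0.8], a, r)  # -> 0.8
pascoletti_serafini([0.5, 0.5], a, r)  # -> 0.5 (preferred: smaller value)
```

Varying a and r across subproblems lets a decomposition-based algorithm steer search toward different parts of the Pareto front.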
arXiv Detail & Related papers (2021-10-27T02:07:08Z) - Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the recently proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z) - Deep Reinforcement Learning for Field Development Optimization [0.0]
In this work, the goal is to apply convolutional neural network-based (CNN) deep reinforcement learning (DRL) algorithms to the field development optimization problem.
The proximal policy optimization (PPO) algorithm is considered with two CNN architectures of varying number of layers and composition.
Both networks obtained policies that provide satisfactory results when compared to a hybrid particle swarm optimization - mesh adaptive direct search (PSO-MADS) algorithm.
arXiv Detail & Related papers (2020-08-05T06:26:13Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives simultaneously while outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.