Balancing Exploration and Exploitation for Solving Large-scale
Multiobjective Optimization via Attention Mechanism
- URL: http://arxiv.org/abs/2205.10052v1
- Date: Fri, 20 May 2022 09:45:49 GMT
- Title: Balancing Exploration and Exploitation for Solving Large-scale
Multiobjective Optimization via Attention Mechanism
- Authors: Haokai Hong, Min Jiang, Liang Feng, Qiuzhen Lin and Kay Chen Tan
- Abstract summary: We propose a large-scale multiobjective optimization algorithm based on the attention mechanism, called LMOAM.
The attention mechanism assigns a unique weight to each decision variable, and LMOAM uses this weight to strike a balance between exploration and exploitation at the decision-variable level.
- Score: 18.852491892952514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale multiobjective optimization problems (LSMOPs) refer to
optimization problems with multiple conflicting optimization objectives and
hundreds or even thousands of decision variables. A key point in solving LSMOPs
is how to balance exploration and exploitation so that the algorithm can search
in a huge decision space efficiently. Large-scale multiobjective evolutionary
algorithms consider the balance between exploration and exploitation from the
individual's perspective. However, these algorithms ignore the significance of
tackling this issue from the perspective of decision variables, which makes the
algorithm lack the ability to search from different dimensions and limits the
performance of the algorithm. In this paper, we propose a large-scale
multiobjective optimization algorithm based on the attention mechanism, called
LMOAM. The attention mechanism assigns a unique weight to each decision
variable, and LMOAM uses this weight to strike a balance between exploration
and exploitation at the decision-variable level. Experiments on nine different
sets of LSMOP benchmarks are conducted to verify the proposed algorithm, and
the experimental results validate the effectiveness of our design.
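The abstract does not include code. As a rough, hedged sketch only, the Python snippet below illustrates the general idea described above: each decision variable receives its own weight, and offspring generation then mixes an exploitative move (toward an elite solution) with an exploratory perturbation on a per-variable basis. All function names, the variance-based scoring, and the softmax normalization are assumptions made for illustration; they are not taken from LMOAM's actual attention design.

```python
import numpy as np

def attention_weights(population, fitness):
    """Illustrative per-variable attention: score each decision variable by how
    strongly it varies among the fitter half of the population, then softmax.
    (Assumed scoring scheme for illustration; not the one used in LMOAM.)"""
    elite = population[np.argsort(fitness)[: len(fitness) // 2]]
    scores = elite.std(axis=0)                      # one score per decision variable
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                          # weights sum to 1, one per variable

def generate_offspring(parent, elite, weights, lower, upper, rng):
    """Balance exploration and exploitation per decision variable:
    high-weight variables exploit (move toward an elite solution),
    low-weight variables explore (random perturbation)."""
    w = weights / weights.max()                     # rescale to [0, 1]
    exploit = parent + rng.uniform(0.0, 1.0, parent.shape) * (elite - parent)
    explore = parent + rng.normal(0.0, 0.1 * (upper - lower), parent.shape)
    child = w * exploit + (1.0 - w) * explore
    return np.clip(child, lower, upper)

# Minimal usage on a toy 1000-variable problem
rng = np.random.default_rng(0)
dim, pop_size = 1000, 20
lower, upper = np.zeros(dim), np.ones(dim)
pop = rng.uniform(lower, upper, (pop_size, dim))
fit = np.sum(pop ** 2, axis=1)                      # placeholder scalar fitness for the sketch
w = attention_weights(pop, fit)
child = generate_offspring(pop[0], pop[np.argmin(fit)], w, lower, upper, rng)
```

In a full multiobjective algorithm the scalar fitness placeholder would be replaced by nondominated sorting or an indicator-based ranking, and the per-variable weights would come from the learned attention mechanism rather than from population statistics.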
Related papers
- EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
arXiv Detail & Related papers (2024-10-08T17:54:03Z) - Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for general-sum Markov Games (MGs) under general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm achieves sublinear regret comparable to existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z) - A Review on Quantum Approximate Optimization Algorithm and its Variants [47.89542334125886]
The Quantum Approximate Optimization Algorithm (QAOA) is a highly promising variational quantum algorithm that aims to solve intractable optimization problems.
This comprehensive review offers an overview of the current state of QAOA, encompassing its performance analysis in diverse scenarios.
We conduct a comparative study of selected QAOA extensions and variants, while exploring future prospects and directions for the algorithm.
arXiv Detail & Related papers (2023-06-15T15:28:12Z) - Improving Performance Insensitivity of Large-scale Multiobjective
Optimization via Monte Carlo Tree Search [7.34812867861951]
We propose an evolutionary algorithm for solving large-scale multiobjective optimization problems based on Monte Carlo tree search.
The proposed method samples the decision variables to construct new nodes on the Monte Carlo tree for optimization and evaluation.
It selects nodes with good evaluation for further search to reduce the performance sensitivity caused by large-scale decision variables.
arXiv Detail & Related papers (2023-04-08T17:15:49Z) - Efficiently Tackling Million-Dimensional Multiobjective Problems: A Direction Sampling and Fine-Tuning Approach [21.20603338339053]
We define very large-scale multiobjective optimization problems (VLSMOPs) as optimization problems with multiple objectives and more than 100,000 decision variables.
We propose a novel approach called the very large-scale multiobjective optimization framework (VMOF).
arXiv Detail & Related papers (2023-04-08T16:51:27Z) - A Scale-Independent Multi-Objective Reinforcement Learning with
Convergence Analysis [0.6091702876917281]
Many sequential decision-making problems require optimizing several objectives that may conflict with each other.
We develop a single-agent scale-independent multi-objective reinforcement learning on the basis of the Advantage Actor-Critic (A2C) algorithm.
A convergence analysis is then done for the devised multi-objective algorithm providing a convergence-in-mean guarantee.
arXiv Detail & Related papers (2023-02-08T16:38:55Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - A survey on multi-objective hyperparameter optimization algorithms for
Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
arXiv Detail & Related papers (2021-11-23T10:22:30Z) - Solving Large-Scale Multi-Objective Optimization via Probabilistic
Prediction Model [10.916384208006157]
An efficient LSMOP algorithm should be able to escape local optima in the huge search space.
Maintaining the diversity of the population is one of the effective ways to improve search efficiency.
We propose a probabilistic prediction model based on a trend prediction model and a generating-filtering strategy, called LT-PPM, to tackle LSMOPs.
arXiv Detail & Related papers (2021-07-16T09:43:35Z) - Manifold Interpolation for Large-Scale Multi-Objective Optimization via
Generative Adversarial Networks [12.18471608552718]
Large-scale multiobjective optimization problems (LSMOPs) are characterized as involving hundreds or even thousands of decision variables and multiple conflicting objectives.
Previous research has shown that the Pareto-optimal solutions of LSMOPs are uniformly distributed on a manifold structure in a low-dimensional space.
In this work, a generative adversarial network (GAN)-based manifold framework is proposed to learn the manifold and generate high-quality solutions.
arXiv Detail & Related papers (2021-01-08T09:38:38Z)