Multi-Level Evolution Strategies for High-Resolution Black-Box Control
- URL: http://arxiv.org/abs/2010.01524v1
- Date: Sun, 4 Oct 2020 09:24:40 GMT
- Title: Multi-Level Evolution Strategies for High-Resolution Black-Box Control
- Authors: Ofer M. Shir and Xi Xing and Herschel Rabitz
- Abstract summary: This paper introduces a multi-level (m-lev) mechanism into Evolution Strategies (ESs).
It addresses a class of global optimization problems that could benefit from fine discretization of their decision variables.
- Score: 0.2320417845168326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a multi-level (m-lev) mechanism into Evolution
Strategies (ESs) in order to address a class of global optimization problems
that could benefit from fine discretization of their decision variables. Such
problems arise in engineering and scientific applications, which possess a
multi-resolution control nature, and thus may be formulated either by means of
low-resolution variants (providing coarser approximations with presumably lower
accuracy for the general problem) or by high-resolution controls. A particular
scientific application concerns practical Quantum Control (QC) problems, whose
targeted optimal controls may be discretized to increasingly higher resolution,
which in turn carries the potential to obtain better control yields. However,
state-of-the-art derivative-free optimization heuristics for high-resolution
formulations nominally call for an impractically large number of objective
function calls. Therefore, an effective algorithmic treatment for such problems
is needed. We introduce a framework with an automated scheme to facilitate
guided-search over increasingly finer levels of control resolution for the
optimization problem, whose on-the-fly learned parameters require careful
adaptation. We instantiate the proposed m-lev self-adaptive ES framework by two
specific strategies, namely the classical elitist single-child (1+1)-ES and the
non-elitist multi-child derandomized $(\mu_W,\lambda)$-sep-CMA-ES. We first
show that the approach is suitable by simulation-based optimization of QC
systems which were heretofore viewed as too complex to address. We also present
a laboratory proof-of-concept for the proposed approach on a basic experimental
QC system objective.
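To make the multi-level mechanism concrete, the following is a minimal sketch of a resolution-refining (1+1)-ES in Python. It assumes a simple doubling-of-resolution schedule, linear interpolation for transferring the control between levels, 1/5th-success-rule step-size adaptation, and a toy objective standing in for a real QC yield; the paper's actual self-adaptive scheme and level-switching criteria are more elaborate.

```python
import numpy as np

def one_plus_one_es(f, x, sigma, n_evals):
    """Classical (1+1)-ES with 1/5th-success-rule step-size adaptation (minimization)."""
    fx = f(x)
    for _ in range(n_evals):
        y = x + sigma * np.random.randn(x.size)
        fy = f(y)
        if fy <= fx:                      # accept improvements (elitist)
            x, fx = y, fy
            sigma *= 1.5                  # expand step size on success
        else:
            sigma *= 1.5 ** (-0.25)       # shrink step size on failure
    return x, fx, sigma

def m_lev_es(f, n_coarse=8, n_levels=4, evals_per_level=2000):
    """Optimize a discretized control at increasingly finer resolution levels."""
    x = np.zeros(n_coarse)                # coarse initial control
    sigma = 0.3
    for level in range(n_levels):
        x, fx, sigma = one_plus_one_es(f, x, sigma, evals_per_level)
        if level < n_levels - 1:
            # Level transfer: interpolate the control onto a grid twice as fine,
            # carrying over the learned step size instead of restarting it.
            t_old = np.linspace(0.0, 1.0, x.size)
            t_new = np.linspace(0.0, 1.0, 2 * x.size)
            x = np.interp(t_new, t_old, x)
    return x, fx

if __name__ == "__main__":
    # Toy stand-in for a control yield: distance of the control to a smooth target pulse.
    def objective(u):
        t = np.linspace(0.0, 1.0, u.size)
        return float(np.mean((u - np.sin(2.0 * np.pi * t)) ** 2))

    u_opt, f_opt = m_lev_es(objective)
    print(f"final resolution: {u_opt.size}, objective: {f_opt:.5f}")
```

The design point the sketch tries to capture is that information learned at a coarse level (the control shape and the step size) is carried over to the finer level rather than discarded, which is what keeps the number of objective-function calls manageable.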
Related papers
- Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for general-sum Markov Games (MGs) under general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm achieves sublinear regret comparable to that of existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the distributed federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Pontryagin Optimal Control via Neural Networks [19.546571122359534]
We integrate Neural Networks with Pontryagin's Maximum Principle (PMP) and propose a sample-efficient framework, NN-PMP-Gradient.
The resulting controller can be implemented for systems with unknown and complex dynamics.
Compared with the widely applied model-free and model-based reinforcement learning (RL) algorithms, our NN-PMP-Gradient achieves higher sample-efficiency and performance in terms of control objectives.
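As a rough illustration of what a PMP-based gradient on the controls looks like, the sketch below fits a surrogate dynamics model from sampled transitions and then improves an open-loop control sequence via the discrete-time costate (adjoint) recursion. A linear least-squares model is substituted for the paper's neural-network dynamics, and the quadratic cost, horizon, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, T = 2, 1, 30

# "Unknown" true dynamics, used here only to generate transition data.
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])
B_true = np.array([[0.0], [0.1]])

# 1) Fit a surrogate model x' ~= A x + B u from random transitions
#    (a linear least-squares fit stands in for the paper's neural network).
X = rng.normal(size=(500, n_x))
U = rng.normal(size=(500, n_u))
Y = X @ A_true.T + U @ B_true.T
theta, *_ = np.linalg.lstsq(np.hstack([X, U]), Y, rcond=None)
A_hat, B_hat = theta[:n_x].T, theta[n_x:].T

# 2) Gradient descent on the open-loop controls for cost 0.5 * sum(x'Qx + u'Ru),
#    with the gradient obtained from the discrete-time costate recursion.
Q, R = np.eye(n_x), 0.1 * np.eye(n_u)
x0 = np.array([1.0, 0.0])
u_seq = np.zeros((T, n_u))
for _ in range(500):
    xs = [x0]                                        # forward rollout (surrogate model)
    for t in range(T):
        xs.append(A_hat @ xs[-1] + B_hat @ u_seq[t])
    lam = Q @ xs[T]                                  # terminal costate lam_T = Q x_T
    grad = np.zeros_like(u_seq)
    for t in reversed(range(T)):
        grad[t] = R @ u_seq[t] + B_hat.T @ lam       # dJ/du_t = R u_t + B' lam_{t+1}
        lam = Q @ xs[t] + A_hat.T @ lam              # lam_t = Q x_t + A' lam_{t+1}
    u_seq -= 0.01 * grad

# Evaluate the optimized controls with one more surrogate rollout.
xs = [x0]
for t in range(T):
    xs.append(A_hat @ xs[-1] + B_hat @ u_seq[t])
cost = 0.5 * sum(x @ Q @ x for x in xs) + 0.5 * sum(u @ R @ u for u in u_seq)
print(f"surrogate-predicted cost after optimization: {float(cost):.4f}")
```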
arXiv Detail & Related papers (2022-12-30T06:47:03Z) - Multi-surrogate Assisted Efficient Global Optimization for Discrete Problems [0.9127162004615265]
This paper investigates the possible benefit of concurrently utilizing multiple simulation-based surrogate models to solve discrete problems.
Our findings indicate that SAMA-DiEGO can rapidly converge to better solutions on a majority of the test problems.
arXiv Detail & Related papers (2022-12-13T09:10:08Z) - Learning Adaptive Evolutionary Computation for Solving Multi-Objective Optimization Problems [3.3266268089678257]
This paper proposes a framework that integrates multi-objective evolutionary algorithms (MOEAs) with adaptive parameter control using Deep Reinforcement Learning (DRL).
The DRL policy is trained to adaptively set the values that dictate the intensity and probability of mutation for solutions during optimization.
We show the learned policy is transferable, i.e., the policy trained on a simple benchmark problem can be directly applied to solve the complex warehouse optimization problem.
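A heavily simplified sketch of the interface implied by such a framework is shown below: an evolutionary loop queries a policy for the mutation probability and strength based on simple search-state features. A hand-coded rule stands in for the trained DRL policy and a scalar objective stands in for the multi-objective problem; both substitutions are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def policy(stagnation, progress):
    """Placeholder for the trained DRL policy: maps search-state features to
    (mutation probability, mutation strength). Hand-coded rule for illustration."""
    p_mut = min(1.0, 0.1 + 0.05 * stagnation)     # mutate more aggressively when stuck
    strength = 0.5 if progress < 1e-3 else 0.1
    return p_mut, strength

def sphere(x):
    return float(np.sum(x ** 2))

dim, pop_size, generations = 10, 20, 100
pop = rng.normal(size=(pop_size, dim))
best, prev_best, stagnation = np.inf, np.inf, 0

for gen in range(generations):
    fitness = np.array([sphere(x) for x in pop])
    best = fitness.min()
    progress = prev_best - best
    stagnation = stagnation + 1 if progress <= 0 else 0
    prev_best = best

    # The policy adaptively sets the mutation parameters for this generation.
    p_mut, strength = policy(stagnation, progress)

    # Truncation selection + mutation (stand-in for the MOEA's variation operators).
    parents = pop[np.argsort(fitness)[: pop_size // 2]]
    children = parents[rng.integers(0, len(parents), size=pop_size)]
    mask = rng.random(size=children.shape) < p_mut
    pop = children + mask * strength * rng.normal(size=children.shape)

print(f"best objective after {generations} generations: {best:.4f}")
```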
arXiv Detail & Related papers (2022-11-01T22:08:34Z) - Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy-gradient algorithm for MDPs with topological constraints (TMDPs), obtained as a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z) - Multi-Agent Deep Reinforcement Learning in Vehicular OCC [14.685237010856953]
We introduce a spectral efficiency optimization approach for vehicular optical camera communication (OCC).
We model the optimization problem as a Markov decision process (MDP) to enable the use of solutions that can be applied online.
We verify the performance of our proposed scheme through extensive simulations and compare it with various variants of our approach and a random method.
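For readers unfamiliar with the formulation step, the sketch below shows a generic finite MDP together with tabular Q-learning, one standard way such an MDP can be solved online. The toy states, actions, rewards, and transition kernel are illustrative assumptions and not the paper's vehicular OCC model.

```python
import numpy as np

rng = np.random.default_rng(2)

# A generic finite MDP: states, actions, transition kernel, and reward table.
n_states, n_actions = 5, 3
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] is a distribution over next states
R = rng.random(size=(n_states, n_actions))                        # illustrative rewards

# Tabular Q-learning: an online method that updates after each observed transition.
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
s = 0
for step in range(20000):
    a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
    s_next = rng.choice(n_states, p=P[s, a])
    Q[s, a] += alpha * (R[s, a] + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next

print("greedy policy:", np.argmax(Q, axis=1))
```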
arXiv Detail & Related papers (2022-05-05T14:25:54Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients [99.13839450032408]
It is desirable to design a universal framework for adaptive algorithms that can solve general problems.
In particular, our novel framework provides convergence support for adaptive methods in the nonconvex setting.
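As background on what an adaptive-gradient method does, the snippet below runs a plain Adam-style update on a toy nonconvex objective; it is a generic illustration, not the SUPER-ADAM algorithm or its convergence analysis.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-style adaptive-gradient update (generic illustration)."""
    m = b1 * m + (1 - b1) * grad                 # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2            # second-moment estimate
    m_hat = m / (1 - b1 ** t)                    # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy nonconvex objective: f(theta) = sum(theta**4 - theta**2), gradient 4*theta**3 - 2*theta.
theta = np.array([2.0, -1.5, 0.5])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 2001):
    grad = 4 * theta ** 3 - 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
print("theta after optimization:", np.round(theta, 3))
```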
arXiv Detail & Related papers (2021-06-15T15:16:28Z) - Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Simplified Swarm Optimization for Bi-Objection Active Reliability Redundancy Allocation Problems [1.5990720051907859]
The reliability redundancy allocation problem (RRAP) is a well-known problem in system design, development, and management.
In this study, a bi-objective RRAP is formulated by converting the cost constraint into a second objective.
To solve the proposed problem, a new simplified swarm optimization (SSO) with a penalty function, a real one-type solution structure, a number-based self-adaptive new update mechanism, a constrained non-dominated solution selection, and a new pBest replacement policy is developed.
arXiv Detail & Related papers (2020-06-17T13:15:44Z)
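To illustrate the penalty-function ingredient mentioned for the SSO variant above, the snippet below scores a redundancy-allocation candidate by subtracting a penalty for constraint violation from its reliability. The series-system reliability model, cost coefficients, budget, and penalty weight are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

# Illustrative series system: 4 subsystems, each with n_i redundant components.
component_reliability = np.array([0.80, 0.85, 0.90, 0.75])
component_cost = np.array([3.0, 4.0, 2.0, 5.0])
cost_budget = 40.0

def system_reliability(n):
    """Series system of parallel subsystems: prod_i (1 - (1 - r_i)^n_i)."""
    return float(np.prod(1.0 - (1.0 - component_reliability) ** n))

def penalized_fitness(n, penalty_weight=10.0):
    """Maximize reliability; constraint violations reduce fitness via a penalty term."""
    violation = max(0.0, float(np.dot(component_cost, n)) - cost_budget)
    return system_reliability(n) - penalty_weight * violation

# Example candidates: one feasible, one exceeding the cost budget (and thus penalized).
print(penalized_fitness(np.array([2, 2, 2, 2])))
print(penalized_fitness(np.array([4, 4, 4, 4])))
```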
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.