Marginal Probability-Based Integer Handling for CMA-ES Tackling
Single- and Multi-Objective Mixed-Integer Black-Box Optimization
- URL: http://arxiv.org/abs/2212.09260v2
- Date: Thu, 11 Jan 2024 12:40:19 GMT
- Title: Marginal Probability-Based Integer Handling for CMA-ES Tackling
Single- and Multi-Objective Mixed-Integer Black-Box Optimization
- Authors: Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa
- Abstract summary: We study the mixed-integer black-box optimization (MI-BBO) problem where continuous and integer variables should be optimized simultaneously.
We propose a simple integer handling for the CMA-ES based on lower-bounding the marginal probabilities associated with the generation of integer variables in the MGD.
- Score: 10.42820615166362
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This study targets the mixed-integer black-box optimization (MI-BBO) problem
where continuous and integer variables should be optimized simultaneously. The
CMA-ES, our focus in this study, is a population-based stochastic search method
that samples solution candidates from a multivariate Gaussian distribution
(MGD) and shows excellent performance in continuous BBO. In the CMA-ES, the
parameters of the MGD, namely its mean and (co)variance, are updated based on
the evaluation values of the candidate solutions. If the CMA-ES is applied to the MI-BBO with
straightforward discretization, however, the variance corresponding to the
integer variables becomes much smaller than the granularity of the
discretization before reaching the optimal solution, which leads to the
stagnation of the optimization. In particular, when binary variables are
included in the problem, this stagnation is more likely to occur because the
granularity of the discretization is wider, and the existing integer handling
for the CMA-ES does not address it. To overcome these
limitations, we propose a simple integer handling for the CMA-ES based on
lower-bounding the marginal probabilities associated with the generation of
integer variables in the MGD. The numerical experiments on the MI-BBO benchmark
problems demonstrate the efficiency and robustness of the proposed method.
Furthermore, to demonstrate the generality of the proposed idea beyond the
single-objective case, we incorporate it into a multi-objective CMA-ES and
verify its performance on bi-objective mixed-integer benchmark problems.
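As a rough, one-dimensional sketch of the lower-bounding idea (not the authors' reference implementation), the Python snippet below shows how an integer coordinate obtained by rounding a Gaussian draw stagnates once the standard deviation falls far below the integer granularity, and how shifting the mean so that at least a fraction alpha of the probability mass crosses the nearest rounding threshold keeps the neighboring integer reachable. The function name corrected_mean, the default alpha, and the specific mean-shift rule are illustrative assumptions; the paper's actual correction lower-bounds the marginal probabilities of the full MGD within the CMA-ES update.

```python
import numpy as np
from scipy.stats import norm

def corrected_mean(m, sigma, alpha=0.2):
    """Shift the mean m of a 1-D Gaussian so that at least a fraction alpha
    of its mass lies beyond the nearest integer-rounding threshold.
    A hypothetical 1-D sketch of the lower-bounding idea, not the paper's
    exact correction of the multivariate Gaussian distribution."""
    nearest_int = np.round(m)
    # threshold between the current integer bin and its nearest neighbor
    t = nearest_int + 0.5 * np.sign(m - nearest_int)
    if t == nearest_int:  # m sits exactly on an integer; skip the correction
        return m
    # probability that a draw from N(m, sigma^2) crosses the threshold,
    # i.e. that rounding yields the neighboring integer instead of nearest_int
    p_cross = norm.sf(abs(t - m) / sigma)
    if p_cross < alpha:
        # pull m toward t so that exactly alpha of the mass crosses it
        m = t - np.sign(t - m) * sigma * norm.isf(alpha)
    return m

# Example: sigma is far below the integer granularity of 1, so plain rounding
# of draws from N(2.9, 0.05^2) would return 3 essentially forever.
rng = np.random.default_rng(0)
m_corr = corrected_mean(2.9, 0.05)                # pulled toward the 2.5 threshold
samples = np.round(rng.normal(m_corr, 0.05, size=1000))
print(m_corr, np.mean(samples == 2))              # roughly alpha of the draws hit 2
```

In this toy setting the uncorrected sampler keeps producing 3, whereas the corrected mean leaves roughly an alpha fraction of the draws on the neighboring integer 2, which illustrates the mechanism that prevents the stagnation described above.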
Related papers
- A Fresh Look at Generalized Category Discovery through Non-negative Matrix Factorization [83.12938977698988]
Generalized Category Discovery (GCD) aims to classify both base and novel images using labeled base data.
Current approaches inadequately address the intrinsic optimization of the co-occurrence matrix $\bar{A}$ based on cosine similarity.
We propose a Non-Negative Generalized Category Discovery (NN-GCD) framework to address these deficiencies.
arXiv Detail & Related papers (2024-10-29T07:24:11Z)
- CMA-ES for Discrete and Mixed-Variable Optimization on Sets of Points [9.130749109828717]
This paper focuses on optimization on sets of points and proposes an optimization method, CMA-ES-SoP, by extending the covariance matrix adaptation evolution strategy (CMA-ES).
The CMA-ES-SoP incorporates margin correction that maintains the generation probability of neighboring points to prevent premature convergence to a specific non-optimal point.
Numerical simulations demonstrated that the CMA-ES-SoP successfully optimized such problems on sets of points, whereas the naive CMA-ES failed to optimize them due to premature convergence.
arXiv Detail & Related papers (2024-08-23T13:10:06Z)
- CatCMA: Stochastic Optimization for Mixed-Category Problems [9.13074910982872]
Black-box optimization problems often require simultaneously optimizing different types of variables, such as continuous, integer, and categorical variables.
Several Bayesian optimization methods can deal with mixed-category black-box optimization (MC-BBO), but they suffer from a lack of scalability to high-dimensional problems and from high internal computational cost.
This paper proposes CatCMA, a new optimization method for MC-BBO problems.
arXiv Detail & Related papers (2024-05-16T10:11:18Z)
- Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for general-sum Markov Games (MGs) under general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm provides comparable sublinear regret to the existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z)
- (1+1)-CMA-ES with Margin for Discrete and Mixed-Integer Problems [6.181926091202142]
This paper introduces the margin correction into (1+1)-CMA-ES, an elitist version of CMA-ES.
The (1+1)-CMA-ES is often advantageous for unimodal functions and can be computationally less expensive.
The numerical simulation using benchmark functions on mixed-integer, integer, and binary domains shows that (1+1)-CMA-ES with margin outperforms the CMA-ES with margin.
arXiv Detail & Related papers (2023-05-01T14:34:55Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer steps for sampling.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization [5.237999056930947]
This study targets the mixed-integer black-box optimization (MI-BBO) problem where continuous and integer variables should be optimized simultaneously.
We propose a simple modification of the CMA-ES based on lower-bounding the marginal probabilities associated with the generation of integer variables in the MGD.
arXiv Detail & Related papers (2022-05-26T16:47:56Z)
- A unified algorithm framework for mean-variance optimization in discounted Markov decision processes [7.510742715895749]
This paper studies the risk-averse mean-variance optimization in infinite-horizon discounted Markov decision processes (MDPs).
We introduce a pseudo mean to transform the untreatable MDP to a standard one with a redefined reward function in standard form.
We propose a unified algorithm framework with a bilevel optimization structure for the discounted mean-variance optimization.
arXiv Detail & Related papers (2022-01-15T02:19:56Z)
- Permutation Invariant Policy Optimization for Mean-Field Multi-Agent Reinforcement Learning: A Principled Approach [128.62787284435007]
We propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.
We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence.
In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors.
arXiv Detail & Related papers (2021-05-18T04:35:41Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAEs) are a powerful and widely used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for Gaussian mixture models (GMMs).
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Distributionally Robust Bayesian Optimization [121.71766171427433]
We present a novel distributionally robust Bayesian optimization algorithm (DRBO) for zeroth-order, noisy optimization.
Our algorithm provably obtains sub-linear robust regret in various settings.
We demonstrate the robust performance of our method on both synthetic and real-world benchmarks.
arXiv Detail & Related papers (2020-02-20T22:04:30Z)