A Newton Method for Hausdorff Approximations of the Pareto Front within Multi-objective Evolutionary Algorithms
- URL: http://arxiv.org/abs/2405.05721v2
- Date: Wed, 25 Sep 2024 13:56:05 GMT
- Title: A Newton Method for Hausdorff Approximations of the Pareto Front within Multi-objective Evolutionary Algorithms
- Authors: Hao Wang, Angel E. Rodriguez-Fernandez, Lourdes Uribe, André Deutz, Oziel Cortés-Piña, Oliver Schütze
- Abstract summary: We propose a set-based Newton method for Hausdorff approximations of the Pareto front to be used within multi-objective evolutionary algorithms.
We show the benefit of the Newton method as a post-processing step on several benchmark test functions and different base evolutionary algorithms.
- Score: 3.2888428450190044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common goal in evolutionary multi-objective optimization is to find suitable finite-size approximations of the Pareto front of a given multi-objective optimization problem. While many multi-objective evolutionary algorithms have proven to be very efficient in finding good Pareto front approximations, they may need quite a few resources or may even fail to obtain optimal or nearly optimal approximations. Hereby, optimality is implicitly defined by the chosen performance indicator. In this work, we propose a set-based Newton method for Hausdorff approximations of the Pareto front to be used within multi-objective evolutionary algorithms. To this end, we first generalize the previously proposed Newton step for the performance indicator to the treatment of constrained problems and general reference sets. To approximate the target Pareto front, we propose a particular strategy for generating the reference set that utilizes the data gathered by the evolutionary algorithm during its run. Finally, we show the benefit of the Newton method as a post-processing step on several benchmark test functions and different base evolutionary algorithms.
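The "Hausdorff approximation" target can be made concrete with the averaged Hausdorff distance Δp = max(GD_p, IGD_p) of Schütze et al., the kind of indicator whose local minimization the set-based Newton method aims at. Below is a minimal Python sketch of this indicator; the function names, the toy approximation set, and the sampled linear reference front are illustrative assumptions, not the paper's code.

```python
import numpy as np

def gd_p(A: np.ndarray, R: np.ndarray, p: float = 2.0) -> float:
    """Generational distance GD_p: power-p mean of the distance from each
    point of A to its nearest neighbor in the reference set R."""
    d = np.linalg.norm(A[:, None, :] - R[None, :, :], axis=-1)  # (|A|, |R|)
    return float(np.mean(d.min(axis=1) ** p) ** (1.0 / p))

def delta_p(A: np.ndarray, R: np.ndarray, p: float = 2.0) -> float:
    """Averaged Hausdorff distance: max of GD_p and IGD_p (roles swapped)."""
    return max(gd_p(A, R, p), gd_p(R, A, p))

# Toy example: a 3-point approximation of a linear Pareto front,
# measured against a densely sampled reference set.
A = np.array([[0.0, 1.0], [0.5, 0.6], [1.0, 0.0]])
t = np.linspace(0.0, 1.0, 101)
R = np.column_stack([t, 1.0 - t])
print(delta_p(A, R))
```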
Related papers
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth parts.
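For intuition, a classic instance of "linearize only the smooth part and handle the set exactly" is the Frank-Wolfe (conditional gradient) step over a convex compact set. The sketch below runs it on the probability simplex with a toy quadratic objective; the paper's fully composite setting is more general, so treat this as an assumed illustrative special case.

```python
import numpy as np

def frank_wolfe(grad, x0, n_iters=200):
    """Conditional gradient over the probability simplex: linearize the
    smooth objective, minimize the linearization exactly over the set."""
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0           # linear minimizer over the simplex is a vertex
        x += 2.0 / (k + 2.0) * (s - x)  # standard open-loop step size
    return x

# Toy smooth objective f(x) = ||x - c||^2 with c inside the simplex.
c = np.array([0.2, 0.5, 0.3])
print(frank_wolfe(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0])))  # -> ~c
```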
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
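A minimal sketch of the ZO-signGD idea, assuming a generic black-box objective: estimate the gradient from function queries along random directions, then step along its sign. All hyperparameters and the toy objective are illustrative choices, not values from the paper.

```python
import numpy as np

def zo_sign_gd(f, x0, lr=0.05, mu=1e-3, n_dirs=20, n_iters=200, seed=0):
    """Zeroth-order sign-based descent: average two-point finite-difference
    gradient estimates over random directions, then step along the sign."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iters):
        g = np.zeros_like(x)
        for _ in range(n_dirs):
            u = rng.standard_normal(x.shape)
            g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
        x -= lr * np.sign(g)
    return x

f = lambda x: float(np.sum((x - 1.0) ** 2))  # toy smooth objective, minimum at 1
print(zo_sign_gd(f, np.zeros(5)))            # each coordinate settles near 1.0
```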
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Multi-objective hyperparameter optimization with performance uncertainty [62.997667081978825]
This paper presents results on multi-objective hyperparameter optimization with uncertainty in the evaluation of machine learning algorithms.
We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) with heterogeneous noise.
Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR.
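A minimal sketch of a GPR metamodel with heterogeneous (per-observation) noise, assuming scikit-learn; the `alpha` parameter accepts one noise value per sample, which is one simple way to encode heteroscedastic observations. The TPE sampling side is not shown, and the data and noise levels are made up.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(2.0 * np.pi * X).ravel()
noise_var = np.linspace(1e-4, 1e-1, len(X))  # a different noise level per point

# `alpha` with one entry per training sample is added to the kernel diagonal,
# i.e., heteroscedastic observation noise.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=noise_var)
gpr.fit(X, y)
mean, std = gpr.predict(np.array([[0.25]]), return_std=True)
print(mean, std)
```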
arXiv Detail & Related papers (2022-09-09T14:58:43Z)
- Bézier Flow: a Surface-wise Gradient Descent Method for Multi-objective Optimization [12.487037582320804]
We extend the stability of optimization algorithms in the sense of Probably Approximately Correct (PAC) learning.
We show that multi-objective optimization algorithms derived from a gradient descent-based single-objective optimization algorithm are PAC stable.
arXiv Detail & Related papers (2022-05-23T07:47:58Z)
- A novel multiobjective evolutionary algorithm based on decomposition and multi-reference points strategy [14.102326122777475]
Multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been regarded as a highly promising approach for solving multiobjective optimization problems (MOPs).
We propose an improved MOEA/D algorithm by virtue of the well-known Pascoletti-Serafini scalarization method and a new strategy of multi-reference points.
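For reference, the Pascoletti-Serafini scalarization replaces the multi-objective problem with: minimize t over (t, x) subject to F(x) ≤ a + t·d componentwise, for a reference point a and direction d. The sketch below solves one such scalar subproblem with SciPy on an assumed toy bi-objective; a, d, and F are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def F(x):  # toy convex bi-objective on R^2
    return np.array([x[0] ** 2 + x[1] ** 2, (x[0] - 1.0) ** 2 + x[1] ** 2])

a = np.array([0.0, 0.0])  # reference point
d = np.array([1.0, 1.0])  # componentwise-positive direction

# Decision vector z = (t, x); minimize t s.t. a + t*d - F(x) >= 0.
cons = {"type": "ineq", "fun": lambda z: a + z[0] * d - F(z[1:])}
res = minimize(lambda z: z[0], x0=np.array([1.0, 0.5, 0.5]), constraints=cons)
print(res.x[0], res.x[1:])  # attained t and the Pareto-point candidate x
```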
arXiv Detail & Related papers (2021-10-27T02:07:08Z)
- Surrogate-Assisted Reference Vector Adaptation to Various Pareto Front Shapes for Many-Objective Bayesian Optimization [0.0]
We propose a surrogate-assisted reference vector adaptation (SRVA) method to solve expensive multi- and many-objective optimization problems.
The proposed algorithm is compared with two other MBO algorithms by applying them to benchmark problems.
Experimental results show that the proposed algorithm outperforms the other two on problems whose objective functions are reasonably well approximated by the Kriging models.
arXiv Detail & Related papers (2021-10-10T03:05:12Z)
- Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to a stationary point yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z)
- Batched Data-Driven Evolutionary Multi-Objective Optimization Based on Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is general enough that any off-the-shelf evolutionary multi-objective optimization algorithm can be applied in a plug-in manner.
Our proposed framework features faster convergence and stronger resilience to various Pareto front (PF) shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z)
- Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
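One concrete consequence of the Markov-chain view, sketched here under assumed toy dynamics: once the iterates reach their stationary regime, averaging the tail (Polyak-Ruppert style) gives a lower-variance solution estimate than the last iterate. This illustrates the general idea only, not the paper's actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)
x, lr = 5.0, 0.1
iterates = []
for _ in range(2000):
    grad = 2.0 * (x - 1.0) + rng.normal(scale=2.0)  # noisy gradient of (x - 1)^2
    x -= lr * grad
    iterates.append(x)

tail = np.array(iterates[1000:])      # discard warm-up, keep the "stationary" tail
print("last iterate:", iterates[-1])  # noisy
print("tail average:", tail.mean())   # much closer to the optimum at 1.0
```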
arXiv Detail & Related papers (2020-09-01T19:12:11Z)
- Adaptive Sampling of Pareto Frontiers with Binary Constraints Using Regression and Classification [0.0]
We present a novel adaptive optimization algorithm for black-box multi-objective optimization problems with binary constraints.
Our method is based on probabilistic regression and classification models, which act as a surrogate for the optimization goals.
We also present a novel ellipsoid truncation method to speed up the expected hypervolume calculation.
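For context, the quantity being accelerated is the hypervolume indicator; the sketch below computes it exactly in the two-objective (minimization) case by summing the staircase of dominated rectangles. The point set and reference point are illustrative; the expected-hypervolume and ellipsoid-truncation machinery of the paper is not reproduced.

```python
import numpy as np

def hypervolume_2d(points: np.ndarray, ref: np.ndarray) -> float:
    """Area dominated by a 2-D point set (minimization) up to reference `ref`."""
    pts = points[np.argsort(points[:, 0])]  # sort by the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                    # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

pts = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2]])
print(hypervolume_2d(pts, ref=np.array([1.0, 1.0])))  # 0.39
```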
arXiv Detail & Related papers (2020-08-27T09:15:02Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for Polyak-Łojasiewicz (PL) objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.