Optimization on Pareto sets: On a theory of multi-objective optimization
- URL: http://arxiv.org/abs/2308.02145v1
- Date: Fri, 4 Aug 2023 05:55:52 GMT
- Title: Optimization on Pareto sets: On a theory of multi-objective optimization
- Authors: Abhishek Roy, Geelon So, Yi-An Ma
- Abstract summary: In multi-objective optimization, a single decision vector must balance the trade-offs between many objectives.
We consider a more practically significant optimization problem, where the goal is to optimize a preference function constrained to the Pareto set.
- Score: 7.907376287850398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In multi-objective optimization, a single decision vector must balance the
trade-offs between many objectives. Solutions achieving an optimal trade-off
are said to be Pareto optimal: these are decision vectors for which improving
any one objective must come at a cost to another. But as the set of Pareto
optimal vectors can be very large, we further consider a more practically
significant Pareto-constrained optimization problem, where the goal is to
optimize a preference function constrained to the Pareto set.
We investigate local methods for solving this constrained optimization
problem, which poses significant challenges because the constraint set is (i)
implicitly defined, and (ii) generally non-convex and non-smooth, even when the
objectives are. We define notions of optimality and stationarity, and provide
an algorithm with a last-iterate convergence rate of $O(K^{-1/2})$ to
stationarity when the objectives are strongly convex and Lipschitz smooth.
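To make the setting concrete, here is a minimal Python sketch, not the paper's algorithm: for two strongly convex quadratics, the weighted-sum minimizers $x^*(w)$, $w \in [0,1]$, trace out the entire Pareto set, so a preference function can be optimized by an outer search over $w$. The objectives, the preference function `pref`, and all constants are illustrative assumptions.

```python
import numpy as np

# Two strongly convex objectives; their Pareto set is traced by the
# weighted-sum minimizers x*(w) for w in [0, 1].
f1 = lambda x: np.sum((x - 1.0) ** 2)   # minimized at x = (1, 1)
f2 = lambda x: np.sum((x + 1.0) ** 2)   # minimized at x = (-1, -1)

def pareto_point(w, steps=500, lr=0.1):
    """Gradient descent on the scalarization w*f1 + (1-w)*f2."""
    x = np.zeros(2)
    for _ in range(steps):
        grad = 2 * w * (x - 1.0) + 2 * (1 - w) * (x + 1.0)
        x -= lr * grad
    return x

# Hypothetical preference function to be optimized over the Pareto set.
pref = lambda x: (x[0] - 0.5) ** 2 + x[1] ** 2

# Outer search over the weight that parameterizes the Pareto set.
ws = np.linspace(0.0, 1.0, 101)
w_best = min(ws, key=lambda w: pref(pareto_point(w)))
print("preferred Pareto point:", pareto_point(w_best))  # ~ (0.25, 0.25)
```

For general non-convex objectives this weight parameterization breaks down, which is exactly why the paper develops local methods with explicit notions of stationarity on the Pareto set.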
Related papers
- Preference-Optimized Pareto Set Learning for Blackbox Optimization [1.9628841617148691]
No single solution exists that can optimize all the objectives simultaneously.
In a typical MOO problem, the goal is to find a set of optimal solutions (the Pareto set) that trades off the preferences among objectives.
Our formulation leads to a bilevel optimization problem that can be solved by, e.g., differentiable cross-entropy methods.
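As a rough sketch of the Pareto set learning idea, the snippet below trains a small network to map preference vectors to solutions. It uses a plain weighted-sum surrogate loss and toy quadratic objectives as assumptions; the paper's bilevel, preference-optimized formulation would replace this surrogate.

```python
import torch
import torch.nn as nn

# Two toy objectives over x in R^2 (illustrative assumptions).
def f(x):
    return torch.stack([((x - 1) ** 2).sum(-1), ((x + 1) ** 2).sum(-1)], -1)

# A small model mapping a preference vector to a candidate solution;
# training on random preferences yields an amortized map of the Pareto set.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(2000):
    lam = torch.distributions.Dirichlet(torch.ones(2)).sample((64,))
    x = net(lam)
    # Weighted-sum scalarization as a stand-in surrogate loss.
    loss = (lam * f(x)).sum(-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Query the learned map at any trade-off preference.
print(net(torch.tensor([[0.8, 0.2]])))
```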
arXiv Detail & Related papers (2024-08-19T13:23:07Z) - Few for Many: Tchebycheff Set Scalarization for Many-Objective Optimization [14.355588194787073]
Multi-objective optimization can be found in many real-world applications where some conflicting objectives can not be optimized by a single solution.
We propose a novel Tchebycheff set scalarization method to find a few representative solutions to cover a large number of objectives.
In this way, each objective can be well addressed by at least one solution in the small solution set.
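A minimal sketch of the set-scalarization objective under simplifying assumptions (uniform weights, a zero ideal point, toy quadratic objectives, and crude random search in place of the paper's solver): each objective is covered by its best solution in the small set, and the worst-covered objective is minimized.

```python
import numpy as np
rng = np.random.default_rng(0)

m, k, d = 20, 3, 2                       # objectives, set size, dimension
C = rng.normal(size=(m, d))              # each f_i(x) = ||x - C[i]||^2
z = np.zeros(m)                          # ideal point (assumed zero here)

def set_scalarization(X):
    """Max over objectives of the best (min) value any solution attains."""
    vals = ((X[None, :, :] - C[:, None, :]) ** 2).sum(-1)   # shape (m, k)
    return np.max(np.min(vals, axis=1) - z)                 # uniform weights

# Crude random search standing in for the paper's method.
best = min((rng.normal(size=(k, d)) for _ in range(5000)), key=set_scalarization)
print("set-scalarization value:", round(set_scalarization(best), 4))
```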
arXiv Detail & Related papers (2024-05-30T03:04:57Z) - Multi-Objective Bayesian Optimization with Active Preference Learning [18.066263838953223]
We propose a Bayesian optimization (BO) approach to identifying the most preferred solution in a multi-objective optimization (MOO) problem.
To minimize the interaction cost with the decision maker (DM), we also propose an active learning strategy for the preference estimation.
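The sketch below runs a bare-bones BO loop on a preference-scalarized objective. It is an illustration only: the preference vector is fixed here, whereas the paper estimates it from the DM via active preference learning, and the two toy objectives are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
# Two toy objectives scalarized by a fixed preference (the paper would
# instead learn the preference from the decision maker's feedback).
pref = np.array([0.7, 0.3])
g = lambda x: pref @ np.array([(x - 0.2) ** 2, (x + 0.4) ** 2])

X = rng.uniform(-1, 1, size=(5, 1))
y = np.array([g(x[0]) for x in X])
grid = np.linspace(-1, 1, 200).reshape(-1, 1)

for _ in range(20):
    gp = GaussianProcessRegressor(RBF(0.3), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    # Expected improvement for minimization.
    zed = (y.min() - mu) / np.maximum(sd, 1e-9)
    ei = (y.min() - mu) * norm.cdf(zed) + sd * norm.pdf(zed)
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, g(x_next[0]))

print("preferred solution:", X[np.argmin(y)])
```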
arXiv Detail & Related papers (2023-11-22T15:24:36Z) - Evolutionary Solution Adaption for Multi-Objective Metal Cutting Process
Optimization [59.45414406974091]
We introduce a framework for system flexibility that allows us to study the ability of an algorithm to transfer solutions from previous optimization tasks.
We study the flexibility of NSGA-II, which we extend with two variants: 1) varying goals, which optimizes solutions for two tasks simultaneously to obtain in-between source solutions expected to be more adaptable, and 2) active-inactive genotype, which accommodates different possibilities that can be activated or deactivated.
Results show that adaption with standard NSGA-II greatly reduces the number of evaluations required to reach a target goal, while the proposed variants further reduce the adaption costs.
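A rough sketch of the solution-transfer idea with off-the-shelf NSGA-II, assuming the pymoo library (and that its `NSGA2` accepts a NumPy array as initial sampling), with ZDT benchmarks substituting for the paper's metal-cutting tasks: the final population of a source task seeds the target run instead of random initialization.

```python
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize
from pymoo.problems import get_problem

# Solve a source task, then reuse its final population to warm-start
# a related target task (the adaption idea, minus the paper's variants).
source, target = get_problem("zdt1"), get_problem("zdt2")

res_src = minimize(source, NSGA2(pop_size=100), ('n_gen', 100), seed=1)

# Seed the target run with the source population instead of random sampling.
warm = NSGA2(pop_size=100, sampling=res_src.pop.get("X"))
res_tgt = minimize(target, warm, ('n_gen', 30), seed=1)
print(len(res_tgt.F), "non-dominated target solutions")
```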
arXiv Detail & Related papers (2023-05-31T12:07:50Z) - Algorithm for Constrained Markov Decision Process with Linear
Convergence [55.41644538483948]
An agent aims to maximize the expected accumulated discounted reward subject to multiple constraints on its costs.
A new dual approach is proposed that integrates two ingredients: an entropy-regularized policy optimizer and Vaidya's dual optimizer.
The proposed approach is shown to converge (with linear rate) to the global optimum.
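As a one-state (bandit) illustration of the entropy-regularized Lagrangian idea: a plain dual subgradient step stands in for Vaidya's method, so the linear rate is not reproduced, and all problem data below are made up.

```python
import numpy as np

# One-state CMDP (bandit): maximize expected reward s.t. expected cost <= b.
r = np.array([1.0, 0.3, 0.6])     # rewards per action (assumed)
c = np.array([1.0, 0.1, 0.4])     # costs per action (assumed)
b, tau, eta = 0.5, 0.05, 0.5      # budget, entropy weight, dual step size

lam = 0.0
for _ in range(200):
    # Entropy-regularized best response: softmax of the Lagrangian payoffs.
    logits = (r - lam * c) / tau
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()
    # Dual (sub)gradient step on the constraint violation; the paper
    # replaces this with Vaidya's method to obtain linear convergence.
    lam = max(0.0, lam + eta * (pi @ c - b))

print("policy:", pi.round(3), "expected cost:", round(pi @ c, 3))
```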
arXiv Detail & Related papers (2022-06-03T16:26:38Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with novel integration of three ingredients: regularized policy, dual regularizer, and Nesterov's gradient descent on the dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - Goal Seeking Quadratic Unconstrained Binary Optimization [0.5439020425819]
We present two variants of goal-seeking QUBO that minimize the deviation from the goal through a tabu-search-based greedy one-flip heuristic.
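A minimal sketch of a tabu-search-based greedy one-flip loop on a random QUBO instance, minimizing the deviation of $x^\top Q x$ from a goal value; the matrix, goal, and tabu tenure are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
rng = np.random.default_rng(2)

n, goal, tenure = 20, 5.0, 5
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                          # symmetric QUBO matrix (assumed)
x = rng.integers(0, 2, n)
obj = lambda x: abs(x @ Q @ x - goal)      # deviation from the goal

tabu = np.zeros(n, dtype=int)
best, best_val = x.copy(), obj(x)
for t in range(500):
    # Greedy one-flip: take the best non-tabu single-bit flip.
    cands = [i for i in range(n) if tabu[i] <= t]
    vals = []
    for i in cands:
        x[i] ^= 1
        vals.append(obj(x))
        x[i] ^= 1
    i = cands[int(np.argmin(vals))]
    x[i] ^= 1
    tabu[i] = t + tenure                   # forbid flipping i back for a while
    if obj(x) < best_val:
        best, best_val = x.copy(), obj(x)

print("best deviation from goal:", round(best_val, 4))
```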
arXiv Detail & Related papers (2021-03-24T03:03:13Z) - A Hybrid 2-stage Neural Optimization for Pareto Front Extraction [3.918940900258555]
A major obstacle is that optimal trade-off solutions do not always converge to one another.
We propose a two-stage approach that is accurate and cost-effective.
arXiv Detail & Related papers (2021-01-27T20:56:19Z) - Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
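To illustrate the kind of regularized objective such methods optimize (misclassification rate plus a per-leaf penalty), the brute-force sketch below finds a provably optimal depth-1 tree on toy data; this is an assumption-laden miniature, while the paper's algorithms scale the idea to deeper trees and continuous variables.

```python
import numpy as np
rng = np.random.default_rng(3)

# Toy binary data; objective = misclassification rate + lam * (#leaves),
# minimized exactly over all depth-<=1 axis-aligned trees by enumeration.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
lam = 0.01

def leaf_error(y):
    """Error of the best constant prediction on a leaf."""
    return min(np.mean(y), np.mean(1 - y))

best = ("single leaf", leaf_error(y) + lam * 1)
for j in range(X.shape[1]):
    for thr in np.unique(X[:, j]):
        left, right = y[X[:, j] <= thr], y[X[:, j] > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        err = (len(left) * leaf_error(left) + len(right) * leaf_error(right)) / len(y)
        if err + lam * 2 < best[1]:
            best = ((j, round(float(thr), 3)), err + lam * 2)

print("optimal sparse stump (split, objective):", best)
```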
arXiv Detail & Related papers (2020-06-15T19:00:11Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove an $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
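A hedged sketch of the stationarity measure used in such analyses: the Moreau envelope gradient $(x - \mathrm{prox}_{\rho f}(x))/\rho$, computed by brute force for a toy weakly convex function; plain subgradient descent with diminishing steps stands in for the adaptive methods the paper analyzes.

```python
import numpy as np

f = lambda x: abs(x**2 - 1.0)          # weakly convex, nonsmooth test function
rho = 0.25                             # must exceed the weak-convexity curvature
ys = np.linspace(-3, 3, 20001)

def moreau_grad(x):
    # Gradient of the Moreau envelope: (x - prox_{rho f}(x)) / rho,
    # with the prox computed by brute force on a grid.
    prox = ys[np.argmin(f(ys) + (ys - x) ** 2 / (2 * rho))]
    return (x - prox) / rho

# Subgradient descent with diminishing steps; stationarity is measured
# by the Moreau-envelope gradient norm, as in the paper's analysis.
x = 2.5
for t in range(1, 2001):
    g = np.sign(x**2 - 1.0) * 2 * x    # a subgradient of f at x
    x -= 0.1 / np.sqrt(t) * g

print("x:", round(x, 4), "Moreau gradient norm:", round(abs(moreau_grad(x)), 6))
```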
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.