Approximation of a Pareto Set Segment Using a Linear Model with Sharing Variables
- URL: http://arxiv.org/abs/2404.00251v1
- Date: Sat, 30 Mar 2024 05:42:07 GMT
- Title: Approximation of a Pareto Set Segment Using a Linear Model with Sharing Variables
- Authors: Ping Guo, Qingfu Zhang, Xi Lin
- Abstract summary: We develop a performance metric that considers both optimality and variable sharing.
We then design an algorithm for finding the model that minimizes the metric to meet the user's requirements.
Experimental results illustrate that the obtained linear model closely approximates the mapping from preference vectors to solutions in a local area.
- Score: 14.161627541155775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many real-world applications, the Pareto Set (PS) of a continuous multiobjective optimization problem can be a piecewise continuous manifold. A decision maker may want to find a solution set that approximates a small part of the PS and may require that the solutions in this set share some similarities. This paper makes a first attempt to address this issue. We first develop a performance metric that considers both optimality and variable sharing. Then we design an algorithm for finding the model that minimizes the metric to meet the user's requirements. Experimental results illustrate that we can obtain a linear model that closely approximates the mapping from preference vectors to solutions in a local area.
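To make the setup concrete, here is a minimal sketch (not the authors' algorithm) of the underlying idea: fit a linear model x(lam) = A lam + b over a small preference region so that a weighted-sum optimality term stays low while a row-sparsity penalty on A encourages decision variables to be shared, i.e. held constant, across the segment. The toy bi-objective problem, the penalty weight, and the random-search optimizer are all illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): approximate a Pareto set segment with a
# linear model x(lam) = A @ lam + b over a local preference region, trading off
# weighted-sum optimality against variable sharing (row-sparsity of A).
import numpy as np

def objectives(x):
    """Toy bi-objective problem (assumed for illustration)."""
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

def metric(A, b, prefs, sharing_weight=0.05):
    """Optimality term (average weighted-sum value) plus a sharing term.

    A (near-)zero row of A means that decision variable stays constant over the
    whole preference segment, i.e. it is shared by all solutions in the set.
    """
    opt = np.mean([lam @ objectives(A @ lam + b) for lam in prefs])
    sharing = np.sum(np.linalg.norm(A, axis=1))  # group (row-wise) sparsity
    return opt + sharing_weight * sharing

def fit(n_var=5, iters=3000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Local preference segment around (0.5, 0.5).
    prefs = [np.array([w, 1.0 - w]) for w in np.linspace(0.4, 0.6, 11)]
    A = 0.01 * rng.standard_normal((n_var, 2))
    b = rng.standard_normal(n_var)
    best = metric(A, b, prefs)
    for _ in range(iters):
        # Plain random search (assumed); the paper designs a dedicated algorithm.
        A_try = A + step * rng.standard_normal(A.shape)
        b_try = b + step * rng.standard_normal(b.shape)
        val = metric(A_try, b_try, prefs)
        if val < best:
            A, b, best = A_try, b_try, val
    return A, b

if __name__ == "__main__":
    A, b = fit()
    shared = np.where(np.linalg.norm(A, axis=1) < 1e-3)[0]
    print("shared (constant) variables:", shared)
```

Rows of A that shrink to zero correspond to variables that take the same value for every preference in the segment, which is the "variable sharing" requirement described in the abstract.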
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Preference-Optimized Pareto Set Learning for Blackbox Optimization [1.9628841617148691]
No single solution exists that can optimize all the objectives simultaneously.
In a typical MOO problem, the goal is to find a set of optimal solutions (the Pareto set) that trades off the preferences among objectives.
Our formulation leads to a bilevel optimization problem that can be solved by, e.g., differentiable cross-entropy methods.
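For intuition only, the following simplified sketch optimizes a single preference-weighted scalarization of a toy blackbox bi-objective problem with a plain (non-differentiable) cross-entropy method; the paper's bilevel, preference-optimized formulation is more involved, and all problem details and hyperparameters here are assumptions.

```python
# Simplified sketch (assumptions): cross-entropy method (CEM) on a preference-weighted
# scalarization of a toy blackbox bi-objective problem. Not the paper's bilevel method.
import numpy as np

def blackbox(x):
    """Toy objectives treated as a black box (assumed for illustration)."""
    return np.array([np.sum(x ** 2), np.sum((x - 2.0) ** 2)])

def cem(pref, dim=4, pop=64, elite=8, iters=50, seed=0):
    """Minimize pref @ blackbox(x) with a Gaussian cross-entropy method."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        samples = mean + std * rng.standard_normal((pop, dim))
        scores = np.array([pref @ blackbox(s) for s in samples])
        elites = samples[np.argsort(scores)[:elite]]  # keep the best candidates
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

if __name__ == "__main__":
    for w in (0.2, 0.5, 0.8):  # a few preference vectors
        x = cem(np.array([w, 1.0 - w]))
        print(f"pref=({w:.1f},{1 - w:.1f}) -> x ~= {np.round(x, 2)}")
```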
arXiv Detail & Related papers (2024-08-19T13:23:07Z)
- Outer Approximation and Super-modular Cuts for Constrained Assortment Optimization under Mixed-Logit Model [6.123324869194196]
We study the assortment optimization problem under the mixed-logit customer choice model.
Existing exact methods have primarily relied on mixed-integer linear programming (MILP) or second-order cone (CONIC) reformulations.
Our work addresses the problem by focusing on components of the objective function that can be proven to be monotonically super-modular and convex.
arXiv Detail & Related papers (2024-07-26T06:27:11Z)
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel, low-compute algorithm, Model Merging with Amortized Pareto Front (MAP).
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
arXiv Detail & Related papers (2024-06-11T17:55:25Z)
- Factorization of Multi-Agent Sampling-Based Motion Planning [72.42734061131569]
Modern robotics often involves multiple embodied agents operating within a shared environment.
Standard sampling-based algorithms can be used to search for solutions in the robots' joint space.
We integrate the concept of factorization into sampling-based algorithms, which requires only minimal modifications to existing methods.
We present a general implementation of a factorized SBA, derive an analytical gain in terms of sample complexity for PRM*, and showcase empirical results for RRG.
arXiv Detail & Related papers (2023-04-01T15:50:18Z)
- Pareto Set Learning for Expensive Multi-Objective Optimization [5.419608513284392]
Expensive multi-objective optimization problems can be found in many real-world applications.
This paper develops a novel learning-based method to approximate the whole Pareto set for multi-objective Bayesian optimization (MOBO).
arXiv Detail & Related papers (2022-10-16T09:41:54Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Robust Multi-view Registration of Point Sets with Laplacian Mixture Model [25.865100974015412]
We propose a novel probabilistic generative method to align multiple point sets based on the heavy-tailed Laplacian distribution.
We demonstrate the advantages of our method by comparing it with representative state-of-the-art approaches on challenging benchmark data sets.
arXiv Detail & Related papers (2021-10-26T14:49:09Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Sparse Approximate Solutions to Max-Plus Equations with Application to Multivariate Convex Regression [34.99564569478268]
We show how one can obtain such solutions efficiently and in minimum time for any $\ell_p$ approximation error.
We propose a novel method for piecewise fitting of convex functions, with optimality guarantees and an approximately sparse set of affine regions.
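As background for this setting (and not the paper's sparse method), the sketch below computes the classical greatest sub-solution of a max-plus system, the usual starting point for approximate solutions; the example matrix and vector are arbitrary assumptions.

```python
# Background sketch (not the paper's sparse method): the greatest sub-solution
# (principal solution) of a max-plus system, x_hat_j = min_i (b_i - A_ij),
# which satisfies (A (x) x_hat)_i = max_j (A_ij + x_hat_j) <= b_i for all i.
import numpy as np

def maxplus_matvec(A, x):
    """Max-plus product: result_i = max_j (A_ij + x_j)."""
    return np.max(A + x[None, :], axis=1)

def principal_solution(A, b):
    """Greatest x with max-plus product A (x) x <= b (componentwise)."""
    return np.min(b[:, None] - A, axis=0)

if __name__ == "__main__":
    A = np.array([[1.0, 3.0], [2.0, 0.0], [4.0, 1.0]])  # assumed example data
    b = np.array([5.0, 4.0, 6.0])
    x_hat = principal_solution(A, b)
    print("x_hat =", x_hat)
    print("A (x) x_hat =", maxplus_matvec(A, x_hat), "<= b =", b)
```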
arXiv Detail & Related papers (2020-11-06T15:17:00Z)
- MINA: Convex Mixed-Integer Programming for Non-Rigid Shape Alignment [77.38594866794429]
We present a convex mixed-integer programming formulation for non-rigid shape matching.
We propose a novel shape deformation model based on an efficient low-dimensional discrete model.
arXiv Detail & Related papers (2020-02-28T09:54:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.