An Approach to Ordering Objectives and Pareto Efficient Solutions
- URL: http://arxiv.org/abs/2205.15291v1
- Date: Mon, 30 May 2022 17:55:53 GMT
- Title: An Approach to Ordering Objectives and Pareto Efficient Solutions
- Authors: Sebastian Hönel, Welf Löwe
- Abstract summary: Solutions to multi-objective optimization problems generally cannot be compared or ordered.
Decision-makers are often made to believe that scaled objectives can be compared.
We present a method that uses the probability integral transform to map the objectives of a problem into scores that all share the same range.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Solutions to multi-objective optimization problems can generally not be
compared or ordered, due to the lack of orderability of the single objectives.
Furthermore, decision-makers are often made to believe that scaled objectives
can be compared. This is a fallacy, as the space of solutions is in practice
inhomogeneous without linear trade-offs. We present a method that uses the
probability integral transform in order to map the objectives of a problem into
scores that all share the same range. In the score space, we can learn which
trade-offs are actually possible and develop methods for mapping the desired
trade-off back into the preference space. Our results demonstrate that Pareto
efficient solutions can be ordered using a low- or no-preference aggregation of
the single objectives. When scores are used instead of raw objectives during
optimization, the obtained trade-offs are significantly closer to the expressed
preference. Transforming a desired solution in the score space into the
required preference for optimization via a non-linear mapping improves this
even further.
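The following is a minimal sketch of the core idea (not the authors' implementation): raw objective values are mapped to (0, 1] scores via per-objective empirical CDFs, a simple instance of the probability integral transform, and the Pareto efficient solutions are then ordered by an equal-weight (no-preference) mean of those scores. The synthetic data, function names, and the choice of empirical CDFs are illustrative assumptions.

```python
# Minimal sketch, assuming empirical CDFs as the probability integral transform
# and an equal-weight mean as the no-preference aggregation; not the authors'
# implementation.
import numpy as np

def pit_scores(objectives: np.ndarray) -> np.ndarray:
    """objectives: (n_solutions, n_objectives), all objectives to be minimized.
    Returns scores in (0, 1] where smaller raw values yield higher scores."""
    n, m = objectives.shape
    scores = np.empty((n, m), dtype=float)
    for j in range(m):
        col = objectives[:, j]
        # Empirical CDF evaluated at each value: P(F_j <= f_ij).
        ecdf = np.searchsorted(np.sort(col), col, side="right") / n
        # Invert so that better (smaller) objective values receive higher scores.
        scores[:, j] = 1.0 - ecdf + 1.0 / n
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical bi-objective Pareto front approximation (both minimized).
    f1 = rng.uniform(0.0, 1.0, size=50)
    f2 = 1.0 - np.sqrt(f1) + rng.normal(0.0, 0.01, size=50)
    F = np.column_stack([f1, f2])
    S = pit_scores(F)
    # No-preference aggregation: the equal-weight mean of per-objective scores
    # induces a total order over otherwise incomparable solutions.
    order = np.argsort(-S.mean(axis=1))
    print("Top 5 solutions by aggregated score:", order[:5])
```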
Related papers
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z) - Few for Many: Tchebycheff Set Scalarization for Many-Objective Optimization [14.355588194787073]
Multi-objective optimization can be found in many real-world applications where some conflicting objectives can not be optimized by a single solution.
We propose a novel Tchebycheff set scalarization method to find a few representative solutions to cover a large number of objectives.
In this way, each objective can be well addressed by at least one solution in the small solution set (a sketch of the classic weighted Tchebycheff scalarization appears after this list).
arXiv Detail & Related papers (2024-05-30T03:04:57Z) - Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z) - Controllable Preference Optimization: Toward Controllable Multi-Objective Alignment [103.12563033438715]
Alignment in artificial intelligence pursues consistency between model responses and human preferences as well as values.
Existing alignment techniques are mostly unidirectional, leading to suboptimal trade-offs and poor flexibility over various objectives.
We introduce controllable preference optimization (CPO), which explicitly specifies preference scores for different objectives.
arXiv Detail & Related papers (2024-02-29T12:12:30Z) - Optimization on Pareto sets: On a theory of multi-objective optimization [7.907376287850398]
In multi-objective optimization, a single decision vector must balance the trade-offs between many objectives.
We consider a more practically significant optimization problem, where the goal is to optimize over the constrained set of Pareto-optimal solutions.
arXiv Detail & Related papers (2023-08-04T05:55:52Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Data-Driven Offline Decision-Making via Invariant Representation Learning [97.49309949598505]
Offline data-driven decision-making involves synthesizing optimized decisions with no active interaction.
A key challenge is distributional shift: when we optimize with respect to the input into a model trained from offline data, it is easy to produce an out-of-distribution (OOD) input that appears erroneously good.
In this paper, we formulate offline data-driven decision-making as domain adaptation, where the goal is to make accurate predictions for the value of optimized decisions.
arXiv Detail & Related papers (2022-11-21T11:01:37Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Goal Seeking Quadratic Unconstrained Binary Optimization [0.5439020425819]
We present two variants of goal-seeking QUBO that minimize the deviation from the goal through a tabu-search-based greedy one-flip heuristic (an illustrative sketch appears after this list).
arXiv Detail & Related papers (2021-03-24T03:03:13Z) - Extracting Optimal Solution Manifolds using Constrained Neural Optimization [6.800113407368289]
Constrained optimization solution algorithms are typically restricted to point-based solutions.
We present an approach for extracting optimal solution sets as approximate manifolds, where unmodified, non-informed constraints are defined.
arXiv Detail & Related papers (2020-09-13T15:37:44Z) - Ensuring smoothly navigable approximation sets by Bezier curve parameterizations in evolutionary bi-objective optimization -- applied to brachytherapy treatment planning for prostate cancer [0.0]
We study the case of parameterizing approximation sets as smooth Bezier curves in decision space.
We show that high-quality approximation sets can be obtained with BezEA, sometimes even outperforming the domination- and UHV-based algorithms.
arXiv Detail & Related papers (2020-06-11T13:57:33Z)
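For background on the "Few for Many: Tchebycheff Set Scalarization" entry above, the sketch below shows the classic weighted Tchebycheff scalarization (minimizing the worst weighted deviation from an ideal point), not the paper's set-based variant; the weights, the ideal-point estimate, and the function names are illustrative assumptions.

```python
# Classic weighted Tchebycheff scalarization (background only; the "Few for
# Many" paper proposes a set-based variant that is not reproduced here).
# A solution is scored by its worst weighted deviation from an ideal point z*;
# minimizing this score yields a (weakly) Pareto-optimal solution for any
# positive weight vector.
import numpy as np

def tchebycheff(F: np.ndarray, weights: np.ndarray, z_star: np.ndarray) -> np.ndarray:
    """F: (n_solutions, n_objectives) objective values (minimized).
    Returns max_i w_i * (f_i - z*_i) for each solution."""
    return np.max(weights * (F - z_star), axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    F = rng.uniform(size=(100, 3))       # hypothetical objective values
    z_star = F.min(axis=0) - 1e-6        # ideal point estimated from the data
    w = np.array([0.5, 0.3, 0.2])        # assumed preference weights
    best = int(np.argmin(tchebycheff(F, w, z_star)))
    print("Selected solution index:", best)
```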
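For the "Goal Seeking Quadratic Unconstrained Binary Optimization" entry above, the sketch below illustrates one plausible reading of a tabu-search-based greedy one-flip heuristic that minimizes the deviation |x^T Q x - goal|; it is an assumption-laden illustration, not the paper's algorithm, and all parameter choices are arbitrary.

```python
# Sketch of a goal-seeking one-flip tabu search for QUBO: instead of minimizing
# x^T Q x itself, minimize the absolute deviation of x^T Q x from a goal value.
# Not the paper's algorithm; tenure, iteration budget, and data are assumptions.
import numpy as np

def goal_seeking_one_flip(Q, goal, n_iters=200, tabu_tenure=5, seed=0):
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, size=n)
    deviation = lambda v: abs(float(v @ Q @ v) - goal)
    best_x, best_dev = x.copy(), deviation(x)
    tabu_until = np.zeros(n, dtype=int)  # iteration until which a bit stays tabu
    for it in range(n_iters):
        cand_dev, cand_bit = None, None
        for i in range(n):
            y = x.copy()
            y[i] ^= 1
            d = deviation(y)
            # Tabu moves are skipped unless they beat the best deviation so far
            # (a standard aspiration criterion).
            if tabu_until[i] > it and d >= best_dev:
                continue
            if cand_dev is None or d < cand_dev:
                cand_dev, cand_bit = d, i
        if cand_bit is None:
            break
        x[cand_bit] ^= 1
        tabu_until[cand_bit] = it + tabu_tenure
        if cand_dev < best_dev:
            best_x, best_dev = x.copy(), cand_dev
    return best_x, best_dev

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    A = rng.normal(size=(12, 12))
    Q = (A + A.T) / 2                    # hypothetical symmetric QUBO matrix
    x, dev = goal_seeking_one_flip(Q, goal=3.0)
    print("Best deviation from goal:", dev)
```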
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.