Goal Seeking Quadratic Unconstrained Binary Optimization
- URL: http://arxiv.org/abs/2103.12951v1
- Date: Wed, 24 Mar 2021 03:03:13 GMT
- Title: Goal Seeking Quadratic Unconstrained Binary Optimization
- Authors: Amit Verma and Mark Lewis
- Abstract summary: We present two variants of goal-seeking QUBO that minimize the deviation from the goal through a tabu-search-based greedy one-flip heuristic.
- Score: 0.5439020425819
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Quadratic Unconstrained Binary Optimization (QUBO) modeling and solution
framework is required for quantum and digital annealers whose goal is the
optimization of a well-defined metric, the objective function. However, diverse
suboptimal solutions may be preferred over harder-to-implement strictly optimal
ones. In addition, the decision-maker usually has insights that are not always
efficiently translated into the optimization model, such as acceptable target,
interval, or range values. Multi-criteria decision making is one example of
involving the user in the decision process. In this paper, we present two
variants of goal-seeking QUBO that minimize the deviation from the goal through
a tabu-search-based greedy one-flip heuristic. Experimental results illustrate
the efficacy of the proposed approach over Constraint Programming for quickly
finding a satisficing set of solutions.
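The goal-seeking idea in the abstract can be sketched as a minimal one-flip tabu search in Python. This is an illustrative sketch, not the authors' implementation: the function name, the tabu tenure, the iteration budget, and the aspiration rule are all assumptions; the quantity being minimized is the deviation |x^T Q x - goal|.

```python
import numpy as np

def goal_seek_qubo(Q, goal, n_iters=500, tenure=5, seed=0):
    """Greedy one-flip tabu search minimizing the deviation |x^T Q x - goal|.

    Hypothetical sketch: Q is the (n x n) QUBO matrix, goal the target value.
    """
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, n)                  # random binary start
    val = float(x @ Q @ x)
    tabu = np.zeros(n, dtype=int)              # iteration until which each bit is tabu
    best_x, best_dev = x.copy(), abs(val - goal)
    for it in range(n_iters):
        best_i, best_delta, best_move_dev = -1, 0.0, None
        for i in range(n):
            # change in x^T Q x from flipping bit i:
            # (1 - 2 x_i) * (Q_ii + sum_j (Q_ij + Q_ji) x_j - 2 Q_ii x_i)
            delta = (1 - 2 * x[i]) * (Q[i, i] + (Q[i] + Q[:, i]) @ x - 2 * Q[i, i] * x[i])
            dev = abs(val + delta - goal)
            aspiration = dev < best_dev        # tabu override on strict improvement
            if (tabu[i] <= it or aspiration) and (best_move_dev is None or dev < best_move_dev):
                best_i, best_delta, best_move_dev = i, delta, dev
        if best_i < 0:
            continue                           # every move tabu this iteration
        x[best_i] ^= 1                         # apply the greedy one-flip move
        val += best_delta
        tabu[best_i] = it + tenure
        if abs(val - goal) < best_dev:
            best_dev, best_x = abs(val - goal), x.copy()
        if best_dev == 0:                      # goal reached exactly
            break
    return best_x, best_dev
```

For example, with Q = diag(1, 2, 4) the objective x^T Q x can take any value in 0..7, so a goal of 5 is met exactly by x = (1, 0, 1).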
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to model potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Few for Many: Tchebycheff Set Scalarization for Many-Objective Optimization [14.355588194787073]
Multi-objective optimization can be found in many real-world applications where some conflicting objectives can not be optimized by a single solution.
We propose a novel Tchebycheff set scalarization method to find a few representative solutions to cover a large number of objectives.
In this way, each objective can be well addressed by at least one solution in the small solution set.
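As an illustrative sketch of the set-scalarization idea (an assumed formulation, not code from that paper): a small solution set covers many objectives if, for every objective, at least one set member attains a small weighted deviation from the ideal point; the set is then scored by its worst-covered objective.

```python
import numpy as np

def tchebycheff_set_value(F, ideal, weights):
    """F[k, m]: objective values of k candidate solutions on m objectives.

    Hypothetical sketch: for each objective take the best (min) weighted
    deviation from the ideal point over the set, then score the set by the
    worst-covered objective (max).
    """
    dev = weights * np.abs(F - ideal)   # (k, m) weighted deviations
    return dev.min(axis=0).max()        # min over solutions, max over objectives
```

With two solutions each ideal on one of two objectives, e.g. F = [[0, 2], [2, 0]], every objective is covered and the set value is 0; a single compromise solution [[1, 3]] scores 3, its worst objective.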
arXiv Detail & Related papers (2024-05-30T03:04:57Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Optimization on Pareto sets: On a theory of multi-objective optimization [7.907376287850398]
In multi-objective optimization, a single decision vector must balance the trade-offs between many objectives.
We consider a more practically significant optimization problem, where the goal is to optimize a constrained objective over the Pareto set.
arXiv Detail & Related papers (2023-08-04T05:55:52Z) - Bi-objective Ranking and Selection Using Stochastic Kriging [0.0]
We consider bi-objective ranking and selection problems in which the two objective outcomes have been observed with uncertainty.
We propose a novel Bayesian bi-objective ranking and selection method that sequentially allocates extra samples to competitive solutions.
Experimental results show that the proposed method outperforms the standard allocation method, as well as a well-known state-of-the-art algorithm.
arXiv Detail & Related papers (2022-09-05T23:51:07Z) - Algorithm for Constrained Markov Decision Process with Linear
Convergence [55.41644538483948]
An agent aims to maximize the expected accumulated discounted reward subject to multiple constraints on its costs.
A new dual approach is proposed with the integration of two ingredients: entropy regularized policy and Vaidya's dual.
The proposed approach is shown to converge (with linear rate) to the global optimum.
arXiv Detail & Related papers (2022-06-03T16:26:38Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - A Surrogate Objective Framework for Prediction+Optimization with Soft
Constraints [29.962390392493507]
Decision-focused prediction approaches, such as SPO+ and direct optimization, have been proposed to bridge prediction and downstream optimization.
This paper proposes a novel analytically differentiable surrogate objective framework for real-world linear and semi-definite negative quadratic programming problems.
arXiv Detail & Related papers (2021-11-22T17:09:57Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Extracting Optimal Solution Manifolds using Constrained Neural
Optimization [6.800113407368289]
Constrained Optimization solution algorithms are restricted to point based solutions.
We present an approach for extracting optimal solution sets as approximate manifolds using constrained neural optimization.
arXiv Detail & Related papers (2020-09-13T15:37:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.