RIGA: A Regret-Based Interactive Genetic Algorithm
- URL: http://arxiv.org/abs/2311.06063v1
- Date: Fri, 10 Nov 2023 13:56:15 GMT
- Title: RIGA: A Regret-Based Interactive Genetic Algorithm
- Authors: Nawal Benabbou and Cassandre Leroy and Thibaut Lust
- Abstract summary: We propose an interactive genetic algorithm for solving multi-objective optimization problems under preference imprecision.
Our algorithm, called RIGA, can be applied to any multi-objective optimization problem provided that the aggregation function is linear in its parameters.
For several performance indicators (computation times, gap to optimality and number of queries), RIGA obtains better results than state-of-the-art algorithms.
- Score: 14.388696798649658
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose an interactive genetic algorithm for solving
multi-objective combinatorial optimization problems under preference
imprecision. More precisely, we consider problems where the decision maker's
preferences over solutions can be represented by a parameterized aggregation
function (e.g., a weighted sum, an OWA operator, a Choquet integral), and we
assume that the parameters are initially not known by the recommendation
system. In order to quickly make a good recommendation, we combine elicitation
and search in the following way: 1) we use regret-based elicitation techniques
to reduce the parameter space in an efficient way, 2) genetic operators are
applied on parameter instances (instead of solutions) to better explore the
parameter space, and 3) we generate promising solutions (population) using
existing solving methods designed for the problem with known preferences. Our
algorithm, called RIGA, can be applied to any multi-objective combinatorial
optimization problem provided that the aggregation function is linear in its
parameters and that a (near-)optimal solution can be efficiently determined for
the problem with known preferences. We also study its theoretical performance:
RIGA can be implemented in such a way that it runs in polynomial time while
asking no more than a polynomial number of queries. The method is tested on the
multi-objective knapsack and traveling salesman problems. For several
performance indicators (computation times, gap to optimality and number of
queries), RIGA obtains better results than state-of-the-art algorithms.
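To make the interplay of the three components concrete, below is a minimal Python sketch of a RIGA-style loop, not the authors' implementation. It assumes a weighted-sum aggregator; `solve_for_weights` (a solver for the problem with known preferences) and `ask_dm` (the decision-maker query oracle) are hypothetical placeholders, solutions are represented by their objective vectors, and the parameter polyhedron is approximated by crude rejection sampling rather than the LP-based regret computations standard in regret-based elicitation.

```python
# Illustrative RIGA-style loop (weighted-sum aggregator assumed).
# solve_for_weights and ask_dm are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def sample_feasible_weights(n_obj, constraints, n=50, max_tries=20000):
    """Rejection-sample simplex weights consistent with all DM answers,
    each answer stored as a vector d encoding the constraint w . d >= 0."""
    out = []
    for _ in range(max_tries):
        w = rng.dirichlet(np.ones(n_obj))
        if all(w @ d >= 0 for d in constraints):
            out.append(w)
            if len(out) == n:
                break
    return np.array(out)

def pairwise_max_regret(x, y, ws):
    """Approximate PMR(x, y) = max_w (w.y - w.x) over sampled feasible w."""
    return float(np.max(ws @ (np.asarray(y) - np.asarray(x))))

def riga_sketch(n_obj, solve_for_weights, ask_dm, pop_size=10, generations=20):
    constraints = []  # one linear constraint per DM answer
    population = rng.dirichlet(np.ones(n_obj), size=pop_size)  # parameter instances
    for _ in range(generations):
        # 3) promising solutions via the known-preference solver
        solutions = [tuple(solve_for_weights(w)) for w in population]
        ws = sample_feasible_weights(n_obj, constraints)
        if len(ws) == 0:
            break  # feasible region too small for this crude sampler
        # 1) regret-based elicitation: query the pair with maximal regret
        pairs = [(a, b) for a in set(solutions) for b in set(solutions) if a != b]
        if not pairs:
            break
        x, y = max(pairs, key=lambda p: pairwise_max_regret(p[0], p[1], ws))
        preferred, other = (x, y) if ask_dm(x, y) else (y, x)
        constraints.append(np.asarray(preferred) - np.asarray(other))
        # 2) genetic operators act on parameter vectors, not on solutions
        parents = sample_feasible_weights(n_obj, constraints, n=pop_size)
        if len(parents) < pop_size:
            break
        mutated = np.abs(parents + rng.normal(scale=0.05, size=parents.shape))
        population = mutated / mutated.sum(axis=1, keepdims=True)
    w = sample_feasible_weights(n_obj, constraints, n=1)
    return solve_for_weights(w[0] if len(w) else rng.dirichlet(np.ones(n_obj)))
```

For the multi-objective knapsack, for instance, `solve_for_weights` could be any exact or greedy solver for the scalarized objective w . f(x); solving it for different sampled weight vectors yields the diverse population of promising solutions the abstract describes.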
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of Deep Matching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Learning Adaptive Evolutionary Computation for Solving Multi-Objective Optimization Problems [3.3266268089678257]
This paper proposes a framework that integrates MOEAs with adaptive parameter control using Deep Reinforcement Learning (DRL).
The DRL policy is trained to adaptively set the values that dictate the intensity and probability of mutation for solutions during optimization.
We show the learned policy is transferable, i.e., the policy trained on a simple benchmark problem can be directly applied to solve the complex warehouse optimization problem.
arXiv Detail & Related papers (2022-11-01T22:08:34Z)
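As a rough illustration of the adaptive-parameter-control idea in the paper above (a sketch, not the paper's method), the loop below uses a stand-in policy that maps a simple optimization state (stagnation, diversity) to a mutation rate and intensity; in the paper this mapping would be a trained DRL policy.

```python
# Sketch: adaptive mutation control via a (stand-in) learned policy.
import numpy as np

rng = np.random.default_rng(1)

def policy(state):
    """Stand-in for a trained DRL policy: state -> (mutation_rate, mutation_scale)."""
    stagnation, diversity = state
    rate = float(np.clip(0.1 + 0.5 * stagnation - 0.2 * diversity, 0.01, 0.9))
    scale = float(np.clip(0.05 + 0.3 * stagnation, 0.01, 0.5))
    return rate, scale

def evolve(population, fitness, steps=100):
    """Evolution loop whose mutation parameters are reset adaptively each step."""
    best = max(fitness(ind) for ind in population)
    stagnation = 0.0
    for _ in range(steps):
        diversity = float(np.mean(np.std(population, axis=0)))
        rate, scale = policy((stagnation, diversity))  # adaptive control
        mask = rng.random(population.shape) < rate     # which genes mutate
        population = population + mask * rng.normal(scale=scale, size=population.shape)
        new_best = max(fitness(ind) for ind in population)
        stagnation = 0.0 if new_best > best else min(1.0, stagnation + 0.1)
        best = max(best, new_best)
    return best

# Example: maximize -||x||^2 over a population of 20 five-dimensional vectors.
print(evolve(rng.normal(size=(20, 5)), lambda x: -float(x @ x)))
```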
- Enhanced Opposition Differential Evolution Algorithm for Multimodal Optimization [0.2538209532048866]
Most real-world problems are multimodal in nature, consisting of multiple optimal values.
Classical gradient-based methods fail for optimization problems in which the objective functions are either discontinuous or non-differentiable.
We have proposed the Enhanced Opposition Differential Evolution (EODE) algorithm to solve MMOPs.
arXiv Detail & Related papers (2022-08-23T16:18:27Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- A Simple Evolutionary Algorithm for Multi-modal Multi-objective Optimization [0.0]
We introduce a steady-state evolutionary algorithm for solving multi-modal, multi-objective optimization problems (MMOPs).
We report its performance on 21 MMOPs from various widely used benchmarking test suites, using a low computational budget of 1,000 function evaluations.
arXiv Detail & Related papers (2022-01-18T03:31:11Z)
- QROSS: QUBO Relaxation Parameter Optimisation via Learning Solver Surrogates [14.905085636501438]
We build surrogate models of QUBO solvers via learning from solver data on a collection of instances of a problem.
In this way, we are able to capture the common structure of the instances and their interactions with the solver, and produce good choices of penalty parameters.
QROSS is shown to generalise well to out-of-distribution datasets and different types of QUBO solvers.
arXiv Detail & Related papers (2021-03-19T09:06:12Z)
- Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide-and-conquer algorithm to tackle optimization problems without this restriction and predict their coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z)
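As a toy illustration of the predict+optimize loss described above (hypothetical numbers, not from the paper), consider a 0-1 knapsack where the predicted item values induce a decision whose quality is then judged under the true values:

```python
# Toy predict+optimize loss: regret of the decision induced by predictions.
import itertools

values_true = [10.0, 4.0, 3.0]  # true item values (hypothetical)
weights     = [4, 3, 2]
capacity    = 5

def best_subset(values):
    """Brute-force 0-1 knapsack under the given (true or predicted) values."""
    feasible = [s for s in itertools.product([0, 1], repeat=len(values))
                if sum(w * x for w, x in zip(weights, s)) <= capacity]
    return max(feasible, key=lambda s: sum(v * x for v, x in zip(values, s)))

def true_value(s):
    return sum(v * x for v, x in zip(values_true, s))

def decision_regret(values_pred):
    """True value lost by optimizing with predicted instead of true values;
    as a function of the predictions, this loss is piecewise."""
    return true_value(best_subset(values_true)) - true_value(best_subset(values_pred))

print(decision_regret([10.0, 4.0, 3.0]))  # 0.0: predictions preserve the decision
print(decision_regret([3.0, 4.0, 10.0]))  # 3.0: mispredictions flip the decision
```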
- Bayesian preference elicitation for multiobjective combinatorial optimization [12.96855751244076]
We introduce a new incremental preference elicitation procedure able to deal with noisy responses of a Decision Maker (DM).
We assume that the preferences of the DM are represented by an aggregation function whose parameters are unknown and that the uncertainty about them is represented by a density function on the parameter space.
arXiv Detail & Related papers (2020-07-29T12:28:37Z)
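A minimal sketch of the Bayesian elicitation idea above (assuming a weighted-sum aggregator and a logistic noise model; not the paper's exact procedure) keeps a sample-based density over the weight simplex and reweights it after each noisy DM answer:

```python
# Sample-based density over aggregation parameters, updated by noisy answers.
import numpy as np

rng = np.random.default_rng(2)
particles = rng.dirichlet(np.ones(3), size=5000)  # density over the weight simplex
log_post = np.zeros(len(particles))               # log-posterior weight per sample

def update(x, y, dm_prefers_x, temperature=5.0):
    """Noisy-response update: the DM prefers x over y with probability
    sigmoid(temperature * (w.x - w.y)) under parameter w."""
    global log_post
    margin = particles @ (np.asarray(x) - np.asarray(y))
    p_x = 1.0 / (1.0 + np.exp(-temperature * margin))
    log_post += np.log(p_x if dm_prefers_x else 1.0 - p_x)

update((10.0, 2.0, 1.0), (4.0, 8.0, 3.0), dm_prefers_x=True)
posterior = np.exp(log_post - log_post.max())
w_est = np.average(particles, axis=0, weights=posterior)
print(w_est)  # posterior-mean estimate of the DM's weights
```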
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.