An exploration for higher efficiency in multi objective optimisation with reinforcement learning
- URL: http://arxiv.org/abs/2512.10208v1
- Date: Thu, 11 Dec 2025 01:58:04 GMT
- Title: An exploration for higher efficiency in multi objective optimisation with reinforcement learning
- Authors: Mehmet Emin Aydin
- Abstract summary: Efficiency in optimisation and search processes remains one of the challenges. Utilising a pool of operators instead of a single operator to handle move operations within a neighbourhood remains promising. One of the promising ideas is to generalise experiences and seek how to utilise them.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficiency in optimisation and search processes remains one of the challenges affecting the performance and use of optimisation algorithms. Utilising a pool of operators instead of a single operator to handle move operations within a neighbourhood remains promising, but an optimum or near-optimum sequence of operators necessitates further investigation. One of the promising ideas is to generalise experiences and seek how to utilise them. Although numerous works have addressed this issue for single-objective optimisation, multi-objective cases have received little attention in this regard. A generalised approach based on multi-objective reinforcement learning seems to offer a remedy for this issue and good solutions. This paper overviews a proposed generalisation approach, with certain stages completed and others outstanding, aimed at demonstrating the efficiency of using multi-objective reinforcement learning.
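The operator-pool idea from the abstract can be sketched with a toy example: a stateless Q-learning agent (effectively a bandit over operators) learns which of several move operators yields the best scalarised improvement on a small bi-objective problem. Everything here — the objectives, the operators, and the hyperparameters — is an illustrative assumption, not the paper's implementation.

```python
import random

# Toy bi-objective problem: minimise both f1 and f2 over a real vector.
def f1(x):
    return sum(v * v for v in x)

def f2(x):
    return sum((v - 2.0) ** 2 for v in x)

def scalarise(x, w=0.5):
    # Weighted-sum scalarisation of the two objectives.
    return w * f1(x) + (1.0 - w) * f2(x)

# A pool of move operators instead of a single neighbourhood move.
def small_step(x, rng):
    y = list(x)
    i = rng.randrange(len(y))
    y[i] += rng.uniform(-0.1, 0.1)
    return y

def large_step(x, rng):
    y = list(x)
    i = rng.randrange(len(y))
    y[i] += rng.uniform(-1.0, 1.0)
    return y

def pull_to_mid(x, rng):
    # Move one coordinate halfway towards 1.0, the scalarised optimum here.
    y = list(x)
    i = rng.randrange(len(y))
    y[i] = 0.5 * (y[i] + 1.0)
    return y

OPERATORS = [small_step, large_step, pull_to_mid]

def rl_search(steps=2000, eps=0.2, alpha=0.1, seed=0):
    """Stateless Q-learning over the operator pool: the agent learns which
    operator gives the best scalarised improvement on average."""
    rng = random.Random(seed)
    x = [rng.uniform(-3.0, 3.0) for _ in range(4)]
    q = [0.0] * len(OPERATORS)
    best = scalarise(x)
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(OPERATORS))                   # explore
        else:
            a = max(range(len(OPERATORS)), key=q.__getitem__)   # exploit
        y = OPERATORS[a](x, rng)
        reward = scalarise(x) - scalarise(y)   # improvement as reward
        q[a] += alpha * (reward - q[a])        # incremental Q update
        if reward > 0:                         # hill-climb acceptance
            x = y
            best = min(best, scalarise(x))
    return best, q

best, q = rl_search()
print(best)  # approaches 4.0, the scalarised optimum for w = 0.5
```

The learned Q-values give a crude ranking of operators by average payoff, which is one simple way to realise the "generalise experiences about operators" idea the abstract describes.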
Related papers
- Aligned Multi Objective Optimization [15.404668020811513]
In machine learning practice, there are many scenarios where the objectives do not conflict. Recent findings from multi-task learning, reinforcement learning, and LLM training show that diverse related tasks can enhance performance across objectives simultaneously. We introduce the Aligned Multi-Objective Optimization framework, propose new algorithms for this setting, and provide theoretical guarantees of their superior performance.
arXiv Detail & Related papers (2025-02-19T20:50:03Z) - Constrained Multi-objective Bayesian Optimization through Optimistic Constraints Estimation [10.77641869521259]
We propose a novel constrained multi-objective Bayesian optimization algorithm, COMBOO, that balances active learning of the level-set defined on multiple unknowns with multi-objective optimization within the feasible region. We provide both theoretical analysis and empirical evidence, demonstrating the efficacy of our approach on various synthetic benchmarks and real-world applications.
arXiv Detail & Related papers (2024-11-06T03:38:00Z) - Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
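The contrast between the two approaches can be sketched on a toy problem: instead of first fitting the unknown parameter c and then solving, a joint model learns the decision rule directly by minimising the realised objective. The problem, the linear decision rule, and all constants are assumptions for illustration only.

```python
import random

# Toy decision problem: after choosing z, the realised cost is (z - c)^2,
# where the unknown parameter depends on the feature as c = 2*x (plus noise).
rng = random.Random(1)
data = [(k / 10.0, 2.0 * (k / 10.0) + rng.gauss(0.0, 0.1)) for k in range(1, 21)]

# Joint model: learn the decision rule z(x) = theta * x directly by
# minimising the realised cost, skipping the "predict c, then solve" stage.
theta = 0.0
lr = 0.05
for _ in range(500):
    for x, c in data:
        z = theta * x
        grad = 2.0 * (z - c) * x   # d/dtheta of (theta*x - c)^2
        theta -= lr * grad

print(theta)  # close to 2.0: the learned rule recovers the optimal decision
```

In this quadratic toy the two-stage and joint solutions coincide; the joint formulation matters when the prediction loss and the decision loss disagree.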
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - Deep Pareto Reinforcement Learning for Multi-Objective Recommender Systems [54.2484458418885]
Optimizing multiple objectives simultaneously is an important task for recommendation platforms. Existing multi-objective recommender systems do not systematically consider such dynamic relationships.
arXiv Detail & Related papers (2024-07-04T02:19:49Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
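The equivalence claimed above can be checked on a one-dimensional fixed-point iteration, where differentiating through the unrolled solver and solving the corresponding linear system give the same derivative. The iteration map and its constants are illustrative assumptions, not the paper's construction.

```python
# Scalar fixed-point iteration x_{k+1} = g(x_k, theta) with g = 0.5*x + theta.
# Its fixed point is x* = 2*theta, so the true derivative dx*/dtheta is 2.

def unrolled_derivative(theta, steps=50):
    """Differentiate through the unrolled solver: propagate the sensitivity
    s_k = dx_k/dtheta alongside the iterates (what autodiff would compute)."""
    x, s = 0.0, 0.0
    for _ in range(steps):
        x = 0.5 * x + theta      # forward iterate
        s = 0.5 * s + 1.0        # chain rule: (dg/dx) * s + dg/dtheta
    return s

def implicit_derivative():
    """Solve the linear system (1 - dg/dx) * s = dg/dtheta directly --
    the closed-form view of the same backward pass."""
    return 1.0 / (1.0 - 0.5)

print(unrolled_derivative(0.7), implicit_derivative())  # both 2.0 up to rounding
```

The unrolled sensitivity is exactly the geometric-series truncation of the linear-system solution, which is why the backward pass converges at the same rate as the forward iteration.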
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - Multi-objective optimisation via the R2 utilities [4.12484724941528]
We show how to recast the multi-objective optimisation problem into a single-objective optimisation problem defined over sets. An appropriate class of objective functions for this new problem are the R2 utilities, which are utility functions defined as a weighted integral over the scalarised optimisation problems. We then analyse the performance of these greedy algorithms both theoretically and empirically.
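An R2-style utility can be sketched for finite sets of objective vectors: average, over a grid of weights, the best weighted Chebyshev scalarisation achieved by any point in the set. The reference point, weight grid, and maximisation convention below are illustrative assumptions, not the paper's exact definition.

```python
# R2-style utility of a finite set of objective vectors (maximisation):
# average over a weight grid of the best weighted Chebyshev scalarisation
# attained by any point in the set.

def chebyshev(y, w, ref=(0.0, 0.0)):
    # Weighted Chebyshev scalarisation with respect to a reference point.
    return min(w[i] * (y[i] - ref[i]) for i in range(len(y)))

def r2_utility(points, n_weights=11):
    total = 0.0
    for k in range(n_weights):
        t = k / (n_weights - 1)
        w = (t, 1.0 - t)                          # weights on the simplex
        total += max(chebyshev(p, w) for p in points)
    return total / n_weights

pareto_set = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
dominated = [(1.0, 1.0), (0.5, 0.5)]
print(r2_utility(pareto_set) > r2_utility(dominated))  # True: wider Pareto coverage scores higher
```

Because the utility is a set function, greedily adding the point with the largest utility gain turns multi-objective optimisation into a sequence of single-objective subproblems, which is the recasting the abstract describes.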
arXiv Detail & Related papers (2023-05-19T16:01:35Z) - Characterization of Constrained Continuous Multiobjective Optimization
Problems: A Performance Space Perspective [0.0]
Constrained multiobjective optimization problems (CMOPs) remain insufficiently understood.
The choice of adequate CMOPs for benchmarking is difficult and lacks a formal background.
This paper presents a novel performance assessment approach designed explicitly for constrained multiobjective optimization.
arXiv Detail & Related papers (2023-02-04T14:12:30Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Are we Forgetting about Compositional Optimisers in Bayesian
Optimisation? [66.39551991177542]
This paper presents a methodology for global optimisation with Bayesian optimisation.
Within this, a crucial performance-determining subroutine is the maximisation of the acquisition function.
We highlight the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments.
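The role of the acquisition-maximisation subroutine can be illustrated with a toy multimodal stand-in for an acquisition function, maximised by multi-start stochastic hill climbing. The function, step sizes, and restart counts below are assumptions for illustration, not the paper's compositional optimisers.

```python
import math
import random

# Toy multimodal stand-in for an acquisition function on [0, 1];
# real EI/UCB surfaces are similarly non-convex.
def acq(x):
    return x * math.sin(10.0 * x) + 1.0

def local_climb(x, rng, iters=200, step=0.02):
    # Stochastic hill climbing from a single start point, clipped to [0, 1].
    for _ in range(iters):
        y = min(1.0, max(0.0, x + rng.uniform(-step, step)))
        if acq(y) > acq(x):
            x = y
    return x

def maximise(n_starts, seed=0):
    # Multi-start maximisation: restarts guard against local maxima.
    rng = random.Random(seed)
    starts = [rng.random() for _ in range(n_starts)]
    return max((local_climb(s, rng) for s in starts), key=acq)

x1 = maximise(n_starts=1)
x20 = maximise(n_starts=20)
print(acq(x1), acq(x20))  # more restarts typically find the higher peak
```

The point the paper's experiments make is that this inner maximiser, often treated as a detail, can dominate end-to-end Bayesian optimisation performance.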
arXiv Detail & Related papers (2020-12-15T12:18:38Z) - Empirical Study on the Benefits of Multiobjectivization for Solving
Single-Objective Problems [0.0]
Local optima often prevent algorithms from making progress and thus pose a severe threat.
With the use of a sophisticated visualization technique based on the multi-objective gradients, the properties of the arising multi-objective landscapes are illustrated and examined.
We will empirically show that the multi-objective algorithm MOGSA is able to exploit these properties to overcome local traps.
arXiv Detail & Related papers (2020-06-25T14:04:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.