Quality-Diversity Optimization as Multi-Objective Optimization
- URL: http://arxiv.org/abs/2602.00478v1
- Date: Sat, 31 Jan 2026 03:01:13 GMT
- Title: Quality-Diversity Optimization as Multi-Objective Optimization
- Authors: Xi Lin, Ping Guo, Yilu Liu, Qingfu Zhang, Jianyong Sun
- Abstract summary: Quality-Diversity (QD) optimization aims to discover a collection of high-performing solutions that simultaneously exhibit diverse behaviors. This work introduces a novel reformulation by casting QD optimization as a multi-objective optimization problem.
- Score: 20.499045742095582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quality-Diversity (QD) optimization aims to discover a collection of high-performing solutions that simultaneously exhibit diverse behaviors within a user-defined behavior space. This paradigm has stimulated significant research interest and demonstrated practical utility in domains including robot control, creative design, and adversarial sample generation. A variety of QD algorithms with distinct design principles have been proposed in recent years. Instead of proposing a new QD algorithm, this work introduces a novel reformulation by casting QD optimization as a multi-objective optimization (MOO) problem with a huge number of optimization objectives. By establishing this connection, we enable the direct adoption of well-established MOO methods, particularly set-based scalarization techniques, to solve QD problems through a collaborative search process. We further provide a theoretical analysis demonstrating that our approach inherits theoretical guarantees from MOO while providing desirable properties for QD optimization. Experimental studies across several QD applications confirm that our method achieves performance competitive with state-of-the-art QD algorithms.
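The core idea of the reformulation can be sketched with a hypothetical toy example (this is an illustration of set-based scalarization in general, not the authors' implementation): each cell of a discretized behavior space is treated as one optimization objective, a set-level score credits each objective with the best value any member of the solution set attains, and the whole set is searched collaboratively against that single scalar score. The fitness, behavior, and per-cell objective below are invented for illustration.

```python
import numpy as np

# Hypothetical toy problem (not from the paper): maximize
# f(x) = -||x||^2 while covering a 1-D behavior space b(x) = x[0].

def fitness(x):
    return -float(np.sum(x ** 2))

def behavior(x):
    return float(x[0])

# Discretize the behavior space into cells; each cell center acts as
# one objective, so the QD problem becomes a many-objective problem.
CELLS = np.linspace(-2.0, 2.0, 11)

def cell_objective(x, c, sigma=0.2):
    # Objective for cell c: solution quality penalized by how far x's
    # behavior lies from the cell (one possible per-cell formulation).
    return fitness(x) - (behavior(x) - c) ** 2 / sigma

def set_scalarization(population):
    # Set-based scalarization: each objective is credited with the best
    # value any set member attains; the set score sums over objectives.
    return sum(max(cell_objective(x, c) for x in population) for c in CELLS)

# Collaborative search: mutate one member at a time and accept the
# change only if it does not worsen the score of the whole set.
rng = np.random.default_rng(0)
population = [rng.normal(size=2) for _ in range(11)]
initial_score = set_scalarization(population)
for _ in range(500):
    i = int(rng.integers(len(population)))
    trial = list(population)
    trial[i] = population[i] + 0.1 * rng.normal(size=2)
    if set_scalarization(trial) >= set_scalarization(population):
        population = trial
final_score = set_scalarization(population)
```

Because acceptance is judged on the set-level score rather than on individual fitness, a mutation that sacrifices one member's quality can still be accepted if it improves coverage of an otherwise empty cell, which is the behavior QD methods are designed to produce.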
Related papers
- Multi-Objective Covariance Matrix Adaptation MAP-Annealing [7.103319934188755]
Quality-Diversity (QD) optimization is an emerging field that focuses on finding a set of behaviorally diverse and high-quality solutions. Recent work on Multi-Objective Quality-Diversity (MOQD) extends QD optimization to simultaneously optimize multiple objective functions. This opens up multi-objective applications for QD, such as generating a diverse set of game maps that maximize difficulty, realism, or other properties.
arXiv Detail & Related papers (2025-05-27T04:39:28Z) - Preference Optimization for Combinatorial Optimization Problems [54.87466279363487]
Reinforcement Learning (RL) has emerged as a powerful tool for neural optimization, enabling models to learn to solve complex problems without requiring expert knowledge. Despite significant progress, existing RL approaches face challenges such as diminishing reward signals and inefficient exploration in vast action spaces. We propose Preference Optimization, a novel method that transforms quantitative reward signals into qualitative preference signals via statistical comparison modeling.
arXiv Detail & Related papers (2025-05-13T16:47:00Z) - Quality Diversity for Variational Quantum Circuit Optimization [4.385485960663339]
Quality diversity (QD) search methods combine diversity-driven optimization with user-specified features that offer insight into the optimization quality of circuit solution candidates. We introduce a matrix-based circuit engineering that can be readily optimized with QD-CMA methods and evaluate circuit quality properties like expressivity and gate-diversity as quality measures.
arXiv Detail & Related papers (2025-04-11T11:44:08Z) - Preference-Guided Diffusion for Multi-Objective Offline Optimization [64.08326521234228]
We propose a preference-guided diffusion model for offline multi-objective optimization. Our guidance is a preference model trained to predict the probability that one design dominates another. Our results highlight the effectiveness of classifier-guided diffusion models in generating diverse and high-quality solutions.
arXiv Detail & Related papers (2025-03-21T16:49:38Z) - A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation. However, deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency. This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z) - Large Language Models as In-context AI Generators for Quality-Diversity [8.585387103144825]
In-context QD aims to generate interesting solutions using few-shot and many-shot prompting with quality-diverse examples from the QD archive as context.
In-context QD displays promising results compared to both QD baselines and similar strategies developed for single-objective optimization.
arXiv Detail & Related papers (2024-04-24T10:35:36Z) - End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Multiobjective Optimization Analysis for Finding Infrastructure-as-Code Deployment Configurations [0.3774866290142281]
This paper focuses on a multiobjective problem related to Infrastructure-as-Code deployment configurations.
In this paper, we apply nine different evolutionary-based multiobjective algorithms.
Results obtained by each method after 10 independent runs have been compared using Friedman's non-parametric tests.
arXiv Detail & Related papers (2024-01-18T13:55:32Z) - A survey on multi-objective hyperparameter optimization algorithms for Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
arXiv Detail & Related papers (2021-11-23T10:22:30Z) - Few-shot Quality-Diversity Optimization [50.337225556491774]
Quality-Diversity (QD) optimization has been shown to be an effective tool for dealing with deceptive minima and sparse rewards in Reinforcement Learning.
We show that, given examples from a task distribution, information about the paths taken by optimization in parameter space can be leveraged to build a prior population which, when used to initialize QD methods in unseen environments, allows for few-shot adaptation.
Experiments carried out in both sparse and dense reward settings using robotic manipulation and navigation benchmarks show that this approach considerably reduces the number of generations required for QD optimization in these environments.
arXiv Detail & Related papers (2021-09-14T17:12:20Z) - Multi-Objective Optimization of the Textile Manufacturing Process Using Deep-Q-Network Based Multi-Agent Reinforcement Learning [5.900286890213338]
The paper proposes a multi-agent reinforcement learning (MARL) framework to transform the optimization process into a game.
A utilitarian selection mechanism was employed in the game to avoid the interruption of multiple equilibria.
The proposed MARL system can achieve optimal solutions for the textile ozonation process and performs better than the traditional approaches.
arXiv Detail & Related papers (2020-12-02T11:37:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.