Extract-QD Framework: A Generic Approach for Quality-Diversity in Noisy, Stochastic or Uncertain Domains
- URL: http://arxiv.org/abs/2502.06585v1
- Date: Mon, 10 Feb 2025 15:52:58 GMT
- Title: Extract-QD Framework: A Generic Approach for Quality-Diversity in Noisy, Stochastic or Uncertain Domains
- Authors: Manon Flageat, Johann Huber, François Helenon, Stephane Doncieux, Antoine Cully
- Abstract summary: We present two contributions: first, the Extract-QD Framework (EQD Framework), and second, Extract-ME (EME), a new method derived from it.
The EQD Framework unifies existing approaches within a modular view, and facilitates developing novel methods by interchanging modules.
For any new uncertain task, our contributions now provide EME as a reliable "first guess" method, and the EQD Framework as a tool for developing task-specific approaches.
- Score: 4.084778152234208
- License:
- Abstract: Quality-Diversity (QD) has demonstrated potential in discovering collections of diverse solutions to optimisation problems. Originally designed for deterministic environments, QD has been extended to noisy, stochastic, or uncertain domains through various Uncertain-QD (UQD) methods. However, the large number of UQD methods, each with unique constraints, makes selecting the most suitable one challenging. To remedy this situation, we present two contributions: first, the Extract-QD Framework (EQD Framework), and second, Extract-ME (EME), a new method derived from it. The EQD Framework unifies existing approaches within a modular view, and facilitates developing novel methods by interchanging modules. We use it to derive EME, a novel method that consistently outperforms or matches the best existing methods on standard benchmarks, while previous methods show varying performance. In a second experiment, we show how our EQD Framework can be used to augment existing QD algorithms and in particular the well-established Policy-Gradient-Assisted-MAP-Elites method, and demonstrate improved performance in uncertain domains at no additional evaluation cost. For any new uncertain task, our contributions now provide EME as a reliable "first guess" method, and the EQD Framework as a tool for developing task-specific approaches. Together, these contributions aim to lower the cost of adopting UQD insights in QD applications.
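The abstract presents the EQD Framework as a modular view of Uncertain-QD, where methods differ mainly in how a solution's fitness and descriptor are extracted from noisy evaluations before archiving. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: a toy MAP-Elites loop where the extraction module can be swapped (single noisy sample vs. averaged re-evaluations) without changing the rest of the algorithm. All function and parameter names are assumptions.

```python
# Hypothetical sketch of a modular Uncertain-QD loop (not the paper's code).
# The "extract" module estimates fitness and descriptor from noisy evaluations;
# swapping it changes how uncertainty is handled, nothing else.
import numpy as np

rng = np.random.default_rng(0)

def noisy_eval(x):
    """Toy noisy task: fitness is -||x||^2 plus noise, descriptor is x clipped to [0, 1]."""
    noise = rng.normal(scale=0.05, size=3)
    fitness = -float(np.sum(x ** 2)) + noise[0]
    descriptor = np.clip(x + noise[1:], 0.0, 1.0)
    return fitness, descriptor

def extract_single(x, _samples):
    """Naive extraction: trust a single noisy evaluation (vanilla MAP-Elites behaviour)."""
    return noisy_eval(x)

def extract_mean(x, samples):
    """Resampling extraction: average several re-evaluations before archiving."""
    fits, descs = zip(*(noisy_eval(x) for _ in range(samples)))
    return float(np.mean(fits)), np.mean(descs, axis=0)

def map_elites(extract, iterations=2000, resolution=10, samples=5):
    archive = {}  # descriptor cell -> (fitness, solution)
    for _ in range(iterations):
        if archive:
            keys = list(archive)
            parent = archive[keys[rng.integers(len(keys))]][1]
        else:
            parent = rng.uniform(0.0, 1.0, size=2)
        child = parent + rng.normal(scale=0.1, size=2)
        fitness, descriptor = extract(child, samples)
        cell = tuple((descriptor * (resolution - 1)).astype(int))
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, child)
    return archive

for name, extractor in [("single-sample", extract_single), ("mean-of-5", extract_mean)]:
    result = map_elites(extractor)
    print(name, "| cells:", len(result),
          "| mean archived fitness:", round(float(np.mean([f for f, _ in result.values()])), 3))
```

Under this modular view, deriving a new UQD method amounts to plugging a different extraction strategy into the same loop, which is the kind of interchange the EQD Framework is described as enabling.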
Related papers
- Token-Level Density-Based Uncertainty Quantification Methods for Eliciting Truthfulness of Large Language Models [76.17975723711886]
Uncertainty quantification (UQ) is a prominent approach for eliciting truthful answers from large language models (LLMs).
In this work, we adapt Mahalanobis Distance (MD) - a well-established UQ technique in classification tasks - for text generation.
Our method extracts token embeddings from multiple layers of LLMs, computes MD scores for each token, and uses linear regression trained on these features to provide robust uncertainty scores.
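The summary above outlines a concrete pipeline: per-token embeddings from several layers, a Mahalanobis distance to a reference distribution per layer, and a linear regression over those distances. Below is a rough illustration of that idea on synthetic data; it is not the paper's code, and the feature construction, regression target, and use of scikit-learn are assumptions.

```python
# Hypothetical illustration of token-level Mahalanobis-distance UQ on synthetic
# data (not the paper's implementation; feature construction and target are assumed).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_layers, dim, n_train_tokens, n_gen_tokens = 4, 16, 500, 200

# Fit a reference Gaussian (mean and covariance) per layer from "training" embeddings.
train = rng.normal(size=(n_layers, n_train_tokens, dim))
means = train.mean(axis=1)
inv_covs = [np.linalg.inv(np.cov(train[layer].T) + 1e-3 * np.eye(dim))
            for layer in range(n_layers)]

def md_features(token_embeddings):
    """Mahalanobis distance of one token to each layer's reference distribution."""
    feats = []
    for layer in range(n_layers):
        diff = token_embeddings[layer] - means[layer]
        feats.append(float(np.sqrt(diff @ inv_covs[layer] @ diff)))
    return feats

# Synthetic embeddings for generated tokens, plus a synthetic "error" signal to regress onto.
tokens = rng.normal(size=(n_gen_tokens, n_layers, dim))
X = np.array([md_features(tok) for tok in tokens])
y = X @ np.array([0.5, 0.3, 0.1, 0.1]) + 0.05 * rng.normal(size=n_gen_tokens)

# Linear regression over the layer-wise MD features yields a per-token uncertainty score.
regressor = LinearRegression().fit(X, y)
print("per-token uncertainty scores:", regressor.predict(X[:5]).round(3))
```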
arXiv Detail & Related papers (2025-02-20T10:25:13Z)
- Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph [83.90988015005934]
Uncertainty quantification is a key element of machine learning applications.
We introduce a novel benchmark that implements a collection of state-of-the-art UQ baselines.
We conduct a large-scale empirical investigation of UQ and normalization techniques across eleven tasks, identifying the most effective approaches.
arXiv Detail & Related papers (2024-06-21T20:06:31Z)
- Large Language Models as In-context AI Generators for Quality-Diversity [8.585387103144825]
In-context QD aims to generate interesting solutions using few-shot and many-shot prompting with quality-diverse examples from the QD archive as context.
In-context QD displays promising results compared to both QD baselines and similar strategies developed for single-objective optimization.
arXiv Detail & Related papers (2024-04-24T10:35:36Z)
- Benchmark tasks for Quality-Diversity applied to Uncertain domains [1.5469452301122175]
We introduce a set of 8 easy-to-implement and lightweight tasks, split into 3 main categories.
We identify the key uncertainty properties to easily define UQD benchmark tasks.
All our tasks build on the Redundant Arm: a common QD environment that is lightweight and easily replicable.
arXiv Detail & Related papers (2023-04-24T21:23:26Z)
- Don't Bet on Luck Alone: Enhancing Behavioral Reproducibility of Quality-Diversity Solutions in Uncertain Domains [2.639902239625779]
We introduce the Archive Reproducibility Improvement Algorithm (ARIA).
ARIA is a plug-and-play approach that improves the quality of solutions present in an archive.
We show that our algorithm enhances the quality and descriptor space coverage of any given archive by at least 50%.
arXiv Detail & Related papers (2023-04-07T14:45:14Z)
- Benchmarking the Reliability of Post-training Quantization: a Particular Focus on Worst-case Performance [53.45700148820669]
Post-training quantization (PTQ) is a popular method for compressing deep neural networks (DNNs) without modifying their original architecture or training procedures.
Despite its effectiveness and convenience, the reliability of PTQ methods in the presence of extreme cases such as distribution shift and data noise remains largely unexplored.
This paper first investigates this problem on various commonly-used PTQ methods.
arXiv Detail & Related papers (2023-03-23T02:55:50Z)
- Enhancing MAP-Elites with Multiple Parallel Evolution Strategies [8.585387103144825]
We propose MEMES, a novel Quality-Diversity (QD) algorithm based on Evolution Strategies (ES).
MEMES maintains multiple (up to 100) simultaneous ES processes, each with its own independent objective and reset mechanism designed for QD optimisation.
We show that MEMES outperforms both gradient-based and mutation-based QD algorithms on black-box optimisation and QD-Reinforcement-Learning tasks.
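The blurb above describes MEMES as many independent ES processes with their own objectives and reset rules, all feeding one archive. The following is a loose, hypothetical sketch of that structure on a toy problem; the (1+λ)-style update, the novelty objective, and the reset criterion are assumptions, not the authors' design.

```python
# Loose, hypothetical sketch of many independent ES emitters feeding one shared
# MAP-Elites archive (illustrative only; not the MEMES implementation).
import numpy as np

rng = np.random.default_rng(0)

def evaluate(x):
    """Toy task: a fitness to maximise and a 2-D behaviour descriptor in [0, 1]."""
    return -float(np.sum((x - 0.5) ** 2)), np.clip(x, 0.0, 1.0)

def novelty(descriptor, archive):
    """Distance to the nearest filled archive cell, used as an exploration objective."""
    if not archive:
        return 1.0
    cells = np.array(list(archive)) / 10.0
    return float(np.min(np.linalg.norm(cells - descriptor, axis=1)))

archive = {}  # descriptor cell -> best fitness seen
emitters = [{"mean": rng.uniform(0.0, 1.0, size=2), "explore": i % 2 == 0, "stale": 0}
            for i in range(8)]  # each emitter is an independent ES process

for _ in range(300):
    for emitter in emitters:
        batch = emitter["mean"] + 0.1 * rng.normal(size=(16, 2))
        scores, improved = [], False
        for x in batch:
            fitness, descriptor = evaluate(x)
            # Each emitter follows its own objective: novelty (explore) or fitness (exploit).
            scores.append(novelty(descriptor, archive) if emitter["explore"] else fitness)
            cell = tuple((descriptor * 10).astype(int))
            if cell not in archive or fitness > archive[cell]:
                archive[cell] = fitness
                improved = True
        # Independent ES-style update: move this emitter's mean toward its best offspring.
        emitter["mean"] = batch[int(np.argmax(scores))]
        # Reset mechanism: restart an emitter that has stopped adding to the archive.
        emitter["stale"] = 0 if improved else emitter["stale"] + 1
        if emitter["stale"] > 10:
            emitter["mean"], emitter["stale"] = rng.uniform(0.0, 1.0, size=2), 0

print("archive cells:", len(archive), "| best fitness:", round(max(archive.values()), 3))
```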
arXiv Detail & Related papers (2023-03-10T18:55:02Z)
- Uncertain Quality-Diversity: Evaluation methodology and new methods for Quality-Diversity in Uncertain Domains [1.827510863075184]
Quality-Diversity optimisation (QD) has proven to yield promising results across a broad set of applications.
However, QD approaches struggle in the presence of uncertainty in the environment, as it impacts their ability to quantify the true performance and novelty of solutions.
First, we formalise a common framework for uncertain domains: the Uncertain QD setting.
Second, we propose a new methodology to evaluate Uncertain QD approaches, relying on a new per-generation sampling budget and a set of existing and new metrics specifically designed for Uncertain QD.
Third, we propose three new Uncertain QD algorithms: Archive-sampling, Parallel-Adaptive
arXiv Detail & Related papers (2023-02-01T14:12:04Z)
- Plug-and-Play Few-shot Object Detection with Meta Strategy and Explicit Localization Inference [78.41932738265345]
This paper proposes a plug detector that can accurately detect objects of novel categories without a fine-tuning process.
We introduce two explicit inferences into the localization process to reduce its dependence on annotated data.
It shows a significant lead in efficiency, precision, and recall under varied evaluation protocols.
arXiv Detail & Related papers (2021-10-26T03:09:57Z)
- Few-shot Quality-Diversity Optimization [50.337225556491774]
Quality-Diversity (QD) optimization has been shown to be an effective tool for dealing with deceptive minima and sparse rewards in Reinforcement Learning.
We show that, given examples from a task distribution, information about the paths taken by optimization in parameter space can be leveraged to build a prior population, which when used to initialize QD methods in unseen environments, allows for few-shot adaptation.
Experiments carried out in both sparse and dense reward settings using robotic manipulation and navigation benchmarks show that it considerably reduces the number of generations required for QD optimization in these environments.
arXiv Detail & Related papers (2021-09-14T17:12:20Z)
- SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning [102.78958681141577]
We present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy deep reinforcement learning algorithms.
SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration.
arXiv Detail & Related papers (2020-07-09T17:08:44Z)
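The SUNRISE summary names two concrete mechanisms: Bellman targets re-weighted by ensemble disagreement, and UCB-style action selection over a Q-ensemble. The toy numpy sketch below illustrates those two pieces in isolation; it is a hypothetical rendering, not the authors' code, and the exact weighting function is an assumption.

```python
# Toy illustration of the two SUNRISE ingredients described above
# (hypothetical; not the original implementation).
import numpy as np

rng = np.random.default_rng(0)
n_ensemble, n_actions = 5, 4

# Pretend Q-ensemble: each member gives Q-values for the next state's actions.
next_q = rng.normal(loc=1.0, scale=0.5, size=(n_ensemble, n_actions))

# (a) Weighted Bellman backup: down-weight targets where the ensemble disagrees.
reward, gamma, temperature = 0.3, 0.99, 10.0
target_q = reward + gamma * next_q.max(axis=1)    # one bootstrapped target per member
std = float(next_q.max(axis=1).std())             # ensemble disagreement on the target
weight = 1.0 / (1.0 + np.exp(temperature * std))  # assumed sigmoid-style down-weighting
weighted_targets = weight * target_q
print("uncertainty weight:", round(weight, 3), "| weighted targets:", weighted_targets.round(3))

# (b) UCB exploration: act greedily w.r.t. ensemble mean plus a bonus times ensemble std.
current_q = rng.normal(size=(n_ensemble, n_actions))
bonus = 1.0
ucb = current_q.mean(axis=0) + bonus * current_q.std(axis=0)
print("UCB-selected action:", int(np.argmax(ucb)))
```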