Objectives Are All You Need: Solving Deceptive Problems Without Explicit
Diversity Maintenance
- URL: http://arxiv.org/abs/2311.02283v1
- Date: Sat, 4 Nov 2023 00:09:48 GMT
- Title: Objectives Are All You Need: Solving Deceptive Problems Without Explicit
Diversity Maintenance
- Authors: Ryan Boldi, Li Ding, Lee Spector
- Abstract summary: We present an approach with promise to solve deceptive domains without explicit diversity maintenance.
We use lexicase selection to optimize for these objectives as it has been shown to implicitly maintain population diversity.
We find that decomposing objectives into many objectives and optimizing them outperforms MAP-Elites on the deceptive domains that we explore.
- Score: 7.3153233408665495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Navigating deceptive domains has often been a challenge in machine learning
due to search algorithms getting stuck at sub-optimal local optima. Many
algorithms have been proposed to navigate these domains by explicitly
maintaining diversity or equivalently promoting exploration, such as Novelty
Search or other so-called Quality Diversity algorithms. In this paper, we
present an approach with promise to solve deceptive domains without explicit
diversity maintenance by optimizing a potentially large set of defined
objectives. These objectives can be extracted directly from the environment by
sub-aggregating the raw performance of individuals in a variety of ways. We use
lexicase selection to optimize for these objectives as it has been shown to
implicitly maintain population diversity. We compare this technique with a
varying number of objectives to a commonly used quality diversity algorithm,
MAP-Elites, on a set of discrete optimization as well as reinforcement learning
domains with varying degrees of deception. We find that decomposing objectives
into many objectives and optimizing them outperforms MAP-Elites on the
deceptive domains that we explore. Furthermore, we find that this technique
results in competitive performance on the diversity-focused metrics of QD-Score
and Coverage, without explicitly optimizing for these things. Our ablation
study shows that this technique is robust to different subaggregation
techniques. However, when it comes to non-deceptive, or "illumination"
domains, quality diversity techniques generally outperform our objective-based
framework with respect to exploration (but not exploitation), hinting at
potential directions for future work.
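The selection mechanism at the heart of the abstract, lexicase selection over sub-aggregated objectives, can be sketched as follows. This is a minimal illustration under stated assumptions (hypothetical names, higher-is-better scores), not the authors' implementation: to pick each parent, the objective order is shuffled, and the candidate pool is repeatedly filtered down to the individuals that are best on the current objective.

```python
import random

def lexicase_select(population, objective_scores, rng=random):
    """Pick one parent by lexicase selection.

    population: list of individuals (hashable here for simplicity).
    objective_scores: dict mapping individual -> list of per-objective
        scores, one per sub-aggregated objective (higher is better is
        an assumption of this sketch).
    """
    candidates = list(population)
    # Shuffle the objective ordering anew for every selection event.
    objectives = list(range(len(objective_scores[candidates[0]])))
    rng.shuffle(objectives)
    for obj in objectives:
        best = max(objective_scores[c][obj] for c in candidates)
        # Keep only the candidates that are elite on this objective.
        candidates = [c for c in candidates if objective_scores[c][obj] == best]
        if len(candidates) == 1:
            break
    # Any ties remaining after all objectives are broken at random.
    return rng.choice(candidates)
```

Because every selection event filters with a different random objective ordering, individuals that excel on different objectives all get selected over time, which is the implicit diversity maintenance the abstract refers to.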
Related papers
- Illuminating the Diversity-Fitness Trade-Off in Black-Box Optimization [9.838618121102053]
In real-world applications, users often favor structurally diverse design choices over one high-quality solution.
This paper presents a fresh perspective on this challenge by considering the problem of identifying a fixed number of solutions with a pairwise distance above a specified threshold.
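The problem described above, picking a fixed number of solutions whose pairwise distances all exceed a threshold, admits a simple greedy baseline. The sketch below is illustrative only (all names are hypothetical) and is not the paper's method:

```python
def diverse_subset(solutions, quality, distance, k, threshold):
    """Greedily pick up to k solutions, best-quality first, while keeping
    every selected pair at least `threshold` apart under `distance`."""
    chosen = []
    for s in sorted(solutions, key=quality, reverse=True):
        # Accept a solution only if it is far enough from everything chosen.
        if all(distance(s, c) >= threshold for c in chosen):
            chosen.append(s)
            if len(chosen) == k:
                break
    return chosen
```

For example, with scalar solutions, `quality=lambda x: -abs(x - 1)` and `distance=lambda a, b: abs(a - b)`, the greedy pass trades off closeness to the optimum at 1 against the pairwise separation constraint.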
arXiv Detail & Related papers (2024-08-29T09:55:55Z)
- Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer [69.82229895838577]
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate.
This setting neglects the more practical scenario where training data are collected from multiple sources.
This motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains.
arXiv Detail & Related papers (2023-11-21T13:12:21Z)
- Can the Problem-Solving Benefits of Quality Diversity Be Obtained Without Explicit Diversity Maintenance? [0.0]
We argue that the correct comparison should be made to *multi-objective* optimization frameworks.
We present a method that utilizes dimensionality reduction to automatically determine a set of behavioral descriptors for an individual.
arXiv Detail & Related papers (2023-05-12T21:24:04Z)
- A Unified Algorithm Framework for Unsupervised Discovery of Skills based on Determinantal Point Process [53.86223883060367]
We show that diversity and coverage in unsupervised option discovery can indeed be unified under the same mathematical framework.
Our proposed algorithm, ODPP, has undergone extensive evaluation on challenging tasks created with Mujoco and Atari.
arXiv Detail & Related papers (2022-12-01T01:40:03Z)
- Multi-Objective Quality Diversity Optimization [2.4608515808275455]
We propose an extension of the MAP-Elites algorithm to the multi-objective setting: Multi-Objective MAP-Elites (MOME).
Namely, it combines the diversity inherited from the MAP-Elites grid algorithm with the strength of multi-objective optimization.
We evaluate our method on several tasks, from standard optimization problems to robotics simulations.
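For context, the MAP-Elites archive that MOME extends keeps only the fittest individual per behavior-space cell. A minimal single-objective sketch (the fixed per-dimension binning and the assumption that each behavior descriptor lies in [0, 1) are illustrative choices, not taken from the paper):

```python
def map_elites_insert(archive, individual, behavior, fitness, bins=10):
    """Insert into a MAP-Elites archive (a dict keyed by grid cell),
    keeping only the fittest individual seen for each cell.
    Assumes each behavior descriptor lies in [0, 1)."""
    # Discretize the behavior descriptor into a grid cell.
    cell = tuple(min(int(b * bins), bins - 1) for b in behavior)
    incumbent = archive.get(cell)
    if incumbent is None or fitness > incumbent[1]:
        archive[cell] = (individual, fitness)
    return archive
```

The QD-Score and Coverage metrics mentioned in the abstract are then, respectively, the sum of fitnesses over filled cells and the fraction of cells that are filled.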
arXiv Detail & Related papers (2022-02-07T10:48:28Z)
- Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [84.34227665232281]
Domain adaptation for semantic segmentation aims to improve the model performance in the presence of a distribution shift between source and target domain.
We leverage the guidance from self-supervised depth estimation, which is available on both domains, to bridge the domain gap.
We demonstrate the effectiveness of our proposed approach on the benchmark tasks SYNTHIA-to-Cityscapes and GTA-to-Cityscapes.
arXiv Detail & Related papers (2021-04-28T07:47:36Z)
- Selection-Expansion: A Unifying Framework for Motion-Planning and Diversity Search Algorithms [69.87173070473717]
We investigate the properties of two diversity search algorithms, the Novelty Search and the Goal Exploration Process algorithms.
The relation to MP algorithms reveals that the smoothness, or lack of smoothness of the mapping between the policy parameter space and the outcome space plays a key role in the search efficiency.
arXiv Detail & Related papers (2021-04-10T13:52:27Z)
- MetaAlign: Coordinating Domain Alignment and Classification for Unsupervised Domain Adaptation [84.90801699807426]
This paper proposes an effective meta-optimization based strategy dubbed MetaAlign.
We treat the domain alignment objective and the classification objective as the meta-train and meta-test tasks in a meta-learning scheme.
Experimental results demonstrate the effectiveness of our proposed method on top of various alignment-based baseline approaches.
arXiv Detail & Related papers (2021-03-25T03:16:05Z)
- BOP-Elites, a Bayesian Optimisation algorithm for Quality-Diversity search [0.0]
We propose the Bayesian Optimisation of Elites (BOP-Elites) algorithm.
By considering user-defined regions of the feature space as 'niches', our task is to find the optimal solution in each niche.
The resulting algorithm is very effective in identifying the parts of the search space that belong to a niche in feature space, and finding the optimal solution in each niche.
arXiv Detail & Related papers (2020-05-08T23:49:13Z)
- Optimized Generic Feature Learning for Few-shot Classification across Domains [96.4224578618561]
We propose to use cross-domain, cross-task data as the validation objective for hyperparameter optimization (HPO).
We demonstrate the effectiveness of this strategy on few-shot image classification within and across domains.
The learned features outperform all previous few-shot and meta-learning approaches.
arXiv Detail & Related papers (2020-01-22T09:31:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.