When to Truncate the Archive? On the Effect of the Truncation Frequency in Multi-Objective Optimisation
- URL: http://arxiv.org/abs/2504.01332v2
- Date: Sun, 13 Apr 2025 10:09:55 GMT
- Title: When to Truncate the Archive? On the Effect of the Truncation Frequency in Multi-Objective Optimisation
- Authors: Zhiji Cui, Zimin Liang, Lie Meng Pang, Hisao Ishibuchi, Miqing Li
- Abstract summary: We show that, interestingly, truncating the archive each time a new solution is generated tends to be the best, whereas considering an unbounded archive is often the worst. Our results highlight the importance of developing effective subset selection techniques.
- Score: 6.391724105255245
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Using an archive to store nondominated solutions found during the search of a multi-objective evolutionary algorithm (MOEA) is a useful practice. However, as the nondominated solutions of a multi-objective optimisation problem can be enormous or even infinitely many, it is desirable to provide the decision-maker with only a small, representative portion of all the nondominated solutions in the archive, thus entailing a truncation operation. Then, an important issue is when to truncate the archive. This can be done each time a new solution is generated, each time a batch of new solutions is generated, or even by using an unbounded archive that keeps all nondominated solutions generated and is truncated later. Intuitively, the last approach may lead to a better result since we have all the information in hand before performing the truncation. In this paper, we study this issue and investigate the effect of the timing of truncating the archive. We apply well-established truncation criteria that are commonly used in the population maintenance procedure of MOEAs (e.g., crowding distance, hypervolume indicator, and decomposition). We show that, interestingly, truncating the archive each time a new solution is generated tends to be the best, whereas considering an unbounded archive is often the worst. We analyse and discuss this phenomenon. Our results highlight the importance of developing effective subset selection techniques (rather than employing the population maintenance methods in MOEAs) when using a large archive.
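To make the per-solution truncation setting concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation) of the first strategy the abstract describes: each time a new nondominated solution arrives, it is added to a bounded archive, and the most crowded member, measured by NSGA-II-style crowding distance, is dropped whenever the archive exceeds its capacity. Solutions are represented simply as tuples of objective values under a minimisation assumption.

```python
import math

def crowding_distance(archive):
    """Compute the crowding distance of each solution in the archive.

    Each solution is a tuple of objective values (minimisation assumed).
    Boundary solutions in each objective receive an infinite distance,
    so they are never the first to be removed.
    """
    n = len(archive)
    dist = [0.0] * n
    if n == 0:
        return dist
    m = len(archive[0])
    for k in range(m):
        # Sort indices by the k-th objective value.
        order = sorted(range(n), key=lambda i: archive[i][k])
        f_min, f_max = archive[order[0]][k], archive[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = math.inf
        span = f_max - f_min
        if span == 0:
            continue
        # Interior solutions accumulate the normalised gap between
        # their two neighbours along this objective.
        for j in range(1, n - 1):
            dist[order[j]] += (archive[order[j + 1]][k]
                               - archive[order[j - 1]][k]) / span
    return dist

def truncate_per_solution(archive, new_solution, capacity):
    """Per-solution truncation: add the new nondominated solution, then
    drop the most crowded member if the archive exceeds its capacity."""
    archive.append(new_solution)
    if len(archive) > capacity:
        d = crowding_distance(archive)
        archive.pop(min(range(len(archive)), key=lambda i: d[i]))
    return archive
```

The batch and unbounded strategies studied in the paper differ only in when `truncate_per_solution`-style pruning is invoked: after every batch of new solutions, or once at the very end over the full unbounded archive.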
Related papers
- Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Our experiments on the red fluorescent proteins show that our proposed method matches the baseline performance with 1.69x fewer queries.
arXiv Detail & Related papers (2024-06-21T05:57:08Z) - CItruS: Chunked Instruction-aware State Eviction for Long Sequence Modeling [52.404072802235234]
We introduce Chunked Instruction-aware State Eviction (CItruS), a novel modeling technique that integrates the attention preferences useful for a downstream task into the eviction process of hidden states.
Our training-free method exhibits superior performance on long sequence comprehension and retrieval tasks over several strong baselines under the same memory budget.
arXiv Detail & Related papers (2024-06-17T18:34:58Z) - An Archive Can Bring Provable Speed-ups in Multi-Objective Evolutionary Algorithms [22.123838452178585]
We show for the first time, that using an archive can guarantee speed-ups for MOEAs.
We prove that for two well-established MOEAs, using an archive brings an acceleration in the expected running time.
arXiv Detail & Related papers (2024-06-04T08:59:16Z) - Multi-Objective Archiving [6.469246318869941]
Archiving is the process of comparing new solutions with previous ones and deciding how to update the archive/population.
There is a lack of systematic study of archiving methods from a general theoretical perspective.
arXiv Detail & Related papers (2023-03-16T23:08:52Z) - Experiments on Generalizability of BERTopic on Multi-Domain Short Text [2.352645870795664]
We explore how the state-of-the-art BERTopic algorithm performs on short multi-domain text.
We analyze the performance of the HDBSCAN clustering algorithm utilized by BERTopic.
When we replace HDBSCAN with k-Means, we achieve similar performance, but without outliers.
arXiv Detail & Related papers (2022-12-16T13:07:39Z) - Effects of Archive Size on Computation Time and Solution Quality for Multi-Objective Optimization [6.146046338698174]
An external archive has been used to store all nondominated solutions found by an evolutionary multi-objective optimization algorithm in some studies.
We examine the effects of the archive size on three aspects: (i) the quality of the selected final solution set, (ii) the total computation time for the archive maintenance and the final solution set selection, and (iii) the required memory size.
arXiv Detail & Related papers (2022-09-07T12:25:16Z) - Geodesics, Non-linearities and the Archive of Novelty Search [69.6462706723023]
We show that a key effect of the archive is that it counterbalances the exploration biases that result from the use of inadequate behavior metrics.
Our observations seem to hint that attributing a more active role to the archive in sampling can be beneficial.
arXiv Detail & Related papers (2022-05-06T12:03:40Z) - Autoregressive Search Engines: Generating Substrings as Document Identifiers [53.0729058170278]
Autoregressive language models are emerging as the de facto standard for generating answers.
Previous work has explored ways to partition the search space into hierarchical structures.
In this work we propose an alternative that doesn't force any structure in the search space: using all n-grams in a passage as its possible identifiers.
arXiv Detail & Related papers (2022-04-22T10:45:01Z) - ResLT: Residual Learning for Long-tailed Recognition [64.19728932445523]
We propose a more fundamental perspective for long-tailed recognition, i.e., from the aspect of parameter space.
We design an effective residual fusion mechanism: with one main branch optimized to recognize images from all classes, two residual branches are gradually fused and optimized to enhance recognition of medium- and tail-class images, respectively.
We test our method on several benchmarks, i.e., long-tailed version of CIFAR-10, CIFAR-100, Places, ImageNet, and iNaturalist 2018.
arXiv Detail & Related papers (2021-01-26T08:43:50Z) - Meta-AAD: Active Anomaly Detection with Deep Reinforcement Learning [56.65934079419417]
High false-positive rate is a long-standing challenge for anomaly detection algorithms.
We propose Active Anomaly Detection with Meta-Policy (Meta-AAD), a novel framework that learns a meta-policy for query selection.
arXiv Detail & Related papers (2020-09-16T01:47:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.