Effects of Archive Size on Computation Time and Solution Quality for
Multi-Objective Optimization
- URL: http://arxiv.org/abs/2209.03100v1
- Date: Wed, 7 Sep 2022 12:25:16 GMT
- Title: Effects of Archive Size on Computation Time and Solution Quality for
Multi-Objective Optimization
- Authors: Tianye Shu and Ke Shang and Hisao Ishibuchi and Yang Nan
- Abstract summary: An external archive has been used to store all nondominated solutions found by an evolutionary multi-objective optimization algorithm in some studies.
We examine the effects of the archive size on three aspects: (i) the quality of the selected final solution set, (ii) the total computation time for the archive maintenance and the final solution set selection, and (iii) the required memory size.
- Score: 6.146046338698174
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An unbounded external archive has been used to store all nondominated
solutions found by an evolutionary multi-objective optimization algorithm in
some studies. It has been shown that a selected solution subset from the stored
solutions is often better than the final population. However, the use of the
unbounded archive is not always realistic. When the number of examined
solutions is huge, we must pre-specify the archive size. In this study, we
examine the effects of the archive size on three aspects: (i) the quality of
the selected final solution set, (ii) the total computation time for the
archive maintenance and the final solution set selection, and (iii) the
required memory size. Unsurprisingly, increasing the archive size improves the
final solution set quality. Interestingly, the total computation time of a
medium-size archive is much larger than that of either a small-size or a
huge-size archive (e.g., an unbounded archive). To decrease the computation
time, we examine two ideas: periodical archive update and archiving only in
later generations. Compared with updating the archive at every generation, the
first idea obtains almost the same final solution set quality in a much shorter
computation time, at the cost of a slight increase in memory size.
The second idea drastically decreases the computation time at the cost of a
slight deterioration of the final solution set quality. Based on our
experimental results, some suggestions are given about how to appropriately
choose an archiving strategy and an archive size.
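As a rough illustration of the archiving strategies discussed in the abstract, here is a minimal bounded-archive sketch. This is not the paper's implementation: random truncation stands in for a real distance- or indicator-based subset selection, and the update period mentioned in the usage note is hypothetical.

```python
import random


def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def update_archive(archive, new_solutions, max_size):
    """Merge new solutions into the archive, keep only the nondominated
    ones, and truncate to max_size. Random truncation is a placeholder
    for a proper subset selection method."""
    merged = archive + new_solutions
    nondominated = [s for s in merged
                    if not any(dominates(o, s) for o in merged if o is not s)]
    if len(nondominated) > max_size:
        nondominated = random.sample(nondominated, max_size)
    return nondominated
```

The "periodical archive update" idea then amounts to buffering offspring and calling `update_archive` only once every k generations instead of at every generation, trading a larger buffer (memory) for fewer maintenance passes (time).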
Related papers
- When to Truncate the Archive? On the Effect of the Truncation Frequency in Multi-Objective Optimisation [6.391724105255245]
We show that, interestingly, truncating the archive every time a new solution is generated tends to be the best strategy, whereas using an unbounded archive is often the worst.
Our results highlight the importance of developing effective subset selection techniques.
arXiv Detail & Related papers (2025-04-02T03:33:49Z)
- MeMSVD: Long-Range Temporal Structure Capturing Using Incremental SVD [27.472705540825316]
This paper is on long-term video understanding, where the goal is to recognise human actions over long temporal windows (up to minutes long).
We propose an alternative to attention-based schemes which is based on a low-rank approximation of the memory obtained using Singular Value Decomposition.
Our scheme has two advantages: (a) it reduces complexity by more than an order of magnitude, and (b) it is amenable to an efficient implementation for the calculation of the memory bases.
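The low-rank memory idea summarized above can be sketched with a truncated SVD in NumPy. This illustrates only the general compression technique, not the MeMSVD algorithm or its incremental update.

```python
import numpy as np


def compress_memory(memory, rank):
    """Approximate a (frames x dim) feature memory with `rank` basis
    vectors via truncated SVD, reducing storage from frames*dim to
    rank*(frames + dim) numbers."""
    u, s, vt = np.linalg.svd(memory, full_matrices=False)
    coeffs = u[:, :rank] * s[:rank]   # per-frame coefficients
    basis = vt[:rank]                 # shared low-rank basis
    return coeffs, basis


def reconstruct(coeffs, basis):
    """Recover the (approximate) full memory from its compact factors."""
    return coeffs @ basis
```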
arXiv Detail & Related papers (2024-06-11T12:03:57Z)
- Streaming Long Video Understanding with Large Language Models [83.11094441893435]
VideoStreaming is an advanced vision-language large model (VLLM) for video understanding.
It is capable of understanding arbitrary-length videos using a constant number of video streaming tokens that are encoded, propagated, and adaptively selected.
Our model achieves superior performance and higher efficiency on long video benchmarks.
arXiv Detail & Related papers (2024-05-25T02:22:09Z)
- Multi-Objective Archiving [6.469246318869941]
Archiving is the process of comparing new solutions with previous ones and deciding how to update the archive/population.
There is a lack of systematic study of archiving methods from a general theoretical perspective.
arXiv Detail & Related papers (2023-03-16T23:08:52Z)
- Generalizing Few-Shot NAS with Gradient Matching [165.5690495295074]
One-Shot methods train one supernet to approximate the performance of every architecture in the search space via weight-sharing.
Few-Shot NAS reduces the level of weight-sharing by splitting the One-Shot supernet into multiple separated sub-supernets.
It significantly outperforms its Few-Shot counterparts while surpassing previous comparable methods in terms of the accuracy of derived architectures.
arXiv Detail & Related papers (2022-03-29T03:06:16Z)
- Memory Replay with Data Compression for Continual Learning [80.95444077825852]
We propose memory replay with data compression to reduce the storage cost of old training samples.
We extensively validate this across several benchmarks of class-incremental learning and in a realistic scenario of object detection for autonomous driving.
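The storage-saving idea above can be illustrated with a toy quantization compressor. Plain uint8 quantization here is only a stand-in for the actual compression schemes (e.g., JPEG for images) studied in the paper.

```python
import numpy as np


def compress(features):
    """Quantize float features in [0, 1] to uint8, cutting the storage of
    a float32 buffer by 4x at the cost of ~1/255 quantization error."""
    return np.clip(np.round(features * 255.0), 0, 255).astype(np.uint8)


def decompress(codes):
    """Map the stored uint8 codes back to floats in [0, 1] for replay."""
    return codes.astype(np.float32) / 255.0
```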
arXiv Detail & Related papers (2022-02-14T10:26:23Z)
- Structured Prediction Problem Archive [30.27508546519084]
Structured prediction problems are among the fundamental problem classes in machine learning.
We collect in one place a large number of datasets, in easy-to-read formats, for a diverse set of problem classes.
For reference, we also give a non-exhaustive selection of algorithms proposed in the literature for their solution.
arXiv Detail & Related papers (2022-02-04T12:30:49Z)
- ResLT: Residual Learning for Long-tailed Recognition [64.19728932445523]
We propose a more fundamental perspective for long-tailed recognition, i.e., from the aspect of parameter space.
We design an effective residual fusion mechanism: one main branch is optimized to recognize images from all classes, while two residual branches are gradually fused and optimized to enhance recognition of the medium and tail classes, respectively.
We test our method on several benchmarks, i.e., long-tailed version of CIFAR-10, CIFAR-100, Places, ImageNet, and iNaturalist 2018.
arXiv Detail & Related papers (2021-01-26T08:43:50Z)
- Memory-Efficient Hierarchical Neural Architecture Search for Image Restoration [68.6505473346005]
We propose a memory-efficient hierarchical NAS framework (HiNAS) for image denoising and image super-resolution tasks.
With a single GTX 1080Ti GPU, it takes only about 1 hour to search for a denoising network on BSD500 and 3.5 hours to search for a super-resolution architecture on DIV2K.
arXiv Detail & Related papers (2020-12-24T12:06:17Z)
- Evolutionary Multi-Objective Optimization Algorithm Framework with Three Solution Sets [7.745468825770201]
It is assumed that a final solution is selected by a decision maker from a non-dominated solution set obtained by an EMO algorithm.
In this paper, we suggest the use of a general EMO framework with three solution sets to handle various situations.
arXiv Detail & Related papers (2020-12-14T08:04:07Z)
- ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem.
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performances on both CIFAR-10 and ImageNet at the cost of only evaluation time.
arXiv Detail & Related papers (2020-10-13T04:34:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.