Multi-Objective Archiving
- URL: http://arxiv.org/abs/2303.09685v2
- Date: Wed, 13 Sep 2023 21:40:48 GMT
- Title: Multi-Objective Archiving
- Authors: Miqing Li, Manuel López-Ibáñez, Xin Yao
- Abstract summary: Archiving is the process of comparing new solutions with previous ones and deciding how to update the archive/population.
There is a lack of systematic study of archiving methods from a general theoretical perspective.
- Score: 6.469246318869941
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most multi-objective optimisation algorithms maintain an archive explicitly or implicitly during their search. Such an archive can be used solely to store high-quality solutions presented to the decision maker, but in many cases it may also participate in the search process (e.g., as the population in evolutionary computation). Over the last two decades, archiving, the process of comparing new solutions with previous ones and deciding how to update the archive/population, has stood as an important issue in evolutionary multi-objective optimisation (EMO). This is evidenced by constant efforts from the community on developing various effective archiving methods, ranging from conventional Pareto-based methods to more recent indicator-based and decomposition-based ones. However, the focus of these efforts has been on empirical performance comparison in terms of specific quality indicators; there is a lack of systematic study of archiving methods from a general theoretical perspective. In this paper, we attempt to conduct a systematic overview of multi-objective archiving, in the hope of paving the way to understanding archiving algorithms from a holistic perspective of theory and practice, and, more importantly, providing guidance on how to design theoretically desirable and practically useful archiving algorithms. In doing so, we also show that archiving algorithms based on weakly Pareto compliant indicators (e.g., the epsilon-indicator), as long as they are designed properly, can achieve the same theoretical desirables as archivers based on Pareto compliant indicators (e.g., the hypervolume indicator). Such desirables include the limit-optimal property, the limit form of the best property that a bounded archiving algorithm can possibly have with respect to the most general form of superiority between solution sets.
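To make the archiving notion above concrete, here is a minimal sketch (not taken from the paper) of the two ingredients the abstract refers to: a Pareto-dominance-based update rule that decides whether a new solution enters an unbounded archive, and the additive epsilon-indicator that weakly Pareto compliant archivers can build on. The function names, the minimisation convention, and the tuple representation of objective vectors are illustrative assumptions.

```python
# A minimal sketch, assuming minimisation and objective vectors stored as tuples
# of floats; names are illustrative, not taken from the paper.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, new):
    """Archiving step: compare a new solution with the archive and update it.

    The newcomer is rejected if it is dominated by (or equal to) an archived
    solution; otherwise it is inserted and every archived solution it dominates
    is removed, so the archive stays mutually nondominated.
    """
    if any(dominates(old, new) or old == new for old in archive):
        return archive
    return [old for old in archive if not dominates(new, old)] + [new]

def additive_epsilon(A, B):
    """Additive epsilon-indicator I_eps+(A, B): the smallest shift that makes
    every point of B weakly dominated by some shifted point of A (minimisation)."""
    return max(min(max(ai - bi for ai, bi in zip(a, b)) for a in A) for b in B)

# Toy usage with three bi-objective points; the second is dominated by the first.
archive = []
for point in [(1.0, 3.0), (2.0, 4.0), (3.0, 1.0)]:
    archive = update_archive(archive, point)
print(archive)                                   # [(1.0, 3.0), (3.0, 1.0)]
print(additive_epsilon(archive, [(2.0, 2.0)]))   # 1.0
```

An indicator-based archiver could, for instance, accept a newcomer only when doing so improves such an indicator value for the archive; a bounded variant must additionally decide which member to discard, which is where theoretical properties such as limit-optimality, discussed in the paper, become relevant.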
Related papers
- Constrained Auto-Regressive Decoding Constrains Generative Retrieval [71.71161220261655]
Generative retrieval seeks to replace traditional search index data structures with a single large-scale neural network.
In this paper, we examine the inherent limitations of constrained auto-regressive generation from two essential perspectives: constraints and beam search.
arXiv Detail & Related papers (2025-04-14T06:54:49Z) - When to Truncate the Archive? On the Effect of the Truncation Frequency in Multi-Objective Optimisation [6.391724105255245]
We show that, interestingly, truncating the archive each time a new solution is generated tends to be the best choice, whereas keeping an unbounded archive is often the worst.
Our results highlight the importance of developing effective subset selection techniques (a minimal bounded-archiver sketch appears after this list).
arXiv Detail & Related papers (2025-04-02T03:33:49Z) - Learning More Effective Representations for Dense Retrieval through Deliberate Thinking Before Search [65.53881294642451]
The Deliberate Thinking based Dense Retriever (DEBATER) enhances recent dense retrievers by enabling them to learn more effective document representations through a step-by-step thinking process.
Experimental results show that DEBATER significantly outperforms existing methods across several retrieval benchmarks.
arXiv Detail & Related papers (2025-02-18T15:56:34Z) - MO-IOHinspector: Anytime Benchmarking of Multi-Objective Algorithms using IOHprofiler [0.7418044931036347]
We propose a new software tool which uses principles from unbounded archiving as a logging structure.
This leads to a clearer separation between experimental design and subsequent analysis decisions.
arXiv Detail & Related papers (2024-12-10T12:00:53Z) - Hierarchical Reinforcement Learning for Temporal Abstraction of Listwise Recommendation [51.06031200728449]
We propose a novel framework called mccHRL that provides different levels of temporal abstraction for listwise recommendation.
Within the hierarchical framework, the high-level agent studies the evolution of user perception, while the low-level agent produces the item selection policy.
Experiments show significant performance improvements of our method compared with several well-known baselines.
arXiv Detail & Related papers (2024-09-11T17:01:06Z) - Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding targets compressing and reconstructing data, while intelligence centres on model learning and prediction.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z) - Experimental Analysis of Large-scale Learnable Vector Storage Compression [42.52474894105165]
Learnable embedding vectors are one of the most important applications in machine learning.
The high dimensionality of sparse data in recommendation tasks and the huge volume of corpus in retrieval-related tasks lead to a large memory consumption of the embedding table.
Recent research has proposed various methods to compress the embeddings at the cost of a slight decrease in model quality or the introduction of other overheads.
arXiv Detail & Related papers (2023-11-27T07:11:47Z) - Hierarchical learning, forecasting coherent spatio-temporal individual and aggregated building loads [1.3764085113103222]
We propose a novel multi-dimensional hierarchical forecasting method built upon structurally-informed machine-learning regressors and hierarchical reconciliation taxonomy.
The method is evaluated on two different case studies to predict building electrical loads.
Overall, the paper expands and unites traditional hierarchical forecasting methods, providing a fertile route toward a novel generation of forecasting regressors.
arXiv Detail & Related papers (2023-01-30T15:11:46Z) - Multi-Resolution Online Deterministic Annealing: A Hierarchical and Progressive Learning Architecture [0.0]
We introduce a general-purpose hierarchical learning architecture that is based on the progressive partitioning of a possibly multi-resolution data space.
We show that the solution of each optimization problem can be estimated online using gradient-free approximation updates.
Asymptotic convergence analysis and experimental results are provided for supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-12-15T23:21:49Z) - Multi-Task Off-Policy Learning from Bandit Feedback [54.96011624223482]
We propose a hierarchical off-policy optimization algorithm (HierOPO), which estimates the parameters of the hierarchical model and then acts pessimistically with respect to them.
We prove per-task bounds on the suboptimality of the learned policies, which show a clear improvement over not using the hierarchical model.
Our theoretical and empirical results show a clear advantage of using the hierarchy over solving each task independently.
arXiv Detail & Related papers (2022-12-09T08:26:27Z) - Language Model Decoding as Likelihood-Utility Alignment [54.70547032876017]
We introduce a taxonomy that groups decoding strategies based on their implicit assumptions about how well the model's likelihood is aligned with the task-specific notion of utility.
Specifically, by analyzing the correlation between the likelihood and the utility of predictions across a diverse set of tasks, we provide the first empirical evidence supporting the proposed taxonomy.
arXiv Detail & Related papers (2022-10-13T17:55:51Z) - Effects of Archive Size on Computation Time and Solution Quality for Multi-Objective Optimization [6.146046338698174]
In some studies, an external archive has been used to store all nondominated solutions found by an evolutionary multi-objective optimization algorithm.
We examine the effects of the archive size on three aspects: (i) the quality of the selected final solution set, (ii) the total computation time for the archive maintenance and the final solution set selection, and (iii) the required memory size.
arXiv Detail & Related papers (2022-09-07T12:25:16Z) - Autoregressive Search Engines: Generating Substrings as Document Identifiers [53.0729058170278]
Autoregressive language models are emerging as the de-facto standard for generating answers.
Previous work has explored ways to partition the search space into hierarchical structures.
In this work we propose an alternative that doesn't force any structure in the search space: using all ngrams in a passage as its possible identifiers.
arXiv Detail & Related papers (2022-04-22T10:45:01Z) - Provable Hierarchy-Based Meta-Reinforcement Learning [50.17896588738377]
We analyze HRL in the meta-RL setting, where the learner learns latent hierarchical structure during meta-training for use in a downstream task.
We provide "diversity conditions" which, together with a tractable optimism-based algorithm, guarantee sample-efficient recovery of this natural hierarchy.
Our bounds incorporate common notions in HRL literature such as temporal and state/action abstractions, suggesting that our setting and analysis capture important features of HRL in practice.
arXiv Detail & Related papers (2021-10-18T17:56:02Z)
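The truncation-frequency and archive-size entries above both concern bounded archives that must discard solutions once a capacity is reached. Below is a hedged sketch, not the cited papers' methods, of one common policy: Pareto-update the archive and then truncate it back to capacity immediately after each insertion, using crowding distance as an illustrative subset-selection criterion. The capacity parameter, function names, and the crowding heuristic are my own assumptions for illustration.

```python
# Illustrative bounded archiver: Pareto update followed by immediate truncation.
# Crowding distance is a common subset-selection heuristic chosen for illustration,
# not necessarily what the papers listed above evaluate.

def pareto_insert(archive, new):
    """Minimal Pareto update for minimisation (see the fuller sketch after the abstract)."""
    dom = lambda a, b: a != b and all(x <= y for x, y in zip(a, b))
    if any(dom(old, new) or old == new for old in archive):
        return archive
    return [old for old in archive if not dom(new, old)] + [new]

def crowding_distance(points):
    """Crowding distance of each objective vector (larger = more isolated)."""
    n, m = len(points), len(points[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: points[i][k])
        span = points[order[-1]][k] - points[order[0]][k] or 1.0
        dist[order[0]] = dist[order[-1]] = float("inf")    # always keep the extremes
        for r in range(1, n - 1):
            dist[order[r]] += (points[order[r + 1]][k] - points[order[r - 1]][k]) / span
    return dist

def insert_and_truncate(archive, new, capacity):
    """Update the archive with `new`, then truncate to `capacity` if it overflows."""
    archive = pareto_insert(archive, new)
    if len(archive) > capacity:
        d = crowding_distance(archive)
        archive.pop(d.index(min(d)))                       # drop the most crowded member
    return archive
```

A "truncate after every insertion" policy like this is the setting that the truncation-frequency entry above compares against less frequent truncation and against keeping an unbounded archive.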
This list is automatically generated from the titles and abstracts of the papers on this site.