Multi-objective Pseudo Boolean Functions in Runtime Analysis: A Review
- URL: http://arxiv.org/abs/2503.19166v1
- Date: Mon, 24 Mar 2025 21:42:33 GMT
- Title: Multi-objective Pseudo Boolean Functions in Runtime Analysis: A Review
- Authors: Zimin Liang, Miqing Li
- Abstract summary: We survey commonly used multi-objective functions in the theory domain and systematically review their features, limitations, and implications for practical use. We also present several new functions with more realistic features, such as local optimality and nonlinearity of the Pareto front, by simply mixing and matching classical single-objective functions.
- Score: 3.3472202807168774
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, there has been growing interest within the theoretical community in analytically studying multi-objective evolutionary algorithms. This runtime analysis-focused research can help formally understand algorithm behaviour, explain empirical observations, and provide theoretical insights to support algorithm development and exploration. However, the test problems commonly used in theoretical analysis are predominantly limited to problems with heavily "artificial" characteristics (e.g., symmetric objectives and linear Pareto fronts), which may not represent realistic scenarios well. In this paper, we survey commonly used multi-objective functions in the theory domain and systematically review their features, limitations and implications for practical use. Moreover, we present several new functions with more realistic features, such as local optimality and nonlinearity of the Pareto front, by simply mixing and matching classical single-objective functions in the area (e.g., LeadingOnes, Jump and RoyalRoad). We hope these functions can enrich the existing test problem suites and strengthen the connection between theoretical and practical research.
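As a concrete illustration of the "mixing and matching" construction described in the abstract, the sketch below implements the named classical single-objective functions LeadingOnes, Jump and RoyalRoad and pairs two of them into a bi-objective function. The particular pairing, the parameter k and the block length r are illustrative assumptions here, not necessarily the exact combinations proposed in the paper.

```python
# Illustrative sketch only: classical single-objective pseudo-Boolean
# functions (to be maximised) and one hypothetical bi-objective pairing.
# The exact combinations studied in the paper may differ.

def leading_ones(x):
    """Number of consecutive 1-bits at the start of the bit string."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def jump(x, k):
    """Classical Jump_k: a fitness gap of size k just below the all-ones optimum."""
    n, ones = len(x), sum(x)
    return k + ones if (ones <= n - k or ones == n) else n - ones

def royal_road(x, r):
    """Classical RoyalRoad_r: each fully-set block of length r contributes r
    (assumes len(x) is divisible by r)."""
    return sum(r for i in range(0, len(x), r) if all(x[i:i + r]))

def bi_objective(x, k=3):
    """Hypothetical bi-objective function obtained by mixing two classical
    single-objective functions."""
    return (leading_ones(x), jump(x, k))

if __name__ == "__main__":
    x = [1, 1, 0, 1, 1, 1, 1, 1]
    print(bi_objective(x))      # (2, 1) for this x with n=8, k=3
    print(royal_road(x, r=4))   # 4: only the second block is all ones
```

Mixing a unimodal function such as LeadingOnes with one that has a fitness gap, such as Jump, is one simple way to obtain the kind of local optimality that the abstract argues is missing from the usual symmetric, linear-front benchmarks.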
Related papers
- Constrained Multi-objective Bayesian Optimization through Optimistic Constraints Estimation [10.77641869521259]
We propose a novel constrained multi-objective Bayesian optimization algorithm COMBOO that balances active learning of the level-set defined on multiple unknowns with multi-objective optimization within the feasible region.
We provide both theoretical analysis and empirical evidence, demonstrating the efficacy of our approach on various synthetic benchmarks and real-world applications.
arXiv Detail & Related papers (2024-11-06T03:38:00Z) - Empirical Tests of Optimization Assumptions in Deep Learning [41.05664717242051]
This paper develops new empirical metrics to track the key quantities that must be controlled in theoretical analysis.
All of our tested assumptions fail to reliably capture optimization performance.
This highlights a need for new empirical verification of analytical assumptions used in theoretical analysis.
arXiv Detail & Related papers (2024-07-01T21:56:54Z) - Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding targets compressing and reconstructing data, while intelligence is often framed in terms of model learning and prediction.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z) - On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
arXiv Detail & Related papers (2023-12-13T02:11:07Z) - Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors [58.340159346749964]
We propose a new neural-symbolic method to support end-to-end learning using complex queries with provable reasoning capability.
We develop a new dataset containing ten new types of queries with features that have never been considered.
Our method significantly outperforms previous methods on the new dataset and also surpasses them on the existing dataset.
arXiv Detail & Related papers (2023-04-14T11:35:35Z) - Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims at optimizing decision-making strategies with historical data, has been extensively applied in real-life applications.
We study offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
arXiv Detail & Related papers (2022-10-03T07:59:42Z) - A Principled Method for the Creation of Synthetic Multi-fidelity Data Sets [3.512854793379827]
Multifidelity and multioutput optimisation algorithms allow experimental and computational proxies to be used intelligently in the search for optimal species.
Characterisation of these algorithms involves benchmarks that typically either use analytic functions or existing multifidelity datasets.
We present a methodology for systematic generation of synthetic fidelities derived from a reference ground truth function with a controllable degree of correlation.
arXiv Detail & Related papers (2022-08-11T06:55:14Z) - Efficient Dependency Analysis for Rule-Based Ontologies [0.2752817022620644]
Dependencies have been proposed for the static analysis of properties of existential rules.
We focus on two kinds of rule dependencies -- positive reliances and restraints.
We implement optimised algorithms for their efficient computation.
arXiv Detail & Related papers (2022-07-20T05:53:36Z) - Multi-Agent Reinforcement Learning with Temporal Logic Specifications [65.79056365594654]
We study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment.
We develop the first multi-agent reinforcement learning technique for temporal logic specifications.
We provide correctness and convergence guarantees for our main algorithm.
arXiv Detail & Related papers (2021-02-01T01:13:03Z) - On Connections between Regularizations for Improving DNN Robustness [67.28077776415724]
This paper analyzes regularization terms proposed recently for improving the adversarial robustness of deep neural networks (DNNs).
We study possible connections between several effective methods, including input-gradient regularization, Jacobian regularization, curvature regularization, and a cross-Lipschitz functional.
arXiv Detail & Related papers (2020-07-04T23:43:32Z) - Fixed-Target Runtime Analysis [8.246980996934347]
Runtime analysis classically studies the time needed to find an optimal solution.
Two complementary approaches have been suggested: fixed-budget analyses and fixed-target analyses.
We show that, in contrast to the fixed-budget setting, many classical methods from the runtime analysis of discrete evolutionary algorithms yield fixed-target results without much additional effort (a minimal illustration of the fixed-target view follows after this list).
arXiv Detail & Related papers (2020-04-20T20:26:35Z)
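As a small illustration of the fixed-target perspective mentioned in the last entry above, the sketch below runs a (1+1) EA on OneMax and records the first iteration at which each fitness target is reached. The choice of algorithm, problem, mutation rate and targets is an assumption made for illustration and is not taken from that paper.

```python
# Illustrative sketch: fixed-target bookkeeping for a (1+1) EA on OneMax.
# Fixed-budget analysis asks "what fitness is reached after T steps?";
# fixed-target analysis asks "how many steps until fitness f is first reached?".
import random

def one_max(x):
    return sum(x)

def one_plus_one_ea_fixed_target(n=50, max_iters=100_000, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best = one_max(x)
    first_hit = {f: 0 for f in range(best + 1)}   # targets already met at initialisation
    for t in range(1, max_iters + 1):
        # standard bit mutation: flip each bit independently with probability 1/n
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        fy = one_max(y)
        if fy >= best:
            x, old_best, best = y, best, fy
            for f in range(old_best + 1, best + 1):
                first_hit[f] = t                  # first time target f is reached
        if best == n:
            break
    return first_hit

if __name__ == "__main__":
    hits = one_plus_one_ea_fixed_target()
    # first hitting times for a few targets (values depend on the seed)
    print({f: hits[f] for f in (25, 40, 50) if f in hits})
```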
This list is automatically generated from the titles and abstracts of the papers listed on this site.