Runtime Analysis of the (1+1) EA on Weighted Sums of Transformed Linear Functions
- URL: http://arxiv.org/abs/2208.05670v1
- Date: Thu, 11 Aug 2022 07:05:15 GMT
- Title: Runtime Analysis of the (1+1) EA on Weighted Sums of Transformed Linear Functions
- Authors: Frank Neumann and Carsten Witt
- Abstract summary: We study the class of objective functions that are weighted sums of two transformed linear functions.
Our results show that the (1+1) EA, with a mutation rate depending on the number of overlapping bits of the functions, obtains an optimal solution for these functions in expected time O(n log n).
- Score: 13.264683014487376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear functions play a key role in the runtime analysis of evolutionary
algorithms and studies have provided a wide range of new insights and
techniques for analyzing evolutionary computation methods. Motivated by studies
on separable functions and the optimization behaviour of evolutionary
algorithms, as well as by objective functions from the area of chance-constrained
optimization, we study the class of objective functions that are weighted sums
of two transformed linear functions. Our results show that the (1+1) EA, with a
mutation rate depending on the number of overlapping bits of the functions,
obtains an optimal solution for these functions in expected time O(n log n),
thereby generalizing a well-known result for linear functions to a much wider
range of problems.
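To make the setting concrete, below is a minimal Python sketch of the (1+1) EA on an illustrative member of this function class: two linear functions whose supports overlap in a few bits, each passed through a monotone transformation. The instance, the square-root transformations, and the plain 1/n mutation rate are assumptions for illustration; the paper's overlap-dependent mutation rate is not reproduced here.

```python
import random

def one_plus_one_ea(f, n, p=None, max_iters=100_000):
    """Minimal (1+1) EA: standard bit mutation plus elitist selection."""
    p = 1.0 / n if p is None else p   # the paper tunes p by the bit overlap
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for _ in range(max_iters):
        y = [1 - b if random.random() < p else b for b in x]
        fy = f(y)
        if fy >= fx:                  # accept the offspring if not worse
            x, fx = y, fy
    return x, fx

# Assumed instance: two linear functions whose supports overlap in bits 8-11,
# each passed through a monotone (square-root) transformation and then
# added with positive weights.
n = 20
lin1 = lambda x: sum((i + 1) * x[i] for i in range(12))    # bits 0..11
lin2 = lambda x: sum((i + 1) * x[i] for i in range(8, n))  # bits 8..19
f = lambda x: 2.0 * lin1(x) ** 0.5 + 3.0 * lin2(x) ** 0.5

x_best, f_best = one_plus_one_ea(f, n)
print(x_best, f_best)  # the all-ones string maximizes f
```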
Related papers
- Functional Bilevel Optimization for Machine Learning [36.081761044296904]
We introduce a new functional point of view on bilevel optimization problems for machine learning, where the inner objective is minimized over a function space.
We propose scalable and efficient algorithms for the functional bilevel optimization problem.
arXiv Detail & Related papers (2024-03-29T15:22:03Z)
- Benchmarking Differential Evolution on a Quantum Simulator [0.0]
Differential Evolution (DE) can be used to compute the minima of functions such as the Rastrigin and Rosenbrock functions.
This work studies the results of applying the DE method to these functions with candidate individuals generated by classical (Turing-model) computation.
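A minimal sketch of classical DE/rand/1/bin minimizing the Rastrigin function follows; the quantum-simulator candidate generation studied in the paper is not reproduced, and all hyperparameters are standard illustrative defaults.

```python
import numpy as np

def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def differential_evolution(f, dim, pop_size=30, F=0.8, CR=0.9,
                           bounds=(-5.12, 5.12), generations=500, seed=0):
    """Plain DE/rand/1/bin with greedy replacement."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(ind) for ind in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # rand/1 mutation
            cross = rng.random(dim) < CR                    # binomial crossover
            cross[rng.integers(dim)] = True                 # force one gene over
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:                       # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]

x, fx = differential_evolution(rastrigin, dim=5)
print(x, fx)  # should approach the global minimum at the origin, f = 0
```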
arXiv Detail & Related papers (2023-11-06T14:27:00Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
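The multi-step inverse-kinematics idea can be sketched as follows: learn an observation encoder by predicting the first action taken between an observation and one seen k steps later. This is only the representation-learning loss; the paper's systematic-exploration component (and MusIK's exact architecture) is omitted, and all dimensions here are hypothetical.

```python
import torch
import torch.nn as nn

class MultiStepInverseModel(nn.Module):
    """Encoder trained by multi-step inverse kinematics: predict the first
    action a_t from the embeddings of o_t and o_{t+k}."""
    def __init__(self, obs_dim, n_actions, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.head = nn.Linear(2 * latent_dim, n_actions)

    def forward(self, obs_t, obs_tk):
        z = torch.cat([self.encoder(obs_t), self.encoder(obs_tk)], dim=-1)
        return self.head(z)  # logits for the first action a_t

# One hypothetical training step on a batch of (o_t, a_t, o_{t+k}) triples.
model = MultiStepInverseModel(obs_dim=10, n_actions=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs_t, obs_tk = torch.randn(64, 10), torch.randn(64, 10)
actions = torch.randint(0, 4, (64,))
loss = nn.functional.cross_entropy(model(obs_t, obs_tk), actions)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```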
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Using Affine Combinations of BBOB Problems for Performance Assessment [0.9281671380673306]
We show how affine function combinations can be used to analyze the behavior of optimization algorithms.
In particular, we highlight that by varying the weighting between the combined problems, we can gain insights into the effects of added global structure.
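A minimal sketch of the idea, assuming a plain linear blend of two test functions; the paper's exact BBOB construction (e.g. any log-scaling or optimum alignment) may differ.

```python
import numpy as np

def sphere(x):
    return np.sum(x**2)

def rosenbrock(x):
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

def affine_combination(f1, f2, alpha):
    """Blend two test problems; alpha=1 recovers f1, alpha=0 recovers f2."""
    return lambda x: alpha * f1(x) + (1 - alpha) * f2(x)

# Sweeping the weight exposes how added global structure changes the landscape.
x = np.full(5, 0.5)
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    f = affine_combination(sphere, rosenbrock, alpha)
    print(alpha, f(x))
```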
arXiv Detail & Related papers (2023-03-08T13:37:55Z)
- On Solution Functions of Optimization: Universal Approximation and Covering Number Bounds [6.3291148076593355]
We study the expressibility of solution functions of convex optimization, in particular of linear programming (LP) and quadratic programming (QP), and their approximation power.
Our results provide a framework for the rigorous analysis of the properties of these restricted programming problems, with implications for algorithmic design and performance guarantees.
arXiv Detail & Related papers (2022-12-02T17:16:04Z)
- Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims to optimize decision-making strategies using historical data, has been widely applied in real-life applications.
We take a step forward by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
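A tabular stand-in for pessimistic fitted Q-learning is sketched below: standard Bellman backups on the offline data minus a count-based uncertainty penalty, so rarely-seen state-action pairs are valued conservatively. The paper analyzes differentiable function classes rather than tables, so this only conveys the pessimism mechanism; the dataset is synthetic.

```python
import numpy as np

def pessimistic_fqi(dataset, n_states, n_actions, gamma=0.99,
                    beta=1.0, iters=100):
    """Pessimistic fitted Q-iteration on (s, a, r, s') tuples."""
    counts = np.zeros((n_states, n_actions))
    for s, a, r, s2 in dataset:
        counts[s, a] += 1
    bonus = beta / np.sqrt(np.maximum(counts, 1))   # pessimism penalty
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        targets = np.zeros_like(Q)
        n = np.zeros_like(Q)
        for s, a, r, s2 in dataset:
            targets[s, a] += r + gamma * Q[s2].max()  # Bellman backup
            n[s, a] += 1
        # Averaged backup minus the penalty; unseen pairs stay at zero.
        Q = np.where(n > 0, targets / np.maximum(n, 1) - bonus, 0.0)
    return Q

# Tiny synthetic offline dataset: 2 states, 2 actions.
data = [(0, 0, 1.0, 1), (1, 1, 0.0, 0), (0, 0, 1.0, 1), (1, 0, 0.5, 0)]
print(pessimistic_fqi(data, n_states=2, n_actions=2, iters=50))
```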
arXiv Detail & Related papers (2022-10-03T07:59:42Z)
- Multivariate functional group sparse regression: functional predictor selection [2.0063942015243423]
We develop two methods for functional group-sparse regression under a generic Hilbert space of infinite dimension.
We show the convergence of algorithms and the consistency of the estimation and the selection.
Applications to functional magnetic resonance imaging (fMRI) data reveal the regions of the human brain related to ADHD and IQ.
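After a basis expansion, functional predictor selection reduces to a finite-dimensional group-lasso problem; below is a standard proximal-gradient sketch of that reduction on hypothetical data. The paper's two methods for the infinite-dimensional Hilbert-space setting are not reproduced here.

```python
import numpy as np

def group_soft_threshold(v, t):
    """Block soft-thresholding: shrink a whole coefficient group toward zero."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1 - t / norm) * v

def group_lasso(X, y, groups, lam=1.0, step=None, iters=500):
    """Proximal gradient for min ||y - Xb||^2 / 2 + lam * sum_g ||b_g||_2.
    `groups` is a list of index arrays, one per predictor group (e.g. the
    basis coefficients of one functional predictor)."""
    n, p = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
    b = np.zeros(p)
    for _ in range(iters):
        z = b - step * (X.T @ (X @ b - y))       # gradient step
        for g in groups:
            b[g] = group_soft_threshold(z[g], step * lam)  # group prox
    return b

# Hypothetical data: only the first of three groups is truly active.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 9))
beta_true = np.concatenate([rng.standard_normal(3), np.zeros(6)])
y = X @ beta_true + 0.1 * rng.standard_normal(100)
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]
print(group_lasso(X, y, groups, lam=5.0))  # inactive groups shrink to zero
```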
arXiv Detail & Related papers (2021-07-05T17:11:28Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
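A skeleton of the loop structure is sketched below, assuming random linear subspaces of R^d with random search inside each one as a stand-in for the paper's GP-draw subspaces and per-subspace Bayesian optimisation.

```python
import numpy as np

def sequential_subspace_search(f, dim, n_subspaces=10, sub_dim=2,
                               budget_per_subspace=50, seed=0):
    """Optimize within a sequence of low-dimensional random subspaces,
    re-centering each subspace at the best point found so far."""
    rng = np.random.default_rng(seed)
    origin = np.zeros(dim)
    best_x, best_f = origin, f(origin)
    for _ in range(n_subspaces):
        basis = rng.standard_normal((sub_dim, dim))
        basis /= np.linalg.norm(basis, axis=1, keepdims=True)
        for _ in range(budget_per_subspace):
            coeffs = rng.normal(scale=0.5, size=sub_dim)
            x = origin + coeffs @ basis        # point inside the subspace
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        origin = best_x                        # next subspace starts here
    return best_x, best_f

f = lambda x: np.sum((x - 1.0) ** 2)           # toy objective
print(sequential_subspace_search(f, dim=20))
```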
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers group actions is proposed for principal component analysis and k-means clustering.
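One standard way to obtain a group-invariant feature, orbit averaging, can be sketched as follows; the paper's tensor-product construction is more refined, and the random feature map here is hypothetical.

```python
import numpy as np

def invariant_feature(image, feature_fn, n_rotations=4):
    """Make a feature vector invariant to a finite group action by averaging
    the feature over the group orbit (here: 90-degree image rotations)."""
    feats = [feature_fn(np.rot90(image, k)) for k in range(n_rotations)]
    return np.mean(feats, axis=0)

# Toy feature extractor: flattened image through a fixed random linear map.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
feature_fn = lambda img: W @ img.reshape(-1)

img = rng.standard_normal((4, 4))
f1 = invariant_feature(img, feature_fn)
f2 = invariant_feature(np.rot90(img), feature_fn)
print(np.allclose(f1, f2))  # True: the pooled feature ignores rotation
```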
arXiv Detail & Related papers (2019-06-05T07:15:17Z)