Vectorial Genetic Programming -- Optimizing Segments for Feature
Extraction
- URL: http://arxiv.org/abs/2303.03200v1
- Date: Fri, 3 Mar 2023 10:08:10 GMT
- Title: Vectorial Genetic Programming -- Optimizing Segments for Feature
Extraction
- Authors: Philipp Fleck, Stephan Winkler, Michael Kommenda, Michael Affenzeller
- Abstract summary: Vec-GP allows aggregating vectors only over a limited segment of the vector instead of the whole vector.
This paper formalizes an optimization problem to analyze different strategies for optimizing a window for aggregation functions.
- Score: 2.561649173827544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vectorial Genetic Programming (Vec-GP) extends GP by allowing vectors as
input features alongside regular, scalar features, using them by applying
arithmetic operations component-wise or aggregating vectors into scalars by
some aggregation function. Vec-GP also allows aggregating vectors only over a
limited segment of the vector instead of the whole vector, which offers great
potential but also introduces new parameters that GP has to optimize. This
paper formalizes an optimization problem to analyze different strategies for
optimizing a window for aggregation functions. Different strategies are
presented, including random and guided sampling, where the latter leverages
information from an approximated gradient. Those strategies can be applied as a
simple optimization algorithm, which itself can be applied inside a specialized
mutation operator within GP. The presented results indicate that the different
random sampling strategies do not impact the overall algorithm performance
significantly, and that the guided strategies suffer from becoming stuck in
local optima. However, results also indicate that there is still potential in
discovering more efficient algorithms that could outperform the presented
strategies.
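To make the windowed aggregation concrete, here is a minimal sketch of the segment-aggregation operation and of a random-sampling strategy for choosing the window; all names, defaults, and the error criterion below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def window_aggregate(vec, start, length, agg=np.mean):
    """Aggregate only the segment vec[start:start+length] into a scalar.

    In Vec-GP the window bounds are extra parameters that GP has to
    optimize; this helper is an illustrative stand-in.
    """
    end = min(start + length, len(vec))
    return agg(vec[start:end])

def random_window_search(vec, target, n_samples=100, rng=None):
    """Random-sampling strategy, sketched: draw candidate windows
    uniformly and keep the one whose aggregate is closest to target."""
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_err = None, np.inf
    for _ in range(n_samples):
        start = int(rng.integers(0, len(vec)))
        length = int(rng.integers(1, len(vec) - start + 1))
        err = abs(window_aggregate(vec, start, length) - target)
        if err < best_err:
            best, best_err = (start, length), err
    return best, best_err
```

A guided strategy, in this sketch, would replace the uniform draws with proposals biased by an approximated gradient of the error with respect to the window bounds.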
Related papers
- Optimizing Posterior Samples for Bayesian Optimization via Rootfinding [2.94944680995069]
We introduce an efficient global optimization strategy for posterior samples based on global rootfinding.
We demonstrate remarkable improvement in both inner- and outer-loop optimization.
We also propose a sample-average formulation of GP-TS, which has a parameter to explicitly control exploitation.
arXiv Detail & Related papers (2024-10-29T17:57:16Z)
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- Gaussian Process Thompson Sampling via Rootfinding [2.94944680995069]
Thompson sampling (TS) is a simple, effective policy in Bayesian decision making.
In continuous optimization, the posterior of the objective function is often a Gaussian process (GP), whose sample paths have numerous local optima.
We introduce an efficient global optimization strategy for GP-TS that carefully selects starting points for gradient-based multi-starts.
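The discrete-candidate core of GP-TS can be sketched as follows: draw one joint posterior sample over a candidate grid and take its argmax. This deliberately omits the paper's rootfinding-based selection of starting points; the kernel choice, length scale, and names are assumptions:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_thompson_pick(x_obs, y_obs, x_cand, rng, jitter=1e-6):
    """One Thompson-sampling step: compute the GP posterior over the
    candidate grid, draw a single joint sample path, return its argmax."""
    k = rbf_kernel(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    ks = rbf_kernel(x_cand, x_obs)
    kss = rbf_kernel(x_cand, x_cand)
    mu = ks @ np.linalg.solve(k, y_obs)
    cov = kss - ks @ np.linalg.solve(k, ks.T)
    sample = rng.multivariate_normal(mu, cov + jitter * np.eye(len(x_cand)))
    return int(np.argmax(sample))
```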
arXiv Detail & Related papers (2024-10-10T16:06:45Z)
- A Principle for Global Optimization with Gradients [0.0]
This work demonstrates the utility of gradients for the global optimization of certain differentiable functions with many suboptimal local minima.
Experiments measure the quality of non-local search directions as well as the performance of a proposed simplistic algorithm.
arXiv Detail & Related papers (2023-08-18T13:39:29Z)
- Thompson sampling for improved exploration in GFlowNets [75.89693358516944]
Generative flow networks (GFlowNets) are amortized variational inference algorithms that treat sampling from a distribution over compositional objects as a sequential decision-making problem with a learnable action policy.
We show in two domains that TS-GFN yields improved exploration and thus faster convergence to the target distribution than the off-policy exploration strategies used in past work.
arXiv Detail & Related papers (2023-06-30T14:19:44Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD)
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
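A sketch of the ZO sign-based descent idea on a generic smooth objective; the hyperparameters, decay schedule, and function name are illustrative, and this is plain ZO-signGD rather than the molecule-specific setup:

```python
import numpy as np

def zo_sign_gd(f, x0, steps=200, mu=1e-3, lr=0.05, rng=None):
    """Zeroth-order sign gradient descent (minimization).

    The gradient is estimated from function values only, via a random
    direction and a central difference; the update uses just the sign
    of that estimate, which is the key idea behind ZO-signGD.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        u = rng.standard_normal(x.shape)
        # directional-derivative estimate along u, expanded to a gradient estimate
        g = (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        x -= lr * np.sign(g)
        lr *= 0.99  # simple decay so the iterates settle
    return x
```

In the molecular setting, f would be a black-box score queried through the benchmark suite rather than an analytic function.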
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Optimistic search strategy: Change point detection for large-scale data via adaptive logarithmic queries [1.3212010735248336]
Change point detection is often formulated as a search for the maximum of a gain function describing improved fits when segmenting the data.
We propose optimistic search strategies with $O(\log T)$ queries, exploiting specific structure of the gain function.
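The logarithmic query count can be illustrated with a ternary-style search over candidate change points, under the (strong) assumption that the gain function is unimodal; this is a sketch of the general principle, not the paper's exact algorithm:

```python
def optimistic_search(gain, lo, hi):
    """Argmax of a unimodal gain over the integer range [lo, hi] using
    O(log T) evaluations instead of a full scan. Illustrative sketch."""
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if gain(m1) < gain(m2):
            lo = m1 + 1  # the maximum lies to the right of m1
        else:
            hi = m2 - 1  # the maximum lies to the left of m2
    return max(range(lo, hi + 1), key=gain)
```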
arXiv Detail & Related papers (2020-10-20T11:09:52Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
- Optimization of Graph Total Variation via Active-Set-based Combinatorial Reconditioning [48.42916680063503]
We propose a novel adaptive preconditioning strategy for proximal algorithms on this problem class.
We show that nested-forest decomposition of the inactive edges yields a guaranteed local linear convergence rate.
Our results suggest that local convergence analysis can serve as a guideline for selecting variable metrics in proximal algorithms.
arXiv Detail & Related papers (2020-02-27T16:33:09Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives simultaneously while outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.