Fast Ergodic Search with Kernel Functions
- URL: http://arxiv.org/abs/2403.01536v1
- Date: Sun, 3 Mar 2024 15:30:31 GMT
- Title: Fast Ergodic Search with Kernel Functions
- Authors: Muchen Sun, Ayush Gaggar, Peter Trautman, Todd Murphey
- Abstract summary: We develop a kernel-based ergodic metric and generalize it from Euclidean space to Lie groups.
We derive the first-order optimality condition of the kernel ergodic metric for nonlinear systems.
Comprehensive numerical benchmarks show that the proposed method is at least two orders of magnitude faster than the state-of-the-art algorithm.
- Score: 0.4915744683251149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ergodic search enables optimal exploration of an information distribution
while guaranteeing the asymptotic coverage of the search space. However,
current methods typically have exponential computational complexity in the search
space dimension and are restricted to Euclidean space. We introduce a
computationally efficient ergodic search method. Our contributions are
two-fold. First, we develop a kernel-based ergodic metric and generalize it
from Euclidean space to Lie groups. We formally prove the proposed metric is
consistent with the standard ergodic metric while guaranteeing linear
complexity in the search space dimension. Second, we derive the first-order
optimality condition of the kernel ergodic metric for nonlinear systems, which
enables efficient trajectory optimization. Comprehensive numerical benchmarks
show that the proposed method is at least two orders of magnitude faster than
the state-of-the-art algorithm. Finally, we demonstrate the proposed algorithm
with a peg-in-hole insertion task. We formulate the problem as a coverage task
in the space of SE(3) and use a 30-second-long human demonstration as the prior
distribution for ergodic coverage. Ergodicity guarantees the asymptotic
solution of the peg-in-hole problem so long as the solution resides within the
prior information distribution, as reflected in the 100% success rate.
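To make the flavor of the approach concrete, below is a minimal NumPy sketch of an MMD-style kernel ergodic metric: the discrepancy between a trajectory's empirical distribution and samples from the target distribution, under an assumed RBF kernel and bandwidth. It illustrates why the per-evaluation cost scales linearly with the search-space dimension, but it is not the paper's exact formulation (the kernel choice, normalization, and Lie-group generalization are simplified here).
```python
import numpy as np

def rbf_kernel(a, b, h=0.1):
    """RBF kernel matrix between sample sets a (N, d) and b (M, d)."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * h ** 2))

def kernel_ergodic_metric(traj, target_samples, h=0.1):
    """MMD^2-style discrepancy between the trajectory's empirical
    distribution and samples from the target distribution; it is zero
    exactly when the two distributions agree in the kernel's RKHS."""
    T, M = len(traj), len(target_samples)
    k_tt = rbf_kernel(traj, traj, h).sum() / T ** 2
    k_tp = rbf_kernel(traj, target_samples, h).sum() / (T * M)
    k_pp = rbf_kernel(target_samples, target_samples, h).sum() / M ** 2
    return k_tt - 2.0 * k_tp + k_pp

# usage: how well does a random-walk trajectory cover a 2-D Gaussian target?
rng = np.random.default_rng(0)
target = rng.normal(0.5, 0.1, size=(500, 2))
traj = 0.5 + np.cumsum(rng.normal(0.0, 0.02, size=(200, 2)), axis=0)
print(kernel_ergodic_metric(traj, target))
```
Note that the kernel evaluations depend on the dimension only through the squared distance, which is the intuition behind the linear-complexity claim.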
Related papers
- A Three-Stage Algorithm for the Closest String Problem on Artificial and Real Gene Sequences [39.58317527488534]
The Closest String Problem is an NP-hard problem that asks for a string minimizing the maximum Hamming distance to all sequences in a given set of strings.
In this paper, we introduce a three-stage algorithm. First, a novel alphabet pruning method reduces the search space and identifies promising search regions.
Second, a variant of beam search is employed to find a solution, guided by a newly developed function based on an expected distance score of partial solutions.
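As a rough illustration of the beam-search stage, here is a self-contained sketch for the closest-string objective. The guiding function below is a simple worst-accumulated-distance stand-in, not the paper's expected-distance score, and the alphabet-pruning stage is omitted.
```python
def closest_string_beam(strings, alphabet, beam_width=50):
    """Beam search for the closest-string objective: minimise the
    maximum Hamming distance to every string in the input set."""
    L = len(strings[0])
    beam = [("", tuple(0 for _ in strings))]  # (prefix, distances so far)
    for pos in range(L):
        candidates = []
        for prefix, dists in beam:
            for ch in alphabet:
                new = tuple(d + (ch != s[pos]) for d, s in zip(dists, strings))
                candidates.append((prefix + ch, new))
        # guiding function: worst accumulated distance (a simple stand-in
        # for the paper's expected-distance score)
        candidates.sort(key=lambda c: max(c[1]))
        beam = candidates[:beam_width]
    best, dists = min(beam, key=lambda c: max(c[1]))
    return best, max(dists)

print(closest_string_beam(["ACGT", "ACGA", "TCGT"], "ACGT"))
```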
arXiv Detail & Related papers (2024-07-17T21:26:27Z)
- l1-norm regularized l1-norm best-fit lines [3.0963566281269594]
We present a novel fitting procedure, utilizing simple ratios and sorting techniques.
The proposed algorithm demonstrates a worst-case time complexity of $O(n^2 m \log n)$ and, in certain instances, achieves global optimality for the sparse subspace.
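The ratio-and-sort idea can be seen in the core subroutine: the l1 best-fit slope through the origin reduces to a weighted median of the ratios y_i/x_i with weights |x_i|, computable after a single sort. The sketch below shows that step only, as a hypothetical illustration, not the paper's full regularized procedure.
```python
import numpy as np

def l1_line_through_origin(x, y):
    """Slope b minimising sum_i |y_i - b * x_i|.  Since
    |y_i - b x_i| = |x_i| * |y_i/x_i - b|, the optimum is the weighted
    median of the ratios y_i/x_i with weights |x_i|."""
    mask = x != 0
    ratios = y[mask] / x[mask]
    weights = np.abs(x[mask])
    order = np.argsort(ratios)            # sorting step: O(n log n)
    csum = np.cumsum(weights[order])
    # first index where the cumulative weight reaches half the total
    idx = np.searchsorted(csum, 0.5 * csum[-1])
    return ratios[order][idx]

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 * x + rng.laplace(scale=0.1, size=200)
print(l1_line_through_origin(x, y))  # close to the true slope 2.0
```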
arXiv Detail & Related papers (2024-02-26T16:30:58Z)
- AdaSub: Stochastic Optimization Using Second-Order Information in Low-Dimensional Subspaces [0.0]
We introduce AdaSub, a search algorithm that computes a search direction based on second-order information in a low-dimensional subspace.
Compared to first-order methods, second-order methods exhibit better convergence characteristics, but the need to compute the Hessian matrix at each iteration results in excessive computational expenses.
Our preliminary numerical results demonstrate that AdaSub surpasses popular iterations in terms of time and number of iterations required to reach a given accuracy.
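A minimal sketch of the idea, assuming hypothetical grad_fn and hvp_fn callables: project the problem onto the span of recent gradients, solve a small Newton system there, and step. The actual AdaSub subspace construction and safeguards differ.
```python
import numpy as np

def adasub_step(grad_fn, hvp_fn, x, prev_grads, eps=1e-8):
    """One subspace-Newton step in the span of recent gradients.
    grad_fn: x -> gradient; hvp_fn: (x, v) -> Hessian-vector product."""
    g = grad_fn(x)
    # orthonormal basis V (d, k) of the low-dimensional subspace
    V, _ = np.linalg.qr(np.column_stack(prev_grads + [g]))
    H_sub = V.T @ np.column_stack([hvp_fn(x, V[:, i]) for i in range(V.shape[1])])
    g_sub = V.T @ g
    # solve a (regularised) k x k Newton system instead of the d x d one
    step = np.linalg.solve(H_sub + eps * np.eye(len(g_sub)), g_sub)
    return x - V @ step, g

# usage on a strongly convex quadratic (hypothetical test problem)
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 50)); A = A @ A.T + np.eye(50)
b = rng.normal(size=50)
grad = lambda x: A @ x - b
hvp = lambda x, v: A @ v
x, grads = np.zeros(50), []
for _ in range(20):
    x, g = adasub_step(grad, hvp, x, grads[-2:])
    grads.append(g)
print(np.linalg.norm(A @ x - b))  # residual shrinks across iterations
```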
arXiv Detail & Related papers (2023-10-30T22:24:23Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions is a core primitive that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
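For context, entropy-regularized OT is classically approximated with Sinkhorn iterations; the sketch below shows that baseline. Note that the paper itself develops extragradient methods, which are not reproduced here.
```python
import numpy as np

def sinkhorn(mu, nu, C, reg=0.05, iters=500):
    """Entropy-regularised optimal transport via Sinkhorn scaling."""
    K = np.exp(-C / reg)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # match column marginals
        u = mu / (K @ v)                 # match row marginals
    P = u[:, None] * K * v[None, :]      # transport plan
    return (P * C).sum()                 # transport cost under P

n = 50
mu = np.full(n, 1.0 / n)
nu = np.full(n, 1.0 / n)
xs = np.linspace(0, 1, n)
C = (xs[:, None] - xs[None, :]) ** 2     # squared-distance cost matrix
print(sinkhorn(mu, nu, C))
```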
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
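A toy sketch of the Newton-type mechanism: treat the saddle point as a root of the field F(z) = (grad_x f, -grad_y f) and take regularized Newton steps on it. This illustrates the idea only; the paper's inexactness handling and convergence analysis are not reproduced.
```python
import numpy as np

def newton_saddle_step(F, J, z, lam=1e-3):
    """One regularised Newton step on the saddle-point field F;
    a root of F is a saddle point of the underlying min-max problem."""
    return z - np.linalg.solve(J(z) + lam * np.eye(len(z)), F(z))

# convex-concave toy problem f(x, y) = 0.5 x^2 + x*y - 0.5 y^2
F = lambda z: np.array([z[0] + z[1], -(z[0] - z[1])])
J = lambda z: np.array([[1.0, 1.0], [-1.0, 1.0]])
z = np.array([5.0, -3.0])
for _ in range(10):
    z = newton_saddle_step(F, J, z)
print(z)  # converges to (0, 0), the unique saddle point
```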
arXiv Detail & Related papers (2022-10-23T21:24:37Z)
- Structural Kernel Search via Bayesian Optimization and Symbolical Optimal Transport [5.1672267755831705]
For Gaussian processes, selecting the kernel is a crucial task, often done manually by the expert.
We propose a novel, efficient search method through a general, structured kernel space.
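As a point of reference, the sketch below does naive structural kernel search with scikit-learn: enumerate sums and products of base kernels and score each by the GP's log marginal likelihood. The paper's contribution is to replace this exhaustive scoring with Bayesian optimization over the kernel space guided by an optimal-transport distance, which is not reproduced here.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, ExpSineSquared

# toy regression data (hypothetical example)
X = np.linspace(0, 10, 60)[:, None]
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(3).normal(size=60)

# candidate structures: base kernels plus pairwise sums and products
base = [RBF(), RationalQuadratic(), ExpSineSquared()]
candidates = base + [a + b for a in base for b in base] \
                  + [a * b for a in base for b in base]
scores = []
for k in candidates:
    gp = GaussianProcessRegressor(kernel=k, normalize_y=True).fit(X, y)
    scores.append(gp.log_marginal_likelihood_value_)
best = candidates[int(np.argmax(scores))]
print(best)
```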
arXiv Detail & Related papers (2022-10-21T09:30:21Z)
- On the Second-order Convergence Properties of Random Search Methods [7.788439526606651]
We prove that standard random search methods that do not rely on second-order information converge to a second-order stationary point.
We also propose a novel variant of random search that exploits negative curvature while relying only on function evaluations.
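A small sketch of the zeroth-order negative-curvature idea: a second central difference along a random direction d estimates d^T H d from function values alone, and a direction with a negative estimate can be followed to escape a saddle. The step size and threshold below are illustrative assumptions.
```python
import numpy as np

def curvature_along(f, x, d, h=1e-4):
    """Second-difference estimate of d^T H(x) d from function values only."""
    return (f(x + h * d) - 2.0 * f(x) + f(x - h * d)) / h ** 2

def random_search_nc_step(f, x, step=0.5, tries=20, rng=None):
    """Probe random directions; if one shows negative curvature, move
    along it (choosing the sign that lowers f) to escape the saddle."""
    rng = rng or np.random.default_rng()
    for _ in range(tries):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        if curvature_along(f, x, d) < -1e-6:   # negative curvature found
            return x + step * d if f(x + step * d) < f(x - step * d) else x - step * d
    return x  # no escape direction found among the probes

f = lambda z: z[0] ** 2 - z[1] ** 2            # saddle point at the origin
print(random_search_nc_step(f, np.zeros(2), rng=np.random.default_rng(4)))
```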
arXiv Detail & Related papers (2021-10-25T20:59:31Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
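A compact sketch of the loop, with plain random search standing in for the per-subspace Bayesian optimisation: draw a few functions from a GP prior to span a subspace, search coefficients in that subspace, and seed the next subspace at the best function found. A toy blind function-matching objective is assumed.
```python
import numpy as np

def rbf_cov(t, ls=0.2):
    """RBF covariance matrix of a GP prior evaluated on grid t."""
    return np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ls ** 2)

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 100)
K = rbf_cov(t) + 1e-8 * np.eye(len(t))
target = np.sin(6 * t)                          # function to match (toy)
loss = lambda g: np.mean((g - target) ** 2)

origin = np.zeros_like(t)
for epoch in range(5):
    # span a fresh random subspace with 3 draws from the GP prior
    basis = rng.multivariate_normal(np.zeros(len(t)), K, size=3)
    best_g, best_l = origin, loss(origin)
    for _ in range(200):                        # random search stands in for BO
        c = rng.normal(scale=0.5, size=3)
        g = origin + c @ basis
        if loss(g) < best_l:
            best_g, best_l = g, loss(g)
    origin = best_g                             # best point seeds next subspace
print(best_l)
```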
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
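The classical sketch-and-solve step such methods build on looks roughly like this with a Gaussian embedding; the paper's contributions, adapting the sketch size to the effective dimension and covering SRHT, are not reproduced.
```python
import numpy as np

def sketched_ridge(X, y, lam, m, rng):
    """Solve the m-row Gaussian-sketched L2-regularised least-squares
    problem instead of the full n-row one (n >> m)."""
    n, d = X.shape
    S = rng.normal(size=(m, n)) / np.sqrt(m)    # Gaussian embedding
    Xs, ys = S @ X, S @ y
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(d), Xs.T @ ys)

rng = np.random.default_rng(6)
X = rng.normal(size=(10000, 20))
w_true = rng.normal(size=20)
y = X @ w_true + 0.1 * rng.normal(size=10000)
w = sketched_ridge(X, y, lam=1.0, m=400, rng=rng)
print(np.linalg.norm(w - w_true))  # small error at a fraction of the cost
```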
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
- Distributed Averaging Methods for Randomized Second Order Optimization [54.51566432934556]
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a bottleneck.
We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian.
We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems.
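A simplified sketch of the setting: each worker forms a Newton-type direction from its own independently sketched Hessian, and the server averages the directions to reduce the sketching variance. The paper's unbiasedness corrections for the averaged estimates are omitted here.
```python
import numpy as np

def local_newton_direction(X, y, w, S, lam=1e-3):
    """One worker's Newton direction for 0.5 * ||X w - y||^2, using a
    sketched Hessian (S @ X)^T (S @ X) in place of X^T X."""
    g = X.T @ (X @ w - y)                    # exact gradient (cheap)
    Xs = S @ X                               # sketched data (small)
    H_hat = Xs.T @ Xs + lam * np.eye(X.shape[1])
    return np.linalg.solve(H_hat, g)

rng = np.random.default_rng(7)
X = rng.normal(size=(5000, 10)); y = X @ rng.normal(size=10)
w = np.zeros(10)
for _ in range(10):
    # 8 workers sketch the Hessian independently; the server averages
    dirs = [local_newton_direction(X, y, w,
                                   rng.normal(size=(200, 5000)) / np.sqrt(200))
            for _ in range(8)]
    w = w - np.mean(dirs, axis=0)
print(np.linalg.norm(X @ w - y))  # residual after distributed Newton steps
```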
arXiv Detail & Related papers (2020-02-16T09:01:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.