A Gradient Sampling Algorithm for Stratified Maps with Applications to
Topological Data Analysis
- URL: http://arxiv.org/abs/2109.00530v2
- Date: Fri, 3 Sep 2021 09:04:46 GMT
- Title: A Gradient Sampling Algorithm for Stratified Maps with Applications to
Topological Data Analysis
- Authors: Jacob Leygonie, Mathieu Carrière (DATASHAPE), Théo Lacombe (DATASHAPE), Steve Oudot (DATASHAPE)
- Abstract summary: We introduce a novel gradient descent algorithm extending the well-known Gradient Sampling methodology.
We then apply our method to objective functions based on the persistent homology map computed over lower-star filters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel gradient descent algorithm extending the well-known
Gradient Sampling methodology to the class of stratifiably smooth objective
functions, which are defined as locally Lipschitz functions that are smooth on
some regular pieces, called the strata, of the ambient Euclidean space. For this
class of functions, our algorithm achieves a sub-linear convergence rate. We
then apply our method to objective functions based on the (extended) persistent
homology map computed over lower-star filters, which is a central tool of
Topological Data Analysis. For this, we propose an efficient exploration of the
corresponding stratification by using the Cayley graph of the permutation
group. Finally, we provide benchmark and novel topological optimization
problems, in order to demonstrate the utility and applicability of our
framework.
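Since the abstract builds on the classical Gradient Sampling methodology of Burke, Lewis and Overton, a minimal generic sketch of one such step may help fix ideas: sample gradients in a small ball around the current iterate, take the minimal-norm element of their convex hull as a descent direction, and backtrack along it. This is only an illustration under those assumptions, not the paper's stratified variant (in particular, the exploration of strata via the Cayley graph of the permutation group is not modelled), and all names and parameters below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize


def min_norm_convex_combination(G):
    """Return the minimal-norm element of the convex hull of the rows of G (a small QP)."""
    m = G.shape[0]
    w0 = np.full(m, 1.0 / m)
    res = minimize(
        lambda w: float(np.sum((w @ G) ** 2)),
        w0,
        method="SLSQP",
        bounds=[(0.0, 1.0)] * m,
        constraints={"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
    )
    return res.x @ G


def gradient_sampling_step(f, grad_f, x, eps=1e-2, n_samples=30, lr=1e-1, rng=None):
    """One simplified Gradient Sampling step for a locally Lipschitz f that is
    smooth on the strata where grad_f is evaluated."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    # Sample gradients at the current point and at nearby points in an eps-ball.
    pts = np.vstack([x, x + eps * rng.uniform(-1.0, 1.0, size=(n_samples, d))])
    G = np.array([grad_f(p) for p in pts])
    g = min_norm_convex_combination(G)      # candidate descent direction
    if np.linalg.norm(g) < 1e-8:            # approximate Clarke stationarity
        return x, True
    # Armijo-style backtracking line search along -g.
    t, fx = lr, f(x)
    while f(x - t * g) > fx - 1e-4 * t * np.dot(g, g) and t > 1e-12:
        t *= 0.5
    return x - t * g, False


if __name__ == "__main__":
    # Toy usage on the nonsmooth function f(x) = ||x||_1, smooth on each orthant.
    f = lambda x: float(np.abs(x).sum())
    grad_f = np.sign                        # gradient on each smooth stratum
    x = np.array([1.0, -2.0])
    for _ in range(100):
        x, done = gradient_sampling_step(f, grad_f, x)
        if done:
            break
    print(x)                                # expected to approach the origin
```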
Related papers
- Diffeomorphic interpolation for efficient persistence-based topological optimization [3.7550827441501844]
Topological Data Analysis (TDA) provides a pipeline to extract quantitative topological descriptors from structured objects.
We show that our approach combines efficiently with subsampling techniques, as the diffeomorphism derived from the gradient computed on a subsample can be used to update the coordinates of the full input object.
We also showcase the relevance of our approach for black-box autoencoder (AE) regularization, where we aim at enforcing topological priors on the latent spaces associated to fixed, pre-trained, blackbox AE models.
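One way to read the subsampling claim above is that a gradient computed on a subsample is smoothed into a displacement field defined on every point of the input. The snippet below is a deliberately simplified, hypothetical sketch of that idea using Gaussian-kernel smoothing; the paper's actual diffeomorphic interpolation is more involved, and the function name, kernel choice and parameters here are assumptions.

```python
import numpy as np


def extend_subsample_gradient(X, idx, G_sub, sigma=0.5):
    """Extend a gradient field G_sub, computed only on the subsample X[idx],
    to every point of X by Gaussian-kernel smoothing."""
    sq_dists = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1)   # (n, k)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))                        # kernel weights
    W /= W.sum(axis=1, keepdims=True) + 1e-12                         # normalize rows
    return W @ G_sub                                                  # (n, d) field


# Hypothetical usage: a persistence-based gradient on 50 subsampled points
# is propagated to all 1000 points of the cloud in a single smooth update.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
idx = rng.choice(1000, size=50, replace=False)
G_sub = rng.normal(size=(50, 2))   # stand-in for the subsample's topological gradient
X_updated = X - 0.1 * extend_subsample_gradient(X, idx, G_sub)
```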
arXiv Detail & Related papers (2024-05-29T07:00:28Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametrized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- A Principle for Global Optimization with Gradients [0.0]
This work demonstrates the utility of gradients for the global optimization of certain differentiable functions with many suboptimal local minima.
Experiments measure the quality of non-local search directions as well as the performance of a proposed simplistic algorithm.
arXiv Detail & Related papers (2023-08-18T13:39:29Z)
- Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z)
- Whiplash Gradient Descent Dynamics [2.0508733018954843]
We introduce a symplectic convergence analysis of the Whiplash system for convex functions.
We study the algorithm's performance for various costs and provide a practical methodology for analyzing convergence rates.
arXiv Detail & Related papers (2022-03-04T05:47:26Z)
- Random-reshuffled SARAH does not need a full gradient computations [61.85897464405715]
The StochAstic Recursive grAdient algoRitHm (SARAH) is a variance-reduced variant of the Stochastic Gradient Descent (SGD) algorithm.
In this paper, we remove the necessity of a full gradient computation.
The aggregated gradients serve as an estimate of a full gradient in the SARAH algorithm.
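For context, the SARAH recursion reuses the previous gradient estimate instead of recomputing a full gradient at every step. The sketch below shows one inner loop under that reading, with a plain running average standing in for the aggregation scheme of the random-reshuffled variant; grad_i, epoch_len and the other names are illustrative, not the paper's notation.

```python
import numpy as np


def sarah_inner_loop(grad_i, w0, v0, n, lr=0.05, epoch_len=None, rng=None):
    """One SARAH epoch.

    grad_i(w, i) -- stochastic gradient of the i-th component function at w.
    v0           -- initial estimate; classical SARAH uses a full gradient here,
                    whereas the cited variant reuses aggregated stochastic gradients.
    """
    rng = np.random.default_rng() if rng is None else rng
    epoch_len = n if epoch_len is None else epoch_len
    w_prev, v = w0.copy(), v0.copy()
    w = w_prev - lr * v
    agg = np.zeros_like(v0)                    # aggregate for the next epoch's v0
    for _ in range(epoch_len):
        i = rng.integers(n)
        # Recursive estimator: correct the previous estimate with a gradient difference.
        v = grad_i(w, i) - grad_i(w_prev, i) + v
        agg += grad_i(w, i)
        w_prev, w = w, w - lr * v
    return w, agg / epoch_len                  # next iterate and a full-gradient proxy
```

In the classical algorithm the next epoch would restart from a full gradient; the summary's point is that an aggregate such as the one returned above can take that role instead.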
arXiv Detail & Related papers (2021-11-26T06:00:44Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both complexity and function query cost.
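The coordinate importance sampling mentioned above can be illustrated with a two-point finite-difference gradient estimate whose sampled coordinates are importance-weighted. This is a minimal sketch under those assumptions, not the estimator of the cited paper; all names and parameters are hypothetical.

```python
import numpy as np


def zo_gradient_estimate(f, x, probs, k=10, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate along k coordinates drawn from
    an importance distribution `probs`, reweighted to keep the estimate unbiased."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    g = np.zeros(d)
    for j in rng.choice(d, size=k, replace=True, p=probs):
        e = np.zeros(d)
        e[j] = 1.0
        deriv = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)    # finite difference
        g[j] += deriv / (k * probs[j])                          # importance weighting
    return g


# Hypothetical usage on a black-box quadratic, sampling the first coordinates more often.
f = lambda x: float(np.sum(np.arange(1, x.size + 1) * x ** 2))
x = np.ones(5)
probs = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
x = x - 0.05 * zo_gradient_estimate(f, x, probs)
```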
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Asymptotic study of stochastic adaptive algorithm in non-convex landscape [2.1320960069210484]
This paper studies asymptotic properties of adaptive algorithms widely used in optimization and machine learning, among them Adagrad and RMSProp, which are involved in most blackbox deep learning algorithms.
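For reference, the standard update rules of the two adaptive methods named above are sketched below; the cited paper analyzes the asymptotic behavior of such updates in non-convex landscapes rather than introducing them.

```python
import numpy as np


def adagrad_step(w, g, state, lr=0.01, eps=1e-8):
    """Adagrad: scale by the root of the running *sum* of squared gradients."""
    state["sum_sq"] = state.get("sum_sq", np.zeros_like(w)) + g ** 2
    return w - lr * g / (np.sqrt(state["sum_sq"]) + eps)


def rmsprop_step(w, g, state, lr=0.01, beta=0.9, eps=1e-8):
    """RMSProp: scale by the root of an exponential *moving average* instead."""
    state["avg_sq"] = beta * state.get("avg_sq", np.zeros_like(w)) + (1 - beta) * g ** 2
    return w - lr * g / (np.sqrt(state["avg_sq"]) + eps)
```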
arXiv Detail & Related papers (2020-12-10T12:54:45Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
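The loop described above can be sketched as follows, with an RBF-kernel Gaussian Process standing in for the experimenter's GP and plain random search standing in for the standard Bayesian optimisation inner loop; this is an illustrative toy, and every name and parameter is an assumption.

```python
import numpy as np


def gp_draws(grid, k, lengthscale=0.2, rng=None):
    """Draw k functions on `grid` from a zero-mean GP with an RBF kernel."""
    rng = np.random.default_rng() if rng is None else rng
    K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / lengthscale ** 2)
    K += 1e-8 * np.eye(grid.size)
    return rng.multivariate_normal(np.zeros(grid.size), K, size=k)   # (k, |grid|)


def sequential_subspace_search(objective, grid, n_rounds=5, k=3, n_inner=200, rng=None):
    """Each round spans a random k-dimensional subspace of function space with GP
    draws, optimizes the coefficients (random search here, standard BO in the paper),
    and uses the best function found as the origin of the next subspace."""
    rng = np.random.default_rng() if rng is None else rng
    origin = np.zeros(grid.size)                     # current best function
    best_val = objective(origin)
    for _ in range(n_rounds):
        basis = gp_draws(grid, k, rng=rng)           # spans this round's subspace
        round_best, round_val = origin, best_val
        for _ in range(n_inner):
            coeffs = rng.normal(size=k)
            val = objective(origin + coeffs @ basis)
            if val < round_val:
                round_val, round_best = val, origin + coeffs @ basis
        origin, best_val = round_best, round_val     # best found starts the next round
    return origin, best_val


# Hypothetical usage: blind function matching against a hidden target.
grid = np.linspace(0.0, 1.0, 50)
target = np.sin(6 * grid)
objective = lambda f: float(np.mean((f - target) ** 2))
f_best, val = sequential_subspace_search(objective, grid)
```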
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- An adaptive stochastic gradient-free approach for high-dimensional blackbox optimization [0.0]
We propose an adaptive stochastic gradient-free (ASGF) approach for high-dimensional non-smooth problems.
We illustrate the performance of this method on benchmark global optimization problems and learning tasks.
arXiv Detail & Related papers (2020-06-18T22:47:58Z)
- Optimization of Graph Total Variation via Active-Set-based Combinatorial Reconditioning [48.42916680063503]
We propose a novel adaptive preconditioning strategy for proximal algorithms on this problem class.
We show that nested-forest decomposition of the inactive edges yields a guaranteed local linear convergence rate.
Our results suggest that local convergence analysis can serve as a guideline for selecting variable metrics in proximal algorithms.
arXiv Detail & Related papers (2020-02-27T16:33:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.