Geometric Localization of Homology Cycles
- URL: http://arxiv.org/abs/2406.03183v1
- Date: Wed, 5 Jun 2024 12:13:25 GMT
- Title: Geometric Localization of Homology Cycles
- Authors: Amritendu Dhar, Vijay Natarajan, Abhishek Rathod
- Abstract summary: We present a geometric optimization of the cycles that is computable in polynomial time and is stable in an approximate sense.
In practice, the (trivial) exact algorithm is computationally expensive despite having a worst-case polynomial runtime.
We therefore design approximation algorithms; these have reasonable runtimes for moderate-sized datasets, and the cycles they compute are consistently of high quality.
- Score: 2.4906439728472494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computing an optimal cycle in a given homology class, also referred to as the homology localization problem, is known to be an NP-hard problem in general. Furthermore, there is currently no known optimality criterion that localizes classes geometrically and admits a stability property under the setting of persistent homology. We present a geometric optimization of the cycles that is computable in polynomial time and is stable in an approximate sense. Tailoring our search criterion to different settings, we obtain various optimization problems like optimal homologous cycle, minimum homology basis, and minimum persistent homology basis. In practice, the (trivial) exact algorithm is computationally expensive despite having a worst-case polynomial runtime. Therefore, we design approximation algorithms for the above problems and study their performance experimentally. These algorithms have reasonable runtimes for moderate-sized datasets and the cycles computed by these algorithms are consistently of high quality as demonstrated via experiments on multiple datasets.
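To ground the pipeline these problems live in: the sketch below computes the degree-1 persistence diagram of a Rips complex with the GUDHI library on a hypothetical toy point cloud. GUDHI supplies the persistent classes only; the geometric localization of their cycles proposed in the paper is not part of this snippet.

```python
import gudhi  # pip install gudhi

# Toy point cloud sampled near a circle, so H1 has one prominent class.
points = [[1.0, 0.0], [0.5, 0.87], [-0.5, 0.87],
          [-1.0, 0.0], [-0.5, -0.87], [0.5, -0.87]]

# Vietoris-Rips filtration up to dimension 2 (2-simplices are needed
# so that 1-cycles can die).
rips = gudhi.RipsComplex(points=points, max_edge_length=3.0)
st = rips.create_simplex_tree(max_dimension=2)

# Persistence pairs; the dimension-1 intervals are the cycle classes a
# localization criterion would then pick representatives for.
diag = st.persistence()
h1 = [(birth, death) for dim, (birth, death) in diag if dim == 1]
print(h1)
```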
Related papers
- A Block-Coordinate Descent EMO Algorithm: Theoretical and Empirical Analysis [17.89683724761454]
We consider whether conditions exist under which block-coordinate descent is efficient in evolutionary multi-objective optimization.
We propose a block-coordinate version of GSEMO and compare its running time to the standard GSEMO algorithm.
arXiv Detail & Related papers (2024-04-04T23:50:18Z)
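As a rough illustration of the idea in the block-coordinate GSEMO entry above: the sketch below runs a GSEMO-style loop on the standard LOTZ benchmark but restricts each mutation to one coordinate block. The block scheme, block size, and benchmark are illustrative assumptions, not the paper's exact operator.

```python
import random

def dominates(u, v):
    """Pareto dominance for maximization: u >= v everywhere, > somewhere."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def lotz(x):
    """LeadingOnes/TrailingZeros, a standard bi-objective test problem."""
    n = len(x)
    lo = next((i for i, b in enumerate(x) if b == 0), n)
    tz = next((i for i, b in enumerate(reversed(x)) if b == 1), n)
    return (lo, tz)

def block_coordinate_gsemo(n=20, block=5, steps=20000, seed=0):
    rng = random.Random(seed)
    pop = {tuple(rng.randint(0, 1) for _ in range(n))}
    for _ in range(steps):
        x = list(rng.choice(sorted(pop)))
        # Block-coordinate mutation: flip bits only inside one random block
        # (standard GSEMO would flip each of the n bits with prob. 1/n).
        start = rng.randrange(0, n, block)
        for i in range(start, min(start + block, n)):
            if rng.random() < 1.0 / block:
                x[i] ^= 1
        y = tuple(x)
        fy = lotz(y)
        # Keep y unless it is dominated; drop points y dominates.
        if not any(dominates(lotz(p), fy) for p in pop):
            pop = {p for p in pop if not dominates(fy, lotz(p))} | {y}
    return sorted(pop, key=lotz)

print(block_coordinate_gsemo())
```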
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow for inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Generative modeling of time-dependent densities via optimal transport and projection pursuit [3.069335774032178]
We propose a cheap alternative to popular deep learning algorithms for temporal modeling.
Our method is highly competitive compared with state-of-the-art solvers.
arXiv Detail & Related papers (2023-04-19T13:50:13Z)
- Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points [8.452349885923507]
Gradient-based first-order convex optimization algorithms find widespread applicability in a variety of domains, including machine learning tasks.
Motivated by recent advances in the fixed-time stability theory of dynamical systems, we introduce a framework for designing accelerated optimization algorithms.
For functions that admit non-degenerate saddle points, we show that the time required to evade these saddle points is uniformly bounded for all initial conditions.
arXiv Detail & Related papers (2022-12-07T16:36:23Z)
- Adaptive Stochastic Optimisation of Nonconvex Composite Objectives [2.1700203922407493]
We propose and analyse a family of generalised composite mirror descent algorithms.
With adaptive step sizes, the proposed algorithms converge without requiring prior knowledge of the problem.
We exploit the low-dimensional structure of the decision sets for high-dimensional problems.
arXiv Detail & Related papers (2022-11-21T18:31:43Z)
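As a generic illustration of the adaptive composite mirror descent entry above, here is a sketch of entropic mirror descent on the probability simplex with an AdaGrad-style step size that needs no prior knowledge of gradient bounds; the paper's family of algorithms is considerably more general, and this particular update rule is an illustrative assumption.

```python
import numpy as np

def adaptive_entropic_md(grad, x0, steps, eta=1.0, eps=1e-12):
    """Entropic mirror descent on the simplex with an AdaGrad-style
    step size built from accumulated gradient norms."""
    x, g_sq = x0.copy(), 0.0
    for _ in range(steps):
        g = grad(x)
        g_sq += float(np.max(np.abs(g))) ** 2   # accumulate ||g||_inf^2
        step = eta / (np.sqrt(g_sq) + eps)      # adaptive step size
        x = x * np.exp(-step * g)               # multiplicative (mirror) update
        x /= x.sum()                            # renormalize onto the simplex
    return x

# Example: minimize <x, Ax> over the probability simplex.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
x = adaptive_entropic_md(lambda x: 2 * A @ x, np.array([0.5, 0.5]), 500)
print(x)
```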
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
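For the AUPRC entry above: while optimizing AUPRC is hard, evaluating it is straightforward. The toy labels and scores below are made up; average precision is the standard finite-sample estimate of the area under the precision-recall curve that such frameworks target.

```python
from sklearn.metrics import average_precision_score

# Toy binary labels and model scores.
y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
print(average_precision_score(y_true, y_score))
```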
- First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces [93.35384756718868]
Min-max algorithms have been analyzed in the Euclidean setting.
We prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate.
arXiv Detail & Related papers (2022-06-04T18:53:44Z)
- Minimal Cycle Representatives in Persistent Homology using Linear Programming: an Empirical Study with User's Guide [4.46514714749204]
Cycle representatives of persistent homology classes can be used to provide descriptions of topological features in data.
One approach to solving this problem is to optimize the choice of representative against some measure that is meaningful in the context of the data.
arXiv Detail & Related papers (2021-05-14T18:38:48Z)
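To make the linear-programming approach in the entry above concrete, here is a minimal sketch of the common ℓ1 criterion: among all real 1-chains homologous to a given cycle, find one of minimum ℓ1 norm via scipy.optimize.linprog. The toy boundary matrix and cycle are illustrative, and this is one standard criterion from that line of work, not the geometric criterion of the main paper.

```python
import numpy as np
from scipy.optimize import linprog

def l1_homologous_cycle(B2, z0):
    """Minimize ||z0 + B2 @ x||_1 over real x.

    B2: (edges x triangles) boundary matrix; z0: a 1-cycle.
    Standard LP reformulation with slack variables t >= |z0 + B2 x|.
    """
    n_e, n_t = B2.shape
    c = np.concatenate([np.zeros(n_t), np.ones(n_e)])   # minimize sum(t)
    A_ub = np.block([[ B2, -np.eye(n_e)],               #   z0 + B2 x <= t
                     [-B2, -np.eye(n_e)]])              # -(z0 + B2 x) <= t
    b_ub = np.concatenate([-z0, z0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n_t + n_e))
    return z0 + B2 @ res.x[:n_t]

# Toy example: two triangles glued along an edge; start from the boundary
# of one triangle (homologous to zero) and let the LP shrink it away.
B2 = np.array([[ 1,  0],   # edge 0, in triangle 0
               [-1,  1],   # shared edge
               [ 1,  0],
               [ 0,  1],
               [ 0, -1]], dtype=float)
z0 = B2[:, 0].copy()
print(l1_homologous_cycle(B2, z0))  # drives the cycle to (near) zero
```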
- Hardness of Random Optimization Problems for Boolean Circuits, Low-Degree Polynomials, and Langevin Dynamics [78.46689176407936]
We show that these families of algorithms fail to produce nearly optimal solutions with high probability.
For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory.
arXiv Detail & Related papers (2020-04-25T05:45:59Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
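For a feel of the embeddings in the entry above, the sketch below solves a least-squares problem by one-shot Gaussian sketch-and-solve and compares against the exact solution; the dimensions are arbitrary, and the paper's optimal methods embed such sketches in iterative solvers rather than this simplest variant.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20000, 50, 400            # tall system, sketch dimension m << n
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Gaussian projection: compress the rows, then solve the small problem.
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```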
This list is automatically generated from the titles and abstracts of the papers on this site.