Analyzing and improving a classical Betti number estimation algorithm
- URL: http://arxiv.org/abs/2509.16171v1
- Date: Fri, 19 Sep 2025 17:29:57 GMT
- Title: Analyzing and improving a classical Betti number estimation algorithm
- Authors: Julien Sorci
- Abstract summary: A classical algorithm for estimating the normalized Betti number of an arbitrary simplicial complex was proposed. Motivated by a quantum algorithm with a similar Monte Carlo structure and improved sample complexity, we give a more in-depth analysis of the sample complexity of this classical algorithm. We show that for certain models our improvement almost always leads to a reduced sample complexity, and also produce separate regimes where the sample complexity for both algorithms is exponential.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, a classical algorithm for estimating the normalized Betti number of an arbitrary simplicial complex was proposed. Motivated by a quantum algorithm with a similar Monte Carlo structure and improved sample complexity, we give a more in-depth analysis of the sample complexity of this classical algorithm. To this end, we present bounds for the variance of the estimators used in the classical algorithm and show that the variance depends on certain combinatorial properties of the underlying simplicial complex. This new analysis leads us to propose an improvement to the classical algorithm which makes the "easy cases easier", in that it reduces the sample complexity for simplicial complexes where the variance is sufficiently small. We show the effectiveness and limitations of these classical algorithms by considering Erdős-Rényi random graph models to demonstrate the existence of "easy" and "hard" cases. Namely, we show that for certain models our improvement almost always leads to a reduced sample complexity, and also produce separate regimes where the sample complexity for both algorithms is exponential.
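The "easy cases easier" idea above — draw a pilot sample, estimate the variance, and size the remaining sample accordingly — can be illustrated with a minimal generic sketch. This is an illustration of variance-adaptive Monte Carlo mean estimation under a Chebyshev-style bound, not the paper's actual Betti number estimator; the function names and the pilot size are assumptions made here for the example.

```python
import random

def adaptive_mc_mean(sample, eps, delta, pilot=100, rng=random):
    """Two-phase Monte Carlo mean estimation.

    Phase 1 draws a pilot sample to estimate the variance; phase 2 sizes
    the main sample via Chebyshev's inequality so the final estimate is
    within eps of the true mean with probability at least 1 - delta.
    Low-variance ("easy") instances thus need fewer total samples.
    """
    xs = [sample(rng) for _ in range(pilot)]
    mean = sum(xs) / pilot
    var = sum((x - mean) ** 2 for x in xs) / (pilot - 1)
    # Chebyshev: n >= var / (eps^2 * delta) samples suffice; inflate the
    # pilot variance estimate to hedge against it undershooting.
    n = max(pilot, int(2.0 * var / (eps ** 2 * delta)) + 1)
    extra = [sample(rng) for _ in range(n - pilot)]
    return sum(xs + extra) / n

# Example: estimate the mean of a Bernoulli(0.9) variable.  Because p is
# close to 1 the variance is small, so phase 2 stays cheap.
rng = random.Random(0)
est = adaptive_mc_mean(lambda r: 1.0 if r.random() < 0.9 else 0.0,
                       eps=0.05, delta=0.1, rng=rng)
```

The point of the second phase is exactly the trade-off the abstract describes: the sample complexity is driven by the (estimated) variance of the instance rather than by a worst-case bound.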
Related papers
- Comparing quantum and classical Monte Carlo algorithms for estimating Betti numbers of clique complexes [6.713479655907349]
Several quantum and classical Monte Carlo algorithms for Betti Number Estimation (BNE) on clique complexes have recently been proposed.
We review these algorithms, emphasising their common Monte Carlo structure within a new modular framework.
By recombining the different modules, we create a new quantum algorithm with an exponentially-improved dependence in the sample complexity.
arXiv Detail & Related papers (2024-08-29T22:35:50Z) - Optimal Algorithms for Stochastic Complementary Composite Minimization [55.26935605535377]
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization.
We provide novel excess risk bounds, both in expectation and with high probability.
Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems.
arXiv Detail & Related papers (2022-11-03T12:40:24Z) - Iterative regularization for low complexity regularizers [18.87017835436693]
Iterative regularization exploits the implicit bias of an optimization algorithm to regularize ill-posed problems.
We propose and study the first iterative regularization procedure able to handle biases described by non smooth and non strongly convex functionals.
arXiv Detail & Related papers (2022-02-01T14:09:00Z) - Information-Theoretic Generalization Bounds for Iterative
Semi-Supervised Learning [81.1071978288003]
In particular, we seek to understand the behaviour of the generalization error of iterative SSL algorithms using information-theoretic principles.
Our theoretical results suggest that when the class conditional variances are not too large, the upper bound on the generalization error decreases monotonically with the number of iterations, but quickly saturates.
arXiv Detail & Related papers (2021-10-03T05:38:49Z) - On the Cryptographic Hardness of Learning Single Periodic Neurons [42.86685497609574]
We show a simple reduction which demonstrates the cryptographic hardness of learning a single neuron over isotropic Gaussian distributions in the presence of noise.
Our proposed algorithm is not a gradient-based or an SQ (statistical query) algorithm, but is rather based on the celebrated Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm.
arXiv Detail & Related papers (2021-06-20T20:03:52Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the complexity of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Rejection sampling from shape-constrained distributions in sublinear time [14.18847457501901]
We study the query complexity of rejection sampling in a minimax framework for various classes of discrete distributions.
Our results provide new algorithms for sampling whose complexity scales sublinearly with the alphabet size.
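For context, the basic primitive those sublinear algorithms improve on is ordinary discrete rejection sampling, which makes an expected M proposal queries per accepted sample. The sketch below is the textbook version for illustration, not the sublinear algorithm from the paper; the function names and the example pmf are assumptions made here.

```python
import random

def rejection_sample(target, proposal_draw, proposal_pmf, M, rng=random):
    """Draw one sample from `target` (a pmf dict) by rejection sampling.

    `proposal_draw` samples from the proposal, `proposal_pmf` evaluates
    its pmf, and M upper-bounds target(x) / proposal_pmf(x) over all x.
    The expected number of proposal queries per accepted draw is M.
    """
    while True:
        x = proposal_draw(rng)
        # Accept x with probability target(x) / (M * proposal_pmf(x)).
        if rng.random() * M * proposal_pmf(x) <= target.get(x, 0.0):
            return x

# Example: sample from a skewed pmf over {0, 1, 2} via a uniform proposal.
# M = max_x target(x) / (1/3) = 0.7 * 3 = 2.1.
target = {0: 0.7, 1: 0.2, 2: 0.1}
uniform_pmf = lambda x: 1.0 / 3.0
rng = random.Random(1)
draws = [rejection_sample(target, lambda r: r.randrange(3), uniform_pmf,
                          M=2.1, rng=rng) for _ in range(2000)]
```

The acceptance test queries the target pmf once per proposal, so total work scales with M; the cited paper studies when structure in the target (shape constraints) lets that cost scale sublinearly with the alphabet size.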
arXiv Detail & Related papers (2021-05-29T01:00:42Z) - Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra [53.46106569419296]
We create classical (non-quantum) dynamic data structures supporting queries for recommender systems and least-squares regression.
We argue that the previous quantum-inspired algorithms for these problems are doing leverage or ridge-leverage score sampling in disguise.
arXiv Detail & Related papers (2020-11-09T01:13:07Z) - Beyond Worst-Case Analysis in Stochastic Approximation: Moment Estimation Improves Instance Complexity [58.70807593332932]
We study the oracle complexity of gradient-based methods for approximation problems.
We focus on instance-dependent complexity instead of worst case complexity.
Our proposed algorithm and its analysis provide a theoretical justification for the success of moment estimation.
arXiv Detail & Related papers (2020-06-08T09:25:47Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.