Sparse and Smooth: improved guarantees for Spectral Clustering in the
Dynamic Stochastic Block Model
- URL: http://arxiv.org/abs/2002.02892v2
- Date: Mon, 10 Feb 2020 15:46:24 GMT
- Title: Sparse and Smooth: improved guarantees for Spectral Clustering in the
Dynamic Stochastic Block Model
- Authors: Nicolas Keriven, Samuel Vaiter
- Abstract summary: We analyse classical variants of the Spectral Clustering (SC) algorithm in the Dynamic Stochastic Block Model (DSBM).
Existing results show that, in the relatively sparse case where the expected degree grows logarithmically with the number of nodes, guarantees in the static case can be extended to the dynamic case.
We improve over these results by drawing a new link between the sparsity and the smoothness of the DSBM.
- Score: 12.538755088321404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we analyse classical variants of the Spectral Clustering (SC)
algorithm in the Dynamic Stochastic Block Model (DSBM). Existing results show
that, in the relatively sparse case where the expected degree grows
logarithmically with the number of nodes, guarantees in the static case can be
extended to the dynamic case and yield improved error bounds when the DSBM is
sufficiently smooth in time, that is, the communities do not change too much
between two time steps. We improve over these results by drawing a new link
between the sparsity and the smoothness of the DSBM: the more regular the DSBM
is, the more sparse it can be, while still guaranteeing consistent recovery. In
particular, a mild condition on the smoothness allows us to treat the sparse case
with bounded degree. We also extend these guarantees to the normalized
Laplacian, and as a by-product of our analysis, we obtain, to our knowledge, the
best spectral concentration bound available for the normalized Laplacian of
matrices with independent Bernoulli entries.
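As a rough illustration of the kind of procedure the paper analyses, the sketch below runs spectral clustering on a temporally smoothed adjacency matrix using the symmetric normalized Laplacian. This is a minimal sketch, not the authors' exact algorithm or guarantees: the sliding-window averaging, the optional degree regularization `tau`, and the use of numpy/scipy/scikit-learn are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def normalized_laplacian(A, tau=0.0):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    The optional degree regularization tau is a common trick for sparse
    graphs (an assumption here, not necessarily the paper's choice)."""
    deg = A.sum(axis=1) + tau
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    return np.eye(A.shape[0]) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def dynamic_spectral_clustering(adjacency_seq, k, window=3):
    """Cluster the nodes at each time step of a dynamic graph.

    adjacency_seq : list of (n, n) symmetric 0/1 adjacency matrices
    k             : number of communities
    window        : number of recent snapshots averaged before clustering;
                    averaging exploits smoothness in time (illustrative choice).
    """
    labels = []
    for t in range(len(adjacency_seq)):
        # Smooth the adjacency over a sliding window of recent snapshots.
        start = max(0, t - window + 1)
        A_bar = np.mean(adjacency_seq[start:t + 1], axis=0)
        L = normalized_laplacian(A_bar)
        # k eigenvectors associated with the smallest eigenvalues of L.
        _, vecs = eigh(L, subset_by_index=[0, k - 1])
        labels.append(KMeans(n_clusters=k, n_init=10).fit_predict(vecs))
    return labels
```

In this sketch the window length plays the role of the smoothness assumption: the slower the communities drift between time steps, the more snapshots can be averaged, which is what allows sparser (even bounded-degree) graphs in the guarantees described above.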
Related papers
- Gradient Normalization with(out) Clipping Ensures Convergence of Nonconvex SGD under Heavy-Tailed Noise with Improved Results [60.92029979853314]
This paper investigates gradient-normalized SGD with(out) clipping (NSGDC) and its variance-reduced variant (NSGDC-VR).
We present significant improvements in the theoretical results for both algorithms.
arXiv Detail & Related papers (2024-10-21T22:40:42Z)
- (Almost) Smooth Sailing: Towards Numerical Stability of Neural Networks Through Differentiable Regularization of the Condition Number [0.0]
We introduce a novel regularizer that is provably differentiable almost everywhere.
We show the advantages of this approach for noisy classification and denoising of MNIST images.
arXiv Detail & Related papers (2024-09-30T19:18:15Z)
- Robust Stochastic Optimization via Gradient Quantile Clipping [6.2844649973308835]
We introduce a quantile clipping strategy for Stochastic Gradient Descent (SGD).
Quantiles of the gradient norm serve as clipping thresholds, which makes the iterates robust to outliers.
We propose an efficient implementation of the algorithm based on quantile estimation.
arXiv Detail & Related papers (2023-09-29T15:24:48Z)
- Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction [49.66486092259376]
The mean-field Langevin dynamics (MFLD) is a nonlinear generalization of the Langevin dynamics that incorporates a distribution-dependent drift.
Recent works have shown that MFLD globally minimizes an entropy-regularized convex functional in the space of measures.
We provide a framework to prove a uniform-in-time propagation of chaos for MFLD that takes into account the errors due to finite-particle approximation, time-discretization, and gradient approximation.
arXiv Detail & Related papers (2023-06-12T16:28:11Z)
- Utilising the CLT Structure in Stochastic Gradient based Sampling: Improved Analysis and Faster Algorithms [14.174806471635403]
We consider approximations of sampling algorithms, such as Stochastic Gradient Langevin Dynamics (SGLD) and the Random Batch Method (RBM) for Interacting Particle Dynamics (IPD).
We observe that the noise introduced by the approximation is nearly Gaussian due to the Central Limit Theorem (CLT) while the driving Brownian motion is exactly Gaussian.
We harness this structure to absorb the approximation error inside the diffusion process, and obtain improved convergence guarantees for these algorithms.
arXiv Detail & Related papers (2022-06-08T10:17:40Z)
- Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for methods for solving monotone and structured non-monotone VIPs.
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
arXiv Detail & Related papers (2022-06-02T15:21:55Z)
- Resonance in Weight Space: Covariate Shift Can Drive Divergence of SGD with Momentum [26.25434025410027]
Existing work has shown that SGDm with a decaying step-size can converge under Markovian temporal correlation.
In this work, we show that SGDm under covariate shift with a fixed step-size can be unstable and diverge.
We approximate the learning system as a time varying system of ordinary differential equations, and leverage existing theory to characterize the system's divergence/convergence as resonant/nonresonant modes.
arXiv Detail & Related papers (2022-03-22T18:38:13Z)
- Hyperspectral Image Denoising Using Non-convex Local Low-rank and Sparse Separation with Spatial-Spectral Total Variation Regularization [49.55649406434796]
We propose a novel non-convex approach to robust principal component analysis for HSI denoising.
We develop accurate approximations to both rank and sparse components.
Experiments on both simulated and real HSIs demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-01-08T11:48:46Z)
- Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity [49.66890309455787]
We introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO.
We prove linear convergence of both methods to a neighborhood of the solution when they use constant step-size.
Our convergence guarantees hold under the arbitrary sampling paradigm, and we give insights into the complexity of minibatching.
arXiv Detail & Related papers (2021-06-30T18:32:46Z)
- Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without the light-tails assumption on the noise.
arXiv Detail & Related papers (2020-05-21T17:05:27Z)
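The entry above, like several others in this list, uses gradient clipping to handle heavy-tailed gradient noise. Below is a minimal, generic clipped-SGD sketch for intuition only; it is not the accelerated clipped-SSTM method itself, and the quadratic objective, step size, and clipping level are illustrative assumptions.

```python
import numpy as np

def clipped_sgd_step(w, grad, lr, clip_level):
    """One SGD step with gradient norm clipping.

    Clipping the stochastic gradient at a fixed level keeps rare,
    heavy-tailed gradient samples from destabilizing the iterates.
    This is a generic clipped-SGD update, not clipped-SSTM.
    """
    norm = np.linalg.norm(grad)
    if norm > clip_level:
        grad = grad * (clip_level / norm)  # rescale onto the clipping radius
    return w - lr * grad

# Toy usage: minimize ||w||^2 / 2 with heavy-tailed (Student-t) gradient noise.
rng = np.random.default_rng(0)
w = np.ones(10)
for t in range(1000):
    noisy_grad = w + rng.standard_t(df=2.0, size=w.shape)  # heavy-tailed noise
    w = clipped_sgd_step(w, noisy_grad, lr=0.01, clip_level=1.0)
```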