A Novel Normalized-Cut Solver with Nearest Neighbor Hierarchical
Initialization
- URL: http://arxiv.org/abs/2311.15214v1
- Date: Sun, 26 Nov 2023 07:11:58 GMT
- Title: A Novel Normalized-Cut Solver with Nearest Neighbor Hierarchical
Initialization
- Authors: Feiping Nie, Jitao Lu, Danyang Wu, Rong Wang, Xuelong Li
- Abstract summary: Normalized-Cut (N-Cut) is a famous model of spectral clustering.
Traditional N-Cut solvers are two-stage: 1) calculating the continuous spectral embedding of the normalized Laplacian matrix; 2) discretization via $K$-means or spectral rotation.
We propose a novel N-Cut solver based on the famous coordinate descent method.
- Score: 107.07093621337084
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Normalized-Cut (N-Cut) is a famous model of spectral clustering. The
traditional N-Cut solvers are two-stage: 1) calculating the continuous spectral
embedding of the normalized Laplacian matrix; 2) discretization via $K$-means or
spectral rotation. However, this paradigm brings two vital problems: 1)
two-stage methods solve a relaxed version of the original problem, so they
cannot obtain good solutions for the original N-Cut problem; 2) solving the
relaxed problem requires eigenvalue decomposition, which has $\mathcal{O}(n^3)$
time complexity ($n$ is the number of nodes). To address the problems, we
propose a novel N-Cut solver designed based on the famous coordinate descent
method. Since the vanilla coordinate descent method also has $\mathcal{O}(n^3)$
time complexity, we design various accelerating strategies to reduce the time
complexity to $\mathcal{O}(|E|)$ ($|E|$ is the number of edges). To avoid
reliance on random initialization, which brings uncertainty to clustering, we
propose an efficient initialization method that gives deterministic outputs.
Extensive experiments on several benchmark datasets demonstrate that the
proposed solver can obtain larger N-Cut objective values while achieving better
clustering performance than traditional solvers.
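To make the coordinate-descent idea concrete, below is a minimal sketch of a greedy discrete refinement: each node is moved to the adjacent cluster that most increases $\sum_k \mathrm{assoc}(C_k)/\mathrm{vol}(C_k)$, which is equivalent to decreasing N-Cut since N-Cut $= K - \sum_k \mathrm{assoc}(C_k)/\mathrm{vol}(C_k)$. This is only an illustration of the general approach; the function and variable names are ours, and it implements neither the paper's accelerated updates nor its nearest neighbor hierarchical initialization.

```python
import numpy as np
import scipy.sparse as sp

def refine_ncut(W, y, n_sweeps=20):
    """Greedy coordinate-descent refinement of a discrete N-Cut assignment.

    W: symmetric sparse affinity matrix; y: initial integer labels.
    Maximizes sum_k assoc(C_k)/vol(C_k), i.e. minimizes N-Cut.
    """
    W = sp.csr_matrix(W)
    y = np.array(y)
    n = W.shape[0]
    K = int(y.max()) + 1
    d = np.asarray(W.sum(axis=1)).ravel()            # node degrees
    vol = np.bincount(y, weights=d, minlength=K)     # cluster volumes
    s = np.zeros(K)                                  # within-cluster associations
    for k in range(K):
        idx = np.flatnonzero(y == k)
        s[k] = W[idx][:, idx].sum()

    for _ in range(n_sweeps):
        moved = False
        for i in range(n):
            c = y[i]
            nbrs = W.indices[W.indptr[i]:W.indptr[i + 1]]
            wij = W.data[W.indptr[i]:W.indptr[i + 1]]
            link = np.bincount(y[nbrs], weights=wij, minlength=K)
            w_ii = wij[nbrs == i].sum()              # self-loop weight, if any
            link[c] -= w_ii                          # weight from i into C_c \ {i}
            s_c0 = s[c] - 2.0 * link[c] - w_ii       # assoc of C_c with i removed
            v_c0 = vol[c] - d[i]
            term_c0 = s_c0 / v_c0 if v_c0 > 0 else 0.0
            best_k, best_gain = c, 1e-12
            for k in np.unique(y[nbrs]):             # only clusters adjacent to i
                if k == c:
                    continue
                old_k = s[k] / vol[k] if vol[k] > 0 else 0.0
                new_k = (s[k] + 2.0 * link[k] + w_ii) / (vol[k] + d[i])
                gain = term_c0 + new_k - s[c] / vol[c] - old_k
                if gain > best_gain:
                    best_gain, best_k = gain, k
            if best_k != c:                          # apply the best single-node move
                s[c], vol[c] = s_c0, v_c0
                s[best_k] += 2.0 * link[best_k] + w_ii
                vol[best_k] += d[i]
                y[i] = best_k
                moved = True
        if not moved:
            break
    return y
```

Since moving node $i$ only touches clusters adjacent to $i$, one full sweep costs $\mathcal{O}(|E| + nK)$ with this simple bookkeeping, which is the complexity regime the paper's accelerated solver targets.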
Related papers
- Strictly Low Rank Constraint Optimization -- An Asymptotically
$\mathcal{O}(\frac{1}{t^2})$ Method [5.770309971945476]
We propose a class of non-convex and non-smooth problems with rank regularization to promote sparsity in the optimal solution.
We show that our algorithms achieve an asymptotic convergence rate of $\mathcal{O}(\frac{1}{t^2})$, which matches Nesterov's optimal rate for first-order methods on smooth convex problems.
arXiv Detail & Related papers (2023-07-04T16:55:41Z)
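For reference (a standard fact, not specific to this paper), Nesterov's optimal rate for an $L$-smooth convex objective $f$ with minimizer $x^\ast$ is

$$ f(x_t) - f(x^\ast) \le \frac{2L\|x_0 - x^\ast\|_2^2}{(t+1)^2} = \mathcal{O}\left(\frac{1}{t^2}\right), $$

which is the benchmark the claimed rate matches.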
- An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how our framework extends to large-scale settings, and demonstrate the efficiency and robustness of our methods on large-scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z)
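As background (the standard mirror descent template, not this paper's specific variant; notation ours), with distance-generating function $\psi$ and Bregman divergence $D_\psi$, the update reads

$$ x_{t+1} = \arg\min_{x \in \mathcal{X}} \left\{ \eta_t \langle g_t, x \rangle + D_\psi(x, x_t) \right\}, \qquad D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla \psi(y), x - y \rangle. $$

"Oblivious" here means the step sizes $\eta_t$ are set without knowing the Lipschitz or smoothness constants of the objective.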
- A Block Coordinate Descent Method for Nonsmooth Composite Optimization under Orthogonality Constraints [7.716156977428555]
Nonsmooth composite optimization with orthogonality constraints has a broad spectrum of applications in statistical learning and data science.
OBCD is a new block coordinate descent method for solving such nonsmooth composite problems.
arXiv Detail & Related papers (2023-04-07T13:44:59Z)
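The problem class in question has the standard form (notation ours):

$$ \min_{X \in \mathbb{R}^{n \times r}} f(X) + h(X) \quad \text{s.t.} \quad X^\top X = I_r, $$

where $f$ is smooth and $h$ is a possibly nonsmooth regularizer; block coordinate methods of this kind update a small block of $X$ per iteration while keeping the orthogonality constraint satisfied.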
- Private estimation algorithms for stochastic block models and mixture models [63.07482515700984]
We develop general tools for designing efficient private estimation algorithms.
We give the first efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z)
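For reference, the standard guarantee referenced above: a randomized mechanism $M$ is $(\epsilon, \delta)$-differentially private if for all neighboring datasets $D, D'$ and all measurable sets $S$,

$$ \Pr[M(D) \in S] \le e^{\epsilon} \Pr[M(D') \in S] + \delta. $$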
- Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of the sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z)
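Coupled compositional problems of the kind MSVR targets have the general form (a standard formulation; notation ours):

$$ \min_x \frac{1}{n} \sum_{i=1}^{n} f_i\big(g_i(x)\big), $$

where each inner mapping $g_i$ is only accessible through stochastic samples; since $\mathbb{E}[f_i(\hat{g}_i)] \neq f_i(\mathbb{E}[\hat{g}_i])$ in general, naive stochastic gradients are biased, and variance-reduced estimates of $g_i(x)$ are used to control that bias.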
- Randomized Stochastic Variance-Reduced Methods for Stochastic Bilevel Optimization [62.87181271021217]
We consider nonconvex stochastic bilevel optimization (SBO) problems, which have many applications in machine learning.
This paper proposes fast randomized algorithms for nonconvex SBO problems.
arXiv Detail & Related papers (2021-05-05T18:28:42Z)
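The stochastic bilevel optimization (SBO) template these methods address is (standard form; notation ours):

$$ \min_x F(x) = f\big(x, y^\ast(x)\big) \quad \text{s.t.} \quad y^\ast(x) = \arg\min_y g(x, y), $$

where $f$ and $g$ are expectations over random data and the lower-level objective $g(x, \cdot)$ is typically assumed strongly convex so that $y^\ast(x)$ is well defined.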
- Distributed Stochastic Consensus Optimization with Momentum for Nonconvex Nonsmooth Problems [45.88640334610308]
This paper presents a new distributed stochastic optimization algorithm for nonconvex nonsmooth problems.
We analyze the communication complexity that the proposed algorithm can achieve.
Experiments are presented to illustrate the effectiveness of the proposed algorithms.
arXiv Detail & Related papers (2020-11-10T13:12:21Z)
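Distributed consensus optimization here refers to $N$ workers jointly solving (standard formulation; notation ours):

$$ \min_x \frac{1}{N} \sum_{i=1}^{N} f_i(x), $$

where worker $i$ only evaluates stochastic (sub)gradients of its local $f_i$ and exchanges iterates with its neighbors; communication complexity counts these exchanges.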
- Stochastic Multi-level Composition Optimization Algorithms with Level-Independent Convergence Rates [12.783783498844022]
We study smooth multi-level composition optimization problems, where the objective function is a nested composition of $T$ functions.
We show that the first algorithm, which generalizes the method of [GhaRuswan20] to the $T$-level case, can achieve a sample complexity of $\mathcal{O}(1/\epsilon^6)$.
This is the first time that such an online algorithm designed for the (un)constrained multi-level setting obtains the same sample complexity under standard assumptions.
arXiv Detail & Related papers (2020-08-24T15:57:50Z)
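The $T$-level objective studied here is a nested composition (per the summary; notation ours):

$$ \min_x F(x) = f_T \circ f_{T-1} \circ \cdots \circ f_1(x), $$

where each level $f_i$ is accessed through a stochastic oracle; "level-independent" means the convergence rate does not degrade as $T$ grows.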
- Provably Convergent Working Set Algorithm for Non-Convex Regularized Regression [0.0]
This paper proposes a working set algorithm for non-convex regularizers with convergence guarantees.
Our results demonstrate high computational gains over solving the full problem, for both block-coordinate and gradient solvers.
arXiv Detail & Related papers (2020-06-24T07:40:31Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best known complexity for solving least-squares problems, with no condition-number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
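As a concrete illustration of the embedding idea above, here is a minimal sketch-and-solve baseline using a Gaussian projection (the simpler of the two embeddings named; a subsampled randomized Hadamard transform would replace $S$). This is only an illustrative baseline, not the paper's method, which uses such embeddings inside iterative first-order solvers; the function name and sketch size are ours.

```python
import numpy as np

def sketch_and_solve_lstsq(A, b, m, seed=None):
    """Approximately solve min_x ||Ax - b||_2 via a Gaussian sketch.

    A: (n, d) with n >> d; m: sketch size with d < m << n.
    Solves the much smaller m x d problem min_x ||S(Ax - b)||_2.
    """
    rng = np.random.default_rng(seed)
    n, _ = A.shape
    # Gaussian embedding; the 1/sqrt(m) scaling makes E||Sv||^2 = ||v||^2
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# Usage: compare the sketched residual against the exact solution's
rng = np.random.default_rng(0)
A = rng.standard_normal((20000, 50))
b = rng.standard_normal(20000)
x_hat = sketch_and_solve_lstsq(A, b, m=500, seed=1)
x_opt, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(A @ x_hat - b) / np.linalg.norm(A @ x_opt - b))
```

Accuracy improves as the sketch size $m$ grows relative to the number of columns $d$; the printed ratio compares the sketched residual to the optimal one.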
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.