On the Global Solution of Soft k-Means
- URL: http://arxiv.org/abs/2212.03589v1
- Date: Wed, 7 Dec 2022 12:06:55 GMT
- Title: On the Global Solution of Soft k-Means
- Authors: Feiping Nie, Hong Chen, Rong Wang, Xuelong Li
- Abstract summary: This paper presents an algorithm to solve the Soft k-Means problem globally.
A new model, named Minimal Volume Soft k-Means (MVSkM), is proposed to address the solution non-uniqueness issue.
- Score: 159.23423824953412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an algorithm to solve the Soft k-Means problem globally.
Unlike Fuzzy c-Means, Soft k-Means (SkM) has a matrix factorization-type
objective and has been shown to have a close relation with the popular
probability decomposition-type clustering methods, e.g., Left Stochastic
Clustering (LSC). Though some work has been done on solving the Soft k-Means
problem, existing methods usually adopt an alternating minimization scheme or the
projected gradient descent method, which cannot guarantee global optimality due to
the non-convexity of SkM. In this paper, we present a sufficient condition for a
feasible solution of Soft k-Means problem to be globally optimal and show the
output of the proposed algorithm satisfies it. Moreover, for the Soft k-Means
problem, we provide interesting discussions on stability, solution
non-uniqueness, and the connection with LSC. Then, a new model, named Minimal
Volume Soft k-Means (MVSkM), is proposed to address the solution
non-uniqueness issue. Finally, experimental results support our theoretical
results.
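To make the setting concrete, the local scheme the abstract refers to can be sketched as follows. This is an illustrative assumption based on the abstract's description: a matrix factorization-type objective min_{F,G} ||X - F G||_F^2 with each column of G on the probability simplex (soft assignments), solved by the alternating-minimization / projected-gradient approach that, as the abstract notes, cannot guarantee global optimality. It is not the paper's globally optimal algorithm, and the exact constraint set of SkM may differ.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of vector v onto the probability simplex
    (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def soft_kmeans_altmin(X, k, n_iter=200, seed=0):
    """Alternating minimization for  min_{F,G} ||X - F G||_F^2
    s.t. each column of G lies on the probability simplex.

    Illustrative sketch only: this local scheme converges to a stationary
    point, not necessarily the global optimum of the non-convex problem."""
    d, n = X.shape
    rng = np.random.default_rng(seed)
    G = rng.random((k, n))
    G /= G.sum(axis=0, keepdims=True)        # random soft assignments
    F = X @ G.T @ np.linalg.pinv(G @ G.T)    # initial factor (d x k)
    objs = []
    for _ in range(n_iter):
        # G-step: one projected-gradient step per column with step 1/L,
        # where L = ||F^T F||_2 is the Lipschitz constant of the gradient in G.
        L = np.linalg.norm(F.T @ F, 2) + 1e-12
        G = G - (F.T @ (F @ G - X)) / L
        G = np.apply_along_axis(project_simplex, 0, G)
        # F-step: exact least-squares update given G.
        F = X @ G.T @ np.linalg.pinv(G @ G.T)
        objs.append(np.linalg.norm(X - F @ G, "fro") ** 2)
    return F, G, objs
```

Because the F-step is an exact minimizer and the G-step uses a 1/L projected-gradient step, the objective is monotonically non-increasing, but the scheme can stall at the bad stationary points the paper's global algorithm is designed to avoid.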
Related papers
- Reweighted Solutions for Weighted Low Rank Approximation [47.790126028106734]
Weighted low rank approximation (WLRA) is an important yet challenging primitive with applications ranging from statistical analysis to signal processing.
In this work, we introduce a new relaxed solution to WLRA which outputs a matrix that is not necessarily low rank, but can be stored using very few parameters.
Our central idea is to use the weight matrix itself to reweight a low rank solution, which gives an extremely simple algorithm with remarkable empirical performance.
arXiv Detail & Related papers (2024-06-04T15:50:35Z)
- Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for the general-sum Markov Games (MGs) under the general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm provides comparable sublinear regret to the existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Multi-Prototypes Convex Merging Based K-Means Clustering Algorithm [20.341309224377866]
Multi-prototypes convex merging based K-Means clustering algorithm (MCKM) is presented.
MCKM is an efficient and explainable clustering algorithm for escaping the undesirable local minima of the K-Means problem without requiring k to be given first.
arXiv Detail & Related papers (2023-02-14T13:57:33Z)
- Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
arXiv Detail & Related papers (2022-11-29T05:52:17Z)
- Stability and Generalization for Markov Chain Stochastic Gradient Methods [49.981789906200035]
We provide a comprehensive generalization analysis of MC-SGMs for both minimization and minimax problems.
We establish the optimal excess population risk bounds for both smooth and non-smooth cases.
We develop the first nearly optimal convergence rates for convex-concave problems both in expectation and with high probability.
arXiv Detail & Related papers (2022-09-16T15:42:51Z)
- Factorization Approach for Low-complexity Matrix Completion Problems: Exponential Number of Spurious Solutions and Failure of Gradient Methods [18.16094563534453]
It is well-known that the Burer-Monteiro (B-M) factorization approach can efficiently solve low-rank matrix optimization problems under the RIP condition.
It is natural to ask whether B-M factorization-based methods can succeed on any low-rank matrix optimization problems with a low information-theoretic complexity.
arXiv Detail & Related papers (2021-10-19T21:52:14Z)
- Saddle Point Optimization with Approximate Minimization Oracle and its Application to Robust Berthing Control [7.347989843033034]
We propose an approach to saddle point optimization relying only on oracles that solve minimization problems approximately.
We analyze its convergence property on a strongly convex--concave problem and show its linear convergence toward the global min-max saddle point.
An implementation of the developed approach using the (1+1)-CMA-ES as the minimization oracle, namely Adversarial-CMA-ES, is shown to outperform several existing approaches on test problems.
arXiv Detail & Related papers (2021-05-25T00:08:47Z)
- A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained Optimization [25.73397307080647]
We consider a general convex or nonconvex machine learning objective under sparse constraints.
We propose a proximal point algorithm that solves each subproblem with gradually relaxed constraint levels.
We demonstrate the effectiveness of our method on numerical problems at scale.
arXiv Detail & Related papers (2020-10-23T05:24:05Z)
- Probabilistic K-means Clustering via Nonlinear Programming [13.026121785720395]
Probabilistic K-Means (PKM) is a nonlinear programming model constrained on linear equalities and linear inequalities.
In theory, the model can be solved by active gradient projection, though inefficiently.
Through experiments, we evaluate the performance of PKM and how well the proposed methods solve it in five aspects.
arXiv Detail & Related papers (2020-01-10T02:40:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.