Rank-one partitioning: formalization, illustrative examples, and a new cluster enhancing strategy
- URL: http://arxiv.org/abs/2009.00365v1
- Date: Tue, 1 Sep 2020 11:37:28 GMT
- Title: Rank-one partitioning: formalization, illustrative examples, and a new cluster enhancing strategy
- Authors: Charlotte Laclau, Franck Iutzeler, Ievgen Redko
- Abstract summary: We introduce and formalize a rank-one partitioning learning paradigm that unifies partitioning methods.
We propose a novel algorithmic solution for the partitioning problem based on rank-one matrix factorization and denoising of piecewise constant signals.
- Score: 17.166794984161967
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce and formalize a rank-one partitioning learning
paradigm that unifies partitioning methods that proceed by summarizing a data
set using a single vector that is further used to derive the final clustering
partition. Using this unification as a starting point, we propose a novel
algorithmic solution for the partitioning problem based on rank-one matrix
factorization and denoising of piecewise constant signals. Finally, we provide
an empirical demonstration of our findings and show the robustness of the
proposed denoising step. We believe that our work provides a new point of
view on several unsupervised learning techniques and helps to gain a deeper
understanding of the general mechanisms of data partitioning.
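The summarize-then-partition idea from the abstract can be sketched in a few lines. This is a minimal illustration of the rank-one paradigm (summarize the data with the leading singular vector, then derive a partition from that single vector), not the authors' exact algorithm; in particular, the piecewise-constant denoising step is omitted here and a simple sign threshold stands in for it.

```python
import numpy as np

def rank_one_partition(X, threshold=0.0):
    # Best rank-one approximation X ~ s * u v^T via SVD; the leading left
    # singular vector u summarizes all rows of X in a single vector.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    u = U[:, 0]
    # Derive a two-way partition by thresholding the summary vector.
    return (u > threshold).astype(int)

# Two well-separated Gaussian blobs; centering makes the sign of the
# leading singular vector separate them.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 5)),
               rng.normal(3, 0.1, (10, 5))])
labels = rank_one_partition(X - X.mean(axis=0))
```

Note that the sign of a singular vector is arbitrary, so the labels 0/1 may swap between runs of different SVD implementations; only the grouping is meaningful.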
Related papers
- Unified Multi-View Orthonormal Non-Negative Graph Based Clustering Framework [74.25493157757943]
We formulate a novel clustering model, which exploits the non-negative feature property and incorporates the multi-view information into a unified joint learning framework.
We also explore, for the first time, the multi-model non-negative graph-based approach to clustering data based on deep features.
arXiv Detail & Related papers (2022-11-03T08:18:27Z)
- Federated Representation Learning via Maximal Coding Rate Reduction [109.26332878050374]
We propose a methodology to learn low-dimensional representations from a dataset that is distributed among several clients.
Our proposed method, which we refer to as FLOW, utilizes MCR2 as the objective of choice, hence resulting in representations that are both between-class discriminative and within-class compressible.
arXiv Detail & Related papers (2022-10-01T15:43:51Z)
- Leachable Component Clustering [10.377914682543903]
In this work, a novel approach to clustering of incomplete data, termed leachable component clustering, is proposed.
The proposed method handles data imputation with Bayes alignment, and collects the lost patterns in theory.
Experiments on several artificial incomplete data sets demonstrate that the proposed method presents superior performance compared with other state-of-the-art algorithms.
arXiv Detail & Related papers (2022-08-28T13:13:17Z)
- Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z)
- A Proposition-Level Clustering Approach for Multi-Document Summarization [82.4616498914049]
We revisit the clustering approach, grouping together propositions for more precise information alignment.
Our method detects salient propositions, clusters them into paraphrastic clusters, and generates a representative sentence for each cluster by fusing its propositions.
Our summarization method improves over the previous state-of-the-art MDS method in the DUC 2004 and TAC 2011 datasets.
arXiv Detail & Related papers (2021-12-16T10:34:22Z)
- An iterative coordinate descent algorithm to compute sparse low-rank approximations [2.271697531183735]
We describe a new algorithm to build a few sparse principal components from a given data matrix.
We show the performance of the proposed algorithm to recover sparse principal components on various datasets from the literature.
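To make the sparse-principal-component goal concrete, the following sketch computes a sparse leading component with generic truncated power iteration (keep only the k largest-magnitude coordinates at each step). This is a common stand-in for illustration, not the paper's iterative coordinate descent algorithm.

```python
import numpy as np

def sparse_leading_component(X, k, iters=100):
    # Truncated power iteration on A = X^T X: after each multiplication,
    # zero out all but the k largest-magnitude entries, then renormalize.
    A = X.T @ X
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        v = A @ v
        small = np.argsort(np.abs(v))[:-k]  # indices of the n - k smallest
        v[small] = 0.0
        v /= np.linalg.norm(v)
    return v

# Data whose variance is concentrated on the first two features,
# so a 2-sparse leading component should select them.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
X[:, :2] *= 5
v = sparse_leading_component(X, k=2)
```

The hard sparsity constraint makes the problem non-convex; like coordinate descent, this iteration only guarantees a stationary point, not the global optimum.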
arXiv Detail & Related papers (2021-07-30T13:11:37Z)
- Weighted Sparse Subspace Representation: A Unified Framework for Subspace Clustering, Constrained Clustering, and Active Learning [0.3553493344868413]
We first propose a novel spectral-based subspace clustering algorithm that seeks to represent each point as a sparse convex combination of a few nearby points.
We then extend the algorithm to constrained clustering and active learning settings.
Our motivation for developing such a framework stems from the fact that, typically, either a small amount of labelled data is available in advance or it is possible to label some points at a cost.
arXiv Detail & Related papers (2021-06-08T13:39:43Z)
- Multi-view Clustering via Deep Matrix Factorization and Partition Alignment [43.56334737599984]
We propose a novel multi-view clustering algorithm via deep matrix decomposition and partition alignment.
An alternating optimization algorithm is developed to solve the optimization problem with proven convergence.
arXiv Detail & Related papers (2021-05-01T15:06:57Z)
- Graph-Embedded Subspace Support Vector Data Description [98.78559179013295]
We propose a novel subspace learning framework for one-class classification.
The proposed framework presents the problem in the form of graph embedding.
We demonstrate improved performance against the baselines and the recently proposed subspace learning methods for one-class classification.
arXiv Detail & Related papers (2021-04-29T14:30:48Z)
- Partition-based formulations for mixed-integer optimization of trained ReLU neural networks [66.88252321870085]
This paper introduces a class of mixed-integer formulations for trained ReLU neural networks.
At one extreme, one partition per input recovers the convex hull of a node, i.e., the tightest possible formulation for each node.
arXiv Detail & Related papers (2021-02-08T17:27:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.