Granular Computing: An Augmented Scheme of Degranulation Through a
Modified Partition Matrix
- URL: http://arxiv.org/abs/2004.03379v1
- Date: Fri, 3 Apr 2020 03:20:09 GMT
- Authors: Kaijie Xu, Witold Pedrycz, Zhiwu Li, and Mengdao Xing
- Abstract summary: Information granules forming an abstract and efficient characterization of large volumes of numeric data have been considered as the fundamental constructs of Granular Computing.
Previous studies have shown that there is a relationship between the reconstruction error and the performance of the granulation process.
To enhance the quality of degranulation, in this study, we develop an augmented scheme through modifying the partition matrix.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an important technology in artificial intelligence, Granular
Computing (GrC) has emerged as a new multi-disciplinary paradigm and received
much attention in recent years. Information granules, forming an abstract and
efficient characterization of large volumes of numeric data, have been
considered the fundamental constructs of GrC. By generating prototypes and a
partition matrix, fuzzy clustering is a commonly encountered way of information
granulation. Degranulation involves data reconstruction completed on the basis of
the granular representatives. Previous studies have shown that there is a
relationship between the reconstruction error and the performance of the
granulation process. Typically, the lower the degranulation error is, the
better the performance of granulation. However, the existing methods of
degranulation usually cannot restore the original numeric data, which is one of
the important reasons behind the occurrence of the reconstruction error. To
enhance the quality of degranulation, in this study, we develop an augmented
scheme through modifying the partition matrix. By proposing the augmented
scheme, we dwell on a novel collection of granulation-degranulation mechanisms.
In the constructed approach, the prototypes can be expressed as the product of
the dataset matrix and the partition matrix. Then, in the degranulation
process, the reconstructed numeric data can be decomposed into the product of
the partition matrix and the matrix of prototypes. Both the granulation and
degranulation are regarded as a generalized rotation between the data subspace
and the prototype subspace, governed by the partition matrix and the
fuzzification factor. The modified partition matrix is constructed from the
original one through a series of matrix operations. We offer a thorough analysis
of the developed scheme. The experimental results are in agreement with the
underlying conceptual framework.
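As a minimal sketch of the generic FCM-style granulation-degranulation mechanism the abstract describes (prototypes as a normalized product of the partition matrix and the data matrix, reconstruction as a product of the partition matrix and the prototype matrix), the following NumPy code may help. The function names are illustrative, and the paper's augmented partition-matrix modification is not reproduced here:

```python
import numpy as np

def granulate(X, U, m=2.0):
    """Form prototypes from data X (N x d) and partition matrix U (N x c).

    Prototypes are a normalized product of the fuzzified partition
    matrix and the data matrix, as in standard Fuzzy C-Means.
    """
    Um = U ** m                                   # fuzzified memberships
    V = (Um.T @ X) / Um.sum(axis=0)[:, None]      # prototypes, c x d
    return V

def degranulate(U, V, m=2.0):
    """Reconstruct numeric data as the normalized product of the
    fuzzified partition matrix and the prototype matrix (N x d)."""
    Um = U ** m
    return (Um @ V) / Um.sum(axis=1)[:, None]

def reconstruction_error(X, U, m=2.0):
    """Degranulation (reconstruction) error used to assess
    the quality of the granulation process."""
    V = granulate(X, U, m)
    X_hat = degranulate(U, V, m)
    return np.linalg.norm(X - X_hat) ** 2
```

Under this view, lowering `reconstruction_error` corresponds to improving the quality of the granulation process; the paper's scheme pursues this by modifying `U` rather than the prototypes directly.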
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- Generalized Low-Rank Matrix Completion Model with Overlapping Group Error Representation [3.457484690890009]
The low-rank matrix completion (LRMC) technology has achieved remarkable results in low-level visual tasks.
There is an underlying assumption in LRMC that real-world matrix data is low-rank.
In practice, real matrix data does not strictly satisfy the low-rank property, which presents serious challenges for the above-mentioned matrix recovery methods.
arXiv Detail & Related papers (2024-07-11T14:01:57Z)
- Input Guided Multiple Deconstruction Single Reconstruction neural network models for Matrix Factorization [0.0]
This paper develops two models based on the concept of Non-negative Matrix Factorization (NMF).
They aim to deal with high-dimensional data by discovering its low-rank approximation through a unique pair of factor matrices.
The superiority of the low-dimensional embedding over the original data, which justifies the need for dimension reduction, has been established.
arXiv Detail & Related papers (2024-05-22T08:41:32Z)
- Large-scale gradient-based training of Mixtures of Factor Analyzers [67.21722742907981]
This article contributes both a theoretical analysis as well as a new method for efficient high-dimensional training by gradient descent.
We prove that MFA training and inference/sampling can be performed based on precision matrices, which does not require matrix inversions after training is completed.
Besides the theoretical analysis, we apply MFA to typical image datasets such as SVHN and MNIST, and demonstrate the ability to perform sample generation and outlier detection.
arXiv Detail & Related papers (2023-08-26T06:12:33Z)
- Multi-modal Multi-view Clustering based on Non-negative Matrix Factorization [0.0]
We propose a study on multi-modal clustering algorithms and present a novel method called multi-modal multi-view non-negative matrix factorization.
The experimental results show the value of the proposed approach, which was evaluated using a variety of data sets.
arXiv Detail & Related papers (2023-08-09T08:06:03Z)
- Mode-wise Principal Subspace Pursuit and Matrix Spiked Covariance Model [13.082805815235975]
We introduce a novel framework called Mode-wise Principal Subspace Pursuit (MOP-UP) to extract hidden variations in both the row and column dimensions for matrix data.
The effectiveness and practical merits of the proposed framework are demonstrated through experiments on both simulated and real datasets.
arXiv Detail & Related papers (2023-07-02T13:59:47Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
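As a hedged sketch (the summary above does not give the exact objective, so the symbols here are illustrative), a reconstruction error combined with $l_{2,p}$-norm regularization typically takes the form

$$\min_{W}\ \|X - X W W^{\top}\|_F^2 + \lambda \|W\|_{2,p},$$

where $X$ is the data matrix, $W$ is a projection matrix whose row-wise sparsity selects features, and $\lambda$ balances sparsity against reconstruction fidelity.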
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-
arXiv Detail & Related papers (2020-08-18T04:46:22Z)
- Augmentation of the Reconstruction Performance of Fuzzy C-Means with an Optimized Fuzzification Factor Vector [99.19847674810079]
Fuzzy C-Means (FCM) is one of the most frequently used methods to construct information granules.
In this paper, we augment the FCM-based degranulation mechanism by introducing a vector of fuzzification factors.
Experiments completed for both synthetic and publicly available datasets show that the proposed approach outperforms the generic data reconstruction approach.
arXiv Detail & Related papers (2020-04-13T04:17:30Z)