Nonlinear Dimensionality Reduction for Data Visualization: An
Unsupervised Fuzzy Rule-based Approach
- URL: http://arxiv.org/abs/2004.03922v1
- Date: Wed, 8 Apr 2020 10:33:06 GMT
- Title: Nonlinear Dimensionality Reduction for Data Visualization: An
Unsupervised Fuzzy Rule-based Approach
- Authors: Suchismita Das and Nikhil R. Pal
- Abstract summary: We propose an unsupervised fuzzy rule-based dimensionality reduction method primarily for data visualization.
We use a first-order Takagi-Sugeno type model and generate rule antecedents using clusters in the input data.
We apply the proposed method on three synthetic and three real-world data sets and visually compare the results with four other standard data visualization methods.
- Score: 5.5612170847190665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Here, we propose an unsupervised fuzzy rule-based dimensionality reduction
method primarily for data visualization. It considers the following important
issues relevant to dimensionality reduction-based data visualization: (i)
preservation of neighborhood relationships, (ii) handling data on a non-linear
manifold, (iii) the capability of predicting projections for new test data
points, (iv) interpretability of the system, and (v) the ability to reject test
points if required. For this, we use a first-order Takagi-Sugeno type model. We
generate rule antecedents using clusters in the input data. In this context, we
also propose a new variant of the Geodesic c-means clustering algorithm. We
estimate the rule parameters by minimizing an error function that preserves the
inter-point geodesic distances (distances over the manifold) as Euclidean
distances on the projected space. We apply the proposed method on three
synthetic and three real-world data sets and visually compare the results with
four other standard data visualization methods. The obtained results show that
the proposed method behaves desirably and performs better than, or comparably to,
the methods it is compared with. The proposed method is found to be robust to the
initial conditions. The predictability of the proposed method for test points
is validated by experiments. We also assess the ability of our method to reject
test points when it should. Then, we extend this concept to provide a general
framework for learning an unsupervised fuzzy model for data projection with
different objective functions. To the best of our knowledge, this is the first
attempt at manifold learning using unsupervised fuzzy modeling.
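The abstract describes the core recipe: cluster the data to obtain rule antecedents, attach a linear (first-order) consequent to each rule, and fit the consequent parameters so that geodesic distances in the input space are reproduced as Euclidean distances in the projection. The following is a minimal sketch of that idea, not the authors' implementation: it uses ordinary k-means as a stand-in for the proposed Geodesic c-means variant, Gaussian antecedent memberships, and a Sammon-like stress as the distance-preserving error function, all of which are assumptions for illustration.

```python
# Minimal sketch (not the paper's code) of a first-order Takagi-Sugeno (TS) model
# that projects data to 2D while trying to preserve inter-point geodesic distances
# as Euclidean distances in the projection.
import numpy as np
from scipy.optimize import minimize
from scipy.sparse.csgraph import shortest_path
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)

# Toy data on a curved 1-D manifold embedded in 3-D.
t = np.sort(rng.uniform(0, 3 * np.pi, 80))
X = np.c_[np.cos(t), np.sin(t), 0.3 * t] + 0.02 * rng.normal(size=(80, 3))
n, d, out_dim, n_rules = X.shape[0], X.shape[1], 2, 4

# Geodesic distances: shortest paths over a k-nearest-neighbour graph (Isomap-style).
# The graph is assumed to be connected for this toy example.
knn = kneighbors_graph(X, n_neighbors=8, mode="distance")
D_geo = shortest_path(knn, directed=False)

# Rule antecedents: Gaussian memberships around cluster centres
# (plain k-means here, standing in for the Geodesic c-means variant).
km = KMeans(n_clusters=n_rules, n_init=10, random_state=0).fit(X)
centres = km.cluster_centers_
sigma = np.mean([np.linalg.norm(X[km.labels_ == i] - centres[i], axis=1).mean()
                 for i in range(n_rules)]) + 1e-6

def firing_strengths(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    mu = np.exp(-d2 / (2 * sigma ** 2))
    return mu / mu.sum(axis=1, keepdims=True)      # normalised, shape (n, n_rules)

W = firing_strengths(X)

def project(params, X, W):
    # First-order TS consequents: rule i contributes A_i x + b_i.
    A = params[: n_rules * out_dim * d].reshape(n_rules, out_dim, d)
    b = params[n_rules * out_dim * d:].reshape(n_rules, out_dim)
    rule_out = np.einsum("rod,nd->nro", A, X) + b[None]   # (n, n_rules, out_dim)
    return (W[:, :, None] * rule_out).sum(axis=1)         # firing-strength-weighted sum

iu = np.triu_indices(n, k=1)

def stress(params):
    Y = project(params, X, W)
    d_low = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)[iu]
    return ((D_geo[iu] - d_low) ** 2 / (D_geo[iu] + 1e-9)).sum()   # Sammon-like stress

params0 = 0.1 * rng.normal(size=n_rules * out_dim * (d + 1))
res = minimize(stress, params0, method="L-BFGS-B")        # finite-difference gradients
Y = project(res.x, X, W)
print("final stress:", res.fun, "embedding shape:", Y.shape)
```

Because the fitted rules define an explicit map, projecting a new test point only requires evaluating firing_strengths and project on it, which is how a model of this kind supports out-of-sample prediction; points with near-zero firing strength for every rule could likewise be flagged for rejection.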
Related papers
- Efficient Prior Calibration From Indirect Data [5.588334720483076]
This paper is concerned with learning the prior model from data, in particular, learning the prior from multiple realizations of indirect data obtained through the noisy observation process.
An efficient residual-based neural operator approximation of the forward model is proposed and it is shown that this may be learned concurrently with the pushforward map.
arXiv Detail & Related papers (2024-05-28T08:34:41Z)
- DF2: Distribution-Free Decision-Focused Learning [53.2476224456902]
Decision-focused learning (DFL) has recently emerged as a powerful approach for predict-then-optimize problems.
Existing end-to-end DFL methods are hindered by three significant bottlenecks: model error, sample average approximation error, and distribution-based parameterization of the expected objective.
We present DF2, the first distribution-free decision-focused learning method explicitly designed to address these three bottlenecks.
arXiv Detail & Related papers (2023-08-11T00:44:46Z)
- Normal Transformer: Extracting Surface Geometry from LiDAR Points Enhanced by Visual Semantics [6.516912796655748]
This paper presents a technique for estimating the normal from 3D point clouds and 2D colour images.
We have developed a transformer neural network that learns to utilise the hybrid information of visual semantic and 3D geometric data.
arXiv Detail & Related papers (2022-11-19T03:55:09Z)
- Laplacian-based Cluster-Contractive t-SNE for High Dimensional Data Visualization [20.43471678277403]
We propose LaptSNE, a new graph-based dimensionality reduction method based on t-SNE.
Specifically, LaptSNE leverages the eigenvalue information of the graph Laplacian to shrink the potential clusters in the low-dimensional embedding.
We show how to calculate the gradient analytically, which may be of broad interest when considering optimization with Laplacian-composited objective.
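As a rough illustration of the Laplacian-eigenvalue idea in the summary above (not the LaptSNE code), the sketch below computes the sum of the k smallest eigenvalues of a graph Laplacian built on the low-dimensional embedding; adding such a term to a t-SNE-style objective penalises embeddings that lack roughly k well-separated clusters. The Gaussian affinity, bandwidth, and k are illustrative assumptions.

```python
# Illustrative Laplacian-eigenvalue penalty on an embedding Y (not the LaptSNE code).
import numpy as np

def laplacian_eigen_penalty(Y, k=3, sigma=1.0):
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))          # dense Gaussian affinity on the embedding
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W              # unnormalised graph Laplacian
    eigvals = np.linalg.eigvalsh(L)             # eigenvalues in ascending order
    return eigvals[:k].sum()                    # small when ~k clusters are well separated

# In a LaptSNE-style objective this term would be added to the usual t-SNE
# KL-divergence loss: total_loss = kl_divergence(P, Q(Y)) + lam * laplacian_eigen_penalty(Y)
```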
arXiv Detail & Related papers (2022-07-25T14:10:24Z)
- 6D Rotation Representation For Unconstrained Head Pose Estimation [2.1485350418225244]
We address the problem of ambiguous rotation labels by introducing the rotation matrix formalism for our ground truth data.
This way, our method can learn the full rotation appearance, in contrast to previous approaches that restrict the pose prediction to a narrow angle range.
Experiments on the public AFLW2000 and BIWI datasets demonstrate that our proposed method significantly outperforms other state-of-the-art methods by up to 20%.
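Head-pose methods that regress a 6D rotation representation typically map it to a proper rotation matrix with a Gram-Schmidt step (the continuous representation of Zhou et al., 2019). The sketch below shows that standard construction as a plausible reading of the summary above; it is not claimed to be this paper's exact code.

```python
# Common Gram-Schmidt mapping from a 6-D rotation representation to a rotation matrix.
import numpy as np

def six_d_to_rotation_matrix(x6):
    a1, a2 = x6[:3], x6[3:]
    b1 = a1 / np.linalg.norm(a1)           # first column: normalise
    a2 = a2 - np.dot(b1, a2) * b1          # remove the component along b1
    b2 = a2 / np.linalg.norm(a2)           # second column: orthonormal to b1
    b3 = np.cross(b1, b2)                  # third column completes a right-handed frame
    return np.stack([b1, b2, b3], axis=1)  # orthonormal columns, det = +1

R = six_d_to_rotation_matrix(np.array([0.9, 0.1, 0.0, -0.2, 1.1, 0.3]))
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```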
arXiv Detail & Related papers (2022-02-25T08:41:13Z)
- Riemannian classification of EEG signals with missing values [67.90148548467762]
This paper proposes two strategies to handle missing data for the classification of electroencephalograms.
The first approach estimates the covariance from imputed data with the $k$-nearest neighbors algorithm; the second relies on the observed data by leveraging the observed-data likelihood within an expectation-maximization algorithm.
As the results show, the proposed strategies perform better than classification based on the observed data only and maintain high accuracy even when the missing-data ratio increases.
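A minimal sketch of the first strategy mentioned above, assuming standard tools: impute missing EEG samples with k-nearest neighbours and then estimate the spatial covariance matrix that a Riemannian classifier would operate on. The shapes, the missing-value pattern, and k are illustrative; the EM-based second strategy is not shown.

```python
# Illustrative kNN imputation followed by covariance estimation (not the authors' code).
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
eeg = rng.normal(size=(8, 250))                  # 8 channels x 250 time samples (toy data)
mask = rng.uniform(size=eeg.shape) < 0.1         # knock out ~10% of the values
eeg_missing = np.where(mask, np.nan, eeg)

# KNNImputer works feature-wise, so treat time samples as rows and channels as columns.
imputed = KNNImputer(n_neighbors=5).fit_transform(eeg_missing.T).T
cov = np.cov(imputed)                            # 8 x 8 spatial covariance estimate
print(cov.shape)
```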
arXiv Detail & Related papers (2021-10-19T14:24:50Z)
- Bayesian Imaging With Data-Driven Priors Encoded by Neural Networks: Theory, Methods, and Algorithms [2.266704469122763]
This paper proposes a new methodology for performing Bayesian inference in imaging inverse problems where the prior knowledge is available in the form of training data.
We establish the existence and well-posedness of the associated posterior moments under easily verifiable conditions.
A model accuracy analysis suggests that the Bayesian probabilities reported by the data-driven models are also remarkably accurate under a frequentist definition.
arXiv Detail & Related papers (2021-03-18T11:34:08Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
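The summary above frames the auxiliary objectives as mutual information maximization. A generic InfoNCE-style estimator, sketched below with placeholder embeddings, is one common way such objectives are implemented; S^3-Rec's four objectives pair attributes, items, subsequences, and sequences in specific ways that this sketch does not reproduce.

```python
# Generic InfoNCE-style mutual-information objective over paired embeddings (illustrative only).
import numpy as np

def info_nce(queries, keys, temperature=0.1):
    # queries[i] and keys[i] are embeddings of two views of the same object.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature                            # similarity of every query to every key
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                       # matching pairs sit on the diagonal

rng = np.random.default_rng(0)
print(info_nce(rng.normal(size=(16, 32)), rng.normal(size=(16, 32))))
```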
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Deep Dimension Reduction for Supervised Representation Learning [51.10448064423656]
We propose a deep dimension reduction approach to learning representations with essential characteristics.
The proposed approach is a nonparametric generalization of the sufficient dimension reduction method.
We show that the estimated deep nonparametric representation is consistent in the sense that its excess risk converges to zero.
arXiv Detail & Related papers (2020-06-10T14:47:43Z)