Evaluation of the impact of the indiscernibility relation on the
fuzzy-rough nearest neighbours algorithm
- URL: http://arxiv.org/abs/2211.14134v1
- Date: Fri, 25 Nov 2022 14:17:56 GMT
- Title: Evaluation of the impact of the indiscernibility relation on the
fuzzy-rough nearest neighbours algorithm
- Authors: Henri Bollaert and Chris Cornelis
- Abstract summary: Fuzzy-rough nearest neighbours (FRNN) is a classification algorithm based on the classical k-nearest neighbours algorithm.
In this paper, we investigate the impact of the indiscernibility relation on the performance of FRNN classification.
- Score: 1.4213973379473654
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Fuzzy rough sets are well-suited for working with vague, imprecise or
uncertain information and have been successfully applied in real-world
classification problems. One of the prominent representatives of this theory is
fuzzy-rough nearest neighbours (FRNN), a classification algorithm based on the
classical k-nearest neighbours algorithm. The crux of FRNN is the
indiscernibility relation, which measures how similar two elements in the data
set of interest are. In this paper, we investigate the impact of this
indiscernibility relation on the performance of FRNN classification. In
addition to relations based on distance functions and kernels, we also explore
the effect of distance metric learning on FRNN for the first time. Furthermore,
we introduce an asymmetric, class-specific relation based on the Mahalanobis
distance which exploits the correlation within each class; it yields a
significant improvement over the regular Mahalanobis distance, but is still
outperformed by the Manhattan distance. Overall, the Neighbourhood Components
Analysis algorithm is found to be the best performer, trading speed for
accuracy.
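To make the role of the indiscernibility relation concrete, here is a minimal FRNN-style sketch in Python. It is not the authors' exact formulation: the relation R(a, x) = 1 - mean |a_j - x_j| assumes [0, 1]-scaled features, and averaging over the k most similar instances stands in for the OWA-weighted approximations used in modern FRNN variants.

```python
import numpy as np

def frnn_predict(X_train, y_train, X_test, k=5):
    """Minimal FRNN-style classification with crisp classes (illustrative)."""
    classes = np.unique(y_train)
    preds = np.empty(len(X_test), dtype=y_train.dtype)
    for i, x in enumerate(X_test):
        # Indiscernibility relation: R(a, x) = 1 - mean |a_j - x_j|,
        # assuming every feature has been scaled to [0, 1] beforehand.
        R = 1.0 - np.abs(X_train - x).mean(axis=1)
        scores = []
        for c in classes:
            in_c = R[y_train == c]    # similarities to members of class c
            out_c = R[y_train != c]   # similarities to the complement
            # Upper approximation: closeness to the class, softened by
            # averaging the k most similar members (a uniform-weight OWA
            # stand-in for the strict maximum).
            upper = np.sort(in_c)[-k:].mean()
            # Lower approximation: distance from the complement.
            lower = 1.0 - np.sort(out_c)[-k:].mean()
            scores.append((upper + lower) / 2.0)
        preds[i] = classes[int(np.argmax(scores))]
    return preds
```

Swapping R for a kernel-based or learned Mahalanobis-based relation leaves the rest of the procedure untouched, which is precisely the axis the paper varies.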
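The class-specific Mahalanobis relation can be sketched in the same style. Each class contributes its own covariance estimate, so the similarity of a training instance a to a test instance x depends on the class of a; that asymmetry is the point. The Gaussian form and the gamma parameter below are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def class_inverse_covariances(X_train, y_train):
    """Per-class (pseudo-)inverse covariance matrices; pinv guards
    against singular class covariances."""
    return {
        c: np.linalg.pinv(np.cov(X_train[y_train == c], rowvar=False))
        for c in np.unique(y_train)
    }

def class_mahalanobis_relation(a, x, c, inv_covs, gamma=1.0):
    """R_c(a, x) = exp(-gamma * d_M(a, x)^2), where d_M is computed
    from the covariance of class c (the class of the training instance a)."""
    d = a - x
    return np.exp(-gamma * d @ inv_covs[c] @ d)
```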
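Distance metric learning slots in upstream of any nearest-neighbour method. A sketch of the Neighbourhood Components Analysis step using scikit-learn, feeding a plain k-NN classifier here rather than FRNN, purely for brevity:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# NCA learns a linear transformation that improves nearest-neighbour
# accuracy; classification then happens in the transformed space.
model = make_pipeline(
    MinMaxScaler(),
    NeighborhoodComponentsAnalysis(random_state=0),
    KNeighborsClassifier(n_neighbors=5),
)
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))
```

The speed-for-accuracy trade-off mentioned in the abstract comes from the iterative optimisation NCA performs at training time.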
Related papers
- Adaptive $k$-nearest neighbor classifier based on the local estimation of the shape operator [49.87315310656657]
We introduce a new adaptive $k$-nearest neighbours ($kK$-NN) algorithm that explores the local curvature at a sample to adaptively define the neighborhood size.
Results on many real-world datasets indicate that the new $kK$-NN algorithm yields superior balanced accuracy compared to the established $k$-NN method.
arXiv Detail & Related papers (2024-09-08T13:08:45Z)
- Rethinking k-means from manifold learning perspective [122.38667613245151]
We present a new clustering algorithm which directly detects clusters from the data without mean estimation.
Specifically, we construct the distance matrix between data points using a Butterworth filter.
To fully exploit the complementary information embedded in different views, we leverage tensor Schatten p-norm regularization.
arXiv Detail & Related papers (2023-05-12T03:01:41Z)
- Revisiting Rotation Averaging: Uncertainties and Robust Losses [51.64986160468128]
We argue that the main problem of current methods is the minimized cost function, which is only weakly connected with the input data via the estimated epipolar geometries.
We propose to better model the underlying noise distributions by directly propagating the uncertainty from the point correspondences into the rotation averaging.
arXiv Detail & Related papers (2023-03-09T11:51:20Z)
- Robust affine point matching via quadratic assignment on Grassmannians [50.366876079978056]
Robust Affine Matching with Grassmannians (RoAM) is a new algorithm to perform affine registration of point clouds.
The algorithm is based on minimizing the Frobenius distance between two elements of the Grassmannian.
arXiv Detail & Related papers (2023-03-05T15:27:24Z)
- DNNR: Differential Nearest Neighbors Regression [8.667550264279166]
K-nearest neighbors (KNN) is one of the earliest and most established algorithms in machine learning.
For regression tasks, KNN averages the targets within a neighborhood, which poses a number of challenges.
We propose Differential Nearest Neighbors Regression (DNNR) to address these issues simultaneously; a sketch of the differential correction appears after this list.
arXiv Detail & Related papers (2022-05-17T15:22:53Z)
- Riemannian classification of EEG signals with missing values [67.90148548467762]
This paper proposes two strategies to handle missing data for the classification of electroencephalograms.
The first approach estimates the covariance from imputed data with the $k$-nearest neighbors algorithm; the second relies on the observed data by leveraging the observed-data likelihood within an expectation-maximization algorithm.
As the results show, the proposed strategies perform better than classification based on observed data alone and maintain high accuracy even as the missing-data ratio increases; a sketch of the imputation step appears after this list.
arXiv Detail & Related papers (2021-10-19T14:24:50Z)
- How to Design Robust Algorithms using Noisy Comparison Oracle [12.353002222958605]
Metric-based comparison operations are fundamental to studying various clustering techniques.
In this paper, we study various problems, including finding the maximum and nearest/farthest neighbor search.
We give robust algorithms for k-center clustering and agglomerative hierarchical clustering.
arXiv Detail & Related papers (2021-05-12T16:58:09Z)
- Learning with Group Noise [106.56780716961732]
We propose a novel Max-Matching method for learning with group noise.
Performance on a range of real-world datasets across several learning paradigms demonstrates the effectiveness of Max-Matching.
arXiv Detail & Related papers (2021-03-17T06:57:10Z)
- Nearest Neighbor Search Under Uncertainty [19.225091554227948]
Nearest Neighbor Search (NNS) is a central task in knowledge representation, learning, and reasoning.
This paper studies NNS under Uncertainty (NNSU).
arXiv Detail & Related papers (2021-03-08T20:20:01Z)
- Leveraging Reinforcement Learning for evaluating Robustness of KNN
Search Algorithms [0.0]
The problem of finding the K-nearest neighbors of a given query point in a dataset has been studied for several years.
In this paper, we survey some novel K-Nearest Neighbor Search (KNNS) approaches that tackle the search problem from a computational perspective.
In order to evaluate the robustness of a KNNS approach against adversarial points, we propose a generic Reinforcement Learning based framework.
arXiv Detail & Related papers (2021-02-10T16:10:58Z)
- A Weighted Mutual k-Nearest Neighbour for Classification Mining [4.538870924201896]
kNN is a very effective instance-based learning method, and it is easy to implement.
In this paper, we propose a new learning algorithm which performs anomaly detection and removes pseudo neighbours from the dataset.
arXiv Detail & Related papers (2020-05-14T18:11:30Z)
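As promised above, a compact sketch of the differential idea behind DNNR: each neighbour contributes its target plus a first-order Taylor correction, with the gradient estimated by least squares on that neighbour's own surroundings. The estimator below is an illustrative stand-in, not the paper's exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dnnr_predict(X_train, y_train, X_test, k=3, m=10):
    """Nearest-neighbour regression with a first-order (differential) correction.

    Each of the k neighbours of a query x contributes y_i + grad_i . (x - x_i),
    where grad_i is a least-squares gradient fitted on that neighbour's own m
    nearest neighbours (an illustrative stand-in for DNNR's estimator).
    """
    nn = NearestNeighbors().fit(X_train)
    preds = []
    for x in X_test:
        _, idx = nn.kneighbors(x[None, :], n_neighbors=k)
        terms = []
        for i in idx[0]:
            # Fit a local linear model (gradient) around training point i.
            _, jdx = nn.kneighbors(X_train[i][None, :], n_neighbors=m + 1)
            jdx = jdx[0][1:]  # drop the point itself
            A = X_train[jdx] - X_train[i]
            b = y_train[jdx] - y_train[i]
            grad, *_ = np.linalg.lstsq(A, b, rcond=None)
            terms.append(y_train[i] + grad @ (x - X_train[i]))
        preds.append(np.mean(terms))
    return np.array(preds)
```

With grad set to zero this reduces to plain KNN averaging, which is exactly the baseline the paper improves on.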
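And for the Riemannian EEG entry, the first strategy (impute, then estimate the covariance) can be approximated with off-the-shelf tools. scikit-learn's KNNImputer is a generic stand-in for the paper's $k$-nearest-neighbours imputation, and the sketch stops at the covariance matrix that a Riemannian classifier would consume.

```python
import numpy as np
from sklearn.impute import KNNImputer

# One EEG epoch: rows are time samples, columns are channels,
# with NaN marking missing values (synthetic data for illustration).
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 8))
X[rng.random(X.shape) < 0.1] = np.nan

# Strategy 1: k-NN imputation, then a channel-covariance estimate.
X_imputed = KNNImputer(n_neighbors=5).fit_transform(X)
cov = np.cov(X_imputed, rowvar=False)  # (8, 8) input to a Riemannian classifier
print(cov.shape)
```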
This list is automatically generated from the titles and abstracts of the papers in this site.