A Novel Pseudo Nearest Neighbor Classification Method Using Local Harmonic Mean Distance
- URL: http://arxiv.org/abs/2405.06238v2
- Date: Tue, 28 May 2024 01:48:32 GMT
- Title: A Novel Pseudo Nearest Neighbor Classification Method Using Local Harmonic Mean Distance
- Authors: Junzhuo Chen, Zhixin Lu, Shitong Kang
- Abstract summary: This article introduces a novel KNN-based classification method called LMPHNN.
LMPHNN improves classification performance based on LMPNN rules and HMD.
It achieves an average precision of 97%, surpassing other methods by 14%.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the realm of machine learning, the KNN classification algorithm is widely recognized for its simplicity and efficiency. However, its sensitivity to the K value poses challenges, especially with small sample sizes or outliers, impacting classification performance. This article introduces a novel KNN-based classifier called LMPHNN (Novel Pseudo Nearest Neighbor Classification Method Using Local Harmonic Mean Distance). LMPHNN builds on local mean-based pseudo nearest neighbor (LMPNN) rules and leverages the harmonic mean distance (HMD) to improve classification performance. The classifier begins by identifying the k nearest neighbors for each class and generates distinct local mean vectors as prototypes. Pseudo nearest neighbors (PNNs) are then created from the local means of each class, determined by comparing the HMD of the sample against the initial k-neighbor group. Classification is decided by the Euclidean distance between the query sample and the PNNs derived from the local means of these classes. Extensive experiments on various real UCI datasets and combined datasets compare LMPHNN with seven KNN-based classifiers, using precision, recall, accuracy, and F1 score as evaluation metrics. LMPHNN achieves an average precision of 97%, surpassing the other methods by 14%; average recall improves by 12%, average accuracy by 5%, and the average F1 score is 13% higher than that of the other methods. In summary, LMPHNN outperforms the other classifiers and shows lower sensitivity to small sample sizes.
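A minimal sketch of the classification rule described above, assuming Euclidean base distances; the function names and the exact way the HMD combines the local mean prototypes are illustrative guesses, not the authors' reference implementation:

```python
import numpy as np

def harmonic_mean_distance(x, points):
    """Harmonic mean of the Euclidean distances from x to a set of points (HMD)."""
    d = np.linalg.norm(points - x, axis=1)
    return len(d) / np.sum(1.0 / np.maximum(d, 1e-12))

def lmphnn_predict(X_train, y_train, x, k=3):
    """Assign x to the class whose pseudo nearest neighbor scores lowest.

    Per class: take the k nearest same-class neighbors, form the i-point
    local mean vectors m_1..m_k as prototypes (the LMPNN rule), and score
    the class by the HMD over the prototype distances d(x, m_i). The exact
    combination used in the paper may differ from this sketch.
    """
    best_label, best_score = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        order = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
        nn = Xc[order]                                    # k nearest neighbors in class c
        prototypes = np.array([nn[:i].mean(axis=0) for i in range(1, k + 1)])
        score = harmonic_mean_distance(x, prototypes)     # small distances dominate
        if score < best_score:
            best_label, best_score = c, score
    return best_label
```

Because the harmonic mean is dominated by the smallest distances, a single distant prototype (e.g., one induced by an outlier) shifts the class score far less than it would under an arithmetic mean, which is consistent with the robustness claims above.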
Related papers
- Adaptive $k$-nearest neighbor classifier based on the local estimation of the shape operator [49.87315310656657]
We introduce a new adaptive $k$-nearest neighbours ($kK$-NN) algorithm that explores the local curvature at a sample to adaptively define the neighborhood size.
Results on many real-world datasets indicate that the new $kK$-NN algorithm yields superior balanced accuracy compared to the established $k$-NN method.
arXiv Detail & Related papers (2024-09-08T13:08:45Z)
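A loose illustration of the per-query neighborhood idea: the paper estimates local curvature with the shape operator, which the sketch below replaces with a crude distance-spread proxy (an assumption made for brevity, not the authors' estimator):

```python
import numpy as np
from collections import Counter

def adaptive_knn_predict(X, y, x, k_min=3, k_max=15):
    """k-NN majority vote with a per-query neighborhood size.

    Flat, regular neighborhoods get a larger k; irregular ones a smaller k.
    """
    dist = np.linalg.norm(X - x, axis=1)
    order = np.argsort(dist)
    d = dist[order[:k_max]]
    spread = d.std() / (d.mean() + 1e-12)   # crude irregularity proxy (not the shape operator)
    k = int(np.clip(round(k_max * (1.0 - spread)), k_min, k_max))
    return Counter(y[order[:k]]).most_common(1)[0][0]
```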
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data and then assigns a minimal number of available labeled data points to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
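A minimal sketch of the BMU labeling scheme using the third-party minisom package (a convenient stand-in; the paper does not prescribe this library, and the fallback rule for unlabeled units is an assumption):

```python
import numpy as np
from minisom import MiniSom  # third-party SOM implementation, not the paper's code

def som_minimally_supervised(X_unlabeled, X_labeled, y_labeled, grid=10, n_iter=5000):
    """Train a SOM on unlabeled data, tag each labeled point's best matching
    unit (BMU) with its label, then classify new points via their BMU."""
    som = MiniSom(grid, grid, X_unlabeled.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(X_unlabeled, n_iter)

    bmu_labels = {}
    for x, label in zip(X_labeled, y_labeled):
        bmu_labels.setdefault(som.winner(x), []).append(label)
    majority = {u: max(set(ls), key=ls.count) for u, ls in bmu_labels.items()}

    def predict(x):
        u = som.winner(x)
        if u in majority:
            return majority[u]
        # Assumed fallback: closest labeled unit on the grid.
        nearest = min(majority, key=lambda v: (v[0] - u[0]) ** 2 + (v[1] - u[1]) ** 2)
        return majority[nearest]

    return predict
```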
- Information Modified K-Nearest Neighbor [4.916646834691489]
K-Nearest Neighbors (KNN) classifies samples by a majority vote among their nearest neighbors.
Many KNN methodologies introduce complex algorithms that do not significantly outperform the traditional KNN.
We present an information-modified KNN (IMKNN) to improve the performance of the KNN algorithm.
We conduct experiments on 12 widely-used datasets, achieving improvements of 11.05%, 12.42%, and 12.07% in accuracy, precision, and recall, respectively.
arXiv Detail & Related papers (2023-12-04T16:10:34Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
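A rough sketch of normalizing against a feature-space Gaussian mixture rather than a single batch mean and variance; the scikit-learn-based helper below is an illustrative assumption, not the paper's compound batch normalization layer:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_normalize(features, n_components=3):
    """Standardize each sample against the mixture component most responsible
    for it, so minority (tail) modes keep their own statistics instead of
    being washed out by a single head-dominated mean/variance."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=0).fit(features)
    comp = gmm.predict(features)            # responsible component per sample
    mu = gmm.means_[comp]
    std = np.sqrt(gmm.covariances_[comp])   # diag covariance: per-feature variance
    return (features - mu) / (std + 1e-5)
```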
- Local Sample-weighted Multiple Kernel Clustering with Consensus Discriminative Graph [73.68184322526338]
Multiple kernel clustering (MKC) is committed to achieving optimal information fusion from a set of base kernels.
This paper proposes a novel local sample-weighted multiple kernel clustering model.
Experimental results demonstrate that our LSWMKC possesses better local manifold representation and outperforms existing kernel- or graph-based clustering algorithms.
arXiv Detail & Related papers (2022-07-05T05:00:38Z)
- Nearest Neighbor Zero-Shot Inference [68.56747574377215]
kNN-Prompt is a technique that uses k-nearest neighbor (kNN) retrieval augmentation for zero-shot inference with language models (LMs).
Fuzzy verbalizers leverage the sparse kNN distribution for downstream tasks by automatically associating each classification label with a set of natural language tokens.
Experiments show that kNN-Prompt is effective for domain adaptation with no further training, and that the benefits of retrieval increase with the size of the model used for kNN retrieval.
arXiv Detail & Related papers (2022-05-27T07:00:59Z)
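A small sketch of the fuzzy verbalizer step, assuming the kNN-augmented next-token distribution is already available as a token-to-probability map; the label names and token sets below are made up for illustration (kNN-Prompt derives them automatically):

```python
def fuzzy_verbalizer_scores(token_probs, verbalizers):
    """Aggregate next-token probability mass over each label's token set."""
    return {label: sum(token_probs.get(t, 0.0) for t in tokens)
            for label, tokens in verbalizers.items()}

# Hypothetical sentiment task: several surface tokens stand in for each label.
token_probs = {"great": 0.30, "good": 0.25, "awful": 0.10, "bad": 0.05}
verbalizers = {"positive": {"great", "good", "wonderful"},
               "negative": {"awful", "bad", "terrible"}}
scores = fuzzy_verbalizer_scores(token_probs, verbalizers)
print(max(scores, key=scores.get))  # -> "positive"
```

Spreading each label over several tokens is what lets the sparse kNN distribution contribute: a neighbor that surfaces "great" still counts toward "positive" even if the canonical label word never appears.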
- Dynamic Ensemble Selection Using Fuzzy Hyperboxes [10.269997499911668]
This paper presents a new dynamic ensemble selection (DES) framework based on fuzzy hyperboxes called FH-DES.
Each hyperbox can represent a group of samples using only two data points (its Min and Max corners).
Unlike previous fusion approaches, misclassified samples are used to estimate the competence of the classifiers.
arXiv Detail & Related papers (2022-05-20T21:06:46Z)
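A minimal sketch of the hyperbox representation, assuming a common fuzzy min-max membership function; FH-DES's exact membership and competence formulas may differ:

```python
import numpy as np

class FuzzyHyperbox:
    """A group of samples summarized by just two points: Min and Max corners."""

    def __init__(self, point):
        self.vmin = np.asarray(point, dtype=float).copy()
        self.vmax = self.vmin.copy()

    def expand(self, point):
        """Grow the box so it also covers a new sample."""
        self.vmin = np.minimum(self.vmin, point)
        self.vmax = np.maximum(self.vmax, point)

    def membership(self, x, gamma=4.0):
        """Fuzzy membership in [0, 1]: 1 inside the box, decaying with the
        per-dimension distance outside it (a standard min-max formulation)."""
        outside = np.maximum(self.vmin - x, 0.0) + np.maximum(x - self.vmax, 0.0)
        return float(np.mean(np.maximum(1.0 - gamma * outside, 0.0)))
```

In the FH-DES setting, boxes built from a classifier's misclassified samples would mark regions where that classifier is weak: high membership of a query in such a box argues against selecting that classifier.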
- Adaptive Nearest Neighbor Machine Translation [60.97183408140499]
kNN-MT combines pre-trained neural machine translation with token-level k-nearest-neighbor retrieval.
The traditional kNN algorithm simply retrieves the same number of nearest neighbors for each target token.
We propose Adaptive kNN-MT to dynamically determine the value of k for each target token.
arXiv Detail & Related papers (2021-05-27T09:27:42Z)
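A sketch of the interpolation with a per-token k; the paper learns the choice of k with a lightweight "meta-k" network over the retrieval distances, for which the distance-ratio heuristic below is only a stand-in:

```python
import numpy as np

def adaptive_knn_mt_probs(p_nmt, retrieved_tokens, distances, temperature=10.0, lam=0.5):
    """Blend the NMT next-token distribution with a kNN retrieval distribution.

    p_nmt: dict token -> NMT probability.
    retrieved_tokens, distances: neighbor target tokens and their datastore
    distances, sorted by ascending distance.
    """
    # Adaptive k (heuristic): keep only neighbors nearly as close as the best one.
    k = sum(d <= 1.5 * distances[0] + 1e-12 for d in distances)
    w = np.exp(-np.asarray(distances[:k], dtype=float) / temperature)
    w /= w.sum()
    p_knn = {}
    for tok, wi in zip(retrieved_tokens[:k], w):
        p_knn[tok] = p_knn.get(tok, 0.0) + float(wi)
    return {t: lam * p_knn.get(t, 0.0) + (1 - lam) * p_nmt.get(t, 0.0)
            for t in set(p_nmt) | set(p_knn)}
```

Noisy retrievals (large distance gaps) shrink k toward 1, while a tight cluster of neighbors keeps more of them; that is the intuition the learned meta-k network captures.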
- KNN Classification with One-step Computation [10.381276986079865]
A one-step computation is proposed to replace the lazy part of KNN classification.
The proposed approach is experimentally evaluated and shown to be efficient and promising.
arXiv Detail & Related papers (2020-12-09T13:34:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.