Novel Fine-Tuned Attribute Weighted Naïve Bayes NLoS Classifier for UWB Positioning
- URL: http://arxiv.org/abs/2304.11067v1
- Date: Fri, 14 Apr 2023 09:59:18 GMT
- Title: Novel Fine-Tuned Attribute Weighted Naïve Bayes NLoS Classifier for UWB Positioning
- Authors: Fuhu Che, Qasim Zeeshan Ahmed, Fahd Ahmed Khan, and Faheem A. Khan
- Abstract summary: We propose a novel Fine-Tuned attribute Weighted Naïve Bayes (FT-WNB) classifier to identify the Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) signals in an Indoor Positioning System (IPS).
The FT-WNB classifier assigns each signal feature a specific weight and fine-tunes its probabilities to address the mismatch between the predicted and actual class.
- Score: 0.6299766708197881
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel Fine-Tuned attribute Weighted Naïve Bayes
(FT-WNB) classifier to identify the Line-of-Sight (LoS) and Non-Line-of-Sight
(NLoS) for UltraWide Bandwidth (UWB) signals in an Indoor Positioning System
(IPS). The FT-WNB classifier assigns each signal feature a specific weight and
fine-tunes its probabilities to address the mismatch between the predicted and
actual class. The performance of the FT-WNB classifier is compared with the
state-of-the-art Machine Learning (ML) classifiers, such as minimum Redundancy
Maximum Relevance (mRMR)-$k$-Nearest Neighbour (KNN), Support Vector Machine
(SVM), Decision Tree (DT), Naïve Bayes (NB), and Neural Network (NN). It is
demonstrated that the proposed classifier outperforms other algorithms by
achieving a high NLoS classification accuracy of $99.7\%$ with imbalanced data
and $99.8\%$ with balanced data. The experimental results indicate that our
proposed FT-WNB classifier significantly outperforms the existing
state-of-the-art ML methods for LoS and NLoS signals in IPS in the considered
scenario.
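The listing does not reproduce the paper's weighting or fine-tuning procedure, but the general attribute-weighted Naïve Bayes idea — scaling each feature's per-class log-likelihood by a per-feature weight before combining — can be sketched as follows. All parameters, feature choices, and weights below are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Gaussian likelihood of x under N(mean, var), elementwise."""
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def wnb_predict(x, priors, means, variances, weights):
    """Attribute-weighted Naive Bayes: each feature's log-likelihood is
    scaled by a per-feature weight before summing with the log prior.
    priors: (C,), means/variances: (C, D), weights: (D,), x: (D,)."""
    log_post = np.log(priors).copy()
    for c in range(len(priors)):
        ll = np.log(gaussian_pdf(x, means[c], variances[c]))
        log_post[c] += np.sum(weights * ll)
    return int(np.argmax(log_post))

# Illustrative two-class (LoS=0, NLoS=1) toy parameters for three
# hypothetical UWB signal features (e.g. rise time, RSS, delay spread).
priors = np.array([0.5, 0.5])
means = np.array([[1.0, 0.0, 2.0],
                  [3.0, 1.5, 4.0]])
variances = np.ones((2, 3))
weights = np.array([1.5, 0.8, 1.2])  # illustrative attribute weights

print(wnb_predict(np.array([2.9, 1.4, 3.8]), priors, means, variances, weights))  # → 1
```

With all weights equal to 1 this reduces to plain Gaussian Naïve Bayes; the weights let informative features dominate the decision.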
Related papers
- Role of Locality and Weight Sharing in Image-Based Tasks: A Sample Complexity Separation between CNNs, LCNs, and FCNs [42.551773746803946]
Vision tasks are characterized by the properties of locality and translation invariance.
The superior performance of convolutional neural networks (CNNs) on these tasks is widely attributed to the inductive bias of locality and weight sharing baked into their architecture.
Existing attempts to quantify the statistical benefits of these biases in CNNs over locally connected neural networks (LCNs) and fully connected neural networks (FCNs) fall into a few distinct categories.
arXiv Detail & Related papers (2024-03-23T03:57:28Z)
- Class-Balanced and Reinforced Active Learning on Graphs [13.239043161351482]
Graph neural networks (GNNs) have demonstrated significant success in various applications, such as node classification, link prediction, and graph classification.
Active learning for GNNs aims to query the valuable samples from the unlabeled data for annotation to maximize the GNNs' performance at a lower cost.
Most existing algorithms for reinforced active learning in GNNs may lead to a highly imbalanced class distribution, especially in highly skewed class scenarios.
We propose a novel class-balanced and reinforced active learning framework for GNNs, namely GCBR, which learns an optimal policy to acquire class-balanced and informative nodes.
arXiv Detail & Related papers (2024-02-15T16:37:14Z)
- Feature-Based Generalized Gaussian Distribution Method for NLoS Detection in Ultra-Wideband (UWB) Indoor Positioning System [3.5522191686718725]
The Non-Line-of-Sight (NLoS) propagation condition is a crucial factor affecting localization precision in the Ultra-Wideband (UWB) Indoor Positioning System (IPS).
It is difficult for existing Machine Learning approaches to maintain a high classification accuracy when the database contains a small number of NLoS signals and a large number of Line-of-Sight signals.
We propose feature-based Gaussian Distribution (GD) and Generalized Gaussian Distribution (GGD) NLoS detection algorithms.
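As a rough illustration of feature-based Gaussian detection (the paper's GD/GGD fitting procedure is not reproduced here), a single-feature log-likelihood-ratio test might look like the sketch below; the delay-spread feature and all distribution parameters are invented for the example.

```python
import math

def gaussian_loglik(x, mean, var):
    """Log-likelihood of x under a univariate Gaussian N(mean, var)."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def detect_nlos(feature, los_params, nlos_params):
    """Declare NLoS when the NLoS Gaussian explains the feature better
    than the LoS Gaussian (log-likelihood-ratio test at threshold 0)."""
    llr = gaussian_loglik(feature, *nlos_params) - gaussian_loglik(feature, *los_params)
    return llr > 0.0

# Illustrative parameters: NLoS signals tend to have a longer excess delay,
# so the hypothetical delay-spread feature is larger under NLoS.
los_params = (10.0, 4.0)    # (mean, variance) for LoS
nlos_params = (25.0, 16.0)  # (mean, variance) for NLoS
print(detect_nlos(28.0, los_params, nlos_params))  # → True
```

A GGD variant would swap `gaussian_loglik` for the generalized Gaussian density, adding a shape parameter fitted per class.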
arXiv Detail & Related papers (2023-04-14T11:51:12Z)
- NP-Match: Towards a New Probabilistic Model for Semi-Supervised Learning [86.60013228560452]
Semi-supervised learning (SSL) has been widely explored in recent years, and it is an effective way of leveraging unlabeled data.
In this work, we adjust neural processes (NPs) to the semi-supervised image classification task, resulting in a new method named NP-Match.
NP-Match implicitly compares data points when making predictions, and as a result, the prediction of each unlabeled data point is affected by the labeled data points.
arXiv Detail & Related papers (2023-01-31T11:44:45Z)
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset.
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Adaptive Nearest Neighbor Machine Translation [60.97183408140499]
kNN-MT combines pre-trained neural machine translation with token-level k-nearest-neighbor retrieval.
The traditional kNN algorithm retrieves the same number of nearest neighbors for each target token.
We propose Adaptive kNN-MT, which dynamically determines the number of neighbors k for each target token.
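Adaptive kNN-MT learns a light network to pick k per token; as a hedged stand-in, the sketch below adapts the neighborhood size with a simple distance-margin heuristic instead (the toy data, margin `tau`, and `k_max` are all illustrative).

```python
import numpy as np

def adaptive_knn_label(query, keys, labels, k_max=8, tau=1.0):
    """Retrieve up to k_max nearest neighbors, then keep only those
    within a distance margin of the best match -- a heuristic stand-in
    for the learned per-token k selection in Adaptive kNN-MT."""
    d = np.linalg.norm(keys - query, axis=1)
    order = np.argsort(d)[:k_max]
    kept = order[d[order] <= d[order[0]] + tau]  # adaptive neighborhood
    vals, counts = np.unique(labels[kept], return_counts=True)
    return int(vals[np.argmax(counts)])  # majority vote over kept neighbors

# Two well-separated toy clusters standing in for datastore keys.
rng = np.random.default_rng(0)
keys = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(3.0, 0.3, (10, 2))])
labels = np.array([0] * 10 + [1] * 10)
print(adaptive_knn_label(np.array([2.9, 3.1]), keys, labels))  # → 1
```

The point of adapting k is that a query with one very close match should not be outvoted by farther, noisier neighbors, while an ambiguous query benefits from a wider vote.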
arXiv Detail & Related papers (2021-05-27T09:27:42Z)
- Self-Distribution Binary Neural Networks [18.69165083747967]
We study binary neural networks (BNNs), in which both the weights and activations are binary (i.e., 1-bit representation).
We propose the Self-Distribution Binary Neural Network (SD-BNN).
Experiments on CIFAR-10 and ImageNet datasets show that the proposed SD-BNN consistently outperforms the state-of-the-art (SOTA) BNNs.
arXiv Detail & Related papers (2021-03-03T13:39:52Z)
- Delving Deep into Label Smoothing [112.24527926373084]
Label smoothing is an effective regularization tool for deep neural networks (DNNs).
We present an Online Label Smoothing (OLS) strategy, which generates soft labels based on the statistics of the model prediction for the target category.
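A simplified sketch of that idea: accumulate the model's own predicted distributions on correctly classified samples into a per-class soft-label table. The momentum update and all numbers below are assumptions for illustration; the OLS paper accumulates statistics per epoch rather than with momentum.

```python
import numpy as np

def update_soft_labels(soft, probs, targets, preds, momentum=0.9):
    """Blend the predicted distribution of each correctly classified
    sample into its target class's soft label (momentum update is an
    assumption; OLS averages over an epoch).
    soft: (C, C) table, row t = soft label for class t; probs: (N, C)."""
    for p, t, y in zip(probs, targets, preds):
        if y == t:  # only use predictions that match the ground truth
            soft[t] = momentum * soft[t] + (1 - momentum) * p
    # renormalize each row to a valid probability distribution
    return soft / soft.sum(axis=1, keepdims=True)

C = 3
soft = np.full((C, C), 1.0 / C)  # start uniform, like plain smoothing
probs = np.array([[0.8, 0.15, 0.05]])  # one confident, correct prediction
soft = update_soft_labels(soft, probs, targets=[0], preds=[0])
print(np.round(soft[0], 3))
```

Training then uses row `soft[t]` as the target distribution for class `t`, so the smoothing mass reflects which classes the model actually confuses rather than being spread uniformly.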
arXiv Detail & Related papers (2020-11-25T08:03:11Z)
- Distributionally Robust Weighted $k$-Nearest Neighbors [21.537952410507483]
Learning a robust classifier from a few samples remains a key challenge in machine learning.
In this paper, we study a minimax distributionally robust formulation of weighted $k$-nearest neighbors.
We develop an algorithm, Dr.k-NN, that efficiently solves this functional optimization problem.
arXiv Detail & Related papers (2020-06-07T00:34:33Z)
- OSLNet: Deep Small-Sample Classification with an Orthogonal Softmax Layer [77.90012156266324]
This paper aims to find a subspace of neural networks that can facilitate a large decision margin.
We propose the Orthogonal Softmax Layer (OSL), which keeps the weight vectors in the classification layer orthogonal during both the training and test processes.
Experimental results demonstrate that the proposed OSL has better performance than the methods used for comparison on four small-sample benchmark datasets.
arXiv Detail & Related papers (2020-04-20T02:41:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.