Validation of Practicality for CSI Sensing Utilizing Machine Learning
- URL: http://arxiv.org/abs/2409.07495v1
- Date: Mon, 9 Sep 2024 09:25:08 GMT
- Title: Validation of Practicality for CSI Sensing Utilizing Machine Learning
- Authors: Tomoya Tanaka, Ayumu Yabuki, Mizuki Funakoshi, Ryo Yonemoto,
- Abstract summary: We develop and evaluate five distinct machine learning models for recognizing human postures.
We analyze how the accuracy of these models varies with different amounts of training data.
We evaluate the models' performance in a setting distinct from the one used for data collection.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we leveraged Channel State Information (CSI), commonly utilized in WLAN communication, as training data to develop and evaluate five distinct machine learning models for recognizing human postures: standing, sitting, and lying down. The models we employed were: (i) Linear Discriminant Analysis, (ii) Naive Bayes-Support Vector Machine, (iii) Kernel-Support Vector Machine, (iv) Random Forest, and (v) Deep Learning. We systematically analyzed how the accuracy of these models varied with different amounts of training data. Additionally, to assess their spatial generalization capabilities, we evaluated the models' performance in a setting distinct from the one used for data collection. The experimental findings indicated that while two models -- (ii) Naive Bayes-Support Vector Machine and (v) Deep Learning -- achieved 85% or more accuracy in the original setting, their accuracy dropped to approximately 30% when applied in a different environment. These results underscore that although CSI-based machine learning models can attain high accuracy within a consistent spatial structure, their performance diminishes considerably with changes in spatial conditions, highlighting a significant challenge in their generalization capabilities.
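A minimal, hypothetical sketch of the protocol the abstract describes is given below: train the five classifier families on CSI amplitude features, vary the amount of training data, and score each model both in the room used for data collection and in a different room. The feature loader, the file names, and the stand-ins for the Naive Bayes-Support Vector Machine hybrid and the deep-learning model are assumptions; the abstract does not specify them.

```python
# Hypothetical sketch only: the abstract does not describe feature extraction,
# the exact Naive Bayes-SVM hybrid, or the deep-learning architecture, so
# simple scikit-learn stand-ins are used for all five model families.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

def load_csi_features(path):
    """Placeholder loader: an .npz file with CSI amplitude features 'X'
    (n_samples x n_subcarriers) and posture labels 'y'
    (standing / sitting / lying down, encoded as integers)."""
    data = np.load(path)
    return data["X"], data["y"]

models = {
    "(i) LDA": LinearDiscriminantAnalysis(),
    "(ii) NB-SVM (linear-SVM stand-in)": make_pipeline(StandardScaler(), LinearSVC()),
    "(iii) Kernel SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "(iv) Random Forest": RandomForestClassifier(n_estimators=200),
    "(v) Deep Learning (MLP stand-in)": make_pipeline(
        StandardScaler(), MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
    ),
}

X_train, y_train = load_csi_features("room_A_train.npz")  # collection room
X_same, y_same = load_csi_features("room_A_test.npz")     # same room, held out
X_other, y_other = load_csi_features("room_B_test.npz")   # different room

# Vary the amount of training data, then test in both environments.
for frac in (0.25, 0.5, 1.0):
    n = int(frac * len(X_train))
    for name, model in models.items():
        model.fit(X_train[:n], y_train[:n])
        acc_same = accuracy_score(y_same, model.predict(X_same))
        acc_other = accuracy_score(y_other, model.predict(X_other))
        print(f"{name:35s} frac={frac:.2f} "
              f"same-room={acc_same:.2f} other-room={acc_other:.2f}")
```

Swapping in the paper's actual Naive Bayes-SVM hybrid or deep architecture only requires replacing the corresponding entries in the `models` dictionary.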
Related papers
- Study of Dropout in PointPillars with 3D Object Detection [0.0]
3D object detection is critical for autonomous driving, leveraging deep learning techniques to interpret LiDAR data.
This study provides an analysis of enhancing the performance of PointPillars model under various dropout rates.
arXiv Detail & Related papers (2024-09-01T09:30:54Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
- A Comprehensive Evaluation and Analysis Study for Chinese Spelling Check [53.152011258252315]
We show that making reasonable use of phonetic and graphic information is effective for Chinese Spelling Check.
Models are sensitive to the error distribution of the test set, which reflects their shortcomings.
The commonly used benchmark, SIGHAN, cannot reliably evaluate model performance.
arXiv Detail & Related papers (2023-07-25T17:02:38Z)
- Towards Understanding How Data Augmentation Works with Imbalanced Data [17.478900028887537]
We study the effect of data augmentation on three different classifiers, convolutional neural networks, support vector machines, and logistic regression models.
Our research indicates that DA, when applied to imbalanced data, produces substantial changes in model weights, support vectors and feature selection.
We hypothesize that DA works by facilitating variances in data, so that machine learning models can associate changes in the data with labels.
arXiv Detail & Related papers (2023-04-12T15:01:22Z)
- Revisiting Classifier: Transferring Vision-Language Models for Video Recognition [102.93524173258487]
Transferring knowledge from task-agnostic pre-trained deep models for downstream tasks is an important topic in computer vision research.
In this study, we focus on transferring knowledge for video classification tasks.
We utilize a well-pretrained language model to generate good semantic targets for efficient transfer learning.
arXiv Detail & Related papers (2022-07-04T10:00:47Z)
- Leveraging Intrinsic Gradient Information for Machine Learning Model Training [4.682734815593623]
Derivatives of the target variables with respect to the inputs can be leveraged to improve the accuracy of differentiable machine learning models.
Four key ideas are explored, including: (1) improving the predictive accuracy of linear regression models and feed-forward neural networks (NNs); (2) using the difference between the performance of feed-forward NNs trained with and without gradient information to tune NN complexity; and (4) using gradient information to improve generative image models. A minimal sketch of idea (1) appears after this list of related papers.
arXiv Detail & Related papers (2021-11-30T20:50:45Z)
- A Systematic Evaluation of Domain Adaptation in Facial Expression Recognition [0.0]
This paper provides a systematic evaluation of domain adaptation in facial expression recognition.
We use state-of-the-art transfer learning techniques and six commonly-used facial expression datasets.
We find the sobering result that transfer-learning accuracy is not high and varies idiosyncratically with the target dataset.
arXiv Detail & Related papers (2021-06-29T14:41:19Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model under test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Improving the Performance of Fine-Grain Image Classifiers via Generative Data Augmentation [0.5161531917413706]
We develop Data Augmentation from Proficient Pre-training of Robust Generative Adversarial Networks (DAPPER GAN).
DAPPER GAN is an ML analytics support tool that automatically generates novel views of training images.
We experimentally evaluate this technique on the Stanford Cars dataset, demonstrating improved vehicle make and model classification accuracy.
arXiv Detail & Related papers (2020-08-12T15:29:11Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity-inducing adversarial loss for learning latent variables and thereby obtain the diversity in output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy, under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
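For the gradient-information entry above, the following is a minimal sketch of idea (1) under assumed settings: a toy regression problem in which the derivative of the target with respect to the input is known and is added to the ordinary mean-squared-error loss as a gradient-matching penalty. The network, the synthetic data, and the penalty weight `lam` are illustrative choices, not the paper's setup.

```python
# Hypothetical sketch: Sobolev-style training that penalizes the mismatch
# between the model's input gradient and the known derivative of the target.
import torch

torch.manual_seed(0)
x = torch.linspace(-2.0, 2.0, 256).unsqueeze(1)
y = torch.sin(3.0 * x)             # target values
dy_dx = 3.0 * torch.cos(3.0 * x)   # known derivative of the target w.r.t. input

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 0.1  # assumed weight on the gradient-matching term

for step in range(2000):
    x_req = x.clone().requires_grad_(True)
    pred = net(x_req)
    # d(pred)/dx via autograd; create_graph=True keeps the penalty differentiable.
    grad_pred = torch.autograd.grad(pred.sum(), x_req, create_graph=True)[0]
    loss = (torch.mean((pred - y) ** 2)
            + lam * torch.mean((grad_pred - dy_dx) ** 2))
    opt.zero_grad()
    loss.backward()
    opt.step()
```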