Human Activity Analysis and Recognition from Smartphones using Machine
Learning Techniques
- URL: http://arxiv.org/abs/2103.16490v1
- Date: Tue, 30 Mar 2021 16:46:40 GMT
- Title: Human Activity Analysis and Recognition from Smartphones using Machine
Learning Techniques
- Authors: Jakaria Rabbi, Md. Tahmid Hasan Fuad, Md. Abdul Awal
- Abstract summary: Human Activity Recognition (HAR) has been considered a valuable research topic over the last few decades.
In our paper, we analyze data using machine learning models to recognize human activities.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human Activity Recognition (HAR) has been considered a valuable
research topic over the last few decades. Different types of machine learning
models are used for this purpose as part of analyzing human behavior through
machines. Analyzing data from wearable sensors is not a trivial task because
the data are complex and high-dimensional. Nowadays, researchers mostly use
smartphones or smart home sensors to capture these data. In our paper, we
analyze these data using machine learning models to recognize human
activities, which are now widely used for many purposes such as physical and
mental health monitoring. We apply different machine learning models and
compare their performance. We use Logistic Regression (LR) as the benchmark
model for its simplicity and excellent performance on the dataset, and for
comparison we take Decision Tree (DT), Support Vector Machine (SVM), Random
Forest (RF), and Artificial Neural Network (ANN). Additionally, we select the
best set of parameters for each model by grid search. We use the HAR dataset
from the UCI Machine Learning Repository as a standard dataset to train and
test the models. Throughout the analysis, we see that the Support Vector
Machine performs far better than the other methods, with an average accuracy
of 96.33%. We also show that the results are statistically significant by
applying statistical significance tests.
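
The pipeline the abstract describes (a grid-searched SVM compared against a Logistic Regression benchmark on the UCI HAR features, followed by a significance test) can be sketched with scikit-learn roughly as follows. This is a minimal illustration, not the authors' code: the dataset file layout and the choice of a paired t-test over cross-validation folds are assumptions.

```python
# Minimal sketch (not the paper's code): tune an SVM by grid search on the
# UCI HAR training split and compare it against a Logistic Regression benchmark
# with a paired t-test over cross-validation folds.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed layout of the UCI HAR archive: 561-dimensional feature vectors
# stored as whitespace-separated text files.
X_train = np.loadtxt("UCI HAR Dataset/train/X_train.txt")
y_train = np.loadtxt("UCI HAR Dataset/train/y_train.txt")

# Grid search over a few common SVM hyperparameters (kernel, C, gamma).
svm_grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    param_grid={"svc__kernel": ["rbf", "linear"],
                "svc__C": [1, 10, 100],
                "svc__gamma": ["scale", 0.01]},
    cv=5, n_jobs=-1)
svm_grid.fit(X_train, y_train)
print("best SVM params:", svm_grid.best_params_)

# Fold-wise accuracies of the tuned SVM vs. the LR benchmark, compared with a
# paired t-test (one of several possible significance tests).
lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
svm_scores = cross_val_score(svm_grid.best_estimator_, X_train, y_train, cv=10)
lr_scores = cross_val_score(lr, X_train, y_train, cv=10)
t_stat, p_value = ttest_rel(svm_scores, lr_scores)
print(f"SVM {svm_scores.mean():.4f}  LR {lr_scores.mean():.4f}  p={p_value:.4g}")
```

The same GridSearchCV pattern extends directly to the DT, RF, and ANN models mentioned above by swapping in a different estimator and parameter grid.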
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that existing machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Explainable AI for Comparative Analysis of Intrusion Detection Models [20.683181384051395]
This research applies various machine learning models to the tasks of binary and multi-class classification for intrusion detection from network traffic.
We trained all models to an accuracy of 90% on the UNSW-NB15 dataset.
We also find that Random Forest provides the best performance in terms of accuracy, time efficiency, and robustness.
arXiv Detail & Related papers (2024-06-14T03:11:01Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamics models, simulations, and actual hardware (a toy sketch of this residual-learning idea appears after this list).
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - "Task-relevant autoencoding" enhances machine learning for human
neuroscience [0.0]
In human neuroscience, machine learning can help reveal lower-dimensional neural representations relevant to subjects' behavior.
We developed a Task-Relevant Autoencoder via Enhancement (TRACE), and tested its ability to extract behaviorally-relevant, separable representations.
TRACE outperformed all models almost across the board, showing up to 12% higher classification accuracy and up to 56% improvement in discovering "cleaner", task-relevant representations.
arXiv Detail & Related papers (2022-08-17T18:44:39Z) - ALBench: A Framework for Evaluating Active Learning in Object Detection [102.81795062493536]
This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection.
Developed on an automatic deep model training system, the ALBench framework is easy to use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
arXiv Detail & Related papers (2022-07-27T07:46:23Z) - Human Activity Recognition models using Limited Consumer Device Sensors
and Machine Learning [0.0]
Human activity recognition has grown in popularity with its increasing number of applications in daily lifestyles and medical environments.
This paper presents the findings of different models that are limited to training on sensor data from smartphones and smartwatches.
Results show promise for models trained strictly on the limited sensor data collected from smartphones and smartwatches, coupled with traditional machine learning concepts and algorithms.
arXiv Detail & Related papers (2022-01-21T06:54:05Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine
Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - It's the Best Only When It Fits You Most: Finding Related Models for
Serving Based on Dynamic Locality Sensitive Hashing [1.581913948762905]
Preparation of training data is often a bottleneck in the lifecycle of deploying a deep learning model for production or research.
This paper proposes an end-to-end process of searching for related models to serve, based on the similarity of the target dataset and the training datasets of the available models.
arXiv Detail & Related papers (2020-10-13T22:52:13Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
Since training with such an enlarged dataset is costly, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z) - Unsupervised Pre-trained Models from Healthy ADLs Improve Parkinson's
Disease Classification of Gait Patterns [3.5939555573102857]
We show how to extract features relevant to accelerometer gait data for Parkinson's disease classification.
Our pre-trained source model consists of a convolutional autoencoder, and the target classification model is a simple multi-layer perceptron model (see the transfer-learning sketch after this list).
We explore two different pre-trained source models, trained using different activity groups, and analyze the influence that the choice of pre-trained model has on the task of Parkinson's disease classification.
arXiv Detail & Related papers (2020-05-06T04:08:19Z) - Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)
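
As referenced in the Real-to-Sim entry above, the core idea of residual-error learning can be illustrated with a toy sketch: fit a regressor that maps states and actions to the gap between a nominal model's prediction and the real system, then add that learned residual to the model's output. Everything below (the placeholder dynamics function, the random "logs", the MLP regressor) is a hypothetical stand-in, not the paper's learning-based Unscented Kalman Filter.

```python
# Toy sketch of residual-error learning: correct a nominal simulator with a
# learned model of what it gets wrong. All data and models are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_step(state, action):
    """Placeholder nominal dynamics model (assumption for illustration)."""
    return state + 0.1 * action

# Hypothetical logs of real-robot transitions (state, action, next_state).
rng = np.random.default_rng(0)
states = rng.normal(size=(500, 3))
actions = rng.normal(size=(500, 3))
real_next = np.array([simulate_step(s, a) for s, a in zip(states, actions)]) \
            + 0.05 * rng.normal(size=(500, 3))      # unmodelled effects

# Residual = what the nominal model misses on each logged transition.
predicted_next = np.array([simulate_step(s, a) for s, a in zip(states, actions)])
residuals = real_next - predicted_next

reg = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
reg.fit(np.hstack([states, actions]), residuals)

# Corrected prediction = nominal model output + learned residual.
s, a = states[0], actions[0]
corrected = simulate_step(s, a) + reg.predict(np.hstack([s, a]).reshape(1, -1))[0]
print("corrected next state:", corrected)
```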
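
For the unsupervised pre-training entry above (healthy-ADL accelerometer data for Parkinson's disease classification), the described source/target setup can be sketched as follows: pre-train a 1-D convolutional autoencoder on unlabeled accelerometer windows, then feed its frozen encoder features to a small multi-layer perceptron for the target classification. Window length, channel counts, layer sizes, and the toy tensors are illustrative assumptions, not values from the paper.

```python
# Sketch of pre-training a 1-D convolutional autoencoder, then reusing its
# frozen encoder for a simple MLP classifier. Shapes and data are made up.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv1d(3, 16, 5, padding=2), nn.ReLU(),
                        nn.MaxPool1d(2),
                        nn.Conv1d(16, 8, 5, padding=2), nn.ReLU(),
                        nn.AdaptiveAvgPool1d(16), nn.Flatten())  # -> 8 * 16 = 128 features
decoder = nn.Sequential(nn.Linear(128, 3 * 128), nn.Unflatten(1, (3, 128)))

# 1) Unsupervised pre-training on unlabeled accelerometer windows (3 axes x 128 samples).
unlabeled = torch.randn(256, 3, 128)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(5):
    recon = decoder(encoder(unlabeled))
    loss = nn.functional.mse_loss(recon, unlabeled)
    opt.zero_grad()
    loss.backward()
    opt.step()

# 2) Target task: frozen encoder features feed a simple MLP (e.g. PD vs. control).
for p in encoder.parameters():
    p.requires_grad = False
mlp = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))
gait_windows = torch.randn(64, 3, 128)
labels = torch.randint(0, 2, (64,))
logits = mlp(encoder(gait_windows))
clf_loss = nn.functional.cross_entropy(logits, labels)
print("classifier loss on toy batch:", clf_loss.item())
```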
This list is automatically generated from the titles and abstracts of the papers on this site.