Automated Human Activity Recognition by Colliding Bodies
Optimization-based Optimal Feature Selection with Recurrent Neural Network
- URL: http://arxiv.org/abs/2010.03324v3
- Date: Fri, 19 Nov 2021 09:26:02 GMT
- Title: Automated Human Activity Recognition by Colliding Bodies
Optimization-based Optimal Feature Selection with Recurrent Neural Network
- Authors: Pankaj Khatiwada, Ayan Chatterjee, Matrika Subedi
- Abstract summary: Human Activity Recognition (HAR) is considered an efficient model for pervasive computing from sensor readings.
This paper attempts to implement the HAR system using deep learning with data collected from smart sensors that are publicly available in the UC Irvine Machine Learning Repository (UCI).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In smart healthcare, Human Activity Recognition (HAR) is considered an
efficient model for pervasive computing from sensor readings. Ambient Assisted
Living (AAL) in the home or community helps people by providing independent care
and an enhanced quality of life. However, many AAL models are restricted by
factors such as computational cost and system complexity. Moreover, the HAR
concept has gained relevance because of its applications. Hence, this paper
implements a HAR system using deep learning with data collected from smart
sensors that are publicly available in the UC Irvine Machine Learning Repository
(UCI). The proposed model involves three processes: (1) data collection,
(2) optimal feature selection, and (3) recognition. The data gathered from the
benchmark repository is first subjected to optimal feature selection, which
identifies the most significant features. The proposed optimal feature selection
is based on a new meta-heuristic algorithm called Colliding Bodies Optimization
(CBO). An objective function derived from the recognition accuracy is used to
accomplish the optimal feature selection. The deep learning model called a
Recurrent Neural Network (RNN) is then used for activity recognition. On the
benchmark dataset, the proposed model outperforms existing learning methods and
provides high performance compared to conventional models.
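To make the pipeline described in the abstract concrete, the sketch below applies the standard Colliding Bodies Optimization update rules (Kaveh & Mahdavi, 2014) to a binary feature-selection mask scored by an accuracy-derived objective. It is a minimal illustration, not the authors' implementation: the synthetic data, the logistic-regression proxy standing in for the paper's RNN classifier, and all hyperparameters (population size, iteration count, 0.5 threshold) are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the UCI HAR feature matrix used in the paper.
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)
n_agents, n_iter, dim = 20, 30, X.shape[1]

def fitness(position):
    """Objective derived from recognition accuracy: 1 - CV accuracy (minimised).
    The paper scores subsets with an RNN; a logistic-regression proxy keeps
    this sketch short and runnable."""
    mask = position > 0.5                 # threshold continuous position -> binary feature mask
    if not mask.any():
        return 1.0
    acc = cross_val_score(LogisticRegression(max_iter=500),
                          X[:, mask], y, cv=3).mean()
    return 1.0 - acc

pos = rng.random((n_agents, dim))         # colliding bodies = candidate masks in [0, 1]^dim
best_pos, best_fit = None, np.inf
for it in range(n_iter):
    fit = np.array([fitness(p) for p in pos])
    order = np.argsort(fit)               # best half = stationary bodies, worst half = moving bodies
    pos, fit = pos[order], fit[order]
    if fit[0] < best_fit:
        best_fit, best_pos = fit[0], pos[0].copy()
    half = n_agents // 2
    mass = 1.0 / (fit + 1e-12)            # better agents are heavier
    eps = 1.0 - it / n_iter                # coefficient of restitution decays to zero
    v = pos[:half] - pos[half:]            # moving bodies travel toward their stationary partners
    m_s, m_m = mass[:half, None], mass[half:, None]
    v_stat = (m_m * (1 + eps) * v) / (m_s + m_m)      # post-collision velocity, stationary bodies
    v_move = ((m_m - eps * m_s) * v) / (m_s + m_m)    # post-collision velocity, moving bodies
    new_stat = pos[:half] + rng.random((half, dim)) * v_stat
    new_move = pos[:half] + rng.random((half, dim)) * v_move
    pos = np.clip(np.vstack([new_stat, new_move]), 0.0, 1.0)

print("selected features:", np.flatnonzero(best_pos > 0.5))
print("proxy accuracy of best subset:", 1.0 - best_fit)
```

In the paper's setting, the fitness call would instead train and evaluate an RNN on the CBO-selected UCI HAR features; the collision update itself is unchanged.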
Related papers
- IGANN Sparse: Bridging Sparsity and Interpretability with Non-linear Insight [4.010646933005848]
IGANN Sparse is a novel machine learning model from the family of generalized additive models.
It promotes sparsity through a non-linear feature selection process during training.
This ensures interpretability through improved model sparsity without sacrificing predictive performance.
arXiv Detail & Related papers (2024-03-17T22:44:36Z) - Sample Complexity of Preference-Based Nonparametric Off-Policy
Evaluation with Deep Networks [58.469818546042696]
We study the sample efficiency of OPE with human preference and establish a statistical guarantee for it.
By appropriately selecting the size of a ReLU network, we show that one can leverage any low-dimensional manifold structure in the Markov decision process.
arXiv Detail & Related papers (2023-10-16T16:27:06Z) - FAStEN: An Efficient Adaptive Method for Feature Selection and Estimation in High-Dimensional Functional Regressions [7.674715791336311]
We propose a new, flexible and ultra-efficient approach to perform feature selection in a sparse function-on-function regression problem.
We show how to extend it to the scalar-on-function framework.
We present an application to brain fMRI data from the AOMIC PIOP1 study.
arXiv Detail & Related papers (2023-03-26T19:41:17Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Meta-Wrapper: Differentiable Wrapping Operator for User Interest
Selection in CTR Prediction [97.99938802797377]
Click-through rate (CTR) prediction, whose goal is to predict the probability of the user to click on an item, has become increasingly significant in recommender systems.
Recent deep learning models that automatically extract user interest from behavior have achieved great success.
We propose a novel approach under the framework of the wrapper method, which is named Meta-Wrapper.
arXiv Detail & Related papers (2022-06-28T03:28:15Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Fair Feature Subset Selection using Multiobjective Genetic Algorithm [0.0]
We present a feature subset selection approach that improves both fairness and accuracy objectives.
We use statistical disparity as a fairness metric and F1-Score as a metric for model performance.
Our experiments on the most commonly used fairness benchmark datasets show that using the evolutionary algorithm we can effectively explore the trade-off between fairness and accuracy.
arXiv Detail & Related papers (2022-04-30T22:51:19Z) - i-Razor: A Differentiable Neural Input Razor for Feature Selection and
Dimension Search in DNN-Based Recommender Systems [8.992480061695138]
Noisy features and inappropriate embedding dimension assignments can deteriorate the performance of recommender systems.
We propose a differentiable neural input razor (i-Razor) that enables joint optimization of feature selection and dimension search.
arXiv Detail & Related papers (2022-04-01T08:30:06Z) - Compactness Score: A Fast Filter Method for Unsupervised Feature
Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named as, Compactness Score (CSUFS) to select desired features.
Our proposed algorithm seems to be more accurate and efficient compared with existing algorithms.
arXiv Detail & Related papers (2022-01-31T13:01:37Z) - Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has been done to automate machine learning algorithms, highlighting the importance of model choice.
Addressing analytical tractability and computational feasibility together is necessary to ensure efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.