On Neural Inertial Classification Networks for Pedestrian Activity Recognition
- URL: http://arxiv.org/abs/2502.17520v1
- Date: Sun, 23 Feb 2025 08:15:26 GMT
- Title: On Neural Inertial Classification Networks for Pedestrian Activity Recognition
- Authors: Zeev Yampolsky, Ofir Kruzel, Victoria Khalfin Fekson, Itzik Klein
- Abstract summary: Inertial sensors are crucial for recognizing pedestrian activity. Recent advances in deep learning have greatly improved inertial sensing performance and robustness. Different domains and platforms use deep-learning techniques to enhance network performance, but there is no common benchmark. The aim of this paper is to fill this gap by defining and analyzing ten data-driven techniques for improving neural inertial classification networks.
- Score: 2.374912052693646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inertial sensors are crucial for recognizing pedestrian activity. Recent advances in deep learning have greatly improved inertial sensing performance and robustness. Different domains and platforms use deep-learning techniques to enhance network performance, but there is no common benchmark; such a benchmark is crucial for fair comparison and evaluation within a standardized framework. This paper aims to fill that gap by defining and analyzing ten data-driven techniques for improving neural inertial classification networks. To accomplish this, we focus on three aspects of neural networks: network architecture, data augmentation, and data preprocessing. The experiments were conducted across four datasets collected from 78 participants. In total, over 936 minutes of inertial data sampled between 50 and 200 Hz were analyzed. Data augmentation through rotation and a multi-head architecture consistently yield the most significant improvements. Additionally, this study outlines benchmarking strategies for enhancing neural inertial classification networks.
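To make the two headline techniques concrete, here is a minimal sketch of rotation augmentation and a multi-head network for six-axis inertial windows. The shapes, layer sizes, and names (`augment_with_rotation`, `MultiHeadInertialNet`) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only -- not the paper's implementation.
# Windows are assumed to have shape (N, T, 6): 3 accel + 3 gyro axes.
import numpy as np
import torch
import torch.nn as nn
from scipy.spatial.transform import Rotation


def augment_with_rotation(windows: np.ndarray, seed: int = 0) -> np.ndarray:
    """Apply one random 3D rotation per window to both sensor triads."""
    rng = np.random.default_rng(seed)
    out = np.empty_like(windows)
    for i, w in enumerate(windows):
        rot = Rotation.random(random_state=rng)  # uniform rotation in SO(3)
        out[i, :, :3] = rot.apply(w[:, :3])      # rotate accelerometer x, y, z
        out[i, :, 3:] = rot.apply(w[:, 3:])      # same rotation for gyroscope
    return out


class MultiHeadInertialNet(nn.Module):
    """One convolutional head per sensor triad, fused before classification."""

    def __init__(self, num_classes: int):
        super().__init__()

        def head() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv1d(3, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
                nn.Flatten(),
            )

        self.accel_head = head()
        self.gyro_head = head()
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 6, T), channels first as expected by Conv1d
        feats = torch.cat(
            [self.accel_head(x[:, :3]), self.gyro_head(x[:, 3:])], dim=1
        )
        return self.classifier(feats)
```

Applying the same rotation to both triads keeps a sample physically consistent while simulating arbitrary device orientations, which is plausibly why rotation augmentation transfers well across carriers and mounting positions.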
Related papers
- NIDS Neural Networks Using Sliding Time Window Data Processing with Trainable Activations and its Generalization Capability [0.0]
This paper presents neural networks for network intrusion detection systems (NIDS) that operate on flow data preprocessed with a sliding time window (sketched after this entry).
It requires only eleven features, which do not rely on deep packet inspection, can be found in most NIDS datasets, and are easily obtained from conventional flow collectors.
The reported training accuracy exceeds 99% for the proposed method with as few as twenty neural network input features.
arXiv Detail & Related papers (2024-10-24T11:36:19Z)
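As referenced in the entry above, a rough sketch of sliding-time-window preprocessing over per-flow feature records; the window length, stride, and example sizes are assumptions, not the paper's settings.

```python
# Hypothetical sliding-time-window preprocessing; parameters are assumptions.
import numpy as np


def sliding_windows(features: np.ndarray, window: int = 20, stride: int = 5) -> np.ndarray:
    """Segment a (num_records, num_features) stream into overlapping windows.

    Returns shape (num_windows, window, num_features), suitable for a sequence
    model or for flattening into a fully-connected network's input.
    """
    n = features.shape[0]
    starts = range(0, n - window + 1, stride)
    return np.stack([features[s:s + window] for s in starts])


# Example: a stream of flow records with eleven features each.
stream = np.random.rand(1000, 11)
windows = sliding_windows(stream)  # shape: (197, 20, 11)
```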
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Robust Localization of Key Fob Using Channel Impulse Response of Ultra Wide Band Sensors for Keyless Entry Systems [12.313730356985019]
Using neural networks to localize a key fob in and around a car as a security feature for keyless entry is fast emerging.
The model's performance improved by 67% at certain ranges of adversarial magnitude for the fast gradient sign method, and by 37% each for the basic iterative method and the projected gradient descent method.
arXiv Detail & Related papers (2024-01-16T22:35:14Z)
- Data Augmentations in Deep Weight Spaces [89.45272760013928]
We introduce a novel augmentation scheme based on the Mixup method (shown in its generic form after this entry).
We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate.
arXiv Detail & Related papers (2023-11-15T10:43:13Z)
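For context on the entry above, a sketch of the standard Mixup operation it builds on; the weight-space adaptation is that paper's contribution and is not reproduced here.

```python
# Standard Mixup (convex combination of example pairs and their labels).
# The paper above adapts the idea to weight spaces; this is only the generic form.
import numpy as np


def mixup(x: np.ndarray, y: np.ndarray, alpha: float = 0.2, seed: int = 0):
    """Mix random pairs of examples; y is assumed one-hot or soft-labeled."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)      # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))    # random partner for each example
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```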
- Entropy-based Guidance of Deep Neural Networks for Accelerated Convergence and Improved Performance [0.8749675983608172]
We derive new mathematical results to measure the changes in entropy as fully-connected and convolutional neural networks process data.
By measuring how entropy changes as networks process data, patterns critical to a well-performing network can be visualized and identified (a basic entropy estimate is sketched after this entry).
Experiments in image compression, image classification, and image segmentation on benchmark datasets demonstrate these losses guide neural networks to learn rich latent data representations in fewer dimensions.
arXiv Detail & Related papers (2023-08-28T23:33:07Z)
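As a hedged illustration of the measurement the entry above relies on, a simple histogram estimate of activation entropy; the paper's actual estimator may differ.

```python
# Histogram-based entropy estimate for a layer's activations -- a simplification;
# the paper's mathematical results may use a different estimator.
import numpy as np


def activation_entropy(activations: np.ndarray, bins: int = 64) -> float:
    """Shannon entropy (bits) of a layer's flattened activation values."""
    hist, _ = np.histogram(activations.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())


# Comparing this value layer by layer shows how much each layer compresses data.
```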
- Hidden Classification Layers: Enhancing linear separability between classes in neural networks layers [0.0]
We investigate the impact of a training approach on deep network performance.
We propose a neural network architecture that induces an error function involving the outputs of all the network layers (sketched after this entry).
arXiv Detail & Related papers (2023-06-09T10:52:49Z)
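A minimal sketch of the idea in the entry above: attach an auxiliary linear classifier to every hidden layer and sum the per-layer cross-entropy terms into a single error function. Depth and widths here are arbitrary assumptions, not the paper's architecture.

```python
# Sketch of an error function involving the outputs of all network layers.
# Depth and layer widths are illustrative, not the paper's architecture.
import torch
import torch.nn as nn


class PerLayerClassifierNet(nn.Module):
    def __init__(self, in_dim: int, hidden: int, num_classes: int, depth: int = 3):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(in_dim if i == 0 else hidden, hidden) for i in range(depth)]
        )
        # One auxiliary classification head per hidden layer.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, num_classes) for _ in range(depth)]
        )

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        logits = []
        for layer, head in zip(self.layers, self.heads):
            x = torch.relu(layer(x))
            logits.append(head(x))  # every hidden layer contributes a prediction
        return logits


def total_loss(all_logits: list[torch.Tensor], target: torch.Tensor) -> torch.Tensor:
    """Sum cross-entropy over all layers, encouraging early linear separability."""
    ce = nn.CrossEntropyLoss()
    return sum(ce(logits, target) for logits in all_logits)
```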
- Convolution, aggregation and attention based deep neural networks for accelerating simulations in mechanics [1.0154623955833253]
We demonstrate three types of neural network architectures for efficient learning of deformations of solid bodies.
The first two are based on the recently proposed CNN U-NET and MAgNET frameworks, which have shown promising performance for learning on mesh-based data.
The third architecture is Perceiver IO, a very recent architecture that belongs to the family of attention-based neural networks.
arXiv Detail & Related papers (2022-12-01T13:10:56Z)
- A Law of Data Separation in Deep Learning [41.58856318262069]
We study the fundamental question of how deep neural networks process data in the intermediate layers.
Our finding is a simple and quantitative law that governs how deep neural networks separate data according to class membership.
arXiv Detail & Related papers (2022-10-31T02:25:38Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- A Comprehensive Survey on Community Detection with Deep Learning [93.40332347374712]
A community reveals features and connections among its members that differ from those in other communities in a network.
This survey devises and proposes a new taxonomy covering different categories of the state-of-the-art methods.
The main category, i.e., deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks and autoencoders.
arXiv Detail & Related papers (2021-05-26T14:37:07Z)
- Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training [126.49572353148262]
We study two factors in neural network training: data parallelism and sparsity.
Despite their promising benefits, understanding of their effects on neural network training remains elusive.
arXiv Detail & Related papers (2020-03-25T10:49:22Z)
- Subset Sampling For Progressive Neural Network Learning [106.12874293597754]
Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data.
We propose to speed up this process by exploiting subsets of training data at each incremental training step (sketched after this entry).
Experimental results in object, scene and face recognition problems demonstrate that the proposed approach speeds up the optimization procedure considerably.
arXiv Detail & Related papers (2020-02-17T18:57:33Z)
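Finally, as referenced in the last entry, a loose sketch of drawing a fresh training subset at each incremental step; the uniform sampling here is an assumption, one of several strategies such a method could use.

```python
# Hypothetical per-step subset sampling for progressive network learning;
# the sampling strategies actually evaluated in the paper may differ.
import numpy as np


def sample_step_subset(num_examples: int, fraction: float, step: int) -> np.ndarray:
    """Return indices of a random training subset for one incremental step."""
    rng = np.random.default_rng(step)  # reproducible, different per step
    size = max(1, int(fraction * num_examples))
    return rng.choice(num_examples, size=size, replace=False)


# e.g., optimize each newly added topology block on 25% of the training data:
indices = sample_step_subset(num_examples=50_000, fraction=0.25, step=3)
```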