Wireless Localisation in WiFi using Novel Deep Architectures
- URL: http://arxiv.org/abs/2010.08658v1
- Date: Fri, 16 Oct 2020 22:48:29 GMT
- Title: Wireless Localisation in WiFi using Novel Deep Architectures
- Authors: Peizheng Li, Han Cui, Aftab Khan, Usman Raza, Robert Piechocki, Angela
Doufexi, Tim Farnham
- Abstract summary: This paper studies the indoor localisation of WiFi devices based on a commodity chipset and standard channel sounding.
We present a novel shallow neural network (SNN) in which features are extracted from the channel state information corresponding to WiFi subcarriers received on different antennas.
- Score: 4.541069830146568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies the indoor localisation of WiFi devices based on a
commodity chipset and standard channel sounding. First, we present a novel
shallow neural network (SNN) in which features are extracted from the channel
state information (CSI) corresponding to WiFi subcarriers received on different
antennas and used to train the model. The single-layer architecture of this
localisation neural network makes it lightweight and easy to deploy on devices
with stringent constraints on computational resources. We further investigate
the use of deep learning models for localisation and design novel convolutional
neural network (CNN) and long short-term memory (LSTM) architectures. We
extensively evaluate these localisation algorithms for continuous tracking in
indoor environments. Experimental results show that even an SNN model, given
careful handcrafted feature extraction, can achieve accurate localisation.
Meanwhile, using a well-organised architecture, the neural network models can
be trained directly with raw data from the CSI and localisation features can be
automatically extracted to achieve accurate position estimates. We also found
that the performance of neural network-based methods is directly affected by
the number of anchor access points (APs), regardless of the model architecture. With
three APs, all neural network models proposed in this paper can obtain
localisation accuracy of around 0.5 metres. In addition, the proposed deep NN
architecture reduces the data pre-processing time by 6.5 hours compared with a
shallow NN on the data collected in our testbed. In the deployment phase,
the inference time is also significantly reduced to 0.1 ms per sample. We also
demonstrate the generalisation capability of the proposed method by evaluating
models on target movement characteristics different from those on which they
were trained.
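The code below is a minimal, illustrative sketch (not the authors' released implementation) of the two model families the abstract describes: a single-hidden-layer network fed with handcrafted CSI features, and a CNN that regresses position directly from raw CSI. All dimensions (3 APs, 3 antennas, 56 subcarriers, 2-D position targets) are assumptions made for illustration only.

```python
# Minimal sketch of the two model families described in the abstract (PyTorch).
# All dimensions are assumed for illustration, not taken from the paper.
import torch
import torch.nn as nn

N_APS, N_ANT, N_SUB = 3, 3, 56          # assumed anchor/antenna/subcarrier counts
FEAT_DIM = N_APS * N_ANT * N_SUB        # handcrafted feature vector length

class ShallowLocNet(nn.Module):
    """Single-hidden-layer network fed with handcrafted CSI features
    (e.g. per-subcarrier amplitudes), in the spirit of the paper's SNN."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),        # (x, y) position estimate
        )

    def forward(self, x):                # x: (batch, FEAT_DIM)
        return self.net(x)

class ConvLocNet(nn.Module):
    """CNN that consumes raw CSI directly, treating (antenna, subcarrier)
    as a 2-D map per AP, so localisation features are learned rather than
    handcrafted."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(N_APS, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, x):                # x: (batch, N_APS, N_ANT, N_SUB)
        return self.head(self.conv(x).flatten(1))

if __name__ == "__main__":
    feats = torch.randn(8, FEAT_DIM)                 # handcrafted features
    raw_csi = torch.randn(8, N_APS, N_ANT, N_SUB)    # raw CSI tensor
    print(ShallowLocNet()(feats).shape)   # torch.Size([8, 2])
    print(ConvLocNet()(raw_csi).shape)    # torch.Size([8, 2])
```

Both sketches would be trained as regressors against ground-truth positions (e.g. with an MSE loss); the shallow variant keeps the parameter count small for constrained devices, while the convolutional variant avoids the handcrafted feature-extraction step.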
Related papers
- Simultaneous Weight and Architecture Optimization for Neural Networks [6.2241272327831485]
We introduce a novel neural network training framework that transforms the training process by learning architecture and parameters simultaneously with gradient descent.
Central to our approach is a multi-scale encoder-decoder, in which the encoder embeds pairs of neural networks with similar functionalities close to each other.
Experiments demonstrate that our framework can discover sparse and compact neural networks that maintain high performance.
arXiv Detail & Related papers (2024-10-10T19:57:36Z) - Efficient Model Adaptation for Continual Learning at the Edge [15.334881190102895]
Most machine learning (ML) systems assume stationary and matching data distributions during training and deployment.
Data distributions often shift over time due to changes in environmental factors, sensor characteristics, and tasks of interest.
This paper presents the Encoder-Adaptor-Reconfigurator (EAR) framework for efficient continual learning under domain shifts.
arXiv Detail & Related papers (2023-08-03T23:55:17Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Set-based Neural Network Encoding Without Weight Tying [91.37161634310819]
We propose a neural network weight encoding method for network property prediction.
Our approach is capable of encoding neural networks in a model zoo of mixed architecture.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
arXiv Detail & Related papers (2023-05-26T04:34:28Z) - CondenseNeXt: An Ultra-Efficient Deep Neural Network for Embedded
Systems [0.0]
A Convolutional Neural Network (CNN) is a class of Deep Neural Network (DNN) widely used in the analysis of visual images captured by an image sensor.
In this paper, we propose a neoteric variant of deep convolutional neural network architecture to ameliorate the performance of existing CNN architectures for real-time inference on embedded systems.
arXiv Detail & Related papers (2021-12-01T18:20:52Z) - An optimised deep spiking neural network architecture without gradients [7.183775638408429]
We present an end-to-end trainable modular event-driven neural architecture that uses local synaptic and threshold adaptation rules.
The architecture represents a highly abstracted model of existing Spiking Neural Network (SNN) architectures.
arXiv Detail & Related papers (2021-09-27T05:59:12Z) - Self-Learning for Received Signal Strength Map Reconstruction with
Neural Architecture Search [63.39818029362661]
We present a model based on Neural Architecture Search (NAS) and self-learning for received signal strength (RSS) map reconstruction.
The approach first finds an optimal NN architecture and simultaneously trains the deduced model on ground-truth measurements of a given RSS map.
Experimental results show that the signal predictions of this second model outperform non-learning-based state-of-the-art techniques and NN models with no architecture search.
arXiv Detail & Related papers (2021-05-17T12:19:22Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - Local Critic Training for Model-Parallel Learning of Deep Neural
Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z) - Multi-Tones' Phase Coding (MTPC) of Interaural Time Difference by
Spiking Neural Network [68.43026108936029]
We propose a pure spiking neural network (SNN) based computational model for precise sound localization in the noisy real-world environment.
We implement this algorithm in a real-time robotic system with a microphone array.
The experimental results show a mean azimuth error of 13 degrees, which surpasses the accuracy of the other biologically plausible neuromorphic approach to sound source localization.
arXiv Detail & Related papers (2020-07-07T08:22:56Z) - Centimeter-Level Indoor Localization using Channel State Information
with Recurrent Neural Networks [12.193558591962754]
This paper proposes a neural network method for centimeter-level indoor positioning using real CSI data collected from linear antennas.
It uses the amplitude of the channel response or a correlation matrix as the input, which greatly reduces the data size and suppresses noise.
It also exploits the consistency of the user's motion trajectory via a Recurrent Neural Network (RNN) and signal-to-noise ratio (SNR) information, which further improves estimation accuracy (a minimal sketch of this idea follows the list).
arXiv Detail & Related papers (2020-02-04T17:10:18Z)
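The last entry above pairs CSI-derived features with a recurrent model so that the continuity of the user's trajectory constrains the position estimates. The following is a minimal sketch of that idea under assumed dimensions (56-element amplitude features, a 2-D position per time step); it is not the cited paper's code.

```python
# Minimal sketch of an RNN-based CSI localiser: a sequence of CSI amplitude
# features is fed to an LSTM whose hidden state carries trajectory continuity.
# All dimensions are assumptions made for illustration.
import torch
import torch.nn as nn

class CsiRnnLocaliser(nn.Module):
    def __init__(self, feat_dim=56, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)    # (x, y) estimate per time step

    def forward(self, seq):                 # seq: (batch, time, feat_dim)
        out, _ = self.rnn(seq)
        return self.head(out)               # (batch, time, 2)

if __name__ == "__main__":
    seq = torch.randn(4, 20, 56)            # 4 trajectories, 20 CSI samples each
    print(CsiRnnLocaliser()(seq).shape)     # torch.Size([4, 20, 2])
```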