Complexity Reduction in Machine Learning-Based Wireless Positioning:
Minimum Description Features
- URL: http://arxiv.org/abs/2402.09580v1
- Date: Wed, 14 Feb 2024 21:03:08 GMT
- Title: Complexity Reduction in Machine Learning-Based Wireless Positioning:
Minimum Description Features
- Authors: Myeung Suk Oh, Anindya Bijoy Das, Taejoon Kim, David J. Love, and
Christopher G. Brinton
- Abstract summary: We design a positioning neural network (P-NN) that substantially reduces the complexity of deep learning-based wireless positioning algorithms.
Our feature selection is based on maximum power measurements and their temporal locations to convey information needed to conduct WP.
Numerical results show that P-NN achieves a significant advantage in performance-complexity tradeoff over deep learning baselines.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A recent line of research has been investigating deep learning approaches to
wireless positioning (WP). Although these WP algorithms have demonstrated high
accuracy and robust performance against diverse channel conditions, they also
have a major drawback: they require processing high-dimensional features, which
can be prohibitive for mobile applications. In this work, we design a
positioning neural network (P-NN) that substantially reduces the complexity of
deep learning-based WP through carefully crafted minimum description features.
Our feature selection is based on maximum power measurements and their temporal
locations to convey information needed to conduct WP. We also develop a novel
methodology for adaptively selecting the size of the feature space, which
balances the expected amount of useful information against classification
capability, quantified using information-theoretic measures on the signal bin
selection. Numerical results show that P-NN achieves a significant advantage in
performance-complexity tradeoff over deep learning baselines that leverage the
full power delay profile (PDP).
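The core feature idea in the abstract can be illustrated with a small sketch: instead of feeding the full power delay profile (PDP) to the network, keep only the k largest power measurements together with their temporal bin locations. The function name and toy PDP below are hypothetical, not taken from the paper; this is a minimal illustration of the stated feature selection, assuming the PDP is a 1-D vector of per-bin powers.

```python
import numpy as np

def min_description_features(pdp, k):
    """Illustrative sketch: keep the k strongest power bins of a power
    delay profile (PDP) and their temporal bin indices, rather than
    the full high-dimensional PDP."""
    idx = np.argsort(pdp)[::-1][:k]  # indices of the k strongest bins
    idx = np.sort(idx)               # restore temporal order
    return pdp[idx], idx             # (max powers, temporal locations)

# Toy PDP with 8 delay bins; keep the 3 strongest measurements.
pdp = np.array([0.1, 0.9, 0.2, 0.7, 0.05, 0.3, 0.8, 0.0])
powers, times = min_description_features(pdp, k=3)
```

The feature vector shrinks from the full PDP length to 2k values (powers plus locations), which is the complexity reduction the abstract describes; the paper's actual adaptive choice of k uses information-theoretic measures not sketched here.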
Related papers
- Minimum Description Feature Selection for Complexity Reduction in Machine Learning-based Wireless Positioning
We design a novel positioning neural network (P-NN) that utilizes the minimum description features to substantially reduce the complexity of deep learning-based WP.
We improve P-NN's learning ability by intelligently processing two different types of inputs: sparse image and measurement matrices.
Numerical results show that P-NN achieves a significant advantage in performance-complexity tradeoff over deep learning baselines.
arXiv Detail & Related papers (2024-04-21T21:47:54Z) - Adaptive Self-supervision Algorithms for Physics-informed Neural
Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
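The adaptive collocation idea in this summary (allocate more collocation points where the model's error is higher) can be sketched as error-proportional resampling. The function name, domain, and toy residual below are hypothetical stand-ins, not the paper's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_collocation(residual_fn, n_points, n_candidates=1000,
                         domain=(0.0, 1.0)):
    """Illustrative sketch: draw candidate points uniformly, then
    resample them with probability proportional to the model's
    residual error at each candidate."""
    lo, hi = domain
    cand = rng.uniform(lo, hi, n_candidates)
    err = np.abs(residual_fn(cand))
    p = err / err.sum()  # higher error -> higher sampling weight
    return rng.choice(cand, size=n_points, p=p)

# Toy residual that peaks near x = 0.8: sampled collocation points
# concentrate around the high-error region.
pts = adaptive_collocation(lambda x: np.exp(-50 * (x - 0.8) ** 2), 200)
```

Under this scheme the collocation budget progressively shifts toward regions where the PINN's residual loss is largest, which is the behavior the summary describes.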
arXiv Detail & Related papers (2022-07-08T18:17:06Z) - Semi-Parametric Inducing Point Networks and Neural Processes [15.948270454686197]
Semi-parametric inducing point networks (SPIN) can query the training set at inference time in a compute-efficient manner.
SPIN attains linear complexity via a cross-attention mechanism between datapoints inspired by inducing point methods.
In our experiments, SPIN reduces memory requirements, improves accuracy across a range of meta-learning tasks, and improves state-of-the-art performance on an important practical problem, genotype imputation.
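The linear-complexity claim for SPIN rests on cross-attention from a small, fixed set of inducing points to the n datapoints, giving O(n·m) cost instead of the O(n²) of full self-attention. The minimal scaled dot-product sketch below is a generic illustration of that mechanism, not SPIN's actual architecture.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Minimal scaled dot-product cross-attention (illustrative only)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)  # (m, n): linear in n for fixed m
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over datapoints
    return weights @ values                 # (m, d) summary of the dataset

# m = 4 inducing points attend over n = 100 datapoints: O(n*m), not O(n^2).
n, m, d = 100, 4, 8
data = np.random.default_rng(1).normal(size=(n, d))
inducing = np.random.default_rng(2).normal(size=(m, d))
summary = cross_attention(inducing, data, data)
```

Because m stays constant as the training set grows, querying the training set at inference time scales linearly in n, which is the compute-efficiency property the summary highlights.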
arXiv Detail & Related papers (2022-05-24T01:42:46Z) - Semi-supervised Network Embedding with Differentiable Deep Quantisation [81.49184987430333]
We develop d-SNEQ, a differentiable quantisation method for network embedding.
d-SNEQ incorporates a rank loss to equip the learned quantisation codes with rich high-order information.
It is able to substantially compress the size of trained embeddings, thus reducing storage footprint and accelerating retrieval speed.
arXiv Detail & Related papers (2021-08-20T11:53:05Z) - JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data [86.8949732640035]
We propose JUMBO, an MBO algorithm that sidesteps limitations by querying additional data.
We show that it achieves no-regret under conditions analogous to GP-UCB.
Empirically, we demonstrate significant performance improvements over existing approaches on two real-world optimization problems.
arXiv Detail & Related papers (2021-06-02T05:03:38Z) - Contextual HyperNetworks for Novel Feature Adaptation [43.49619456740745]
Contextual HyperNetwork (CHN) generates parameters for extending the base model to a new feature.
At prediction time, the CHN requires only a single forward pass through a neural network, yielding a significant speed-up.
We show that this system obtains improved few-shot learning performance for novel features over existing imputation and meta-learning baselines.
arXiv Detail & Related papers (2021-04-12T23:19:49Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local
Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z) - Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and
On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data to support accurate and reliable pedestrian navigation is a key component for emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.