RSSI Estimation for Constrained Indoor Wireless Networks using ANN
- URL: http://arxiv.org/abs/2404.15337v1
- Date: Wed, 10 Apr 2024 02:48:13 GMT
- Title: RSSI Estimation for Constrained Indoor Wireless Networks using ANN
- Authors: Samrah Arif, M. Arif Khan, Sabih Ur Rehman
- Abstract summary: This research establishes two distinct LP-IoT wireless channel estimation models using Artificial Neural Networks (ANN).
Both models have been constructed to enhance LP-IoT communication by lowering the estimation error in the LP-IoT wireless channel.
The findings demonstrate that our suggested approaches attain remarkable precision in channel estimation, with an MSE improvement of $88.29\%$ for the Feature-based model and $97.46\%$ for the Sequence-based model over existing research.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the expanding field of the Internet of Things (IoT), wireless channel estimation is a significant challenge. This is especially true for low-power IoT (LP-IoT) communication, where efficiency and accuracy are extremely important. This research establishes two distinct LP-IoT wireless channel estimation models using Artificial Neural Networks (ANN): a Feature-based ANN model and a Sequence-based ANN model. Both models have been constructed to enhance LP-IoT communication by lowering the estimation error in the LP-IoT wireless channel. The Feature-based model aims to capture complex patterns in measured Received Signal Strength Indicator (RSSI) data using environmental characteristics. The Sequence-based approach utilises predetermined categorisation techniques to estimate the RSSI sequence of specifically selected environment characteristics. The findings demonstrate that our suggested approaches attain remarkable precision in channel estimation, with an MSE improvement of $88.29\%$ for the Feature-based model and $97.46\%$ for the Sequence-based model over existing research. Additionally, the comparative analysis of these techniques with traditional and other Deep Learning (DL)-based techniques also highlights the superior performance of our developed models and their potential in real-world IoT applications.
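To make the two modelling styles above concrete, the following is a minimal sketch, assuming a PyTorch implementation, of a feature-based estimator (environmental features to RSSI) and a sequence-based estimator (a short window of past RSSI samples to the next RSSI value). The feature count, window length, layer sizes, and training loop are illustrative assumptions rather than the architectures reported in the paper, and the benchmark MSE used in the percentage-improvement calculation is a placeholder.

```python
import torch
import torch.nn as nn

# Feature-based estimator: maps environmental characteristics
# (e.g. distance, transmitter/receiver location, room label) to an RSSI value.
# The 3-feature input and hidden sizes are illustrative assumptions.
class FeatureANN(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):              # x: (batch, n_features)
        return self.net(x)             # -> (batch, 1) estimated RSSI in dBm


# Sequence-based estimator: predicts the next RSSI sample from a window of
# previous samples (window length and LSTM size are assumptions).
class SequenceANN(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # regress from the last time step


if __name__ == "__main__":
    # Synthetic stand-in data; the paper uses measured indoor RSSI instead.
    feats = torch.randn(256, 3)
    rssi = -60.0 + 5.0 * torch.randn(256, 1)

    model, loss_fn = FeatureANN(), nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):               # short illustrative training loop
        opt.zero_grad()
        loss = loss_fn(model(feats), rssi)
        loss.backward()
        opt.step()

    # Percentage MSE improvement over a benchmark, as quoted in the abstract:
    # improvement = (MSE_benchmark - MSE_model) / MSE_benchmark * 100
    mse_model = loss_fn(model(feats), rssi).item()
    mse_benchmark = 25.0               # placeholder value, not from the paper
    print(f"MSE improvement over benchmark: "
          f"{(mse_benchmark - mse_model) / mse_benchmark * 100:.2f}%")

    # Forward pass of the sequence model on a batch of 10-sample RSSI windows.
    seq_model = SequenceANN()
    windows = torch.randn(8, 10, 1)
    print(seq_model(windows).shape)    # torch.Size([8, 1])
```

In practice the sequence model would be trained with the same MSE objective on sliding windows cut from the measured RSSI trace.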
Related papers
- Learning Latent Wireless Dynamics from Channel State Information [31.080933663717257]
We propose a novel data-driven machine learning (ML) technique to model and predict the dynamics of the wireless propagation environment in latent space.
We present numerical evaluations on measured data and show that the proposed JEPA displays a two-fold increase in accuracy over benchmarks.
arXiv Detail & Related papers (2024-09-16T07:15:46Z)
- Deep learning approaches to indoor wireless channel estimation for low-power communication [0.0]
This paper presents two Fully Connected Neural Network (FCNN)-based Low-Power IoT (LP-IoT) channel estimation models, leveraging RSSI for accurate channel estimation in LP-IoT communication.
Our Model A exhibits a remarkable 99.02% reduction in Mean Squared Error (MSE), and Model B demonstrates a notable 90.03% MSE reduction compared to the benchmarks set by current studies.
arXiv Detail & Related papers (2024-05-21T00:36:34Z)
- Machine Learning-Based Channel Prediction for RIS-assisted MIMO Systems With Channel Aging [11.867884158309373]
Reconfigurable intelligent surfaces (RISs) have emerged as a promising technology to enhance the performance of sixth-generation (6G) and beyond communication systems.
The passive nature of RISs and their large number of reflecting elements pose challenges to the channel estimation process.
We propose an extended channel estimation framework for RIS-assisted multiple-input multiple-output (MIMO) systems based on a convolutional neural network (CNN) integrated with an autoregressive (AR) predictor.
arXiv Detail & Related papers (2024-05-09T19:45:49Z)
- Deep-Learning-Based Channel Estimation for IRS-Assisted ISAC System [30.354309578350584]
Integrated sensing and communication (ISAC) and intelligent reflecting surface (IRS) are viewed as promising technologies for future generations of wireless networks.
This paper investigates the channel estimation problem in an IRS-assisted ISAC system.
A deep-learning framework is proposed to estimate the sensing and communication (S&C) channels in such a system.
arXiv Detail & Related papers (2024-01-29T14:14:39Z)
- Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL receiver outperforms the SIC receiver with imperfect CSIR by a significant margin.
arXiv Detail & Related papers (2022-05-02T12:23:55Z)
- Learning to Estimate RIS-Aided mmWave Channels [50.15279409856091]
We focus on uplink cascaded channel estimation, where known and fixed base station combining and RIS phase control matrices are considered for collecting observations.
To boost the estimation performance and reduce the training overhead, the inherent channel sparsity of mmWave channels is leveraged in the deep unfolding method.
It is verified that the proposed deep unfolding network architecture can outperform the least squares (LS) method with a relatively smaller training overhead and online computational complexity.
arXiv Detail & Related papers (2021-07-27T06:57:56Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z)
- LoRD-Net: Unfolded Deep Detection Network with Low-Resolution Receivers [104.01415343139901]
We propose a deep detector entitled LoRD-Net for recovering information symbols from one-bit measurements.
LoRD-Net has a task-based architecture dedicated to recovering the underlying signal of interest.
We evaluate the proposed receiver architecture for one-bit signal recovery in wireless communications.
arXiv Detail & Related papers (2021-02-05T04:26:05Z)
- Robust Attack Detection Approach for IIoT Using Ensemble Classifier [0.0]
The objective is to develop a two-phase anomaly detection model to enhance the reliability of an IIoT network.
The proposed model is tested on standard IoT attack datasets such as WUSTL_IIOT-2018, N_BaIoT, and Bot_IoT.
The results also demonstrate that the proposed model outperforms traditional techniques and thus improves the reliability of an IIoT network.
arXiv Detail & Related papers (2021-01-30T07:21:44Z)
- Data-Driven Random Access Optimization in Multi-Cell IoT Networks with NOMA [78.60275748518589]
Non-orthogonal multiple access (NOMA) is a key technology to enable massive machine type communications (mMTC) in 5G networks and beyond.
In this paper, NOMA is applied to improve the random access efficiency in high-density spatially-distributed multi-cell wireless IoT networks.
A novel formulation of random channel access management is proposed, in which the transmission probability of each IoT device is tuned to maximize the geometric mean of users' expected capacity (a schematic form of this objective is sketched below, after the list).
arXiv Detail & Related papers (2021-01-02T15:21:08Z)
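As a rough formalisation of the random-access objective in the NOMA entry above (the notation $N$, $p_i$, and $\mathbb{E}[C_i(\mathbf{p})]$ is assumed here, not taken from that paper), tuning the per-device transmission probabilities to maximise the geometric mean of the users' expected capacities can be written as
$$\max_{\mathbf{p}\,\in\,[0,1]^{N}}\;\Bigl(\prod_{i=1}^{N}\mathbb{E}\bigl[C_{i}(\mathbf{p})\bigr]\Bigr)^{1/N},$$
where $p_i$ is the transmission probability of device $i$ and $C_i(\mathbf{p})$ its achievable capacity given all devices' access probabilities. Maximising the geometric mean rather than the sum discourages solutions that starve individual devices.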
This list is automatically generated from the titles and abstracts of the papers on this site.