MASSFormer: Mobility-Aware Spectrum Sensing using Transformer-Driven
Tiered Structure
- URL: http://arxiv.org/abs/2409.17546v1
- Date: Thu, 26 Sep 2024 05:25:25 GMT
- Title: MASSFormer: Mobility-Aware Spectrum Sensing using Transformer-Driven
Tiered Structure
- Authors: Dimpal Janu, Sandeep Mandia, Kuldeep Singh and Sandeep Kumar
- Abstract summary: We develop a mobility-aware transformer-driven structure (MASSFormer) based cooperative sensing method.
Our method considers a dynamic scenario involving mobile primary users (PUs) and secondary users (SUs).
The proposed method is tested under imperfect reporting channel scenarios to show robustness.
- Score: 3.6194127685460553
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we develop a novel mobility-aware transformer-driven tiered
structure (MASSFormer) based cooperative spectrum sensing method that
effectively models the spatio-temporal dynamics of user movements. Unlike
existing methods, our method considers a dynamic scenario involving mobile
primary users (PUs) and secondary users (SUs), and addresses the complexities
introduced by user mobility. The transformer architecture utilizes an attention
mechanism, enabling the proposed method to adeptly model the temporal dynamics
of user mobility by effectively capturing long-range dependencies within the
input data. The proposed method first computes tokens from the sequence of
covariance matrices (CMs) for each SU and processes them in parallel using the
SU transformer network to learn the spatio-temporal features at the SU level.
Subsequently, the collaborative transformer network learns the group-level PU
state from all SU-level feature representations. The attention-based sequence
pooling method followed by the transformer encoder adjusts the contributions of
all tokens. The main goal of predicting the PU states at both the SU level and
the group level is to further improve detection performance. We conducted
extensive simulations and compared the detection performance of different
spectrum sensing (SS) methods. The proposed method is also tested under
imperfect reporting channel scenarios to demonstrate its robustness. The
efficacy of our method is validated
with the simulation results demonstrating its higher performance compared with
existing methods in terms of detection probability, sensing error, and
classification accuracy.
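The tiered structure described in the abstract (per-SU tokens from covariance matrices, a parallel SU-level transformer, attention-based sequence pooling, and a collaborative group-level transformer) can be sketched minimally as follows. This is an illustrative sketch with random, untrained weights and hypothetical dimensions (`n_su`, `T`, `n_ant`, `d`), not the authors' implementation; the single-head attention and pooling functions stand in for the full transformer blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product attention over a token sequence X: (T, d).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)
    return A @ V

def seq_pool(H, w):
    # Attention-based sequence pooling: softmax scores weight each token.
    alpha = softmax(H @ w, axis=0)   # (T,)
    return alpha @ H                 # (d,)

n_su, T, n_ant, d = 4, 6, 3, 8       # SUs, sensing slots, antennas, token dim

# Hypothetical (untrained) parameters.
W_tok = rng.standard_normal((n_ant * n_ant, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
w_pool = rng.standard_normal(d)

su_features = []
for _ in range(n_su):
    # Sequence of sample covariance matrices (CMs) for one SU.
    cms = [np.cov(rng.standard_normal((n_ant, 32))) for _ in range(T)]
    tokens = np.stack([c.flatten() for c in cms]) @ W_tok  # (T, d)
    H = self_attention(tokens, Wq, Wk, Wv)                 # SU-level transformer (sketch)
    su_features.append(seq_pool(H, w_pool))                # (d,) SU-level feature

# Group level: collaborative transformer over all SU-level features.
G = self_attention(np.stack(su_features), Wq, Wk, Wv)      # (n_su, d)
group_logit = seq_pool(G, w_pool) @ rng.standard_normal(d) # scalar PU-state score
pu_active = group_logit > 0
print("group-level PU state:", pu_active)
```

Each SU is processed independently (and hence in parallel), and only the compact SU-level features are exchanged for the group-level decision, which is what makes the tiered design attractive under imperfect reporting channels.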
Related papers
- PointMT: Efficient Point Cloud Analysis with Hybrid MLP-Transformer Architecture [46.266960248570086]
This study tackles the quadratic complexity of the self-attention mechanism by introducing a linear-complexity local attention mechanism for effective feature aggregation.
We also introduce a parameter-free channel temperature adaptation mechanism that adaptively adjusts the attention weight distribution in each channel.
We show that PointMT achieves performance comparable to state-of-the-art methods while maintaining an optimal balance between efficiency and accuracy.
arXiv Detail & Related papers (2024-08-10T10:16:03Z) - Pyramid Hierarchical Transformer for Hyperspectral Image Classification [1.9427851979929982]
We propose a pyramid-based hierarchical transformer (PyFormer)
This innovative approach organizes input data hierarchically into segments, each representing distinct abstraction levels.
Results underscore the superiority of the proposed method over traditional approaches.
arXiv Detail & Related papers (2024-04-23T11:41:19Z) - Real-Time Motion Prediction via Heterogeneous Polyline Transformer with
Relative Pose Encoding [121.08841110022607]
Existing agent-centric methods have demonstrated outstanding performance on public benchmarks.
We introduce the K-nearest neighbor attention with relative pose encoding (KNARPE), a novel attention mechanism allowing the pairwise-relative representation to be used by Transformers.
By sharing contexts among agents and reusing the unchanged contexts, our approach is as efficient as scene-centric methods, while performing on par with state-of-the-art agent-centric methods.
arXiv Detail & Related papers (2023-10-19T17:59:01Z) - Sequence-to-Sequence Model with Transformer-based Attention Mechanism
and Temporal Pooling for Non-Intrusive Load Monitoring [0.0]
The paper aims to improve the accuracy of Non-Intrusive Load Monitoring (NILM) by using a deep learning-based method.
The proposed method uses a Seq2Seq model with a transformer-based attention mechanism to capture the long-term dependencies of NILM data.
arXiv Detail & Related papers (2023-06-08T08:04:56Z) - Transformers as Statisticians: Provable In-Context Learning with
In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform ICL.
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z) - Optimizing Non-Autoregressive Transformers with Contrastive Learning [74.46714706658517]
Non-autoregressive Transformers (NATs) reduce the inference latency of Autoregressive Transformers (ATs) by predicting words all at once rather than in sequential order.
In this paper, we propose to ease the difficulty of modality learning via sampling from the model distribution instead of the data distribution.
arXiv Detail & Related papers (2023-05-23T04:20:13Z) - STMT: A Spatial-Temporal Mesh Transformer for MoCap-Based Action Recognition [50.064502884594376]
We study the problem of human action recognition using motion capture (MoCap) sequences.
We propose a novel Spatial-Temporal Mesh Transformer (STMT) to directly model the mesh sequences.
The proposed method achieves state-of-the-art performance compared to skeleton-based and point-cloud-based models.
arXiv Detail & Related papers (2023-03-31T16:19:27Z) - SeqCo-DETR: Sequence Consistency Training for Self-Supervised Object
Detection with Transformers [18.803007408124156]
We propose SeqCo-DETR, a Sequence Consistency-based self-supervised method for object DEtection with TRansformers.
Our method achieves state-of-the-art results on MS COCO (45.8 AP) and PASCAL VOC (64.1 AP), demonstrating the effectiveness of our approach.
arXiv Detail & Related papers (2023-03-15T09:36:58Z) - ProFormer: Learning Data-efficient Representations of Body Movement with
Prototype-based Feature Augmentation and Visual Transformers [31.908276711898548]
Methods for data-efficient recognition from body poses increasingly leverage skeleton sequences structured as image-like arrays.
We look at this paradigm from the perspective of transformer networks, for the first time exploring visual transformers as data-efficient encoders of skeleton movement.
In our pipeline, body pose sequences cast as image-like representations are converted into patch embeddings and then passed to a visual transformer backbone optimized with deep metric learning.
arXiv Detail & Related papers (2022-02-23T11:11:54Z) - Real-Time Scene Text Detection with Differentiable Binarization and
Adaptive Scale Fusion [62.269219152425556]
Segmentation-based scene text detection methods have drawn extensive attention in the scene text detection field.
We propose a Differentiable Binarization (DB) module that integrates the binarization process into a segmentation network.
An efficient Adaptive Scale Fusion (ASF) module is proposed to improve the scale robustness by fusing features of different scales adaptively.
arXiv Detail & Related papers (2022-02-21T15:30:14Z) - CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the advantages of leveraging detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.