Signal Transformer: Complex-valued Attention and Meta-Learning for
Signal Recognition
- URL: http://arxiv.org/abs/2106.04392v1
- Date: Sat, 5 Jun 2021 03:57:41 GMT
- Title: Signal Transformer: Complex-valued Attention and Meta-Learning for
Signal Recognition
- Authors: Yihong Dong, Ying Peng, Muqiao Yang, Songtao Lu and Qingjiang Shi
- Abstract summary: We propose a Complex-valued Attentional MEta Learner (CAMEL) for few-shot signal recognition, the first complex-valued MAML that can find first-order stationary points of general nonconvex problems with theoretical convergence guarantees.
Experiments show the superiority of the proposed method on signal recognition tasks where training data are scarce.
- Score: 33.178794056273304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have been shown as a class of useful tools for
addressing signal recognition issues in recent years, especially for
identifying the nonlinear feature structures of signals. However, this power of
most deep learning techniques heavily relies on an abundant amount of training
data, so the performance of classic neural nets decreases sharply when the
number of training data samples is small or unseen data are presented in the
testing phase. This calls for an advanced strategy, i.e., model-agnostic
meta-learning (MAML), which is able to capture the invariant representation of
the data samples or signals. In this paper, inspired by the special structure
of the signal, i.e., the real and imaginary parts contained in practical
time-series signals, we propose a Complex-valued Attentional MEta Learner
(CAMEL) for the problem of few-shot signal recognition by leveraging attention
and meta-learning in the complex domain. To the best of our knowledge, this is
also the first complex-valued MAML that can find the first-order stationary
points of general nonconvex problems with theoretical convergence guarantees.
Extensive experimental results showcase the superiority of the proposed CAMEL
compared with the state-of-the-art methods.
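The paper's exact complex-valued attention is not reproduced in this summary. As a rough, hedged illustration of how attention can operate in the complex domain, the NumPy sketch below scores query-key pairs with the real part of the Hermitian inner product, so the softmax weights stay real while the values remain complex; the function name and this particular scoring rule are assumptions for illustration, not necessarily CAMEL's formulation.
```python
import numpy as np

def complex_attention(Q, K, V):
    """Toy complex-valued attention (illustrative only).

    Q, K, V: complex arrays of shape (seq_len, d).
    Scores use the real part of the Hermitian inner product Q K^H,
    so the softmax weights are real while the values stay complex.
    """
    d = Q.shape[-1]
    scores = np.real(Q @ K.conj().T) / np.sqrt(d)       # (seq, seq), real
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                   # complex output

# Example: an I/Q signal embedded as a complex-valued sequence
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8)) + 1j * rng.normal(size=(16, 8))
out = complex_attention(x, x, x)
print(out.shape, out.dtype)    # (16, 8) complex128
```
A full CAMEL-style learner would additionally wrap such complex-valued layers in a MAML inner/outer optimization loop over few-shot signal-recognition tasks.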
Related papers
- Semi-Supervised Learning via Swapped Prediction for Communication Signal
Recognition [11.325643693823828]
Training a deep neural network on small datasets with few labels generally leads to overfitting, resulting in degraded performance.
We develop a semi-supervised learning (SSL) method that effectively utilizes a large collection of more readily available unlabeled signal data to improve generalization.
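The swapped-prediction mechanism is only named here; as an assumed, consistency-style illustration, the sketch below lets the model's prediction on one augmented view of an unlabeled signal serve as the soft target for the other view (the pairing rule and all shapes are hypothetical, not the paper's exact objective).
```python
import numpy as np

def swapped_prediction_loss(p1, p2, eps=1e-9):
    """Cross-entropy between each view's prediction and the other view's
    prediction (stop-gradient omitted in this NumPy illustration)."""
    ce_12 = -np.sum(p2 * np.log(p1 + eps), axis=-1)  # view 2 labels view 1
    ce_21 = -np.sum(p1 * np.log(p2 + eps), axis=-1)  # view 1 labels view 2
    return np.mean(ce_12 + ce_21)

# p1, p2: softmax outputs of the same model on two augmentations
rng = np.random.default_rng(0)
softmax = lambda z: np.exp(z) / np.exp(z).sum(-1, keepdims=True)
logits1, logits2 = rng.normal(size=(32, 5)), rng.normal(size=(32, 5))
print(swapped_prediction_loss(softmax(logits1), softmax(logits2)))
```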
arXiv Detail & Related papers (2023-11-14T14:08:55Z) - Multi-task Learning for Radar Signal Characterisation [48.265859815346985]
This paper presents an approach for tackling radar signal classification and characterisation as a multi-task learning (MTL) problem.
We propose the IQ Signal Transformer (IQST) among several reference architectures that allow for simultaneous optimisation of multiple regression and classification tasks.
We demonstrate the performance of our proposed MTL model on a synthetic radar dataset, while also providing a first-of-its-kind benchmark for radar signal characterisation.
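The IQST architecture itself is not detailed in this summary; the sketch below only shows the generic multi-task pattern it builds on, i.e., a shared encoder feeding one classification head and one regression head, with all layer sizes and task dimensions chosen arbitrarily.
```python
import torch
import torch.nn as nn

class MultiTaskSignalNet(nn.Module):
    """Shared encoder with separate classification and regression heads
    (a generic MTL sketch, not the actual IQST)."""
    def __init__(self, in_dim=128, hidden=64, n_classes=8, n_reg=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.cls_head = nn.Linear(hidden, n_classes)   # e.g. signal type
        self.reg_head = nn.Linear(hidden, n_reg)       # e.g. pulse parameters

    def forward(self, x):
        h = self.encoder(x)
        return self.cls_head(h), self.reg_head(h)

model = MultiTaskSignalNet()
x = torch.randn(4, 128)                                # 4 flattened IQ snapshots
logits, params = model(x)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 8, (4,))) \
       + nn.functional.mse_loss(params, torch.randn(4, 3))
loss.backward()                                        # joint multi-task update
```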
arXiv Detail & Related papers (2023-06-19T12:01:28Z) - Decision Forest Based EMG Signal Classification with Low Volume Dataset
Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures with a limited number of samples that generalizes well to a wider audience.
We appeal to a set of more elementary methods, such as the use of random bounds on a signal, but aim to show the power these methods can carry in an online setting.
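As a hedged illustration of the augmentation named in the title, the sketch below enlarges a toy EMG feature set with zero-mean Gaussian noise whose standard deviation is drawn at random for each copy, then trains a scikit-learn random forest as a stand-in decision-forest model; the dataset shapes and noise range are assumptions.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 64))            # 60 toy EMG feature windows
y = rng.integers(0, 6, size=60)          # six gesture classes

def augment(X, y, copies=5, max_std=0.2):
    """Each copy adds zero-mean Gaussian noise with a randomly drawn std."""
    Xs, ys = [X], [y]
    for _ in range(copies):
        std = rng.uniform(0.0, max_std)
        Xs.append(X + rng.normal(scale=std, size=X.shape))
        ys.append(y)
    return np.concatenate(Xs), np.concatenate(ys)

X_aug, y_aug = augment(X, y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)
print(clf.score(X, y))                   # fit quality on the original samples
```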
arXiv Detail & Related papers (2022-06-29T23:22:18Z) - Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue for representing general signals.
The current approach, however, is difficult to scale to a large number of signals or to a large data set.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
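For readers unfamiliar with implicit neural representations, the sketch below fits a small coordinate MLP to a 1-D signal, mapping time t to amplitude x(t); the meta-learning and sparsification steps of the paper are omitted, and all hyperparameters are assumptions.
```python
import math
import torch
import torch.nn as nn

# A tiny implicit neural representation: t -> x(t)
inr = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

t = torch.linspace(0, 1, 256).unsqueeze(-1)           # coordinates
x = torch.sin(2 * math.pi * 4 * t)                    # signal to represent

opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for _ in range(500):                                  # fit the signal
    opt.zero_grad()
    loss = nn.functional.mse_loss(inr(t), x)
    loss.backward()
    opt.step()
print(loss.item())                                    # final reconstruction error
```
A sparse variant would additionally prune or regularize the MLP weights so that each signal is stored with far fewer parameters.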
arXiv Detail & Related papers (2021-10-27T18:02:53Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
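The network's input, low-resolution quantized in-phase/quadrature samples of a sum of sinusoids, can be generated with a few lines of NumPy; the bit width, frequencies, and noise level below are illustrative assumptions, not the paper's exact setup.
```python
import numpy as np

def quantized_iq(freqs, n=64, bits=3, snr_db=20, rng=np.random.default_rng(0)):
    """Sum of complex sinusoids plus noise, uniformly quantized per I/Q rail."""
    t = np.arange(n)
    x = sum(np.exp(2j * np.pi * f * t) for f in freqs)
    x /= np.max(np.abs(x))
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) * 10 ** (-snr_db / 20)
    x = x + noise
    levels = 2 ** bits
    q = lambda r: (np.clip(np.round((r + 1) / 2 * (levels - 1)), 0, levels - 1)
                   / (levels - 1)) * 2 - 1              # uniform quantizer on [-1, 1]
    return q(x.real) + 1j * q(x.imag)

samples = quantized_iq(freqs=[0.11, 0.23], bits=3)      # three-bit I/Q samples
print(samples[:4])
```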
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Modulation Pattern Detection Using Complex Convolutions in Deep Learning [0.0]
Classifying modulation patterns is challenging because noise and channel impairments affect the signals.
We study the implementation and use of complex convolutions in a series of convolutional neural network architectures.
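A complex convolution is commonly implemented with two real-valued convolutions combined according to the rules of complex multiplication; the PyTorch sketch below follows that standard construction, which may differ in detail from the variants studied in the paper (layer sizes are arbitrary).
```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """(W_r + i W_i) * (x_r + i x_i) via two real Conv1d layers."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, k)
        self.conv_i = nn.Conv1d(in_ch, out_ch, k)

    def forward(self, x_r, x_i):
        out_r = self.conv_r(x_r) - self.conv_i(x_i)    # real part
        out_i = self.conv_r(x_i) + self.conv_i(x_r)    # imaginary part
        return out_r, out_i

layer = ComplexConv1d(1, 8, k=5)
i_part, q_part = torch.randn(2, 1, 128), torch.randn(2, 1, 128)
y_r, y_i = layer(i_part, q_part)
print(y_r.shape, y_i.shape)    # torch.Size([2, 8, 124]) each
```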
arXiv Detail & Related papers (2020-10-14T02:43:11Z) - A Compressive Sensing Approach for Federated Learning over Massive MIMO
Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
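The paper's reconstruction algorithm is not described in this summary; as a generic compressive-sensing stand-in, the sketch below compresses a sparse model-update vector with a random Gaussian measurement matrix and recovers it at the server with scikit-learn's orthogonal matching pursuit (the dimensions, sparsity level, and OMP recovery are all assumptions).
```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
d, m, k = 200, 60, 5                      # update dim, measurements, sparsity

g = np.zeros(d)                           # sparse local model update
g[rng.choice(d, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, d)) / np.sqrt(m)  # random measurement matrix
y = A @ g                                 # compressed update sent on the uplink

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
g_hat = omp.coef_                         # server-side reconstruction
print(np.linalg.norm(g - g_hat) / np.linalg.norm(g))   # relative recovery error
```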
arXiv Detail & Related papers (2020-03-18T05:56:27Z) - Multilinear Compressive Learning with Prior Knowledge [106.12874293597754]
The Multilinear Compressive Learning (MCL) framework combines Multilinear Compressive Sensing and Machine Learning into an end-to-end system.
The key idea behind MCL is the assumption that there exists a tensor subspace which can capture the essential features of the signal for the downstream learning task.
In this paper, we propose a novel solution to address these requirements, e.g., how to find those tensor subspaces in which the signals of interest are highly separable.
arXiv Detail & Related papers (2020-02-17T19:06:05Z)