Complex-Valued Neural Networks for Data-Driven Signal Processing and
Signal Understanding
- URL: http://arxiv.org/abs/2309.07948v1
- Date: Thu, 14 Sep 2023 16:55:28 GMT
- Title: Complex-Valued Neural Networks for Data-Driven Signal Processing and
Signal Understanding
- Authors: Josiah W. Smith
- Abstract summary: Complex-valued neural networks have emerged boasting superior modeling performance for many tasks across the signal processing, sensing, and communications arenas.
This paper overviews a package built on PyTorch with the intention of implementing light-weight interfaces for common complex-valued neural network operations and architectures.
- Score: 1.2691047660244337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complex-valued neural networks have emerged boasting superior modeling
performance for many tasks across the signal processing, sensing, and
communications arenas. However, developing complex-valued models currently
demands implementing basic deep learning operations from scratch, such as
linear or convolution layers, as modern deep learning frameworks like PyTorch
and TensorFlow do not adequately support complex-valued neural networks. This paper
overviews a package built on PyTorch with the intention of implementing
light-weight interfaces for common complex-valued neural network operations and
architectures. Similar to natural language understanding (NLU), which has
recently made tremendous leaps towards text-based intelligence, RF Signal
Understanding (RFSU) is a promising field extending conventional signal
processing algorithms using a hybrid approach of signal mechanics-based insight
with data-driven modeling power. Notably, we include efficient implementations
for linear, convolution, and attention modules in addition to activation
functions and normalization layers such as batchnorm and layernorm.
Additionally, we include efficient implementations of manifold-based
complex-valued neural network layers that have shown tremendous promise but
remain relatively unexplored in many research contexts. Although there is an
emphasis on 1-D data tensors, due to a focus on signal processing,
communications, and radar data, many of the routines are implemented for 2-D
and 3-D data as well. In summary, the proposed package offers a useful set
of tools and documentation for data-driven signal processing research and
practical implementation.
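The core arithmetic such a package must wrap can be sketched without PyTorch. A minimal illustration in pure Python follows; the class name and constructor here are hypothetical, not the package's actual API:

```python
# Minimal sketch of a complex-valued linear (dense) layer, using
# Python's built-in complex type.  A complex matrix-vector product
# decomposes into four real ones:
#   (A + iB)(x + iy) = (Ax - By) + i(Ay + Bx),
# which is how real-valued frameworks typically emulate it.

class ComplexLinear:
    def __init__(self, weight, bias):
        # weight: rows of complex coefficients; bias: complex vector
        self.weight = weight
        self.bias = bias

    def __call__(self, x):
        # y_i = sum_j W_ij * x_j + b_i, in complex arithmetic
        return [
            sum(w * xj for w, xj in zip(row, x)) + b
            for row, b in zip(self.weight, self.bias)
        ]

# A 2-input, 1-output layer:
layer = ComplexLinear(weight=[[1 + 1j, 2 - 1j]], bias=[0.5 + 0j])
print(layer([1 + 0j, 0 + 1j]))  # → [(2.5+3j)]
```

A practical library would back this with real tensors and autograd support, which is precisely the gap the paper's package addresses.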
Related papers
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
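The idea of mapping a 1-D signal to a 2-D time-frequency tensor via the CWT can be sketched naively; the Ricker wavelet and direct-correlation approach below are illustrative assumptions, not the paper's actual TC stream:

```python
import math

def ricker(t, s):
    # Ricker ("Mexican hat") wavelet at scale s (normalization omitted)
    u = t / s
    return (1.0 - u * u) * math.exp(-u * u / 2.0)

def cwt(signal, scales, half_width=8):
    # Naive CWT by direct correlation: one row per scale, one column
    # per time shift, yielding a 2-D scale-by-time tensor.
    tensor = []
    for s in scales:
        support = list(range(-half_width, half_width + 1))
        kernel = [ricker(t, s) for t in support]
        row = []
        for n in range(len(signal)):
            acc = 0.0
            for k, t in zip(kernel, support):
                if 0 <= n + t < len(signal):  # zero-pad at the edges
                    acc += signal[n + t] * k
            row.append(acc)
        tensor.append(row)
    return tensor

sig = [math.sin(2 * math.pi * n / 8) for n in range(32)]
tensor = cwt(sig, scales=[1, 2, 4])
print(len(tensor), len(tensor[0]))  # → 3 32
```

The resulting scale-by-time array can then be fed to 2-D convolutional layers like an image.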
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous spiking neural network processor [2.9175555050594975]
We present a brain-inspired platform for prototyping real-time event-based Spiking Neural Networks (SNNs).
The system proposed supports the direct emulation of dynamic and realistic neural processing phenomena such as short-term plasticity, NMDA gating, AMPA diffusion, homeostasis, spike frequency adaptation, conductance-based dendritic compartments and spike transmission delays.
The flexibility to emulate different biologically plausible neural networks, and the chip's ability to monitor both population and single-neuron signals in real time, allow researchers to develop and validate complex models of neural processing for both basic research and edge-computing applications.
arXiv Detail & Related papers (2023-10-01T03:48:16Z)
- OpenHLS: High-Level Synthesis for Low-Latency Deep Neural Networks for Experimental Science [0.6571063542099524]
We present an open source, lightweight, compiler framework for translating high-level representations of deep neural networks to low-level representations.
We show OpenHLS is able to produce an implementation of the network with a throughput of 4.8 $\mu$s/sample, approximately a 4$\times$ improvement over the existing implementation.
arXiv Detail & Related papers (2023-02-13T23:25:55Z)
- Learning with Multigraph Convolutional Filters [153.20329791008095]
We introduce multigraph convolutional neural networks (MGNNs) as stacked and layered structures where information is processed according to an MSP model.
We also develop a procedure for tractable computation of filter coefficients in the MGNNs and a low cost method to reduce the dimensionality of the information transferred between layers.
arXiv Detail & Related papers (2022-10-28T17:00:50Z)
- An intertwined neural network model for EEG classification in brain-computer interfaces [0.6696153817334769]
The brain-computer interface (BCI) is a non-stimulatory, direct, and occasionally bidirectional communication link between the brain and a computer or an external device.
We present a deep neural network architecture specifically engineered to provide state-of-the-art performance in multiclass motor imagery classification.
arXiv Detail & Related papers (2022-08-04T09:00:34Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Multi-task Learning Approach for Modulation and Wireless Signal Classification for 5G and Beyond: Edge Deployment via Model Compression [1.218340575383456]
Future communication networks must contend with scarce spectrum to accommodate the growth of heterogeneous wireless devices.
We exploit the potential of deep neural networks based multi-task learning framework to simultaneously learn modulation and signal classification tasks.
We provide a comprehensive heterogeneous wireless signals dataset for public use.
arXiv Detail & Related papers (2022-02-26T14:51:02Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Supervised Learning with First-to-Spike Decoding in Multilayer Spiking Neural Networks [0.0]
We propose a new supervised learning method that can train multilayer spiking neural networks to solve classification problems.
The proposed learning rule supports multiple spikes fired by hidden neurons, and yet is stable by relying on first-spike responses generated by a deterministic output layer.
We also explore several distinct spike-based encoding strategies in order to form compact representations of input data.
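The first-to-spike decoding idea itself can be sketched in a few lines; the toy deterministic integrate-and-fire readout below is an illustrative assumption, not the paper's learning rule:

```python
def first_to_spike_class(currents, threshold=1.0, max_steps=50):
    # Deterministic non-leaky integrate-and-fire output layer: each
    # output neuron integrates its (constant) input current, and the
    # predicted class is whichever neuron crosses threshold first.
    v = [0.0] * len(currents)
    for t in range(max_steps):
        for i, c in enumerate(currents):
            v[i] += c
            if v[i] >= threshold:
                return i, t   # class index, decision time step
    return None, max_steps    # no neuron spiked in time

print(first_to_spike_class([0.1, 0.3, 0.2]))  # → (1, 3)
```

Because the decision is made at the first output spike, inference latency shrinks as the correct class becomes more confident.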
arXiv Detail & Related papers (2020-08-16T15:34:48Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.