Deep HyperNetwork-Based MIMO Detection
- URL: http://arxiv.org/abs/2002.02750v2
- Date: Mon, 10 Feb 2020 07:45:45 GMT
- Title: Deep HyperNetwork-Based MIMO Detection
- Authors: Mathieu Goutay, Fayçal Ait Aoudia, Jakob Hoydis
- Abstract summary: Conventional algorithms are either too complex to be practical or suffer from poor performance.
Recent approaches have tried to address these challenges by implementing the detector as a deep neural network.
In this work, we address both issues by training an additional neural network (NN), referred to as the hypernetwork, which takes as input the channel matrix and generates the weights of the NN-based detector.
- Score: 10.433286163090179
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimal symbol detection for multiple-input multiple-output (MIMO) systems is
known to be an NP-hard problem. Conventional heuristic algorithms are either
too complex to be practical or suffer from poor performance. Recently, several
approaches have tried to address these challenges by implementing the detector as a
deep neural network. However, they either still achieve unsatisfactory
performance on practical spatially correlated channels, or are computationally
demanding since they require retraining for each channel realization. In this
work, we address both issues by training an additional neural network (NN),
referred to as the hypernetwork, which takes as input the channel matrix and
generates the weights of the NN-based detector. Results show that the
proposed approach achieves near state-of-the-art performance without the need
for retraining.
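To make the idea concrete, here is a minimal PyTorch sketch of a hypernetwork that maps a (flattened) channel matrix to the weights of a small NN-based detector. The two-layer detector acting on the received vector, the layer sizes, and the parameter packing are illustrative assumptions, not the architecture used in the paper.

```python
import math
import torch
import torch.nn as nn

class HyperNetwork(nn.Module):
    """Generates the weights of a small detector MLP from the channel matrix."""
    def __init__(self, n_rx, n_tx, det_hidden=64):
        super().__init__()
        det_in, det_out = 2 * n_rx, 2 * n_tx  # real/imag stacking of y and of the symbol estimate
        n_params = det_in * det_hidden + det_hidden + det_hidden * det_out + det_out
        self.net = nn.Sequential(
            nn.Linear(2 * n_rx * n_tx, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, h_flat):
        # h_flat: flattened real/imag channel matrix of shape (2 * n_rx * n_tx,)
        return self.net(h_flat)

def detect(y, params, n_rx, n_tx, hidden=64):
    """Applies the detector whose weights were produced by the hypernetwork."""
    shapes = [(hidden, 2 * n_rx), (hidden,), (2 * n_tx, hidden), (2 * n_tx,)]
    chunks, i = [], 0
    for s in shapes:
        n = math.prod(s)
        chunks.append(params[i:i + n].view(*s))
        i += n
    w1, b1, w2, b2 = chunks
    x = torch.relu(y @ w1.t() + b1)
    return x @ w2.t() + b2             # unconstrained estimate of the transmitted symbols

# Usage: one channel realization, a batch of received vectors.
n_rx, n_tx = 8, 4
hyper = HyperNetwork(n_rx, n_tx)
h = torch.randn(2 * n_rx * n_tx)       # flattened (real/imag) channel matrix
params = hyper(h)                      # detector weights for this channel
y = torch.randn(16, 2 * n_rx)          # 16 received vectors
x_hat = detect(y, params, n_rx, n_tx)
print(x_hat.shape)                     # torch.Size([16, 8])
```

The abstract's point is that, once the hypernetwork has been trained, a single forward pass yields detector weights for each new channel realization, so no per-channel retraining is required.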
Related papers
- NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions [2.7086888205833968]
Field-Programmable Gate Array (FPGA) accelerators have proven successful in handling latency- and resource-critical deep neural network (DNN) inference tasks.
We propose relaxing the boundaries of neurons and mapping entire sub-networks to a single LUT.
We validate our proposed method on a known latency-critical task, jet substructure tagging, and on the classical computer vision task, digit classification using MNIST.
arXiv Detail & Related papers (2024-02-29T16:10:21Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
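As a side note on the QA-IBP entry above, the sketch below shows standard (real-valued) interval bound propagation through an affine layer and a ReLU, which is the building block the method's name refers to. It is a generic illustration only; the quantization-aware handling of weights and activations introduced in that paper is not reproduced here.

```python
import torch

def ibp_affine(mu, rad, W, b):
    # Propagate an axis-aligned box (center mu, radius rad) through y = x @ W.T + b.
    return mu @ W.t() + b, rad @ W.abs().t()

# Bound the logits of a tiny two-layer ReLU net under an L_inf perturbation of size eps.
torch.manual_seed(0)
W1, b1 = torch.randn(32, 10), torch.zeros(32)
W2, b2 = torch.randn(3, 32), torch.zeros(3)
x, eps = torch.randn(1, 10), 0.05

mu, rad = ibp_affine(x, eps * torch.ones_like(x), W1, b1)
lo, hi = torch.relu(mu - rad), torch.relu(mu + rad)   # ReLU is monotone: bound the endpoints
mu, rad = (lo + hi) / 2, (hi - lo) / 2
mu, rad = ibp_affine(mu, rad, W2, b2)
logit_lower, logit_upper = mu - rad, mu + rad         # certified bounds on every logit
print(logit_lower, logit_upper)
```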
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Signal Detection in MIMO Systems with Hardware Imperfections: Message Passing on Neural Networks [101.59367762974371]
In this paper, we investigate signal detection in multiple-input-multiple-output (MIMO) communication systems with hardware impairments.
It is difficult to train a deep neural network (DNN) with limited pilot signals, hindering its practical applications.
We design an efficient message passing based Bayesian signal detector, leveraging the unitary approximate message passing (UAMP) algorithm.
arXiv Detail & Related papers (2022-10-08T04:32:58Z)
- Navigating Local Minima in Quantized Spiking Neural Networks [3.1351527202068445]
Spiking and Quantized Neural Networks (NNs) are becoming exceedingly important for hyper-efficient implementations of Deep Learning (DL) algorithms.
These networks face challenges when trained using error backpropagation, due to the absence of gradient signals when applying hard thresholds.
This paper presents a systematic evaluation of a cosine-annealed LR schedule coupled with weight-independent adaptive moment estimation.
arXiv Detail & Related papers (2022-02-15T06:42:25Z)
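To illustrate the training setup named in the entry above, here is a minimal PyTorch sketch of a cosine-annealed learning-rate schedule coupled with adaptive moment estimation. Adam and a toy dense network stand in as assumptions; that paper's quantized spiking networks and its weight-independent optimizer variant are not reproduced.

```python
import torch
import torch.nn as nn

# Toy stand-in model and data (the cited paper trains quantized spiking networks instead).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))

num_epochs = 30
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # adaptive moment estimation
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(num_epochs):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()   # decay the learning rate along a cosine curve, once per epoch
```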
- Training Quantized Deep Neural Networks via Cooperative Coevolution [27.967480639403796]
We propose a new method for quantizing deep neural networks (DNNs).
Under the framework of cooperative coevolution, we use the estimation of distribution algorithm to search for the low-bits weights.
Experiments show that our method can train a 4-bit ResNet-20 on the CIFAR-10 dataset without sacrificing accuracy.
arXiv Detail & Related papers (2021-12-23T09:13:13Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- Robust MIMO Detection using Hypernetworks with Learned Regularizers [28.917679125825]
We propose a method that tries to strike a balance between symbol error rate (SER) performance and generality of channels.
Our method is based on hypernetworks that generate the parameters of a neural network-based detector that works well on a specific channel.
arXiv Detail & Related papers (2021-10-13T22:07:13Z)
- SpikeMS: Deep Spiking Neural Network for Motion Segmentation [7.491944503744111]
SpikeMS is the first deep encoder-decoder SNN architecture for the real-world large-scale problem of motion segmentation.
We show that SpikeMS is capable of incremental predictions, or predictions from smaller amounts of test data than it is trained on.
arXiv Detail & Related papers (2021-05-13T21:34:55Z)
- Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
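To give a concrete flavor of the ESPN entry above, the sketch below implements a generic iterative magnitude-pruning loop (rediscover a mask, fine-tune, repeat). It is an illustration of iterative mask discovery in its simplest, assumed form, not the ESPN algorithm itself.

```python
import torch
import torch.nn as nn

def magnitude_mask(weight, sparsity):
    # Keep the largest-magnitude entries of `weight`; zero out the rest.
    k = int(weight.numel() * (1.0 - sparsity))
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).float()

# Toy setup: prune a single linear layer over a few prune/fine-tune rounds.
torch.manual_seed(0)
layer = nn.Linear(64, 10)
x, y = torch.randn(512, 64), torch.randint(0, 10, (512,))
loss_fn, rounds, final_sparsity = nn.CrossEntropyLoss(), 5, 0.9

for r in range(1, rounds + 1):
    sparsity = final_sparsity * r / rounds              # gradually increase sparsity
    mask = magnitude_mask(layer.weight.data, sparsity)  # rediscover the mask each round
    layer.weight.data *= mask
    opt = torch.optim.SGD(layer.parameters(), lr=0.1)
    for _ in range(50):                                  # brief fine-tuning with the mask fixed
        opt.zero_grad()
        loss_fn(layer(x), y).backward()
        opt.step()
        layer.weight.data *= mask                        # keep pruned weights at zero
    print(f"round {r}: sparsity={sparsity:.2f}, loss={loss_fn(layer(x), y).item():.3f}")
```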
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.