DeepRx MIMO: Convolutional MIMO Detection with Learned Multiplicative Transformations
- URL: http://arxiv.org/abs/2010.16283v1
- Date: Fri, 30 Oct 2020 14:11:40 GMT
- Title: DeepRx MIMO: Convolutional MIMO Detection with Learned Multiplicative Transformations
- Authors: Dani Korpi, Mikko Honkala, Janne M.J. Huttunen, Vesa Starck
- Abstract summary: We present a deep learning-based MIMO receiver architecture that consists of a ResNet-based convolutional neural network, also known as DeepRx, combined with a so-called transformation layer, all trained together.
To the best of our knowledge, these are some of the first results showing such high performance for a fully learned MIMO receiver.
- Score: 7.775752249659354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep learning has been proposed as a potential technique for
improving the physical layer performance of radio receivers. Despite the large
number of encouraging results, most works have not considered spatial
multiplexing in the context of multiple-input and multiple-output (MIMO)
receivers. In this paper, we present a deep learning-based MIMO receiver
architecture that consists of a ResNet-based convolutional neural network, also
known as DeepRx, combined with a so-called transformation layer, all trained
together. We propose two novel alternatives for the transformation layer: a
maximal ratio combining-based transformation and a fully learned
transformation. The former relies more on expert knowledge, while the latter
utilizes learned multiplicative layers. Both proposed transformation layers are
shown to clearly outperform the conventional baseline receiver, especially with
sparse pilot configurations. To the best of our knowledge, these are some of
the first results showing such high performance for a fully learned MIMO
receiver.
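To make the two transformation-layer options concrete, here is a minimal PyTorch sketch of both. The tensor shapes, module names (MRCTransform, MultiplicativeLayer), and the exact form of the learned multiplicative layer are illustrative assumptions, not the paper's implementation; only the MRC combining formula itself is standard.

```python
# Illustrative sketch only: shapes, names, and wiring are assumptions.
import torch
import torch.nn as nn


class MRCTransform(nn.Module):
    """Expert-knowledge option: maximal ratio combining (MRC).

    Collapses the receive-antenna dimension into per-layer symbol
    estimates that the convolutional DeepRx body can then refine.
    """

    def forward(self, y, h):
        # y: (batch, sym, subc, n_rx) complex received resource grid
        # h: (batch, sym, subc, n_rx, n_layers) complex channel estimate
        num = torch.einsum('bsfr,bsfrl->bsfl', y, h.conj())
        den = (h.abs() ** 2).sum(dim=3).clamp_min(1e-12)
        return num / den  # (batch, sym, subc, n_layers)


class MultiplicativeLayer(nn.Module):
    """Fully learned option: one plausible multiplicative layer.

    Takes the elementwise product of two learned projections, an
    interaction that purely additive layers capture poorly.
    """

    def __init__(self, d_in, d_out):
        super().__init__()
        self.proj_a = nn.Linear(d_in, d_out)
        self.proj_b = nn.Linear(d_in, d_out)

    def forward(self, x):
        # x: real-valued features, e.g. stacked Re/Im parts of y and h
        return self.proj_a(x) * self.proj_b(x)
```

Either output would then be stacked as real/imaginary feature channels, fed to the ResNet-based convolutional body, and the whole receiver trained end to end.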
Related papers
- H-DenseFormer: An Efficient Hybrid Densely Connected Transformer for Multimodal Tumor Segmentation [5.999728323822383]
In this paper, we propose a hybrid densely connected network for tumor segmentation, named H-DenseFormer.
Specifically, H-DenseFormer integrates a Transformer-based Multi-path Parallel Embedding (MPE) module that can take an arbitrary number of modalities as input.
The experimental results show that our proposed method outperforms the existing state-of-the-art methods while having lower computational complexity.
arXiv Detail & Related papers (2023-07-04T05:31:09Z)
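The one architectural detail the summary gives is that the MPE module accepts an arbitrary number of modalities. A hypothetical sketch of such a module (its internals below are guesses, not H-DenseFormer's actual design):

```python
# Hypothetical Multi-path Parallel Embedding (MPE): one embedding path
# per input modality, fused by concatenation plus a 1x1x1 convolution.
import torch
import torch.nn as nn


class MPE(nn.Module):
    def __init__(self, num_modalities, embed_dim):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Conv3d(1, embed_dim, kernel_size=3, padding=1)
            for _ in range(num_modalities)
        )
        self.fuse = nn.Conv3d(num_modalities * embed_dim, embed_dim, 1)

    def forward(self, x):
        # x: (batch, num_modalities, D, H, W) multimodal volumes
        feats = [path(x[:, i:i + 1]) for i, path in enumerate(self.paths)]
        return self.fuse(torch.cat(feats, dim=1))
```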
- Characterization of anomalous diffusion through convolutional transformers [0.8984888893275713]
We propose a new transformer based neural network architecture for the characterization of anomalous diffusion.
Our new architecture, the Convolutional Transformer (ConvTransformer), uses a bi-layered convolutional neural network to extract features from our diffusive trajectories.
We show that the ConvTransformer is able to outperform the previous state of the art at determining the underlying diffusive regime in short trajectories.
arXiv Detail & Related papers (2022-10-10T18:53:13Z)
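The summary fixes the overall shape of this model: a two-layer convolutional front end feeding a transformer that classifies the diffusive regime. A minimal sketch of that shape, with all sizes and the pooled classification head assumed:

```python
# Sketch of a ConvTransformer-style trajectory classifier; layer sizes
# and the mean-pooled classification head are assumptions.
import torch
import torch.nn as nn


class ConvTransformer(nn.Module):
    def __init__(self, d_model=64, num_regimes=5):
        super().__init__()
        # Bi-layered convolutional feature extractor over 2-D trajectories.
        self.conv = nn.Sequential(
            nn.Conv1d(2, d_model, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_regimes)

    def forward(self, traj):
        # traj: (batch, 2, T) trajectory coordinates or increments
        tokens = self.conv(traj).transpose(1, 2)  # (batch, T, d_model)
        return self.head(self.encoder(tokens).mean(dim=1))
```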
- Rich CNN-Transformer Feature Aggregation Networks for Super-Resolution [50.10987776141901]
Recent vision transformers along with self-attention have achieved promising results on various computer vision tasks.
We introduce an effective hybrid architecture for super-resolution (SR) tasks, which leverages local features from CNNs and long-range dependencies captured by transformers.
Our proposed method achieves state-of-the-art SR results on numerous benchmark datasets.
arXiv Detail & Related papers (2022-03-15T06:52:25Z)
- TransCMD: Cross-Modal Decoder Equipped with Transformer for RGB-D Salient Object Detection [86.94578023985677]
In this work, we rethink this task from the perspective of global information alignment and transformation.
Specifically, the proposed method (TransCMD) cascades several cross-modal integration units to construct a top-down transformer-based information propagation path.
Experimental results on seven RGB-D SOD benchmark datasets demonstrate that a simple two-stream encoder-decoder framework can surpass the state-of-the-art purely CNN-based methods.
arXiv Detail & Related papers (2021-12-04T15:45:34Z)
- Large Scale Audio Understanding without Transformers/ Convolutions/ BERTs/ Mixers/ Attention/ RNNs or .... [4.594159253008448]
This paper presents a way of doing large scale audio understanding without traditional state of the art neural architectures.
Our approach does not have any convolutions, recurrence, attention, transformers or other approaches such as BERT.
A classification head (a feed-forward layer), similar to the approach in SimCLR, is trained on a learned representation.
arXiv Detail & Related papers (2021-10-07T05:00:26Z)
- Machine Learning-enhanced Receive Processing for MU-MIMO OFDM Systems [15.423422040627331]
Machine learning can be used to improve multi-user multiple-input multiple-output (MU-MIMO) receive processing.
We propose a new strategy which preserves the benefits of a conventional receiver, but enhances specific parts with ML components.
arXiv Detail & Related papers (2021-06-30T14:02:27Z)
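A hedged sketch of the hybrid principle described above: keep a conventional equalizer (here LMMSE) and let a small learned component refine one stage. The summary does not say which parts are ML-enhanced or how, so the residual refiner below is purely illustrative:

```python
# Conventional LMMSE equalizer plus an assumed learned residual refiner.
import torch
import torch.nn as nn


def lmmse_equalize(y, h, noise_var):
    # y: (..., n_rx) received, h: (..., n_rx, n_layers) channel estimate
    hh = h.conj().transpose(-2, -1) @ h
    eye = torch.eye(hh.shape[-1], dtype=hh.dtype, device=hh.device)
    w = torch.linalg.solve(hh + noise_var * eye, h.conj().transpose(-2, -1))
    return (w @ y.unsqueeze(-1)).squeeze(-1)  # (..., n_layers)


class Refiner(nn.Module):
    """Small CNN correcting the equalized grid (illustrative ML component)."""

    def __init__(self, n_layers):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * n_layers, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2 * n_layers, 3, padding=1),
        )

    def forward(self, x_eq):
        # x_eq: (batch, sym, subc, n_layers) complex equalized symbols
        z = torch.cat([x_eq.real, x_eq.imag], dim=-1).permute(0, 3, 1, 2)
        z = z + self.net(z)  # learned residual correction
        re, im = z.permute(0, 2, 3, 1).chunk(2, dim=-1)
        return torch.complex(re, im)
```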
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete convolutional recurrent neural network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with a smaller number of trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
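A rough sketch of the over-and-under complete idea, with the recurrent part omitted: the overcomplete branch operates above the input resolution, which keeps effective receptive fields small and preserves fine structure, while the undercomplete branch is the usual downsample-then-upsample path. All details below are assumptions:

```python
# Two-branch over/under-complete sketch (recurrence omitted); assumes
# even input height and width so the branch resolutions line up.
import torch
import torch.nn as nn


class OverUnderNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Overcomplete: upsample first, so each conv sees a smaller
        # effective neighborhood of the original image.
        self.over = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=0.5, mode='bilinear', align_corners=False),
        )
        # Undercomplete: conventional downsample-then-upsample path.
        self.under = nn.Sequential(
            nn.AvgPool2d(2),
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
        )
        self.out = nn.Conv2d(2 * ch, 1, 1)

    def forward(self, x):
        # x: (batch, 1, H, W) undersampled MR image (magnitude only here)
        return self.out(torch.cat([self.over(x), self.under(x)], dim=1))
```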
- Transformers Solve the Limited Receptive Field for Monocular Depth Prediction [82.90445525977904]
We propose TransDepth, an architecture which benefits from both convolutional neural networks and transformers.
This is the first paper to apply transformers to pixel-wise prediction problems involving continuous labels.
arXiv Detail & Related papers (2021-03-22T18:00:13Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver, generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
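A toy illustration of the Neural Diving half of that pipeline: a learned policy proposes joint assignments for binary variables and the best feasible proposal is kept. The real system hands partial assignments to a base solver such as SCIP; the tiny model, featurization, and sampling scheme below are assumptions for illustration only:

```python
# Toy "dive": sample full binary assignments from a learned policy and
# keep the best feasible one for max c@x s.t. A@x <= b, x in {0, 1}.
import torch
import torch.nn as nn


def neural_dive(policy, c, A, b, num_samples=64):
    probs = torch.sigmoid(policy(torch.cat([c, b])))  # per-variable P(x_i=1)
    x = (torch.rand(num_samples, probs.numel()) < probs).float()
    feasible = (x @ A.T <= b).all(dim=1)
    if not feasible.any():
        return None  # a real solver would repair or branch instead
    scores = x @ c
    scores[~feasible] = float('-inf')
    return x[scores.argmax()]


# Usage on a random toy instance with an untrained policy:
n_vars, n_cons = 8, 3
policy = nn.Linear(n_vars + n_cons, n_vars)
c, A, b = torch.rand(n_vars), torch.rand(n_cons, n_vars), torch.rand(n_cons) * 3
print(neural_dive(policy, c, A, b))
```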
- DO-Conv: Depthwise Over-parameterized Convolutional Layer [66.46704754669169]
We propose to augment a convolutional layer with an additional depthwise convolution, where each input channel is convolved with a different 2D kernel.
We show with extensive experiments that the mere replacement of conventional convolutional layers with DO-Conv layers boosts the performance of CNNs.
arXiv Detail & Related papers (2020-06-22T06:57:10Z)
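The mechanism is concrete enough for a short sketch: during training, a depthwise convolution (private 2D kernels per input channel) is composed with a pointwise convolution; because the second stage is 1x1, the pair folds into a single k-by-k convolution at inference, so the extra parameters cost nothing at deployment. The sketch follows the summary above, not the authors' reference code:

```python
# DO-Conv-style over-parameterization: depthwise k x k kernels per input
# channel, mixed by a 1x1 conv; foldable into one k x k conv at inference.
import torch
import torch.nn as nn


class DOConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, d_mul=4):
        super().__init__()
        # groups=in_ch: each input channel gets d_mul private 2-D kernels.
        self.depthwise = nn.Conv2d(in_ch, in_ch * d_mul, k,
                                   padding=k // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch * d_mul, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```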
- Learning to map between ferns with differentiable binary embedding networks [4.827284036182784]
We present a novel concept that enables the application of differentiable random ferns in end-to-end networks.
It can then be used as a multiplication-free alternative to convolutional layers in deep network architectures.
arXiv Detail & Related papers (2020-05-26T08:13:23Z)
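A speculative sketch of the core ingredient: a fern is a small set of binary feature comparisons, and making the comparisons soft yields a differentiable distribution over its 2^depth leaves. Hardening the comparisons at inference turns the layer into a pure table lookup, hence multiplication-free. This is a generic soft-fern construction, not the paper's:

```python
# Generic differentiable fern: soft binary tests define leaf
# probabilities; output is the probability-weighted leaf values.
import torch
import torch.nn as nn


class SoftFern(nn.Module):
    def __init__(self, in_dim, depth=4, out_dim=8):
        super().__init__()
        # Frozen random pairs of feature indices to compare.
        self.register_buffer('idx', torch.randint(in_dim, (2, depth)))
        # Bit pattern of every leaf, precomputed: (2**depth, depth).
        leaves = torch.arange(2 ** depth)[:, None]
        self.register_buffer('bits', (leaves // 2 ** torch.arange(depth) % 2).float())
        self.leaf_values = nn.Parameter(torch.randn(2 ** depth, out_dim))

    def forward(self, x):
        # x: (batch, in_dim); soft outcome of each comparison in (0, 1).
        t = torch.sigmoid(x[:, self.idx[0]] - x[:, self.idx[1]])
        # Leaf probability: product over tests of t or (1 - t).
        p = (self.bits * t[:, None] + (1 - self.bits) * (1 - t[:, None])).prod(-1)
        return p @ self.leaf_values  # (batch, out_dim)
```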