Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid
Precoding
- URL: http://arxiv.org/abs/2110.12059v2
- Date: Tue, 26 Oct 2021 16:38:27 GMT
- Title: Two-Timescale End-to-End Learning for Channel Acquisition and Hybrid
Precoding
- Authors: Qiyu Hu, Yunlong Cai, Kai Kang, Guanding Yu, Jakob Hoydis, Yonina C.
Eldar
- Abstract summary: We propose an end-to-end deep learning-based joint transceiver design algorithm for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems.
We develop a DNN architecture that maps the received pilots into feedback bits at the receiver, and then further maps the feedback bits into the hybrid precoder at the transmitter.
- Score: 94.40747235081466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose an end-to-end deep learning-based joint transceiver
design algorithm for millimeter wave (mmWave) massive multiple-input
multiple-output (MIMO) systems, which consists of deep neural network
(DNN)-aided pilot training, channel feedback, and hybrid analog-digital (HAD)
precoding. Specifically, we develop a DNN architecture that maps the received
pilots into feedback bits at the receiver, and then further maps the feedback
bits into the hybrid precoder at the transmitter. To reduce the signaling
overhead and channel state information (CSI) mismatch caused by the
transmission delay, a two-timescale DNN composed of a long-term DNN and a
short-term DNN is developed. The analog precoders are designed by the long-term
DNN based on the CSI statistics and updated once in a frame consisting of a
number of time slots. In contrast, the digital precoders are optimized by the
short-term DNN at each time slot based on the estimated low-dimensional
equivalent CSI matrices. A two-timescale training method is also developed for
the proposed DNN with a binary layer. We then analyze the generalization
ability and signaling overhead of the proposed DNN-based algorithm. Simulation
results show that our proposed technique significantly outperforms conventional
schemes in terms of bit-error rate performance with reduced signaling overhead
and shorter pilot sequences.
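The two-timescale split described in the abstract can be illustrated numerically. The following minimal NumPy sketch is an assumption-laden stand-in, not the paper's method: eigenbeam phase extraction replaces the long-term DNN, zero-forcing replaces the short-term DNN, and all dimensions are invented for illustration.

```python
import numpy as np

# Illustrative two-timescale hybrid precoding sketch (NOT the paper's DNNs).
rng = np.random.default_rng(0)
Nt, Nrf, Ns, slots = 64, 4, 2, 10   # antennas, RF chains, streams, slots per frame

# Long-term stage (once per frame): analog precoder from CSI statistics.
# Phase-shifter entries must have unit magnitude (constant-modulus constraint).
H_samples = (rng.standard_normal((100, Ns, Nt))
             + 1j * rng.standard_normal((100, Ns, Nt))) / np.sqrt(2)
R = np.mean([h.conj().T @ h for h in H_samples], axis=0)   # channel covariance
_, V = np.linalg.eigh(R)
F_rf = np.exp(1j * np.angle(V[:, -Nrf:]))                  # phase-only analog precoder

# Short-term stage (each slot): digital precoder from the low-dimensional
# equivalent channel H_eq = H @ F_rf (Ns x Nrf instead of Ns x Nt).
for t in range(slots):
    H = (rng.standard_normal((Ns, Nt))
         + 1j * rng.standard_normal((Ns, Nt))) / np.sqrt(2)
    H_eq = H @ F_rf                                        # estimated per slot
    F_bb = np.linalg.pinv(H_eq)[:, :Ns]                    # zero-forcing digital precoder
    F = F_rf @ F_bb
    F *= np.sqrt(Ns) / np.linalg.norm(F, 'fro')            # transmit power normalization
```

The point of the split is visible in the shapes: the per-slot stage only ever touches the small Ns x Nrf equivalent channel, which is what reduces feedback overhead relative to feeding back the full Ns x Nt CSI every slot.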
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- DCP: Learning Accelerator Dataflow for Neural Network via Propagation [52.06154296196845]
This work proposes an efficient data-centric approach, named Dataflow Code Propagation (DCP), to automatically find the optimal dataflow for DNN layers in seconds without human effort.
DCP learns a neural predictor to efficiently update the dataflow codes towards the desired gradient directions to minimize various optimization objectives.
For example, without using additional training data, DCP surpasses the GAMMA method that performs a full search using thousands of samples.
arXiv Detail & Related papers (2024-10-09T05:16:44Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- Attention-based Feature Compression for CNN Inference Offloading in Edge Computing [93.67044879636093]
This paper studies the computational offloading of CNN inference in device-edge co-inference systems.
We propose a novel autoencoder-based CNN architecture (AECNN) for effective feature extraction at the end device.
Experiments show that AECNN can compress the intermediate data by more than 256x with only about 4% accuracy loss.
arXiv Detail & Related papers (2022-11-24T18:10:01Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- DNN Training Acceleration via Exploring GPGPU Friendly Sparsity [16.406482603838157]
We propose Approximate Random Dropout, which replaces the conventional random dropout of neurons and synapses with regular, online-generated row-based or tile-based dropout patterns.
We then develop an SGD-based search algorithm that produces the distribution of row-based or tile-based dropout patterns to compensate for the potential accuracy loss.
We also propose the sensitivity-aware dropout method to dynamically drop the input feature maps based on their sensitivity so as to achieve greater forward and backward training acceleration.
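The structured-dropout idea summarized above can be sketched in a few lines; this is a hedged illustration of row-based masking only (the tile size, rate, and rescaling convention are assumptions, and the paper's online pattern search is not shown):

```python
import numpy as np

rng = np.random.default_rng(1)

def row_dropout(x, p=0.25):
    """Zero entire rows with probability p, rescaling survivors by 1/(1-p).

    Dropping whole rows (rather than individual elements) yields regular,
    GPGPU-friendly memory access patterns, which is the acceleration idea.
    """
    keep = rng.random(x.shape[0]) >= p        # one Bernoulli decision per row
    return x * keep[:, None] / (1.0 - p)

x = rng.standard_normal((8, 16))
y = row_dropout(x)
# Each row of y is either exactly zero or x's row scaled by 1/(1-p).
```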
arXiv Detail & Related papers (2022-03-11T01:32:03Z)
- TCTN: A 3D-Temporal Convolutional Transformer Network for Spatiotemporal Predictive Learning [1.952097552284465]
We propose an algorithm named 3D-temporal convolutional transformer (TCTN), where a transformer-based encoder with temporal convolutional layers is employed to capture short-term and long-term dependencies.
Our proposed algorithm is easy to implement and trains much faster than RNN-based methods thanks to the parallel mechanism of the Transformer.
arXiv Detail & Related papers (2021-12-02T10:05:01Z)
- Secure Precoding in MIMO-NOMA: A Deep Learning Approach [11.44224857047629]
A novel signaling design for secure transmission over the two-user multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) channel using deep neural networks (DNNs) is proposed.
The proposed DNN linearly precodes each user's signal before superimposing them and achieves near-optimal performance with significantly lower run time.
arXiv Detail & Related papers (2021-10-14T02:15:29Z)
- Estimating Traffic Speeds using Probe Data: A Deep Neural Network Approach [1.5469452301122177]
This paper presents a dedicated Deep Neural Network architecture that reconstructs space-time traffic speeds on freeways given sparse data.
A large set of empirical Floating-Car Data (FCD) collected on German freeway A9 during two months is utilized.
The results show that the DNN is able to apply learned patterns and reconstructs both moving and stationary congested traffic with high accuracy.
arXiv Detail & Related papers (2021-04-19T23:32:12Z)
- DCT-SNN: Using DCT to Distribute Spatial Information over Time for Learning Low-Latency Spiking Neural Networks [7.876001630578417]
Spiking Neural Networks (SNNs) offer a promising alternative to traditional deep learning frameworks.
SNNs suffer from high inference latency which is a major bottleneck to their deployment.
We propose a scalable time-based encoding scheme that utilizes the Discrete Cosine Transform (DCT) to reduce the number of timesteps required for inference.
arXiv Detail & Related papers (2020-10-05T05:55:34Z)
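The DCT-based temporal encoding summarized in the DCT-SNN entry above can be sketched as follows; this is an illustrative assumption (a hand-rolled orthonormal DCT-II on a toy 1D signal, with made-up sizes), not the paper's encoder:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis: rows are cosine basis vectors."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C

N, T = 8, 4                       # signal length, timesteps (T < N: truncation)
C = dct_matrix(N)
x = np.sin(np.linspace(0, np.pi, N))
coeffs = C @ x                    # DCT coefficients, energy-compacted
frames = [coeffs[t] * C[t] for t in range(T)]  # per-timestep input frames
x_hat = np.sum(frames, axis=0)    # partial reconstruction from the first T steps
```

Because the DCT compacts energy into low-frequency coefficients, the earliest timesteps already carry most of the signal, which is why fewer timesteps can suffice for inference.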