Multi-task Deep Neural Networks for Massive MIMO CSI Feedback
- URL: http://arxiv.org/abs/2204.12442v1
- Date: Mon, 18 Apr 2022 12:43:05 GMT
- Title: Multi-task Deep Neural Networks for Massive MIMO CSI Feedback
- Authors: Boyuan Zhang, Haozhen Li, Xin Liang, Xinyu Gu, Lin Zhang
- Abstract summary: A multi-task learning-based approach is proposed to improve the feasibility of the feedback network.
The experimental results indicate that the proposed multi-task learning approach can achieve comprehensive feedback performance.
- Score: 4.985679007615566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has been widely applied for the channel state information (CSI)
feedback in frequency division duplexing (FDD) massive multiple-input
multiple-output (MIMO) systems. For the typical supervised training of the
feedback model, the requirement for large amounts of task-specific labeled
data can hardly be satisfied, and the huge training cost and storage usage of
the model across multiple scenarios are a hindrance to model application. In this
letter, a multi-task learning-based approach is proposed to improve the
feasibility of the feedback network. An encoder-shared feedback architecture
and the corresponding training scheme are further proposed to facilitate the
implementation of the multi-task learning approach. The experimental results
indicate that the proposed multi-task learning approach can achieve
comprehensive feedback performance with considerable reduction of training cost
and storage usage of the feedback model.
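The encoder-shared idea described in the abstract can be illustrated with a minimal sketch: one encoder at the user side compresses the CSI for every scenario, while each scenario keeps only a small dedicated decoder. This is a hypothetical toy with made-up linear weights and class names (`SharedEncoder`, `ScenarioDecoder`), not the authors' implementation; it only shows how sharing the encoder bounds the per-scenario storage growth.

```python
class SharedEncoder:
    """Compresses a CSI vector to a short codeword (toy linear map)."""
    def __init__(self, in_dim, code_dim):
        # Fixed toy weights; in the paper's setting these would be
        # trained jointly across all scenarios (the multi-task step).
        self.w = [[0.1 * ((i + j) % 3) for j in range(in_dim)]
                  for i in range(code_dim)]

    def encode(self, csi):
        return [sum(wi * x for wi, x in zip(row, csi)) for row in self.w]


class ScenarioDecoder:
    """Scenario-specific decoder reconstructing CSI from the codeword."""
    def __init__(self, code_dim, out_dim, scale):
        self.w = [[scale * ((i + j) % 3) for j in range(code_dim)]
                  for i in range(out_dim)]

    def decode(self, code):
        return [sum(wi * c for wi, c in zip(row, code)) for row in self.w]


# One shared encoder, several per-scenario decoders: storage grows only
# with the small decoders, not with a full model per scenario.
encoder = SharedEncoder(in_dim=8, code_dim=2)
decoders = {
    "indoor": ScenarioDecoder(code_dim=2, out_dim=8, scale=0.5),
    "outdoor": ScenarioDecoder(code_dim=2, out_dim=8, scale=0.7),
}

csi = [float(i) for i in range(8)]
code = encoder.encode(csi)               # codeword fed back on the uplink
recon = decoders["indoor"].decode(code)  # scenario-matched reconstruction
print(len(code), len(recon))             # 2 8
```

In a real system the encoder and decoders would be deep networks trained with the multi-task scheme the letter proposes; the point of the sketch is only the parameter-sharing layout.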
Related papers
- A Wireless Foundation Model for Multi-Task Prediction [50.21098141769079]
We propose a unified foundation model for multi-task prediction in wireless networks that supports diverse prediction intervals.
After training on large-scale datasets, the proposed foundation model demonstrates strong generalization to unseen scenarios and zero-shot performance on new tasks.
arXiv Detail & Related papers (2025-07-08T12:37:55Z)
- MultiMAE Meets Earth Observation: Pre-training Multi-modal Multi-task Masked Autoencoders for Earth Observation Tasks [11.359741665798195]
This paper explores a more flexible multi-modal, multi-task pre-training strategy for Earth Observation (EO) data.
Specifically, we adopt a Multi-modal Multi-task Masked Autoencoder (MultiMAE) that we pre-train by reconstructing diverse input modalities.
Our approach exhibits significant flexibility, handling diverse input configurations without requiring modality-specific pre-trained models.
arXiv Detail & Related papers (2025-05-20T22:24:36Z)
- Hierarchical and Decoupled BEV Perception Learning Framework for Autonomous Driving [52.808273563372126]
This paper proposes a novel hierarchical BEV perception paradigm, aiming to provide a library of fundamental perception modules and user-friendly graphical interface.
We conduct the Pretrain-Finetune strategy to effectively utilize large scale public datasets and streamline development processes.
We also present a Multi-Module Learning (MML) approach, enhancing performance through synergistic and iterative training of multiple models.
arXiv Detail & Related papers (2024-07-17T11:17:20Z)
- TVE: Learning Meta-attribution for Transferable Vision Explainer [76.68234965262761]
We introduce a Transferable Vision Explainer (TVE) that can effectively explain various vision models in downstream tasks.
TVE is realized through a pre-training process on large-scale datasets towards learning the meta-attribution.
This meta-attribution leverages the versatility of generic backbone encoders to comprehensively encode the attribution knowledge for the input instance, which enables TVE to seamlessly transfer to explain various downstream tasks.
arXiv Detail & Related papers (2023-12-23T21:49:23Z)
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512]
Multi-Task Diffusion Model (MTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
For generative planning, we find MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
arXiv Detail & Related papers (2023-05-29T05:20:38Z)
- HiNet: Novel Multi-Scenario & Multi-Task Learning with Hierarchical Information Extraction [50.40732146978222]
Multi-scenario & multi-task learning has been widely applied to many recommendation systems in industrial applications.
We propose a Hierarchical information extraction Network (HiNet) for multi-scenario and multi-task recommendation.
HiNet achieves a new state-of-the-art performance and significantly outperforms existing solutions.
arXiv Detail & Related papers (2023-03-10T17:24:41Z)
- Feature Decomposition for Reducing Negative Transfer: A Novel Multi-task Learning Method for Recommender System [35.165907482126464]
We propose a novel multi-task learning method termed Feature Decomposition Network (FDN).
The key idea of the proposed FDN is reducing the phenomenon of feature redundancy by explicitly decomposing features into task-specific features and task-shared features with carefully designed constraints.
Experimental results show that our proposed FDN can outperform the state-of-the-art (SOTA) methods by a noticeable margin.
arXiv Detail & Related papers (2023-02-10T03:08:37Z)
- Effective Adaptation in Multi-Task Co-Training for Unified Autonomous Driving [103.745551954983]
In this paper, we investigate the transfer performance of various types of self-supervised methods, including MoCo and SimCLR, on three downstream tasks.
We find that their performances are sub-optimal or even lag far behind the single-task baseline.
We propose a simple yet effective pretrain-adapt-finetune paradigm for general multi-task training.
arXiv Detail & Related papers (2022-09-19T12:15:31Z)
- Interactive Machine Learning for Image Captioning [8.584932159968002]
We propose an approach for interactive learning for an image captioning model.
We envision a system that exploits human feedback as well as possible by multiplying the feedback using data augmentation methods.
arXiv Detail & Related papers (2022-02-28T09:02:32Z)
- Multi-task MR Imaging with Iterative Teacher Forcing and Re-weighted Deep Learning [14.62432715967572]
We develop a re-weighted multi-task deep learning method to learn prior knowledge from the existing big dataset.
We then utilize them to assist simultaneous MR reconstruction and segmentation from the under-sampled k-space data.
Results show that the proposed method possesses encouraging capabilities for simultaneous and accurate MR reconstruction and segmentation.
arXiv Detail & Related papers (2020-11-27T09:08:05Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon the recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z)
- Deep Learning for Massive MIMO Channel State Acquisition and Feedback [7.111650988432555]
Massive multiple-input multiple-output (MIMO) systems are a main enabler of the excessive throughput requirements in 5G and future generation wireless networks.
They require accurate and timely channel state information (CSI), which is acquired by a training process.
This paper provides an overview of how neural networks (NNs) can be used in the training process to improve the performance by reducing the CSI acquisition overhead and to reduce complexity.
arXiv Detail & Related papers (2020-02-17T13:16:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.