SplitAMC: Split Learning for Robust Automatic Modulation Classification
- URL: http://arxiv.org/abs/2304.12200v1
- Date: Mon, 17 Apr 2023 12:15:59 GMT
- Title: SplitAMC: Split Learning for Robust Automatic Modulation Classification
- Authors: Jihoon Park, Seungeun Oh, Seong-Lyun Kim
- Abstract summary: We develop a novel AMC method based on a split learning (SL) framework, coined SplitAMC, that can achieve high accuracy even in poor channel conditions.
Numerical evaluations validate that SplitAMC outperforms CentAMC and FedeAMC in terms of accuracy for all SNRs as well as latency.
- Score: 5.31117862338528
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic modulation classification (AMC) is a technology that identifies a
modulation scheme without prior signal information and plays a vital role in
various applications, including cognitive radio and link adaptation. With the
development of deep learning (DL), DL-based AMC methods have emerged, while
most of them focus on reducing computational complexity in a centralized
structure. This centralized learning-based AMC (CentAMC) violates data privacy
in the aspect of direct transmission of client-side raw data. Federated
learning-based AMC (FedeAMC) can bypass this issue by exchanging model
parameters, but causes large resultant latency and client-side computational
load. Moreover, both CentAMC and FedeAMC are vulnerable to large-scale noise
occurring in the wireless channel between the client and the server. To this end,
we develop a novel AMC method based on a split learning (SL) framework, coined
SplitAMC, that can achieve high accuracy even in poor channel conditions, while
guaranteeing data privacy and low latency. In SplitAMC, each client can prevent
data privacy leakage by exchanging smashed data and its gradient instead of raw
data, and gains robustness to noise thanks to the high scale of smashed
data. Numerical evaluations validate that SplitAMC outperforms CentAMC and
FedeAMC in terms of accuracy for all SNRs as well as latency.
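The exchange the abstract describes can be illustrated with a minimal split-learning round. This is a toy NumPy sketch under assumed layer sizes and a single-linear-layer split, not SplitAMC's actual networks: the client forwards the raw signal only up to the cut layer, ships the smashed data to the server, and receives back just the gradient of the smashed data to finish backpropagation locally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical split: client-side front layer (16 -> 8) and
# server-side back layer (8 -> 4 modulation classes).
W_client = rng.normal(size=(16, 8)) * 0.1
W_server = rng.normal(size=(8, 4)) * 0.1

x = rng.normal(size=(1, 16))   # raw signal sample: never leaves the client
y = 2                          # true modulation class index

# --- Client forward: only the smashed data crosses the channel ---
smashed = np.tanh(x @ W_client)            # shape (1, 8)

# --- Server forward: loss on the smashed data ---
logits = smashed @ W_server
p = np.exp(logits - logits.max())
p /= p.sum()
loss = -np.log(p[0, y])                    # softmax cross-entropy

# --- Server backward: only the smashed-data gradient is sent back ---
dlogits = p.copy()
dlogits[0, y] -= 1.0
dW_server = smashed.T @ dlogits            # server updates its own layers
dsmashed = dlogits @ W_server.T            # shape (1, 8): crosses the channel

# --- Client backward: finish backprop locally through the tanh cut layer ---
dW_client = x.T @ (dsmashed * (1.0 - smashed ** 2))

lr = 0.1
W_client -= lr * dW_client
W_server -= lr * dW_server
```

Only `smashed` and `dsmashed` traverse the wireless channel; the raw signal `x` and the client-side weights stay on the device, which is the privacy and latency argument the abstract makes against CentAMC and FedeAMC.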
Related papers
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
We propose a Parameter-Efficient Federated Anomaly Detection framework, named PeFAD, to address increasing privacy concerns.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - FedADMM-InSa: An Inexact and Self-Adaptive ADMM for Federated Learning [1.802525429431034]
We propose an inexact and self-adaptive FedADMM algorithm, termed FedADMM-InSa.
The convergence of the resulting inexact ADMM is proved under the assumption of strongly convex loss functions.
Our proposed algorithm can reduce the clients' local computational load significantly and also accelerate the learning process compared to the vanilla FedADMM.
arXiv Detail & Related papers (2024-02-21T18:19:20Z) - AMC-Net: An Effective Network for Automatic Modulation Classification [22.871024969842335]
We propose a novel AMC-Net that improves recognition by denoising the input signal in the frequency domain while performing multi-scale and effective feature extraction.
Experiments on two representative datasets demonstrate that our model performs better in efficiency and effectiveness than the most current methods.
arXiv Detail & Related papers (2023-04-02T04:26:30Z) - MAPS: A Noise-Robust Progressive Learning Approach for Source-Free
Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z) - Model-based Deep Learning Receiver Design for Rate-Splitting Multiple
Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL outperforms by a significant margin the SIC receiver with imperfect CSIR.
arXiv Detail & Related papers (2022-05-02T12:23:55Z) - Stochastic Coded Federated Learning with Convergence and Privacy
Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a stochastic coded federated learning framework (SCFL) to mitigate the straggler issue.
We characterize the privacy guarantee by the mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z) - ACDC: Online Unsupervised Cross-Domain Adaptation [15.72925931271688]
We propose ACDC, an adversarial unsupervised domain adaptation framework.
ACDC encapsulates three modules into a single model: A denoising autoencoder that extracts features, an adversarial module that performs domain conversion, and an estimator that learns the source stream and predicts the target stream.
Our experimental results under the prequential test-then-train protocol indicate an improvement in target accuracy over the baseline methods, achieving more than a 10% increase in some cases.
arXiv Detail & Related papers (2021-10-04T11:08:32Z) - Federated Noisy Client Learning [105.00756772827066]
Federated learning (FL) collaboratively aggregates a shared global model depending on multiple local clients.
Standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model.
We propose Federated Noisy Client Learning (Fed-NCL), which is a plug-and-play algorithm and contains two main components.
arXiv Detail & Related papers (2021-06-24T11:09:17Z) - Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) features desirable properties such as reduction of communication overhead and preservation of data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowing wireless channel state information and statistical characteristics of clients.
arXiv Detail & Related papers (2020-07-05T12:32:32Z) - Harnessing Wireless Channels for Scalable and Privacy-Preserving
Federated Learning [56.94644428312295]
Wireless connectivity is instrumental in enabling federated learning (FL).
Channel randomness perturbs each worker's model update, while updates from multiple workers incur significant interference under limited bandwidth.
In A-FADMM, all workers upload their model updates to the parameter server using a single channel via analog transmissions.
This not only saves communication bandwidth, but also hides each worker's exact model update trajectory from any eavesdropper.
arXiv Detail & Related papers (2020-07-03T16:31:15Z) - Efficient Federated Learning over Multiple Access Channel with
Differential Privacy Constraints [9.251773744318118]
We study the problem of federated learning (FL) through digital communication between clients and a parameter server (PS) over a multiple access channel (MAC).
We propose a novel scheme in which distributed digital stochastic gradient descent (D-DSGD) is performed by each client.
The performance of the scheme is evaluated in terms of the convergence rate and DP level for a given MAC capacity.
arXiv Detail & Related papers (2020-05-15T20:38:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.