Dynamic Online Modulation Recognition using Incremental Learning
- URL: http://arxiv.org/abs/2312.04718v1
- Date: Thu, 7 Dec 2023 21:56:26 GMT
- Title: Dynamic Online Modulation Recognition using Incremental Learning
- Authors: Ali Owfi, Ali Abbasi, Fatemeh Afghah, Jonathan Ashdown, Kurt Turck
- Abstract summary: Conventional deep learning (DL) models often fall short in online dynamic contexts.
We show that modulation recognition frameworks based on Incremental Learning (IL) effectively prevent catastrophic forgetting, enabling models to perform robustly in dynamic scenarios.
- Score: 6.6953472972255
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modulation recognition is a fundamental task in communication systems as the
accurate identification of modulation schemes is essential for reliable signal
processing, interference mitigation for coexistent communication technologies,
and network optimization. Incorporating deep learning (DL) models into
modulation recognition has demonstrated promising results in various scenarios.
However, conventional DL models often fall short in online dynamic contexts,
particularly in class incremental scenarios where new modulation schemes are
encountered during online deployment. Retraining these models on all previously
seen modulation schemes is not only time-consuming but may also not be feasible
due to storage limitations. On the other hand, training solely on new
modulation schemes often results in catastrophic forgetting of previously
learned classes. This issue renders DL-based modulation recognition models
inapplicable in real-world scenarios because the dynamic nature of
communication systems necessitates effective adaptability to new modulation
schemes. This paper addresses this challenge by evaluating the performance of
multiple Incremental Learning (IL) algorithms in dynamic modulation recognition
scenarios, comparing them against conventional DL-based modulation recognition.
Our results demonstrate that modulation recognition frameworks based on IL
effectively prevent catastrophic forgetting, enabling models to perform
robustly in dynamic scenarios.
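The class-incremental setting described in the abstract can be sketched with a small exemplar replay buffer: as new modulation classes arrive online, a bounded memory retains a few samples per previously seen class so that fine-tuning on new classes is mixed with old-class exemplars. This is a minimal illustrative sketch, not the paper's specific IL algorithms; the `ReplayBuffer` class and its per-class quota policy are assumptions for illustration.

```python
import random
from collections import defaultdict

class ReplayBuffer:
    """Fixed-capacity exemplar memory that keeps an equal share of
    samples for every modulation class seen so far."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.per_class = defaultdict(list)  # class label -> stored samples

    def add(self, x, y):
        """Store sample x for class y, then trim to respect capacity."""
        self.per_class[y].append(x)
        self._rebalance()

    def _rebalance(self):
        # Each class gets an equal quota; keep only the most recent samples.
        quota = max(1, self.capacity // max(1, len(self.per_class)))
        for y in self.per_class:
            self.per_class[y] = self.per_class[y][-quota:]

    def sample(self, k):
        """Draw up to k stored (x, y) pairs to mix into a new-class batch."""
        pool = [(x, y) for y, xs in self.per_class.items() for x in xs]
        return random.sample(pool, min(k, len(pool)))
```

During each online phase, a training batch would combine incoming samples of the new modulation scheme with `buffer.sample(k)` exemplars, so gradient updates see old classes and catastrophic forgetting is mitigated.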
Related papers
- In-Context Learning for Gradient-Free Receiver Adaptation: Principles, Applications, and Theory [54.92893355284945]
Deep learning-based wireless receivers offer the potential to dynamically adapt to varying channel environments. Current adaptation strategies, including joint training, hypernetwork-based methods, and meta-learning, either demonstrate limited flexibility or necessitate explicit optimization through gradient descent. This paper presents gradient-free adaptation techniques rooted in the emerging paradigm of in-context learning (ICL).
arXiv Detail & Related papers (2025-06-18T06:43:55Z) - Model Hemorrhage and the Robustness Limits of Large Language Models [119.46442117681147]
Large language models (LLMs) demonstrate strong performance across natural language processing tasks, yet undergo significant performance degradation when modified for deployment.
We define this phenomenon as model hemorrhage - performance decline caused by parameter alterations and architectural changes.
arXiv Detail & Related papers (2025-03-31T10:16:03Z) - Distilling Reinforcement Learning Algorithms for In-Context Model-Based Planning [39.53836535326121]
We propose Distillation for In-Context Planning (DICP), an in-context model-based RL framework where Transformers simultaneously learn environment dynamics and improve policy in-context.
Our results show that DICP achieves state-of-the-art performance while requiring significantly fewer environment interactions than baselines.
arXiv Detail & Related papers (2025-02-26T10:16:57Z) - Enhancing Online Continual Learning with Plug-and-Play State Space Model and Class-Conditional Mixture of Discretization [72.81319836138347]
Online continual learning (OCL) seeks to learn new tasks from data streams that appear only once, while retaining knowledge of previously learned tasks.
Most existing methods rely on replay, focusing on enhancing memory retention through regularization or distillation.
We introduce a plug-and-play module, S6MOD, which can be integrated into most existing methods and directly improve adaptability.
arXiv Detail & Related papers (2024-12-24T05:25:21Z) - Neural Port-Hamiltonian Differential Algebraic Equations for Compositional Learning of Electrical Networks [20.12750360095627]
We develop compositional learning algorithms for coupled dynamical systems.
We use neural networks to parametrize unknown terms in differential and algebraic components of a port-Hamiltonian DAE.
We train individual N-PHDAE models for separate grid components, before coupling them to accurately predict the behavior of larger-scale networks.
arXiv Detail & Related papers (2024-12-15T15:13:11Z) - Mamba-FSCIL: Dynamic Adaptation with Selective State Space Model for Few-Shot Class-Incremental Learning [113.89327264634984]
Few-shot class-incremental learning (FSCIL) confronts the challenge of integrating new classes into a model with minimal training samples.
Traditional methods widely adopt static adaptation relying on a fixed parameter space to learn from data that arrive sequentially.
We propose a dual selective SSM projector that dynamically adjusts the projection parameters based on the intermediate features for dynamic adaptation.
arXiv Detail & Related papers (2024-07-08T17:09:39Z) - Learning System Dynamics without Forgetting [60.08612207170659]
We investigate the problem of Continual Dynamics Learning (CDL), examining task configurations and evaluating the applicability of existing techniques.
We propose the Mode-switching Graph ODE (MS-GODE) model, which integrates the strengths of LG-ODE and sub-network learning with a mode-switching module.
We construct a novel benchmark of biological dynamic systems for CDL, Bio-CDL, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z) - Optimization of geological carbon storage operations with multimodal latent dynamic model and deep reinforcement learning [1.8549313085249324]
This study introduces the multimodal latent dynamic (MLD) model, a deep learning framework for fast flow prediction and well control optimization in GCS.
Unlike existing models, the MLD supports diverse input modalities, allowing comprehensive data interactions.
The approach outperforms traditional methods, achieving the highest NPV while reducing computational resources by over 60%.
arXiv Detail & Related papers (2024-06-07T01:30:21Z) - Enhancing Automatic Modulation Recognition through Robust Global Feature Extraction [12.868218616042292]
Modulated signals exhibit long temporal dependencies.
Human experts analyze patterns in constellation diagrams to classify modulation schemes.
Classical convolutional-based networks excel at extracting local features but struggle to capture global relationships.
arXiv Detail & Related papers (2024-01-02T06:31:24Z) - When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefit the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z) - A Unified Framework for Alternating Offline Model Training and Policy Learning [62.19209005400561]
In offline model-based reinforcement learning, we learn a dynamic model from historically collected data, and utilize the learned model and fixed datasets for policy learning.
We develop an iterative offline MBRL framework, where we maximize a lower bound of the true expected return.
With the proposed unified model-policy learning framework, we achieve competitive performance on a wide range of continuous-control offline reinforcement learning datasets.
arXiv Detail & Related papers (2022-10-12T04:58:51Z) - Momentum Pseudo-Labeling for Semi-Supervised Speech Recognition [55.362258027878966]
We present momentum pseudo-labeling (MPL) as a simple yet effective strategy for semi-supervised speech recognition.
MPL consists of a pair of online and offline models that interact and learn from each other, inspired by the mean teacher method.
The experimental results demonstrate that MPL effectively improves over the base model and is scalable to different semi-supervised scenarios.
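The online/offline pairing in MPL follows the mean teacher idea: the offline model's weights are an exponential moving average (EMA) of the online model's weights. A minimal sketch of that momentum update, assuming a simple list-of-arrays parameter representation (not the paper's actual implementation):

```python
import numpy as np

def momentum_update(offline_params, online_params, alpha=0.999):
    """EMA update: the offline (teacher) weights track the online
    (student) weights with momentum alpha."""
    return [alpha * w_off + (1.0 - alpha) * w_on
            for w_off, w_on in zip(offline_params, online_params)]
```

Applied after each training step, a large `alpha` makes the offline model a slowly moving, stabilized average that generates pseudo-labels for the online model.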
arXiv Detail & Related papers (2021-06-16T16:24:55Z) - SafeAMC: Adversarial training for robust modulation recognition models [53.391095789289736]
In communication systems, there are many tasks, like modulation recognition, which rely on Deep Neural Networks (DNNs) models.
These models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification.
We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition models.
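Adversarial training of this kind alternates crafting a perturbed input with a gradient step on that perturbed input. As a hedged sketch, here is the Fast Gradient Sign Method (FGSM) and one fine-tuning step for a linear logistic classifier; the paper may use different attacks and model architectures, and the function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def fgsm(x, y, w, eps):
    """FGSM attack on a linear logistic model with labels y in {-1, +1}.
    The input-gradient of the logistic loss is -y * sigma(-y * w.x) * w."""
    sigma = 1.0 / (1.0 + np.exp(y * np.dot(w, x)))  # sigma(-y * w.x)
    grad_x = -y * sigma * w
    return x + eps * np.sign(grad_x)

def adversarial_training_step(x, y, w, eps=0.1, lr=0.01):
    """One fine-tuning step on the FGSM-perturbed input: craft the
    perturbation, then descend the logistic loss gradient wrt w."""
    x_adv = fgsm(x, y, w, eps)
    sigma = 1.0 / (1.0 + np.exp(y * np.dot(w, x_adv)))
    grad_w = -y * sigma * x_adv
    return w - lr * grad_w
```

By construction the perturbation reduces the classification margin `y * w.x`, so training on such inputs pushes the model toward decisions that are robust to imperceptible additive noise.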
arXiv Detail & Related papers (2021-05-28T11:29:04Z) - Model-based Meta Reinforcement Learning using Graph Structured Surrogate Models [40.08137765886609]
We show that our model, called a graph structured surrogate model (GSSM), outperforms state-of-the-art methods in predicting environment dynamics.
Our approach is able to obtain high returns, while allowing fast execution during deployment by avoiding test time policy gradient optimization.
arXiv Detail & Related papers (2021-02-16T17:21:55Z) - Modular Transfer Learning with Transition Mismatch Compensation for Excessive Disturbance Rejection [29.01654847752415]
We propose a transfer learning framework that adapts a control policy for excessive disturbance rejection of an underwater robot.
A modular network of learning policies is applied, composed of a Generalized Control Policy (GCP) and an Online Disturbance Identification Model (ODI).
arXiv Detail & Related papers (2020-07-29T07:44:38Z) - Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.