AI/ML-Based Automatic Modulation Recognition: Recent Trends and Future Possibilities
- URL: http://arxiv.org/abs/2502.05315v2
- Date: Tue, 18 Feb 2025 03:43:28 GMT
- Title: AI/ML-Based Automatic Modulation Recognition: Recent Trends and Future Possibilities
- Authors: Elaheh Jafarigol, Behnoud Alaghband, Azadeh Gilanpour, Saeid Hosseinipoor, Mirhamed Mirmozafari
- Abstract summary: We present a review of high-performance automatic modulation recognition (AMR) models proposed in the literature to classify various Radio Frequency (RF) modulation schemes.
We replicated these models and compared their performance in terms of accuracy across a range of signal-to-noise ratios.
- Score: 0.0
- Abstract: We present a review of high-performance automatic modulation recognition (AMR) models proposed in the literature to classify various Radio Frequency (RF) modulation schemes. We replicated these models and compared their performance in terms of accuracy across a range of signal-to-noise ratios. To ensure a fair comparison, we used the same dataset (RadioML-2016A), the same hardware, and a consistent definition of test accuracy as the evaluation metric, thereby providing a benchmark for future AMR studies. The hyperparameters were selected based on the authors' suggestions in the associated references to achieve results as close as possible to the originals. The replicated models are publicly accessible for further analysis of AMR models. We also present the test accuracies of the selected models versus their number of parameters, indicating their complexities. Building on this comparative analysis, we identify strategies to enhance these models' performance. Finally, we present potential opportunities for improvement, whether through novel architectures, data processing techniques, or training strategies, to further advance the capabilities of AMR models.
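As a concrete illustration of the benchmarking protocol described above, the sketch below evaluates a trained classifier's test accuracy at each SNR of RadioML-2016.10A. The pickle filename, the (modulation, snr) key layout of the common dataset release, and the model.predict() interface are assumptions for illustration, not the paper's released code.
```python
# Minimal sketch: per-SNR test accuracy on RadioML-2016.10A.
# Assumes the common pickle release "RML2016.10a_dict.pkl", whose keys
# are (modulation, snr) tuples and values are (N, 2, 128) IQ frames,
# and any trained classifier exposing predict() -> class probabilities.
import pickle
import numpy as np

with open("RML2016.10a_dict.pkl", "rb") as f:
    data = pickle.load(f, encoding="latin1")

mods = sorted({mod for mod, _ in data})
snrs = sorted({snr for _, snr in data})

def accuracy_at_snr(model, snr):
    """Test accuracy over all modulation classes at one SNR."""
    X = np.concatenate([data[(mod, snr)] for mod in mods])
    y = np.concatenate([[mods.index(mod)] * len(data[(mod, snr)])
                        for mod in mods])
    pred = model.predict(X).argmax(axis=-1)
    return float((pred == y).mean())

# Accuracy-vs-SNR curve for one replicated model:
# curve = {snr: accuracy_at_snr(trained_model, snr) for snr in snrs}
```
Running the same loop for every replicated model, on the same split and hardware, is what makes the resulting accuracy-versus-SNR curves directly comparable.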
Related papers
- Optimizing Sequential Recommendation Models with Scaling Laws and Approximate Entropy [104.48511402784763]
The Performance Law for SR models aims to theoretically investigate and model the relationship between model performance and data quality.
We propose Approximate Entropy (ApEn) to assess data quality, a more nuanced measure than traditional data-quantity metrics (a minimal ApEn sketch follows this entry).
arXiv Detail & Related papers (2024-11-30T10:56:30Z)
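Approximate Entropy itself is a standard, well-defined regularity statistic; a minimal NumPy sketch is below. The template length m and tolerance fraction r are conventional defaults, not values taken from the paper.
```python
# Approximate Entropy (ApEn) of a 1-D series; lower ApEn means the
# series is more regular/predictable.
import numpy as np

def approximate_entropy(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    tol = r * x.std()  # tolerance as a fraction of the std (common choice)

    def phi(m):
        # All overlapping length-m templates, compared with Chebyshev distance.
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        dist = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        c = (dist <= tol).mean(axis=1)  # match fractions (self-match included)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```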
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm that incorporates score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
- Introducing Flexible Monotone Multiple Choice Item Response Theory Models and Bit Scales [0.0]
We present a new model for multiple choice data, the monotone multiple choice (MMC) model, which we fit using autoencoders.
We demonstrate empirically that the MMC model outperforms the traditional nominal response IRT model in terms of fit.
arXiv Detail & Related papers (2024-10-02T12:33:16Z)
- Beyond Benchmarks: Evaluating Embedding Model Similarity for Retrieval Augmented Generation Systems [0.9976432338233169]
We evaluate the similarity of embedding models within the context of RAG systems.
We compare different families of embedding models, including proprietary ones, across five datasets.
We identify possible open-source alternatives to proprietary models, with Mistral exhibiting the highest similarity to OpenAI models (one way to quantify such similarity is sketched after this entry).
arXiv Detail & Related papers (2024-07-11T08:24:16Z)
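One common way to quantify similarity between two embedding models is linear Centered Kernel Alignment (CKA) over embeddings of the same documents; the sketch below uses CKA as a generic example, not necessarily the measure employed in the paper.
```python
# Linear CKA between two embedding models' outputs for the same n
# documents; a standard representation-similarity measure.
import numpy as np

def linear_cka(X, Y):
    """X: (n, d1) embeddings from model A; Y: (n, d2) from model B."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den  # 1.0 means identical up to rotation/scale
```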
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights, for example, improves the absolute performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Deep Learning Models for Knowledge Tracing: Review and Empirical Evaluation [2.423547527175807]
We review and evaluate a body of deep learning knowledge tracing (DLKT) models with openly available and widely-used data sets.
The evaluated DLKT models have been reimplemented to assess the replicability of previously reported results.
arXiv Detail & Related papers (2021-12-30T14:19:27Z)
- An Efficient Deep Learning Model for Automatic Modulation Recognition Based on Parameter Estimation and Transformation [3.3941243094128035]
This letter proposes an efficient DL-AMR model based on phase parameter estimation and transformation (a simplified preprocessing sketch follows this entry).
Our model is more competitive in training and test time than benchmark models of similar recognition accuracy.
arXiv Detail & Related papers (2021-10-11T03:28:28Z)
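As a rough illustration of "phase parameter estimation and transformation", the sketch below estimates a single phase parameter from an IQ frame and derotates the frame before classification. The estimator shown (phase of the highest-amplitude sample) is a hypothetical simplification, not the letter's exact method.
```python
# Hypothetical sketch: estimate one global phase parameter and apply
# it as a derotation of the IQ frame prior to classification.
import numpy as np

def derotate_iq(iq):
    """iq: array of shape (2, N) holding the I and Q rows of one frame."""
    z = iq[0] + 1j * iq[1]                   # complex baseband samples
    phase = np.angle(z[np.abs(z).argmax()])  # estimated phase parameter
    z = z * np.exp(-1j * phase)              # transformation: derotation
    return np.stack([z.real, z.imag])        # back to (2, N) real format
```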
- How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models [95.8037674226622]
We introduce a 3-dimensional evaluation metric that characterizes the fidelity, diversity and generalization performance of any generative model in a domain-agnostic fashion.
Our metric unifies statistical divergence measures with precision-recall analysis, enabling sample- and distribution-level diagnoses of model fidelity and diversity (a generic precision-recall sketch follows this entry).
arXiv Detail & Related papers (2021-02-17T18:25:30Z)
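Precision-recall analysis for generative models is often implemented with k-nearest-neighbour manifold estimates; the sketch below shows that generic construction, not the paper's specific three-dimensional metric.
```python
# Generic k-NN precision/recall for generative models. real/fake are
# (n, d) feature arrays; O(n^2) memory, so suitable for small n only.
import numpy as np

def knn_radii(x, k=5):
    """Distance from each row of x to its k-th nearest neighbour in x."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the self-distance 0

def precision_recall(real, fake, k=5):
    r_real, r_fake = knn_radii(real, k), knn_radii(fake, k)
    d_fr = np.linalg.norm(fake[:, None, :] - real[None, :, :], axis=-1)
    precision = (d_fr <= r_real[None, :]).any(axis=1).mean()  # fakes on real manifold
    recall = (d_fr.T <= r_fake[None, :]).any(axis=1).mean()   # reals on fake manifold
    return float(precision), float(recall)
```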
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors (a common penalty of this kind is sketched after this entry).
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
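A widely used instance of "penalizing the posterior" is the beta-VAE objective, sketched below in PyTorch as a generic example; the paper's first-stage penalty may differ.
```python
# Generic penalty-based disentanglement objective (beta-VAE). mu/logvar
# are hypothetical encoder outputs parameterizing the diagonal Gaussian
# approximate posterior q(z|x).
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_hat, x, reduction="sum")  # reconstruction term
    # KL(q(z|x) || N(0, I)), closed form for a diagonal Gaussian posterior
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl  # beta > 1 strengthens the independence pressure
```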
- Semi-nonparametric Latent Class Choice Model with a Flexible Class Membership Component: A Mixture Model Approach [6.509758931804479]
The proposed model formulates the latent classes using mixture models as an alternative approach to the traditional random utility specification.
Results show that mixture models improve the overall performance of latent class choice models.
arXiv Detail & Related papers (2020-07-06T13:19:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.