Radio Signal Classification by Adversarially Robust Quantum Machine
Learning
- URL: http://arxiv.org/abs/2312.07821v1
- Date: Wed, 13 Dec 2023 01:11:35 GMT
- Title: Radio Signal Classification by Adversarially Robust Quantum Machine
Learning
- Authors: Yanqiu Wu, Eromanga Adermann, Chandra Thapa, Seyit Camtepe, Hajime
Suzuki and Muhammad Usman
- Abstract summary: This work applies QVCs to radio signal classification and studies their robustness to various adversarial attacks.
We also propose the novel application of the approximate amplitude encoding (AAE) technique to encode radio signal data efficiently.
- Score: 10.892401165756214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Radio signal classification plays a pivotal role in identifying the
modulation scheme used in received radio signals, which is essential for
demodulation and proper interpretation of the transmitted information.
Researchers have underscored the high susceptibility of ML algorithms for radio
signal classification to adversarial attacks. Such vulnerability could result
in severe consequences, including misinterpretation of critical messages,
interception of classified information, or disruption of communication
channels. Recent advancements in quantum computing have revolutionized theories
and implementations of computation, bringing the unprecedented development of
Quantum Machine Learning (QML). Prior work has shown that quantum variational
classifiers (QVCs) provide notably enhanced robustness against classical
adversarial attacks in image classification. However, no research has yet
explored whether QML can similarly mitigate adversarial threats in the context
of radio signal classification. This work applies QVCs to radio signal
classification and studies their robustness to various adversarial attacks. We
also propose the novel application of the approximate amplitude encoding (AAE)
technique to encode radio signal data efficiently. Our extensive simulation
results show that attacks generated on QVCs transfer well to CNN models,
indicating that these adversarial examples can fool neural networks they are
not explicitly designed to attack. The converse, however, does not hold: QVCs
largely resist the attacks generated on CNNs. Overall, with comprehensive
simulations, our results shed new light on the growing field of QML by bridging
knowledge gaps in quantum adversarial machine learning (QAML) for radio signal
classification and uncovering the
advantages of applying QML methods in practical applications.
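The amplitude-encoding idea behind AAE can be illustrated with a minimal sketch: a real-valued signal vector is zero-padded to a power-of-two length and L2-normalized so its entries can be read as the amplitudes of an n-qubit state. The function name and shapes below are illustrative assumptions; the paper's AAE technique trains a circuit to *approximate* such a state rather than preparing it exactly.

```python
import numpy as np

def amplitude_encode(signal):
    """Map a real-valued signal vector to a unit-norm amplitude vector.

    Zero-pads to the next power of two so the 2**n entries can be read as
    the amplitudes of an n-qubit state. Illustrative sketch only.
    """
    signal = np.asarray(signal, dtype=float)
    n_qubits = int(np.ceil(np.log2(len(signal))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(signal)] = signal
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the all-zero signal")
    return padded / norm, n_qubits

amps, n = amplitude_encode([0.3, -1.2, 0.5, 0.0, 0.7, -0.1])
# amps has 8 entries (3 qubits) and unit L2 norm
```

A 6-sample input is padded to 8 amplitudes, so three qubits suffice, in contrast to the one-feature-per-qubit cost of simpler encodings.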
Related papers
- QML-IDS: Quantum Machine Learning Intrusion Detection System [1.2016264781280588]
We present QML-IDS, a novel Intrusion Detection System that combines quantum and classical computing techniques.
QML-IDS employs Quantum Machine Learning (QML) methodologies to analyze network patterns and detect attack activities.
We show that QML-IDS is effective at attack detection and performs well in binary and multiclass classification tasks.
arXiv Detail & Related papers (2024-10-07T13:07:41Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
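The frequency-domain idea can be sketched roughly as follows: transform each client's update with a DCT, compare low-frequency fingerprints, and drop outliers before averaging. The naive DCT, the median-distance cut-off, and all names below are illustrative assumptions, not FreqFed's actual algorithm (which clusters frequency components rather than using a simple distance filter).

```python
import numpy as np

def dct_ii(x):
    """Naive DCT-II of a 1-D vector (scipy.fft.dct computes the same)."""
    N = len(x)
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    return (np.cos(np.pi * (n + 0.5) * k / N) * x).sum(axis=1)

def frequency_domain_aggregate(updates, n_low=4):
    """Average client updates after a crude frequency-domain outlier cut.

    Updates whose low-frequency DCT fingerprint sits far from the median
    fingerprint are discarded before plain averaging. Sketch of the idea
    only; FreqFed itself clusters the frequency components.
    """
    updates = np.asarray(updates, dtype=float)
    fingerprints = np.array([dct_ii(u)[:n_low] for u in updates])
    center = np.median(fingerprints, axis=0)
    dists = np.linalg.norm(fingerprints - center, axis=1)
    keep = dists <= np.median(dists) * 2.0  # crude outlier threshold
    return updates[keep].mean(axis=0)
```

With five benign updates and one grossly scaled poisoned update, the poisoned fingerprint lands far from the median and is excluded from the average.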
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Magmaw: Modality-Agnostic Adversarial Attacks on Machine Learning-Based
Wireless Communication Systems [23.183028451271745]
Magmaw is the first black-box attack methodology capable of generating universal adversarial perturbations for any multimodal signal transmitted over a wireless channel.
For proof-of-concept evaluation, we build a real-time wireless attack platform using a software-defined radio system.
Surprisingly, Magmaw is also effective against encrypted communication channels and conventional communications.
arXiv Detail & Related papers (2023-11-01T00:33:59Z) - Performance evaluation of Machine learning algorithms for Intrusion Detection System [0.40964539027092917]
This paper focuses on intrusion detection systems (IDSs) analysis using Machine Learning (ML) techniques.
We analyze the KDD CUP-'99' intrusion detection dataset used for training and validating ML models.
arXiv Detail & Related papers (2023-10-01T06:35:37Z) - Problem-Dependent Power of Quantum Neural Networks on Multi-Class
Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of QNNs on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z) - Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial
Attacks [70.51799606279883]
We introduce test-time adversarial attacks on deep neural networks (DNNs) for semantic communications.
We show that it is possible to change the semantics of the transferred information even when the reconstruction loss remains low.
arXiv Detail & Related papers (2022-12-20T17:13:22Z) - Model-based Deep Learning Receiver Design for Rate-Splitting Multiple
Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL outperforms by a significant margin the SIC receiver with imperfect CSIR.
arXiv Detail & Related papers (2022-05-02T12:23:55Z) - Real-time Over-the-air Adversarial Perturbations for Digital
Communications using Deep Neural Networks [0.0]
Adversarial perturbations can be used by RF communications systems to evade reactive jammers and interception systems.
This work attempts to bridge this gap by defining class-specific and sample-independent adversarial perturbations.
We demonstrate the effectiveness of these attacks over-the-air across a physical channel using software-defined radios.
arXiv Detail & Related papers (2022-02-20T14:50:52Z) - Decentralizing Feature Extraction with Quantum Convolutional Neural
Network for Automatic Speech Recognition [101.69873988328808]
We build upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction.
An input speech signal is first up-streamed to a quantum computing server to extract the Mel-spectrogram.
The corresponding convolutional features are encoded using a quantum circuit algorithm with random parameters.
The encoded features are then down-streamed to the local RNN model for the final recognition.
arXiv Detail & Related papers (2020-10-26T03:36:01Z) - Detecting Adversarial Examples for Speech Recognition via Uncertainty
Quantification [21.582072216282725]
Machine learning systems and, specifically, automatic speech recognition (ASR) systems are vulnerable to adversarial attacks.
In this paper, we focus on hybrid ASR systems and compare four acoustic models regarding their ability to indicate uncertainty under attack.
We are able to detect adversarial examples with an area under the receiver operating characteristic (ROC) curve of more than 0.99.
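The uncertainty-thresholding idea can be sketched in a few lines: flag inputs whose predictive distribution has high entropy. The threshold value and function names are illustrative assumptions, not the paper's method (which compares uncertainty measures across four acoustic models).

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of a softmax output; higher means more uncertain."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -(probs * np.log(probs)).sum(axis=-1)

def flag_adversarial(probs, threshold=1.0):
    """Flag an input as suspicious when its predictive entropy is high.

    Illustrative detector sketch only; the threshold would be tuned on
    held-out clean and adversarial data in practice.
    """
    return predictive_entropy(probs) > threshold
```

A confident prediction such as (0.99, 0.005, 0.005) has entropy near 0.06 and passes, while a near-uniform distribution over three classes exceeds the threshold (ln 3 ≈ 1.10).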
arXiv Detail & Related papers (2020-05-24T19:31:02Z) - Data-Driven Symbol Detection via Model-Based Machine Learning [117.58188185409904]
We review a data-driven framework to symbol detection design which combines machine learning (ML) and model-based algorithms.
In this hybrid approach, well-known channel-model-based algorithms are augmented with ML-based algorithms to remove their channel-model-dependence.
Our results demonstrate that these techniques can yield near-optimal performance of model-based algorithms without knowing the exact channel input-output statistical relationship.
arXiv Detail & Related papers (2020-02-14T06:58:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.