Reinforce Security: A Model-Free Approach Towards Secure Wiretap Coding
- URL: http://arxiv.org/abs/2106.00343v1
- Date: Tue, 1 Jun 2021 09:30:15 GMT
- Title: Reinforce Security: A Model-Free Approach Towards Secure Wiretap Coding
- Authors: Rick Fritschek, Rafael F. Schaefer, Gerhard Wunder
- Abstract summary: Deep learning techniques for approximating secure encoding functions have attracted considerable interest in wireless communications.
In this paper, the approach of reinforcement learning is studied and, in particular, the policy gradient method for a model-free approach to neural network-based secure encoding is investigated.
- Score: 30.74553644848033
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of deep learning-based techniques for approximating secure encoding
functions has attracted considerable interest in wireless communications due to
impressive results obtained for general coding and decoding tasks in wireless
communication systems. Of particular importance is the development of
model-free techniques that work without knowledge about the underlying channel.
Such techniques utilize, for example, generative adversarial networks to
estimate and model the conditional channel distribution, mutual information
estimation as a reward function, or reinforcement learning. In this paper, the
approach of reinforcement learning is studied and, in particular, the policy
gradient method for a model-free approach to neural network-based secure
encoding is investigated. Previously developed techniques for enforcing a
certain coset
structure on the encoding process can be combined with recent reinforcement
learning approaches. This new approach is evaluated by extensive simulations,
and it is demonstrated that the resulting decoding performance of an
eavesdropper is capped at a certain error level.
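As a concrete illustration of the model-free ingredient, below is a minimal,
numpy-only sketch, assuming a toy linear encoder, AWGN channels, and a
hand-shaped reward; none of these choices are taken from the paper itself. The
encoder is treated as a stochastic policy: Gaussian exploration noise is added
to the transmitted symbols, and the weights are updated with a REINFORCE-style
gradient computed from a scalar reward alone, so the channel never has to be
differentiated.

```python
# A model-free, REINFORCE-style training loop for a toy "secure" encoder.
# The encoder is a single linear layer (standing in for a neural network);
# exploration noise on the transmitted symbols makes it a Gaussian policy,
# so its weights can be updated from a scalar reward without differentiating
# through the channel. All dimensions, SNRs, reward shaping, and the
# matched-filter decoders below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 8                          # message bits, channel uses
sigma_pi = 0.15                      # std dev of the exploration (policy) noise
snr_bob, snr_eve = 10.0, 2.0         # assumed SNRs of Bob's and Eve's channels (dB)
W = rng.normal(scale=0.1, size=(k, n))   # toy linear encoder weights

def channel(x, snr_db):
    # AWGN channel; in the model-free setting the transmitter never sees this.
    return x + rng.normal(scale=10 ** (-snr_db / 20), size=x.shape)

def reward(msg, x):
    # Crude matched-filter "decoders" for Bob and Eve; a real system would
    # train decoder networks and feed back their losses as the reward.
    bob_err = np.mean(((channel(x, snr_bob) @ W.T) > 0) != msg)
    eve_err = np.mean(((channel(x, snr_eve) @ W.T) > 0) != msg)
    # Reward Bob's reliability; credit Eve's confusion only up to chance
    # level, mirroring the capped eavesdropper error rate in the abstract.
    return -bob_err + 0.5 * min(eve_err, 0.5)

lr = 0.05
for step in range(2000):
    msg = rng.integers(0, 2, size=(1, k)).astype(float)
    x_mean = msg @ W                                            # encoder output
    x = x_mean + rng.normal(scale=sigma_pi, size=x_mean.shape)  # sampled action
    r = reward(msg, x)
    # REINFORCE: grad_W log N(x; x_mean, sigma_pi^2), scaled by the reward.
    W += lr * r * (msg.T @ ((x - x_mean) / sigma_pi**2))
```

The key point of the sketch is that the update uses only the sampled action,
its mean, and a scalar reward, which is exactly what makes the approach
applicable without knowledge of the underlying channel.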
Related papers
- Active Learning of Deep Neural Networks via Gradient-Free Cutting Planes [40.68266398473983]
In this work, we investigate an active learning scheme via a novel cutting-plane method for ReLU networks of arbitrary depth.
We demonstrate that these algorithms can be extended to deep neural networks despite their non-convexity.
We exemplify the effectiveness of our proposed active learning method against popular deep active learning baselines via both data experiments and classification tasks on real datasets.
arXiv Detail & Related papers (2024-10-03T02:11:35Z) - A Rate-Distortion View of Uncertainty Quantification [36.85921945174863]
In supervised learning, understanding an input's proximity to the training data can help a model decide whether it has sufficient evidence for reaching a reliable prediction.
We introduce Distance Aware Bottleneck (DAB), a new method for enriching deep neural networks with this property.
arXiv Detail & Related papers (2024-06-16T01:33:22Z) - Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can stem from biases in data acquisition rather than from the task itself.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs; a minimal sketch of the underlying interval bound propagation appears after this list.
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Hierarchically Structured Task-Agnostic Continual Learning [0.0]
We take a task-agnostic view of continual learning and develop a hierarchical information-theoretic optimality principle.
We propose a neural network layer, called the Mixture-of-Variational-Experts layer, that alleviates forgetting by creating a set of information processing paths.
Our approach can operate in a task-agnostic way, i.e., it does not require the task-specific knowledge that many existing continual learning algorithms rely on.
arXiv Detail & Related papers (2022-11-14T19:53:15Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Verified Probabilistic Policies for Deep Reinforcement Learning [6.85316573653194]
We tackle the problem of verifying probabilistic policies for deep reinforcement learning.
We propose an abstraction approach, based on interval Markov decision processes, that yields guarantees on a policy's execution.
We present techniques to build and solve these models using abstract interpretation, mixed-integer linear programming, entropy-based refinement and probabilistic model checking.
arXiv Detail & Related papers (2022-01-10T23:55:04Z) - Feedback Coding for Active Learning [15.239252118069762]
We develop an optimal transport-based feedback coding scheme for the task of active example selection.
We evaluate APM on a variety of datasets and demonstrate learning performance comparable to existing active learning methods.
arXiv Detail & Related papers (2021-02-28T23:00:34Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Model-Based Machine Learning for Communications [110.47840878388453]
We review existing strategies for combining model-based algorithms and machine learning from a high level perspective.
We focus on symbol detection, which is one of the fundamental tasks of communication receivers.
arXiv Detail & Related papers (2021-01-12T19:55:34Z) - FedRec: Federated Learning of Universal Receivers over Fading Channels [92.15358738530037]
We propose a neural network-based symbol detection technique for downlink fading channels.
Multiple users collaborate to jointly learn a universal data-driven detector, hence the name FedRec; a toy federated-averaging sketch appears after this list.
The performance of the resulting receiver is shown to approach the MAP performance in diverse channel conditions without requiring knowledge of the fading statistics.
arXiv Detail & Related papers (2020-11-14T11:29:55Z)
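For the QA-IBP entry above, the following is a minimal numpy sketch of plain
interval bound propagation, the verification primitive that QA-IBP builds on;
the quantization-aware training objective itself is omitted, and the toy
network, input, and perturbation radius are illustrative assumptions.

```python
# Plain interval bound propagation (IBP): propagate an l_inf input box
# through affine + ReLU layers and check whether the predicted class is
# certifiably robust. Weights and input are random placeholders.
import numpy as np

def ibp_affine(lo, hi, W, b):
    # For y = x @ W + b with x in [lo, hi], a sound lower bound routes lo
    # through positive weights and hi through negative weights (and vice
    # versa for the upper bound).
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return lo @ W_pos + hi @ W_neg + b, hi @ W_pos + lo @ W_neg + b

def ibp_forward(x, eps, layers):
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_affine(lo, hi, W, b)
        if i < len(layers) - 1:                   # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

def forward(x, layers):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0)
    return x

rng = np.random.default_rng(1)
layers = [(rng.normal(size=(4, 8)), np.zeros(8)),
          (rng.normal(size=(8, 3)), np.zeros(3))]
x = rng.normal(size=(1, 4))
pred = int(np.argmax(forward(x, layers)[0]))
lo, hi = ibp_forward(x, eps=0.1, layers=layers)
# Certified iff the predicted logit's lower bound beats every rival's upper bound.
print("certified robust:", bool(lo[0, pred] > np.delete(hi[0], pred).max()))
```

QA-IBP would additionally carry such bounds through the quantized arithmetic of
the QNN and fold the certified margin into the training loss; this sketch stops
at the verification step.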
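For the FedRec entry above, here is a toy federated-averaging loop for a
data-driven symbol detector, sketched under an assumed real-valued BPSK fading
model; the features, model size, and training schedule are illustrative and not
taken from the paper.

```python
# Federated learning of a shared symbol detector: each user runs local SGD
# on its own fading realizations, and the server averages the weights so the
# resulting detector works across heterogeneous channels.
import numpy as np

rng = np.random.default_rng(2)
n_users, rounds, local_steps, lr = 4, 30, 25, 0.5
fading_std = [0.5 + 0.5 * u for u in range(n_users)]    # per-user channel statistics

def make_batch(user, size=64):
    s = rng.integers(0, 2, size)                        # transmitted bits
    h = rng.normal(scale=fading_std[user], size=size)   # real fading gains
    y = h * (2 * s - 1) + rng.normal(scale=0.3, size=size)
    X = np.stack([y * h, np.abs(h)], axis=1)            # hand-crafted features
    return X, s

def local_update(w, user):
    # A few steps of logistic-regression SGD on the user's own samples.
    for _ in range(local_steps):
        X, s = make_batch(user)
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - s) / len(s)             # cross-entropy gradient
    return w

w_global = np.zeros(2)
for _ in range(rounds):
    # One round: broadcast the global model, train locally, average (FedAvg).
    w_global = np.mean([local_update(w_global.copy(), u)
                        for u in range(n_users)], axis=0)

X, s = make_batch(0, size=1000)
print("user-0 bit error rate:", np.mean((X @ w_global > 0) != s))
```

The averaging step is what lets users with different fading statistics pool
their experience without sharing raw samples, which is the collaboration the
summary describes.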
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.