Deep Chaos Synchronization
- URL: http://arxiv.org/abs/2104.08436v1
- Date: Sat, 17 Apr 2021 03:57:53 GMT
- Title: Deep Chaos Synchronization
- Authors: Majid Mobini, Georges Kaddoum (Senior Member, IEEE)
- Abstract summary: We introduce a novel Deep Chaos Synchronization (DCS) system using a Convolutional Neural Network (CNN).
We also provide a novel Recurrent Neural Network (RNN)-based chaotic synchronization system for comparative analysis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we address the problem of chaotic synchronization over a noisy
channel by introducing a novel Deep Chaos Synchronization (DCS) system using a
Convolutional Neural Network (CNN). Conventional Deep Learning (DL)-based
communication strategies are powerful, but training them on large data sets
is usually difficult and time-consuming. DCS tackles this challenge by
requiring neither prior information nor large data sets. In addition, we
provide a novel Recurrent Neural Network (RNN)-based chaotic synchronization
system for comparative analysis. The results show that the proposed DCS
architecture is competitive with RNN-based synchronization in terms of
robustness against noise, convergence, and training. Hence, with these
features, the DCS scheme opens the door to a new class of modulation schemes
that meet the noise-robustness, convergence, and training requirements of
Ultra-Reliable Low-Latency Communications (URLLC) and the Industrial Internet
of Things (IIoT).
Related papers
- Self-Organizing Recurrent Stochastic Configuration Networks for Nonstationary Data Modelling [3.8719670789415925]
Recurrent stochastic configuration networks (RSCNs) are a class of randomized models that have shown promise in modelling nonlinear dynamics.
This paper develops a self-organizing version of RSCNs, termed SORSCNs, to enhance the continuous learning ability of the network for modelling nonstationary data.
arXiv Detail & Related papers (2024-10-14T01:28:25Z)
- Communication-Efficient Distributed Deep Learning via Federated Dynamic Averaging [1.4748100900619232]
Federated Dynamic Averaging (FDA) is a communication-efficient distributed deep learning (DDL) strategy.
FDA reduces communication cost by orders of magnitude compared to both traditional and cutting-edge algorithms (see the dynamic-averaging sketch after this list).
arXiv Detail & Related papers (2024-05-31T16:34:11Z)
- Interference Cancellation GAN Framework for Dynamic Channels [74.22393885274728]
We introduce an online training framework that can adapt to any changes in the channel.
Our framework significantly outperforms recent neural network models on highly dynamic channels.
arXiv Detail & Related papers (2022-08-17T02:01:18Z)
- Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput through Link-Level Simulations (LLS), and average training overhead.
Results reveal that the MBDL receiver outperforms the Successive Interference Cancellation (SIC) receiver with imperfect Channel State Information at the Receiver (CSIR) by a significant margin.
arXiv Detail & Related papers (2022-05-02T12:23:55Z)
- Locally Asynchronous Stochastic Gradient Descent for Decentralised Deep Learning [0.0]
Local Asynchronous SGD (LASGD) is an asynchronous decentralized algorithm that relies on All Reduce for model synchronization (see the sketch after this list).
We empirically validate LASGD's performance on image classification tasks on the ImageNet dataset.
arXiv Detail & Related papers (2022-03-24T14:25:15Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism, termed the distributed message-passing neural network (DMPNN), whose forward and backward computations are independent of the network topology (a message-passing sketch appears after this list).
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative Non-Orthogonal Multiple Access (NOMA) scheme, drawing upon recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- STDPG: A Spatio-Temporal Deterministic Policy Gradient Agent for Dynamic Routing in SDN [6.27420060051673]
Dynamic routing in software-defined networking (SDN) can be viewed as a centralized decision-making problem.
We propose a novel model-free framework for dynamic routing in SDN, which is referred to as the spatio-temporal deterministic policy gradient (STDPG) agent.
STDPG achieves better routing solutions in terms of average end-to-end delay.
arXiv Detail & Related papers (2020-04-21T07:19:07Z)
- Asynchronous Decentralized Learning of a Neural Network [49.15799302636519]
We exploit an asynchronous computing framework, namely ARock, to learn a deep neural network called the self-size estimating feedforward neural network (SSFN) in a decentralized scenario.
Asynchronous decentralized SSFN relaxes the communication bottleneck by allowing one-node activation and one-sided communication, which reduces the communication overhead significantly.
Experiments comparing asynchronous dSSFN with traditional synchronous dSSFN show that asynchronous dSSFN remains competitive, especially when the communication network is sparse.
arXiv Detail & Related papers (2020-04-10T15:53:37Z)
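The Federated Dynamic Averaging entry above states the result without the mechanism. The toy below sketches one reading of dynamic averaging: workers train locally and communicate only when a divergence measure of their models crosses a threshold. The quadratic objective, the divergence proxy, and the threshold are assumptions for illustration; in particular, a real implementation must estimate divergence without gathering all models (the FDA paper uses compact sketches for this), whereas this toy computes it centrally.

```python
import numpy as np

# Hypothetical sketch of dynamic averaging: K workers run local SGD on a
# toy quadratic objective and average (synchronize) their models only when
# a divergence proxy exceeds a threshold. All names and constants here are
# illustrative assumptions, not values from the FDA paper.
rng = np.random.default_rng(0)
K, dim, steps, lr, threshold = 4, 8, 200, 0.05, 0.5

target = rng.normal(size=dim)                # common optimum of the toy loss
models = [rng.normal(size=dim) for _ in range(K)]
sync_rounds = 0

for t in range(steps):
    for k in range(K):
        # Noisy gradient of 0.5 * ||w - target||^2 at worker k.
        grad = (models[k] - target) + 0.1 * rng.normal(size=dim)
        models[k] = models[k] - lr * grad
    avg = np.mean(models, axis=0)
    # Divergence proxy: mean squared distance of local models from their average.
    divergence = np.mean([np.sum((m - avg) ** 2) for m in models])
    if divergence > threshold:               # communicate only when drift is large
        models = [avg.copy() for _ in range(K)]
        sync_rounds += 1

print(f"synchronized in {sync_rounds} of {steps} rounds")
```

Because synchronization is triggered by drift rather than by a fixed schedule, communication rounds become rarer as the workers approach the shared optimum.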
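For the Locally Asynchronous SGD entry, the following single-process simulation stands in for the distributed setting: each worker takes a randomly varying number of local steps per round, and a plain model average stands in for the All Reduce step. The least-squares objective, step counts, and learning rate are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Hypothetical single-process simulation of locally asynchronous SGD:
# workers progress at different speeds between synchronizations, and the
# average of the local models plays the role of an All Reduce.
rng = np.random.default_rng(1)
K, dim, rounds, lr = 4, 8, 50, 0.05

A = rng.normal(size=(64, dim))               # toy least-squares problem
x_true = rng.normal(size=dim)
b = A @ x_true
models = [rng.normal(size=dim) for _ in range(K)]

for r in range(rounds):
    for k in range(K):
        # Asynchrony stand-in: each worker takes 1-4 local steps this round.
        for _ in range(rng.integers(1, 5)):
            i = rng.integers(0, A.shape[0])
            grad = (A[i] @ models[k] - b[i]) * A[i]   # single-sample gradient
            models[k] = models[k] - lr * grad
    avg = np.mean(models, axis=0)            # All Reduce: average the models
    models = [avg.copy() for _ in range(K)]

print("error:", np.linalg.norm(avg - x_true))
```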
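Finally, for the DMPNN entry, the sketch below shows how shared per-node weights make a message-passing forward pass independent of the network topology: the same two matrices are applied at every node, whatever the random graph looks like. The aggregation rule (neighbor mean), the nonlinearity, and all sizes are assumptions for illustration, not the DMPNN architecture itself.

```python
import numpy as np

# Hypothetical message-passing forward pass with topology-independent
# parameters: W_self and W_msg are shared by all nodes, so the parameter
# count does not grow with the number of nodes or links.
rng = np.random.default_rng(2)
n_nodes, feat = 5, 3

adj = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)  # random topology
np.fill_diagonal(adj, 0.0)
h = rng.normal(size=(n_nodes, feat))          # initial node states

W_self = rng.normal(size=(feat, feat)) * 0.1  # shared across all nodes
W_msg = rng.normal(size=(feat, feat)) * 0.1

for _ in range(3):                            # three message-passing rounds
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    msgs = (adj @ h) / deg                    # mean over each node's neighbors
    h = np.tanh(h @ W_self + msgs @ W_msg)

print("updated node states:\n", h)
```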