Generative Adversarial Networks (GANs) in Networking: A Comprehensive
Survey & Evaluation
- URL: http://arxiv.org/abs/2105.04184v1
- Date: Mon, 10 May 2021 08:28:36 GMT
- Title: Generative Adversarial Networks (GANs) in Networking: A Comprehensive
Survey & Evaluation
- Authors: Hojjat Navidan, Parisa Fard Moshiri, Mohammad Nabati, Reza Shahbazian,
Seyed Ali Ghorashi, Vahid Shah-Mansouri and David Windridge
- Abstract summary: Generative Adversarial Networks (GANs) constitute an extensively researched machine learning sub-field.
GANs are typically used to generate or transform synthetic images.
In this paper, we demonstrate how this branch of machine learning can benefit multiple aspects of computer and communication networks.
- Score: 5.196831100533835
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite the recency of their conception, Generative Adversarial Networks
(GANs) constitute an extensively researched machine learning sub-field for the
creation of synthetic data through deep generative modeling. GANs have
consequently been applied in a number of domains, most notably computer vision,
in which they are typically used to generate or transform synthetic images.
Given their relative ease of use, it is therefore natural that researchers in
the field of networking (which has seen extensive application of deep learning
methods) should take an interest in GAN-based approaches. The need for a
comprehensive survey of such activity is therefore urgent. In this paper, we
demonstrate how this branch of machine learning can benefit multiple aspects of
computer and communication networks, including mobile networks, network
analysis, internet of things, physical layer, and cybersecurity. In doing so,
we shall provide a novel evaluation framework for comparing the performance of
different models in non-image applications, applying this to a number of
reference network datasets.
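To make the adversarial training idea behind GANs concrete, the following is a minimal toy sketch (not from the paper, and far simpler than any model it surveys): a one-parameter "generator" G(z) = mu + z tries to match real samples drawn from N(4, 1), while a logistic "discriminator" D(x) = sigmoid(w*x + b) tries to tell real samples from generated ones. All parameter names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

mu = 0.0            # generator parameter: location of generated samples
w, b = 0.05, 0.0    # discriminator parameters (logistic classifier)
lr, batch = 0.05, 64
decay = 0.1         # small weight decay to damp the oscillatory dynamics of this toy

for step in range(2000):
    x_real = rng.normal(4.0, 1.0, batch)   # "real" data
    z = rng.normal(0.0, 1.0, batch)        # noise fed to the generator
    x_fake = mu + z

    # Discriminator step: gradient ascent on E[log D(real)] + E[log(1 - D(fake))]
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    gw = np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake)
    gb = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr * (gw - decay * w)
    b += lr * (gb - decay * b)

    # Generator step: gradient ascent on E[log D(fake)] (non-saturating loss)
    d_fake = sigmoid(w * x_fake + b)
    mu += lr * np.mean((1 - d_fake) * w)

print(f"generator mean after training: {mu:.2f}")  # drifts toward the real mean 4.0
```

The two players are updated simultaneously, as in the competitive process the abstract describes; at equilibrium the discriminator can no longer separate real from generated samples, so the generator's output distribution has matched the data.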
Related papers
- Leveraging advances in machine learning for the robust classification and interpretation of networks [0.0]
Simulation approaches involve selecting a suitable network generative model such as Erdős–Rényi or small-world.
We utilize advances in interpretable machine learning to classify simulated networks by our generative models based on various network attributes.
arXiv Detail & Related papers (2024-03-20T00:24:23Z)
- Pruning neural network models for gene regulatory dynamics using data and domain knowledge [24.670514977455202]
We propose DASH, a framework that guides network pruning by using domain-specific structural information in model fitting.
We show that DASH, using knowledge about gene interaction partners within the putative regulatory network, outperforms general pruning methods by a large margin.
arXiv Detail & Related papers (2024-03-05T23:02:55Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
arXiv Detail & Related papers (2023-10-16T02:12:32Z)
- A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks [60.38369406877899]
Transformer is a deep neural network that employs a self-attention mechanism to comprehend the contextual relationships within sequential data.
Transformer models excel at handling long-range dependencies between input sequence elements and enable parallel processing.
Our survey encompasses the identification of the top five application domains for transformer-based models.
arXiv Detail & Related papers (2023-06-11T23:13:51Z)
- Low-Rank Representations Towards Classification Problem of Complex Networks [0.0]
Complex networks representing social interactions, brain activity, and molecular structures have been widely studied as graphs in order to understand and predict their characteristics.
Models and algorithms for these networks are used in real-life applications, such as search engines, and recommender systems.
We study the performance of such low-rank representations of real-life networks on a network classification problem.
arXiv Detail & Related papers (2022-10-20T19:56:18Z)
- The Multiple Subnetwork Hypothesis: Enabling Multidomain Learning by Isolating Task-Specific Subnetworks in Feedforward Neural Networks [0.0]
We identify a methodology and network representational structure which allows a pruned network to employ previously unused weights to learn subsequent tasks.
We show that networks trained using our approaches are able to learn multiple tasks, which may be related or unrelated, in parallel or in sequence without sacrificing performance on any task or exhibiting catastrophic forgetting.
arXiv Detail & Related papers (2022-07-18T15:07:13Z)
- A Comprehensive Survey on Community Detection with Deep Learning [93.40332347374712]
A community reveals features and connections of its members that distinguish it from other communities in a network.
This survey devises and proposes a new taxonomy covering different categories of the state-of-the-art methods.
The main category, i.e., deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks and autoencoders.
arXiv Detail & Related papers (2021-05-26T14:37:07Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Generative Adversarial Networks (GANs): An Overview of Theoretical Model, Evaluation Metrics, and Recent Developments [9.023847175654602]
The Generative Adversarial Network (GAN) is an effective method to produce samples from a large-scale data distribution.
GANs provide an appropriate way to learn deep representations without widespread use of labeled training data.
In GANs, the generative model is estimated via a competitive process where the generator and discriminator networks are trained simultaneously.
arXiv Detail & Related papers (2020-05-27T05:56:53Z)
- A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications [154.4832792036163]
Generative adversarial networks (GANs) have recently become a hot research topic.
GANs have been widely studied since 2014, and a large number of algorithms have been proposed.
This paper provides a review on various GANs methods from the perspectives of algorithms, theory, and applications.
arXiv Detail & Related papers (2020-01-20T01:52:05Z)
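The competitive process mentioned in the GAN entries above — generator and discriminator trained simultaneously — is conventionally written as the minimax objective of Goodfellow et al. (2014), reproduced here for reference:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

Here the discriminator \(D\) maximizes its ability to distinguish real samples \(x\) from generated ones \(G(z)\), while the generator \(G\) minimizes the same value, i.e., tries to make its samples indistinguishable from the data.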
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.