Latent Network Embedding via Adversarial Auto-encoders
- URL: http://arxiv.org/abs/2109.15257v1
- Date: Thu, 30 Sep 2021 16:49:46 GMT
- Title: Latent Network Embedding via Adversarial Auto-encoders
- Authors: Minglong Lei and Yong Shi and Lingfeng Niu
- Abstract summary: We propose a latent network embedding model based on adversarial graph auto-encoders.
Under this framework, the problem of discovering latent structures is formulated as inferring the latent ties from partial observations.
- Score: 15.656374849760734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph auto-encoders have proved useful in network embedding tasks.
However, current models only consider explicit structures and fail to explore
the informative latent structures hidden in networks. To address this issue,
we propose a latent network embedding model based on adversarial graph
auto-encoders. Under this framework, the problem of discovering latent
structures is formulated as inferring the latent ties from partial
observations. A latent transmission matrix that describes the strengths of
existing edges and latent ties is derived based on influence cascades sampled
by simulating diffusion processes over networks. In addition, since the inference
process may introduce extra noise, we employ adversarial training as a
regularizer to suppress this noise and improve the model's robustness.
Extensive experiments on link prediction and node classification tasks show
that the proposed model achieves superior results compared with baseline
models.
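The abstract outlines two components (a latent transmission matrix built from simulated influence cascades, and adversarial regularization of the auto-encoder) but gives no implementation details. The sketch below is a minimal, illustrative approximation rather than the authors' code: it assumes an independent-cascade diffusion model, a row-normalised co-activation count as the transmission matrix, an inner-product decoder, and a GAN-style discriminator against a Gaussian prior. All names here are hypothetical.

```python
# Rough sketch of the two ideas described in the abstract (assumed details,
# not the released implementation).
import numpy as np
import networkx as nx
import torch
import torch.nn as nn


def sample_cascades(graph, num_cascades=200, activation_prob=0.1, rng=None):
    """Simulate independent-cascade diffusions and count which nodes each
    cascade reaches. The row-normalised count matrix plays the role of the
    latent transmission matrix (the paper's exact construction may differ)."""
    rng = rng or np.random.default_rng(0)
    nodes = list(graph.nodes())
    index = {v: k for k, v in enumerate(nodes)}
    n = len(nodes)
    T = np.zeros((n, n))
    for _ in range(num_cascades):
        seed = nodes[rng.integers(n)]
        active, frontier = {seed}, [seed]
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.neighbors(u):
                    if v not in active and rng.random() < activation_prob:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        reached = [index[v] for v in active]
        T[index[seed], reached] += 1.0
    row_sums = T.sum(axis=1, keepdims=True)
    return T / np.maximum(row_sums, 1.0)  # strengths of existing and latent ties


class AdversarialGAE(nn.Module):
    """Toy auto-encoder whose latent codes are regularised by a discriminator,
    mirroring the adversarial training described in the abstract."""

    def __init__(self, in_dim, hidden_dim=64, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, latent_dim))
        self.discriminator = nn.Sequential(nn.Linear(latent_dim, hidden_dim),
                                           nn.ReLU(),
                                           nn.Linear(hidden_dim, 1))

    def forward(self, features, transmission):
        # Propagate features along the transmission matrix before encoding,
        # then reconstruct tie strengths from inner products of latent codes.
        z = self.encoder(transmission @ features)
        recon = torch.sigmoid(z @ z.t())
        return z, recon

    def adversarial_losses(self, z):
        """GAN-style losses: the discriminator separates encoder codes from
        Gaussian prior samples; the encoder tries to fool it (regularisation)."""
        bce = nn.functional.binary_cross_entropy_with_logits
        prior = torch.randn_like(z)
        d_real = self.discriminator(prior)
        d_fake = self.discriminator(z.detach())
        disc_loss = (bce(d_real, torch.ones_like(d_real)) +
                     bce(d_fake, torch.zeros_like(d_fake)))
        enc_loss = bce(self.discriminator(z), torch.ones_like(d_fake))
        return disc_loss, enc_loss


# Usage sketch on a toy graph with one-hot node features.
G = nx.karate_club_graph()
T = torch.tensor(sample_cascades(G), dtype=torch.float32)
X = torch.eye(G.number_of_nodes())
model = AdversarialGAE(in_dim=X.shape[1])
z, recon = model(X, T)
disc_loss, enc_loss = model.adversarial_losses(z)
```

The inner-product decoder and Gaussian prior follow common graph auto-encoder and adversarial auto-encoder conventions; the paper's actual decoder, prior, and cascade construction may differ.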
Related papers
- Valid Bootstraps for Networks with Applications to Network Visualisation [0.0]
Quantifying uncertainty in networks is an important step in modelling relationships and interactions between entities.
We consider the challenge of bootstrapping an inhomogeneous random graph when only a single observation of the network is made.
We propose a principled, novel, distribution-free network bootstrap using k-nearest neighbour smoothing.
arXiv Detail & Related papers (2024-10-28T10:22:22Z)
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes the model learning by paying closer attention to those training samples with a high difference in explanations.
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel fine-tuning smooth regularization that rectifies structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z) - Fitting Low-rank Models on Egocentrically Sampled Partial Networks [4.111899441919165]
We propose an approach to fit general low-rank models for egocentrically sampled networks.
This method offers the first theoretical guarantee for egocentric partial network estimation.
We evaluate the technique on several synthetic and real-world networks and show that it delivers competitive performance in link prediction tasks.
arXiv Detail & Related papers (2023-03-09T03:20:44Z) - Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Online Estimation and Community Detection of Network Point Processes for
Event Streams [12.211623200731788]
A common goal in network modeling is to uncover the latent community structure present among nodes.
We propose a fast online variational inference algorithm for estimating the latent structure underlying dynamic event arrivals on a network.
We demonstrate that online inference can obtain comparable performance, in terms of community recovery, to non-online variants.
arXiv Detail & Related papers (2020-09-03T15:39:55Z) - Intervention Generative Adversarial Networks [21.682592654097352]
We propose a novel approach for stabilizing the training process of Generative Adversarial Networks.
We refer to the resulting generative model as Intervention Generative Adversarial Networks (IVGAN).
arXiv Detail & Related papers (2020-08-09T11:51:54Z) - Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Progressive Graph Convolutional Networks for Semi-Supervised Node Classification [97.14064057840089]
Graph convolutional networks have been successful in addressing graph-based tasks such as semi-supervised node classification.
We propose a method to automatically build compact and task-specific graph convolutional networks.
arXiv Detail & Related papers (2020-03-27T08:32:16Z)