Exploiting Transductive Property of Graph Convolutional Neural Networks with Less Labeling Effort
- URL: http://arxiv.org/abs/2105.13765v1
- Date: Sat, 1 May 2021 05:33:31 GMT
- Title: Exploiting Transductive Property of Graph Convolutional Neural Networks with Less Labeling Effort
- Authors: Yasir Kilic
- Abstract summary: The GCN model has made significant experimental contributions by applying convolution filters to graph data.
Due to its transductive property, all of the data samples, only some of which are labeled, are given as input to the model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, machine learning approaches on graph data have become very popular.
Significant results have been obtained by incorporating the implicit or explicit
logical connections between data samples into the model. In this context, the
GCN model has made significant experimental contributions by applying
convolution filters to graph data. The model follows a transductive,
semi-supervised learning approach: because it is transductive, all of the data
samples, only some of which are labeled, are given as input to the model. Since
labeling is costly, the labeling budget matters. This study addresses the
following research question: what is the minimum number of labeled samples at
which optimal model performance is achieved? In addition, experiments examine
how the choice of sampling approach affects model accuracy under a fixed
labeling effort. According to the experiments, the success of the model can be
increased by selecting the samples to label with a local centrality metric.
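The labeling strategy the abstract points to can be made concrete with a short sketch: under a fixed budget, rank nodes by a local centrality metric and label only the top-ranked ones before training the transductive GCN on the full graph. This is a minimal sketch assuming networkx and degree centrality as the local metric; the function name and budget value are illustrative, not taken from the paper's code.

```python
# Minimal sketch, assuming networkx: under a fixed labeling budget, rank
# nodes by degree centrality (a local metric) and label only the top-k.
import networkx as nx

def select_nodes_to_label(G: nx.Graph, budget: int) -> list:
    """Return the `budget` nodes with the highest degree centrality."""
    centrality = nx.degree_centrality(G)  # normalized degree per node
    return sorted(centrality, key=centrality.get, reverse=True)[:budget]

# Toy example on a small built-in graph.
G = nx.karate_club_graph()
print(select_nodes_to_label(G, budget=5))
```

In the transductive setup the GCN would still receive the entire graph as input, with the training loss masked to the selected nodes only.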
Related papers
- DeFoG: Discrete Flow Matching for Graph Generation [45.037260759871124]
We propose DeFoG, a novel framework using discrete flow matching for graph generation.
DeFoG employs a flow-based approach that features an efficient linear noising process and a flexible denoising process (a generic linear-noising sketch follows this entry).
We show that DeFoG achieves state-of-the-art results on synthetic and molecular datasets.
arXiv Detail & Related papers (2024-10-05T18:52:54Z)
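For readers unfamiliar with the linear noising process mentioned above, the sketch below shows one common formulation in discrete flow matching: at time t, each discrete token keeps its data value with probability t and is otherwise resampled uniformly. This is a generic, hedged illustration, not DeFoG's implementation; all names are ours.

```python
# Minimal sketch of a linear noising process for discrete flow matching.
# At time t in [0, 1], each token keeps its data value with probability t
# and is otherwise replaced by a uniform noise token. Illustrative only.
import numpy as np

def noise_tokens(x1: np.ndarray, t: float, num_classes: int,
                 rng: np.random.Generator) -> np.ndarray:
    keep = rng.random(x1.shape) < t                      # stay on the data path
    noise = rng.integers(0, num_classes, size=x1.shape)  # uniform corruption
    return np.where(keep, x1, noise)

rng = np.random.default_rng(0)
x1 = rng.integers(0, 5, size=10)  # toy graph encoded as discrete tokens
print(noise_tokens(x1, t=0.3, num_classes=5, rng=rng))
```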
- Sub-graph Based Diffusion Model for Link Prediction [43.15741675617231]
Denoising Diffusion Probabilistic Models (DDPMs) represent a contemporary class of generative models with exceptional qualities.
We build a novel generative model for link prediction using a dedicated design to decompose the likelihood estimation process via the Bayesian formula.
Our proposed method presents numerous advantages: (1) transferability across datasets without retraining, (2) promising generalization on limited training data, and (3) robustness against graph adversarial attacks.
arXiv Detail & Related papers (2024-09-13T02:23:55Z)
- Generative Expansion of Small Datasets: An Expansive Graph Approach [13.053285552524052]
We introduce an Expansive Synthesis model generating large-scale, information-rich datasets from minimal samples.
An autoencoder with self-attention layers and optimal transport refines distributional consistency.
Results show comparable performance, demonstrating the model's potential to augment training data effectively.
arXiv Detail & Related papers (2024-06-25T02:59:02Z)
- GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z)
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Network (GNN) has demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- SaGess: Sampling Graph Denoising Diffusion Model for Scalable Graph Generation [7.66297856898883]
SaGess is able to generate large real-world networks by augmenting a diffusion model (DiGress) with a generalized divide-and-conquer framework.
SaGess outperforms most of the one-shot state-of-the-art graph generating methods by a significant factor.
arXiv Detail & Related papers (2023-06-29T10:02:39Z)
- From Spectral Graph Convolutions to Large Scale Graph Convolutional Networks [0.0]
Graph Convolutional Networks (GCNs) have been shown to be a powerful concept that has been successfully applied to a large variety of tasks.
We study the theory that paved the way to the definition of GCN, including related parts of classical graph theory (the defining layer rule is sketched after this entry).
arXiv Detail & Related papers (2022-07-12T16:57:08Z)
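Both the survey above and the main paper revolve around the GCN layer rule of Kipf & Welling, H' = sigma(D^{-1/2}(A + I)D^{-1/2} H W). The numpy sketch below is a minimal illustration of that standard rule, not code from either paper; shapes and names are illustrative.

```python
# Minimal numpy sketch of the GCN propagation rule:
#   H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)
# where A is the adjacency matrix, I adds self-loops, and D is the
# degree matrix of A + I.
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    A_hat = A + np.eye(A.shape[0])             # adjacency with self-loops
    d = A_hat.sum(axis=1)                      # degrees of A_hat (all >= 1)
    D_inv_sqrt = np.diag(d ** -0.5)            # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

# Tiny example: a 3-node path graph with 2 features per node.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.random.randn(3, 2)
W = np.random.randn(2, 2)
print(gcn_layer(A, H, W))
```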
- Data-Free Adversarial Knowledge Distillation for Graph Neural Networks [62.71646916191515]
We propose the first end-to-end framework for data-free adversarial knowledge distillation on graph-structured data (DFAD-GNN).
Specifically, DFAD-GNN employs a generative adversarial network with three main components: a pre-trained teacher model and a student model act as two discriminators, while a generator derives training graphs used to distill knowledge from the teacher into the student.
Our DFAD-GNN significantly surpasses state-of-the-art data-free baselines in the graph classification task.
arXiv Detail & Related papers (2022-05-08T08:19:40Z)
- Comparing Test Sets with Item Response Theory [53.755064720563]
We evaluate 29 datasets using predictions from 18 pretrained Transformer models on individual test examples.
We find that Quoref, HellaSwag, and MC-TACO are best suited for distinguishing among state-of-the-art models.
We also observe that the span selection task format, used for QA datasets like QAMR or SQuAD2.0, is effective in differentiating between strong and weak models (a sketch of the underlying 2PL model follows this entry).
arXiv Detail & Related papers (2021-06-01T22:33:53Z)
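As background for the entry above: item response theory fits each test item a difficulty and a discrimination parameter, and each model an ability score. The sketch below shows the standard two-parameter logistic (2PL) response curve under illustrative parameter values; it is not the paper's code.

```python
# Hedged sketch of the two-parameter logistic (2PL) IRT model: item
# difficulty b, item discrimination a, and model ability theta.
import numpy as np

def p_correct(theta: float, a: float, b: float) -> float:
    """Probability that a model of ability theta answers the item correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# A highly discriminative item (large a) separates strong from weak models.
print(p_correct(theta=1.0, a=2.0, b=0.0))   # ~0.88 for a strong model
print(p_correct(theta=-1.0, a=2.0, b=0.0))  # ~0.12 for a weak model
```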
- Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator (sketched after this entry).
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
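To make the NDA objective above concrete, the sketch below treats negative augmentations of real data as additional fakes for the discriminator. `negative_augment` is a hypothetical stand-in for an out-of-support transform such as jigsaw shuffling of image patches; the rest uses standard PyTorch.

```python
# Hedged sketch of an NDA-GAN discriminator loss: real samples are labeled 1;
# generator samples AND negative augmentations of real data are labeled 0.
import torch
import torch.nn.functional as F

def discriminator_loss(D, real, fake, negative_augment):
    nda = negative_augment(real)                 # hypothetical transform
    logits_real = D(real)
    logits_fake = D(torch.cat([fake, nda], dim=0))
    loss_real = F.binary_cross_entropy_with_logits(
        logits_real, torch.ones_like(logits_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        logits_fake, torch.zeros_like(logits_fake))
    return loss_real + loss_fake
```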
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.