Network Embedding with Completely-imbalanced Labels
- URL: http://arxiv.org/abs/2007.03545v1
- Date: Tue, 7 Jul 2020 15:22:54 GMT
- Title: Network Embedding with Completely-imbalanced Labels
- Authors: Zheng Wang (1), Xiaojun Ye (2), Chaokun Wang (2), Jian Cui (1), Philip
S. Yu (3) ((1) Department of Computer Science, University of Science and
Technology Beijing, (2) School of Software, Tsinghua University, (3) Department
of Computer Science, University of Illinois at Chicago)
- Abstract summary: We propose two novel semi-supervised network embedding methods.
The first is a shallow method named RSDNE. Specifically, to benefit from the completely-imbalanced labels, RSDNE guarantees both intra-class similarity and inter-class dissimilarity in an approximate way.
The other method, RECT, is a new class of graph neural network. Unlike RSDNE, RECT benefits from the completely-imbalanced labels by exploring class-semantic knowledge, which enables it to handle networks with node features and the multi-label setting.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Network embedding, aiming to project a network into a low-dimensional space,
is increasingly becoming a focus of network research. Semi-supervised network
embedding takes advantage of labeled data and has shown promising performance.
However, existing semi-supervised methods perform poorly in the
completely-imbalanced label setting, where some classes have no labeled nodes at
all. To alleviate this, we propose two novel semi-supervised network embedding
methods. The first is a shallow method named RSDNE. Specifically, to
benefit from the completely-imbalanced labels, RSDNE guarantees both
intra-class similarity and inter-class dissimilarity in an approximate way. The
other method, RECT, is a new class of graph neural network. Unlike RSDNE,
RECT benefits from the completely-imbalanced labels by exploring
class-semantic knowledge, which enables it to handle networks with node
features and the multi-label setting. Experimental results on several real-world
datasets demonstrate the superiority of the proposed methods.
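As a rough illustration of the class-semantic idea behind RECT, the sketch below regresses each labeled node's propagated embedding onto its class prototype (the mean feature vector of its class), so that classes with no labeled nodes can still be placed through the learned feature-to-semantics mapping. This is a minimal PyTorch simplification, not the authors' exact model; the function names, the two-step GCN-style propagation, and the MSE objective are all assumptions.

```python
import torch

def class_prototypes(x, y, labeled_mask):
    # mean feature vector (a crude "semantic description") per observed class
    return {c: x[labeled_mask & (y == c)].mean(dim=0)
            for c in y[labeled_mask].unique().tolist()}

def train_rect_like(x, a_hat, y, labeled_mask, dim=128, epochs=200, lr=1e-2):
    """x: [n, f] node features; a_hat: [n, n] normalized adjacency (hypothetical setup)."""
    protos = class_prototypes(x, y, labeled_mask)
    targets = torch.stack([protos[int(c)] for c in y[labeled_mask]])
    enc = torch.nn.Linear(x.size(1), dim)   # embedding layer
    dec = torch.nn.Linear(dim, x.size(1))   # predicts class semantics
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        h = a_hat @ torch.relu(a_hat @ enc(x))  # two-hop GCN-style propagation
        # semantic loss: labeled nodes should reconstruct their class prototype
        loss = torch.nn.functional.mse_loss(dec(h)[labeled_mask], targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return a_hat @ torch.relu(a_hat @ enc(x))  # final node embeddings
```

Because the regression target lives in feature space rather than in the (incomplete) label space, unseen classes are not forced into the decision boundaries of the seen ones.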
Related papers
- Domain-adaptive Message Passing Graph Neural Network [67.35534058138387]
Cross-network node classification (CNNC) aims to classify nodes in a label-deficient target network by transferring the knowledge from a source network with abundant labels.
We propose a domain-adaptive message passing graph neural network (DM-GNN), which integrates graph neural network (GNN) with conditional adversarial domain adaptation.
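For context, a standard building block behind such adversarial domain adaptation is a gradient-reversal layer (as in DANN): a discriminator learns to tell source from target embeddings while reversed gradients push the encoder to make them indistinguishable. The sketch below shows only this plain (unconditional) variant; DM-GNN's conditional design and message-passing details are not reproduced here, and all names are illustrative.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None  # the reversed gradient reaches the encoder

def domain_adversarial_loss(h_src, h_tgt, discriminator, lam=1.0):
    """h_src/h_tgt: node embeddings from the source/target networks."""
    h = GradReverse.apply(torch.cat([h_src, h_tgt]), lam)
    logits = discriminator(h).squeeze(-1)
    domain = torch.cat([torch.ones(len(h_src), device=h.device),
                        torch.zeros(len(h_tgt), device=h.device)])
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, domain)
```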
arXiv Detail & Related papers (2023-08-31T05:26:08Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent features by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Label-Enhanced Graph Neural Network for Semi-supervised Node Classification [32.64730237473914]
We present a label-enhanced learning framework for Graph Neural Networks (GNNs).
It first models each label as a virtual center for intra-class nodes and then jointly learns the representations of both nodes and labels.
Our approach not only smooths the representations of nodes belonging to the same class, but also explicitly encodes the label semantics into the learning process of GNNs.
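One way to read the virtual-center idea, sketched below as a center-style loss rather than the paper's exact graph construction: each label owns a learnable embedding, labeled nodes are pulled toward their label's center, and the centers are trained jointly with the node encoder. The class name and the weighting term are assumptions.

```python
import torch

class LabelCenters(torch.nn.Module):
    """One learnable 'virtual center' per label, trained jointly with the encoder."""
    def __init__(self, num_classes, dim):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(num_classes, dim))

    def center_loss(self, h, y, labeled_mask):
        # pull each labeled node's embedding toward its class center
        diff = h[labeled_mask] - self.centers[y[labeled_mask]]
        return diff.pow(2).sum(dim=1).mean()

# Usage (hypothetical): total = ce_loss + alpha * centers.center_loss(h, y, train_mask)
```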
arXiv Detail & Related papers (2022-05-31T09:48:47Z)
- Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, the Cycle Label-Consistent Network (CLCN), which exploits the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z)
- CLS: Cross Labeling Supervision for Semi-Supervised Learning [9.929229055862491]
Cross Labeling Supervision (CLS) is a framework that generalizes the typical pseudo-labeling process.
CLS allows the creation of both pseudo and complementary labels to support both positive and negative learning.
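A hedged sketch of the cross-labeling step: from the model's class probabilities, keep a confident prediction as the pseudo-label (positive learning) and the least likely class as the complementary label (negative learning, i.e. "this sample is not class k"). The threshold, the argmin choice, and the equal weighting are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def cross_labeling_loss(logits, threshold=0.95):
    with torch.no_grad():                      # labels come from a frozen pass
        probs = logits.softmax(dim=1)
        conf, pseudo = probs.max(dim=1)        # most likely class
        complementary = probs.argmin(dim=1)    # least likely class
        keep = conf >= threshold               # trust only confident samples
    if keep.any():
        pos = F.cross_entropy(logits[keep], pseudo[keep])
    else:
        pos = logits.new_zeros(())
    # negative learning: push down the probability of the complementary class
    live = logits.softmax(dim=1)
    neg = -torch.log(1.0 - live.gather(1, complementary.unsqueeze(1)) + 1e-8).mean()
    return pos + neg
```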
arXiv Detail & Related papers (2022-02-17T08:09:40Z)
- SSSNET: Semi-Supervised Signed Network Clustering [4.895808607591299]
We introduce SSSNET, a GNN framework for semi-supervised signed network clustering, trained with a novel probabilistic balanced normalized cut loss.
The main novelty of the approach is a new take on the role of social balance theory for signed network embeddings.
arXiv Detail & Related papers (2021-10-13T10:36:37Z)
- Joining datasets via data augmentation in the label space for neural networks [6.036150783745836]
We propose a new technique leveraging an artificially created knowledge graph, recurrent neural networks, and policy gradients to join datasets in the label space.
Empirical results on both image and text classification confirm the validity of our approach.
arXiv Detail & Related papers (2021-06-17T06:08:11Z)
- On the Equivalence of Decoupled Graph Convolution Network and Label Propagation [60.34028546202372]
Some work shows that coupling is inferior to decoupling, which supports deep graph propagation better.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
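The decoupled skeleton itself is easy to state: run propagation over the graph as a fixed preprocessing step, then train a plain classifier on the result, so deep propagation never has to be backpropagated through. The sketch below uses APPNP-style personalized-PageRank propagation as a stand-in; PTA's adaptive weighting of propagated labels is deliberately omitted.

```python
import torch

def propagate(x, a_hat, k=10, alpha=0.1):
    """Personalized-PageRank-style propagation: k steps with teleport weight alpha."""
    h = x
    for _ in range(k):
        h = (1.0 - alpha) * (a_hat @ h) + alpha * x
    return h

# Training then reduces to fitting a simple model on pre-propagated inputs,
# e.g. (hypothetical shapes): mlp = torch.nn.Linear(f, num_classes)
# loss = torch.nn.functional.cross_entropy(
#     mlp(propagate(x, a_hat))[train_mask], y[train_mask])
```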
arXiv Detail & Related papers (2020-10-23T13:57:39Z)
- Semi-supervised deep learning based on label propagation in a 2D embedded space [117.9296191012968]
Existing solutions propagate labels from a small set of labeled images to a large set of unlabeled ones to train a deep neural network model.
We present a loop in which a deep neural network (VGG-16) is retrained at each iteration from a set containing progressively more correctly labeled samples.
As the labeled set improves across iterations, so do the features of the neural network.
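The propagation step of such a loop can be approximated with off-the-shelf pieces, assuming t-SNE for the 2D embedding and scikit-learn's LabelSpreading as the propagation rule; the paper's exact embedding and propagation choices may differ.

```python
from sklearn.manifold import TSNE
from sklearn.semi_supervised import LabelSpreading

def propagate_in_2d(deep_features, labels):
    """deep_features: [n, d] activations from the current network (e.g. VGG-16);
    labels: -1 for unlabeled samples, a class id otherwise."""
    emb2d = TSNE(n_components=2).fit_transform(deep_features)  # 2D embedding
    spreader = LabelSpreading(kernel="knn", n_neighbors=20)
    spreader.fit(emb2d, labels)
    return spreader.transduction_  # propagated labels for every sample
```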
arXiv Detail & Related papers (2020-08-02T20:08:54Z)
- Reliable Label Bootstrapping for Semi-Supervised Learning [19.841733658911767]
ReLaB is an unsupervised preprocessing algorithm that improves the performance of semi-supervised algorithms in extremely low supervision settings.
We show that the choice of network architecture and self-supervised algorithm are important factors in achieving successful label propagation.
We reach average error rates of $\boldsymbol{22.34}$ with 1 random labeled sample per class on CIFAR-10 and lower this error to $\boldsymbol{8.46}$ when the labeled sample in each class is highly representative.
arXiv Detail & Related papers (2020-07-23T08:51:37Z)
- ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image Classification [49.87503122462432]
We introduce a novel neural network termed the Relation-and-Margin learning Network (ReMarNet).
Our method assembles two networks with different backbones to learn features that perform well under both of the aforementioned classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.