Heterogeneous Network Representation Learning: A Unified Framework with
Survey and Benchmark
- URL: http://arxiv.org/abs/2004.00216v3
- Date: Thu, 17 Dec 2020 01:44:03 GMT
- Title: Heterogeneous Network Representation Learning: A Unified Framework with
Survey and Benchmark
- Authors: Carl Yang, Yuxin Xiao, Yu Zhang, Yizhou Sun, Jiawei Han
- Abstract summary: We aim to provide a unified framework to summarize and evaluate existing research on heterogeneous network embedding (HNE)
As the first contribution, we provide a generic paradigm for the systematic categorization and analysis over the merits of various existing HNE algorithms.
As the second contribution, we create four benchmark datasets with various properties regarding scale, structure, attribute/label availability, etc., from different sources.
As the third contribution, we create friendly interfaces for 13 popular HNE algorithms, and provide all-around comparisons among them over multiple tasks and experimental settings.
- Score: 57.10850350508929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since real-world objects and their interactions are often multi-modal and
multi-typed, heterogeneous networks have been widely used as a more powerful,
realistic, and generic superclass of traditional homogeneous networks (graphs).
Meanwhile, representation learning (a.k.a. embedding) has recently been
intensively studied and shown effective for various network mining and
analytical tasks. In this work, we aim to provide a unified framework to deeply
summarize and evaluate existing research on heterogeneous network embedding
(HNE), which includes but goes beyond a normal survey. Since there has already
been a broad body of HNE algorithms, as the first contribution of this work, we
provide a generic paradigm for the systematic categorization and analysis over
the merits of various existing HNE algorithms. Moreover, existing HNE
algorithms, though mostly claimed to be generic, are often evaluated on different
datasets. While understandable given the application-driven focus of HNE, such indirect
comparisons largely hinder the proper attribution of improved task performance
to effective data preprocessing versus novel technical design, especially
considering the various ways possible to construct a heterogeneous network from
real-world application data. Therefore, as the second contribution, we create
four benchmark datasets with various properties regarding scale, structure,
attribute/label availability, etc., from different sources, towards handy
and fair evaluations of HNE algorithms. As the third contribution, we carefully
refactor and amend the implementations and create friendly interfaces for 13
popular HNE algorithms, and provide all-around comparisons among them over
multiple tasks and experimental settings.
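To make the data model concrete, the multi-typed networks the abstract describes can be sketched as typed node and edge sets. This is a minimal illustration, not code from the paper's benchmark; the class and method names are hypothetical.

```python
# Hedged sketch: a heterogeneous network as typed nodes plus edges
# grouped by (source type, relation, target type) meta-relations --
# the schema-level view that HNE algorithms operate on.
from collections import defaultdict

class HeteroGraph:
    def __init__(self):
        self.node_types = {}            # node id -> node type
        self.edges = defaultdict(list)  # (src_type, rel, dst_type) -> [(src, dst)]

    def add_node(self, node, ntype):
        self.node_types[node] = ntype

    def add_edge(self, src, rel, dst):
        key = (self.node_types[src], rel, self.node_types[dst])
        self.edges[key].append((src, dst))

    def meta_relations(self):
        """Distinct (src_type, relation, dst_type) triples, i.e. the network schema."""
        return sorted(self.edges)

# Toy bibliographic network: authors write a paper, the paper appears at a venue.
g = HeteroGraph()
for a in ("a1", "a2"):
    g.add_node(a, "author")
g.add_node("p1", "paper")
g.add_node("v1", "venue")
g.add_edge("a1", "writes", "p1")
g.add_edge("a2", "writes", "p1")
g.add_edge("p1", "published_at", "v1")

print(g.meta_relations())
# [('author', 'writes', 'paper'), ('paper', 'published_at', 'venue')]
```

An HNE method would consume such typed structure (often via meta-paths built from these meta-relations) to produce one embedding vector per node.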
Related papers
- Informed deep hierarchical classification: a non-standard analysis inspired approach [0.0]
It consists of a multi-output deep neural network equipped with specific projection operators placed before each output layer.
The design of such an architecture, called lexicographic hybrid deep neural network (LH-DNN), has been possible by combining tools from different and quite distant research fields.
To assess the efficacy of the approach, the resulting network is compared against the B-CNN, a convolutional neural network tailored for hierarchical classification tasks.
arXiv Detail & Related papers (2024-09-25T14:12:50Z)
- Scalable Multi-view Clustering via Explicit Kernel Features Maps [20.610589722626074]
Growing awareness of multi-view learning stems from the increasing prevalence of multiple views in real-world applications.
An efficient optimization strategy is proposed, leveraging kernel feature maps to reduce the computational burden while maintaining good clustering performance.
We conduct extensive experiments on real-world benchmark networks of various sizes in order to evaluate the performance of our algorithm against state-of-the-art multi-view subspace clustering methods and attributed-network multi-view approaches.
arXiv Detail & Related papers (2024-02-07T12:35:31Z)
- Meta Adaptive Task Sampling for Few-Domain Generalization [43.2043988610497]
Few-domain generalization (FDG) aims to learn a generalizable model from very few domains of novel tasks.
We propose a Meta Adaptive Task Sampling (MATS) procedure to differentiate base tasks according to their semantic and domain-shift similarity to the novel task.
arXiv Detail & Related papers (2023-05-25T01:44:09Z)
- Multi-level Contrast Network for Wearables-based Joint Activity Segmentation and Recognition [10.828099015828693]
Human activity recognition (HAR) with wearables is promising research that can be widely adopted in many smart healthcare applications.
Most HAR algorithms are susceptible to the multi-class window problem, which is important yet rarely addressed.
We introduce the segmentation technology into HAR, yielding joint activity segmentation and recognition.
arXiv Detail & Related papers (2022-08-16T05:39:02Z)
- MD-CSDNetwork: Multi-Domain Cross Stitched Network for Deepfake Detection [80.83725644958633]
Current deepfake generation methods leave discriminative artifacts in the frequency spectrum of fake images and videos.
We present a novel approach, termed as MD-CSDNetwork, for combining the features in the spatial and frequency domains to mine a shared discriminative representation.
arXiv Detail & Related papers (2021-09-15T14:11:53Z)
- Redefining Neural Architecture Search of Heterogeneous Multi-Network Models by Characterizing Variation Operators and Model Components [71.03032589756434]
We investigate the effect of different variation operators in a complex domain, that of multi-network heterogeneous neural models.
We characterize both the variation operators, according to their effect on the complexity and performance of the model; and the models, relying on diverse metrics which estimate the quality of the different parts composing it.
arXiv Detail & Related papers (2021-06-16T17:12:26Z)
- Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search [70.57382341642418]
Weight sharing has become a de facto standard in neural architecture search because it enables the search to be done on commodity hardware.
Recent works have empirically shown a ranking disorder between the performance of stand-alone architectures and that of the corresponding shared-weight networks.
We propose a regularization term that aims to maximize the correlation between the performance rankings of the shared-weight network and that of the standalone architectures.
arXiv Detail & Related papers (2021-04-12T09:32:33Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning [85.6386289476598]
We develop a novel adversarial graph representation adaptation (AGRA) framework for cross-domain holistic-local feature co-adaptation.
We conduct extensive and fair evaluations on several popular benchmarks and show that the proposed AGRA framework outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2020-08-03T15:00:31Z)
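The "explicit kernel feature maps" in the multi-view clustering entry above can be illustrated with random Fourier features, a standard construction for approximating an RBF kernel with an explicit finite-dimensional map; this is an assumption for illustration, and the paper's exact construction may differ.

```python
# Hedged sketch: random Fourier features give an explicit map z(x) such that
# z(x) . z(y) approximates exp(-gamma * ||x - y||^2), so linear (scalable)
# methods can stand in for kernel methods.
import numpy as np

def rff_map(X, n_features=100, gamma=1.0, seed=0):
    """Map X of shape (n, d) to features of shape (n, n_features)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Spectral sampling for the RBF kernel: w ~ N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = rff_map(X, n_features=2000, gamma=0.5)
approx = Z @ Z.T  # approximate kernel matrix, built from explicit features
sq_dists = ((X[:, None] - X[None]) ** 2).sum(-1)
exact = np.exp(-0.5 * sq_dists)
print(np.max(np.abs(approx - exact)))  # small approximation error
```

The payoff is complexity: clustering n points with an explicit D-dimensional map costs O(nD) per step instead of the O(n^2) needed to form the full kernel matrix.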
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.