An Adversarial Transfer Network for Knowledge Representation Learning
- URL: http://arxiv.org/abs/2104.14757v1
- Date: Fri, 30 Apr 2021 05:07:25 GMT
- Title: An Adversarial Transfer Network for Knowledge Representation Learning
- Authors: Huijuan Wang, Shuangyin Li, Rong Pan
- Abstract summary: We propose an adversarial embedding transfer network ATransN, which transfers knowledge from one or more teacher knowledge graphs to a target one.
Specifically, we add soft constraints on aligned entity pairs and neighbours to the existing knowledge representation learning methods.
- Score: 11.013390624382257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge representation learning has received a lot of attention in the past
few years. The success of existing methods heavily relies on the quality of
knowledge graphs. The entities with few triplets tend to be learned with less
expressive power. Fortunately, many knowledge graphs constructed from various
sources already exist, and their representations could contain rich information.
We propose an adversarial embedding transfer network ATransN, which transfers
knowledge from one or more teacher knowledge graphs to a target one through an
aligned entity set without explicit data leakage. Specifically, we add soft
constraints on aligned entity pairs and neighbours to the existing knowledge
representation learning methods. To handle the problem of possible distribution
differences between teacher and target knowledge graphs, we introduce an
adversarial adaption module. The discriminator of this module evaluates the
degree of consistency between the embeddings of an aligned entity pair. The
consistency score is then used as the weights of soft constraints. It is not
necessary to acquire the relations and triplets in teacher knowledge graphs
because we only utilize the entity representations. Knowledge graph completion
results show that ATransN achieves better performance against baselines without
transfer on three datasets, CN3l, WK3l, and DWY100k. The ablation study
demonstrates that ATransN can bring steady and consistent improvement in
different settings. The extension of combining other knowledge graph embedding
algorithms and the extension with three teacher graphs display the promising
generalization of the adversarial transfer network.
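The abstract describes a discriminator that scores how consistent the embeddings of each aligned entity pair are, and then uses that score to weight a soft alignment constraint. A minimal NumPy sketch of this weighting idea follows; the single-linear-layer discriminator `W` and the squared-distance form of the constraint are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def consistency_scores(teacher_emb, target_emb, W):
    # Hypothetical discriminator: one linear layer + sigmoid that scores how
    # consistent each aligned (teacher, target) embedding pair is, in (0, 1).
    pair = np.concatenate([teacher_emb, target_emb], axis=1)  # shape (n, 2d)
    logits = pair @ W                                          # shape (n,)
    return 1.0 / (1.0 + np.exp(-logits))

def soft_alignment_loss(teacher_emb, target_emb, scores):
    # Weighted soft constraint: pairs judged consistent (score near 1)
    # pull the target embedding toward the teacher embedding more strongly.
    sq_dist = np.sum((teacher_emb - target_emb) ** 2, axis=1)  # shape (n,)
    return float(np.mean(scores * sq_dist))

rng = np.random.default_rng(0)
n, d = 4, 8
teacher = rng.normal(size=(n, d))   # entity embeddings from a teacher graph
target = rng.normal(size=(n, d))    # embeddings being learned on the target graph
W = rng.normal(size=2 * d)          # illustrative discriminator parameters
s = consistency_scores(teacher, target, W)
loss = soft_alignment_loss(teacher, target, s)
```

In the full method this loss term would be added to the base knowledge-graph embedding objective, with the discriminator trained adversarially; only the teacher's entity representations are needed, not its relations or triplets.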
Related papers
- Inference over Unseen Entities, Relations and Literals on Knowledge Graphs [1.7474352892977463]
Knowledge graph embedding models have been successfully applied in the transductive setting to tackle various challenging tasks.
We propose the attentive byte-pair encoding layer (BytE) to construct a triple embedding from a sequence of byte-pair encoded subword units of entities and relations.
BytE leads to massive feature reuse via weight tying, since it forces a knowledge graph embedding model to learn embeddings for subword units instead of entities and relations directly.
arXiv Detail & Related papers (2024-10-09T10:20:54Z) - Graph Relation Distillation for Efficient Biomedical Instance Segmentation [80.51124447333493]
We propose a graph relation distillation approach for efficient biomedical instance segmentation.
We introduce two graph distillation schemes deployed at both the intra-image level and the inter-image level.
Experimental results on a number of biomedical datasets validate the effectiveness of our approach.
arXiv Detail & Related papers (2024-01-12T04:41:23Z) - Incorporating Domain Knowledge Graph into Multimodal Movie Genre Classification with Self-Supervised Attention and Contrastive Learning [14.729059909487072]
We present a novel framework that exploits the knowledge graph from various perspectives to address the above problems.
We introduce an Attention Teacher module for reliable attention allocation based on self-supervised learning.
Finally, a Genre-Centroid Anchored Contrastive Learning module is proposed to strengthen the discriminative ability of fused features.
arXiv Detail & Related papers (2023-10-12T04:49:11Z) - End-to-End Learning on Multimodal Knowledge Graphs [0.0]
We propose a multimodal message passing network which learns end-to-end from the structure of graphs.
Our model uses dedicated (neural) encoders to naturally learn embeddings for node features belonging to five different types of modalities.
Our results indicate that end-to-end multimodal learning from any arbitrary knowledge graph is indeed possible.
arXiv Detail & Related papers (2023-09-03T13:16:18Z) - KMF: Knowledge-Aware Multi-Faceted Representation Learning for Zero-Shot Node Classification [75.95647590619929]
Zero-Shot Node Classification (ZNC) has been an emerging and crucial task in graph data analysis.
We propose a Knowledge-Aware Multi-Faceted framework (KMF) that enhances the richness of label semantics.
A novel geometric constraint is developed to alleviate the problem of prototype drift caused by node information aggregation.
arXiv Detail & Related papers (2023-08-15T02:38:08Z) - You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z) - Augmenting Knowledge Transfer across Graphs [16.50013525404218]
We present TRANSNET, a generic learning framework for augmenting knowledge transfer across graphs.
In particular, we introduce a novel notion named trinity signal that can naturally formulate various graph signals at different granularity.
We show that TRANSNET outperforms all existing approaches on seven benchmark datasets by a significant margin.
arXiv Detail & Related papers (2022-12-09T08:46:02Z) - Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, calling for automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, resort to graph triples' text and triple-level contextualized representations.
arXiv Detail & Related papers (2020-04-30T13:50:34Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z) - Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to avoid this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.