Multi-view Contrastive Learning for Entity Typing over Knowledge Graphs
- URL: http://arxiv.org/abs/2310.12008v1
- Date: Wed, 18 Oct 2023 14:41:09 GMT
- Title: Multi-view Contrastive Learning for Entity Typing over Knowledge Graphs
- Authors: Zhiwei Hu, Víctor Gutiérrez-Basulto, Zhiliang Xiang, Ru Li, Jeff Z. Pan
- Abstract summary: We propose a novel method called Multi-view Contrastive Learning for knowledge graph Entity Typing (MCLET)
MCLET effectively encodes the coarse-grained knowledge provided by clusters into entity and type embeddings.
- Score: 25.399684403558553
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge graph entity typing (KGET) aims at inferring plausible types of
entities in knowledge graphs. Existing approaches to KGET focus on how to
better encode the knowledge provided by the neighbors and types of an entity
into its representation. However, they ignore the semantic knowledge provided
by the way in which types can be clustered together. In this paper, we propose
a novel method called Multi-view Contrastive Learning for knowledge graph
Entity Typing (MCLET), which effectively encodes the coarse-grained knowledge
provided by clusters into entity and type embeddings. MCLET is composed of
three modules: i) Multi-view Generation and Encoder module, which encodes
structured information from entity-type, entity-cluster and cluster-type views;
ii) Cross-view Contrastive Learning module, which encourages different views to
collaboratively improve view-specific representations of entities and types;
iii) Entity Typing Prediction module, which integrates multi-head attention and
a Mixture-of-Experts strategy to infer missing entity types. Extensive
experiments show the strong performance of MCLET compared to the
state-of-the-art.
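The abstract does not spell out the cross-view contrastive objective, but objectives of this kind are commonly instantiated as an InfoNCE loss in which the two views' embeddings of the same entity form the positive pair and all other entities in the batch act as negatives. The sketch below is a minimal, hypothetical illustration of that idea (the function name `info_nce` and all parameters are assumptions, not MCLET's actual implementation):

```python
import numpy as np

def info_nce(view_a: np.ndarray, view_b: np.ndarray, temperature: float = 0.5) -> float:
    """Cross-view InfoNCE loss: row i of view_a should match row i of view_b.

    view_a, view_b: (n, d) embeddings of the same n entities from two views.
    """
    # L2-normalise each embedding so dot products are cosine similarities.
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal: entity i seen from view A vs. view B.
    return float(-np.mean(np.diag(log_prob)))

# Aligned views yield a lower loss than misaligned (shuffled) views.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
loss_aligned = info_nce(emb, emb)
loss_shuffled = info_nce(emb, np.roll(emb, 1, axis=0))
```

Minimising such a loss over the entity-type, entity-cluster, and cluster-type views would pull each entity's view-specific representations together while pushing apart representations of different entities, which matches the abstract's description of views "collaboratively improving" each other.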
Related papers
- COTET: Cross-view Optimal Transport for Knowledge Graph Entity Typing [27.28214706269035]
Knowledge graph entity typing aims to infer missing entity type instances in knowledge graphs.
Previous research has predominantly centered around leveraging contextual information associated with entities.
This paper introduces Cross-view Optimal Transport for knowledge graph Entity Typing.
arXiv Detail & Related papers (2024-05-22T12:53:12Z) - EIGEN: Expert-Informed Joint Learning Aggregation for High-Fidelity Information Extraction from Document Images [27.36816896426097]
Information Extraction from document images is challenging due to the high variability of layout formats.
We propose a novel approach, EIGEN, which combines rule-based methods with deep learning models using data programming approaches.
We empirically show that our EIGEN framework can significantly improve the performance of state-of-the-art deep models with the availability of very few labeled data instances.
arXiv Detail & Related papers (2023-11-23T13:20:42Z) - Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models [64.24227572048075]
We propose a Knowledge-Aware Prompt Tuning (KAPT) framework for vision-language models.
Our approach takes inspiration from human intelligence in which external knowledge is usually incorporated into recognizing novel categories of objects.
arXiv Detail & Related papers (2023-08-22T04:24:45Z) - Towards Better Entity Linking with Multi-View Enhanced Distillation [30.554387215553238]
This paper proposes a Multi-View Enhanced Distillation (MVD) framework for entity linking.
MVD can effectively transfer knowledge of multiple fine-grained and mention-relevant parts within entities from cross-encoders to dual-encoders.
Experiments show our method achieves state-of-the-art performance on several entity linking benchmarks.
arXiv Detail & Related papers (2023-05-27T05:15:28Z) - Multi-modal Contrastive Representation Learning for Entity Alignment [57.92705405276161]
Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs.
We propose MCLEA, a Multi-modal Contrastive Learning based Entity Alignment model.
In particular, MCLEA first learns individual representations for each modality, and then performs contrastive learning to jointly model intra-modal and inter-modal interactions.
arXiv Detail & Related papers (2022-09-02T08:59:57Z) - VGSE: Visually-Grounded Semantic Embeddings for Zero-Shot Learning [113.50220968583353]
We propose to discover semantic embeddings containing discriminative visual properties for zero-shot learning.
Our model visually divides a set of images from seen classes into clusters of local image regions according to their visual similarity.
We demonstrate that our visually-grounded semantic embeddings further improve performance over word embeddings across various ZSL models by a large margin.
arXiv Detail & Related papers (2022-03-20T03:49:02Z) - Boosting Entity-aware Image Captioning with Multi-modal Knowledge Graph [96.95815946327079]
It is difficult to learn the association between named entities and visual cues due to the long-tail distribution of named entities.
We propose a novel approach that constructs a multi-modal knowledge graph to associate the visual objects with named entities.
arXiv Detail & Related papers (2021-07-26T05:50:41Z) - AutoETER: Automated Entity Type Representation for Knowledge Graph Embedding [40.900070190077024]
We develop a novel Knowledge Graph Embedding (KGE) framework with Automated Entity TypE Representation (AutoETER)
Our approach can model and infer all relation patterns as well as complex relations.
Experiments on four datasets demonstrate the superior performance of our model compared to state-of-the-art baselines on link prediction tasks.
arXiv Detail & Related papers (2020-09-25T04:27:35Z) - Connecting Embeddings for Knowledge Graph Entity Typing [22.617375045752084]
Knowledge graph (KG) entity typing aims at inferring possible missing entity type instances in KGs.
We propose a novel approach for KG entity typing which is trained by jointly utilizing local typing knowledge from existing entity type assertions and global triple knowledge from KGs.
arXiv Detail & Related papers (2020-07-21T15:00:01Z) - Interpretable Entity Representations through Large-Scale Typing [61.4277527871572]
We present an approach to creating entity representations that are human readable and achieve high performance out of the box.
Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types.
We show that it is possible to reduce the size of our type set in a learning-based way for particular domains.
arXiv Detail & Related papers (2020-04-30T23:58:03Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.