Type-enriched Hierarchical Contrastive Strategy for Fine-Grained Entity Typing
- URL: http://arxiv.org/abs/2208.10081v1
- Date: Mon, 22 Aug 2022 06:38:08 GMT
- Title: Type-enriched Hierarchical Contrastive Strategy for Fine-Grained Entity Typing
- Authors: Xinyu Zuo, Haijin Liang, Ning Jing, Shuang Zeng, Zhou Fang and Yu Luo
- Abstract summary: Fine-grained entity typing aims to deduce specific semantic types of the entity mentions in text.
Few works directly model type differences, that is, let models know the extent to which one type differs from others.
Our method directly models the differences between hierarchical types and improves the ability to distinguish multi-grained similar types.
- Score: 8.885149784531807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fine-grained entity typing (FET) aims to deduce specific semantic types of
the entity mentions in text. Modern methods for FET mainly focus on learning
what a certain type looks like; few works directly model type differences,
that is, let models know the extent to which one type differs from others. To
alleviate this problem, we propose a type-enriched hierarchical contrastive
strategy for FET. Our method directly models the differences between
hierarchical types and improves the ability to distinguish multi-grained
similar types. On the one hand, we embed type information into entity contexts
to make it directly perceptible. On the other hand, we design a constrained
contrastive strategy over the hierarchical structure to directly model type
differences, which simultaneously perceives the distinguishability between
types at different granularities. Experimental results on three benchmarks,
BBN, OntoNotes, and FIGER, show that our method achieves strong performance on
FET by effectively modeling type differences.
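The constrained contrastive idea can be illustrated with a toy two-level supervised contrastive loss. This is a minimal sketch, not the authors' implementation: the function name, the level weights, and the two flat label arrays standing in for the type hierarchy are all illustrative assumptions.

```python
import numpy as np

def hierarchical_contrastive_loss(emb, coarse, fine, temp=0.1,
                                  w_coarse=0.5, w_fine=1.0):
    """Toy two-level supervised contrastive loss over mention embeddings.

    emb:    (n, d) L2-normalized mention embeddings
    coarse: (n,) coarse-grained type ids (e.g. /person)
    fine:   (n,) fine-grained type ids (e.g. /person/artist)
    """
    sim = emb @ emb.T / temp                      # scaled pairwise similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    def level_loss(labels):
        pos = labels[:, None] == labels[None, :]  # same-type pairs are positives
        np.fill_diagonal(pos, False)
        counts = pos.sum(axis=1)
        keep = counts > 0                         # anchors with >= 1 positive
        masked = np.where(pos[keep], log_prob[keep], 0.0)
        return -(masked.sum(axis=1) / counts[keep]).mean()

    # The finer level is weighted more heavily: confusable fine-grained
    # siblings need to be pushed apart harder than coarse-grained classes.
    return w_coarse * level_loss(coarse) + w_fine * level_loss(fine)
```

Applying the loss at both granularities is what lets one objective penalize confusions between coarse types and between fine-grained siblings at the same time.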
Related papers
- Seed-Guided Fine-Grained Entity Typing in Science and Engineering Domains [51.02035914828596]
We study the task of seed-guided fine-grained entity typing in science and engineering domains.
We propose SEType which first enriches the weak supervision by finding more entities for each seen type from an unlabeled corpus.
It then matches the enriched entities to unlabeled text to get pseudo-labeled samples and trains a textual entailment model that can make inferences for both seen and unseen types.
arXiv Detail & Related papers (2024-01-23T22:36:03Z)
- Multi-view Contrastive Learning for Entity Typing over Knowledge Graphs [25.399684403558553]
We propose a novel method called Multi-view Contrastive Learning for knowledge graph Entity Typing (MCLET)
MCLET effectively encodes the coarse-grained knowledge provided by clusters into entity and type embeddings.
arXiv Detail & Related papers (2023-10-18T14:41:09Z)
- OntoType: Ontology-Guided and Pre-Trained Language Model Assisted Fine-Grained Entity Typing [25.516304052884397]
Fine-grained entity typing (FET) assigns entities in text with context-sensitive, fine-grained semantic types.
OntoType follows a type ontological structure, from coarse to fine, ensembles multiple PLM prompting results to generate a set of type candidates.
Our experiments on the Ontonotes, FIGER, and NYT datasets demonstrate that our method outperforms the state-of-the-art zero-shot fine-grained entity typing methods.
arXiv Detail & Related papers (2023-05-21T00:32:37Z)
- Prototype-based Embedding Network for Scene Graph Generation [105.97836135784794]
Current Scene Graph Generation (SGG) methods explore contextual information to predict relationships among entity pairs.
Due to the diverse visual appearance of numerous possible subject-object combinations, there is a large intra-class variation within each predicate category.
Prototype-based Embedding Network (PE-Net) models entities/predicates with prototype-aligned compact and distinctive representations.
PL is introduced to help PE-Net efficiently learn such entity-predicate matching, and Prototype Regularization (PR) is devised to relieve ambiguous entity-predicate matching.
arXiv Detail & Related papers (2023-03-13T13:30:59Z)
- Hierarchical Variational Memory for Few-shot Learning Across Domains [120.87679627651153]
We introduce a hierarchical prototype model, where each level of the prototype fetches corresponding information from the hierarchical memory.
The model can flexibly rely on features at different semantic levels when the domain shift demands it.
We conduct thorough ablation studies to demonstrate the effectiveness of each component in our model.
arXiv Detail & Related papers (2021-12-15T15:01:29Z)
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in the prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
- Category Contrast for Unsupervised Domain Adaptation in Visual Tasks [92.9990560760593]
We propose a novel Category Contrast technique (CaCo) that introduces semantic priors on top of instance discrimination for visual UDA tasks.
CaCo is complementary to existing UDA methods and generalizable to other learning setups such as semi-supervised learning, unsupervised model adaptation, etc.
arXiv Detail & Related papers (2021-06-05T12:51:35Z)
- Modeling Fine-Grained Entity Types with Box Embeddings [32.85605894725522]
We study the ability of box embeddings to represent hierarchies of fine-grained entity type labels.
We compare our approach with a strong vector-based typing model, and observe state-of-the-art performance on several entity typing benchmarks.
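As a rough illustration of how axis-aligned boxes can encode a type hierarchy: a subtype's box sits inside its supertype's box, so a conditional score P(A | B) falls out of intersection volumes. This is a minimal sketch with hand-picked 2-D coordinates, not the paper's learned, smoothed boxes; the function names and box values are illustrative.

```python
import numpy as np

def box_volume(lo, hi):
    # Volume of an axis-aligned box; zero if any side is empty.
    return np.prod(np.clip(hi - lo, 0.0, None))

def p_type_given_type(lo_a, hi_a, lo_b, hi_b):
    """P(A | B): the fraction of box B's volume covered by box A,
    a hard (non-smoothed) version of a box-embedding score."""
    inter_lo = np.maximum(lo_a, lo_b)
    inter_hi = np.minimum(hi_a, hi_b)
    return box_volume(inter_lo, inter_hi) / box_volume(lo_b, hi_b)

# /person fully contains /person/artist, so P(person | artist) = 1.0
person_lo, person_hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
artist_lo, artist_hi = np.array([0.2, 0.2]), np.array([0.6, 0.6])
print(p_type_given_type(person_lo, person_hi, artist_lo, artist_hi))  # prints 1.0
```

Containment gives an asymmetric score for free: P(person | artist) is 1 while P(artist | person) is the smaller area ratio, which is exactly the kind of hierarchy-respecting behavior a flat vector dot product cannot express.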
arXiv Detail & Related papers (2021-01-02T00:59:10Z)
- Interpretable Entity Representations through Large-Scale Typing [61.4277527871572]
We present an approach to creating entity representations that are human readable and achieve high performance out of the box.
Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types.
We show that it is possible to reduce the size of our type set in a learning-based way for particular domains.
arXiv Detail & Related papers (2020-04-30T23:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.