Ontology Enrichment for Effective Fine-grained Entity Typing
- URL: http://arxiv.org/abs/2310.07795v1
- Date: Wed, 11 Oct 2023 18:30:37 GMT
- Title: Ontology Enrichment for Effective Fine-grained Entity Typing
- Authors: Siru Ouyang, Jiaxin Huang, Pranav Pillai, Yunyi Zhang, Yu Zhang,
Jiawei Han
- Abstract summary: Fine-grained entity typing (FET) is the task of identifying specific entity types at a fine-grained level for entity mentions based on their contextual information.
Conventional methods for FET require extensive human annotation, which is time-consuming and costly.
We propose OnEFET, which enriches each ontology node with instance and topic information and develops a coarse-to-fine typing algorithm that exploits the enriched information by training an entailment model with contrasting topics and instance-based augmented training samples.
- Score: 45.356694904518626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fine-grained entity typing (FET) is the task of identifying specific entity
types at a fine-grained level for entity mentions based on their contextual
information. Conventional methods for FET require extensive human annotation,
which is time-consuming and costly. Recent studies have been developing weakly
supervised or zero-shot approaches. We study the setting of zero-shot FET where
only an ontology is provided. However, most existing ontology structures lack
rich supporting information and even contain ambiguous relations, making them
ineffective in guiding FET. Recently developed language models, though
promising in various few-shot and zero-shot NLP tasks, may face challenges in
zero-shot FET due to their lack of interaction with task-specific ontology. In
this study, we propose OnEFET, where we (1) enrich each node in the ontology
structure with two types of extra information: instance information for
training sample augmentation and topic information to relate types to contexts,
and (2) develop a coarse-to-fine typing algorithm that exploits the enriched
information by training an entailment model with contrasting topics and
instance-based augmented training samples. Our experiments show that OnEFET
achieves high-quality fine-grained entity typing without human annotation,
outperforming existing zero-shot methods by a large margin and rivaling
supervised methods.
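The abstract stops short of pseudocode, so the following is a minimal sketch of the coarse-to-fine entailment idea it describes, assuming an off-the-shelf NLI checkpoint and a toy two-level ontology. The model name, hypothesis template, and type hierarchy are illustrative stand-ins, not OnEFET's enriched ontology or its trained entailment model.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative off-the-shelf NLI checkpoint; OnEFET trains its own entailment model.
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli.eval()

# Toy two-level ontology (hypothetical, for illustration only).
ONTOLOGY = {
    "person": ["athlete", "politician", "artist"],
    "organization": ["company", "government", "sports team"],
    "location": ["city", "country", "body of water"],
}

def entail_score(context: str, mention: str, etype: str) -> float:
    """P(entailment) for the hypothesis that `mention` has type `etype`."""
    hypothesis = f"In this context, {mention} is a {etype}."
    inputs = tok(context, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return logits.softmax(dim=-1)[0, 2].item()

def coarse_to_fine(context: str, mention: str):
    """First pick the best coarse type, then refine among its children."""
    coarse = max(ONTOLOGY, key=lambda t: entail_score(context, mention, t))
    fine = max(ONTOLOGY[coarse], key=lambda t: entail_score(context, mention, t))
    return coarse, fine

print(coarse_to_fine("Messi scored twice for Barcelona last night.", "Messi"))
```

Restricting the second entailment pass to the winning coarse type's children is what keeps the fine-grained step tractable as the ontology grows.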
Related papers
- COTET: Cross-view Optimal Transport for Knowledge Graph Entity Typing [27.28214706269035]
Knowledge graph entity typing aims to infer missing entity type instances in knowledge graphs.
Previous research has predominantly centered on leveraging contextual information associated with entities.
This paper introduces Cross-view Optimal Transport for knowledge graph Entity Typing.
arXiv Detail & Related papers (2024-05-22T12:53:12Z)
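The COTET summary above names optimal transport but gives none of its mechanics. Purely to illustrate the machinery the title refers to, here is a standard entropy-regularized Sinkhorn iteration aligning two toy embedding "views"; this is generic OT code, not the paper's cross-view construction, and all shapes and data are made up.

```python
import numpy as np

def sinkhorn(cost: np.ndarray, reg: float = 0.1, n_iters: int = 200) -> np.ndarray:
    """Entropy-regularized OT plan between uniform row/column marginals."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / reg)            # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)              # column scaling
        u = a / (K @ v)                # row scaling
    return u[:, None] * K * v[None, :]  # rows ~ entities, cols ~ types

# Toy example: align 3 "entity view" vectors with 2 "type view" vectors.
rng = np.random.default_rng(0)
entities, types_ = rng.normal(size=(3, 4)), rng.normal(size=(2, 4))
cost = 1.0 - (entities @ types_.T) / (
    np.linalg.norm(entities, axis=1, keepdims=True) * np.linalg.norm(types_, axis=1)
)
plan = sinkhorn(cost)
print(plan.round(3), plan.sum())       # transport plan; total mass sums to 1
```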
- Seed-Guided Fine-Grained Entity Typing in Science and Engineering Domains [51.02035914828596]
We study the task of seed-guided fine-grained entity typing in science and engineering domains.
We propose SEType, which first enriches the weak supervision by finding more entities for each seen type from an unlabeled corpus.
It then matches the enriched entities to unlabeled text to get pseudo-labeled samples and trains a textual entailment model that can make inferences for both seen and unseen types.
arXiv Detail & Related papers (2024-01-23T22:36:03Z)
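The enrichment-then-matching step in the SEType summary above can be pictured with a few lines of pseudo-labeling. The seed lists, corpus, and plain string matching below are illustrative simplifications, not the paper's actual enrichment procedure.

```python
# Hypothetical seed entities per seen type (illustrative, not SEType's data).
SEEDS = {
    "algorithm": ["quicksort", "dijkstra's algorithm", "gradient descent"],
    "material": ["graphene", "carbon fiber", "titanium alloy"],
}

corpus = [
    "Quicksort runs in O(n log n) time on average.",
    "Graphene exhibits remarkable tensile strength.",
]

def pseudo_label(sentences):
    """Match seed entities against raw text to mint (sentence, mention, type) samples."""
    samples = []
    for sent in sentences:
        lowered = sent.lower()
        for etype, entities in SEEDS.items():
            for ent in entities:
                if ent in lowered:
                    samples.append({"text": sent, "mention": ent, "type": etype})
    return samples

for s in pseudo_label(corpus):
    print(s)  # pseudo-labeled pairs to train an entailment model on
```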
- Automated Few-shot Classification with Instruction-Finetuned Language Models [76.69064714392165]
We show that AuT-Few outperforms state-of-the-art few-shot learning methods.
We also show that AuT-Few is the best ranking method across datasets on the RAFT few-shot benchmark.
arXiv Detail & Related papers (2023-05-21T21:50:27Z)
- OntoType: Ontology-Guided and Pre-Trained Language Model Assisted Fine-Grained Entity Typing [25.516304052884397]
Fine-grained entity typing (FET) assigns entities in text with context-sensitive, fine-grained semantic types.
OntoType follows the type ontological structure from coarse to fine and ensembles multiple PLM prompting results to generate a set of type candidates.
Our experiments on the OntoNotes, FIGER, and NYT datasets demonstrate that our method outperforms the state-of-the-art zero-shot fine-grained entity typing methods.
arXiv Detail & Related papers (2023-05-21T00:32:37Z)
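A minimal sketch of the prompt-ensembling idea in the OntoType entry above, assuming a vanilla masked LM and hand-written templates; both are stand-ins, and the paper's actual prompt set, ensembling, and ontology alignment differ.

```python
from collections import Counter
from transformers import pipeline

# Illustrative masked LM; OntoType's prompting details are not reproduced here.
fill = pipeline("fill-mask", model="bert-base-cased")

TEMPLATES = [
    "{context} {mention} is a {mask}.",
    "{context} {mention} is a kind of {mask}.",
    "{context} In this sentence, {mention} refers to a {mask}.",
]

def candidate_types(context: str, mention: str, top_k: int = 5) -> Counter:
    """Ensemble top-k fill-mask predictions from several prompts into one tally."""
    votes = Counter()
    for tpl in TEMPLATES:
        prompt = tpl.format(
            context=context, mention=mention, mask=fill.tokenizer.mask_token
        )
        for pred in fill(prompt, top_k=top_k):
            votes[pred["token_str"].strip()] += pred["score"]
    return votes

print(candidate_types("Messi joined Inter Miami in 2023.", "Messi").most_common(3))
```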
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- Generative Entity Typing with Curriculum Learning [18.43562065432877]
We propose a novel generative entity typing (GET) paradigm.
Given a text with an entity mention, multiple types describing the role the entity plays in the text are generated by a pre-trained language model.
Our experiments justify the superiority of our GET model over the state-of-the-art entity typing models.
arXiv Detail & Related papers (2022-10-06T13:32:50Z)
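The GET entry above generates types rather than classifying into a fixed set. A rough approximation with an off-the-shelf instruction-tuned seq2seq model looks like this; the checkpoint and prompt are assumptions, and the paper trains its own generator with curriculum learning rather than prompting a stock PLM.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Illustrative instruction-tuned seq2seq model, not the GET paper's generator.
tok = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

def generate_types(context: str, mention: str, n: int = 3):
    """Generate free-form type phrases instead of picking from a fixed label set."""
    prompt = (
        f"Context: {context}\n"
        f"What type of entity is {mention}? Answer with a short noun phrase."
    )
    ids = tok(prompt, return_tensors="pt").input_ids
    outs = model.generate(ids, num_beams=n, num_return_sequences=n, max_new_tokens=8)
    return [tok.decode(o, skip_special_tokens=True) for o in outs]

print(generate_types("Ada Lovelace wrote the first published algorithm.", "Ada Lovelace"))
```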
- Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference [28.78215056129358]
This work presents LITE, a new approach that formulates entity typing as a natural language inference (NLI) problem.
Experiments show that, with limited training data, LITE obtains state-of-the-art performance on the UFET task.
arXiv Detail & Related papers (2022-02-12T23:56:26Z)
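LITE's typing-as-NLI formulation can be approximated in a few lines with Hugging Face's zero-shot-classification pipeline, which scores candidate labels by entailment under the hood. The checkpoint and hypothesis template are illustrative; LITE trains on UFET data rather than using a stock pipeline.

```python
from transformers import pipeline

# bart-large-mnli is a common NLI checkpoint, standing in for LITE's trained model.
clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

context = "After the concert, Adele thanked the orchestra and the fans."
result = clf(
    context,
    candidate_labels=["musician", "politician", "scientist"],
    hypothesis_template="In this context, Adele is a {}.",  # typing as entailment
)
print(list(zip(result["labels"], [round(s, 3) for s in result["scores"]])))
```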
- Prompt-Learning for Fine-Grained Entity Typing [40.983849729537795]
We investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios.
We propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types.
arXiv Detail & Related papers (2021-08-24T09:39:35Z)
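A bare-bones verbalizer-style version of the prompt-learning setup above: score only a few hand-picked label words at the mask position. The checkpoint, template, and label words are assumptions, and the paper additionally performs distribution-level self-supervised optimization not shown here.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Illustrative masked LM and verbalizer; not the paper's trained setup.
tok = AutoTokenizer.from_pretrained("bert-base-cased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

LABEL_WORDS = {"person": "person", "location": "place", "organization": "company"}

def prompt_type(context: str, mention: str) -> str:
    """Fill a cloze prompt and compare logits only at the verbalizer's label words."""
    prompt = f"{context} {mention} is a {tok.mask_token}."
    inputs = tok(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos]
    scores = {t: logits[tok.convert_tokens_to_ids(w)].item()
              for t, w in LABEL_WORDS.items()}
    return max(scores, key=scores.get)

print(prompt_type("The Seine flows through the heart of the capital.", "The Seine"))
```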
- Topic Adaptation and Prototype Encoding for Few-Shot Visual Storytelling [81.33107307509718]
We propose a topic adaptive storyteller to model the ability of inter-topic generalization.
We also propose a prototype encoding structure to model the ability of intra-topic derivation.
Experimental results show that topic adaptation and the prototype encoding structure mutually benefit the few-shot model.
arXiv Detail & Related papers (2020-08-11T03:55:11Z)
- Interpretable Entity Representations through Large-Scale Typing [61.4277527871572]
We present an approach to creating entity representations that are human readable and achieve high performance out of the box.
Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types.
We show that it is possible to reduce the size of our type set in a learning-based way for particular domains.
arXiv Detail & Related papers (2020-04-30T23:58:03Z)
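The last entry describes embeddings whose coordinates are posterior type probabilities. A minimal sketch, under the assumption that some multi-label typing model has already produced per-type logits; the type set and logits below are fabricated for illustration.

```python
import numpy as np

TYPES = ["person", "athlete", "politician", "location", "city"]  # illustrative type set

def type_embedding(logits: np.ndarray) -> dict:
    """Turn per-type logits from a (hypothetical) typing model into a readable vector."""
    probs = 1.0 / (1.0 + np.exp(-logits))  # independent sigmoids: multi-label posteriors
    return dict(zip(TYPES, probs.round(3)))

# Pretend a typing model scored the mention "Usain Bolt" in some context:
fake_logits = np.array([4.1, 3.7, -2.0, -3.5, -4.0])
print(type_embedding(fake_logits))
# Each coordinate is directly readable: high "person"/"athlete", low "city".
```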