Ultra-fine Entity Typing with Indirect Supervision from Natural Language
Inference
- URL: http://arxiv.org/abs/2202.06167v1
- Date: Sat, 12 Feb 2022 23:56:26 GMT
- Title: Ultra-fine Entity Typing with Indirect Supervision from Natural Language
Inference
- Authors: Bangzheng Li, Wenpeng Yin, Muhao Chen
- Abstract summary: This work presents LITE, a new approach that formulates entity typing as a natural language inference (NLI) problem.
Experiments show that, with limited training data, LITE obtains state-of-the-art performance on the UFET task.
- Score: 28.78215056129358
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of ultra-fine entity typing (UFET) seeks to predict diverse and
free-form words or phrases that describe the appropriate types of entities
mentioned in sentences. A key challenge for this task lies in the large number
of types and the scarcity of annotated data per type. Existing systems
formulate the task as a multi-way classification problem and train directly or
distantly supervised classifiers. This causes two issues: (i) the classifiers
do not capture the type semantics since types are often converted into indices;
(ii) systems developed in this way are limited to predicting within a
pre-defined type set, and often fall short of generalizing to types that are
rarely seen or unseen in training. This work presents LITE, a new approach that
formulates entity typing as a natural language inference (NLI) problem, making
use of (i) the indirect supervision from NLI to infer type information
meaningfully represented as textual hypotheses and alleviate the data scarcity
issue, as well as (ii) a learning-to-rank objective to avoid the pre-defining
of a type set. Experiments show that, with limited training data, LITE obtains
state-of-the-art performance on the UFET task. In addition, LITE demonstrates
strong generalizability: it not only yields the best results on other
fine-grained entity typing benchmarks but, more importantly, a pre-trained LITE
system also works well on new data containing unseen types.
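As a concrete illustration of the formulation, here is a minimal sketch of NLI-based type ranking in the spirit of LITE: the sentence is the premise, each candidate type is verbalized as a textual hypothesis, and an off-the-shelf NLI model scores entailment. The roberta-large-mnli backbone and the "X is a Y" template are illustrative assumptions, not necessarily the paper's exact setup.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "roberta-large-mnli"  # any pretrained NLI model can serve as the backbone
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

# Index of the entailment class; the fallback of 2 matches roberta-large-mnli.
ENTAILMENT = model.config.label2id.get("ENTAILMENT", 2)

def rank_types(sentence: str, mention: str, candidate_types: list[str]):
    """Score each verbalized type hypothesis against the sentence (premise)
    and return the types ranked by entailment probability."""
    premises = [sentence] * len(candidate_types)
    # Hypothesis template: "<mention> is a <type>." (one of many possible
    # verbalizations; the paper's templates may differ)
    hypotheses = [f"{mention} is a {t}." for t in candidate_types]
    batch = tokenizer(premises, hypotheses, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        probs = model(**batch).logits.softmax(dim=-1)[:, ENTAILMENT]
    return sorted(zip(candidate_types, probs.tolist()),
                  key=lambda x: x[1], reverse=True)

print(rank_types("He visited the capital of France last spring.",
                 "the capital of France",
                 ["city", "country", "politician", "location"]))
```
Because the type is expressed in natural language rather than as a class index, the same scoring function applies unchanged to types never seen in training, which is the property the abstract highlights.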
Related papers
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars [66.823588073584]
Large language models (LLMs) have shown impressive capabilities in real-world applications.
The quality of the exemplars included in the prompt greatly impacts performance.
Existing methods fail to adequately account for the impact of exemplar ordering on performance.
arXiv Detail & Related papers (2024-05-25T08:23:05Z)
- A Fixed-Point Approach to Unified Prompt-Based Counting [51.20608895374113]
This paper aims to establish a comprehensive prompt-based counting framework capable of generating density maps for objects indicated by various prompt types, such as box, point, and text.
Our model excels in prominent class-agnostic datasets and exhibits superior performance in cross-dataset adaptation tasks.
arXiv Detail & Related papers (2024-03-15T12:05:44Z)
- Seed-Guided Fine-Grained Entity Typing in Science and Engineering Domains [51.02035914828596]
We study the task of seed-guided fine-grained entity typing in science and engineering domains.
We propose SEType, which first enriches the weak supervision by finding more entities for each seen type from an unlabeled corpus.
It then matches the enriched entities to unlabeled text to get pseudo-labeled samples and trains a textual entailment model that can make inferences for both seen and unseen types.
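To make that pipeline concrete, below is a schematic sketch of the pseudo-labeling step: enriched entities are matched against unlabeled sentences and converted into entailment training pairs. The entity sets, corpus, matching heuristic, and hypothesis template are toy placeholders, not SEType's actual data or procedure.
```python
from dataclasses import dataclass

@dataclass
class EntailmentExample:
    premise: str      # sentence containing the matched entity
    hypothesis: str   # verbalized type statement
    label: int        # 1 = entailed (pseudo-positive), 0 = not entailed

# Step 1 (enrichment) is assumed done: each seen type now carries a larger
# entity set than the original seeds.
enriched = {"protein": {"insulin", "myoglobin"},
            "algorithm": {"quicksort", "merge sort"}}

corpus = [
    "Insulin regulates glucose uptake in muscle cells.",
    "Quicksort partitions the array around a pivot element.",
]

def make_pseudo_labeled(enriched, corpus):
    """Step 2: match enriched entities against unlabeled sentences and emit
    entailment pairs; each match also yields one wrong-type negative."""
    examples = []
    for sentence in corpus:
        lowered = sentence.lower()
        for etype, entities in enriched.items():
            for ent in entities:
                if ent in lowered:
                    examples.append(EntailmentExample(
                        sentence, f"{ent} is a {etype}.", 1))
                    # pair the same sentence with a different type as a negative
                    negative = next(t for t in enriched if t != etype)
                    examples.append(EntailmentExample(
                        sentence, f"{ent} is a {negative}.", 0))
    return examples

for ex in make_pseudo_labeled(enriched, corpus):
    print(ex)
```
The resulting pairs can train any textual entailment model, which is what lets the approach make inferences for unseen types at test time.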
arXiv Detail & Related papers (2024-01-23T22:36:03Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
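The following toy sketch conveys the core idea: surface tokens that are over-represented in misclassified inputs. Premise itself mines sets of patterns with a more principled pattern-mining objective; ranking single tokens by smoothed log-odds, as below, is a deliberate simplification.
```python
import math
from collections import Counter

# Inputs split by whether the classifier predicted them correctly (toy data).
correct = ["the movie was great", "a fine performance overall"]
errors = ["not great not terrible", "not a fine day", "was it not great"]

def token_log_odds(correct, errors, smoothing=1.0):
    """Rank tokens by log-odds of appearing in erroneous vs. correct inputs."""
    c_counts, e_counts = Counter(), Counter()
    for s in correct:
        c_counts.update(s.split())
    for s in errors:
        e_counts.update(s.split())
    vocab = set(c_counts) | set(e_counts)
    c_total, e_total = sum(c_counts.values()), sum(e_counts.values())
    scores = {}
    for tok in vocab:
        p_err = (e_counts[tok] + smoothing) / (e_total + smoothing * len(vocab))
        p_ok = (c_counts[tok] + smoothing) / (c_total + smoothing * len(vocab))
        scores[tok] = math.log(p_err / p_ok)
    return sorted(scores.items(), key=lambda x: x[1], reverse=True)

# Tokens at the top flag inputs the classifier systematically gets wrong;
# here negation ("not") should rank first.
print(token_log_odds(correct, errors)[:3])
```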
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- Ontology Enrichment for Effective Fine-grained Entity Typing [45.356694904518626]
Fine-grained entity typing (FET) is the task of identifying specific entity types at a fine-grained level for entity mentions based on their contextual information.
Conventional methods for FET require extensive human annotation, which is time-consuming and costly.
We develop a coarse-to-fine typing algorithm that exploits the enriched information by training an entailment model with contrasting topics and instance-based augmented training samples.
arXiv Detail & Related papers (2023-10-11T18:30:37Z)
- Mitigating Word Bias in Zero-shot Prompt-based Classifiers [55.60306377044225]
We show that matching class priors correlates strongly with the oracle upper bound performance.
We also demonstrate large consistent performance gains for prompt settings over a range of NLP tasks.
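A small sketch of the prior-matching idea: rescale per-class scores until the classifier's average predicted distribution matches a target prior (uniform here). The fixed-point reweighting below is one simple way to realize this and is a plausible stand-in, not necessarily the paper's exact procedure.
```python
import numpy as np

# Class probabilities a zero-shot prompt classifier assigns to unlabeled
# inputs; note the systematic word bias toward class 0 (toy numbers).
probs = np.array([
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
    [0.55, 0.25, 0.20],
])

target_prior = np.ones(3) / 3  # assume classes are in fact balanced

w = np.ones(3)
for _ in range(200):  # fixed-point iteration on per-class weights
    reweighted = probs * w
    reweighted /= reweighted.sum(axis=1, keepdims=True)
    induced_prior = reweighted.mean(axis=0)
    w *= target_prior / induced_prior

calibrated = probs * w
calibrated /= calibrated.sum(axis=1, keepdims=True)
print(calibrated.mean(axis=0))  # approximately uniform after reweighting
```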
arXiv Detail & Related papers (2023-09-10T10:57:41Z)
- OntoType: Ontology-Guided and Pre-Trained Language Model Assisted Fine-Grained Entity Typing [25.516304052884397]
Fine-grained entity typing (FET) assigns entities in text with context-sensitive, fine-grained semantic types.
OntoType follows a type ontological structure, from coarse to fine, and ensembles multiple PLM prompting results to generate a set of type candidates.
Our experiments on the Ontonotes, FIGER, and NYT datasets demonstrate that our method outperforms the state-of-the-art zero-shot fine-grained entity typing methods.
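A schematic sketch of that coarse-to-fine traversal: starting from the ontology root, only branches whose prompt-ensemble score clears a threshold are refined further. The toy ontology, the cue-based scorer, and the threshold are illustrative stand-ins for the actual PLM prompting ensemble.
```python
ontology = {
    "entity": ["person", "organization", "location"],
    "person": ["politician", "athlete", "artist"],
    "location": ["city", "country"],
}

def prompt_score(context: str, type_name: str) -> float:
    """Stand-in for ensembled PLM prompting; a keyword lookup for the demo."""
    cues = {"location": ("city", "country"), "city": ("city",),
            "person": ("mr.", "ms."), "organization": ("inc.", "corp.")}
    text = context.lower()
    return 0.9 if any(c in text for c in cues.get(type_name, ())) else 0.1

def coarse_to_fine(context: str, root: str = "entity", threshold: float = 0.5):
    """Descend the ontology, refining only the branches that score well."""
    selected, frontier = [], [root]
    while frontier:
        node = frontier.pop()
        for child in ontology.get(node, []):
            if prompt_score(context, child) >= threshold:
                selected.append(child)
                frontier.append(child)  # refine accepted branches only
    return selected

print(coarse_to_fine("The city of Lyon hosted the summit."))
# -> ['location', 'city']: the coarse type is accepted first, then refined
```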
arXiv Detail & Related papers (2023-05-21T00:32:37Z)
- Label-Descriptive Patterns and their Application to Characterizing Classification Errors [31.272875287136426]
State-of-the-art deep learning methods achieve human-like performance on many tasks, but make errors nevertheless.
Characterizing these errors in easily interpretable terms not only gives insight into whether a model is prone to making systematic errors, but also suggests how to act on and improve the model.
In this paper we propose a method that does so for arbitrary classifiers by mining a small set of patterns that together succinctly describe the input data, partitioned according to correctness of prediction.
arXiv Detail & Related papers (2021-10-18T19:42:21Z)
- Prompt-Learning for Fine-Grained Entity Typing [40.983849729537795]
We investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios.
We propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types.
arXiv Detail & Related papers (2021-08-24T09:39:35Z)
- Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model [39.031515304057585]
Recently there has been an effort to extend fine-grained entity typing with a richer, ultra-fine set of types.
We propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM).
Given a mention in a sentence, our approach constructs an input for BERT so that it predicts context-dependent hypernyms of the mention, which can be used as type labels.
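A minimal sketch of that construction with the Hugging Face fill-mask pipeline; the "is a [MASK]" template is one plausible verbalization, not necessarily the paper's exact input format.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

def hypernym_candidates(sentence: str, mention: str, top_k: int = 5):
    """Ask the masked LM for context-dependent hypernyms of the mention,
    which can then serve as weak type labels."""
    prompt = f"{sentence} {mention} is a [MASK]."
    return [(p["token_str"], round(p["score"], 3))
            for p in fill_mask(prompt, top_k=top_k)]

print(hypernym_candidates("Messi scored twice in the final.", "Messi"))
# plausible fillers: "player", "footballer", "striker", ...
```
Because the predictions depend on the surrounding sentence, the same mention can receive different type labels in different contexts, which is the context sensitivity the summary describes.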
arXiv Detail & Related papers (2021-06-08T04:43:28Z)
- Interpretable Entity Representations through Large-Scale Typing [61.4277527871572]
We present an approach to creating entity representations that are human readable and achieve high performance out of the box.
Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types.
We show that it is possible to reduce the size of our type set in a learning-based way for particular domains.
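As a sketch of what such a representation looks like: a vector whose i-th entry is the posterior probability that the entity has the i-th fine-grained type. The typing model below is a random stub standing in for the trained large-scale typing model.
```python
import numpy as np

TYPE_SET = ["person", "athlete", "politician", "organization", "city"]

def typing_logits(mention_in_context: str) -> np.ndarray:
    """Stub for a trained multi-label entity typing model."""
    rng = np.random.default_rng(abs(hash(mention_in_context)) % 2**32)
    return rng.normal(size=len(TYPE_SET))

def entity_embedding(mention_in_context: str) -> np.ndarray:
    # One independent sigmoid per type: each coordinate reads directly as
    # "P(entity has this type)", which is what makes the vector human readable.
    return 1.0 / (1.0 + np.exp(-typing_logits(mention_in_context)))

vec = entity_embedding("Serena Williams won the U.S. Open.")
for t, p in zip(TYPE_SET, vec):
    print(f"{t:14s} {p:.2f}")
```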
arXiv Detail & Related papers (2020-04-30T23:58:03Z)