Automatic Context Pattern Generation for Entity Set Expansion
- URL: http://arxiv.org/abs/2207.08087v2
- Date: Tue, 19 Jul 2022 03:40:45 GMT
- Title: Automatic Context Pattern Generation for Entity Set Expansion
- Authors: Yinghui Li, Shulin Huang, Xinwei Zhang, Qingyu Zhou, Yangning Li,
Ruiyang Liu, Yunbo Cao, Hai-Tao Zheng, Ying Shen
- Abstract summary: We develop a module that automatically generates high-quality context patterns for entities.
We also propose the GAPA framework that leverages the aforementioned GenerAted PAtterns to expand target entities.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entity Set Expansion (ESE) is a valuable task that aims to find entities of
the target semantic class described by given seed entities. Various NLP and IR
downstream applications have benefited from ESE due to its ability to discover
knowledge. Although existing bootstrapping methods have achieved great
progress, most of them still rely on manually pre-defined context patterns. A
non-negligible shortcoming of the pre-defined context patterns is that they
cannot be flexibly generalized to all kinds of semantic classes; we call this
phenomenon "semantic sensitivity". To address this problem, we devise a
context pattern generation module that utilizes autoregressive language models
(e.g., GPT-2) to automatically generate high-quality context patterns for
entities. In addition, we propose GAPA, a novel ESE framework that leverages
the aforementioned GenerAted PAtterns to expand target entities.
Extensive experiments and detailed analyses on three widely used datasets
demonstrate the effectiveness of our method. All the code for our experiments
will be made available for reproducibility.
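To make the pattern-generation step concrete, the following is a minimal sketch of the idea, assuming the public GPT-2 checkpoint from HuggingFace transformers; the prompting and entity-masking details are illustrative assumptions, not the paper's exact procedure.

# Sketch: sample candidate context patterns for a seed entity with GPT-2,
# then mask the entity out so each continuation becomes a reusable pattern.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def generate_context_patterns(entity, num_patterns=5, max_new_tokens=12):
    """Sample short continuations of an entity and replace the entity
    with a slot, yielding patterns such as '<ENTITY> is a city in'."""
    inputs = tokenizer(entity, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        num_return_sequences=num_patterns,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    patterns = []
    for seq in outputs:
        text = tokenizer.decode(seq, skip_special_tokens=True)
        patterns.append(text.replace(entity, "<ENTITY>", 1))
    return patterns

print(generate_context_patterns("London"))

In the paper's framework the generated patterns would additionally be scored and filtered before driving expansion; the sketch above only shows the raw sampling step.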
Related papers
- A Hybrid Approach To Aspect Based Sentiment Analysis Using Transfer Learning [3.30307212568497]
We propose a hybrid approach for Aspect Based Sentiment Analysis using transfer learning.
The approach focuses on generating weakly-supervised annotations by exploiting the strengths of both large language models (LLM) and traditional syntactic dependencies.
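As a rough illustration of the syntactic half of this idea, the sketch below uses spaCy dependency rules to propose (aspect, opinion) pairs; the rules and model choice are our assumptions, and in the paper an LLM would additionally supply weak polarity labels.

# Sketch: dependency-based aspect/opinion proposal for weak ABSA annotation.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def propose_aspects(sentence):
    """Return (aspect, opinion) pairs from two simple dependency rules."""
    doc = nlp(sentence)
    pairs = []
    for token in doc:
        # adjectival modifier: "great battery life" -> (life, great)
        if token.dep_ == "amod" and token.head.pos_ == "NOUN":
            pairs.append((token.head.text, token.text))
        # copular pattern: "the screen is dim" -> (screen, dim)
        if token.dep_ == "acomp":
            for child in token.head.children:
                if child.dep_ == "nsubj":
                    pairs.append((child.text, token.text))
    return pairs

print(propose_aspects("The screen is dim but the battery life is great."))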
arXiv Detail & Related papers (2024-03-25T23:02:33Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
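A minimal sketch of that triplet-to-text step follows; call_llm is a hypothetical placeholder for whatever completion API is available, and the prompt wording is our assumption.

# Sketch: turn a compact KG triplet into a context-rich passage via an LLM.
def triplet_to_prompt(head, relation, tail):
    return (
        f"Triplet: ({head}, {relation}, {tail})\n"
        "Write a short encyclopedic passage describing this fact, "
        "mentioning both entities explicitly."
    )

def contextualize(triplets, call_llm):
    """Return one generated passage per triplet, usable as auxiliary
    training text for a KGC model."""
    return [call_llm(triplet_to_prompt(h, r, t)) for h, r, t in triplets]

# Trivial stand-in for the LLM, just to make the sketch runnable:
fake_llm = lambda prompt: f"[passage for: {prompt.splitlines()[0]}]"
print(contextualize([("Marie Curie", "award", "Nobel Prize")], fake_llm))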
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Exploiting Contextual Target Attributes for Target Sentiment Classification [53.30511968323911]
Existing PTLM-based models for TSC can be categorized into two groups: 1) fine-tuning-based models that adopt PTLM as the context encoder; 2) prompting-based models that transfer the classification task to the text/word generation task.
We present a new perspective of leveraging PTLM for TSC: simultaneously leveraging the merits of both language modeling and explicit target-context interactions via contextual target attributes.
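For intuition, here is a minimal sketch of the prompting paradigm mentioned above, cast as masked-word prediction with a BERT checkpoint; the template and label words are our assumptions, not the paper's attribute-based method.

# Sketch: prompting-based target sentiment classification via fill-mask.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def prompt_tsc(sentence, target, label_words=("good", "bad", "fine")):
    """Score each label word in the [MASK] slot and return the best one."""
    template = f"{sentence} The {target} is [MASK]."
    scores = {r["token_str"]: r["score"]
              for r in fill(template, targets=list(label_words))}
    return max(scores, key=scores.get)

print(prompt_tsc("The waiter ignored us but the pasta was amazing.", "pasta"))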
arXiv Detail & Related papers (2023-12-21T11:45:28Z)
- Dynamic Retrieval-Augmented Generation [4.741884506444161]
We propose a novel approach, Dynamic Retrieval-Augmented Generation (DRAG).
DRAG injects compressed embeddings of the retrieved entities into the generative model.
Our approach achieves several targets: (1) lifting the length limitations of the context window, saving on prompt size; (2) allowing a large expansion in the number of retrieved entities available as context; (3) alleviating the problem of misspelled or unresolvable entity names.
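A rough sketch of the injection idea follows; the 128-dimensional entity vectors and the linear projection are our assumptions, and the paper's compression scheme is not reproduced.

# Sketch: prepend projected entity embeddings to the prompt embeddings of a
# decoder-only LM (requires a transformers version where generate() accepts
# inputs_embeds for decoder-only models).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
proj = torch.nn.Linear(128, lm.config.n_embd)  # entity dim 128 is assumed

def generate_with_entities(prompt, entity_vecs, max_new_tokens=20):
    ids = tok(prompt, return_tensors="pt").input_ids
    tok_embeds = lm.get_input_embeddings()(ids)          # (1, T, d)
    ent_embeds = proj(entity_vecs).unsqueeze(0)          # (1, E, d)
    inputs = torch.cat([ent_embeds, tok_embeds], dim=1)  # inject up front
    out = lm.generate(inputs_embeds=inputs,
                      max_new_tokens=max_new_tokens,
                      pad_token_id=tok.eos_token_id)
    return tok.decode(out[0], skip_special_tokens=True)

print(generate_with_entities("Tell me about", torch.randn(3, 128)))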
arXiv Detail & Related papers (2023-12-14T14:26:57Z)
- How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
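A tiny probe in the spirit of this study, assuming the sentence-transformers library; the model and sentence pair are illustrative, not the paper's benchmark.

# Sketch: sentences sharing vocabulary but differing in syntax should not
# embed near-identically; a very high similarity suggests syntax-blindness.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
a = "The dog that chased the cat barked."
b = "The cat that chased the dog barked."
emb = model.encode([a, b], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]).item())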
arXiv Detail & Related papers (2023-11-14T08:51:00Z)
- Few-Shot Fine-Grained Entity Typing with Automatic Label Interpretation and Instance Generation [36.541309948222306]
We study the problem of few-shot Fine-grained Entity Typing (FET), where only a few annotated entity mentions with contexts are given for each entity type.
We propose a novel framework for few-shot FET consisting of two modules: (1) an entity type label interpretation module automatically learns to relate type labels to the vocabulary by jointly leveraging few-shot instances and the label hierarchy, and (2) a type-based contextualized instance generator produces new instances based on given instances to enlarge the training set for better generalization.
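A minimal sketch of the instance-generation idea in module (2): resample context words with a masked language model to enlarge the training set. The masking scheme and model choice are our assumptions.

# Sketch: create extra typed-mention instances by refilling masked context.
import random
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def augment(sentence, mention, n=3):
    """Return variants of sentence with one non-mention word resampled."""
    words = sentence.split()
    mention_words = set(mention.split())
    slots = [i for i, w in enumerate(words) if w not in mention_words]
    variants = []
    for _ in range(n):
        i = random.choice(slots)
        masked = " ".join(words[:i] + ["[MASK]"] + words[i + 1:])
        variants.append(fill(masked)[0]["sequence"])  # best refill
    return variants

print(augment("Messi joined the club in 2021 .", "Messi"))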
arXiv Detail & Related papers (2022-06-28T04:05:40Z)
- Contrastive Learning with Hard Negative Entities for Entity Set Expansion [29.155036098444008]
Various NLP and IR applications will benefit from ESE due to its ability to discover knowledge.
We devise an entity-level masked language model with contrastive learning to refine the representation of entities.
In addition, we propose ProbExpan, a novel probabilistic ESE framework that utilizes the entity representations obtained by the aforementioned language model to expand entities.
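The contrastive part can be illustrated with a generic InfoNCE-style loss over entity representations; this is a standard formulation, not ProbExpan's exact objective.

# Sketch: pull an entity toward a positive and away from hard negatives.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.07):
    """anchor, positive: (d,); negatives: (n, d); returns a scalar loss."""
    pos = F.cosine_similarity(anchor, positive, dim=0) / temperature
    neg = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1) / temperature
    logits = torch.cat([pos.unsqueeze(0), neg])  # positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

print(info_nce(torch.randn(64), torch.randn(64), torch.randn(8, 64)).item())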
arXiv Detail & Related papers (2022-04-16T12:26:42Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
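A single generic graph-convolution layer sketches how parse edges can mix token features; the paper's encoder is more elaborate, and the shapes here are arbitrary.

# Sketch: one GCN layer over a (semantic) parse graph of token features.
import torch

def gcn_layer(x, edges, weight):
    """x: (n, d) node features; edges: list of (i, j) arcs; weight: (d, d)."""
    n = x.size(0)
    adj = torch.eye(n)  # self-loops keep each token's own features
    for i, j in edges:
        adj[i, j] = adj[j, i] = 1.0
    deg = adj.sum(dim=1, keepdim=True)
    return torch.relu((adj / deg) @ x @ weight)  # mean-aggregate neighbors

x = torch.randn(4, 16)  # 4 tokens with 16-dim features
print(gcn_layer(x, [(0, 1), (1, 2), (2, 3)], torch.randn(16, 16)).shape)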
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performances.
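The simplest context-modeling baseline such studies compare is plain concatenation of previous turns; a sketch (separator and window size are arbitrary choices):

# Sketch: prepend up to k previous utterances to the current one.
def with_context(history, utterance, k=2, sep=" ; "):
    return sep.join(history[-k:] + [utterance])

history = ["show flights from boston", "which ones arrive before noon"]
print(with_context(history, "what about from denver"))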
arXiv Detail & Related papers (2020-02-03T11:28:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.