GPT Struct Me: Probing GPT Models on Narrative Entity Extraction
- URL: http://arxiv.org/abs/2311.14583v1
- Date: Fri, 24 Nov 2023 16:19:04 GMT
- Title: GPT Struct Me: Probing GPT Models on Narrative Entity Extraction
- Authors: Hugo Sousa, Nuno Guimarães, Alípio Jorge, Ricardo Campos
- Abstract summary: We evaluate the capabilities of two state-of-the-art language models -- GPT-3 and GPT-3.5 -- in the extraction of narrative entities.
This study is conducted on the Text2Story Lusa dataset, a collection of 119 Portuguese news articles.
- Score: 2.049592435988883
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The importance of systems that can extract structured information from
textual data becomes increasingly pronounced given the ever-increasing volume
of text produced on a daily basis. Having a system that can effectively extract
such information in an interoperable manner would be an asset for several
domains, be it finance, health, or law. Recent developments in natural
language processing have led to powerful language models that can,
to some degree, mimic human intelligence. Such effectiveness raises a pertinent
question: Can these models be leveraged for the extraction of structured
information? In this work, we address this question by evaluating the
capabilities of two state-of-the-art language models -- GPT-3 and GPT-3.5,
commonly known as ChatGPT -- in the extraction of narrative entities, namely
events, participants, and temporal expressions. This study is conducted on the
Text2Story Lusa dataset, a collection of 119 Portuguese news articles whose
annotation framework includes a set of entity structures along with several
tags and attribute values. We first select the best prompt template through an
ablation study over prompt components that provide varying degrees of
information, conducted on a subset of the dataset's documents. Subsequently, we use the
best templates to evaluate the effectiveness of the models on the remaining
documents. The results obtained indicate that GPT models are competitive with
out-of-the-box baseline systems, presenting an all-in-one alternative for
practitioners with limited resources. By studying the strengths and limitations
of these models in the context of information extraction, we offer insights
that can guide future improvements and avenues to explore in this field.
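To make the extraction setup concrete, the following is a minimal sketch of how such a prompt might be issued, assuming the OpenAI chat API and an illustrative JSON output schema; the paper's actual templates were selected through the ablation study described above and are not reproduced here.

```python
# Minimal sketch of narrative entity extraction with a GPT chat model.
# The prompt wording and JSON schema below are illustrative assumptions,
# not the paper's templates (those were chosen via the ablation study).
import json
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """You annotate Portuguese news articles.
Extract every narrative entity from the text below and answer with JSON
containing three lists: "events", "participants", "time_expressions".
Each item must be a span copied verbatim from the text.

Text:
{text}
"""

def extract_entities(text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # deterministic decoding for reproducible evaluation
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    )
    # A production system would validate the output here; models
    # occasionally return malformed JSON.
    return json.loads(response.choices[0].message.content)

print(extract_entities("O presidente visitou Lisboa na segunda-feira."))
```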
Related papers
- Learning to Extract Structured Entities Using Language Models [52.281701191329]
Recent advances in machine learning have significantly impacted the field of information extraction.
We reformulate the task to be entity-centric, enabling the use of diverse metrics.
We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP (AESOP) metric; a rough sketch of this style of set-overlap scoring is given below.
arXiv Detail & Related papers (2024-02-06T22:15:09Z)
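As a rough illustration of this style of metric (not the AESOP definition from the paper), predicted and gold entity sets can be approximately matched by string similarity and combined into an F1-style score:

```python
# Illustrative sketch of an approximate entity-set overlap score: predicted
# and gold entities are greedily matched by string similarity, then combined
# into an F1-style score. This shows the general idea only, not the AESOP
# metric as defined in the paper.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def approx_set_overlap(predicted: list[str], gold: list[str],
                       threshold: float = 0.8) -> float:
    matched, remaining = 0, list(gold)
    for pred in predicted:
        scores = [(similarity(pred, g), g) for g in remaining]
        if scores:
            best_score, best_gold = max(scores)
            if best_score >= threshold:
                matched += 1
                remaining.remove(best_gold)  # each gold entity matches once
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(approx_set_overlap(["Barack Obama", "White House"],
                         ["Obama", "the White House"]))
```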
- A Comparative Analysis of Conversational Large Language Models in Knowledge-Based Text Generation [5.661396828160973]
We conduct an empirical analysis of conversational large language models in generating natural language text from semantic triples.
We compare four large language models of varying sizes with different prompting techniques.
Our findings show that the capabilities of large language models in triple verbalization can be significantly improved through few-shot prompting, post-processing, and efficient fine-tuning techniques (a minimal few-shot prompt sketch follows below).
arXiv Detail & Related papers (2024-02-02T15:26:39Z)
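A minimal sketch of the few-shot prompting idea for triple verbalization, with illustrative demonstrations rather than the paper's actual ones:

```python
# Minimal sketch of few-shot prompting for triple verbalization: a handful
# of (triple, sentence) demonstrations precede the target triple. The
# demonstrations and formatting are illustrative assumptions.
FEW_SHOT_EXAMPLES = [
    (("Alan_Turing", "birthPlace", "London"),
     "Alan Turing was born in London."),
    (("Python", "influencedBy", "ABC"),
     "Python was influenced by the ABC language."),
]

def build_prompt(triple: tuple[str, str, str]) -> str:
    lines = ["Verbalize each triple as a fluent English sentence.", ""]
    for (s, p, o), sentence in FEW_SHOT_EXAMPLES:
        lines.append(f"Triple: ({s}, {p}, {o})")
        lines.append(f"Sentence: {sentence}")
        lines.append("")
    s, p, o = triple
    lines.append(f"Triple: ({s}, {p}, {o})")
    lines.append("Sentence:")  # the model completes this line
    return "\n".join(lines)

print(build_prompt(("Lisbon", "capitalOf", "Portugal")))
```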
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Accelerated materials language processing enabled by GPT [5.518792725397679]
We develop generative pre-trained transformer (GPT)-enabled pipelines for materials language processing.
First, we develop a GPT-enabled document classification method for screening relevant documents.
Second, for the NER task, we design entity-centric prompts; few-shot learning with these prompts improves performance.
Finally, we develop a GPT-enabled extractive QA model, which provides improved performance and shows the possibility of automatically correcting annotations.
arXiv Detail & Related papers (2023-08-18T07:31:13Z)
- Physics of Language Models: Part 1, Learning Hierarchical Language Structures [51.68385617116854]
Transformer-based language models are effective but complex, and understanding their inner workings is a significant challenge.
We introduce a family of synthetic CFGs with hierarchical rules, capable of generating lengthy sentences.
We demonstrate that generative models like GPT can accurately learn this CFG language and generate sentences from it (a toy CFG sampler is sketched below).
arXiv Detail & Related papers (2023-05-23T04:28:16Z)
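A toy sketch of sampling sentences from a synthetic context-free grammar; the grammar below is an illustrative stand-in for the paper's CFG families, which are designed to produce much deeper hierarchical structure:

```python
# Toy sketch of sampling from a synthetic CFG. Nonterminals expand via
# randomly chosen productions; symbols absent from the grammar are terminals.
import random

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],
    "VP":  [["V", "NP"], ["V", "NP", "PP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["model"], ["sentence"], ["rule"]],
    "V":   [["generates"], ["learns"]],
    "P":   [["with"], ["from"]],
}

def sample(symbol: str = "S") -> list[str]:
    if symbol not in GRAMMAR:  # terminal symbol: emit as-is
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [token for sym in production for token in sample(sym)]

print(" ".join(sample()))
```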
- Document-Level Machine Translation with Large Language Models [91.03359121149595]
Large language models (LLMs) can produce coherent, cohesive, relevant, and fluent answers for various natural language processing (NLP) tasks.
This paper provides an in-depth evaluation of LLMs' ability on discourse modeling.
arXiv Detail & Related papers (2023-04-05T03:49:06Z)
- Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations [35.74225306947918]
We propose a joint latent space learning and clustering framework built upon PLM embeddings.
Our model effectively leverages the strong representation power and rich linguistic features of PLMs for topic discovery (a minimal embed-and-cluster sketch follows below).
arXiv Detail & Related papers (2022-02-09T17:26:08Z)
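A minimal embed-and-cluster sketch of the pipeline idea behind this entry, using sentence-transformers and scikit-learn as assumed stand-ins; the paper's actual method learns the latent space and clusters jointly rather than in two separate steps:

```python
# Minimal sketch of topic discovery by clustering pretrained-LM sentence
# embeddings: embed documents, then run k-means over the vectors.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

docs = [
    "The central bank raised interest rates again.",
    "Inflation figures surprised analysts this quarter.",
    "The striker scored twice in the final match.",
    "The team clinched the championship on Sunday.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works
embeddings = model.encode(docs)                   # one vector per document

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
for doc, label in zip(docs, kmeans.labels_):
    print(label, doc)  # documents sharing a label form one discovered topic
```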
- Pre-training Language Model Incorporating Domain-specific Heterogeneous Knowledge into A Unified Representation [49.89831914386982]
We propose a unified pre-trained language model (PLM) for all forms of text, including unstructured text, semi-structured text, and well-structured text.
Our approach outperforms plain-text pre-training while using only 1/4 of the data.
arXiv Detail & Related papers (2021-09-02T16:05:24Z)
- Combining pre-trained language models and structured knowledge [9.521634184008574]
Transformer-based language models have achieved state-of-the-art performance on various NLP benchmarks.
It has proven challenging to integrate structured information, such as knowledge graphs, into these models.
We examine a variety of approaches to integrating structured knowledge into current language models, and we identify challenges and opportunities in leveraging both structured and unstructured information sources.
arXiv Detail & Related papers (2021-01-28T21:54:03Z)
- ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge can exceed what is needed, since the output description may cover only the most significant facts.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)