Temporal Embeddings and Transformer Models for Narrative Text Understanding
- URL: http://arxiv.org/abs/2003.08811v1
- Date: Thu, 19 Mar 2020 14:23:12 GMT
- Title: Temporal Embeddings and Transformer Models for Narrative Text Understanding
- Authors: Vani K and Simone Mellace and Alessandro Antonucci
- Abstract summary: We present two approaches to narrative text understanding for character relationship modelling.
The temporal evolution of these relations is described by dynamic word embeddings, which are designed to learn semantic changes over time.
A supervised learning approach based on the state-of-the-art transformer model BERT is used instead to detect static relations between characters.
- Score: 72.88083067388155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present two deep learning approaches to narrative text understanding for
character relationship modelling. The temporal evolution of these relations is
described by dynamic word embeddings, which are designed to learn semantic
changes over time. An empirical analysis of the corresponding character
trajectories shows that such approaches are effective in depicting dynamic
evolution. A supervised learning approach based on the state-of-the-art
transformer model BERT is used instead to detect static relations between
characters. The empirical validation shows that such events (e.g., two
characters belonging to the same family) might be spotted with good accuracy,
even when using automatically annotated data. This provides a deeper
understanding of narrative plots based on the identification of key facts.
Standard clustering techniques are finally used for character de-aliasing, a
necessary pre-processing step for both approaches. Overall, deep learning
models appear to be suitable for narrative text understanding, while also
providing a challenging and unexploited benchmark for general natural language
understanding.
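
As an illustration of the first approach, the sketch below shows one common way to realise dynamic word embeddings for character trajectories: train a separate word2vec model on each temporal slice of the narrative, align the slices with an orthogonal Procrustes rotation, and track the cosine similarity between a pair of character vectors over time. The slicing, hyperparameters and character tokens are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: per-slice word2vec models aligned by orthogonal Procrustes,
# then a similarity trajectory for a character pair. Slice boundaries, vector
# size and character tokens are placeholders, not the paper's settings.
import numpy as np
from gensim.models import Word2Vec

def train_slice_models(slices, dim=100):
    """Train one Word2Vec model per temporal slice (a list of tokenised sentences)."""
    return [Word2Vec(sentences=s, vector_size=dim, window=5, min_count=2, epochs=20)
            for s in slices]

def align(base_kv, other_kv):
    """Rotate `other_kv` vectors into the space of `base_kv` (orthogonal Procrustes)."""
    shared = [w for w in base_kv.key_to_index if w in other_kv.key_to_index]
    A = np.stack([other_kv[w] for w in shared])   # vectors to be rotated
    B = np.stack([base_kv[w] for w in shared])    # reference vectors
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt                                    # optimal rotation matrix
    return {w: other_kv[w] @ R for w in other_kv.key_to_index}

def trajectory(models, char_a, char_b):
    """Cosine similarity between two character tokens across aligned time slices."""
    base = models[0].wv
    sims = []
    for m in models:
        vecs = align(base, m.wv)
        a, b = vecs.get(char_a), vecs.get(char_b)
        if a is None or b is None:
            sims.append(float("nan"))             # character absent from this slice
        else:
            sims.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
    return sims
```

Plotting the returned list against the slice index gives the kind of character trajectory the abstract refers to.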
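
For the second approach, a minimal sketch of a BERT relation classifier built with the Hugging Face transformers API is given below. The label set, the mention-marking scheme and the use of plain sequence classification are assumptions made for illustration; the paper does not publish this exact pipeline.

```python
# Minimal sketch of a BERT classifier for static character relations.
# LABELS and the [E1]/[E2] marking scheme are hypothetical, not the paper's.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["no_relation", "same_family"]   # hypothetical label set

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=len(LABELS))

def encode(sentence, char_a, char_b):
    # Mark the two character mentions so the classifier knows which pair to judge.
    marked = sentence.replace(char_a, f"[E1] {char_a} [/E1]") \
                     .replace(char_b, f"[E2] {char_b} [/E2]")
    return tokenizer(marked, truncation=True, return_tensors="pt")

def predict(sentence, char_a, char_b):
    model.eval()
    with torch.no_grad():
        logits = model(**encode(sentence, char_a, char_b)).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Fine-tuning would use the usual cross-entropy loop over (sentence, pair, label)
# examples, e.g. derived from automatically annotated co-occurrence contexts.
```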
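
The de-aliasing step can likewise be sketched with a standard clustering recipe: group the surface forms of character names by string similarity and agglomerative clustering, so that variants such as "Darcy" and "Mr. Darcy" map to one character. The similarity measure and the distance threshold are assumptions, not the paper's exact choices.

```python
# Minimal sketch of character de-aliasing via agglomerative clustering of
# name mentions. The edit-distance similarity and threshold are illustrative.
import numpy as np
from difflib import SequenceMatcher
from sklearn.cluster import AgglomerativeClustering

def dealias(mentions, threshold=0.5):
    """Cluster name mentions; returns a {mention: cluster_id} mapping."""
    n = len(mentions)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            sim = SequenceMatcher(None, mentions[i].lower(), mentions[j].lower()).ratio()
            dist[i, j] = dist[j, i] = 1.0 - sim
    labels = AgglomerativeClustering(n_clusters=None, metric="precomputed",
                                     linkage="average",
                                     distance_threshold=threshold).fit_predict(dist)
    return dict(zip(mentions, labels))

# Example: dealias(["Darcy", "Mr. Darcy", "Fitzwilliam Darcy", "Elizabeth"])
```

Aliases that share little surface form (nicknames, titles) would need an embedding- or coreference-based distance instead; the abstract only states that standard clustering techniques are used.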
Related papers
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- Detecting Statements in Text: A Domain-Agnostic Few-Shot Solution [1.3654846342364308]
State-of-the-art approaches usually involve fine-tuning models on large annotated datasets, which are costly to produce.
We propose and release a qualitative and versatile few-shot learning methodology as a common paradigm for any claim-based textual classification task.
We illustrate this methodology in the context of three tasks: climate change contrarianism detection, topic/stance classification and depression-related symptoms detection.
arXiv Detail & Related papers (2024-05-09T12:03:38Z)
- Generating Coherent Narratives by Learning Dynamic and Discrete Entity States with a Contrastive Framework [68.1678127433077]
We extend the Transformer model to dynamically conduct entity state updates and sentence realization for narrative generation.
Experiments on two narrative datasets show that our model can generate more coherent and diverse narratives than strong baselines.
arXiv Detail & Related papers (2022-08-08T09:02:19Z)
- Unsupervised Mismatch Localization in Cross-Modal Sequential Data [5.932046800902776]
We develop an unsupervised learning algorithm that can infer the relationship between content-mismatched cross-modal data.
We propose a hierarchical Bayesian deep learning model, named mismatch localization variational autoencoder (ML-VAE), that decomposes the generative process of speech into hierarchically structured latent variables.
Our experimental results show that ML-VAE successfully locates the mismatch between text and speech, without the need for human annotations.
arXiv Detail & Related papers (2022-05-05T14:23:27Z)
- Learning Syntactic Dense Embedding with Correlation Graph for Automatic Readability Assessment [17.882688516249058]
We propose to incorporate linguistic features into neural network models by learning syntactic dense embeddings based on linguistic features.
Our proposed methodology can complement a BERT-only model to achieve significantly better performance for automatic readability assessment.
arXiv Detail & Related papers (2021-07-09T07:26:17Z)
- Improving Generation and Evaluation of Visual Stories via Semantic Consistency [72.00815192668193]
Given a series of natural language captions, an agent must generate a sequence of images that correspond to the captions.
Prior work has introduced recurrent generative models which outperform text-to-image synthesis models on this task.
We present a number of improvements to prior modeling approaches, including the addition of a dual learning framework.
arXiv Detail & Related papers (2021-05-20T20:42:42Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
- Syntax-Enhanced Pre-trained Model [49.1659635460369]
We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa.
Existing methods utilize the syntax of text either in the pre-training stage or in the fine-tuning stage, so they suffer from a discrepancy between the two stages.
We present a model that utilizes the syntax of text in both pre-training and fine-tuning stages.
arXiv Detail & Related papers (2020-12-28T06:48:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.