Exploring Contextualized Neural Language Models for Temporal Dependency Parsing
- URL: http://arxiv.org/abs/2004.14577v2
- Date: Sat, 3 Oct 2020 00:25:39 GMT
- Title: Exploring Contextualized Neural Language Models for Temporal Dependency Parsing
- Authors: Hayley Ross, Jonathon Cai, Bonan Min
- Abstract summary: We show that BERT significantly improves temporal dependency parsing.
We also present a detailed analysis on why deep contextualized neural LMs help and where they may fall short.
- Score: 10.17066263304299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting temporal relations between events and time expressions has many
applications such as constructing event timelines and time-related question
answering. It is a challenging problem which requires syntactic and semantic
information at sentence or discourse levels, which may be captured by deep
contextualized language models (LMs) such as BERT (Devlin et al., 2019). In
this paper, we develop several variants of BERT-based temporal dependency
parser, and show that BERT significantly improves temporal dependency parsing
(Zhang and Xue, 2018a). We also present a detailed analysis on why deep
contextualized neural LMs help and where they may fall short. Source code and
resources are made available at https://github.com/bnmin/tdp_ranking.
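The parser described in the abstract is framed as a ranking problem (as the repository name tdp_ranking suggests): for each event or time expression, candidate parent nodes in the temporal dependency tree are scored, and the highest-scoring candidate becomes the parent. A minimal sketch of that ranking step follows, with fixed toy vectors standing in for BERT contextual embeddings; all labels and numbers here are illustrative, not taken from the paper.

```python
# Toy sketch of parent ranking for temporal dependency parsing.
# A real system would score candidates with BERT contextual embeddings;
# here hand-picked vectors stand in so the example is self-contained.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rank_parents(child_vec, candidates):
    """Score each candidate parent and return (label, score) pairs best-first.

    candidates: list of (node_label, embedding) pairs.
    """
    scored = [(label, dot(child_vec, vec)) for label, vec in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# An event should attach to the most semantically related node,
# e.g. a nearby time expression rather than the tree root.
child = [0.9, 0.1, 0.3]                 # embedding of the child event
candidates = [
    ("ROOT",      [0.1, 0.1, 0.1]),
    ("yesterday", [0.8, 0.2, 0.4]),     # nearby time expression
    ("said",      [0.2, 0.9, 0.1]),     # earlier event
]
best_parent, best_score = rank_parents(child, candidates)[0]
print(best_parent)  # -> yesterday
```

Greedily attaching each node to its top-ranked candidate, in document order, yields the temporal dependency tree.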
Related papers
- Prompting-based Synthetic Data Generation for Few-Shot Question Answering [23.97949073816028]
We show that using large language models can improve Question Answering performance on various datasets in the few-shot setting.
We suggest that language models contain valuable task-agnostic knowledge that can be used beyond the common pre-training/fine-tuning scheme.
arXiv Detail & Related papers (2024-05-15T13:36:43Z)
- LITA: Language Instructed Temporal-Localization Assistant [71.68815100776278]
We introduce time tokens that encode timestamps relative to the video length to better represent time in videos.
We also introduce SlowFast tokens in the architecture to capture temporal information at fine temporal resolution.
We show that our emphasis on temporal localization also substantially improves video-based text generation compared to existing Video LLMs.
arXiv Detail & Related papers (2024-03-27T22:50:48Z)
- MRL Parsing Without Tears: The Case of Hebrew [14.104766026682384]
In morphologically rich languages (MRLs), where systems need to identify multiple lexical units in each token, existing systems suffer from high latency and setup complexity.
We present a new "flipped pipeline": decisions are made directly on the whole-token units by expert classifiers, each one dedicated to one specific task.
This blazingly fast approach sets a new SOTA in Hebrew POS tagging and dependency parsing, while also reaching near-SOTA performance on other Hebrew tasks.
arXiv Detail & Related papers (2024-03-11T17:54:33Z)
- Temporal Validity Change Prediction [20.108317515225504]
Existing benchmarking tasks require models to identify the temporal validity duration of a single statement.
In many cases, additional contextual information, such as sentences in a story or posts on a social media profile, can be collected from the available text stream.
We propose Temporal Validity Change Prediction, a natural language processing task benchmarking the capability of machine learning models to detect contextual statements that induce such change.
arXiv Detail & Related papers (2024-01-01T14:58:53Z)
- Once Upon a $\textit{Time}$ in $\textit{Graph}$: Relative-Time Pretraining for Complex Temporal Reasoning [96.03608822291136]
We make use of the underlying nature of time, and suggest creating a graph structure based on the relative placements of events along the time axis.
Inspired by the graph view, we propose RemeMo, which explicitly connects all temporally-scoped facts by modeling the time relations between any two sentences.
Experimental results show that RemeMo outperforms the baseline T5 on multiple temporal question answering datasets.
arXiv Detail & Related papers (2023-10-23T08:49:00Z)
- Jamp: Controlled Japanese Temporal Inference Dataset for Evaluating Generalization Capacity of Language Models [18.874880342410876]
We present Jamp, a Japanese benchmark focused on temporal inference.
Our dataset includes a range of temporal inference patterns, which enables us to conduct fine-grained analysis.
We evaluate the generalization capacities of monolingual/multilingual LMs by splitting our dataset based on tense fragments.
arXiv Detail & Related papers (2023-06-19T07:00:14Z)
- Semantic Parsing for Conversational Question Answering over Knowledge Graphs [63.939700311269156]
We develop a dataset where user questions are annotated with SPARQL parses and system answers correspond to their execution results.
We present two different semantic parsing approaches and highlight the challenges of the task.
Our dataset and models are released at https://github.com/Edinburgh/SPICE.
arXiv Detail & Related papers (2023-01-28T14:45:11Z)
- Context-Dependent Semantic Parsing for Temporal Relation Extraction [2.5807659587068534]
We propose SMARTER, a neural semantic representation, to extract temporal information in text effectively.
In the inference phase, SMARTER generates a temporal relation graph by executing the logical form.
The accurate logical form representations of an event given context ensure the correctness of the extracted relations.
arXiv Detail & Related papers (2021-12-02T00:29:21Z)
- GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction [107.8262586956778]
We introduce graph convolutional networks (GCNs) with universal dependency parses to learn language-agnostic sentence representations.
GCNs struggle to model words that have long-range dependencies or are not directly connected in the dependency tree.
We propose to utilize the self-attention mechanism to learn the dependencies between words with different syntactic distances.
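The idea summarized above is to let attention weights depend on syntactic distance in the dependency tree rather than only on linear word order. A rough illustration of one attention row with a distance-based penalty follows; this is a toy simplification for intuition, not the paper's actual architecture, and the penalty term and vectors are invented for the example.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, syn_dist, penalty=0.5):
    """One attention row: dot-product scores minus a syntactic-distance penalty.

    syn_dist[j] = number of hops between the query word and word j
    in the dependency tree; closer words are penalized less.
    """
    scores = [sum(q * k for q, k in zip(query, key)) - penalty * d
              for key, d in zip(keys, syn_dist)]
    return softmax(scores)

# Three words: the query attends most strongly to itself / its
# dependency-tree neighbors, even if they are far apart linearly.
q = [1.0, 0.0]
keys = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
weights = attend(q, keys, syn_dist=[0, 3, 1])
print(weights)
```

Because word 1 has the same key as word 0 but sits three hops away in the tree, the distance penalty pushes its weight well below word 0's, which is the effect a syntax-aware attention head is meant to learn.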
arXiv Detail & Related papers (2020-10-06T20:30:35Z)
- Temporal Common Sense Acquisition with Minimal Supervision [77.8308414884754]
This work proposes a novel sequence modeling approach that exploits explicit and implicit mentions of temporal common sense.
Our method is shown to give quality predictions of various dimensions of temporal common sense.
It also produces representations of events for relevant tasks such as duration comparison, parent-child relations, event coreference and temporal QA.
arXiv Detail & Related papers (2020-05-08T22:20:16Z)
- Local-Global Video-Text Interactions for Temporal Grounding [77.5114709695216]
This paper addresses the problem of text-to-video temporal grounding, which aims to identify the time interval in a video semantically relevant to a text query.
We tackle this problem using a novel regression-based model that learns to extract a collection of mid-level features for semantic phrases in a text query.
The proposed method effectively predicts the target time interval by exploiting contextual information from local to global.
arXiv Detail & Related papers (2020-04-16T08:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.