Temporal Attention for Language Models
- URL: http://arxiv.org/abs/2202.02093v1
- Date: Fri, 4 Feb 2022 11:55:34 GMT
- Title: Temporal Attention for Language Models
- Authors: Guy D. Rosin and Kira Radinsky
- Abstract summary: We extend the key component of the transformer architecture, i.e., the self-attention mechanism, and propose temporal attention.
Temporal attention can be applied to any transformer model and requires the input texts to be accompanied by their relevant time points.
We leverage these representations for the task of semantic change detection.
Our proposed model achieves state-of-the-art results on all the datasets.
- Score: 24.34396762188068
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pretrained language models based on the transformer architecture have shown
great success in NLP. Textual training data often comes from the web and is
thus tagged with time-specific information, but most language models ignore
this information. They are trained on the textual data alone, limiting their
ability to generalize temporally. In this work, we extend the key component of
the transformer architecture, i.e., the self-attention mechanism, and propose
temporal attention - a time-aware self-attention mechanism. Temporal attention
can be applied to any transformer model and requires the input texts to be
accompanied by their relevant time points. It allows the transformer to
capture this temporal information and create time-specific contextualized word
representations. We leverage these representations for the task of semantic
change detection; we apply our proposed mechanism to BERT and experiment on
three datasets in different languages (English, German, and Latin) that also
vary in time, size, and genre. Our proposed model achieves state-of-the-art
results on all the datasets.
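The abstract describes temporal attention only at a high level. As a rough illustration, below is a minimal PyTorch sketch of one way a time-aware self-attention layer could look, assuming each input text carries a discrete time index (e.g., its year) that is embedded and used to modulate the attention scores. The class and parameter names (TemporalSelfAttention, num_time_points) and the multiplicative gating are illustrative choices, not the paper's exact formulation.

```python
# Hypothetical sketch of a time-aware self-attention layer (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalSelfAttention(nn.Module):
    def __init__(self, hidden_dim: int, num_time_points: int):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.q_proj = nn.Linear(hidden_dim, hidden_dim)
        self.k_proj = nn.Linear(hidden_dim, hidden_dim)
        self.v_proj = nn.Linear(hidden_dim, hidden_dim)
        # One learned embedding per discrete time point (e.g., the document's year).
        self.time_emb = nn.Embedding(num_time_points, hidden_dim)
        self.time_q = nn.Linear(hidden_dim, hidden_dim)
        self.time_k = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x: torch.Tensor, time_ids: torch.Tensor) -> torch.Tensor:
        # x:        (batch, seq_len, hidden_dim) token representations
        # time_ids: (batch,) index of the time point attached to each input text
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / self.hidden_dim ** 0.5

        # Time-aware term: broadcast the text's time embedding over the sequence
        # and let it modulate the standard attention scores multiplicatively.
        t = self.time_emb(time_ids).unsqueeze(1).expand_as(x)      # (batch, seq, dim)
        tq, tk = self.time_q(t), self.time_k(t)
        time_scores = tq @ tk.transpose(-2, -1) / self.hidden_dim ** 0.5

        attn = F.softmax(scores * torch.sigmoid(time_scores), dim=-1)
        return attn @ v                                            # (batch, seq, dim)
```

Time-specific contextualized representations produced this way can then be compared across time points, e.g., by measuring the cosine distance between a word's representations at different years, which is the basic recipe behind the semantic change detection task the paper evaluates on.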
Related papers
- Metadata Matters for Time Series: Informative Forecasting with Transformers [70.38241681764738]
We propose a Metadata-informed Time Series Transformer (MetaTST) for time series forecasting.
To tackle the unstructured nature of metadata, MetaTST formalizes it into natural language using pre-designed templates.
A Transformer encoder is employed to communicate between series and metadata tokens, extending the series representations with metadata information.
arXiv Detail & Related papers (2024-10-04T11:37:55Z) - Towards Effective Time-Aware Language Representation: Exploring Enhanced Temporal Understanding in Language Models [24.784375155633427]
BiTimeBERT 2.0 is a novel language model pre-trained on a temporal news article collection.
Each of its pre-training objectives targets a distinct aspect of temporal information.
Results consistently demonstrate that BiTimeBERT 2.0 outperforms BERT and other existing pre-trained models.
arXiv Detail & Related papers (2024-06-04T00:30:37Z) - Leveraging 2D Information for Long-term Time Series Forecasting with Vanilla Transformers [55.475142494272724]
Time series prediction is crucial for understanding and forecasting complex dynamics in various domains.
We introduce GridTST, a model that combines the benefits of two approaches using innovative multi-directional attention.
The model consistently delivers state-of-the-art performance across various real-world datasets.
arXiv Detail & Related papers (2024-05-22T16:41:21Z) - Time Machine GPT [15.661920010658626]
Large language models (LLMs) are often trained on extensive, temporally indiscriminate text corpora.
This approach is not aligned with the evolving nature of language.
This paper presents a new approach: a series of point-in-time LLMs called Time Machine GPT (TiMaGPT).
arXiv Detail & Related papers (2024-04-29T09:34:25Z) - Temporal Validity Change Prediction [20.108317515225504]
Existing benchmarking tasks require models to identify the temporal validity duration of a single statement.
In many cases, additional contextual information, such as sentences in a story or posts on a social media profile, can be collected from the available text stream.
We propose Temporal Validity Change Prediction, a natural language processing task benchmarking the capability of machine learning models to detect contextual statements that induce a change in a statement's temporal validity.
arXiv Detail & Related papers (2024-01-01T14:58:53Z) - Can Language Models Learn to Listen? [96.01685069483025]
We present a framework for generating appropriate facial responses from a listener in dyadic social interactions based on the speaker's words.
Our approach autoregressively predicts a listener's response: a sequence of facial gestures quantized using a VQ-VAE.
We show that our generated listener motion is fluent and reflective of language semantics through quantitative metrics and a qualitative user study.
arXiv Detail & Related papers (2023-08-21T17:59:02Z) - Detecting Text Formality: A Study of Text Classification Approaches [78.11745751651708]
This work presents, to our knowledge, the first systematic study of formality detection methods based on statistical, neural, and Transformer-based machine learning approaches.
We conducted three types of experiments -- monolingual, multilingual, and cross-lingual.
The study shows that a character-level BiLSTM model outperforms Transformer-based ones on the monolingual and multilingual formality classification tasks.
arXiv Detail & Related papers (2022-04-19T16:23:07Z) - TunBERT: Pretrained Contextualized Text Representation for Tunisian
Dialect [0.0]
We investigate the feasibility of training monolingual Transformer-based language models for under-represented languages.
We show that using noisy web-crawled data instead of structured data is better suited to such a non-standardized language.
Our best performing TunBERT model reaches or improves the state-of-the-art in all three downstream tasks.
arXiv Detail & Related papers (2021-11-25T15:49:50Z) - Time-Stamped Language Model: Teaching Language Models to Understand the
Flow of Events [8.655294504286635]
We propose to formulate procedural text understanding as a question answering problem.
This enables us to reuse language models pre-trained on other QA benchmarks by adapting them to procedural text understanding.
Our model, evaluated on the Propara dataset, improves on the published state-of-the-art results with a 3.1% increase in F1 score.
arXiv Detail & Related papers (2021-04-15T17:50:41Z) - VECO: Variable and Flexible Cross-lingual Pre-training for Language
Understanding and Generation [77.82373082024934]
We plug a cross-attention module into the Transformer encoder to explicitly build the interdependence between languages.
This effectively avoids the model degenerating into predicting masked words conditioned only on the context of their own language.
The proposed cross-lingual model delivers new state-of-the-art results on various cross-lingual understanding tasks of the XTREME benchmark.
arXiv Detail & Related papers (2020-10-30T03:41:38Z) - Multi-channel Transformers for Multi-articulatory Sign Language
Translation [59.38247587308604]
We tackle the multi-articulatory sign language translation task and propose a novel multi-channel transformer architecture.
The proposed architecture allows both inter- and intra-contextual relationships between different sign articulators to be modelled within the transformer network itself.
arXiv Detail & Related papers (2020-09-01T09:10:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.