STAGE: Tool for Automated Extraction of Semantic Time Cues to Enrich
Neural Temporal Ordering Models
- URL: http://arxiv.org/abs/2105.07314v1
- Date: Sat, 15 May 2021 23:34:02 GMT
- Title: STAGE: Tool for Automated Extraction of Semantic Time Cues to Enrich
Neural Temporal Ordering Models
- Authors: Luke Breitfeller, Aakanksha Naik, Carolyn Rose
- Abstract summary: We develop STAGE, a system that can automatically extract time cues and convert them into representations suitable for integration with neural models.
We demonstrate promising results on two event ordering datasets, and highlight important issues in semantic cue representation and integration for future research.
- Score: 4.6150532698347835
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite achieving state-of-the-art accuracy on temporal ordering of events,
neural models showcase significant gaps in performance. Our work seeks to fill
one of these gaps by leveraging an under-explored dimension of textual
semantics: rich semantic information provided by explicit textual time cues. We
develop STAGE, a system that consists of a novel temporal framework and a
parser that can automatically extract time cues and convert them into
representations suitable for integration with neural models. We demonstrate the
utility of extracted cues by integrating them with an event ordering model
using a joint BiLSTM and ILP constraint architecture. We outline the
functionality of the 3-part STAGE processing approach, and show two methods of
integrating its representations with the BiLSTM-ILP model: (i) incorporating
semantic cues as additional features, and (ii) generating new constraints from
semantic cues to be enforced in the ILP. We demonstrate promising results on
two event ordering datasets, and highlight important issues in semantic cue
representation and integration for future research.
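As a rough illustration of integration method (ii) above, the sketch below shows how interval-style time cues might be turned into hard pairwise ordering constraints. This is a minimal, self-contained stand-in, not STAGE's actual parser output or its ILP formulation: the event names, interval values, and the `cue_constraints` helper are all hypothetical, and a real system would feed such constraints into an ILP solver alongside the BiLSTM's pairwise scores.

```python
from itertools import combinations

# Hypothetical extracted cues: each event is mapped to a (start, end)
# time interval recovered from an explicit textual cue. Names and
# values are illustrative only, not STAGE's actual output format.
cues = {
    "e1_meeting": (9, 10),    # "at 9 a.m."
    "e2_lunch":   (12, 13),   # "at noon"
    "e3_report":  (15, 17),   # "that afternoon"
}

def cue_constraints(cues):
    """Turn interval cues into pairwise ordering constraints.

    If one event's interval ends no later than another's begins, the
    first must precede the second. This mimics, in simplified form,
    how semantic cues could constrain the ordering variables that an
    ILP decodes over the neural model's pairwise scores.
    """
    constraints = []
    for a, b in combinations(cues, 2):
        if cues[a][1] <= cues[b][0]:
            constraints.append((a, "BEFORE", b))
        elif cues[b][1] <= cues[a][0]:
            constraints.append((b, "BEFORE", a))
    return constraints

print(cue_constraints(cues))
```

In a full implementation each `(a, "BEFORE", b)` tuple would become a linear constraint fixing the corresponding indicator variable in the ILP, so the decoded event order can never contradict an explicit textual time cue.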
Related papers
- Frame Order Matters: A Temporal Sequence-Aware Model for Few-Shot Action Recognition [14.97527336050901]
We propose a novel Temporal Sequence-Aware Model (TSAM) for few-shot action recognition (FSAR).
It incorporates a sequential perceiver adapter into the pre-training framework, to integrate both the spatial information and the sequential temporal dynamics into the feature embeddings.
Experimental results on five FSAR datasets demonstrate that our method sets a new benchmark, beating the second-best competitors by large margins.
arXiv Detail & Related papers (2024-08-22T15:13:27Z) - Hierarchical Temporal Context Learning for Camera-based Semantic Scene Completion [57.232688209606515]
We present HTCL, a novel Hierarchical Temporal Context Learning paradigm for improving camera-based semantic scene completion.
Our method ranks 1st on the SemanticKITTI benchmark and even surpasses LiDAR-based methods in terms of mIoU.
arXiv Detail & Related papers (2024-07-02T09:11:17Z) - Towards Effective Time-Aware Language Representation: Exploring Enhanced Temporal Understanding in Language Models [24.784375155633427]
BiTimeBERT 2.0 is a novel language model pre-trained on a temporal news article collection.
Each objective targets a unique aspect of temporal information.
Results consistently demonstrate that BiTimeBERT 2.0 outperforms models like BERT and other existing pre-trained models.
arXiv Detail & Related papers (2024-06-04T00:30:37Z) - Temporal and Semantic Evaluation Metrics for Foundation Models in Post-Hoc Analysis of Robotic Sub-tasks [1.8124328823188356]
We present an automated framework to decompose trajectory data into temporally bounded and natural language-based descriptive sub-tasks.
Our framework provides both time-based and language-based descriptions for lower-level sub-tasks that comprise full trajectories.
The metrics measure the temporal alignment and semantic fidelity of language descriptions between two sub-task decompositions.
arXiv Detail & Related papers (2024-03-25T22:39:20Z) - Exploiting Contextual Target Attributes for Target Sentiment
Classification [53.30511968323911]
Existing PTLM-based models for TSC can be categorized into two groups: 1) fine-tuning-based models that adopt PTLM as the context encoder; 2) prompting-based models that transfer the classification task to the text/word generation task.
We present a new perspective of leveraging PTLM for TSC: simultaneously leveraging the merits of both language modeling and explicit target-context interactions via contextual target attributes.
arXiv Detail & Related papers (2023-12-21T11:45:28Z) - FLIP: Fine-grained Alignment between ID-based Models and Pretrained Language Models for CTR Prediction [49.510163437116645]
Click-through rate (CTR) prediction serves as a core function module in personalized online services.
Traditional ID-based models for CTR prediction take as inputs the one-hot encoded ID features of tabular modality.
Pretrained Language Models (PLMs) have given rise to another paradigm, which takes as inputs the sentences of textual modality.
We propose to conduct Fine-grained feature-level ALignment between ID-based Models and Pretrained Language Models (FLIP) for CTR prediction.
arXiv Detail & Related papers (2023-10-30T11:25:03Z) - Semantics Meets Temporal Correspondence: Self-supervised Object-centric Learning in Videos [63.94040814459116]
Self-supervised methods have shown remarkable progress in learning high-level semantics and low-level temporal correspondence.
We propose a novel semantic-aware masked slot attention on top of the fused semantic features and correspondence maps.
We adopt semantic- and instance-level temporal consistency as self-supervision to encourage temporally coherent object-centric representations.
arXiv Detail & Related papers (2023-08-19T09:12:13Z) - TimeTuner: Diagnosing Time Representations for Time-Series Forecasting
with Counterfactual Explanations [3.8357850372472915]
This paper contributes a novel visual analytics framework, namely TimeTuner, to help analysts understand how model behaviors are associated with the localization, stationarity, and correlations of time-series representations.
We show that TimeTuner can help characterize time-series representations and guide the feature engineering processes.
arXiv Detail & Related papers (2023-07-19T11:40:15Z) - Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based
Action Recognition [88.34182299496074]
Action labels are available only on a source dataset, but unavailable on a target dataset during the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
arXiv Detail & Related papers (2022-07-17T07:05:39Z) - Self-Attention Neural Bag-of-Features [103.70855797025689]
We build on the recently introduced 2D-Attention and reformulate the attention learning methodology.
We propose a joint feature-temporal attention mechanism that learns a joint 2D attention mask highlighting relevant information.
arXiv Detail & Related papers (2022-01-26T17:54:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.