Context Matters: Leveraging Contextual Features for Time Series Forecasting
- URL: http://arxiv.org/abs/2410.12672v2
- Date: Thu, 17 Oct 2024 04:46:29 GMT
- Title: Context Matters: Leveraging Contextual Features for Time Series Forecasting
- Authors: Sameep Chattopadhyay, Pulkit Paliwal, Sai Shankar Narasimhan, Shubhankar Agarwal, Sandeep P. Chinchali
- Abstract summary: We introduce ContextFormer, a novel plug-and-play method to surgically integrate multimodal contextual information into existing forecasting models.
ContextFormer effectively distills forecast-specific information from rich multimodal contexts, including categorical, continuous, time-varying, and even textual information.
It outperforms SOTA forecasting models by up to 30% on a range of real-world datasets spanning energy, traffic, environmental, and financial domains.
- Abstract: Time series forecasts are often influenced by exogenous contextual features in addition to their corresponding history. For example, in financial settings, it is hard to accurately predict a stock price without considering public sentiments and policy decisions in the form of news articles, tweets, etc. Though this is common knowledge, the current state-of-the-art (SOTA) forecasting models fail to incorporate such contextual information, owing to its heterogeneity and multimodal nature. To address this, we introduce ContextFormer, a novel plug-and-play method to surgically integrate multimodal contextual information into existing pre-trained forecasting models. ContextFormer effectively distills forecast-specific information from rich multimodal contexts, including categorical, continuous, time-varying, and even textual information, to significantly enhance the performance of existing base forecasters. ContextFormer outperforms SOTA forecasting models by up to 30% on a range of real-world datasets spanning energy, traffic, environmental, and financial domains.
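To make the plug-and-play idea concrete, below is a minimal PyTorch sketch of a contextual adapter that refines a frozen pretrained forecaster's output with cross-attention over multimodal context embeddings. All module names, dimensions, and the residual-correction design are illustrative assumptions, not the paper's released code.
```python
import torch
import torch.nn as nn

class ContextAdapter(nn.Module):
    """Plug-and-play refinement of a frozen base forecaster's output
    using cross-attention over multimodal context embeddings."""

    def __init__(self, d_model=64, n_heads=4, n_categories=10,
                 d_continuous=4, d_text=384):
        super().__init__()
        self.cat_emb = nn.Embedding(n_categories, d_model)   # categorical context
        self.cont_proj = nn.Linear(d_continuous, d_model)    # continuous context
        self.text_proj = nn.Linear(d_text, d_model)          # pre-embedded text
        self.forecast_proj = nn.Linear(1, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, base_forecast, cat_ids, cont_feats, text_emb):
        # base_forecast: (B, H) from the frozen pretrained model
        # cat_ids: (B, 1) int64; cont_feats: (B, 1, d_continuous);
        # text_emb: (B, 1, d_text), e.g. from any sentence encoder
        q = self.forecast_proj(base_forecast.unsqueeze(-1))   # (B, H, d_model)
        ctx = torch.cat([self.cat_emb(cat_ids),
                         self.cont_proj(cont_feats),
                         self.text_proj(text_emb)], dim=1)    # (B, 3, d_model)
        refined, _ = self.cross_attn(q, ctx, ctx)             # attend over context
        # Residual correction: only the adapter is trained; the base
        # forecaster stays frozen (p.requires_grad_(False) on its params).
        return base_forecast + self.head(refined).squeeze(-1)
```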
Related papers
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z)
- Metadata Matters for Time Series: Informative Forecasting with Transformers [70.38241681764738]
We propose a Metadata-informed Time Series Transformer (MetaTST) for time series forecasting.
To tackle the unstructured nature of metadata, MetaTST formalizes it into natural language using pre-designed templates.
A Transformer encoder lets the series and metadata tokens attend to one another, extending the series representations with metadata information.
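A minimal sketch of the template idea, assuming metadata fields are verbalized into a sentence, embedded by some sentence encoder, and joined with series patch tokens in one Transformer encoder. The template wording, dimensions, and module layout are illustrative, not MetaTST's actual code.
```python
import torch
import torch.nn as nn

def verbalize_metadata(meta: dict) -> str:
    # Pre-designed template turning unstructured metadata into natural language.
    return (f"This series records {meta['variable']} at {meta['location']}, "
            f"sampled every {meta['freq']}.")

class MetaSeriesEncoder(nn.Module):
    def __init__(self, d_model=64, patch_len=16, d_text=384):
        super().__init__()
        self.patch_proj = nn.Linear(patch_len, d_model)  # series patch tokens
        self.meta_proj = nn.Linear(d_text, d_model)      # sentence-embedding token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, patches, meta_emb):
        # patches: (B, n_patches, patch_len); meta_emb: (B, 1, d_text)
        tokens = torch.cat([self.meta_proj(meta_emb),
                            self.patch_proj(patches)], dim=1)
        # Series and metadata tokens attend to each other in the encoder,
        # extending the series representation with metadata information.
        return self.encoder(tokens)
```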
arXiv Detail & Related papers (2024-10-04T11:37:55Z)
- Detection of Temporality at Discourse Level on Financial News by Combining Natural Language Processing and Machine Learning [8.504685056067144]
Finance-related news outlets such as Bloomberg News, CNN Business, and Forbes are valuable sources of real data for market screening systems.
We propose a novel system to detect the temporality of finance-related news at discourse level.
We have tested our system on a dataset of finance-related news annotated by researchers with domain knowledge.
arXiv Detail & Related papers (2024-03-30T16:40:10Z)
- Modality-aware Transformer for Financial Time series Forecasting [3.401797102198429]
We introduce a novel multimodal transformer-based model named the Modality-aware Transformer.
Our model exploits both categorical text and numerical time series to forecast the target time series effectively.
Our experiments on financial datasets demonstrate that Modality-aware Transformer outperforms existing methods.
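One common way to realize modality awareness is sketched below: learned per-modality tag embeddings are added to each token stream before a shared encoder, so attention can treat text and numeric tokens differently. This is an illustrative pattern, not the paper's exact architecture.
```python
import torch
import torch.nn as nn

class ModalityAwareEncoder(nn.Module):
    """Tags text and series tokens with learned modality embeddings so a
    shared Transformer can distinguish the two streams."""

    def __init__(self, d_model=64, n_modalities=2):
        super().__init__()
        self.modality_emb = nn.Embedding(n_modalities, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, text_tokens, series_tokens):
        # text_tokens: (B, Lt, d_model); series_tokens: (B, Ls, d_model),
        # both already projected to the shared model dimension
        text = text_tokens + self.modality_emb.weight[0]      # tag as text
        series = series_tokens + self.modality_emb.weight[1]  # tag as numeric
        return self.encoder(torch.cat([text, series], dim=1))
```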
arXiv Detail & Related papers (2023-10-02T14:22:41Z)
- Incorporating Pre-trained Model Prompting in Multimodal Stock Volume Movement Prediction [22.949484374773967]
We propose the Prompt-based MUltimodal Stock volumE prediction model (ProMUSE) to process text and time series modalities.
We use pre-trained language models for better comprehension of financial news.
We also propose a novel cross-modality contrastive alignment, retaining unimodal heads beside the fusion head, to prevent fusion from degrading the unimodal representations.
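A minimal sketch of cross-modality contrastive alignment, assuming a symmetric InfoNCE-style loss over paired text and series representations; the exact loss form and temperature are illustrative assumptions, not necessarily ProMUSE's.
```python
import torch
import torch.nn.functional as F

def contrastive_alignment(text_z, series_z, temperature=0.07):
    # text_z, series_z: (B, d) representations of paired news/series samples
    text_z = F.normalize(text_z, dim=-1)
    series_z = F.normalize(series_z, dim=-1)
    logits = text_z @ series_z.t() / temperature          # (B, B) similarities
    labels = torch.arange(text_z.size(0), device=text_z.device)
    # Symmetric InfoNCE: the i-th text should match the i-th series and
    # vice versa; unimodal heads would be trained alongside this loss.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```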
arXiv Detail & Related papers (2023-09-11T16:47:01Z)
- Information Screening whilst Exploiting! Multimodal Relation Extraction with Feature Denoising and Multimodal Topic Modeling [96.75821232222201]
Existing research on multimodal relation extraction (MRE) faces two co-existing challenges: internal-information over-utilization and external-information under-exploitation.
We propose a novel framework that simultaneously implements internal-information screening and external-information exploitation.
arXiv Detail & Related papers (2023-05-19T14:56:57Z)
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- CAMul: Calibrated and Accurate Multi-view Time-Series Forecasting [70.54920804222031]
We propose CAMul, a general probabilistic multi-view forecasting framework.
It can learn representations and uncertainty from diverse data sources.
It integrates the knowledge and uncertainty from each data view in a dynamic context-specific manner.
We show that CAMul outperforms other state-of-the-art probabilistic forecasting models by over 25% in accuracy and calibration.
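To illustrate the multi-view idea, here is a simplified fusion rule in which each data view contributes a Gaussian forecast and low-uncertainty views receive more weight. CAMul learns a dynamic, context-specific fusion; the fixed inverse-variance weighting below is only a stand-in.
```python
import torch

def fuse_views(means, variances):
    # means, variances: (n_views, B, H) per-view Gaussian forecast parameters
    precision = 1.0 / variances                      # confident views weigh more
    weights = precision / precision.sum(dim=0, keepdim=True)
    fused_mean = (weights * means).sum(dim=0)        # (B, H)
    fused_var = 1.0 / precision.sum(dim=0)           # product-of-Gaussians variance
    return fused_mean, fused_var
```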
arXiv Detail & Related papers (2021-09-15T17:13:47Z)
- Exploring Context Generalizability in Citywide Crowd Mobility Prediction: An Analytic Framework and Benchmark [4.367050939292982]
We present a unified analytic framework and a large-scale benchmark for evaluating context generalizability.
We conduct experiments in several crowd mobility prediction tasks such as bike flow, metro passenger flow, and electric vehicle charging demand.
We find that using more contextual features does not always yield better predictions with existing context modeling techniques.
Among the context modeling techniques studied, incorporating raw contextual features into the deep prediction model through a gated unit generalizes well.
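A minimal sketch of that gated-unit pattern: a sigmoid gate decides, per dimension, how much raw context enters the prediction model's hidden state. Dimensions and layer layout here are illustrative assumptions.
```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    def __init__(self, d_hidden=64, d_context=16):
        super().__init__()
        self.ctx_proj = nn.Linear(d_context, d_hidden)
        self.gate = nn.Linear(d_hidden + d_context, d_hidden)

    def forward(self, h, ctx):
        # h: (B, d_hidden) hidden state of the deep prediction model
        # ctx: (B, d_context) raw contextual features (weather, events, ...)
        g = torch.sigmoid(self.gate(torch.cat([h, ctx], dim=-1)))
        # The gate can shut off uninformative context, which is one reason
        # this pattern generalizes across tasks and cities.
        return h + g * self.ctx_proj(ctx)
```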
arXiv Detail & Related papers (2021-06-30T13:19:41Z)
- Predicting Temporal Sets with Deep Neural Networks [50.53727580527024]
We propose an integrated deep-neural-network solution for temporal sets prediction.
A unique perspective is to learn element relationships by constructing a set-level co-occurrence graph.
We design an attention-based module to adaptively learn the temporal dependencies of elements and sets.
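A simplified sketch of those two ingredients: a set-level co-occurrence graph over elements and attention across a sequence of past sets. Both are illustrative stand-ins for the paper's actual modules.
```python
import torch
import torch.nn as nn

def cooccurrence_adjacency(sets, n_elements):
    # sets: list of element-index lists (one list per historical set)
    adj = torch.zeros(n_elements, n_elements)
    for s in sets:
        for i in s:
            for j in s:
                if i != j:
                    adj[i, j] += 1.0            # elements that appear together
    return adj / adj.sum(dim=-1, keepdim=True).clamp(min=1.0)  # row-normalize

class TemporalSetAttention(nn.Module):
    def __init__(self, d_model=32, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, set_embs):
        # set_embs: (B, T, d_model), one embedding per historical set, in order
        out, weights = self.attn(set_embs, set_embs, set_embs)
        return out, weights  # weights show which past sets drive the prediction
```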
arXiv Detail & Related papers (2020-06-20T03:29:02Z)