Leveraging Language Foundation Models for Human Mobility Forecasting
- URL: http://arxiv.org/abs/2209.05479v2
- Date: Wed, 14 Sep 2022 20:51:03 GMT
- Title: Leveraging Language Foundation Models for Human Mobility Forecasting
- Authors: Hao Xue, Bhanu Prakash Voutharoja, Flora D. Salim
- Abstract summary: We propose a novel pipeline that leverages language foundation models for temporal sequential pattern mining.
We perform the forecasting task directly on the natural language input that includes all kinds of information.
Specific prompts are introduced to transform numerical temporal sequences into sentences so that existing language models can be directly applied.
- Score: 8.422257363944295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel pipeline that leverages language foundation
models for temporal sequential pattern mining, such as for human mobility
forecasting tasks. For example, in the task of predicting Place-of-Interest
(POI) customer flows, the number of visits is typically extracted from
historical logs, and only these numerical values are used to predict visitor flows.
In this research, we perform the forecasting task directly on the natural
language input that includes all kinds of information such as numerical values
and contextual semantic information. Specific prompts are introduced to
transform numerical temporal sequences into sentences so that existing language
models can be directly applied. We design an AuxMobLCast pipeline for
predicting the number of visitors in each POI, integrating an auxiliary POI
category classification task with the encoder-decoder architecture. This
research provides empirical evidence of the effectiveness of the proposed
AuxMobLCast pipeline to discover sequential patterns in mobility forecasting
tasks. The results, evaluated on three real-world datasets, demonstrate that
pre-trained language foundation models also perform well when forecasting
temporal sequences. This study offers new insights and may open new research
directions for predicting human mobility.
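As a concrete sketch of the prompting idea, the snippet below turns a week of numerical POI visit counts into a natural-language question. The template wording and function name are illustrative assumptions, not the paper's exact prompts.

```python
# Illustrative sketch of converting a numerical visit sequence into a prompt.
# The template wording is an assumption; the paper's exact prompts may differ.
def mobility_prompt(poi_name, daily_visits, target_day):
    history = ", ".join(str(v) for v in daily_visits)
    return (
        f"From Monday to Sunday, {poi_name} recorded {history} visits "
        f"each day. How many visitors will come on {target_day}?"
    )

prompt = mobility_prompt("the Central Library", [12, 18, 25, 30, 41, 67, 58], "next Monday")
print(prompt)
# A pre-trained language model is then asked to answer this sentence,
# and the forecast is parsed back out of the generated text.
```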
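The auxiliary-task design can be pictured as an encoder-decoder with an extra classification head on the encoder output. The toy model below is a minimal sketch of that structure, assuming a combined loss of the form L = L_gen + λ·L_cat; it is not the authors' implementation, which builds on pre-trained language models.

```python
import torch
import torch.nn as nn

class AuxEncoderDecoder(nn.Module):
    """Toy encoder-decoder with an auxiliary category head (illustrative)."""

    def __init__(self, vocab_size=1000, d_model=128, n_categories=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)     # generates the answer sentence
        self.cat_head = nn.Linear(d_model, n_categories)  # auxiliary POI-category logits

    def forward(self, src_ids, tgt_ids):
        memory = self.encoder(self.embed(src_ids))
        dec = self.decoder(self.embed(tgt_ids), memory)
        return self.lm_head(dec), self.cat_head(memory.mean(dim=1))

model = AuxEncoderDecoder()
src = torch.randint(0, 1000, (2, 16))  # tokenised prompt sentences
tgt = torch.randint(0, 1000, (2, 8))   # tokenised target sentences
token_logits, cat_logits = model(src, tgt)
# Training would combine both objectives, e.g.
# loss = ce(token_logits.transpose(1, 2), tgt) + 0.5 * ce(cat_logits, categories)
```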
Related papers
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z)
- Prompt Mining for Language-based Human Mobility Forecasting [10.325794804095889]
We propose a novel framework for prompt mining in language-based mobility forecasting.
The framework includes a prompt generation stage based on the information entropy of prompts and a prompt refinement stage that integrates mechanisms such as chain-of-thought prompting.
arXiv Detail & Related papers (2024-03-06T08:43:30Z)
- Context-aware multi-head self-attentional neural network model for next location prediction [19.640761373993417]
We utilize a multi-head self-attentional (MHSA) neural network that learns location patterns from historical location visits.
We demonstrate that the proposed model outperforms other state-of-the-art prediction models.
We believe that the proposed model is vital for context-aware mobility prediction.
arXiv Detail & Related papers (2022-12-04T23:40:14Z)
- PromptCast: A New Prompt-based Learning Paradigm for Time Series Forecasting [11.670324826998968]
In existing time series forecasting methods, the models take a sequence of numerical values as input and yield numerical values as output.
Inspired by the successes of pre-trained language foundation models, we propose a new forecasting paradigm: prompt-based time series forecasting.
In this novel task, the numerical input and output are transformed into prompts and the forecasting task is framed in a sentence-to-sentence manner.
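A minimal sketch of this sentence-to-sentence framing appears below, with an assumed template and a regex to parse the forecast back out of the generated answer; both are illustrative choices, not PromptCast's exact design.

```python
import re

def encode_series(values):
    # Assumed template: render the numeric history as an input question.
    history = ", ".join(str(v) for v in values)
    return (f"The values on the last {len(values)} days were {history}. "
            f"What will the value be on the next day?")

def decode_answer(generated_text):
    # Recover the numeric forecast from the model's output sentence.
    match = re.search(r"-?\d+(?:\.\d+)?", generated_text)
    return float(match.group()) if match else None

print(encode_series([5, 7, 9, 12]))
print(decode_answer("The value on the next day will be 14."))  # -> 14.0
```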
arXiv Detail & Related papers (2022-09-20T10:15:35Z)
- A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis [90.24921443175514]
We focus on aspect-based sentiment analysis, which involves extracting aspect terms and categories and predicting their corresponding polarities.
We propose to reformulate the extraction and prediction tasks into the sequence generation task, using a generative language model with unidirectional attention.
Our approach outperforms the previous state-of-the-art (based on BERT) on average performance by a large margin in both few-shot and full-shot settings.
arXiv Detail & Related papers (2022-04-11T18:31:53Z)
- Translating Human Mobility Forecasting through Natural Language Generation [8.727495039722147]
The paper addresses the human mobility forecasting problem as a language translation task in a sequence-to-sequence manner.
Within this pipeline, a two-branch network, SHIFT, is designed: one main branch for language generation and one auxiliary branch that directly learns mobility patterns.
arXiv Detail & Related papers (2021-12-13T09:56:27Z)
- Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x speed-up in inference speed while retaining comparable performance.
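The underlying kNN-LM idea, which this paper accelerates, interpolates the base model's next-token distribution with a distribution built from retrieved datastore neighbors. The NumPy sketch below shows only that interpolation step, with assumed inputs (retrieval itself is omitted):

```python
import numpy as np

def knn_lm_interpolate(p_lm, neighbor_tokens, neighbor_dists, lam=0.25):
    """Blend the LM's next-token distribution with retrieved neighbors.

    p_lm: base model distribution over the vocabulary (assumed given).
    neighbor_tokens / neighbor_dists: next tokens and distances of the
    k nearest datastore entries (how they are retrieved is not shown).
    """
    weights = np.exp(-np.asarray(neighbor_dists, dtype=float))
    weights /= weights.sum()
    p_knn = np.zeros_like(p_lm)
    for tok, w in zip(neighbor_tokens, weights):
        p_knn[tok] += w
    return lam * p_knn + (1.0 - lam) * p_lm

p_lm = np.full(8, 1 / 8)  # toy uniform base distribution
print(knn_lm_interpolate(p_lm, [3, 3, 5], [0.1, 0.4, 0.9]))
```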
arXiv Detail & Related papers (2021-09-09T12:32:28Z)
- Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing [78.8500633981247]
This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning".
Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly.
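To make the contrast concrete: instead of a classifier head producing P(y|x), a prompt reformulates the task as text whose completion the language model already scores. A minimal sketch using the Hugging Face transformers fill-mask pipeline (assumes transformers is installed and can download a BERT checkpoint):

```python
from transformers import pipeline

# Sentiment as a cloze prompt: the masked LM's completions act as labels.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
for cand in unmasker("I loved this movie, it was absolutely [MASK]."):
    print(cand["token_str"], round(cand["score"], 3))
```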
arXiv Detail & Related papers (2021-07-28T18:09:46Z)
- Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
arXiv Detail & Related papers (2020-10-24T11:55:28Z)
- Parameter Space Factorization for Zero-Shot Learning across Tasks and Languages [112.65994041398481]
We propose a Bayesian generative model for the space of neural parameters.
We infer the posteriors over such latent variables based on data from seen task-language combinations.
Our model yields comparable or better results than state-of-the-art zero-shot cross-lingual transfer methods.
arXiv Detail & Related papers (2020-01-30T16:58:56Z)