Prompt Mining for Language-based Human Mobility Forecasting
- URL: http://arxiv.org/abs/2403.03544v1
- Date: Wed, 6 Mar 2024 08:43:30 GMT
- Title: Prompt Mining for Language-based Human Mobility Forecasting
- Authors: Hao Xue, Tianye Tang, Ali Payani, Flora D. Salim
- Abstract summary: We propose a novel framework for prompt mining in language-based mobility forecasting.
The framework includes a prompt generation stage based on the information entropy of prompts and a prompt refinement stage that integrates mechanisms such as chain-of-thought prompting.
- Score: 10.325794804095889
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advancement of large language models, language-based forecasting has
recently emerged as an innovative approach for predicting human mobility
patterns. The core idea is to use prompts to transform the raw mobility data
given as numerical values into natural language sentences so that the language
models can be leveraged to generate descriptions of future observations.
However, previous studies have only employed fixed and manually designed
templates to transform numerical values into sentences. Since the forecasting
performance of language models heavily relies on prompts, using fixed templates
for prompting may limit the forecasting capability of language models. In this
paper, we propose a novel framework for prompt mining in language-based
mobility forecasting, aiming to explore diverse prompt design strategies.
Specifically, the framework includes a prompt generation stage based on the
information entropy of prompts and a prompt refinement stage to integrate
mechanisms such as the chain of thought. Experimental results on real-world
large-scale data demonstrate the superiority of generated prompts from our
prompt mining pipeline. Additionally, the comparison of different prompt
variants shows that the proposed prompt refinement process is effective. Our
study presents a promising direction for further advancing language-based
mobility forecasting.
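To make the core idea concrete, the sketch below shows how a raw numeric mobility record could be serialized into a natural-language prompt and how candidate templates might be ranked by a simple information-entropy score. This is a minimal illustration under stated assumptions: the template wording, function names, and word-level Shannon entropy are hypothetical stand-ins, not the paper's actual prompt mining pipeline.

```python
import math
from collections import Counter

def mobility_to_prompt(place: str, counts: list[int], template: str) -> str:
    """Fill a prompt template with a place name and a numeric visit history."""
    history = ", ".join(str(c) for c in counts)
    return template.format(place=place, history=history, horizon=1)

def word_entropy(text: str) -> float:
    """Shannon entropy (bits) over the word distribution of a prompt."""
    words = text.lower().split()
    freq = Counter(words)
    total = len(words)
    return -sum((n / total) * math.log2(n / total) for n in freq.values())

# Candidate templates a prompt-mining stage might generate (illustrative only).
templates = [
    "There were {history} visits to {place} over the past seven days. "
    "How many visits will there be on the next day?",
    "{place} visit counts: {history}. Predict the next {horizon} value.",
]

counts = [374, 402, 388, 415, 430, 398, 441]
for t in templates:
    prompt = mobility_to_prompt("POI #103", counts, t)
    print(f"entropy={word_entropy(prompt):.2f} | {prompt}")
```

A refinement stage in this spirit could then append a chain-of-thought cue (e.g., "Let's think step by step") to the best-scoring template before querying the language model; again, this is an illustrative assumption rather than the paper's exact mechanism.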
Related papers
- Language Model Pre-Training with Sparse Latent Typing [66.75786739499604]
We propose a new pre-training objective, Sparse Latent Typing, which enables the model to sparsely extract sentence-level keywords with diverse latent types.
Experimental results show that our model is able to learn interpretable latent type categories in a self-supervised manner without using any external knowledge.
arXiv Detail & Related papers (2022-10-23T00:37:08Z) - PromptCast: A New Prompt-based Learning Paradigm for Time Series
Forecasting [11.670324826998968]
In existing time series forecasting methods, the models take a sequence of numerical values as input and yield numerical values as output.
Inspired by the successes of pre-trained language foundation models, we propose a new forecasting paradigm: prompt-based time series forecasting.
In this novel task, the numerical input and output are transformed into prompts and the forecasting task is framed in a sentence-to-sentence manner.
arXiv Detail & Related papers (2022-09-20T10:15:35Z) - Leveraging Language Foundation Models for Human Mobility Forecasting [8.422257363944295]
We propose a novel pipeline that leverages language foundation models for temporal sequential pattern mining.
We perform the forecasting task directly on the natural language input that includes all kinds of information.
Specific prompts are introduced to transform numerical temporal sequences into sentences so that existing language models can be directly applied.
arXiv Detail & Related papers (2022-09-11T01:15:16Z) - Probing via Prompting [71.7904179689271]
This paper introduces a novel model-free approach to probing, by formulating probing as a prompting task.
We conduct experiments on five probing tasks and show that our approach is comparable to or better than diagnostic probes at extracting information.
We then examine the usefulness of a specific linguistic property for pre-training by removing the heads that are essential to that property and evaluating the resulting model's performance on language modeling.
arXiv Detail & Related papers (2022-07-04T22:14:40Z) - Few-shot Subgoal Planning with Language Models [58.11102061150875]
We show that language priors encoded in pre-trained language models allow us to infer fine-grained subgoal sequences.
In contrast to recent methods which make strong assumptions about subgoal supervision, our experiments show that language models can infer detailed subgoal sequences without any fine-tuning.
arXiv Detail & Related papers (2022-05-28T01:03:30Z) - Translating Human Mobility Forecasting through Natural Language
Generation [8.727495039722147]
The paper aims to address the human mobility forecasting problem as a language translation task in a sequence-to-sequence manner.
Under this pipeline, a two-branch network, SHIFT, is designed. Specifically, it consists of one main branch for language generation and one auxiliary branch to directly learn mobility patterns.
arXiv Detail & Related papers (2021-12-13T09:56:27Z) - Differentiable Prompt Makes Pre-trained Language Models Better Few-shot
Learners [23.150999852147283]
This study proposes a novel, pluggable, and efficient approach named DifferentiAble pRompT (DART).
It can convert small language models into better few-shot learners without any prompt engineering.
A comprehensive evaluation of standard NLP tasks demonstrates that the proposed approach achieves a better few-shot performance.
arXiv Detail & Related papers (2021-08-30T12:29:25Z) - Towards Zero-shot Language Modeling [90.80124496312274]
We construct a neural model that is inductively biased towards learning human languages.
We infer this distribution from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z) - Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods
in Natural Language Processing [78.8500633981247]
This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning".
Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly.
arXiv Detail & Related papers (2021-07-28T18:09:46Z) - Prompt Programming for Large Language Models: Beyond the Few-Shot
Paradigm [0.0]
We discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language.
We introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks.
arXiv Detail & Related papers (2021-02-15T05:27:55Z) - Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
arXiv Detail & Related papers (2020-10-24T11:55:28Z)