Translating Human Mobility Forecasting through Natural Language Generation
- URL: http://arxiv.org/abs/2112.11481v1
- Date: Mon, 13 Dec 2021 09:56:27 GMT
- Title: Translating Human Mobility Forecasting through Natural Language Generation
- Authors: Hao Xue, Flora D. Salim, Yongli Ren, Charles L. A. Clarke
- Abstract summary: The paper aims to address the human mobility forecasting problem as a language translation task in a sequence-to-sequence manner.
Under this pipeline, a two-branch network, SHIFT, is designed. Specifically, it consists of one main branch for language generation and one auxiliary branch to directly learn mobility patterns.
- Score: 8.727495039722147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing human mobility forecasting models follow the standard design of the
time-series prediction model which takes a series of numerical values as input
to generate a numerical value as a prediction. Although treating this as a
regression problem seems straightforward, incorporating various contextual
information such as the semantic category information of each Place-of-Interest
(POI) is a necessary step, and often the bottleneck, in designing an effective
mobility prediction model. As opposed to the typical approach, we treat
forecasting as a translation problem and propose a novel
forecasting-through-language-generation pipeline. The paper aims to address
the human mobility
forecasting problem as a language translation task in a sequence-to-sequence
manner. A mobility-to-language template is first introduced to describe the
numerical mobility data as natural language sentences. The core intuition of
the human mobility forecasting translation task is to convert the input
mobility description sentences into a future mobility description from which
the prediction target can be obtained. Under this pipeline, a two-branch
network, SHIFT (Translating Human Mobility Forecasting), is designed.
Specifically, it consists of one main branch for language generation and one
auxiliary branch to directly learn mobility patterns. During the training, we
develop a momentum mode for better connecting and training the two branches.
Extensive experiments on three real-world datasets demonstrate that the
proposed SHIFT is effective and offers a fundamentally new approach to
forecasting human mobility.
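The abstract does not give the exact mobility-to-language template, but the round trip it describes can be sketched minimally: render a numerical visit series as a sentence, then parse the forecast back out of a generated future-mobility sentence. The sentence pattern, function names, and parsing regex below are illustrative assumptions, not the paper's actual template:

```python
import re

def mobility_to_sentence(poi_name: str, category: str, counts: list[int]) -> str:
    """Describe a week of numerical visit counts as a natural-language
    sentence (hypothetical template, not the paper's exact wording)."""
    history = ", ".join(str(c) for c in counts)
    return (f"From Monday to Sunday, there were {history} visits to "
            f"{poi_name}, a {category} place. How many visits will there "
            f"be next Monday?")

def sentence_to_prediction(generated: str) -> int:
    """Recover the numerical forecast from a generated sentence of the
    (assumed) form 'There will be 205 visits ...'."""
    match = re.search(r"there will be (\d+) visits", generated, re.IGNORECASE)
    if match is None:
        raise ValueError(f"no prediction found in {generated!r}")
    return int(match.group(1))

# Round trip: encode the history, then parse a model's generated output.
prompt = mobility_to_sentence("Central Station", "transport",
                              [210, 195, 220, 230, 240, 180, 160])
model_output = "There will be 205 visits to Central Station next Monday."
print(prompt)
print(sentence_to_prediction(model_output))  # -> 205
```

Under this framing, the sequence-to-sequence model never sees raw numbers as regression targets; the prediction is whatever number the generated future-mobility sentence contains.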
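The abstract also does not specify how the momentum mode couples the two branches. One common pattern it may resemble is an exponential-moving-average (EMA) update between matching parameter sets, as in MoCo-style training; whether SHIFT does exactly this is an assumption:

```python
import torch

@torch.no_grad()
def momentum_update(aux_branch: torch.nn.Module,
                    main_branch: torch.nn.Module,
                    m: float = 0.999) -> None:
    """Generic EMA momentum update: drift the auxiliary branch's parameters
    toward the main language-generation branch. Assumes the coupled
    components have matching shapes (e.g., shared encoders); the abstract
    does not give SHIFT's actual mechanism."""
    for p_aux, p_main in zip(aux_branch.parameters(), main_branch.parameters()):
        p_aux.mul_(m).add_(p_main, alpha=1.0 - m)
```

Called once per training step, an update like this lets one branch track a smoothed copy of the other, which is the usual reason momentum coupling stabilizes two-branch training.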
Related papers
- Multi-Transmotion: Pre-trained Model for Human Motion Prediction [68.87010221355223] (arXiv 2024-11-04)
  Multi-Transmotion is an innovative transformer-based model designed for cross-modality pre-training.
  Our methodology demonstrates competitive performance across various datasets on several downstream tasks.
- Prompt Mining for Language-based Human Mobility Forecasting [10.325794804095889] (arXiv 2024-03-06)
  We propose a novel framework for prompt mining in language-based mobility forecasting.
  The framework includes a prompt generation stage based on the information entropy of prompts and a prompt refinement stage that integrates mechanisms such as chain of thought (an illustrative entropy sketch follows this list).
- Humanoid Locomotion as Next Token Prediction [84.21335675130021] (arXiv 2024-02-29)
  Our model is a causal transformer trained via autoregressive prediction of sensorimotor trajectories.
  We show that our model enables a full-sized humanoid to walk in San Francisco zero-shot.
  Our model transfers to the real world even when trained on only 27 hours of walking data, and generalizes to commands not seen during training, such as walking backward.
- Where Would I Go Next? Large Language Models as Human Mobility Predictors [21.100313868232995] (arXiv 2023-08-29)
  We introduce a novel method, LLM-Mob, which leverages the language understanding and reasoning capabilities of LLMs for analysing human mobility data.
  Comprehensive evaluations of our method reveal that LLM-Mob excels in providing accurate and interpretable predictions.
- Leveraging Language Foundation Models for Human Mobility Forecasting [8.422257363944295] (arXiv 2022-09-11)
  We propose a novel pipeline that leverages language foundation models for temporal sequential pattern mining.
  We perform the forecasting task directly on the natural language input that includes all kinds of information.
  Specific prompts are introduced to transform numerical temporal sequences into sentences so that existing language models can be directly applied (compare the template sketch after the abstract above).
- Pre-Training a Language Model Without Human Language [74.11825654535895] (arXiv 2020-12-22)
  We study how the intrinsic nature of pre-training data contributes to fine-tuned downstream performance.
  We find that models pre-trained on unstructured data beat those trained from scratch on downstream tasks.
  Surprisingly, pre-training on certain non-human-language data gives GLUE performance close to that of pre-training on another non-English language.
- Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707] (arXiv 2020-10-24)
  We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
  Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking (a toy sketch of the blocking idea follows this list).
  We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pairs and ParaNMT datasets.
- Semi-supervised Formality Style Transfer using Language Model Discriminator and Mutual Information Maximization [52.867459839641526] (arXiv 2020-10-10)
  Formality style transfer is the task of converting informal sentences into grammatically correct formal sentences.
  We propose a semi-supervised formality style transfer model that utilizes a language-model-based discriminator to maximize the likelihood of the output sentence being formal.
  Experiments showed that our model significantly outperformed previous state-of-the-art baselines on both automated metrics and human judgement.
- Modeling Future Cost for Neural Machine Translation [62.427034890537676] (arXiv 2020-02-28)
  We propose a simple and effective method to model the future cost of each target word for NMT systems.
  The proposed approach achieves significant improvements over a strong Transformer-based NMT baseline.
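The prompt-mining entry above ranks candidate prompts by their information entropy. As a minimal sketch, here is one plausible reading of that idea: Shannon entropy over a prompt's token-frequency distribution. The paper's actual entropy definition and mining procedure may well differ:

```python
import math
from collections import Counter

def token_entropy(prompt: str) -> float:
    """Shannon entropy of the prompt's token-frequency distribution.
    One plausible reading of 'information entropy of prompts'; the
    paper's exact definition may differ."""
    counts = Counter(prompt.lower().split())
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

candidates = [
    "Predict the number of visits tomorrow given the past week: 210, 195, 220.",
    "Visits were 210, 195, 220. How many visits will there be tomorrow?",
]
# Rank candidate prompts, e.g. preferring higher-entropy (more varied) ones.
for p in sorted(candidates, key=token_entropy, reverse=True):
    print(f"{token_entropy(p):.3f}  {p}")
```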
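The Dynamic Blocking entry above names a decoding algorithm that discourages verbatim copying of the source during paraphrase generation. The toy sketch below captures the general idea (probabilistically forbidding the source's next token whenever the decoder just emitted a source token); the published algorithm's details differ, and the list-of-logits interface is an assumption:

```python
import random

def dynamic_blocking_step(source_ids: list[int],
                          prev_token: int,
                          logits: list[float],
                          block_prob: float = 0.5) -> list[float]:
    """Toy version of Dynamic Blocking: if the token just generated also
    appears in the source, forbid (with some probability) the token that
    follows it in the source, so the decoder cannot copy source bigrams
    verbatim. Simplified reading; not the authors' exact algorithm."""
    for i, tok in enumerate(source_ids[:-1]):
        if tok == prev_token and random.random() < block_prob:
            logits[source_ids[i + 1]] = float("-inf")
    return logits

# Example: vocabulary of 10 token ids, uniform logits over all tokens.
random.seed(0)
source = [3, 7, 2, 7, 5]
logits = [0.0] * 10
logits = dynamic_blocking_step(source, prev_token=7, logits=logits)
print(logits)  # ids 2 and/or 5 may now be -inf, blocking copied bigrams
```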