Exploring Context Generalizability in Citywide Crowd Mobility
Prediction: An Analytic Framework and Benchmark
- URL: http://arxiv.org/abs/2106.16046v4
- Date: Fri, 23 Jun 2023 05:55:48 GMT
- Title: Exploring Context Generalizability in Citywide Crowd Mobility
Prediction: An Analytic Framework and Benchmark
- Authors: Liyue Chen, Xiaoxiang Wang, Leye Wang
- Abstract summary: We present a unified analytic framework and a large-scale benchmark for evaluating context generalizability.
We conduct experiments on several crowd mobility prediction tasks, such as bike flow, metro passenger flow, and electric vehicle charging demand.
Using more contextual features may not always result in better prediction with existing context modeling techniques.
Among context modeling techniques, using a gated unit to incorporate raw contextual features into the deep prediction model has good generalizability.
- Score: 4.367050939292982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contextual features are important data sources for building citywide crowd
mobility prediction models. However, the difficulty of applying context lies in
the unknown generalizability of contextual features (e.g., weather, holiday,
and points of interest) and context modeling techniques across different
scenarios. In this paper, we present a unified analytic framework and a
large-scale benchmark for evaluating context generalizability. The benchmark
includes crowd mobility data, contextual data, and advanced prediction models.
We conduct comprehensive experiments on several crowd mobility prediction tasks
such as bike flow, metro passenger flow, and electric vehicle charging demand.
Our results reveal several important observations: (1) Using more contextual
features may not always result in better prediction with existing context
modeling techniques; in particular, the combination of holiday and temporal
position can provide more generalizable beneficial information than other
contextual feature combinations. (2) In context modeling techniques, using a
gated unit to incorporate raw contextual features into the deep prediction
model has good generalizability. In addition, we offer several suggestions on
incorporating contextual factors when building crowd mobility prediction
applications. Based on our findings, we call for future research efforts devoted to
developing new context modeling solutions.
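One of the findings above is that a gated unit fusing raw contextual features into the deep prediction model generalizes well across scenarios. The snippet below is a minimal sketch of such a gated fusion unit; the module name, tensor shapes, and the residual formulation are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GatedContextFusion(nn.Module):
    """Illustrative gated unit that merges raw contextual features
    (e.g., weather, holiday, temporal position) with the hidden states
    of a deep citywide crowd-flow prediction model."""

    def __init__(self, hidden_dim: int, context_dim: int):
        super().__init__()
        # Project raw contextual features into the hidden space.
        self.context_proj = nn.Linear(context_dim, hidden_dim)
        # Gate decides, per dimension, how much context to let through.
        self.gate = nn.Linear(hidden_dim + context_dim, hidden_dim)

    def forward(self, hidden: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # hidden:  (batch, num_regions, hidden_dim) spatio-temporal features
        # context: (batch, context_dim) citywide contextual features
        context = context.unsqueeze(1).expand(-1, hidden.size(1), -1)
        gate = torch.sigmoid(self.gate(torch.cat([hidden, context], dim=-1)))
        return hidden + gate * self.context_proj(context)

# Usage sketch: fuse a 16-dim context vector into 64-dim region embeddings.
fusion = GatedContextFusion(hidden_dim=64, context_dim=16)
out = fusion(torch.randn(8, 128, 64), torch.randn(8, 16))  # -> (8, 128, 64)
```

In this sketch the sigmoid gate lets the model attenuate contextual signals per region and per dimension, which is one plausible reason such fusion transfers across tasks better than naive concatenation.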
Related papers
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z) - Context Matters: Leveraging Contextual Features for Time Series Forecasting [2.9687381456164004]
We introduce ContextFormer, a novel plug-and-play method to surgically integrate multimodal contextual information into existing forecasting models.
ContextFormer effectively distills forecast-specific information from rich multimodal contexts, including categorical, continuous, time-varying, and even textual information.
It outperforms SOTA forecasting models by up to 30% on a range of real-world datasets spanning energy, traffic, environmental, and financial domains.
arXiv Detail & Related papers (2024-10-16T15:36:13Z) - Enhancing Traffic Prediction with Textual Data Using Large Language Models [0.0]
The study investigates two types of special scenarios: regional-level and node-level.
For regional-level scenarios, textual information is represented as a node connected to the entire network.
For node-level scenarios, embeddings from the large model represent additional nodes connected only to corresponding nodes.
This approach yields a significant improvement in prediction accuracy in our experiments on the New York Bike dataset (see the sketch below).
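The regional-level variant described above can be read as a virtual node carrying an LLM text embedding that is wired into the whole traffic graph, while the node-level variant connects such embeddings only to selected nodes. The helper below is a minimal illustration under that reading; the function name, shapes, and the assumption that the text embedding is already projected to the node-feature size are hypothetical, not taken from the paper.

```python
import torch

def add_text_node(node_feats, adj, text_emb, node_idx=None):
    """Append one virtual node carrying a text embedding to a traffic graph.

    node_feats: (N, D) node features; adj: (N, N) adjacency matrix;
    text_emb: (D,) text embedding projected to the node feature size.
    node_idx=None links the text node to every node (regional-level);
    a list of indices links it only to those nodes (node-level).
    """
    N = node_feats.size(0)
    feats = torch.cat([node_feats, text_emb.unsqueeze(0)], dim=0)   # (N+1, D)
    new_adj = torch.zeros(N + 1, N + 1, dtype=adj.dtype)
    new_adj[:N, :N] = adj
    targets = torch.arange(N) if node_idx is None else torch.tensor(node_idx)
    new_adj[N, targets] = 1.0   # text node -> traffic nodes
    new_adj[targets, N] = 1.0   # traffic nodes -> text node
    return feats, new_adj

# Regional-level example: one text node linked to all 10 regions.
feats, adj = add_text_node(torch.randn(10, 32), torch.eye(10), torch.randn(32))
```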
arXiv Detail & Related papers (2024-05-10T03:14:26Z) - How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z) - JRDB-Traj: A Dataset and Benchmark for Trajectory Forecasting in Crowds [79.00975648564483]
Trajectory forecasting models, employed in fields such as robotics, autonomous vehicles, and navigation, face challenges in real-world scenarios.
This dataset provides comprehensive data, including the locations of all agents, scene images, and point clouds, all from the robot's perspective.
The objective is to predict the future positions of agents relative to the robot using raw sensory input data.
arXiv Detail & Related papers (2023-11-05T18:59:31Z) - ACQUIRED: A Dataset for Answering Counterfactual Questions In Real-Life
Videos [53.92440577914417]
ACQUIRED consists of 3.9K annotated videos, encompassing a wide range of event types and incorporating both first- and third-person viewpoints.
Each video is annotated with questions that span three distinct dimensions of reasoning, including physical, social, and temporal.
We benchmark our dataset against several state-of-the-art language-only and multimodal models and experimental results demonstrate a significant performance gap.
arXiv Detail & Related papers (2023-11-02T22:17:03Z) - Quantifying the Plausibility of Context Reliance in Neural Machine
Translation [25.29330352252055]
We introduce Plausibility Evaluation of Context Reliance (PECoRe).
PECoRe is an end-to-end interpretability framework designed to quantify context usage in language models' generations.
We use PECoRe to quantify the plausibility of context-aware machine translation models.
arXiv Detail & Related papers (2023-10-02T13:26:43Z) - Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that can see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
The models learned to bridge the gap between such modalities coupled with large-scale training data facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene or manipulating the robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z) - How Far are We from Effective Context Modeling? An Exploratory Study on
Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.