IMAP: Individual huMAn mobility Patterns visualizing platform
- URL: http://arxiv.org/abs/2209.03615v1
- Date: Thu, 8 Sep 2022 07:43:54 GMT
- Title: IMAP: Individual huMAn mobility Patterns visualizing platform
- Authors: Yisheng Alison Zheng, Amani Abusafia, Abdallah Lakhdari, Shing Tai
Tony Lui, Athman Bouguettaya
- Abstract summary: Existing models' accuracy in predicting users' mobility patterns is less than 25%.
We propose a novel perspective to study and analyze human mobility patterns and capture their flexibility.
Our platform enables users to visualize a graph of the places they visited based on their history records.
- Score: 0.39373541926236766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding human mobility is essential for the development of smart cities
and social behavior research. Human mobility models may be used in numerous
applications, including pandemic control, urban planning, and traffic
management. The accuracy of existing models in predicting users' mobility
patterns is less than 25%. This low accuracy may be explained by the flexible
nature of human movement: people are not rigid in their daily movement, and
rigid mobility models therefore risk missing hidden regularities in users'
records. Thus, we propose a novel perspective to
study and analyze human mobility patterns and capture their flexibility.
Typically, mobility patterns are represented as a sequence of locations. We
propose to define mobility patterns by abstracting these locations into a set
of places; labeling these locations allows us to detect hidden patterns that
are closer to reality. We present IMAP, an Individual huMAn mobility
Patterns visualizing platform. Our platform enables users to visualize a graph
of the places they visited based on their history records. In addition, our
platform displays the most frequent mobility patterns computed using a modified
PrefixSpan approach.
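The two steps described in the abstract, abstracting raw locations into labeled places and mining the most frequent patterns with a PrefixSpan-style algorithm, can be illustrated with a short example. The snippet below is a minimal sketch, assuming a hypothetical location-to-place mapping and a toy visit history; it implements the textbook PrefixSpan rather than the modified variant used in IMAP, whose details are not given here.

```python
from collections import defaultdict

def abstract_to_places(daily_locations, place_of):
    """Abstract each day's sequence of raw coordinates into a sequence of place labels."""
    return [[place_of[loc] for loc in day if loc in place_of] for day in daily_locations]

def prefixspan(sequences, min_support, prefix=None):
    """Textbook PrefixSpan: grow frequent prefixes by recursively scanning
    projected (suffix) databases. IMAP uses a modified variant; this is the plain form."""
    prefix = prefix or []
    patterns = []
    counts = defaultdict(int)
    for seq in sequences:
        for item in set(seq):          # count each place at most once per sequence
            counts[item] += 1
    for item, support in counts.items():
        if support < min_support:
            continue
        new_prefix = prefix + [item]
        patterns.append((new_prefix, support))
        # Project the database: keep the suffix after the first occurrence of `item`.
        projected = [seq[seq.index(item) + 1:] for seq in sequences if item in seq]
        patterns.extend(prefixspan(projected, min_support, new_prefix))
    return patterns

# Toy example; the coordinates and place labels are hypothetical.
place_of = {(1.0, 1.0): "home", (2.0, 2.0): "work", (3.0, 3.0): "gym"}
history = [
    [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)],  # home -> work -> gym
    [(1.0, 1.0), (2.0, 2.0)],              # home -> work
    [(1.0, 1.0), (3.0, 3.0)],              # home -> gym
]
sequences = abstract_to_places(history, place_of)
for pattern, support in sorted(prefixspan(sequences, min_support=2), key=lambda p: -p[1]):
    print(" -> ".join(pattern), support)
```

On this toy history the miner reports, for example, the pattern "home -> work" with support 2, meaning two of the three daily sequences contain it; a place-graph view such as IMAP's would render the same labels as nodes with edges weighted by transition frequency.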
Related papers
- HUMOS: Human Motion Model Conditioned on Body Shape [54.20419874234214]
We introduce a new approach to develop a generative motion model based on body shape.
We show that it's possible to train this model using unpaired data.
The resulting model generates diverse, physically plausible, and dynamically stable human motions.
arXiv Detail & Related papers (2024-09-05T23:50:57Z)
- Deep Activity Model: A Generative Approach for Human Mobility Pattern Synthesis [11.90100976089832]
We develop a novel generative deep learning approach for human mobility modeling and synthesis.
It incorporates both activity patterns and location trajectories using open-source data.
The model can be fine-tuned with local data, allowing it to adapt to accurately represent mobility patterns across diverse regions.
arXiv Detail & Related papers (2024-05-24T02:04:10Z)
- Human Mobility in the Metaverse [0.03072340427031969]
We find that despite the absence of commuting costs, an individual's inclination to explore new locations diminishes over time.
We also find a lack of correlation between land prices and visitation, a deviation from the patterns characterizing the physical world.
Our ability to predict the characteristics of the emerging meta mobility network implies that the laws governing human mobility are rooted in fundamental patterns of human dynamics.
arXiv Detail & Related papers (2024-04-03T21:26:40Z)
- Social-Transmotion: Promptable Human Trajectory Prediction [65.80068316170613]
Social-Transmotion is a generic Transformer-based model that exploits diverse and numerous visual cues to predict human behavior.
Our approach is validated on multiple datasets, including JTA, JRDB, Pedestrians and Cyclists in Road Traffic, and ETH-UCY.
arXiv Detail & Related papers (2023-12-26T18:56:49Z)
- A generalized vector-field framework for mobility [0.0]
We propose a general vector-field representation, starting from individuals' trajectories, that is valid for any type of mobility.
We show how individuals' choices determine the mesoscopic properties of the mobility field.
Our framework is an essential tool to capture hidden symmetries in mesoscopic urban mobility.
arXiv Detail & Related papers (2023-09-04T07:50:08Z)
- CrowdWeb: A Visualization Tool for Mobility Patterns in Smart Cities [0.39373541926236766]
The accuracy of current mobility prediction models is less than 25%.
We propose a web platform to visualize human mobility patterns.
We extend the platform to visualize the mobility of multiple users from a city-scale perspective.
arXiv Detail & Related papers (2023-05-22T11:30:00Z)
- Conditioned Human Trajectory Prediction using Iterative Attention Blocks [70.36888514074022]
We present a simple yet effective pedestrian trajectory prediction model aimed at predicting pedestrians' positions in urban-like environments.
Our model is a neural-based architecture that can run several layers of attention blocks and transformers in an iterative sequential fashion.
We show that without explicit introduction of social masks, dynamical models, social pooling layers, or complicated graph-like structures, it is possible to produce results on par with SoTA models.
arXiv Detail & Related papers (2022-06-29T07:49:48Z)
- Mobility signatures: a tool for characterizing cities using intercity mobility flows [1.1602089225841632]
We introduce the mobility signature as a tool for understanding how a city is embedded in the wider mobility network.
We demonstrate the potential of the mobility signature approach through two applications that build on mobile-phone-based data from Finland.
arXiv Detail & Related papers (2021-12-03T08:53:58Z)
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- Hidden Footprints: Learning Contextual Walkability from 3D Human Trails [70.01257397390361]
Current datasets only tell you where people are, not where they could be.
We first augment the set of valid, labeled walkable regions by propagating person observations between images, utilizing 3D information to create what we call hidden footprints.
We devise a training strategy designed for such sparse labels, combining a class-balanced classification loss with a contextual adversarial loss.
arXiv Detail & Related papers (2020-08-19T23:19:08Z)
- Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.