A novel activity pattern generation incorporating deep learning for
transport demand models
- URL: http://arxiv.org/abs/2104.02278v1
- Date: Tue, 6 Apr 2021 04:07:05 GMT
- Title: A novel activity pattern generation incorporating deep learning for
transport demand models
- Authors: Danh T. Phan and Hai L. Vu
- Abstract summary: This paper proposes a novel activity pattern generation framework by incorporating deep learning with travel domain knowledge.
We develop different deep neural networks with entity embedding and random forest models to classify activity type.
Results show high accuracy for the start time and end time of work and school activities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Activity generation plays an important role in activity-based demand
modelling systems. While machine learning, especially deep learning, has been
increasingly used for mode choice and traffic flow prediction, much less
research has exploited the advantages of deep learning for activity generation
tasks. This paper proposes a novel activity pattern generation framework by
incorporating deep learning with travel domain knowledge. We model each
activity schedule as one primary activity tour and several secondary activity
tours. We then develop different deep neural networks with entity embedding and
random forest models to classify activity type, as well as to predict activity
times. The proposed framework can capture the activity patterns for individuals
in both training and validation sets. Results show high accuracy for the start
time and end time of work and school activities. The framework also replicates
the start time patterns of stop-before and stop-after primary work activity
well. This provides a promising direction to deploy advanced machine learning
methods to generate more reliable activity-travel patterns for transport demand
systems and their applications.
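The abstract describes classifying activity type from traveller attributes using neural networks with entity embedding. As a minimal sketch of the entity-embedding idea only, the snippet below looks up a learned vector for each categorical attribute, concatenates them, and applies a softmax layer over activity types. All attribute names, cardinalities, and dimensions are illustrative assumptions, not taken from the paper, and the weights here are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical categorical traveller attributes (names and cardinalities
# are illustrative, not from the paper). Each attribute gets its own
# embedding table, which the paper's framework would learn during training.
CARDINALITIES = {"age_group": 6, "employment": 4, "household_size": 5}
EMBED_DIM = 3
ACTIVITY_TYPES = ["work", "school", "shopping", "other"]

# Randomly initialised embedding tables: one row per category value.
tables = {name: rng.normal(size=(card, EMBED_DIM))
          for name, card in CARDINALITIES.items()}

# Dense softmax layer mapping concatenated embeddings to activity types.
W = rng.normal(size=(len(CARDINALITIES) * EMBED_DIM, len(ACTIVITY_TYPES)))
b = np.zeros(len(ACTIVITY_TYPES))

def classify_activity(person):
    """Forward pass: look up each attribute's embedding vector,
    concatenate them, and apply a linear layer plus softmax."""
    x = np.concatenate([tables[k][person[k]] for k in CARDINALITIES])
    logits = x @ W + b
    p = np.exp(logits - logits.max())  # numerically stable softmax
    p /= p.sum()
    return dict(zip(ACTIVITY_TYPES, p))

probs = classify_activity({"age_group": 2, "employment": 1, "household_size": 3})
print(probs)
```

Relative to one-hot encoding, the embedding tables let categories with similar behaviour land near each other in the learned space, which is the usual motivation for entity embedding of categorical variables.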
Related papers
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- SelfAct: Personalized Activity Recognition based on Self-Supervised and Active Learning [0.688204255655161]
SelfAct is a novel framework for Human Activity Recognition (HAR) on wearable and mobile devices.
It combines self-supervised and active learning to mitigate problems such as intra- and inter-variability of activity execution.
Our experiments on two publicly available HAR datasets demonstrate that SelfAct achieves results close to or even better than the ones of fully supervised approaches.
arXiv Detail & Related papers (2023-04-19T09:39:11Z)
- Deep Active Learning for Computer Vision: Past and Future [50.19394935978135]
Despite its indispensable role in developing AI models, research on active learning is not as intensive as in other research directions.
By addressing data automation challenges and coping with automated machine learning systems, active learning will facilitate democratization of AI technologies.
arXiv Detail & Related papers (2022-11-27T13:07:14Z)
- Self-supervised Pretraining with Classification Labels for Temporal Activity Detection [54.366236719520565]
Temporal Activity Detection aims to predict activity classes per frame.
Due to the expensive frame-level annotations required for detection, the scale of detection datasets is limited.
This work proposes a novel self-supervised pretraining method for detection leveraging classification labels.
arXiv Detail & Related papers (2021-11-26T18:59:28Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
- A Survey on Self-supervised Pre-training for Sequential Transfer Learning in Neural Networks [1.1802674324027231]
Self-supervised pre-training for transfer learning is becoming an increasingly popular technique to improve state-of-the-art results using unlabeled data.
We provide an overview of the taxonomy for self-supervised learning and transfer learning, and highlight some prominent methods for designing pre-training tasks across different domains.
arXiv Detail & Related papers (2020-07-01T22:55:48Z)
- Enabling Edge Cloud Intelligence for Activity Learning in Smart Home [1.3858051019755284]
We propose a novel activity learning framework based on Edge Cloud architecture.
We utilize temporal features for activity recognition and prediction in a single smart home setting.
arXiv Detail & Related papers (2020-05-14T11:43:20Z)
- Revisiting Few-shot Activity Detection with Class Similarity Control [107.79338380065286]
We present a framework for few-shot temporal activity detection based on proposal regression.
Our model is end-to-end trainable, takes into account the frame rate differences between few-shot activities and untrimmed test videos, and can benefit from additional few-shot examples.
arXiv Detail & Related papers (2020-03-31T22:02:38Z)
- ZSTAD: Zero-Shot Temporal Activity Detection [107.63759089583382]
We propose a novel task setting called zero-shot temporal activity detection (ZSTAD), where activities that have never been seen in training can still be detected.
We design an end-to-end deep network based on R-C3D as the architecture for this solution.
Experiments on both the THUMOS14 and the Charades datasets show promising performance in terms of detecting unseen activities.
arXiv Detail & Related papers (2020-03-12T02:40:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.