Learning to Simulate Daily Activities via Modeling Dynamic Human Needs
- URL: http://arxiv.org/abs/2302.10897v1
- Date: Thu, 9 Feb 2023 12:30:55 GMT
- Title: Learning to Simulate Daily Activities via Modeling Dynamic Human Needs
- Authors: Yuan Yuan, Huandong Wang, Jingtao Ding, Depeng Jin, Yong Li
- Abstract summary: We propose a knowledge-driven simulation framework based on generative adversarial imitation learning.
Our core idea is to model the evolution of human needs as the underlying mechanism that drives activity generation in the simulation model.
Our framework outperforms the state-of-the-art baselines in terms of data fidelity and utility.
- Score: 24.792813473159505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Daily activity data that record individuals' various types of
activities in daily life are widely used in many applications such as activity
scheduling, activity recommendation, and policymaking. Despite their high
value, such data are hard to access due to high collection costs and potential
privacy issues. Simulating human activities to produce massive high-quality
data is therefore of great importance for practical applications. However,
existing solutions, including rule-based methods with simplified assumptions
of human behavior and data-driven methods that directly fit real-world data,
both fall short of matching reality. In this paper, motivated by Maslow's need
theory, a classic psychological theory of human motivation, we propose a
knowledge-driven simulation framework based on generative adversarial
imitation learning. To enhance the fidelity and utility of the generated
activity data, our core idea is to model the evolution of human needs as the
underlying mechanism that drives activity generation in the simulation model.
Specifically, this is achieved by a hierarchical model structure that
disentangles different need levels and by neural stochastic differential
equations that capture the piecewise-continuous characteristics of need
dynamics. Extensive experiments demonstrate that our framework outperforms
state-of-the-art baselines in terms of data fidelity and utility. We also
present insightful interpretations of the need modeling. The code is available
at https://github.com/tsinghua-fib-lab/SAND.
Related papers
- Human Mobility Modeling with Limited Information via Large Language Models [11.90100976089832]
We propose an innovative Large Language Model (LLM)-empowered human mobility modeling framework.
Our proposed approach significantly reduces the reliance on detailed human mobility statistical data.
We have validated our results using the NHTS and SCAG-ABM datasets.
arXiv Detail & Related papers (2024-09-26T03:07:32Z)
- Active Exploration in Bayesian Model-based Reinforcement Learning for Robot Manipulation [8.940998315746684]
We propose a model-based reinforcement learning (RL) approach for robotic arm end-tasks.
We employ Bayesian neural network models to represent, in a probabilistic way, both the belief and information encoded in the dynamic model during exploration.
Our experiments show the advantages of our Bayesian model-based RL approach, which achieves results of similar quality to relevant alternatives.
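One common way to realize this idea, sketched below, is an ensemble of dynamics networks whose prediction disagreement serves as an epistemic-uncertainty signal guiding active exploration. The ensemble stand-in (rather than the paper's Bayesian neural networks) and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EnsembleDynamics(nn.Module):
    """Ensemble of forward models s' = f_i(s, a); disagreement across
    members approximates epistemic uncertainty about the dynamics."""
    def __init__(self, state_dim=4, action_dim=2, n_members=5, hidden=64):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, state_dim))
            for _ in range(n_members))

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)
        preds = torch.stack([m(x) for m in self.members])  # (M, B, state_dim)
        mean = preds.mean(dim=0)
        disagreement = preds.var(dim=0).sum(dim=-1)         # exploration bonus
        return mean, disagreement

model = EnsembleDynamics()
s, a = torch.randn(8, 4), torch.randn(8, 2)
next_state, bonus = model(s, a)  # explore where the bonus is high
```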
arXiv Detail & Related papers (2024-04-02T11:44:37Z)
- MATRIX: Multi-Agent Trajectory Generation with Diverse Contexts [47.12378253630105]
We study trajectory-level data generation for multi-human or human-robot interaction scenarios.
We propose a learning-based automatic trajectory generation model, which we call Multi-Agent TRajectory generation with dIverse conteXts (MATRIX).
arXiv Detail & Related papers (2024-03-09T23:28:54Z)
- A Framework for Realistic Simulation of Daily Human Activity [1.8877825068318652]
This paper presents a framework for simulating daily human activity patterns in home environments at scale.
We introduce a method for specifying day-to-day variation in schedules and present a bidirectional constraint propagation algorithm for generating schedules from templates.
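The following is a hypothetical reconstruction of that idea in miniature: each activity in a template has an allowed start window, a forward pass tightens windows against predecessors, a backward pass tightens them against successors (the "bidirectional" part), and day-to-day variation comes from sampling within the tightened windows. The template and times are made up for illustration.

```python
import random

template = [  # (name, duration_min, earliest_start, latest_start)
    ("wake_up",   30,  360,  480),
    ("breakfast", 30,  390,  540),
    ("work",     480,  480,  600),
    ("dinner",    60, 1020, 1200),
]

def propagate(template):
    """Bidirectional constraint propagation over start-time windows."""
    windows = [[lo, hi] for _, _, lo, hi in template]
    durations = [d for _, d, _, _ in template]
    for i in range(1, len(windows)):            # forward: start after predecessor ends
        windows[i][0] = max(windows[i][0], windows[i - 1][0] + durations[i - 1])
    for i in range(len(windows) - 2, -1, -1):   # backward: leave room for successor
        windows[i][1] = min(windows[i][1], windows[i + 1][1] - durations[i])
    return windows

def sample_schedule(template, seed=None):
    """Sample one day's schedule from the tightened windows."""
    rng = random.Random(seed)
    schedule, t = [], 0
    for (name, dur, _, _), (lo, hi) in zip(template, propagate(template)):
        start = rng.randint(max(lo, t), hi)     # day-to-day variation
        schedule.append((name, start, start + dur))
        t = start + dur
    return schedule

print(sample_schedule(template, seed=0))
```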
arXiv Detail & Related papers (2023-11-26T19:50:23Z)
- CoDBench: A Critical Evaluation of Data-driven Models for Continuous Dynamical Systems [8.410938527671341]
We introduce CoDBench, an exhaustive benchmarking suite comprising 11 state-of-the-art data-driven models for solving differential equations.
Specifically, we evaluate 4 distinct categories of models, viz., feed-forward neural networks, deep operator regression models, frequency-based neural operators, and transformer architectures.
We conduct extensive experiments, assessing the operators' capabilities in learning, zero-shot super-resolution, data efficiency, robustness to noise, and computational efficiency.
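As a sketch of the zero-shot super-resolution protocol (not CoDBench's actual API): a neural operator trained at one grid resolution is queried, unchanged, on finer grids, and the relative L2 error is tracked per resolution. `dummy_operator` below is a stand-in for a trained resolution-invariant model.

```python
import numpy as np

def true_solution(x):
    """Stand-in ground-truth operator output on query points x."""
    return np.sin(2 * np.pi * x) * np.exp(-x)

def dummy_operator(x):
    """Placeholder for a trained operator: takes query coordinates,
    returns predictions (here a crude surrogate of the truth)."""
    return np.sin(2 * np.pi * x) * (1.0 - x + x**2 / 2)

def relative_l2(pred, target):
    return np.linalg.norm(pred - target) / np.linalg.norm(target)

for n in (64, 128, 256, 512):   # e.g. trained at 64; finer grids are zero-shot
    x = np.linspace(0.0, 1.0, n)
    err = relative_l2(dummy_operator(x), true_solution(x))
    print(f"resolution {n:4d}: rel. L2 error {err:.4f}")
```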
arXiv Detail & Related papers (2023-10-02T21:27:54Z)
- Model-Based Reinforcement Learning with Multi-Task Offline Pretraining [59.82457030180094]
We present a model-based RL method that learns to transfer potentially useful dynamics and action demonstrations from offline data to a novel task.
The main idea is to use the world models not only as simulators for behavior learning but also as tools to measure task relevance.
We demonstrate the advantages of our approach compared with the state-of-the-art methods in Meta-World and DeepMind Control Suite.
arXiv Detail & Related papers (2023-06-06T02:24:41Z)
- User Behavior Simulation with Large Language Model based Agents [116.74368915420065]
We propose an LLM-based agent framework and design a sandbox environment to simulate real user behaviors.
Based on extensive experiments, we find that the simulated behaviors of our method closely match those of real humans.
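A minimal sketch of such an agent loop is shown below, assuming only a generic text-completion function `llm(prompt) -> str` (any chat-completion API could be substituted); the persona, prompt format, and action vocabulary are illustrative, not the paper's framework.

```python
def llm(prompt: str) -> str:
    """Placeholder: route to a real chat-completion endpoint in practice."""
    return "BROWSE item_42"

class UserAgent:
    def __init__(self, persona: str):
        self.persona = persona
        self.memory: list[str] = []

    def act(self, observation: str) -> str:
        prompt = (f"You are a user: {self.persona}\n"
                  f"Recent actions: {self.memory[-5:]}\n"
                  f"You see: {observation}\n"
                  f"Reply with one action, e.g. BROWSE <item>, BUY <item>, LEAVE.")
        action = llm(prompt).strip()
        self.memory.append(action)  # memory makes behavior history-dependent
        return action

# Sandbox loop: the environment shows recommendations, the agent reacts.
agent = UserAgent("a budget-conscious movie fan")
for step in range(3):
    action = agent.act("recommended items: item_42, item_7")
    print(step, action)
    if action.startswith("LEAVE"):
        break
```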
arXiv Detail & Related papers (2023-06-05T02:58:35Z)
- Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
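The mixture idea can be sketched as a dynamics head that predicts p(s' | s, a) as a K-component Gaussian mixture, where each component is free to specialize to one task's dynamics; the single-step formulation and all sizes below are illustrative assumptions rather than the paper's full world model.

```python
import torch
import torch.nn as nn

class MixtureDynamics(nn.Module):
    """Predicts p(s' | s, a) as a K-component Gaussian mixture; each
    component can specialize to one task's dynamics prior."""
    def __init__(self, state_dim=8, action_dim=2, k=4, hidden=64):
        super().__init__()
        self.k, self.state_dim = k, state_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, k * (2 * state_dim + 1)))

    def forward(self, s, a):
        out = self.net(torch.cat([s, a], -1))
        logits, mu, log_std = out.split(
            [self.k, self.k * self.state_dim, self.k * self.state_dim], -1)
        mu = mu.view(-1, self.k, self.state_dim)
        std = log_std.view(-1, self.k, self.state_dim).clamp(-5, 2).exp()
        return logits, mu, std

    def nll(self, s, a, s_next):
        """Negative log-likelihood of the observed transition."""
        logits, mu, std = self(s, a)
        comp = torch.distributions.Normal(mu, std)
        log_p = comp.log_prob(s_next.unsqueeze(1)).sum(-1)  # (B, K)
        mix = torch.log_softmax(logits, -1)
        return -torch.logsumexp(mix + log_p, -1).mean()

model = MixtureDynamics()
s, a, s2 = torch.randn(16, 8), torch.randn(16, 2), torch.randn(16, 8)
loss = model.nll(s, a, s2)  # train with loss.backward() + an optimizer step
```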
arXiv Detail & Related papers (2023-03-12T05:08:03Z)
- Bridging the Gap to Real-World Object-Centric Learning [66.55867830853803]
We show that reconstructing features from models trained in a self-supervised manner is a sufficient training signal for object-centric representations to arise in a fully unsupervised way.
Our approach, DINOSAUR, significantly outperforms existing object-centric learning models on simulated data.
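A compressed sketch of the feature-reconstruction signal: pool a frozen self-supervised feature map into a few slots and train solely by reconstructing those features. The plain cross-attention below is a simplification of slot attention, and the frozen-feature stand-in and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class FeatureReconstructor(nn.Module):
    def __init__(self, n_slots=4, feat_dim=32):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(n_slots, feat_dim))  # learned queries
        self.attn = nn.MultiheadAttention(feat_dim, 1, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, feat_dim))

    def forward(self, feats):                       # feats: (B, N_patches, D)
        q = self.slots.expand(feats.size(0), -1, -1)
        slots, attn_w = self.attn(q, feats, feats)  # (B, K, D), (B, K, N)
        # Broadcast each slot back to the patches it claimed and decode.
        recon = torch.einsum('bkn,bkd->bnd', attn_w, self.decoder(slots))
        return recon, attn_w                        # attn_w ~ soft object masks

model = FeatureReconstructor()
frozen_feats = torch.randn(2, 49, 32)  # stand-in for frozen DINO ViT features
recon, masks = model(frozen_feats)
loss = ((recon - frozen_feats) ** 2).mean()  # reconstruction is the only signal
```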
arXiv Detail & Related papers (2022-09-29T15:24:47Z)
- Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
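One way to make this concrete, under illustrative assumptions, is a forward model conditioned on the goal together with a goal-dependent relevance weighting that focuses the prediction loss on the state dimensions that matter for that goal; the paper's exact objective may differ.

```python
import torch
import torch.nn as nn

state_dim, action_dim, goal_dim = 10, 3, 10

forward_model = nn.Sequential(  # goal-conditioned next-state predictor
    nn.Linear(state_dim + action_dim + goal_dim, 64), nn.ReLU(),
    nn.Linear(64, state_dim))
relevance = nn.Sequential(      # soft mask over state dimensions
    nn.Linear(goal_dim, 64), nn.ReLU(),
    nn.Linear(64, state_dim), nn.Sigmoid())

s = torch.randn(32, state_dim)
a = torch.randn(32, action_dim)
g = torch.randn(32, goal_dim)
s_next = torch.randn(32, state_dim)        # stand-in transition data

pred = forward_model(torch.cat([s, a, g], -1))
w = relevance(g)                           # near 0 for goal-irrelevant dims
loss = (w * (pred - s_next) ** 2).mean()   # errors on irrelevant dims cost little
# In practice w is regularized (e.g. an L1 penalty) so it cannot collapse to 0.
```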
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
- Human Trajectory Forecasting in Crowds: A Deep Learning Perspective [89.4600982169]
We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
We propose two knowledge-based data-driven methods to effectively capture these social interactions.
We develop TrajNet++, a large-scale interaction-centric benchmark, a significant yet missing component in the field of human trajectory forecasting.
arXiv Detail & Related papers (2020-07-07T17:19:56Z)