How Do We Move: Modeling Human Movement with System Dynamics
- URL: http://arxiv.org/abs/2003.00613v3
- Date: Mon, 22 Mar 2021 13:24:48 GMT
- Title: How Do We Move: Modeling Human Movement with System Dynamics
- Authors: Hua Wei, Dongkuan Xu, Junjie Liang, Zhenhui Li
- Abstract summary: We learn human movement with Generative Adversarial Imitation Learning.
We are the first to model the state transitions of moving agents jointly with system dynamics.
- Score: 34.13127840909941
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling how humans move through space is useful for policy-making
in transportation, public safety, and public health. Human movement can be
viewed as a dynamic process in which humans transition between states (e.g.,
locations) over time. In the human world, where intelligent agents such as
humans or human-driven vehicles play an important role, agent states mostly
describe human activities, and state transitions are shaped by both human
decisions and the physical constraints of the real-world system (e.g., agents
need time to travel a given distance). Modeling state transitions therefore
requires modeling both the agent's decision process and the physical system
dynamics. In this paper, we propose \ours to model state transitions in human
movement from a novel perspective, by learning the decision model and
integrating the system dynamics. \ours learns human movement with Generative
Adversarial Imitation Learning and integrates the stochastic constraints from
the system dynamics into the learning process. To the best of our knowledge,
this is the first work to learn a model of the state transitions of moving
agents together with system dynamics. In extensive experiments on real-world
datasets, we demonstrate that the proposed method generates trajectories
similar to real-world ones and outperforms state-of-the-art methods in
predicting the next location and generating long-term future trajectories.
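The abstract describes the approach only at a high level, so the sketch below is an illustrative reconstruction rather than the paper's actual model: a policy network plays the role of the decision model, a discriminator provides the adversarial imitation (GAIL-style) signal, and a hand-written stochastic travel-time rule stands in for the system dynamics. All module names, dimensions, and the dynamics itself are assumptions.

```python
# Minimal GAIL-style sketch: a decision model proposes movements, a stochastic
# system-dynamics rule (travel time over distance) produces the next state, and
# a discriminator compares generated transitions against real ones.
# Module names and the dynamics model are illustrative assumptions only.
import torch
import torch.nn as nn

STATE_DIM = 3    # (x, y, t): location plus elapsed time (assumed state layout)
ACTION_DIM = 2   # chosen displacement (dx, dy)

class PolicyNet(nn.Module):
    """Decision model: proposes a movement action from the current state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACTION_DIM))
    def forward(self, state):
        return self.net(state)

class Discriminator(nn.Module):
    """Scores (state, next_state) transitions: real-world vs. generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, state, next_state):
        return self.net(torch.cat([state, next_state], dim=-1))

def system_dynamics(state, action):
    """Stochastic physical constraint: covering a distance takes (noisy) time.
    Assumed form; the paper only states that such constraints are integrated."""
    dist = action.norm(dim=-1, keepdim=True)
    travel_time = dist / 1.0 + 0.1 * torch.randn_like(dist).abs()
    next_xy = state[..., :2] + action
    next_t = state[..., 2:3] + travel_time
    return torch.cat([next_xy, next_t], dim=-1)

policy, disc = PolicyNet(), Discriminator()
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Toy "expert" transitions standing in for real-world trajectory data.
expert_s = torch.rand(256, STATE_DIM)
expert_s_next = system_dynamics(expert_s, 0.1 * torch.randn(256, ACTION_DIM))

for step in range(200):
    # The decision model picks an action; the system dynamics (not the policy)
    # determines the resulting next state.
    s = torch.rand(256, STATE_DIM)
    s_next = system_dynamics(s, policy(s))

    # Discriminator update: real transitions -> 1, generated -> 0.
    d_loss = bce(disc(expert_s, expert_s_next), torch.ones(256, 1)) + \
             bce(disc(s, s_next.detach()), torch.zeros(256, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Policy update: fool the discriminator (adversarial imitation signal).
    p_loss = bce(disc(s, s_next), torch.ones(256, 1))
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()
```

Note that full GAIL would update the policy with a policy-gradient step (e.g., TRPO or PPO), since real system dynamics are generally not differentiable; this sketch backpropagates through a differentiable stand-in only to keep the example short.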
Related papers
- HUMOS: Human Motion Model Conditioned on Body Shape [54.20419874234214]
We introduce a new approach to develop a generative motion model based on body shape.
We show that it's possible to train this model using unpaired data.
The resulting model generates diverse, physically plausible, and dynamically stable human motions.
arXiv Detail & Related papers (2024-09-05T23:50:57Z)
- Deep Activity Model: A Generative Approach for Human Mobility Pattern Synthesis [11.90100976089832]
We develop a novel generative deep learning approach for human mobility modeling and synthesis.
It incorporates both activity patterns and location trajectories using open-source data.
The model can be fine-tuned with local data, allowing it to adapt to accurately represent mobility patterns across diverse regions.
arXiv Detail & Related papers (2024-05-24T02:04:10Z)
- Persistent-Transient Duality: A Multi-mechanism Approach for Modeling Human-Object Interaction [58.67761673662716]
Humans are highly adaptable, swiftly switching between different modes to handle different tasks, situations and contexts.
In human-object interaction (HOI) activities, these modes can be attributed to two mechanisms: (1) the large-scale, consistent plan for the whole activity and (2) the small-scale, child-level interactive actions that start and end along the timeline.
This work proposes to model two concurrent mechanisms that jointly control human motion.
arXiv Detail & Related papers (2023-07-24T12:21:33Z)
- Deep state-space modeling for explainable representation, analysis, and generation of professional human poses [0.0]
This paper introduces three novel methods for creating explainable representations of human movement.
The trained models are used for the full-body dexterity analysis of expert professionals.
arXiv Detail & Related papers (2023-04-13T08:13:10Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- Transformer Inertial Poser: Attention-based Real-time Human Motion Reconstruction from Sparse IMUs [79.72586714047199]
We propose an attention-based deep learning method to reconstruct full-body motion from six IMU sensors in real-time.
Our method achieves new state-of-the-art results both quantitatively and qualitatively, while being simple to implement and smaller in size.
arXiv Detail & Related papers (2022-03-29T16:24:52Z)
- An Adaptable Approach to Learn Realistic Legged Locomotion without Examples [38.81854337592694]
This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum model as a reference.
We present experimental results showing that even in a model-free setup, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot.
arXiv Detail & Related papers (2021-10-28T10:14:47Z)
- Improving Human Motion Prediction Through Continual Learning [2.720960618356385]
Human motion prediction is an essential component for enabling closer human-robot collaboration.
This task is compounded by the variability of human motion, both at a skeletal level due to varying human body sizes and at a motion level due to individual movement idiosyncrasies.
We propose a modular sequence learning approach that allows end-to-end training while also having the flexibility of being fine-tuned.
arXiv Detail & Related papers (2021-07-01T15:34:41Z)
- Scene-aware Generative Network for Human Motion Synthesis [125.21079898942347]
We propose a new framework, with the interaction between the scene and the human motion taken into account.
Considering the uncertainty of human motion, we formulate this task as a generative task.
We derive a GAN-based learning approach, with discriminators enforcing the compatibility between the human motion and the contextual scene.
arXiv Detail & Related papers (2021-05-31T09:05:50Z)
- GEM: Group Enhanced Model for Learning Dynamical Control Systems [78.56159072162103]
We build effective dynamical models that are amenable to sample-based learning.
We show that learning the dynamics on a Lie algebra vector space is more effective than learning a direct state-transition model (a brief illustrative sketch follows this list).
This work sheds light on a connection between learning of dynamics and Lie group properties, which opens doors for new research directions.
arXiv Detail & Related papers (2021-04-07T01:08:18Z)
- Modelling Human Kinetics and Kinematics during Walking using Reinforcement Learning [0.0]
We develop an automated method to generate 3D human walking motion in simulation which is comparable to real-world human motion.
We show that the method generalizes well across human-subjects with different kinematic structure and gait-characteristics.
arXiv Detail & Related papers (2021-03-15T04:01:20Z)
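As a brief illustration of the Lie-algebra idea mentioned in the GEM summary above, the sketch below contrasts a direct state-transition model with a model that predicts an update in the tangent space so(3) and maps it back through the matrix exponential, so the next state is guaranteed to stay a valid rotation. The network, dimensions, and the SO(3) example are assumptions chosen for illustration and are not taken from the GEM paper itself.

```python
# Illustrative sketch: predicting a state update in the Lie algebra so(3)
# instead of regressing the next state directly. Shapes and architecture are
# assumptions, not details from the GEM paper.
import torch
import torch.nn as nn

def hat(omega):
    """Map a batch of 3-vectors to skew-symmetric matrices (so(3) elements)."""
    zeros = torch.zeros_like(omega[..., 0])
    return torch.stack([
        torch.stack([zeros, -omega[..., 2], omega[..., 1]], dim=-1),
        torch.stack([omega[..., 2], zeros, -omega[..., 0]], dim=-1),
        torch.stack([-omega[..., 1], omega[..., 0], zeros], dim=-1),
    ], dim=-2)

class LieAlgebraDynamics(nn.Module):
    """Predicts a rotation update as a 3-vector in the Lie algebra; the matrix
    exponential then keeps the next state exactly on the rotation group."""
    def __init__(self, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(9 + action_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 3))

    def forward(self, R, action):
        omega = self.net(torch.cat([R.flatten(-2), action], dim=-1))
        return R @ torch.matrix_exp(hat(omega))  # next rotation, still orthogonal

# A direct transition model would instead regress all 9 matrix entries and
# generally drift off the manifold of valid rotations.
model = LieAlgebraDynamics()
R = torch.eye(3).expand(4, 3, 3)                 # batch of current states
R_next = model(R, torch.randn(4, 2))
orthogonality_error = (R_next.transpose(-1, -2) @ R_next - torch.eye(3)).abs().max()
print(orthogonality_error)                       # close to zero by construction
```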