Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory
Diffusion
- URL: http://arxiv.org/abs/2304.01893v1
- Date: Tue, 4 Apr 2023 15:46:42 GMT
- Title: Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory
Diffusion
- Authors: Davis Rempe, Zhengyi Luo, Xue Bin Peng, Ye Yuan, Kris Kitani, Karsten
Kreis, Sanja Fidler, Or Litany
- Abstract summary: We introduce a method for generating realistic pedestrian trajectories and full-body animations that can be controlled to meet user-defined goals.
Our guided diffusion model allows users to constrain trajectories through target waypoints, speed, and specified social groups.
We propose utilizing the value function learned during RL training of the animation controller to guide diffusion to produce trajectories better suited for particular scenarios.
- Score: 83.88829943619656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a method for generating realistic pedestrian trajectories and
full-body animations that can be controlled to meet user-defined goals. We draw
on recent advances in guided diffusion modeling to achieve test-time
controllability of trajectories, which is normally only associated with
rule-based systems. Our guided diffusion model allows users to constrain
trajectories through target waypoints, speed, and specified social groups while
accounting for the surrounding environment context. This trajectory diffusion
model is integrated with a novel physics-based humanoid controller to form a
closed-loop, full-body pedestrian animation system capable of placing large
crowds in a simulated environment with varying terrains. We further propose
utilizing the value function learned during RL training of the animation
controller to guide diffusion to produce trajectories better suited for
particular scenarios such as collision avoidance and traversing uneven terrain.
Video results are available on the project page at
https://nv-tlabs.github.io/trace-pace.
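The paper's central mechanism, steering the reverse diffusion process at test time with the gradient of a user-specified objective, can be sketched in a few lines. The snippet below is a minimal illustration under assumed interfaces, not the authors' implementation: the denoiser `model`, its `posterior` helper, `waypoint_loss`, and the `scale` parameter are all hypothetical names.

```python
import torch

def waypoint_loss(x0_pred, waypoint, t_idx):
    # Squared distance between the predicted 2D position at trajectory
    # index t_idx and a user-specified waypoint (hypothetical objective).
    return ((x0_pred[:, t_idx, :2] - waypoint) ** 2).sum(dim=-1).mean()

def guided_step(model, x_t, t, waypoint, t_idx, scale=1.0):
    # One reverse-diffusion step whose mean is nudged along the gradient
    # of the guidance objective, in the spirit of classifier guidance.
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = model(x_t, t)  # denoiser predicts the clean trajectory
    loss = waypoint_loss(x0_pred, waypoint, t_idx)
    # For the paper's value-function guidance, one would add a term like
    # `- value_fn(x0_pred).mean()` here, so samples drift toward states the
    # RL-trained animation controller handles well (value_fn is assumed).
    grad, = torch.autograd.grad(loss, x_t)
    mean, sigma = model.posterior(x_t, x0_pred, t)  # assumed unguided posterior
    mean = mean - scale * sigma ** 2 * grad         # guidance perturbation
    return (mean + sigma * torch.randn_like(mean)).detach()
```

Looping `guided_step` from t = T down to 1 would yield a trajectory that both stays on the learned data manifold and satisfies the user constraint; speed and social-group controls would enter the same way, as extra terms in the guidance loss.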
Related papers
- DINTR: Tracking via Diffusion-based Interpolation [12.130669304428565]
This work proposes a novel diffusion-based methodology to formulate the tracking task.
Our INterpolation TrackeR (DINTR) presents a promising new paradigm and achieves superior results on seven benchmarks across five indicator representations.
arXiv Detail & Related papers (2024-10-14T00:41:58Z)
- FreeTraj: Tuning-Free Trajectory Control in Video Diffusion Models [41.006754386910686]
We argue that the diffusion model itself allows decent control over the generated content without any additional training.
We introduce a tuning-free framework to achieve trajectory-controllable video generation by imposing guidance on both noise construction and attention computation.
arXiv Detail & Related papers (2024-06-24T17:59:56Z)
- Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
arXiv Detail & Related papers (2024-06-21T13:48:37Z)
- LG-Traj: LLM Guided Pedestrian Trajectory Prediction [9.385936248154987]
We introduce LG-Traj, a novel approach that uses an LLM to generate motion cues from pedestrians' past/observed trajectories.
These motion cues, together with pedestrian coordinates, facilitate a better understanding of the underlying representation.
Our method employs a transformer-based architecture comprising a motion encoder to model motion patterns and a social decoder to capture social interactions among pedestrians.
arXiv Detail & Related papers (2024-03-12T19:06:23Z)
- TrackDiffusion: Tracklet-Conditioned Video Generation via Diffusion Models [75.20168902300166]
We propose TrackDiffusion, a novel video generation framework affording fine-grained trajectory-conditioned motion control.
A pivotal component of TrackDiffusion is the instance enhancer, which explicitly ensures inter-frame consistency of multiple objects.
The video sequences generated by TrackDiffusion can be used as training data for visual perception models.
arXiv Detail & Related papers (2023-12-01T15:24:38Z)
- TLControl: Trajectory and Language Control for Human Motion Synthesis [68.09806223962323]
We present TLControl, a novel method for realistic human motion synthesis that incorporates both low-level Trajectory and high-level Language semantics controls.
It is practical for interactive, high-quality animation generation.
arXiv Detail & Related papers (2023-11-28T18:54:16Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- AutoTrajectory: Label-free Trajectory Extraction and Prediction from Videos using Dynamic Points [92.91569287889203]
We present a novel, label-free algorithm, AutoTrajectory, for trajectory extraction and prediction.
To better capture the moving objects in videos, we introduce dynamic points.
We aggregate dynamic points into instance points, which represent moving objects such as pedestrians in videos.
arXiv Detail & Related papers (2020-07-11T08:43:34Z)