DLow: Diversifying Latent Flows for Diverse Human Motion Prediction
- URL: http://arxiv.org/abs/2003.08386v2
- Date: Wed, 22 Jul 2020 17:53:24 GMT
- Title: DLow: Diversifying Latent Flows for Diverse Human Motion Prediction
- Authors: Ye Yuan, Kris Kitani
- Abstract summary: We propose a novel sampling method, Diversifying Latent Flows (DLow), to produce a diverse set of samples from a pretrained deep generative model.
During training, DLow uses a diversity-promoting prior over samples as an objective to optimize the latent mappings to improve sample diversity.
Our experiments demonstrate that DLow outperforms state-of-the-art baseline methods in terms of sample diversity and accuracy.
- Score: 32.22704734791378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative models are often used for human motion prediction as they are
able to model multi-modal data distributions and characterize diverse human
behavior. While much care has been taken in designing and learning deep
generative models, how to efficiently produce diverse samples from a deep
generative model after it has been trained is still an under-explored problem.
To obtain samples from a pretrained generative model, most existing generative
human motion prediction methods draw a set of independent Gaussian latent codes
and convert them to motion samples. Clearly, this random sampling strategy is
not guaranteed to produce diverse samples for two reasons: (1) The independent
sampling cannot force the samples to be diverse; (2) The sampling is based
solely on likelihood which may only produce samples that correspond to the
major modes of the data distribution. To address these problems, we propose a
novel sampling method, Diversifying Latent Flows (DLow), to produce a diverse
set of samples from a pretrained deep generative model. Unlike random
(independent) sampling, the proposed DLow sampling method samples a single
random variable and then maps it with a set of learnable mapping functions to a
set of correlated latent codes. The correlated latent codes are then decoded
into a set of correlated samples. During training, DLow uses a
diversity-promoting prior over samples as an objective to optimize the latent
mappings to improve sample diversity. The design of the prior is highly
flexible and can be customized to generate diverse motions with common features
(e.g., similar leg motion but diverse upper-body motion). Our experiments
demonstrate that DLow outperforms state-of-the-art baseline methods in terms of
sample diversity and accuracy. Our code is released on the project page:
https://www.ye-yuan.com/dlow.
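To make the sampling mechanism above concrete, here is a minimal PyTorch sketch of the idea as the abstract describes it: a single shared Gaussian draw is pushed through K learnable affine mappings to obtain K correlated latent codes, a frozen pretrained decoder turns the codes into motion samples, and the mappings are trained with a diversity-promoting term balanced against a KL regularizer that keeps each mapped code near the decoder's Gaussian prior. This is an illustrative sketch, not the authors' released implementation (see the project page for that); the class and function names, the placeholder decoder, and the specific pairwise-distance diversity term are all assumptions.

```python
import torch
import torch.nn as nn

class DLowSampler(nn.Module):
    """Illustrative DLow-style latent mapping (hypothetical API).

    A single Gaussian draw eps is mapped by K learnable affine
    transforms z_k = A_k * eps + b_k into K correlated latent codes.
    """

    def __init__(self, latent_dim: int, num_samples: int):
        super().__init__()
        # One elementwise scale A_k and offset b_k per output sample,
        # perturbed at init so the K codes start slightly apart.
        self.A = nn.Parameter(torch.ones(num_samples, latent_dim)
                              + 0.01 * torch.randn(num_samples, latent_dim))
        self.b = nn.Parameter(0.1 * torch.randn(num_samples, latent_dim))

    def forward(self, eps: torch.Tensor) -> torch.Tensor:
        # eps: (latent_dim,) one shared random draw for all K samples.
        return self.A * eps + self.b  # (K, latent_dim) correlated codes

def diversity_loss(samples: torch.Tensor) -> torch.Tensor:
    """Diversity-promoting term: negative mean pairwise squared distance
    (squared distances keep gradients well-defined for near-identical pairs)."""
    k = samples.shape[0]
    flat = samples.reshape(k, -1)
    sq = (flat.unsqueeze(0) - flat.unsqueeze(1)).pow(2).sum(-1)  # (K, K)
    return -sq[~torch.eye(k, dtype=torch.bool)].mean()

def kl_regularizer(sampler: DLowSampler) -> torch.Tensor:
    """KL(N(b_k, diag(A_k^2)) || N(0, I)), averaged over k: keeps the
    mapped codes in the region the pretrained decoder was trained on."""
    var = sampler.A.pow(2)
    return 0.5 * (var + sampler.b.pow(2) - 1.0 - var.log()).sum(dim=1).mean()

# Usage sketch: the linear "decoder" stands in for a pretrained generator.
latent_dim, num_samples = 128, 10
decoder = nn.Linear(latent_dim, 48)  # placeholder for a motion decoder
for p in decoder.parameters():
    p.requires_grad_(False)          # the pretrained model stays frozen

sampler = DLowSampler(latent_dim, num_samples)
optim = torch.optim.Adam(sampler.parameters(), lr=1e-3)
for _ in range(100):
    eps = torch.randn(latent_dim)    # one shared draw
    motions = decoder(sampler(eps))  # K correlated motion samples
    loss = diversity_loss(motions) + kl_regularizer(sampler)
    optim.zero_grad()
    loss.backward()
    optim.step()
```

The flexibility the abstract mentions would enter through the diversity term: restricting the pairwise distance to, say, upper-body joint dimensions would, under this formulation, encourage diversity only in those dimensions.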
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236]
The training-free test-time dynamic adapter (TDA) is a promising approach to adapting vision-language models at test time.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota).
Dota continually estimates the distributions of test samples, allowing the model to adapt continually to the deployment environment.
arXiv Detail & Related papers (2024-09-28T15:03:28Z)
- Controlling the Fidelity and Diversity of Deep Generative Models via Pseudo Density [70.14884528360199]
We introduce an approach to bias deep generative models, such as GANs and diffusion models, towards generating data with enhanced fidelity or increased diversity.
Our approach involves manipulating the distribution of training and generated data through a novel metric for individual samples, named pseudo density.
arXiv Detail & Related papers (2024-07-11T16:46:04Z)
- REAL Sampling: Boosting Factuality and Diversity of Open-Ended Generation via Asymptotic Entropy [93.8400683020273]
Decoding methods for large language models (LLMs) usually struggle with the tradeoff between ensuring factuality and maintaining diversity.
We propose REAL sampling, a decoding method that improves factuality and diversity over nucleus sampling.
arXiv Detail & Related papers (2024-06-11T21:44:49Z)
- Touring sampling with pushforward maps [3.5897534810405403]
This paper takes a theoretical stance to review and organize many sampling approaches in the generative modeling setting.
It might prove useful for overcoming some of the current challenges in sampling with diffusion models.
arXiv Detail & Related papers (2023-11-23T08:23:43Z)
- Particle Guidance: non-I.I.D. Diverse Sampling with Diffusion Models [41.192240810280424]
We propose particle guidance, an extension of diffusion-based generative sampling where a joint-particle time-evolving potential enforces diversity.
We theoretically analyze the joint distribution that particle guidance generates, how to learn a potential that achieves optimal diversity, and the connections with methods in other disciplines (a generic sketch of this repulsive-guidance idea appears after this list).
arXiv Detail & Related papers (2023-10-19T19:01:00Z)
- StyleGenes: Discrete and Efficient Latent Distributions for GANs [149.0290830305808]
We propose a discrete latent distribution for Generative Adversarial Networks (GANs).
Instead of drawing latent vectors from a continuous prior, we sample from a finite set of learnable latents.
We take inspiration from the encoding of information in biological organisms.
arXiv Detail & Related papers (2023-04-30T23:28:46Z)
- Unite and Conquer: Plug & Play Multi-Modal Synthesis using Diffusion Models [54.1843419649895]
We propose a solution based on denoising diffusion probabilistic models (DDPMs).
Our motivation for choosing diffusion models over other generative models comes from the flexible internal structure of diffusion models.
Our method can unite multiple diffusion models trained on multiple sub-tasks and conquer the combined task.
arXiv Detail & Related papers (2022-12-01T18:59:55Z)
- Diverse Human Motion Prediction via Gumbel-Softmax Sampling from an Auxiliary Space [34.83587750498361]
Diverse human motion prediction aims at predicting multiple possible future pose sequences from a sequence of observed poses.
Previous approaches usually employ deep generative networks to model the conditional distribution of data, and then randomly sample outcomes from the distribution.
We propose a novel sampling strategy for drawing very diverse results from an imbalanced multimodal distribution.
arXiv Detail & Related papers (2022-07-15T09:03:57Z)
- Selectively increasing the diversity of GAN-generated samples [8.980453507536017]
We propose a novel method to selectively increase the diversity of GAN-generated samples.
We show the superiority of our method in a synthetic benchmark as well as a real-life scenario simulating data from the Zero Degree Calorimeter of the ALICE experiment at CERN.
arXiv Detail & Related papers (2022-07-04T16:27:06Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- Relaxed-Responsibility Hierarchical Discrete VAEs [3.976291254896486]
We introduce Relaxed-Responsibility Vector-Quantisation, a novel way to parameterise discrete latent variables.
We achieve state-of-the-art bits-per-dim results for various standard datasets.
arXiv Detail & Related papers (2020-07-14T19:10:05Z)
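As a companion to the Particle Guidance entry above, the sketch below illustrates the generic repulsive-guidance idea: a batch of particles follows ordinary score-based updates while the gradient of a joint pairwise potential pushes the particles apart. This is a hypothetical toy sketch under a standard-Gaussian score, not the algorithm of any specific paper listed here; all names and constants are assumptions.

```python
import torch

def rbf_repulsion(x: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Joint potential that is large when particles cluster together."""
    flat = x.reshape(x.shape[0], -1)
    sq = (flat.unsqueeze(0) - flat.unsqueeze(1)).pow(2).sum(-1)
    return torch.exp(-sq / bandwidth).sum()  # pairwise RBF similarity

def guided_step(x, t, score_fn, step=0.01, guide_scale=0.1):
    """One Langevin-style update with a diversity-guidance term: each
    particle follows its own score minus the gradient of the joint
    repulsive potential over the whole batch."""
    x = x.detach().requires_grad_(True)
    (repel,) = torch.autograd.grad(rbf_repulsion(x), x)
    noise = torch.randn_like(x)
    return (x + step * score_fn(x, t)   # ordinary denoising direction
            - guide_scale * repel       # joint non-i.i.d. guidance
            + (2 * step) ** 0.5 * noise).detach()

# Toy usage: 8 particles in 2-D, score of a standard Gaussian target.
score_fn = lambda x, t: -x
x = torch.randn(8, 2)
for t in reversed(range(50)):
    x = guided_step(x, t, score_fn)
```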
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.