Conditional Neural Relational Inference for Interacting Systems
- URL: http://arxiv.org/abs/2106.11083v1
- Date: Mon, 21 Jun 2021 13:05:48 GMT
- Title: Conditional Neural Relational Inference for Interacting Systems
- Authors: Joao A. Candido Ramos, Lionel Blondé, Stéphane Armand and
Alexandros Kalousis
- Abstract summary: We learn to model the dynamics of similar yet distinct groups of interacting objects.
We develop a model that allows us to do conditional generation from any such group given its vectorial description.
We evaluate our model in the setting of modeling human gait and, in particular, pathological human gait.
- Score: 58.141087282927415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we want to learn to model the dynamics of similar yet distinct
groups of interacting objects. These groups follow some common physical laws
but exhibit specificities that are captured through a vectorial description.
We develop a model that allows us to do conditional generation for any such
group given its vectorial description. Unlike previous work on learning
dynamical systems, which can only do trajectory completion and requires part
of the trajectory dynamics to be provided as input at generation time, we
generate using only the conditioning vector, with no access to trajectories
at generation time. We evaluate our model in the setting of modeling human
gait and, in particular, pathological human gait.
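The abstract's key point is that generation consumes only a conditioning vector, never an observed trajectory prefix. A minimal sketch of that idea is below; the function names, shapes, and the linear interaction rule are illustrative stand-ins (not the paper's actual architecture), assuming a learned decoder is replaced here by fixed random weights.

```python
import numpy as np

def generate(cond_vec, n_objects=3, n_steps=50, dim=2, seed=0):
    """Roll out trajectories for a group of interacting objects using only
    a conditioning vector; no ground-truth trajectory is consumed."""
    rng = np.random.default_rng(seed)
    # Map the conditioning vector to an initial state. This stands in for
    # a learned prior network in the real model (hypothetical).
    W = rng.standard_normal((n_objects * dim, cond_vec.size)) * 0.1
    state = (W @ cond_vec).reshape(n_objects, dim)
    traj = [state]
    for _ in range(n_steps - 1):
        # Pairwise interaction term: each object is pulled by the others.
        # Stands in for the learned message passing over the relation graph.
        diffs = state[None, :, :] - state[:, None, :]   # diffs[i, j] = x_j - x_i
        forces = 0.05 * diffs.sum(axis=1)
        state = state + forces
        traj.append(state)
    return np.stack(traj)  # shape: (n_steps, n_objects, dim)

traj = generate(np.array([1.0, -0.5, 0.3]))
print(traj.shape)  # (50, 3, 2)
```

The point of the sketch is the interface: `generate` takes the group's vectorial description and nothing else, which is what distinguishes conditioning-only generation from trajectory completion.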
Related papers
- From pixels to planning: scale-free active inference [42.04471916762639]
This paper describes a discrete state-space model -- and accompanying methods -- for generative modelling.
We consider deep or hierarchical forms using the renormalisation group.
This technical note illustrates the automatic discovery, learning and deployment of such renormalising generative models (RGMs) using a series of applications.
arXiv Detail & Related papers (2024-07-27T14:20:48Z)
- Neural Persistence Dynamics [8.197801260302642]
We consider the problem of learning the dynamics in the topology of time-evolving point clouds.
Our proposed model -- Neural Persistence Dynamics -- substantially outperforms the state-of-the-art across a diverse set of parameter regression tasks.
arXiv Detail & Related papers (2024-05-24T17:20:18Z)
- Generative Pre-training for Speech with Flow Matching [81.59952572752248]
We pre-trained a generative model, named SpeechFlow, on 60k hours of untranscribed speech with Flow Matching and masked conditions.
Experiment results show the pre-trained generative model can be fine-tuned with task-specific data to match or surpass existing expert models on speech enhancement, separation, and synthesis.
arXiv Detail & Related papers (2023-10-25T03:40:50Z)
- Persistent-Transient Duality: A Multi-mechanism Approach for Modeling Human-Object Interaction [58.67761673662716]
Humans are highly adaptable, swiftly switching between different modes to handle different tasks, situations and contexts.
In human-object interaction (HOI) activities, these modes can be attributed to two mechanisms: (1) the large-scale, consistent plan for the whole activity and (2) the small-scale child interactive actions that start and end along the timeline.
This work proposes to model two concurrent mechanisms that jointly control human motion.
arXiv Detail & Related papers (2023-07-24T12:21:33Z)
- Action Matching: Learning Stochastic Dynamics from Samples [10.46643972142224]
Action Matching is a method for learning a rich family of dynamics using only independent samples from its time evolution.
We derive a tractable training objective, which does not rely on explicit assumptions about the underlying dynamics.
Inspired by connections with optimal transport, we derive extensions of Action Matching to learn differential equations and dynamics involving creation and destruction of probability mass.
arXiv Detail & Related papers (2022-10-13T01:49:48Z)
- STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of data.
Experiments show that our model achieves comparable performance while using far fewer trainable parameters, and attains high speed in both training and inference.
arXiv Detail & Related papers (2021-07-15T02:53:11Z)
- GEM: Group Enhanced Model for Learning Dynamical Control Systems [78.56159072162103]
We build effective dynamical models that are amenable to sample-based learning.
We show that learning the dynamics on a Lie algebra vector space is more effective than learning a direct state transition model.
This work sheds light on a connection between learning of dynamics and Lie group properties, which opens doors for new research directions.
arXiv Detail & Related papers (2021-04-07T01:08:18Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- The Role of Isomorphism Classes in Multi-Relational Datasets [6.419762264544509]
We show that isomorphism leakage overestimates performance in multi-relational inference.
We propose isomorphism-aware synthetic benchmarks for model evaluation.
We also demonstrate that isomorphism classes can be utilised through a simple prioritisation scheme.
arXiv Detail & Related papers (2020-09-30T12:15:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.