Muscles in Action
- URL: http://arxiv.org/abs/2212.02978v3
- Date: Mon, 20 Mar 2023 19:10:22 GMT
- Title: Muscles in Action
- Authors: Mia Chiquier, Carl Vondrick
- Abstract summary: We present a new dataset, Muscles in Action (MIA), to learn to incorporate muscle activity into human motion representations.
We learn a bidirectional representation that predicts muscle activation from video, and conversely, reconstructs motion from muscle activation.
Putting muscles into computer vision systems will enable richer models of virtual humans, with applications in sports, fitness, and AR/VR.
- Score: 22.482090207522358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human motion is created by, and constrained by, our muscles. We take a first
step at building computer vision methods that represent the internal muscle
activity that causes motion. We present a new dataset, Muscles in Action (MIA),
to learn to incorporate muscle activity into human motion representations. The
dataset consists of 12.5 hours of synchronized video and surface
electromyography (sEMG) data of 10 subjects performing various exercises. Using
this dataset, we learn a bidirectional representation that predicts muscle
activation from video, and conversely, reconstructs motion from muscle
activation. We evaluate our model on in-distribution subjects and exercises, as
well as on out-of-distribution subjects and exercises. We demonstrate how
advances in modeling both modalities jointly can serve as conditioning for
muscularly consistent motion generation. Putting muscles into computer vision
systems will enable richer models of virtual humans, with applications in
sports, fitness, and AR/VR.
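The abstract's bidirectional setup (video features to muscle activation, and muscle activation back to motion) can be illustrated with a small numerical sketch. This is not the paper's architecture: the feature dimensions, the synthetic paired data, and the linear least-squares maps standing in for the learned video-to-sEMG and sEMG-to-motion models are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for synchronized modalities: per-frame video/pose features
# and sEMG channel activations. The real MIA models are deep networks;
# linear least squares is used here only to show the bidirectional setup.
n_frames, d_video, d_emg = 200, 16, 8
video_feats = rng.normal(size=(n_frames, d_video))
true_map = rng.normal(size=(d_video, d_emg))
emg = video_feats @ true_map + 0.01 * rng.normal(size=(n_frames, d_emg))

# Forward direction: predict muscle activation from video features.
W_fwd, *_ = np.linalg.lstsq(video_feats, emg, rcond=None)
emg_pred = video_feats @ W_fwd

# Inverse direction: reconstruct motion features from muscle activation.
W_inv, *_ = np.linalg.lstsq(emg, video_feats, rcond=None)
video_recon = emg @ W_inv

print("forward MSE:", np.mean((emg_pred - emg) ** 2))
print("inverse MSE:", np.mean((video_recon - video_feats) ** 2))
```

Note that the inverse direction is harder in this sketch: 8 sEMG channels cannot fully determine 16 motion dimensions, which loosely mirrors why the paper learns a joint representation rather than a simple inverse map.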
Related papers
- Muscles in Time: Learning to Understand Human Motion by Simulating Muscle Activations [64.98299559470503]
Muscles in Time (MinT) is a large-scale synthetic muscle activation dataset.
It contains over nine hours of simulation data covering 227 subjects and 402 simulated muscle strands.
We show results on neural network-based muscle activation estimation from human pose sequences.
arXiv Detail & Related papers (2024-10-31T18:28:53Z)
- HUMOS: Human Motion Model Conditioned on Body Shape [54.20419874234214]
We introduce a new approach to develop a generative motion model based on body shape.
We show that it is possible to train this model using unpaired data.
The resulting model generates diverse, physically plausible, and dynamically stable human motions.
arXiv Detail & Related papers (2024-09-05T23:50:57Z)
- Scaling Up Dynamic Human-Scene Interaction Modeling [58.032368564071895]
TRUMANS is the most comprehensive motion-captured HSI dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z)
- ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions [66.87211993793807]
We present ReMoS, a denoising diffusion-based model that synthesizes the full-body motion of a person in a two-person interaction scenario.
We demonstrate ReMoS across challenging two-person scenarios such as pair dancing, Ninjutsu, kickboxing, and acrobatics.
We also contribute the ReMoCap dataset for two-person interactions, containing full-body and finger motions.
arXiv Detail & Related papers (2023-11-28T18:59:52Z) - Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate the full range of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z) - Intelligent Knee Sleeves: A Real-time Multimodal Dataset for 3D Lower
Body Motion Estimation Using Smart Textile [2.2008680042670123]
We present a multimodal dataset with benchmarks collected using a novel pair of Intelligent Knee Sleeves for human pose estimation.
Our system uses synchronized datasets comprising time-series data from the Knee Sleeves and corresponding ground-truth labels from a motion-capture camera system.
We employ these to generate 3D human models solely based on the wearable data of individuals performing different activities.
arXiv Detail & Related papers (2023-10-02T00:34:21Z) - Modelling Human Visual Motion Processing with Trainable Motion Energy
Sensing and a Self-attention Network [1.9458156037869137]
We propose an image-computable model of human motion perception by bridging the gap between biological and computer vision models.
This model architecture aims to capture the computations in V1-MT, the core structure for motion perception in the biological visual system.
In silico neurophysiology reveals that our model's unit responses are similar to mammalian neural recordings regarding motion pooling and speed tuning.
arXiv Detail & Related papers (2023-05-16T04:16:07Z) - Task-Oriented Human-Object Interactions Generation with Implicit Neural
Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z) - From Motion to Muscle [0.0]
We show that muscle activity can be artificially generated based on motion features such as position, velocity, and acceleration.
The model achieves remarkable precision for previously trained movements and maintains high precision for new, previously untrained movements.
arXiv Detail & Related papers (2022-01-27T13:30:17Z) - OstrichRL: A Musculoskeletal Ostrich Simulation to Study Bio-mechanical
Locomotion [8.849771760994273]
We release a 3D musculoskeletal simulation of an ostrich based on the MuJoCo simulator.
The model is based on CT scans and dissections used to gather actual muscle data.
We also provide a set of reinforcement learning tasks, including reference motion tracking and a reaching task with the neck.
arXiv Detail & Related papers (2021-12-11T19:58:11Z)
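The motion-to-muscle direction described in the "From Motion to Muscle" entry above, predicting muscle activity from position, velocity, and acceleration features, can be sketched as follows. The sampling rate, the toy trajectory, and the feature-to-activation weights are hypothetical stand-ins for a learned model.

```python
import numpy as np

# Derive velocity and acceleration from a joint-position trajectory by
# finite differences, then map the stacked features to a muscle activation.
dt = 0.01                                   # assumed 100 Hz sampling rate
t = np.arange(0, 2, dt)
position = np.sin(2 * np.pi * t)[:, None]   # 1-DoF toy trajectory, shape (200, 1)

velocity = np.gradient(position, dt, axis=0)
acceleration = np.gradient(velocity, dt, axis=0)
features = np.hstack([position, velocity, acceleration])  # shape (200, 3)

# Hypothetical feature-to-activation weights; a real model would learn these.
weights = np.array([[0.2], [0.05], [0.01]])
activation = np.clip(features @ weights, 0.0, None)  # activations are non-negative

print(features.shape, activation.shape)
```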
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.